New architecture uses 3D memories to enable faster, more energy-efficient grammar parsing for virtual assistants such as Amazon’s Alexa-enabled Echo and Google Home
UCF researchers have developed a novel in-memory computing architecture that enables computing systems to solve graph transitive closure problems more efficiently. The invention requires less energy and accesses and processes data faster than traditional von Neumann architectures. Many applications rely on graph transitive closure solutions to perform complex procedures such as resolving database queries, parsing grammars for virtual assistants, and determining data dependencies during compilation.
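To make the problem concrete, here is a minimal sketch of graph transitive closure on a conventional CPU, using Warshall's algorithm on a boolean adjacency matrix. This is an illustrative baseline only, not the invention's in-memory method; the function name and example graph are assumptions for demonstration.

```python
def transitive_closure(adj):
    """Warshall's algorithm: given a boolean adjacency matrix adj,
    return a matrix where entry [i][j] is True iff j is reachable from i."""
    n = len(adj)
    reach = [row[:] for row in adj]  # copy so the input is not modified
    for k in range(n):               # allow k as an intermediate node
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Edges 0->1 and 1->2 imply the closure also contains 0->2.
graph = [
    [False, True,  False],
    [False, False, True],
    [False, False, False],
]
closure = transitive_closure(graph)
```

On a von Neumann machine this triple loop shuttles the matrix between memory and processor on every step, which is exactly the bottleneck the logic-in-memory approach is designed to avoid.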
By leveraging 3D nanoscale crossbar memory circuits, the new logic-in-memory architecture avoids the memory-processor bottleneck that arises when high-performance machines use traditional von Neumann architectures. Another problem with traditional architectures, which use volatile DRAM and SRAM memories, is that stored data is lost when system power turns off. Thus, they require a constant supply of power to refresh stored data, resulting in poor energy efficiency. Furthermore, the von Neumann separation of memory and processing units places serious restrictions on performance. According to the International Data Corporation (IDC), the world will produce 44 zettabytes (44 trillion gigabytes) of data by 2020. This growth will add to the challenge of power-efficient computing on structured big data.
The new invention specifies how emerging non-volatile memory units in a computing system can interact with each other to compute graph transitive closure more energy-efficiently. The computational fabric of the invention is a 3D crossbar memory with at least two layers of 1-diode 1-resistor (1D1R) interconnects. In the new approach, the top and bottom rows (nanowires) of the crossbar are connected to each other through individual external connections; the logic-in-memory architecture therefore requires only the addition of external feedback loops to existing 3D memories. The invention reduces both energy use and computation time, enabling efficient analysis of very large networks. For example, in one study, the new solution analyzed real-world networks with as many as 1.9 million nodes; the graph transitive closure calculation took less than 15 minutes and used about 16 kilojoules (kJ) of energy, a small fraction of what a standard desktop computer would consume.
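One way to see why a crossbar fabric helps is to express transitive closure as repeated boolean matrix squaring: each squaring combines many row-AND/column-OR operations that a crossbar can, in principle, evaluate in parallel inside the memory array. The mapping below is an illustrative assumption, simulated in ordinary Python, not a description of the patented circuit.

```python
def bool_matmul(a, b):
    """Boolean matrix product: AND in place of multiply, OR in place of add.
    Each output entry is one row-AND/column-OR combination, the kind of
    operation a crossbar array could evaluate in parallel."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def closure_by_squaring(adj):
    """Reach a fixed point by repeatedly squaring the reachability matrix.
    Self-loops are added so that squaring converges to the (reflexive)
    transitive closure in O(log n) squarings."""
    n = len(adj)
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    while True:
        nxt = bool_matmul(reach, reach)
        if nxt == reach:       # fixed point: no new paths discovered
            return reach
        reach = nxt

# Chain 0->1->2: the closure must include the two-hop path 0->2.
graph = [
    [False, True,  False],
    [False, False, True],
    [False, False, False],
]
result = closure_by_squaring(graph)
```

Because only O(log n) squarings are needed and each squaring is massively parallel, an architecture that performs the boolean products in memory avoids moving the matrix through the processor at every step.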
- Faster and more energy-efficient than traditional von Neumann architectures
- Scalable for use with large graphs
- High-performance computing machines and computational data science
- Cybersecurity and low-energy military solutions
- ASIC (application-specific integrated circuit) implementations of graph algorithms in hardware