Efficient ways of processing and storing vast volumes of data are essential in this burgeoning era of AI. Current computer designs, however, have intrinsic performance constraints.
Alternative computing architectures that resemble the brain have been the focus of research in recent years. These devices, known as neuromorphic computers, avoid many of the problems that plague the classic von Neumann design, which has been in use since 1945 and separates processing and memory into distinct units.
Because these units are physically separated, data must be transmitted between them via a network of wires or conductors known as the “memory bus.” This slows down the entire computing system, uses a lot of power, and is a significant impediment to efficient performance.
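The bottleneck can be made concrete with a toy roofline-style model (the numbers below are assumed, order-of-magnitude figures for illustration, not measurements): even with a fast processor, total runtime is bounded by how quickly operands can cross the memory bus.

```python
# Illustrative sketch of the von Neumann bottleneck: every operand must
# cross the memory bus, so total time is the larger of compute time and
# data-transfer time. All figures below are hypothetical.

def von_neumann_time(n_ops, bytes_per_op, compute_rate_ops_s, bus_bw_bytes_s):
    """Return runtime limited by the slower of compute and data movement."""
    compute_time = n_ops / compute_rate_ops_s
    transfer_time = n_ops * bytes_per_op / bus_bw_bytes_s
    return max(compute_time, transfer_time)

# One billion operations, 8 bytes of traffic each, on an assumed
# 1 TOP/s processor fed by an assumed 100 GB/s memory bus.
t = von_neumann_time(n_ops=1e9, bytes_per_op=8,
                     compute_rate_ops_s=1e12,
                     bus_bw_bytes_s=100e9)
# Transfer (0.08 s) dominates compute (0.001 s) by ~80x.
```

With these assumed figures the processor sits idle most of the time waiting on the bus, which is exactly the inefficiency in-memory computing is meant to remove.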
In the past decade, the area of neuromorphic computing has blossomed, circumventing these obstacles with an integrated unit that combines memory storage and calculation — hence the term “in-memory computing.” This novel architecture avoids the large distances that data must travel in typical computer architectures by using memory cells and processing units that are akin to the biological synapse and neuron.
To date, however, most in-memory computing has been based on resistor-based memory, in which data is stored and processed through controlled electrical resistance. While this allows for brain-like memory processing, these devices nevertheless have a number of drawbacks, such as high energy consumption and a complicated system setup.
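The resistive approach can be illustrated with a toy model (a generic crossbar sketch, not the specific design of any device discussed here): each cell stores a weight as a conductance G, input voltages V drive the rows, each cell contributes a current I = G·V by Ohm's law, and Kirchhoff's current law sums the currents along each column — so the array computes a matrix-vector product in place, without shuttling data over a bus.

```python
import numpy as np

# Toy resistive crossbar: weights stored as conductances (siemens),
# inputs applied as row voltages (volts), outputs read as column currents.
def crossbar_mac(conductances, voltages):
    # Each cell passes I = G * V (Ohm's law); each column wire sums its
    # cells' currents (Kirchhoff's current law) -> a matrix-vector product.
    return voltages @ conductances

G = np.array([[1e-6, 2e-6],
              [3e-6, 4e-6]])   # 2x2 array of programmed conductances
V = np.array([0.5, 1.0])      # read voltages on the two rows
I = crossbar_mac(G, V)        # column currents, in amperes
# I = [0.5*1e-6 + 1.0*3e-6, 0.5*2e-6 + 1.0*4e-6] = [3.5e-6, 5.0e-6]
```

The catch, as the article notes, is that a resistor conducts continuously, so these arrays pay a steady energy cost and need extra circuitry to tame stray currents.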
Researchers at the Georgia Institute of Technology, led by Shimeng Yu, developed a new type of electrical artificial synapse that uses capacitor-based memory to get past these problems.
Capacitors record and store data as electrical charge. Beyond requiring less power to operate, they have the added benefit of being non-conductive, meaning electrical charge cannot easily leak through the capacitive synapse. This eliminates an issue known as creeping leakage current, which has plagued artificial synaptic systems for years.
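The distinction can be sketched in a few lines (an idealized comparison with illustrative values, not measured device data): a capacitive cell holds its state as charge Q = C·V, and an ideal dielectric passes no DC current, whereas a resistive cell conducts I = V/R whenever it is biased.

```python
# Idealized comparison of standby current through one biased memory cell.
# All values are illustrative assumptions, not measurements.

def resistive_leak(v_bias, r_cell):
    return v_bias / r_cell        # a resistor always conducts: I = V/R

def capacitive_leak(v_bias):
    return 0.0                    # an ideal dielectric blocks DC current

def stored_charge(capacitance, v_write):
    return capacitance * v_write  # state held as charge: Q = C*V

q = stored_charge(capacitance=1e-15, v_write=1.0)  # 1 fF cell -> 1 fC
i_r = resistive_leak(v_bias=0.1, r_cell=1e6)       # 100 nA standby current
i_c = capacitive_leak(v_bias=0.1)                  # 0 A in the ideal case
```

Real dielectrics leak a little, but the point of the capacitive design is that this path is suppressed by the device physics rather than by extra circuitry.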
With no creeping leakage current, there is no need for an additional circuit component, termed a “selector,” to limit leakage. Because of fabrication constraints, selectors can only be built into the bottom layer of a chip, which makes vertical stacking of artificial synapses extremely challenging; removing them enables stacking, improving both the storage density and the performance of these systems. Finding the right material to achieve this, however, has been a challenge.
The team was able to produce the capacitive artificial synapse using hafnium oxide, a material long employed in the semiconductor industry. The material exhibited different capacitance values depending on the electrical charge stored in it, and the fact that it is already widely used suggests that commercializing this technology will be comparatively straightforward.
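To illustrate how multiple capacitance states could encode synaptic weights — the mapping below is a hypothetical sketch, not the researchers' actual programming scheme, and the device parameters are assumed — an analog weight can be quantized onto a ladder of discrete capacitance levels:

```python
# Hypothetical multi-level capacitive synapse: a weight in [0, 1] is
# quantized to one of N discrete capacitance states between C_MIN and C_MAX.
C_MIN, C_MAX, N_LEVELS = 1e-15, 4e-15, 8   # assumed device parameters

def program_weight(w):
    """Quantize a weight w in [0, 1] to the nearest capacitance level."""
    level = round(w * (N_LEVELS - 1))
    return C_MIN + level * (C_MAX - C_MIN) / (N_LEVELS - 1)

def read_weight(c):
    """Recover the stored (quantized) weight from a cell's capacitance."""
    return (c - C_MIN) / (C_MAX - C_MIN)

c = program_weight(0.43)   # snaps to the nearest of 8 levels
w = read_weight(c)         # recovered weight, 3/7 ~ 0.4286
```

More distinguishable capacitance levels per cell would mean finer-grained weights, which is one reason the consistency of the material's data states matters.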
Array-level performance tests demonstrated the capabilities of the new hafnium-based capacitive synapses, showing their potential for real-world applications.
The scientists stated that while this novel synapse was successful at the system level, there is still room for improvement. It must, for example, be scaled down to a few tens of nanometers, which is within the range of existing fabrication capabilities. This scale is roughly 1,000 to 10,000 times thinner than a human hair. Furthermore, additional structural alteration or device-shape engineering of the capacitors could yield a more dependable synapse with consistent data states.
While this novel (though still immature) technology may achieve comparable or even better performance than mature synaptic array technologies, it will be exciting to further optimize capacitive synapse device structures and circuitry to continue improving in-memory computing performance.