Artificial neural networks, which power many AI tools, may soon process time-dependent information like audio and video more efficiently thanks to a new development.
Researchers at the University of Michigan have created the first memristor with a tunable “relaxation time,” as reported in Nature Electronics.
Memristors are electrical components that store information in their resistance levels.
They could cut the energy AI consumes by a factor of about 90 compared with today’s graphics processing units (GPUs).
AI is projected to consume about half a percent of the world’s electricity by 2027, a share that could grow as more AI tools are developed.
Professor Wei Lu of the University of Michigan explained that the prevailing approach, making networks ever larger so they can process more data, is not very efficient.
GPUs are mismatched with the artificial neural networks they run: the network’s parameters sit in external memory and must be loaded in for every computation, which costs both time and energy.
Memristors, by contrast, store and process information in the same place, much as biological neural networks do, so no external memory is needed. In this way, an array of memristors can effectively embody the artificial neural network itself.
The new material system created by Lu’s team could make AI chips roughly six times more energy-efficient than today’s state-of-the-art materials allow. In biological neural networks, timekeeping is handled by a process called relaxation.
A neuron fires only if its inputs reach a certain threshold within a set window of time; if too much time passes, the accumulated charge leaks away and the neuron relaxes back to its resting state. Because different neurons relax at different rates, a network can track how a sequence of events unfolds.
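To make the timekeeping role of relaxation concrete, the sketch below simulates a standard leaky integrate-and-fire neuron in Python. It is only an illustration of the general principle described above; the time constant, threshold, and input values are arbitrary choices, not parameters from the study.

```python
import numpy as np

def leaky_integrate_and_fire(inputs, dt=1e-3, tau=20e-3, threshold=0.1):
    """Simulate a leaky integrate-and-fire neuron.

    inputs:    input current at each time step (arbitrary units)
    dt:        time step in seconds
    tau:       relaxation time constant in seconds
    threshold: membrane potential at which the neuron fires
    Returns the list of time steps at which the neuron fired.
    """
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        # Between inputs the potential relaxes: it decays toward zero with
        # time constant tau, so the effect of old inputs gradually leaks away.
        v += dt * (-v / tau + current)
        if v >= threshold:   # enough input arrived close together in time
            spikes.append(t)
            v = 0.0          # reset after firing
    return spikes

# Two identical pulses arriving close together push the neuron over threshold;
# the same pulses spread far apart do not, because the first decays away first.
close = np.zeros(100); close[[10, 15]] = 60.0
apart = np.zeros(100); apart[[10, 80]] = 60.0
print("pulses close together -> fires at steps:", leaky_integrate_and_fire(close))
print("pulses far apart      -> fires at steps:", leaky_integrate_and_fire(apart))
```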
In memristors, it is the resistance that plays this role: an incoming signal shifts the device’s resistance, and once the signal stops, the resistance drifts back toward its original value over a characteristic relaxation time. Until now, controlling that relaxation time has been difficult.
However, Lu’s team, together with that of U-M materials scientist John Heron, found that varying the composition of the base material tunes this relaxation time, allowing networks of memristors to mimic the biological timekeeping mechanism.
The researchers used YBCO, a superconductor made of yttrium, barium, copper, and oxygen, chosen here for its crystal structure rather than its superconductivity.
That structure helped organize the magnesium, cobalt, nickel, copper, and zinc oxides that form the memristor material.
By adjusting the ratios of these oxides, the team achieved relaxation time constants ranging from 159 to 278 nanoseconds (billionths of a second). A simple memristor network built from these devices learned to recognize the spoken digits zero through nine, identifying each digit before the audio clip had finished playing.
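To illustrate why a range of relaxation times is useful, here is a toy Python model that treats each memristor as a conductance that jumps with every input pulse and then relaxes back exponentially. The function name, the unit pulse size, and the two example pulse patterns are assumptions made for illustration; the actual device physics and the spoken-digit network in the paper are more involved.

```python
import numpy as np

def memristor_state(pulse_times_ns, read_time_ns, tau_ns, delta_g=1.0):
    """Toy model of a volatile memristor: each voltage pulse raises the
    conductance by delta_g, and between pulses the extra conductance
    relaxes back toward its baseline as exp(-t / tau)."""
    g = 0.0
    t_prev = 0.0
    for t in sorted(pulse_times_ns):
        g *= np.exp(-(t - t_prev) / tau_ns)  # relaxation since the last pulse
        g += delta_g                         # the pulse bumps the conductance up
        t_prev = t
    return g * np.exp(-(read_time_ns - t_prev) / tau_ns)  # relax until readout

# Devices spanning the reported range of relaxation times (159-278 ns).
taus_ns = [159, 200, 240, 278]

# Two inputs with the same number of pulses but different timing (in ns).
bunched = [0, 50, 100]    # pulses close together
spread = [0, 200, 400]    # pulses far apart

for name, pattern in [("bunched", bunched), ("spread", spread)]:
    states = [memristor_state(pattern, read_time_ns=500, tau_ns=tau)
              for tau in taus_ns]
    print(name, "->", np.round(states, 3))

# The vector of device states read out at 500 ns differs between the two
# patterns, so a trained readout layer can tell temporal patterns apart --
# the same principle behind recognizing spoken digits from audio.
```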
Although creating these memristors required a lot of energy due to the need for perfect crystals, the team believes mass production could be simpler and more affordable. The materials used are earth-abundant, non-toxic, and cheap, with the potential to be easily manufactured.
“This is just the beginning, but there are ways to make these materials scalable and affordable,” said Professor John Heron. “We can almost spray them on.”
This breakthrough could lead to AI systems that process data more efficiently, significantly reducing energy consumption and paving the way for more sustainable AI technologies.
Source: University of Michigan.