Scientists find way to boost computer speed and efficiency by skipping the CPU


Researchers at Technion have created a groundbreaking software package that lets computers perform calculations directly in memory, skipping the CPU.

This innovation could make computing faster and more energy-efficient by avoiding the usual time-consuming data transfers between the CPU and memory.

This approach is called “in-memory computing” and marks a big change in how computers work.

Traditionally, the CPU handles all the calculations, using data stored in memory.

But transferring data back and forth between the CPU and memory can slow things down and use up a lot of energy.
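To see why those transfers matter, consider a simple element-wise vector addition. The numbers below are a back-of-the-envelope counting model invented for illustration, not measurements of any real machine:

```python
# Rough illustration of the memory-wall bottleneck for an element-wise
# vector addition c = a + b over n elements. The counting model here is
# an illustrative assumption, not data from the Technion paper.

def bus_transfers_cpu(n):
    """Memory-bus transfers for c = a + b on a conventional CPU."""
    reads = 2 * n   # every element of a and b is fetched into the CPU
    writes = n      # every result element is written back to memory
    return reads + writes

def bus_transfers_in_memory(n):
    """Same operation computed inside the memory arrays themselves."""
    return 0        # no element ever crosses the CPU-memory bus

n = 1_000_000
print(bus_transfers_cpu(n))        # 3000000
print(bus_transfers_in_memory(n))  # 0
```

Even in this crude model, a conventional machine moves three million values across the bus for a million-element addition, while an in-memory design moves none.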

Over recent decades, CPUs have become much faster, and memory units have become much larger.

However, this progress has only intensified the problem, as data transfer has become a major bottleneck limiting the computer’s overall speed.

Professor Shahar Kvatinsky of Technion's Faculty of Electrical and Computer Engineering has worked for years on this "memory wall" problem: the slowdown that arises because every computation depends on two separate components, the processor and the memory.

He has previously explored hardware technologies to allow some calculations to happen directly in memory, which helps ease the “traffic jams” between memory and CPU in regular computers.

This shift in computing architecture has potential benefits across many fields, including artificial intelligence, finance, bioinformatics, and more.

Researchers worldwide are investigating this concept, working on new types of memory units and basic computational methods that could make in-memory computing feasible on a large scale.

However, while the hardware side of in-memory computing has seen significant progress, the software side has received less attention.

For decades, software has been written for traditional computers, where calculations are managed by the CPU.

Professor Kvatinsky explains, “Since we’re now handling some operations directly in memory, we need software that can support this new type of computing.”

This means developing entirely new code to work with in-memory systems, which can be a time-consuming task for software developers.

To address this challenge, Professor Kvatinsky’s research group, led by Ph.D. student Orian Leitersdorf with researcher Ronny Ronen, developed a new platform called PyPIM. The name combines “Python” and “Processing-in-Memory.”

PyPIM enables developers to write software for in-memory computing systems in Python, a popular programming language. This platform includes libraries that convert Python commands into machine-level instructions that can be carried out directly in memory, making the development process much easier for programmers.
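As a flavor of what such Python code might look like, here is a toy stand-in. The class and method names below are invented for illustration and are not PyPIM's actual API; the point is only that ordinary-looking arithmetic on array-like objects can be routed to operations that execute where the data already lives:

```python
# Hypothetical sketch of array code targeting an in-memory system.
# PIMVector is an invented toy class, NOT part of the real PyPIM library.

class PIMVector:
    """Toy stand-in for a vector resident in a computational memory array."""

    def __init__(self, data):
        self.data = list(data)   # values "stored" in the memory array
        self.in_memory_ops = 0   # count of operations performed in place

    def __add__(self, other):
        # In a real processing-in-memory system, this addition would be
        # translated into machine-level instructions executed inside the
        # memory arrays, with no element crossing the memory bus.
        result = PIMVector(x + y for x, y in zip(self.data, other.data))
        result.in_memory_ops = len(result.data)
        return result

a = PIMVector([1, 2, 3])
b = PIMVector([10, 20, 30])
c = a + b                # looks like ordinary Python arithmetic
print(c.data)            # [11, 22, 33]
print(c.in_memory_ops)   # 3
```

The appeal of this style is that the programmer writes familiar Python, and the library decides where each operation physically runs.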

Additionally, the researchers created a simulation tool to help measure the performance gains of this new computing approach, allowing developers to see how much faster code could run on an in-memory system compared to a traditional one.
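A minimal version of such an estimate can be sketched as a two-term cost model. The cycle counts below are placeholder assumptions for illustration, not outputs of the researchers' simulator, and the model simplifies by assuming the arithmetic itself costs the same in both designs:

```python
# Toy speedup model: a CPU run pays for compute plus data transfer,
# while an in-memory run pays only for compute. All cycle counts are
# invented placeholders, not results from the Technion simulator.

def estimated_speedup(compute_cycles, transfer_cycles):
    cpu_total = compute_cycles + transfer_cycles  # conventional machine
    pim_total = compute_cycles                    # transfers eliminated
    return cpu_total / pim_total

# If moving the data costs three times as much as computing on it,
# eliminating the transfers gives a 4x speedup in this simple model.
print(estimated_speedup(compute_cycles=1_000, transfer_cycles=3_000))  # 4.0
```

Simple as it is, the model captures the key intuition: the more a workload is dominated by data movement, the more in-memory computing stands to gain.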

In their paper, the researchers showed that this platform could handle complex calculations with short, simple code, achieving impressive improvements in speed and efficiency.

This research was presented at the IEEE/ACM International Symposium on Microarchitecture in Austin, Texas, and the paper is available on the arXiv server.