
Artificial intelligence (AI) is becoming more capable every year, but also more power-hungry.
Running large AI models takes a lot of electricity, raising concerns about the environmental impact of data centers and cloud systems that power today’s digital world.
Now, a team of Cornell researchers has found a way to make the chips behind AI run more efficiently, cutting energy use and improving performance.
The breakthrough, developed by researchers at Cornell Tech and Cornell Engineering, focuses on a special kind of chip called a Field-Programmable Gate Array (FPGA).
Unlike standard chips that are fixed at the factory, FPGAs can be reprogrammed after manufacturing.
This flexibility makes them useful in fast-changing industries like AI, cloud computing, and wireless communication.
You can even find FPGAs in familiar equipment such as ultrasound machines, CT scanners, and washing machines.
Inside an FPGA are small computing units called logic blocks. Each block has two main parts: Lookup Tables (LUTs), which handle logical operations, and adder chains, which carry out arithmetic tasks like adding numbers.
In traditional designs, the adder chains can be reached only through the LUTs, so a block doing arithmetic ties up its LUT as well. That wastes resources and slows things down, especially for AI systems that rely heavily on arithmetic.
To solve this problem, the team designed a new chip architecture called “Double Duty.”
It lets the LUTs and adder chains work independently and at the same time. In simple terms, the chip can now do more work with the same parts, making it both faster and more energy-efficient.
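The contrast is easiest to see in a small sketch. The Python snippet below is purely illustrative and is not the team's actual hardware design: it models a logic block as a lookup table plus one adder stage (all class and function names are invented for this example) and compares the traditional arrangement, where the adder is reachable only through the LUT, with a Double-Duty-style block whose LUT and adder are used independently at the same time.

```python
from dataclasses import dataclass


@dataclass
class LogicBlock:
    """Toy model of one FPGA logic block: a lookup table plus one adder stage."""
    lut_table: dict  # maps tuples of input bits to a single output bit

    def lut(self, *bits: int) -> int:
        """Evaluate the lookup table (a small truth table) on its input bits."""
        return self.lut_table[bits]

    def adder(self, a: int, b: int, carry_in: int = 0) -> tuple:
        """One stage of the adder chain: returns (sum_bit, carry_out)."""
        total = a + b + carry_in
        return total & 1, total >> 1


def traditional_use(block, x, y):
    """Traditional arrangement: the adder is reachable only through the LUT,
    so a block doing arithmetic cannot also do an unrelated logic operation."""
    a = block.lut(x, y)           # LUT output feeds the adder input...
    return block.adder(a, y)      # ...so the LUT is tied up by the arithmetic


def double_duty_use(block, x, y, a, b):
    """Double-Duty-style arrangement (as described in the article): the LUT
    and the adder work independently, so one block serves two jobs at once."""
    logic_out = block.lut(x, y)           # independent logic result
    sum_bit, carry = block.adder(a, b)    # independent arithmetic result
    return logic_out, sum_bit, carry


if __name__ == "__main__":
    xor_lut = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # 2-input XOR
    blk = LogicBlock(lut_table=xor_lut)
    print(traditional_use(blk, 1, 0))        # adder usable only via the LUT
    print(double_duty_use(blk, 1, 0, 1, 1))  # LUT and adder used at once
```

In the traditional case the LUT is consumed just to deliver operands to the adder; in the Double-Duty case the same block returns an unrelated logic result alongside the sum, which is why fewer blocks are needed for arithmetic-heavy designs.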
This design is especially valuable for deep neural networks, which are the backbone of modern AI. These networks are often laid out directly onto FPGAs to speed up processing.
By giving each chip more flexibility, Double Duty makes it possible to run these networks more effectively while using less space and power.
In testing, the new design cut the space required for certain AI tasks by more than 20% and boosted performance by almost 10% across a wide range of circuits. That means fewer chips are needed to do the same work, which translates into lower energy use.
The benefits extend beyond AI. Industries like telecommunications, chip verification, and medical imaging could also gain from this architecture because it allows larger programs to fit into smaller chips, saving both space and power.
The project began as an undergraduate idea and grew into a collaboration among Cornell researchers, colleagues at Canadian universities, and engineers from Altera, formerly part of Intel.
The team’s work earned a Best Paper Award at the International Conference on Field-Programmable Logic and Applications (FPL 2025) in the Netherlands.
As AI becomes part of everyday devices and services, innovations like Double Duty could help ensure that smarter machines don’t come at the cost of higher carbon footprints.