China solves 'century-old problem' with new analog chip that is 1,000 times faster than high-end Nvidia GPUs
Researchers from Peking University say their resistive random-access memory chip may be capable of speeds 1,000 times faster than the Nvidia H100 and AMD Vega 20 GPUs.
 
Scientists in China have developed a new chip, with a twist: it's analog, meaning it performs calculations on its own physical circuits rather than via the binary 1s and 0s of standard digital processors.
What’s more, its creators say the new chip is capable of outperforming top-end graphics processing units (GPUs) from Nvidia and AMD by as much as 1,000 times.
In a new study published Oct. 13 in the journal Nature Electronics, researchers from Peking University said their device tackled two key bottlenecks: the energy and data constraints digital chips face in emerging fields like artificial intelligence (AI) and 6G, and the "century-old problem" of poor precision and impracticality that has limited analog computing.
When put to work on complex communications problems — including matrix inversion problems used in massive multiple-input multiple-output (MIMO) systems, a multi-antenna wireless technology — the chip matched the accuracy of standard digital processors while using about 100 times less energy.
With further adjustments, the researchers said, the device then outperformed top-end GPUs like the Nvidia H100 and AMD Vega 20 by as much as 1,000 times. Both chips are major players in AI model training; Nvidia's H100, for instance, is the successor to the A100 graphics cards that OpenAI used to train ChatGPT.
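To make that workload concrete, here is a minimal sketch (written in Python, and not the researchers' own code) of the kind of matrix inversion a massive-MIMO receiver performs: a standard zero-forcing detector that inverts the channel matrix to recover transmitted symbols. The antenna counts, channel model and noise level below are illustrative assumptions.

    # Hypothetical illustration of a massive-MIMO matrix-inversion workload:
    # a zero-forcing detector. H is the channel matrix, x the transmitted
    # symbols, y the received signal (all values here are made up).
    import numpy as np

    rng = np.random.default_rng(0)

    n_rx, n_tx = 16, 8                      # receive / transmit antennas
    H = (rng.standard_normal((n_rx, n_tx))
         + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_tx)
    noise = 0.05 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
    y = H @ x + noise                       # what the antennas actually receive

    # Zero-forcing detection: invert H^H H to undo the channel. This inversion
    # is the linear-algebra bottleneck that the analog chip accelerates.
    x_hat = np.linalg.inv(H.conj().T @ H) @ H.conj().T @ y

    print(np.round(x_hat, 2))               # close to the transmitted symbols

On a digital processor, that inversion step dominates the cost as antenna counts grow, which is why it is a natural benchmark for the new chip.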
The new device is built from arrays of resistive random-access memory (RRAM) cells that store and process data by adjusting how easily electricity flows through each cell.
Unlike digital processors that compute in binary 1s and 0s, the analog design processes information as continuous electrical currents across its network of RRAM cells. By processing data directly within its own hardware, the chip avoids the energy-intensive task of shuttling information between itself and an external memory source.
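As a rough picture of how that in-memory computation works, the sketch below (a simplification, not the team's actual design) treats each RRAM cell as a programmable conductance: Ohm's law multiplies each input voltage by a stored weight, and the currents summing on each row wire perform the addition, so an entire matrix-vector multiply happens in a single analog step.

    # A minimal sketch, assuming an idealized RRAM crossbar: each cell's
    # conductance G[i, j] stores a matrix weight, the input vector is applied
    # as voltages V, and the row currents I are the matrix-vector product.
    import numpy as np

    rng = np.random.default_rng(1)

    G = rng.uniform(0.0, 1.0, size=(4, 4))   # cell conductances = stored matrix
    V = rng.uniform(-1.0, 1.0, size=4)       # input voltages = input vector

    I = G @ V    # in hardware, Ohm's law + current summation do this in one shot

    # A digital processor would instead fetch G from memory and loop:
    I_digital = np.array([sum(G[i, j] * V[j] for j in range(4)) for i in range(4)])

    assert np.allclose(I, I_digital)         # same result, very different cost

The point of the analogy is that the multiply-accumulate work happens where the data already lives, rather than being shuttled back and forth to external memory.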
"With the rise of applications using vast amounts of data, this creates a challenge for digital computers, particularly as traditional device scaling becomes increasingly challenging," the researchers said in the study. "Benchmarking shows that our analogue computing approach could offer a 1,000 times higher throughput and 100 times better energy efficiency than state-of-the-art digital processors for the same precision."
Old tech, new tricks
Analog computing isn't new — quite the opposite, in fact. The Antikythera mechanism, discovered off the coast of Greece in 1901, is estimated to have been built more than 2,000 years ago. It used interlocking gears to perform calculations.
For most of modern computing history, however, analog technology has been written off as an impractical alternative to digital processors. This is because analog systems rely on continuous physical signals to process information — for example, a voltage or electric current. These are much more difficult to control precisely than the two stable states (1 and 0) that digital computers have to work with.
Where analog systems excel is in speed and efficiency. Because they don't need to break calculations down into long strings of binary code — instead representing them as physical operations on the chip's circuitry — analog chips can handle large volumes of information simultaneously while using far less energy.
This becomes particularly significant in data- and energy-intensive applications like AI, where digital processors face limitations in how much information they can process sequentially, as well as in future 6G communications — where networks will have to process huge volumes of overlapping wireless signals in real time.
The researchers said that recent advances in memory hardware could make analog computing viable once again. The team configured the chip's RRAM cells into two circuits: one that provided a fast but approximate calculation, and a second that refined and fine-tuned the result over subsequent iterations until it landed on a more precise number.
Configuring the chip in this way meant that the team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced.
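That recipe resembles classic iterative refinement for solving linear systems, sketched below under that assumption (the paper's exact circuit details differ): a cheap, low-precision solve supplies a rough answer, and repeated residual corrections push it toward full precision.

    # A hedged sketch of iterative refinement, not the paper's circuit: a crude,
    # "analog-grade" solver (mimicked here by rounding an inverse to two
    # decimals) gives a fast rough answer, and a refinement loop repeatedly
    # solves for the leftover error until the result reaches high accuracy.
    import numpy as np

    rng = np.random.default_rng(2)

    n = 8
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned system
    b = rng.standard_normal(n)

    A_inv_lowprec = np.round(np.linalg.inv(A), 2)     # low-precision "fast" solver

    x = A_inv_lowprec @ b                             # quick approximate solution
    for _ in range(10):
        r = b - A @ x                                 # how wrong are we?
        x = x + A_inv_lowprec @ r                     # correct using the cheap solver

    print(np.linalg.norm(A @ x - b))                  # small residual after refinement

In the chip, the roles of both the rough solver and the refinement loop are played by analog RRAM circuits rather than software, which is where the claimed speed and energy savings come from.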
Future improvements to the chip's circuitry could boost its performance even more, the researchers said. Their next goal is to build larger, fully integrated chips capable of handling more complex problems at faster speeds.
Owen Hughes is a freelance writer and editor specializing in data and digital technologies. Previously a senior editor at ZDNET, Owen has been writing about tech for more than a decade, during which time he has covered everything from AI, cybersecurity and supercomputers to programming languages and public sector IT. Owen is particularly interested in the intersection of technology, life and work – in his previous roles at ZDNET and TechRepublic, he wrote extensively about business leadership, digital transformation and the evolving dynamics of remote work.