What Is the Future of Computers?

Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry. (Image credit: Creative Commons Attribution-Share Alike 3.0 Unported | Zephyris)

In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor — a sort of miniature switch — the chip could hold one "bit" of data: either a 1 or a 0, depending on whether the transistor was switched on or off.

Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly shrinking transistors, so that each new generation takes up roughly half the area of the last. Today, after dozens of iterations of this doubling and halving rule, the smallest transistor features measure just a few dozen atoms across, and a typical computer chip holds about 9 million transistors per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as "Moore's law," after Intel co-founder Gordon Moore, who first described the trend in 1965.
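
As a rough illustration of how quickly that doubling compounds, the short Python sketch below projects transistor counts forward under Moore's law. The baseline year and count are placeholder assumptions chosen for the example, not figures from this article.

```python
# Back-of-the-envelope Moore's law projection: the transistor count doubles
# once every two years. The 1971 baseline (~2,300 transistors, roughly the
# first commercial microprocessor) is an illustrative assumption.
def moores_law_count(start_year, start_count, target_year, doubling_period=2):
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

for year in (1971, 1991, 2011):
    print(year, f"{moores_law_count(1971, 2300, year):,.0f}")
# 1971 -> 2,300   1991 -> ~2.4 million   2011 -> ~2.4 billion
```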

Moore's law renders last year's laptop models obsolete, and it will undoubtedly make next year's tech devices breathtakingly small and fast compared to today's. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?

The singularity

Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the "singularity." And according to some, the time is nigh.

Inventor, author and self-described "futurist" Ray Kurzweil has predicted that computers will reach parity with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and by the end of that decade, computers will be capable of human-level intelligence.

The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, "then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain," explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. "Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself."
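
The arithmetic behind such projections is straightforward: pick an estimate of the brain's processing rate, an estimate of today's hardware, and count two-year doublings. The sketch below shows only the mechanics; both rate estimates are placeholder assumptions (brain estimates alone span several orders of magnitude), and the projected date shifts by several years for every factor-of-ten change in either number.

```python
import math

# Year at which an exponentially doubling capability reaches a target rate.
# Both rates used below are placeholder assumptions, not figures from the article.
def crossover_year(current_year, current_rate, target_rate, doubling_period=2):
    doublings_needed = math.log2(target_rate / current_rate)
    return current_year + doubling_period * doublings_needed

# Assume a chip at ~1e12 operations per second in 2012 and a brain at ~1e16:
print(round(crossover_year(2012, 1e12, 1e16)))  # ~2039 with these inputs
```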

What happens next is uncertain — and has been the subject of speculation since the dawn of computing.

"Once the machine thinking method has started, it would not take long to outstrip our feeble powers," Alan Turing said in 1951 at a talk entitled "Intelligent Machinery: A heretical theory," presented at the University of Manchester in the United Kingdom. "At some stage therefore we should have to expect the machines to take control." The British mathematician I.J. Good hypothesized that "ultraintelligent" machines, once created, could design even better machines. "There would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make," he wrote.

Buzz about the coming singularity has escalated to such a pitch that there's even a book coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an associate professor of economics at Smith College, about how to survive in a post-singularity world.

Brain-like processing

But not everyone puts stock in this notion of a singularity, or thinks we'll ever reach it. "A lot of brain scientists now believe the complexity of the brain is so vast that even if we could build a computer that mimics the structure, we still don't know if the thing we build would be able to function as a brain," Denning told Life's Little Mysteries. Perhaps without sensory inputs from the outside world, computers could never become self-aware.

Others argue that Moore's law will soon start to break down, or that it has already. The argument stems from the fact that engineers can't miniaturize transistors much more than they already have, because they're already pushing atomic limits. "When there are only a few atoms in a transistor, you can no longer guarantee that a few atoms behave as they're supposed to," Denning explained. On the atomic scale, bizarre quantum effects set in. Transistors no longer maintain a single state represented by a "1" or a "0," but instead vacillate unpredictably between the two states, rendering circuits and data storage unreliable. The other limiting factor, Denning says, is that transistors give off heat when they switch between states, and when too many transistors, regardless of their size, are crammed together onto a single silicon chip, the heat they collectively emit melts the chip.
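
Denning's heat argument can be made concrete with the standard first-order model of switching power in digital logic: each time a transistor switches, it charges or discharges a tiny capacitance, dissipating energy that scales with that capacitance and the square of the supply voltage. The sketch below uses that textbook relation; the specific component values are illustrative assumptions, not the specifications of any real chip.

```python
# First-order dynamic power in CMOS logic: power grows with the number of
# switching transistors, the switching rate, the capacitance driven, and the
# square of the supply voltage. All values below are illustrative assumptions.
def dynamic_power_watts(n_transistors, activity, capacitance_f, voltage_v, clock_hz):
    return n_transistors * activity * capacitance_f * voltage_v ** 2 * clock_hz

# One billion transistors, 10% switching each cycle, ~0.1 fF apiece, 1 V, 3 GHz:
print(dynamic_power_watts(1e9, 0.1, 1e-16, 1.0, 3e9), "W")  # roughly 30 W
```

Shrinking the transistors does not change that total much, which is why packing ever more of them onto one chip runs into a thermal wall.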

For these reasons, some scientists say computing power is approaching its zenith. "Already we see a slowing down of Moore's law," the theoretical physicist Michio Kaku said in a Big Think lecture in May.

But if that's the case, it's news to many. Doyne Farmer, a professor of mathematics at Oxford University who studies the evolution of technology, says there is little evidence for an end to Moore's law. "I am willing to bet that there is insufficient data to draw a conclusion that a slowing down [of Moore's law] has been observed," Farmer told Life's Little Mysteries. He says computers continue to grow more powerful as they become more brain-like.

Computers can already perform individual operations orders of magnitude faster than humans can, Farmer said; meanwhile, the human brain remains far superior at parallel processing, or performing multiple operations at once. For most of the past half-century, engineers made computers faster by increasing the number of transistors in their processors, but they only recently began "parallelizing" computer processors. Because individual processors can no longer simply be packed with many more transistors, engineers have begun upping computing power by building multi-core processors, chips that contain several processing units working in parallel.

"This controls the heat problem, because you can slow down the clock," Denning explained. "Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips." He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years.
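
As a small, hedged illustration of that idea, the Python sketch below runs the same batch of independent tasks first on one worker and then on one worker per core. The workload is a toy stand-in; real-world speedups depend on how much of a given problem can actually be split into parallel pieces.

```python
# Toy illustration of spreading independent work across cores: the same batch
# of tasks run with one worker versus one worker per core. The workload is a
# stand-in; real speedups depend on how parallelizable the problem is.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def busy_task(n):
    return sum(i * i for i in range(n))

def run(workers, tasks=16, n=200_000):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_task, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    print("1 worker :", round(run(1), 2), "s")
    print(f"{os.cpu_count()} workers:", round(run(os.cpu_count()), 2), "s")
```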

And because parallelization is the key to complexity, "In a sense multi-core processors make computers work more like the brain," Farmer told Life's Little Mysteries.

And then there's the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today's computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: quantum particles, such as atoms or photons, whose states can be "entangled" with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer can, in principle, act on an enormous number of possible states at once, and each additional particle added to the system of entangled particles doubles the number of states the machine can represent.
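
That doubling has a concrete counterpart in how much classical bookkeeping a quantum state requires: describing an n-qubit register takes 2^n complex amplitudes, so each added qubit doubles the size of the description. The sketch below tracks only that state-vector size (it is not a quantum algorithm), using a generic textbook construction rather than any particular quantum computing library.

```python
import numpy as np

# Classical description of an n-qubit register: 2**n complex amplitudes.
# Each added qubit doubles the length of the state vector, which is the
# sense in which the representable state space doubles per particle.
def n_qubit_zero_state(n):
    state = np.array([1.0 + 0j])            # zero qubits: a single amplitude
    zero = np.array([1.0 + 0j, 0.0 + 0j])   # the |0> state of one qubit
    for _ in range(n):
        state = np.kron(state, zero)         # appending a qubit doubles the size
    return state

for n in (1, 2, 10, 20):
    print(n, "qubits ->", n_qubit_zero_state(n).size, "amplitudes")
# 1 -> 2, 2 -> 4, 10 -> 1024, 20 -> 1048576
```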

If physicists manage to harness the potential of quantum computers — something they are struggling to do — Moore's law will certainly hold far into the future, they say.

Ultimate limit

If Moore's law does hold, and computer power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say "yes." In 2005, they calculated that Moore's law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it.

So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Their projection indicates that computers will encompass the entire reachable universe, turning every bit of matter and energy into a part of their circuitry, in 600 years' time.

That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.
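
The flavor of that argument is easy to reproduce: counting two-year doublings from any plausible starting point up to any cosmological ceiling takes only a few hundred steps. The two bounds in the sketch below are loose illustrative assumptions, not Krauss and Starkman's actual figures.

```python
import math

# How many years of two-year doublings until a bit count hits a ceiling?
# Both bounds below are loose illustrative assumptions, not Krauss and
# Starkman's published figures.
def years_until_ceiling(current_bits, ceiling_bits, doubling_period=2):
    doublings = math.log2(ceiling_bits / current_bits)
    return doubling_period * doublings

# Assume ~1e23 bits of storage in the world today and a ceiling of ~1e90 bits
# if every particle within reach were pressed into service:
print(round(years_until_ceiling(1e23, 1e90)), "years")  # ~445 with these inputs
```

Notably, shifting either bound by ten orders of magnitude moves the answer by only about 66 years, which is why estimates of this kind land stubbornly in the range of a few centuries.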

Personally, Starkman thinks Moore's law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there's no telling what will happen. We might reach the singularity — the point when computers become conscious, take over, and then start to self-improve. Or maybe we won't. This month, Denning has a new paper out in the journal Communications of the ACM, called "Don't feel bad if you can't predict the future." It's about all the people who have tried to do so in the past, and failed.

This story was provided by Life's Little Mysteries, a sister site to LiveScience.

Natalie Wolchover

Natalie Wolchover was a staff writer for Live Science from 2010 to 2012 and is currently a senior physics writer and editor for Quanta Magazine. She holds a bachelor's degree in physics from Tufts University and has studied physics at the University of California, Berkeley. Along with the staff of Quanta, Wolchover won the 2022 Pulitzer Prize for explanatory writing for her work on the building of the James Webb Space Telescope. Her work has also appeared in The Best American Science and Nature Writing, The Best Writing on Mathematics, Nature, The New Yorker and Popular Science. She was the 2016 winner of the Evert Clark/Seth Payne Award, an annual prize for young science journalists, as well as the winner of the 2017 Science Communication Award from the American Institute of Physics.