Vanishing point: the rise of the invisible computer
The long read: For decades, computers have got smaller and more powerful, enabling huge scientific progress. But this can't go on for ever. What happens when they stop shrinking?
In 1971, Intel, then an obscure firm in what would only later come to be known as Silicon Valley, released a chip called the 4004. It was the world's first commercially available microprocessor, which meant it sported all the electronic circuits necessary for advanced number-crunching in a single, tiny package. It was a marvel of its time, built from 2,300 tiny transistors, each around 10,000 nanometres (or billionths of a metre) across, about the size of a red blood cell. A transistor is an electronic switch that, by flipping between on and off, provides a physical representation of the 1s and 0s that are the fundamental particles of information.
In 2015 Intel, by then the world's leading chipmaker, with revenues of more than $55bn that year, released its Skylake chips. The firm no longer publishes exact numbers, but the best guess is that they have about 1.5bn-2bn transistors apiece. Spaced 14 nanometres apart, each is so tiny as to be literally invisible, for they are more than an order of magnitude smaller than the wavelengths of light that humans use to see.
Everyone knows that modern computers are better than old ones. But it is hard to convey just how much better, for no other consumer technology has improved at anything approaching a similar pace. The standard analogy is with cars: if the car from 1971 had improved at the same rate as computer chips, then by 2015 new models would have had top speeds of about 420 million miles per hour. That is roughly two-thirds the speed of light, or fast enough to drive round the world in less than a fifth of a second. If that is still too slow, then before the end of 2017 models that can go twice as fast again will begin arriving in showrooms.
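The analogy can be checked with a little arithmetic, assuming a 1971 top speed of roughly 100mph and 22 doublings between 1971 and 2015 (both figures are assumptions chosen to match the article's timeline, not data from the piece):

```python
# Back-of-the-envelope check of the car analogy.
SPEED_1971_MPH = 100               # assumed top speed of a 1971 car
DOUBLINGS = 22                     # assumed ticks of Moore's law, 1971-2015
SPEED_OF_LIGHT_MPH = 670_616_629   # speed of light in miles per hour
EARTH_CIRCUMFERENCE_MILES = 24_901

# Doubling the speed 22 times, as chips doubled their transistor counts.
top_speed = SPEED_1971_MPH * 2 ** DOUBLINGS
print(f"Top speed: {top_speed:,} mph")

# Roughly two-thirds the speed of light, as the article says.
print(f"Fraction of light speed: {top_speed / SPEED_OF_LIGHT_MPH:.2f}")

# Time to drive once round the world at that speed.
lap_seconds = EARTH_CIRCUMFERENCE_MILES / top_speed * 3600
print(f"Round-the-world time: {lap_seconds:.2f} s")
```

The doubling alone does all the work: 100mph times 2^22 lands on about 420 million mph.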
This blistering progress is a consequence of an observation first made in 1965 by one of Intel's founders, Gordon Moore. Moore noted that the number of components that could be crammed onto an integrated circuit was doubling every year. Later amended to every two years, Moore's law has become a self-fulfilling prophecy that sets the pace for the entire computing industry. Each year, firms such as Intel and the Taiwan Semiconductor Manufacturing Company spend billions of dollars figuring out how to keep shrinking the components that go into computer chips. Along the way, Moore's law has helped to build a world in which chips are built into everything from kettles to cars (which can, increasingly, drive themselves), where millions of people relax in virtual worlds, financial markets are played by algorithms and pundits worry that artificial intelligence will soon take all the jobs.
But it is also a force that is nearly spent. Shrinking a chip's components gets harder each time you do it, and with modern transistors having features measured in mere dozens of atoms, engineers are simply running out of room. There have been roughly 22 ticks of Moore's law between the launch of the 4004 in 1971 and mid-2016. For the law to hold until 2050 there will have to be 17 more, in which case engineers would have to figure out how to build computers from components smaller than an atom of hydrogen, the smallest element there is. That, as far as anyone knows, is impossible.
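The arithmetic behind that claim can be sketched, on the simplifying assumption that each tick of Moore's law doubles transistor density, so linear feature sizes shrink by a factor of the square root of two per tick (an assumption for illustration; real process nodes do not scale this cleanly):

```python
import math

FEATURE_2016_NM = 14        # roughly the Skylake-era spacing cited above
TICKS_TO_2050 = 17          # remaining doublings for the law to hold to 2050
HYDROGEN_DIAMETER_NM = 0.1  # approximate diameter of a hydrogen atom

# Each density doubling halves the area per transistor, so the linear
# dimension shrinks by sqrt(2) per tick.
feature_2050 = FEATURE_2016_NM / math.sqrt(2) ** TICKS_TO_2050
print(f"Implied 2050 feature size: {feature_2050:.3f} nm")
print(f"Smaller than a hydrogen atom? {feature_2050 < HYDROGEN_DIAMETER_NM}")
```

Seventeen more ticks take 14 nanometres down to a few hundredths of a nanometre, comfortably below the size of a hydrogen atom, which is why the article calls the extrapolation impossible.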
Yet business will kill Moore's law before physics does, for the benefits of shrinking transistors are not what they used to be. Moore's law was given teeth by a related phenomenon called Dennard scaling (named for Robert Dennard, an IBM engineer who first formalised the idea in 1974), which states that shrinking a chip's components makes that chip faster, less power-hungry and cheaper to produce. Chips with smaller components, in other words, are better chips, which is why the computing industry has been able to persuade consumers to shell out for the latest models every few years. But the old magic is fading.
Shrinking chips no longer makes them faster or more efficient in the way that it used to. At the same time, the rising cost of the ultra-sophisticated equipment needed to make the chips is eroding the financial gains. Moore's second law, more light-hearted than his first, states that the cost of a foundry, as such factories are called, doubles every four years. A modern one leaves little change from $10bn. Even for Intel, that is a lot of money.
The result is a consensus among Silicon Valley's experts that Moore's law is near its end. "From an economic standpoint, Moore's law is dead," says Linley Gwennap, who runs a Silicon Valley analysis firm. Dario Gil, IBM's head of research and development, is similarly frank: "I would say categorically that the future of computing cannot just be Moore's law any more." Bob Colwell, a former chip designer at Intel, thinks the industry may be able to get down to chips whose components are just five nanometres apart by the early 2020s, "but you'll struggle to persuade me that they'll get much further than that".
One of the most powerful technological forces of the past 50 years, in other words, will soon have run its course. The assumption that computers will carry on getting better and cheaper at breakneck speed is baked into people's ideas about the future. It underlies many technological forecasts, from self-driving cars to better artificial intelligence and ever more compelling consumer gadgetry. There are other ways of making computers better besides shrinking their components. The end of Moore's law does not mean that the computer revolution will stall. But it does mean that the coming decades will look very different from the preceding ones, for none of the alternatives is as reliable, or as repeatable, as the great shrinkage of the past half-century.
Moore's law has made computers smaller, transforming them from room-filling behemoths to svelte, pocket-filling slabs. It has also made them more frugal: a smartphone that packs more computing power than was available to entire nations in 1971 can last a day or more on a single battery charge. But its most famous effect has been to make computers faster. By 2050, when Moore's law will be ancient history, engineers will have to make use of a string of other tricks if they are to keep computers getting faster.
There are some easy wins. One is better programming. The breakneck pace of Moores law has in the past left software firms with little time to streamline their products. The fact that their customers would be buying faster machines every few years weakened the incentive even further: the easiest way to speed up sluggish code might simply be to wait a year or two for hardware to catch up. As Moores law winds down, the famously short product cycles of the computing industry may start to lengthen, giving programmers more time to polish their work.
Another is to design chips that trade general mathematical prowess for more specialised hardware. Modern chips are starting to feature specialised circuits designed to speed up common tasks, such as decompressing a film, performing the complex calculations required for encryption or drawing the complicated 3D graphics used in video games. As computers spread into all sorts of other products, such specialised silicon will be very useful. Self-driving cars, for instance, will increasingly make use of machine vision, in which computers learn to interpret images from the real world, classifying objects and extracting information, which is a computationally demanding task. Specialised circuitry will provide a significant boost.