Monthly Archives: June 2017

The Future of Computing Depends on Making It Reversible

For more than 50 years, computers have made steady and dramatic improvements, all thanks to Moore’s Law—the exponential increase over time in the number of transistors that can be fabricated on an integrated circuit of a given size. Moore’s Law owed its success to the fact that as transistors were made smaller, they became simultaneously cheaper, faster, and more energy efficient. The payoff from this win-win-win scenario enabled reinvestment in semiconductor fabrication technology that could make even smaller, more densely packed transistors. And so this virtuous circle continued, decade after decade.

Now, though, experts in industry, academia, and government laboratories anticipate that semiconductor miniaturization won’t continue much longer—maybe 5 or 10 years. Making transistors smaller no longer yields the improvements it used to. The physical characteristics of small transistors caused clock speeds to stagnate more than a decade ago, which drove the industry to start building chips with multiple cores. But even multicore architectures must contend with increasing amounts of “dark silicon,” areas of the chip that must be powered off to avoid overheating.

Heroic efforts are being made within the semiconductor industry to try to keep miniaturization going. But no amount of investment can change the laws of physics. At some point—now not very far away—a new computer that simply has smaller transistors will no longer be any cheaper, faster, or more energy efficient than its predecessors. At that point, the progress of conventional semiconductor technology will stop.

What about unconventional semiconductor technology, such as carbon-nanotube transistors, tunneling transistors, or spintronic devices? Unfortunately, many of the same fundamental physical barriers that prevent today’s complementary metal-oxide-semiconductor (CMOS) technology from advancing very much further will still apply, in a modified form, to those devices. We might be able to eke out a few more years of progress, but if we want to keep moving forward decades down the line, new devices are not enough: We’ll also have to rethink our most fundamental notions of computation.

Let me explain. For the entire history of computing, our calculating machines have operated in a way that causes the intentional loss of some information (it’s destructively overwritten) in the process of performing computations. But for several decades now, we have known that it’s possible in principle to carry out any desired computation without losing information—that is, in such a way that the computation could always be reversed to recover its earlier state. This idea of reversible computing goes to the very heart of thermo­dynamics and information theory, and indeed it is the only possible way within the laws of physics that we might be able to keep improving the cost and energy efficiency of general-purpose computing far into the future.
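
To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not drawn from the article). An ordinary AND gate maps several different input pairs onto the same output, so the inputs cannot be recovered afterward; a reversible gate such as the controlled-NOT never merges two states, and running it a second time undoes it exactly.

```python
# Illustrative sketch: an irreversible gate discards information,
# while a reversible gate can always be run backward.

def and_gate(a: int, b: int) -> int:
    """Irreversible: (0, 0), (0, 1), and (1, 0) all produce 0,
    so the inputs cannot be deduced from the output alone."""
    return a & b

def cnot(a: int, b: int) -> tuple:
    """Reversible controlled-NOT: flips b whenever a is 1.
    Applying it twice restores the original pair, so nothing is lost."""
    return (a, b ^ a)

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert cnot(*cnot(*pair)) == pair  # running the gate backward recovers the inputs
```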

In the past, reversible computing never received much attention. That’s because it’s very hard to implement, and there was little reason to pursue this great challenge so long as conventional technology kept advancing. But with the end now in sight, it’s time for the world’s best physics and engineering minds to commence an all-out effort to bring reversible computing to practical fruition.

The history of reversible computing begins with physicist Rolf Landauer of IBM, who published a paper in 1961 titled “Irreversibility and Heat Generation in the Computing Process.” In it, Landauer argued that the logically irreversible character of conventional computational operations has direct implications for the thermodynamic behavior of a device that is carrying out those operations.

Landauer’s reasoning can be understood by observing that the most fundamental laws of physics are reversible, meaning that if you had complete knowledge of the state of a closed system at some time, you could always—at least in principle—run the laws of physics in reverse and determine the system’s exact state at any previous time.

To better see that, consider a game of billiards—an ideal one with no friction. If you were to make a movie of the balls bouncing off one another and the bumpers, the movie would look normal whether you ran it backward or forward: The collision physics would be the same, and you could work out the future configuration of the balls from their past configuration or vice versa equally easily.

The same fundamental reversibility holds for quantum-scale physics. As a consequence, you can’t have a situation in which two different detailed states of any physical system evolve into the exact same state at some later time, because that would make it impossible to determine the earlier state from the later one. In other words, at the lowest level in physics, information cannot be destroyed.

The reversibility of physics means that we can never truly erase information in a computer. Whenever we overwrite a bit of information with a new value, the previous information may be lost for all practical purposes, but it hasn’t really been physically destroyed. Instead it has been pushed out into the machine’s thermal environment, where it becomes entropy—in essence, randomized information—and manifests as heat.
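
Landauer’s analysis puts a floor under that heat: erasing a single bit must dissipate at least k_B T ln 2 of energy, where k_B is Boltzmann’s constant and T is the temperature of the environment. The arithmetic below is my own back-of-the-envelope figure, not the article’s.

```latex
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times \ln 2
         \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{per erased bit at room temperature}
```

Tiny as that figure is, a modern chip switches billions of transistors billions of times per second, and each real switching event dissipates far more energy than this theoretical minimum.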

Returning to our billiards-game example, suppose that the balls, bumpers, and felt were not frictionless. Then, sure, two different initial configurations might end up in the same state—say, with the balls resting on one side. The frictional loss of information would then generate heat, albeit a tiny amount.

Today’s computers rely on erasing information all the time—so much so that every single active logic gate in conventional designs destructively overwrites its previous output on every clock cycle, wasting the associated energy. A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect.

Someone Else’s Computer: The Prehistory of Cloud Computing

“There is no cloud,” goes the quip. “It’s just someone else’s computer.”

The joke gets at a key feature of cloud computing: Your data and the software to process it reside in a remote data center—perhaps owned by Amazon, Google, or Microsoft—which you share with many users even if it feels like it’s yours alone.

Remarkably, this was also true of a popular mode of computing in the 1960s, ’70s, and ’80s: time-sharing. Much of today’s cloud computing was directly prefigured in yesterday’s time-sharing. Users connected their terminals—often teletypes—over telephone lines to remote computers owned by a time-sharing company. These remote computers offered a variety of applications and services, as well as data storage. The key to such systems was the operating system, built to switch rapidly among the tasks of its many users, giving each one the illusion of a dedicated machine.
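
As a toy sketch of that core trick (my own illustration in Python, not Tymshare’s actual software), a round-robin scheduler gives each user’s task a brief slice of time in turn, cycling fast enough that every user appears to have the machine to themselves.

```python
# Toy round-robin time-sharing loop: run each user's task for a short slice,
# then move it to the back of the line until it finishes.
from collections import deque

def time_share(tasks, slice_steps=1):
    """tasks: dict mapping user name -> iterator of work steps."""
    queue = deque(tasks.items())
    while queue:
        user, work = queue.popleft()
        for _ in range(slice_steps):      # run this user's task briefly
            try:
                step = next(work)
            except StopIteration:
                break                     # task finished; drop it
            print(f"{user}: {step}")
        else:
            queue.append((user, work))    # not finished: back of the line

time_share({
    "alice": iter(["edit file", "compile", "run"]),
    "bob":   iter(["query db", "print report"]),
})
```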

The pioneering firm Tymshare produced the button shown above along with the largest commercial computer network of its era. Called Tymnet, it spanned the globe and was by the late 1970s larger than the ARPANET. Compare this schematic of Tymnet, detailing all of its nodes, with the sparser schematics of the ARPANET [PDF] from the same era. By 1975, Tymshare was handling about 450,000 interactive sessions per month.

Ann Hardy is a crucial figure in the story of Tymshare and time-sharing. She began programming in the 1950s, developing software for the IBM Stretch supercomputer. Frustrated at the lack of opportunity and pay inequality for women at IBM—at one point she discovered she was paid less than half of what the lowest-paid man reporting to her was paid—Hardy left to study at the University of California, Berkeley, and then joined the Lawrence Livermore National Laboratory in 1962. At the lab, one of her projects involved an early and surprisingly successful time-sharing operating system.

In 1966, Hardy landed a job at Tymshare, which had been founded by two former General Electric employees looking to provide time-sharing services to aerospace companies. Tymshare had planned to use an operating system that had originated at UC Berkeley, but it wasn’t designed for commercial use, and so Hardy rewrote it.

We Folded: AI Bests the Top Human Poker Pros

Roughly a year ago to the day, Google researchers announced that their artificial intelligence, AlphaGo, had mastered the ancient game of Go. At the time, Discover wrote that there was still one game that gave computers fits: poker.

Not anymore.

Late Monday night, a computer program designed by two Carnegie Mellon University researchers beat four of the world’s top no-limit Texas Hold’em poker players. The algorithm, named Libratus by its creators, collected more than $1.5 million in chips after a marathon 20-day tournament in Pittsburgh. The victory comes only two years after the same researchers’ algorithm failed to beat human players.

Tough Nut to Crack

In the past few decades, computer scientists’ algorithms have surpassed human prowess in checkers, chess, Scrabble, Jeopardy! and Go—our biological dominance in recreational pastimes is dwindling. But board games are played with a finite set of moves, the rules are clear-cut, and your opponent’s strategy unfolds in plain view on the board. Computers are well suited to sorting through those possibilities and making optimal choices in such logical, rules-based games.

Poker was seen as a stronghold for human minds because it relies heavily on “imperfect information”—we don’t know which cards our opponents hold, and the number of possible game situations is so large that it defies exhaustive calculation. In addition, divining an opponent’s next move relies heavily on psychology. The best players have refined bluffing into an art form, but computers don’t fare very well when asked to intuit how humans will react.

These hurdles were obviously no match for the improved algorithm designed by Tuomas Sandholm and Noam Brown. While they haven’t yet released the specific details of their program, it seems that they relied on the well-worn tactic of “training.” Libratus ran trillions of simulated poker games, building its skills through trial and error, until it discovered an optimal, winning strategy. This allowed the AI to learn the nuances of bluffing and calling all by itself, and meant that it could learn from its mistakes.
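
Since the specifics have not been published, the snippet below is only a generic sketch of the family of techniques poker bots commonly use: regret matching, the building block of counterfactual regret minimization. Over many simulated games it shifts probability toward the actions it most regrets not having played, and its averaged strategy converges toward an unexploitable one; rock-paper-scissors stands in here for poker to keep the example tiny.

```python
# Generic regret-matching self-play sketch (illustrative only; Libratus's exact
# method was not disclosed). Play many simulated rounds, track how much better
# each alternative action would have done ("regret"), and favor the actions
# with the most accumulated regret. The averaged strategy for rock-paper-
# scissors approaches the unexploitable 1/3-1/3-1/3 mix.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}

def payoff(mine, theirs):
    if (mine, theirs) in BEATS:
        return 1
    return -1 if (theirs, mine) in BEATS else 0

def current_strategy(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regrets = [0.0, 0.0, 0.0]
strategy_sum = [0.0, 0.0, 0.0]
for _ in range(100_000):
    probs = current_strategy(regrets)
    mine = random.choices(range(3), weights=probs)[0]
    theirs = random.choices(range(3), weights=probs)[0]   # self-play opponent
    for a in range(3):                                    # accumulate regrets
        regrets[a] += payoff(ACTIONS[a], ACTIONS[theirs]) - payoff(ACTIONS[mine], ACTIONS[theirs])
    strategy_sum = [s + p for s, p in zip(strategy_sum, probs)]

average = [round(s / sum(strategy_sum), 3) for s in strategy_sum]
print(average)   # approaches [0.333, 0.333, 0.333]
```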

“The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” Sandholm said in a statement.

Better Every Day

Sandholm says that Libratus would review each day’s play every night and address the three most problematic holes in its strategy. When play began the next day, the human players were forced to try new strategies in their attempt to trick the machine. The poker pros would meet every night as well to discuss strategies, but their efforts couldn’t match the processing power of the Pittsburgh Supercomputing Center’s Bridges computer, which drew on the equivalent of 3,300 laptops’ worth of computing power.

Libratus seemed to favor large, risky bets, which initially made the human players balk. They soon learned that it was best to try to defeat the AI early in a hand, when the most cards are still unseen and uncertainty is greatest. As more cards were revealed and more decisions were made, the computer could refine its decision making further.

The algorithm isn’t limited to poker, either. While this version of the program was trained specifically on the rules of Texas Hold’em, it was written broadly enough that it could conceivably learn to master any situation that involves imperfect information, such as negotiations, military strategy, and medical planning.

Libratus isn’t quite ready for the World Poker Tour yet. The version of the game it played was heads-up, with only two players at the table, unlike most tournaments. Games with more players compound the number of variables at play, making it significantly more difficult for a computer to choose the best course of action.

Why Distracted Drivers Matter for Automated Cars

When a 2015 Tesla Model S collided with a tractor trailer at a highway intersection west of Williston, Florida, the resulting crash killed the Tesla driver. A federal investigation of the May 7, 2016, incident found that the car’s Autopilot driver-assist system was not at fault and showed that the driver had at least seven seconds to spot the tractor trailer prior to the crash. But the tragedy emphasized the fact that the latest automated cars still require drivers to pay attention and be ready to take back control of the wheel.

The need for human drivers to take control at least some of the time will last until automakers roll out the first fully driverless commercial vehicles. Most studies have understandably focused on how quickly drivers can take back control from future self-driving cars in emergency situations. But researchers at the University of Southampton in the UK found very few studies that looked at takeover reaction times in normal driving situations such as getting on and off of highways. Their new paper found such different takeover reaction times among individual drivers—anywhere from two seconds to almost half a minute—that they suggested automated cars should give drivers the flexibility to choose how much time they need.

“It is evident that there is a large spread in the takeover reaction times, which when designing driving automation should be considered, as the range of performance is more important than the median or mean, as these exclude a large portion of drivers,” researchers wrote.

This matters because the handover between automated and manual steering could prove dangerous if the human drivers remain distracted or unprepared. Makers of automated cars might feel tempted to simply find the average control transition time among drivers and develop technology standards based on that average. But such systems would not work so well for drivers who react much more quickly or slowly than the average, according to the paper, published online Jan. 26 in the Journal of the Human Factors and Ergonomics Society.
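
A small numerical sketch of that point, using invented numbers rather than the study’s data: a warning budget tuned to the mean reaction time leaves the slower tail of drivers short of time, while a budget tuned to a high percentile of the observed spread covers nearly everyone.

```python
# Hypothetical takeover-time data (seconds); not the study's measurements.
import statistics

takeover_times_s = [2.1, 3.4, 4.0, 4.8, 5.5, 6.2, 7.9, 9.3, 11.0, 14.7, 19.8, 25.7]

mean_budget = statistics.mean(takeover_times_s)
p95_budget = statistics.quantiles(takeover_times_s, n=20)[-1]   # ~95th percentile

def covered(budget):
    """Fraction of these drivers whose takeover time fits within the budget."""
    return sum(t <= budget for t in takeover_times_s) / len(takeover_times_s)

print(f"mean budget of {mean_budget:.1f}s covers {covered(mean_budget):.0%} of these drivers")
print(f"95th-percentile budget of {p95_budget:.1f}s covers {covered(p95_budget):.0%} of these drivers")
```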

The Road to Safer Automated Cars

Most past studies only reported the average takeover reaction times instead of the full spread of takeover reaction times. The University of Southampton study found a fairly large spread despite involving a small sample size of just 26 drivers—10 women and 16 men—operating a Jaguar XJ 350 driving simulator.

The UK researchers tested how well the drivers handled transitions between automated driving and manual driving both with and without the distraction of reading an issue of National Geographic magazine. The distraction added an average of about 1.5 seconds to the takeover reaction times. That means automakers may want to consider making self-driving cars that can adjust takeover times if they sense the driver is distracted.

“In light of these results, there is a case for ‘adaptive automation’ that modulates [takeover reaction times] by, for example, detecting whether the driver gaze is off road for a certain period and providing the driver with a few additional seconds before resuming control,” researchers said.
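
Here is a minimal sketch of that ‘adaptive automation’ idea (the threshold and timing values are my own assumptions for illustration, not figures prescribed by the paper): if the driver’s gaze has been off the road for too long, the car grants a few extra seconds before handing back control.

```python
# Assumed values for illustration only; the paper does not specify them.
BASE_LEAD_TIME_S = 7.0        # nominal warning before the driver must take over
DISTRACTION_BONUS_S = 1.5     # roughly the average delay the study measured
GAZE_OFF_ROAD_LIMIT_S = 2.0   # threshold for treating the driver as distracted

def takeover_lead_time(gaze_off_road_s: float) -> float:
    """Seconds of warning to give before handing control back to the driver."""
    if gaze_off_road_s > GAZE_OFF_ROAD_LIMIT_S:
        return BASE_LEAD_TIME_S + DISTRACTION_BONUS_S
    return BASE_LEAD_TIME_S

print(takeover_lead_time(0.5))   # attentive driver  -> 7.0
print(takeover_lead_time(3.2))   # distracted driver -> 8.5
```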

The University of Southampton study also took the unprecedented step of looking at how long the human drivers needed to switch from manual driving to automated car control. That spread of times ranged from just under three seconds to almost 24 seconds.