We Folded: AI Bests the Top Human Poker Pros

Roughly a year ago, to the day, Google researchers announced their artificial intelligence, AlphaGo, had mastered the ancient game of Go. At the time, Discover wrote that there was still one game that gave computers fits: poker.

Not anymore.

Late Monday night, a computer program designed by two Carnegie Mellon University researchers beat four of the world’s top no-limit Texas Hold’em poker players. The algorithm, named Libratus by its creators, collected more than $1.5 million in chips after a marathon 20-day tournament in Pittsburgh. The victory comes only two years after the same researchers’ algorithm failed to beat human players.

Tough Nut to Crack

In the past few decades, computer scientists’ algorithms have surpassed human prowess in checkers, chess, Scrabble, Jeopardy! and Go—our biological dominance in recreational pastimes is dwindling. But board games are played with a finite set of moves, the rules are clear-cut and your opponent’s strategy unfolds on the board. Computers are well suited to sorting through the possibilities and making optimal choices in these logical, rules-based games.

Poker was seen as a stronghold for human minds because it relies heavily on “imperfect information”—we don’t know which cards our opponents hold, and the number of possible

Why Distracted Drivers Matter for Automated Cars

When a 2015 Tesla Model S collided with a tractor trailer at a highway intersection west of Williston, Florida, the resulting crash killed the Tesla driver. A federal investigation of the May 7, 2016 incident found that the Tesla car’s Autopilot driver-assist system was not at fault and showed that the driver had at least seven seconds to spot the tractor trailer prior to the crash. But the tragedy emphasized the fact that the latest automated cars still require drivers to pay attention and be ready to take back control of the wheel.

The need for human drivers to take control at least some of the time will last until automakers roll out the first fully driverless commercial vehicles. Most studies have understandably focused on how quickly drivers can take back control from future self-driving cars in emergency situations. But researchers at the University of Southampton in the UK found very few studies that looked at takeover reaction times in normal driving situations such as getting on and off highways. Their new paper found such different takeover reaction times among individual drivers—anywhere from two seconds to almost half a minute—that they suggested automated cars should give drivers the flexibility to choose how much time they need.

How to Train Your Robot with Brain Oops Signals

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies upon a system that interprets the human brain’s “oops” signals to let Baxter know if a mistake has been made.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds—a way of creating instant feedback for Baxter the robot when it sorted paint cans and wire spools into two different bins in front of the humans.
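The paper’s actual EEG pipeline isn’t reproduced here, but the core idea—training a binary classifier to separate “error” brain-signal windows from normal ones—can be sketched with synthetic data. Everything below (the feature count, the offset standing in for an error-related deflection, the plain logistic-regression classifier) is an illustrative assumption, not CSAIL’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: 200 EEG windows of 48 features each.
# "Error" windows get a small positive offset, mimicking the
# error-related potential (ErrP) deflection the brain produces.
n, d = 200, 48
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)          # 1 = human noticed a mistake
X[y == 1] += 0.5

# Minimal logistic-regression classifier trained by gradient descent.
w, b = np.zeros(d), 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(error)
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)

def is_error(window):
    """Return True if a single EEG window looks like an 'oops' signal."""
    return (window @ w + b) > 0.0

accuracy = np.mean((X @ w + b > 0) == y)
```

Once trained, `is_error` delivers a per-window yes/no verdict, which is what lets a robot get feedback without the human typing or speaking anything.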

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said Daniela Rus, director of CSAIL at MIT, in a press release. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven’t even invented yet.”

Cultivating Common Sense

Nestled among Seattle’s gleaming lights on a gloomy September day, a single nonprofit wants to change the world, one computer at a time. Its researchers hope to transform the way machines perceive the world: to have them not only see it, but understand what they’re seeing.

At the Allen Institute for Artificial Intelligence (AI2), researchers are working on just that. AI2, founded in 2014 by Microsoft co-founder Paul Allen, is the nation’s largest nonprofit AI research institute. Its campus juts into the northern arm of Lake Union, sharing the waterfront with warehouses and crowded marinas. Across the lake, dozens of cranes rise above the Seattle skyline — visual reminders of the city’s ongoing tech boom. At AI2, unshackled by profit-obsessed boardrooms, the mandate from its CEO Oren Etzioni is simple: Confront the grandest challenges in artificial intelligence research and serve the common good, profits be damned.

AI2’s office atmosphere matches its counterculture ethos. Etzioni’s hand-curated wall of quotes is just outside the table tennis room. Equations litter ceiling-to-floor whiteboards and random glass surfaces, like graffiti. Employees are encouraged to launch the company kayak for paddle breaks. Computer scientist Ali Farhadi can savor the Seattle

A Glimpse of a Microchip’s Delicate Architecture

Computer chips continue to shrink, yet we keep wringing more processing power out of them.

One of the problems that comes with taking our technology to the nanoscale, however, is that we can no longer see what’s going on with them. Computer chips, with their arrays of transistors laid out like cities, have components that measure as little as 14 nanometers across, or about 5,000 times smaller than a red blood cell. Checking out these wonders of engineering without using expensive and destructive imaging techniques is a challenge, to say the least. 

Viewing Technology With Technology

Researchers from the Paul Scherrer Institut in Switzerland say that they may have found a way to look into microchips without ever touching them. Using an imaging technique similar to Computed Tomography (CT) scans, they bombarded a chip with X-rays and used a computer to assemble a 3-D reconstruction of its delicate architecture. The process works by taking a series of 2-D images based on how the X-rays bounce off the structures, which are then combined into a realistic model.
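The reconstruction principle can be sketched in a few lines: simulate 1-D projections of a 2-D test pattern at many angles, then smear each projection back across the plane and sum. This is a generic, unfiltered back-projection toy—the test pattern and angles are invented, and the Swiss team’s actual technique is far more sophisticated:

```python
import numpy as np
from scipy.ndimage import rotate

# Invented 2-D "chip slice": bright structures on a dark background.
chip = np.zeros((64, 64))
chip[16:24, 8:56] = 1.0    # a horizontal "wire"
chip[32:56, 28:36] = 1.0   # a vertical "via" stack

# Forward step: at each angle, rotate the slice and sum along one axis,
# giving a 1-D projection -- the analogue of a single X-ray exposure.
angles = np.arange(0, 180, 2)
projections = [rotate(chip, a, reshape=False, order=1).sum(axis=0)
               for a in angles]

# Inverse step (unfiltered back-projection): smear each projection back
# across the plane at its angle and accumulate.
recon = np.zeros_like(chip)
for a, proj in zip(angles, projections):
    smear = np.tile(proj, (chip.shape[0], 1))
    recon += rotate(smear, -a, reshape=False, order=1)
recon /= len(angles)
```

Bright regions of `recon` line up with the original structures; a real scanner would also apply a ramp filter to the projections to sharpen edges before back-projecting.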

In a paper published Wednesday in Nature, the researchers say that they can resolve details as small as 14.6 nanometers, or about

Google Street View Cars Are Mapping Methane Leaks

Natural gas pipeline leaks that pose a safety hazard are quickly addressed. But what about leaks too small to pose a threat? These small leaks are often overlooked, yet they collectively release tons of methane, a greenhouse gas 84 times more potent than carbon dioxide.

However, thanks to researchers from Colorado State University, the University of Northern Colorado, and Conservation Science Partners—who’ve teamed up with the Environmental Defense Fund—a small fleet of Google Street View cars has been turned into mobile methane sensors to monitor leaks that have flown under the radar.

A Mobile Amalgamation For Methane Measurement

Lead researcher Joe von Fischer, a biologist by training, originally bought a laser spectrograph—an instrument that detects otherwise invisible gases by the infrared light they absorb—a decade ago to use on the Arctic tundra. That is, until he one day decided to put it in his car and drive around Fort Collins. He ended up finding a local methane leak with his makeshift mobile methane sensor.
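The basic analysis behind a drive-by survey—flagging readings that rise above the local background along a route—can be sketched with made-up numbers. The concentrations, window size and 0.5 ppm margin below are illustrative assumptions, not the team’s actual processing:

```python
import numpy as np

# Invented drive data: methane concentration (ppm) sampled once per
# second along a route, with ~2 ppm background plus sensor noise and
# one leak plume around sample 300.
rng = np.random.default_rng(1)
ppm = 2.0 + 0.05 * rng.normal(size=600)
ppm[295:310] += np.hanning(15) * 1.5          # the leak plume

# Simple leak flag: rolling-median baseline plus a fixed margin.
window = 61
pad = window // 2
padded = np.pad(ppm, pad, mode="edge")
baseline = np.array([np.median(padded[i:i + window])
                     for i in range(len(ppm))])
leak_mask = ppm > baseline + 0.5              # 0.5 ppm above background

leak_samples = np.flatnonzero(leak_mask)      # indices to pair with GPS
```

Pairing each flagged sample with the car’s GPS fix at that second is what turns a drive log into a leak map.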

“At the same time, Google was interested in putting some of these new methane analyzer analogs in their vehicles, and the Environmental Defense Fund was interested in methane because it’s so

Designing a Moral Machine

Back around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics. Is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her computer scientist spouse, Michael Anderson, figuring his algorithmic expertise might help.

At the time, he was reading about the making of the film 2001: A Space Odyssey, in which spaceship computer HAL 9000 tries to murder its human crewmates. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence was to be pursued responsibly, he reckoned that it would also need to solve moral dilemmas.

In the 16 years since, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and could soon make life-and-death decisions for self-driving cars. “Intelligent machines are absorbing the responsibilities we used to have, which is a terrible burden,” explains ethicist Patrick Lin of California Polytechnic State University. “For us to trust them to act on their own, it’s important that these machines are designed with ethical decision-making in mind.”

The Andersons have devoted their careers to that challenge, deploying the first ethically programmed

Any Ban on Killer Robots Faces a Tough Sell

Fears of a Terminator-style arms race have already prompted leading AI researchers and Silicon Valley leaders to call for a ban on killer robots. The United Nations plans to convene its first formal meeting of experts on lethal autonomous weapons later this summer. But a simulation based on the hypothetical first battlefield use of autonomous weapons showed the challenges of convincing major governments and their defense industries to sign any ban on killer robots.

In October 2016, the Chatham House think tank in London convened 25 experts to consider how the United States and Europe might react to a scenario in which China uses autonomous drone aircraft to strike a naval base in Vietnam during a territorial dispute. The point of the roleplaying exercise was not to predict which country would first deploy killer robots, but to explore the differences in opinion that might arise on the U.S. and European sides. Members of the expert group took on roles representing European countries, the United States and Israel, and certain institutions such as the defense industry, non-governmental organizations (NGOs), the European Union, the United Nations and NATO.

The results were not encouraging for anyone hoping to achieve a ban on killer robots.

Emerging Editing Technologies Obscure the Line Between Real and Fake

The image is modest, belying the historic import of the moment. A woman on a white sand beach gazes at a distant island as waves lap at her feet — the scene is titled simply “Jennifer in Paradise.”

This picture, snapped by an Industrial Light and Magic employee named John Knoll while on vacation in 1987, would become the first image to be scanned and digitally altered. When Photoshop was introduced by Adobe Systems three years later, the visual world would never be the same. Today, prepackaged tools allow nearly anyone to make a sunset pop, trim five pounds or just put celebrity faces on animals.

Though audiences have become more attuned to the little things that give away a digitally manipulated image — suspiciously curved lines, missing shadows and odd halos — we’re approaching a day when editing technology may become too sophisticated for human eyes to detect. What’s more, it’s not just images: audio and video editing software, some of it backed by artificial intelligence, is getting good enough to surreptitiously rewrite the mediums we rely on for accurate information.

The most crucial aspect of all of this is that it’s getting easier. Sure, Photoshop pros have been

Microsoft AI Notches the Highest ‘Ms. Pac-Man’ Score Possible

A Microsoft artificial intelligence has achieved the ultimate high score in Ms. Pac-Man, maxing out the counter at just under a million points.

With its randomly generated ghost movements, Ms. Pac-Man has proven a tough nut for AI to crack, as a program cannot simply learn the patterns that govern the ghosts’ movements. Maluuba, an artificial intelligence company recently acquired by the tech giant, succeeded in outwitting the hungry ghosts by breaking its gameplay algorithm into around 160 different parts. The researchers say it took less than 3,000 rounds of practice to achieve the feat, something no human has ever done. They didn’t even know what would happen when the score hit seven figures, Wired reports, and were somewhat disappointed to find that it simply resets to zero.

Arcade games like Pac-Man, along with many other types of games, have a nearly infinite number of possible scenarios. Ms. Pac-Man has somewhere around 10^77 feasible game states, far too many to map out the best move for each one. Absent a perfect strategy, the next best thing is to narrow down the moves that are least bad in any given situation. Maluuba’s AI does this by assigning each edible bar, fruit and
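The decomposition idea can be sketched as a set of tiny “reward agents” that each score the four possible moves from one narrow concern, with an aggregator picking the move whose combined score is highest. The hand-written distance heuristics below are stand-ins for the learned value functions Maluuba actually used, and all the positions are invented:

```python
MOVES = ["up", "down", "left", "right"]

def step(pos, move):
    """Grid position reached by taking a move from pos."""
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[move]
    return pos[0] + dx, pos[1] + dy

def pellet_agent(target, position):
    # Prefers moves that close the distance to this agent's one pellet.
    def score(move):
        x, y = step(position, move)
        return -(abs(target[0] - x) + abs(target[1] - y))
    return score

def ghost_agent(ghost, position):
    # Penalizes moves that approach this agent's one ghost.
    def score(move):
        x, y = step(position, move)
        return abs(ghost[0] - x) + abs(ghost[1] - y)
    return score

def choose_move(agents):
    # Aggregate every agent's score for every move; take the best total.
    totals = {m: sum(agent(m) for agent in agents) for m in MOVES}
    return max(totals, key=totals.get)

pos = (5, 5)
agents = [pellet_agent((9, 5), pos),    # pellet to the right
          pellet_agent((6, 9), pos),    # pellet down and right
          ghost_agent((4, 5), pos)]     # ghost to the left
best = choose_move(agents)
```

Here the combined scores steer the agent right, toward both pellets and away from the ghost; with ~160 such agents each tracking one object, no single agent needs a model of the whole game.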