Monthly Archives: May 2017

How to Train Your Robot with Brain Oops Signals

Baxter the robot can tell the difference between right and wrong actions without its human handlers ever consciously giving a command or even speaking a word. The robot’s learning success relies upon a system that interprets the human brain’s “oops” signals to let Baxter know if a mistake has been made.

The new twist on training robots comes from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University. Researchers have long known that the human brain generates certain error-related signals when it notices a mistake. They created machine-learning software that can recognize and classify those brain oops signals from individual human volunteers within 10 to 30 milliseconds, creating near-instant feedback for Baxter the robot as it sorted paint cans and wire spools into two different bins in front of the humans.
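At its core, the setup is a binary classification problem: decide from a fresh slice of EEG whether it contains an error-related signal. Here is a minimal sketch of the idea using synthetic data and a simple nearest-centroid rule; the team's actual classifier and features are far richer, so treat this purely as an illustration of the feedback loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG epochs: 64 samples each. "Error" trials carry an extra
# bump, a crude stand-in for the brain's error-related potential.
n_trials, n_samples = 200, 64
bump = np.exp(-0.5 * ((np.arange(n_samples) - 32) / 4.0) ** 2)

correct = rng.normal(0, 1.0, (n_trials, n_samples))
errors = rng.normal(0, 1.0, (n_trials, n_samples)) + 3.0 * bump

# Nearest-centroid classifier: label an epoch by whichever class
# mean it sits closer to.
mu_c, mu_e = correct.mean(axis=0), errors.mean(axis=0)

def is_error(epoch):
    """True if this EEG epoch looks like an 'oops' signal."""
    return np.linalg.norm(epoch - mu_e) < np.linalg.norm(epoch - mu_c)

# The robot would poll this on fresh EEG after each action and
# reverse course whenever is_error(...) comes back True.
```

A classifier this cheap is also fast: one distance comparison per epoch, which is the kind of budget that makes a 10-to-30-millisecond decision plausible.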

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said Daniela Rus, director of CSAIL at MIT, in a press release. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven’t even invented yet.”

Cultivating Common Sense

Nestled among Seattle’s gleaming lights on a gloomy September day, a single nonprofit wants to change the world, one computer at a time. Its researchers hope to transform the way machines perceive the world: to have them not only see it, but understand what they’re seeing.

At the Allen Institute for Artificial Intelligence (AI2), researchers are working on just that. AI2, founded in 2014 by Microsoft co-founder Paul Allen, is the nation’s largest nonprofit AI research institute. Its campus juts into the northern arm of Lake Union, sharing the waterfront with warehouses and crowded marinas. Across the lake, dozens of cranes rise above the Seattle skyline — visual reminders of the city’s ongoing tech boom. At AI2, unshackled by profit-obsessed boardrooms, the mandate from its CEO Oren Etzioni is simple: Confront the grandest challenges in artificial intelligence research and serve the common good, profits be damned.

AI2’s office atmosphere matches its counterculture ethos. Etzioni’s hand-curated wall of quotes is just outside the table tennis room. Equations litter ceiling-to-floor whiteboards and random glass surfaces, like graffiti. Employees are encouraged to launch the company kayak for paddle breaks. Computer scientist Ali Farhadi can savor the Seattle skyline from the windows of his democratically chosen office; researchers vote on the locations of their workspaces. It’s where he and I meet to explore the limits of computer vision.

At one point, he sets a dry-erase marker on the edge of his desk and asks, “What will happen if I roll this marker over the edge?”

“It will fall on the floor,” I reply, wondering if Farhadi could use one of those kayak breaks.

Narrow AI systems are like savants. They’re fantastic at single, well-defined tasks: a Roomba vacuuming the floor, for example, or a digital chess master. But a computer that can recognize images of cats can’t play chess. Humans can do both; we possess general intelligence. The AI2 team wants to pull these computer savants away from their lonely tasks and plant seeds of common sense. “We still have a long way to go,” Etzioni tells me.

Etzioni’s 20-year vision is to build an AI system that would serve as a scientist’s apprentice. It would read and understand scientific literature, connecting the dots between studies and suggesting hypotheses that could lead to significant breakthroughs. When I ask Etzioni if IBM’s Watson is already doing this, I feel I’ve struck a nerve. “They’ve made some very strong claims, but I’m waiting to see the data,” he says.

But there’s also a darker side to this noble endeavor. If we grow to depend on these emerging technologies, certain skills could become obsolete. I can’t help but wonder: If smarter AIs gobble up more human-driven tasks, how can we keep up with them?

A Glimpse of a Microchip’s Delicate Architecture

Computer chips continue to shrink, yet we still wring more processing power out of them.

One of the problems that comes with taking our technology to the nanoscale, however, is that we can no longer see what’s going on inside it. Computer chips, with their arrays of transistors laid out like cities, have components that measure as little as 14 nanometers across, or about 500 times smaller than a red blood cell. Checking out these wonders of engineering without resorting to expensive and destructive imaging techniques is a challenge, to say the least.

Viewing Technology With Technology

Researchers from the Paul Scherrer Institut in Switzerland say that they may have found a way to look into microchips without ever touching them. Using an imaging technique similar to Computed Tomography (CT) scans, they bombarded a chip with X-rays and used a computer to assemble a 3-D reconstruction of its delicate architecture. The process works by taking a series of 2-D images based on how the X-rays scatter off the structures, which are then combined into a realistic model.
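The combining step is classic tomographic back-projection: each projection is smeared back across the image plane at the angle it was taken, and the smears pile up where real structure sits. Below is a toy 2-D sketch with a synthetic "chip slice"; the phantom, angles, and nearest-neighbor rotation are all invented for illustration, and the actual reconstruction in the paper is considerably more sophisticated.

```python
import numpy as np

def rotate(img, theta):
    """Rotate a square image about its center (nearest-neighbor)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # Inverse-map each output pixel back into the input image.
    x = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
    y = -np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi, yi = np.rint(x).astype(int), np.rint(y).astype(int)
    ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    out = np.zeros_like(img)
    out[ok] = img[yi[ok], xi[ok]]
    return out

# Toy "chip slice": one bright square structure in an empty field.
n = 64
phantom = np.zeros((n, n))
phantom[24:40, 24:40] = 1.0

# Forward step: 1-D projections (column sums) at many angles,
# like X-ray shadowgrams taken as the sample rotates.
angles = np.linspace(0, np.pi, 60, endpoint=False)
sinogram = [rotate(phantom, t).sum(axis=0) for t in angles]

# Back-projection: smear each projection back across the image
# at its angle and average. (Real reconstructions also filter
# the projections first to undo the resulting blur.)
recon = np.zeros((n, n))
for t, p in zip(angles, sinogram):
    recon += rotate(np.tile(p, (n, 1)), -t)
recon /= len(angles)

# The reconstruction is brightest where the phantom's square was.
```

Stacking reconstructed slices like this one on top of each other is what turns a pile of 2-D shadowgrams into a full 3-D model.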

In a paper published Wednesday in Nature, the researchers say they can resolve details as small as 14.6 nanometers, about the size of the smallest components in today’s commercial chips. They tested their technique first on a chip with a familiar layout, and then on one whose layout they didn’t know; both times they successfully reconstructed a model of the chip’s inner workings with enough detail to see how it functioned, including the transistors and interconnects. The images show the intricate patterns of interconnected transistors on the silicon surface; some chips today contain upwards of 5 billion transistors.

Google Street View Cars Are Mapping Methane Leaks

Natural gas pipeline leaks that pose a safety hazard are quickly addressed. But what about leaks too small to pose a threat? These small leaks are often overlooked, and collectively they release tons of methane, a greenhouse gas 84 times more potent than carbon dioxide over a 20-year period.
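That potency figure makes it easy to put a small leak into familiar carbon-dioxide terms. A trivial sketch, using the 84x factor cited above and a made-up leak size:

```python
# Convert a methane leak into CO2-equivalent emissions using the
# 84x potency factor (a roughly 20-year comparison).
GWP_CH4 = 84

def co2_equivalent(methane_tons):
    """Tons of CO2 with the same warming impact as the methane."""
    return methane_tons * GWP_CH4

# A leak releasing 2 tons of methane warms like 168 tons of CO2.
print(co2_equivalent(2))  # -> 168
```

It is exactly this multiplier that makes "too small to be a safety threat" very different from "too small to matter."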

However, thanks to researchers from Colorado State University, the University of Northern Colorado, and Conservation Science Partners, who’ve teamed up with the Environmental Defense Fund, a small fleet of Google Street View cars is being turned into mobile methane sensors to monitor leaks that have flown under the radar.

A Mobile Amalgamation For Methane Measurement

Lead researcher Joe von Fischer, a biologist by training, bought a laser spectrograph, an instrument that detects otherwise invisible gases by the infrared light they absorb, a decade ago for use on the Arctic tundra. Then one day he decided to put it in his car and drive around Fort Collins, and he ended up finding a local methane leak with his mobile amalgam of a methane sensor.
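At its simplest, this kind of drive-by leak hunting amounts to flagging readings that spike above the ambient background, which for methane sits near 2 parts per million. A minimal sketch with made-up concentration numbers; the real analysis pipeline behind the published maps is, of course, more involved.

```python
import numpy as np

# Simulated drive-by methane readings in parts per million.
# Background is ~2 ppm; a leak shows up as a local spike.
readings = np.array([2.0, 2.1, 2.0, 1.9, 2.1, 6.5, 7.2, 2.2, 2.0, 2.1])

background = np.median(readings)   # robust estimate of ambient level
threshold = background + 1.0       # flag anything 1 ppm above it

# Positions along the drive where a leak is likely.
leak_indices = np.flatnonzero(readings > threshold)
```

Pairing each flagged reading with the car's GPS fix at that moment is what turns a concentration trace into a citywide leak map.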

“At the same time, Google was interested in putting some of these new methane analyzer analogs in their vehicles, and the Environmental Defense Fund was interested in methane because it’s so poorly quantified,” says von Fischer. Naturally, he was put in charge of the project.

Since then, their collaboration has seen sensor-equipped Street View cars map methane leaks in pipelines beneath the roads of 14 cities throughout America, releasing the data publicly as citywide maps online. The initiative has even helped spur one New Jersey utility provider to commission a $905 million upgrade to gas lines.