Monthly Archives: April 2017

Designing a Moral Machine

Back around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics. Is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her computer scientist spouse, Michael Anderson, figuring his algorithmic expertise might help.

At the time, he was reading about the making of the film 2001: A Space Odyssey, in which spaceship computer HAL 9000 tries to murder its human crewmates. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence was to be pursued responsibly, he reckoned that it would also need to solve moral dilemmas.

In the 16 years since, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and could soon make life-and-death decisions for self-driving cars. “Intelligent machines are absorbing the responsibilities we used to have, which is a terrible burden,” explains ethicist Patrick Lin of California Polytechnic State University. “For us to trust them to act on their own, it’s important that these machines are designed with ethical decision-making in mind.”

The Andersons have devoted their careers to that challenge, deploying the first ethically programmed robot in 2010. Admittedly, their robot is considerably less autonomous than HAL 9000. The toddler-size humanoid machine was conceived with just one task in mind: to ensure that homebound elders take their medications. According to Susan, this responsibility is ethically fraught, as the robot must balance conflicting duties, weighing the patient’s health against respect for personal autonomy. To teach it, Michael created machine-learning algorithms so ethicists can plug in examples of ethically appropriate behavior. The robot’s computer can then derive a general principle that guides its activity in real life. Now they’ve taken another step forward.
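The article describes the approach only loosely: ethicists supply examples of ethically appropriate behavior, and the system derives a general principle from them. A minimal sketch of that idea, using invented case data and a simple perceptron (the Andersons' actual system is more sophisticated; every name and number here is hypothetical):

```python
# Hypothetical sketch: each case scores two competing duties,
# (benefit_to_patient, respect_for_autonomy), and is labeled with
# the ethically preferred action: insist on the medication (1) or defer (0).
cases = [
    ((2, -1), 1),   # large health benefit, mild autonomy cost -> insist
    ((1, -2), 0),   # small benefit, strong autonomy objection -> defer
    ((2, -2), 1),   # serious harm if skipped -> insist anyway
    ((0, -1), 0),   # negligible benefit -> defer
]

# Learn a weight for each duty with a basic perceptron update rule.
w = [0.0, 0.0]
b = 0.0
for _ in range(100):
    for (x, y) in cases:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w = [w[0] + err * x[0], w[1] + err * x[1]]
        b += err

def decide(duties):
    """Apply the learned principle to a new situation."""
    return 1 if w[0] * duties[0] + w[1] * duties[1] + b > 0 else 0
```

The learned weights act as the "general principle": a single rule for trading off patient benefit against autonomy that generalizes beyond the training cases.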

“The study of ethics goes back to Plato and Aristotle, and there’s a lot of wisdom there,” Susan observes. To tap into that reserve, the Andersons built an interface for ethicists to train AIs through a sequence of prompts, like a philosophy professor having a dialogue with her students.

The Andersons are no longer alone, nor is their philosophical approach. Recently, Georgia Institute of Technology computer scientist Mark Riedl has taken a radically different philosophical tack, teaching AIs to learn human morals by reading stories. From his perspective, the global corpus of literature has far more to say about ethics than the philosophical canon alone, and advanced AIs can tap into that wisdom. For the past couple of years, he’s been developing such a system, which he calls Quixote — named after the novel by Cervantes.

Riedl sees a deep precedent for his approach. Children learn from stories, which serve as “proxy experiences,” helping to teach them how to behave appropriately. Given that AIs don’t have the luxury of childhood, he believes stories could be used to “quickly bootstrap a robot to a point where we feel comfortable about it understanding our social conventions.”

Any Ban on Killer Robots Faces a Tough Sell

Fears of a Terminator-style arms race have already prompted leading AI researchers and Silicon Valley leaders to call for a ban on killer robots. The United Nations plans to convene its first formal meeting of experts on lethal autonomous weapons later this summer. But a simulation based on the hypothetical first battlefield use of autonomous weapons showed the challenges of convincing major governments and their defense industries to sign any ban on killer robots.

In October 2016, the Chatham House think tank in London convened 25 experts to consider how the United States and Europe might react to a scenario in which China uses autonomous drone aircraft to strike a naval base in Vietnam during a territorial dispute. The point of the roleplaying exercise was not to predict which country would first deploy killer robots, but to explore the differences of opinion that might arise on the U.S. and European side. Members of the expert group took on roles representing European countries, the United States and Israel, as well as certain institutions such as the defense industry, non-governmental organizations (NGOs), the European Union, the United Nations and NATO.

The results were not encouraging for anyone hoping to achieve a ban on killer robots.

Emerging Editing Technologies Obscure the Line Between Real and Fake

The image is modest, belying the historic import of the moment. A woman on a white sand beach gazes at a distant island as waves lap at her feet — the scene is titled simply “Jennifer in Paradise.”

This picture, snapped by an Industrial Light and Magic employee named John Knoll while on vacation in 1987, would become the first image to be scanned and digitally altered. When Photoshop was introduced by Adobe Systems three years later, the visual world would never be the same. Today, prepackaged tools allow nearly anyone to make a sunset pop, trim five pounds or just put celebrity faces on animals.

Though audiences have become more attuned to the little things that give away a digitally manipulated image — suspiciously curved lines, missing shadows and odd halos — we’re approaching a day when editing technology may become too sophisticated for human eyes to detect. What’s more, it’s not just images: audio and video editing software, some of it backed by artificial intelligence, is getting good enough to surreptitiously rewrite the media we rely on for accurate information.

Most crucially, all of this is getting easier. Sure, Photoshop pros have been able to create convincing fakes for years, and special effects studios can bring lightsabers and transformers to life, but computer algorithms are beginning to shoulder more and more of the load, drastically reducing the skills needed to pull such deceptions off.

In a world where smartphone videos act as a bulwark against police violence and relay stark footage of chemical weapons strikes, the implications of simple, believable image and video manipulation technologies have become more serious. It’s not just pictures anymore — technology is beginning to allow us to edit the world.

Microsoft AI Notches the Highest ‘Ms. Pac-Man’ Score Possible

A Microsoft artificial intelligence has achieved the ultimate high score in Ms. Pac-Man, maxing out the counter at just under a million points.

With its randomly generated ghost movements, Ms. Pac-Man has proven a tough nut for AI to crack: unlike the original Pac-Man, there are no fixed patterns governing the ghosts for an algorithm to memorize. Maluuba, an artificial intelligence company recently acquired by the tech giant, outwitted the hungry ghosts by breaking the game into around 160 different components. The researchers say the system needed fewer than 3,000 rounds of practice to achieve the feat, something no human has ever done. They didn’t even know what would happen when the score hit seven figures, Wired reports, and were somewhat disappointed to find that it simply resets to zero.

Arcade games like Ms. Pac-Man, along with many other types of games, have a nearly infinite number of possible scenarios. Ms. Pac-Man alone has somewhere around 10^77 feasible game states, far too many to map out the best move for each one. Absent a perfect strategy, the next best thing is to find the move that is least bad in any given situation. Maluuba’s AI does this by assigning each pellet, fruit and ghost its own function, and every one gets a voice in how Ms. Pac-Man behaves. A central aggregator weighs each component’s recommendation and decides what to do. The researchers go into further detail in a paper posted to their website.
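The weighting-and-aggregation scheme described above can be sketched in a few lines. This is a toy illustration, not Maluuba’s actual system (which learns each component’s values with reinforcement learning); the agent names and scores here are invented:

```python
# Each game element (pellet, ghost, fruit) is modeled as an "agent" that
# scores every candidate move; a central aggregator picks the move with
# the highest weighted sum of scores.
def aggregate_action(agents, weights, actions):
    """Return the action with the highest weighted total score."""
    def total(action):
        return sum(w * agent(action) for agent, w in zip(agents, weights))
    return max(actions, key=total)

ACTIONS = ["up", "down", "left", "right"]

# Toy agents: a pellet rewards moving toward it, a ghost penalizes it.
pellet_agent = lambda a: {"up": 1.0, "down": 0.2, "left": 0.5, "right": 0.1}[a]
ghost_agent  = lambda a: {"up": -2.0, "down": 0.0, "left": 0.3, "right": 0.1}[a]

best = aggregate_action([pellet_agent, ghost_agent], [1.0, 1.0], ACTIONS)
# "up" scores highest for the pellet but is vetoed by the ghost's penalty,
# so the aggregator settles on a safer move.
```

Because no single agent dictates the outcome, a dangerous move favored by one component can be outvoted by the rest — the "least bad in any situation" behavior the article describes.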

This gave it an edge in a game where the movements of life-ending ghosts and reward-giving fruits are partially random, meaning the algorithm must change its mind on the fly. The challenge is somewhat similar to that faced by poker-playing AIs, which must deal with both randomness and human irrationality when making their decisions.

While notable, Maluuba’s achievement is for the moment merely a flashy headline. The company says its decentralized approach to AI could eventually see applications in sales and finance, where algorithms must similarly juggle a host of random variables at once. In addition, having decisions made by a collection of actors, rather than just one, makes it easier to trace the algorithm’s reasoning. That could alleviate pressing ethical concerns about a future where life-altering decisions fall to AI: knowing how and why a computer reached a particular decision lets us better judge whether its thinking lines up with our own morals and values.