Ethics

Teaching robots right from wrong

Robots can perform a lot of tasks — some almost as well as humans, if not better. But one area where they stumble: Choosing among a series of options.

You’re rushing across the school parking lot to get to your first class on time when you notice a friend is in trouble. She’s texting and listening to music on her headphones. Unaware, she’s also heading straight for a gaping hole in the sidewalk. What do you do?

The answer seems pretty simple: Run over and try to stop her before she hurts herself. Who cares if you might be a little late for class?

To land on that answer, you weigh the effects of each option: being a little late for class against letting your friend get hurt. It’s an easy decision. You don’t even have to think hard about it. You make such choices all the time. But what about robots? Can they make such choices? Should a robot stop your friend from falling into the hole? Could it?

Not today’s robots. They simply aren’t smart enough to even realize when someone is in danger. Soon, they might be. Yet without some rules to follow, a robot wouldn’t know the best choice to make.

So robot developers are turning to a branch of philosophy called ethics. It’s the study of the difference between right and wrong. And with it, they are starting to develop robots that can make basic ethical decisions.

One lab’s robot is mastering the hole scenario. Another can decide not to follow a human’s instructions if they seem unsafe. A third robot is learning how to handle tricky situations in a nursing home.

Such research should help robots of the future figure out the best action to take when there are competing choices. This ethical behavior may just become part of their programming. That will allow them to interact with people in safe, predictable ways. In time, robots may actually begin to understand the difference between right and wrong.

The three laws

The most famous set of rules for robots comes not from research but from a science fiction story by Isaac Asimov. “Runaround,” published in 1942, features two men and Robot SPD-13, nicknamed “Speedy.” They’re sent to the planet Mercury in the year 2015. Speedy is programmed with three basic rules:

1) A robot can’t hurt a person or, through inaction, allow a person to get hurt.

2) A robot must obey people, as long as this doesn’t break the first law.

3) A robot must protect itself, as long as this doesn’t break the first two laws.

In later robot stories, Asimov added a “zeroth” law: A robot can’t harm humanity or, through inaction, allow harm to humanity.
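One way to picture Asimov’s laws is as a fixed priority check inside a robot’s software: Law 1 always outranks Law 2, which outranks Law 3. The short Python sketch below is only an illustration of that ordering; the actions, the true-or-false harm flags and the pick_action function are invented for this example and don’t come from any real robot.

# Toy sketch: rank candidate actions by how badly they break Asimov's laws.
def law_violations(action):
    # Returns (breaks Law 1, breaks Law 2, breaks Law 3). Law 1 counts most.
    hurts_human = action["harms_human"] or action["allows_human_harm"]
    disobeys = not action["obeys_order"]
    endangers_self = action["harms_robot"]
    return (hurts_human, disobeys, endangers_self)

def pick_action(candidate_actions):
    # min() compares the tuples position by position, so breaking Law 1
    # always costs more than breaking Law 2, and Law 2 more than Law 3.
    return min(candidate_actions, key=law_violations)

safe_wait = {"name": "wait", "harms_human": False, "allows_human_harm": True,
             "obeys_order": True, "harms_robot": False}
risky_rescue = {"name": "rescue", "harms_human": False, "allows_human_harm": False,
                "obeys_order": True, "harms_robot": True}
print(pick_action([safe_wait, risky_rescue])["name"])   # -> rescue (Law 1 outranks Law 3)

Of course, real situations are rarely this clean. In the story, Speedy gets stuck because the pull of a weakly given order and the pull of self-preservation balance out, something a crude true-or-false check like this can’t capture.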

Asimov’s rules sound good. But the story shows that such simple rules may not be enough.

The men gave Speedy an order to fetch a material needed to repair their station on Mercury. But along the way, Speedy ran into danger. Rules 2 and 3 now contradicted each other. The robot got stuck in an endless loop of indecision. And, it turns out, there were other problems. These rules would certainly compel a robot to rescue your friend. But they wouldn’t help a robot decide what to do if two people were about to fall and it could save only one. And because the laws mention only people, the robot wouldn’t try to rescue a kitten.

It’s very difficult to write a set of rules that will apply in all possible situations. For this reason, some scientists instead build robots with the ability to learn ethical behavior.

A robot watches examples of people doing the right thing in different situations. Based on the examples, it then develops its own rules. The robot might, however, learn behaviors its creators do not like.
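At a very small scale, that kind of learning can be sketched with an off-the-shelf classifier: show the program labeled examples of “the right thing to do” and let it work out a rule on its own. The snippet below uses scikit-learn’s decision tree purely as a stand-in for whatever method a lab might actually use, and the situations and labels are made up.

# Toy sketch: learn a rule of behavior from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Each situation: [person in danger?, given order is safe?]  (made-up data)
situations = [[1, 1], [1, 0], [0, 1], [0, 0]]
right_action = ["help", "help", "obey", "wait"]

model = DecisionTreeClassifier().fit(situations, right_action)
print(model.predict([[1, 0]]))   # -> ['help']

The catch mentioned above shows up even here: the rule the program ends up with depends entirely on the examples it was shown, so unwanted behavior in the examples becomes unwanted behavior in the robot.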

No matter where a robot’s ethical principles come from, it must have the ability to explain its actions. Imagine that a robot’s task is to walk a dog. It lets go of the leash in order to save a human in danger. When the robot returns home later without the dog, it needs to be able to explain what happened. (Its ethics also should prompt it to go look for the lost dog!)

For many scientists working on such issues, their robot of choice is one named Nao. This humanoid robot is about the size of a doll. It can be programmed to walk, talk and dance. And in this research, Nao can even learn to do the right thing.

An ethical zombie

Alan Winfield shows off some of the robots he’s programmed to make basic ethical decisions.

Alan Winfield used to believe that building an ethical robot was impossible. This roboticist — an engineer who builds robots — works at the University of the West of England in Bristol. A robot would need a human-like ability to think and reason in order to make ethical decisions, he thought. But over the past few years, Winfield has changed his mind. Scientists should be able to create a robot that can follow ethical rules without thinking about them, he now concludes.

Its programming would compel it to do the right thing without the robot ever making a “choice.” In a sense, he says, it would be an “ethical zombie.”

In some cases, the ethical choice is the easy part of a robot’s programming. The hard part is getting the robot to notice a problem or danger.

Remember your texting friend who was about to fall in a hole? Deciding to save her requires more than just a sense of right and wrong, Winfield says. “You also have the ability to predict what might happen to that friend.” You know your friend is likely to keep on walking in the same direction. You also know that falling into a hole would hurt. Finally, you can predict whether you have enough time to run over and stop her.

This all seems completely obvious. In fact, it is pretty amazing behavior. You’re predicting the future and then taking action to stop a bad outcome.
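In software, that look-ahead can be sketched as a tiny simulation loop: for every move the robot could make, predict what the world would look like afterward, score how much harm that future holds, and keep the least harmful move. The Python below is a toy version of that idea, not Winfield’s actual code; the one-dimensional world, the positions and the harm scores are all invented for the example.

# Toy look-ahead: simulate each possible move, keep the least harmful one.
def predict_outcome(world, action):
    # Assume the distracted human keeps walking one step in the same direction.
    human_pos = world["human_pos"] + world["human_step"]
    blocked = action["robot_target"] == human_pos   # does the robot block the path?
    return {"human_falls": human_pos == world["hole_pos"] and not blocked}

def harm(outcome):
    # Score the predicted future: a person falling in the hole is the worst case.
    return 10 if outcome["human_falls"] else 0

def choose_action(world, actions):
    return min(actions, key=lambda a: harm(predict_outcome(world, a)))

world = {"human_pos": 2, "human_step": 1, "hole_pos": 3}
actions = [
    {"name": "keep walking to class", "robot_target": 0},
    {"name": "step in front of the friend", "robot_target": 3},
]
print(choose_action(world, actions)["name"])   # -> step in front of the friend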

Winfield and his team wrote a program to give the Nao robot this predict-the-future super power. They named their new Nao A-Robot, after Asimov. (“Runaround” appeared in a 1950 book of short stories called I, Robot). A-Robot can recognize other robots in its environment. It can predict what might happen to them and to itself in the near future. Finally, it automatically takes action to help in a way that will cause the least harm to itself and…