Env


Machine ethics

5 March 2009


This account of an excessively affectionate robot is making the rounds. I’ll bet you a dollar it’s made up, but what’s interesting is that people think it’s interesting. It reminds me of an episode of This Way Up last week in which a well-meaning journalist interviewed an expert on military robots with questions assuming that (1) machine vision is ready for prime time and (2) machine ethics is pretty much intractable. This is backwards.

Moral strictness and purity of emotion are easy. It’s things like vision that are hard. The military systems that have been supposed to recognize aircraft since at least Gulf War I are a fine example: it’s easy to make them refuse to fire on something they think is friendly. But their record at telling the difference between friend and enemy is terrible, with failure rates (at least in the early days) in the teen percentages. That’s a problem with shapes, not morals. At least for the machines.

Commentators worried about military robots seem to delight in coming up with confounding machine ethics puzzles. If there’s a hostage taker with two hostages, and one of the hostages has their own hostage, and the robot only has one bullet, but it has reason to believe its commander is evil, and it’s the local sabbath, and there’s a runaway bus …. But these are not materially harder than the same puzzles minus the robots. If anything, a robot with the same situational awareness as a human (that’s the hard part) can act far more ethically. Whether they will depends on the competence and ethics of their human masters, which is a problem we’re already supposed to be thinking about with, say, surveillance and nuclear weapons. I’m not scared of robots that think like people, I’m scared of people who make robots that think like half-people.

What machines like the presumably fictional hug-bot should be teaching us is that the dictionary definitions of emotions are simple. Cheap. They are barely a step on the way to personhood. For a machine, unconditional love is easier than conditional love. A robot that loves is even less news than a dog that loves. What would be news is a robot that can love but be a little selfish, love but unjustifiably doubt its love, love in secret, imagine loving, learn from love, recognize love, have strange conflicts between kinds of love, and so on, and not just with love but with loathing and envy and sympathy and ambition and boredom, overlapping and interfering but making a person. These kinds of things, quite the opposite of love in isolation, are what make people people.

I think it might be hard for us to see the potential ethical reliability of robots because they don’t make it look hard. Humans are exceedingly complex, and we tend to mistrust the ones who don’t admit it. If I meet someone who doesn’t show evidence of having had to work at their ethics, of managing and synthesizing conflicting hopes, I take it as a lack of character – that even if they’re very good in daily life, in edge cases, in emergency or in private, they might be brittle. A robot, by showing no doubt, looks like a human who’s faking ethics. Actually, it’s in the edge cases, the unforeseen and unseen corners of circumstance, where machines can be inhumanly steady.

So I say that machine ethics, while not trivial, is tractable. It’s machine situational awareness, the ability to figure out what’s happening and what’s going to happen, that will hold them back (for a while) from making the kinds of decisions that we sometimes call ethics when we mean personhood.