I recently saw a discussion of designing robot security guards. The writer suggested that they would have to get angry, or at least simulate anger, in order to do the job properly. His interest, I think, was in robot emotions: if robots could be angry, maybe they could also be sad, or happy, or loving. I'll grant the possibility, but simulating an emotion is not the same as feeling it: anyone who has trained a puppy, or acted in a play, can testify to that.

Angry robots, and robot security guards, led me to think of Asimov's Three Laws of Robotics. In particular, the First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." A robot security guard that followed that rule might be able to say "stop or I'll shoot," but it wouldn't be able to shoot. Even tranquilizer darts aren't risk-free: the person shot might be allergic, or might fall down and get a concussion.

Take one positronic robot. Give it an unlimited travel budget, a passport, and some spending money. Give it, also, today's newspaper. What does it do?

The news is mostly a list of problems: crime, natural disasters, war. Human suffering, much of which could be prevented or, at least, alleviated. Not easily, perhaps, but do positronic robots have circuits to decide that something is too much trouble? "Or, through inaction, allow a human being to come to harm." A robot crime-fighter is an interesting image, but in real life it's hard to do superhero-style crime-fighting without hurting anyone. (That goes double for the sort of police work that involves bullets.) Neighborhood watch, sure, but would that be enough? Remember, robots don't need to sleep. What does one do during the day, when there are plenty of other people out on the streets?

Hop on a plane to the nearest war zone or disaster area? An unarmed robot might not do any good in Kosovo, but would it know that? Even if it knew, could it take that into account, if only to select some other mission of mercy? Or would soldiers wind up telling robots things like "I'm going to fire this gun; if you stand in the way, the bullet will ricochet and injure me," trying to convince the machines that they can't take the guns out of human hands without causing harm?

A robot that follows the Three Laws could be a useful domestic, clerical employee, or factory worker--or even a schoolteacher or accountant--only if something akin to Utopia has been achieved, or if the robot is kept in near-total ignorance of the outside world. It's certainly not going to vacuum your floors when it could be out there saving starving children, rescuing trapped coal miners, or aiding refugees. Day to day, though, I think more people want robot housecleaners than robot firefighters or rescue workers: we want to offload the drudgery, not the heroism.

On the other hand, positronic robots might make fine medics: caring for heart attack patients or the victims of gunshot wounds would satisfy the imperative to keep humans from coming to harm, and they wouldn't tire at the end of a long shift and start making dangerous mistakes. You couldn't get one to stop and ask patients for their insurance information, though.

--VR


Copyright 1999 Vicki Rosenzweig.

Letters of comment are welcome, and may be printed unless you say otherwise; please send them to vr@interport.net.
