C.G. Masi

March 11, 2015

Asimov's laws in perspective

 

In the early 1940s, when science-fiction master Isaac Asimov laid out his three (and later four) "Laws of Robotics," robots themselves were entirely fictitious. At that time, automation technology had barely progressed beyond the Jacquard loom. Half a century later, things that can legitimately be characterized as "robots" are all around us.

 

Periodically I get treated to the spectacle of my wife screaming into her cellphone at some slug-brained automated system with whom she's trying to negotiate an online transfer of funds to pay some bill. She screams because the obstinacy of even the politest automated clerk inspires anger and frustration.

 

Notice the words I use to describe what's going on: "slug-brained," "negotiate," "obstinate," "polite," "angry" and "frustrated." These are all words that we use to describe behavior. Robots now exist, and, moreover, we've found that they "behave."

 

Sometimes they behave well, and sometimes they behave badly. Just like people.

 

Since robots can now be said to "behave," it's appropriate to take a critical look at Asimov's rules defining appropriate robot behavior.

 

As laid out in his 1942 short story "Runaround," Asimov's three rules, which he called "laws," are:

 

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.


2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.


3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Decades later, Asimov added a fourth rule, which he numbered the Zeroth Law and ranked above the other three: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

 

Asimov made a lot of money over his career imagining situations in which adherence to the three laws led robots into ethical dilemmas. Because these fictional robots were not programmed with a way to resolve such dilemmas, their behavior would break down in some way.

 

He imagined that this breakdown would lead to a type of behavior twenty-first-century humans describe as "going postal."

 

Okay, so what are we, from the vantage point of people surrounded by real robots running around exhibiting real robot behavior, to think of Asimov's four laws?

 

Well, first of all, we should recognize that the laws aren't laws at all. They're rules. Rules, unlike laws, are made to be broken.

 

Then, we should notice that they aren't rules for robots; they're rules for robot programmers.
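
To make that concrete, here is a minimal sketch, in Python, of how a programmer might encode the rules as a precedence-ordered check before a robot commits to an action. Everything here is hypothetical and invented for illustration; no real robot controller reduces ethics to three booleans, and none of these names come from an actual robot API.

# Hypothetical sketch: Asimov's rules as a programmer's precedence check.
# All names here are invented for illustration, not taken from a real API.

def may_execute(action_harms_human: bool,
                ordered_by_human: bool,
                action_endangers_robot: bool) -> bool:
    """Decide whether a proposed action is permissible under the three rules.

    The rules are checked in priority order: Rule 1 overrides Rule 2,
    and Rule 2 overrides Rule 3.
    """
    if action_harms_human:        # Rule 1: never injure a human being
        return False
    if ordered_by_human:          # Rule 2: obey orders that Rule 1 permits
        return True
    if action_endangers_robot:    # Rule 3: self-preservation, lowest priority
        return False
    return True

# Example: a human orders the robot into a situation that risks only the
# robot itself. Rule 2 outranks Rule 3, so the order stands.
assert may_execute(action_harms_human=False,
                   ordered_by_human=True,
                   action_endangers_robot=True) is True

The point of the sketch is the ordering: it's the programmer, not the robot, who decides which rule wins when they collide.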

 

Finally, we should recognize that the four rules do a pretty good job of defining acceptable robot behavior. We'd feel about an engineer who programmed an automated system to violate any of Asimov's rules roughly the way we'd feel about someone flying an airplane into a skyscraper. We'd say they were pretty impolite! We'd probably get mad enough about it to hunt them down and lock them away in an old Navy base in Cuba.

 
