Robots

I want to explain the future of robots with two examples: "Data" from "Star Trek" and Asimov's "I, Robot":

Data has 100,000 terabytes of memory (equivalent to 100,000,000 one-gigabyte hard drives). When on trial, he stated that he had a storage capacity of 800 quadrillion bits (100 quadrillion bytes). Data processes 60 trillion computations per second. To compare Data's 100,000 terabytes of storage capacity to something real-world, one estimate puts the maximum storage capacity of the human brain at approximately 3 teraBITS, which would mean that Data's brain could hold the contents of over 260,000 human brains.
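
These figures are mutually consistent. Here is a quick arithmetic check as a Python sketch; the 3-teraBIT human-brain figure is simply the estimate quoted above, not an established fact:

    # Sanity-check Data's stated capacity (800 quadrillion bits).
    bits = 800e15                   # 8.0e17 bits
    bytes_total = bits / 8          # 1.0e17 = 100 quadrillion bytes
    terabytes = bytes_total / 1e12  # 100,000 terabytes
    gb_drives = bytes_total / 1e9   # 100,000,000 one-GB hard drives

    # Compare with the ~3-terabit human-brain estimate quoted above.
    brain_bits = 3e12
    brains = bits / brain_bits      # ~266,667 -- "over 260,000" brains

    print(f"{terabytes:,.0f} TB; {gb_drives:,.0f} 1-GB drives; {brains:,.0f} brains")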

The television program Star Trek: The Next Generation included an android character, Data, who we are specifically told (in the episode 'Datalore') was created in an attempt to bring 'Asimov's dream of a positronic robot' to life. Unfortunately, the producers of the show locked onto the 'positronic' aspect as if that were the key quality of Asimov's robots. Asimov's view was exactly the opposite -- his robots are 'positronic' because positrons had just been discovered when he started writing robot stories and the word had a nice science-fictiony ring to it. The use of positrons was just an engineering detail and relatively unimportant to him.

Asimov's key insight was that, inasmuch as we engineer our tools to be safe to use, we would do the same with robots once we start making them -- and that the main safeguards for an intelligent being are its ethics. We would, therefore, build ethics into our robots to keep them from going off on uncontrollable killing sprees.

In some sense, the specific Three (Four) Laws are themselves an engineering detail, the robotic equivalent of the Ten Commandments -- they are a specific ethical system, but not the only one possible. In Asimov's universe, they are the basis for robotic ethics and so absolutely fundamental to robotic design that it is virtually impossible to build a robot without them.

Asimov tended not to let other people use his specific Laws of Robotics, but his essential insight -- that robots will have in-built ethical systems -- is freely used.

In particular, Data is an 'Asimovian' robot because he does have an in-built ethical system. He does not have the Three Laws, however (witness the episode 'The Measure of a Man', in which he refuses to follow a direct order from a superior officer [Second Law] without invoking either danger to a specific human [First Law] or the higher needs of all of humanity [Zeroth Law]). Moreover, his ethical programming is not fundamental to his design (his prototype, Lore, lacks it altogether, and Data's ethical program is turned off for much of 'Descent, Part II').



What are the Laws of Robotics, anyway?
The Three Laws of Robotics are:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(From Handbook of Robotics, 56th Edition, 2058 A.D., as quoted in I, Robot.)
In Robots and Empire (ch. 63), the 'Zeroth Law' is extrapolated, and the other Three Laws modified accordingly:
0. A robot may not injure humanity or, through inaction, allow humanity to come to harm.
Unlike the Three Laws, however, the Zeroth Law is not a fundamental part of positronic robotic engineering, is not part of all positronic robots, and, in fact, requires a very sophisticated robot to even accept it.
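
The clauses 'except where such orders would conflict with the First Law' and 'as long as such protection does not conflict with the First or Second Law' make the Laws a strict priority ordering. Here is a toy Python sketch of that ordering; the action names and the severity/choose helpers are purely my own illustration, not anything from Asimov:

    # Toy model: rank each action by the most important law it would
    # violate; a robot picks the action whose worst violation is the
    # least important law.  Lower number = higher priority.
    LAWS = {
        0: "Zeroth Law: protect humanity as a whole",
        1: "First Law: protect individual human beings",
        2: "Second Law: obey orders from human beings",
        3: "Third Law: protect your own existence",
    }

    def severity(violations):
        """Most important law broken (lower = worse); 4 means none broken."""
        return min(violations, default=4)

    def choose(options):
        """options maps an action name to the set of laws it would violate."""
        return max(options, key=lambda action: severity(options[action]))

    # An order to harm someone pits the Second Law against the First:
    dilemma = {
        "obey the harmful order": {1},  # violates the First Law
        "refuse the order": {2},        # violates only the Second Law
    }
    print(choose(dilemma))              # -> "refuse the order"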

Asimov claimed that the Three Laws were originated by John W. Campbell in a conversation they had on December 23, 1940. Campbell in turn maintained that he picked them out of Asimov's stories and discussions, and that his role was merely to state them explicitly.

The Three Laws did not appear in Asimov's first two robot stories, 'Robbie' and 'Reason', but the First Law was stated in Asimov's third robot story, 'Liar!', which also featured the first appearance of robopsychologist Susan Calvin. (When 'Robbie' and 'Reason' were included in I, Robot, they were updated to mention the existence of the First Law and the first two Laws, respectively.) Yet there was a hint of the Three Laws in 'Robbie', in which Robbie's owner states that 'He can't help being faithful, loving, and kind. He's a machine - made so.' The first story to explicitly state the Three Laws was 'Runaround', which appeared in the March 1942 issue of Astounding Science Fiction.