I, Robot by Isaac Asimov
4 out of 5 stars
Last read in March 2009
I re-read this classic science fiction anthology for the Science Fiction & Fantasy Book Club at GoodReads (in March 2009). Here is a link to the discussion on the Three Laws and another for readers to post their favorites. My favorite stories include “Runaround,” “Liar!” and “Escape!”
Below you will find my mini-reviews of each story (spoiler warning).
March 29, 2009: Re-reading this for a book club at GoodReads. I’m slightly annoyed with the library because they held the wrong version of this book for me. It’s the shorter version re-released at the same time as the Will Smith movie.
On “Robbie” . . . A demonstration of the First Law. Robbie is a non-speaking robot assigned as nursemaid to a six-year-old girl. The mother cracks under mounting social pressure against robots and convinces her husband to have Robbie removed. The girl is devastated and doesn’t give up looking for Robbie until she finds him on a tour of a US Robots facility. The family dynamics are very dated (they scream 1950s), but otherwise it’s a good story, especially the relationship between Robbie and the girl.
On “Runaround” . . . It’s always a good idea to be precise when giving instructions to a robot. This story demonstrates the erratic and irrational robotic behavior that can occur when the Second Law and the Third Law are in balance. It took two “brilliant” scientists several hours to reason out that only the First Law would break the cycle.
On “Reason” . . . “Reason” reunites us with the same two “brilliant” scientists from the Mercury mining mission in “Runaround.” This time Powell and Donovan are running Solar Station #5, which beams solar energy to Earth. They have just assembled a new robot, designation QT-1, “Cutie” colloquially. The hope for this new model series was to replace the executive-level humans on the Solar Stations (i.e. Powell and Donovan) so that humans would only be required to visit the stations to make repairs. Cutie waxes philosophical, develops his own theology, and evangelizes the other robots. Donovan and Powell struggle to break the obsession but eventually come to terms with its potential. This story reminded me of Cylons, but without the darkness, danger, and threat to humans.
On “Catch That Rabbit” . . . This story was entertaining but a bit weak on the “what if” premise. Donovan and Powell are back at a mining facility, testing a new model of robot – a multirobot – a master robot with six subsidiaries. As long as the robots are watched by the humans (and the robots know they are being watched), they perform flawlessly. But when they are unwatched, they appear to go bonkers – losing track of time, unresponsive to radio hails, etc. Powell and Donovan eventually “catch the rabbit,” i.e. the trigger point for the breakdown, but it just doesn’t have the impact of the earlier stories.
On “Liar!” . . . I like this story because it is very emotionally charged and because its “what if” probes the very definition of harm – a mind-reading robot, bound by the First Law, tells people what they want to hear, since emotional harm counts as harm too.
On “Little Lost Robot” . . . This story changed the rules, literally. The “what if” deals with a modified First Law that contains only the positive aspect of the Law – “No robot may harm a human being” – leaving off the latter portion – “or, through inaction, allow a human being to come to harm.” The scientists involved in the Hyperatomic Drive project felt they needed robots with a modified First Law because they were constantly putting themselves in harm’s way, which forced the non-modified robots to “save” them. Dr. Calvin eventually convinced them of the dangerous consequences.
On “Escape!” . . . This was another test of the First Law. Dr. Calvin inadvertently made matters worse: in trying to help the engineers solve the interstellar jump problem, she also had to protect The Brain from destroying itself over a dilemma.
On “Evidence” . . . This story dealt with the “what if” of a robot that looked and acted exactly like a human. It reminded me of the “skin job” references in Battlestar Galactica (reimagined), but with less violence. A politician is accused of being a robot and refuses to submit to testing. The argument is raised that if a human follows the Golden Rule, he basically also follows the Three Laws. So without a physical examination to prove otherwise, a good, decent human could never be proven not to be a robot.
On “The Evitable Conflict” . . . This story finally gets to the crux of the matter in the evolution of the Three Laws. It’s an expansion of the First Law by the Machines (the large super-brain robots that shepherd the four Regions of Earth), as articulated by Dr. Susan Calvin: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This is commonly referred to as the Zeroth Law of Robotics.