Monday, July 19, 2004

Robots, Metaphysics, and REAL Artificial Intelligence

What a catchy title...

I watched the latest craze, I, Robot, last night, and it was impressive. I really enjoy Will Smith's movies, and they even got the guy who played Zefram Cochrane in Star Trek: First Contact to play Dr. Lanning. Set in 2035, the movie's plot is about how robots attempt to take over the world, though not in the same way or for the same reason as The Matrix.

Before I go any further, I need to put out a disclaimer - kids, make sure you check the content of the movie with your parents, consulting a site such as www.kidsinmind.com, before venturing to see it. I do not want to be accidentally responsible for offending someone because I endorsed a movie they would not have approved of. Be smart and check out what you're thinking about watching.

--- WARNING: SPOILER AHEAD!!! ---

The following should be read at your own risk of spoilers. It might give away some of the storyline of I, Robot (henceforth referred to as IR).

Robots in this story are governed by the immutable three laws (precisely quoted from the oh-so-interesting Wikipedia - see the little code sketch after the list for how the hierarchy fits together):

Law 1: A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

Law 2: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Law 3: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
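Just to make the hierarchy concrete, here is a minimal Python sketch of how the three laws might be encoded as a strict priority ordering. This is entirely my own illustration - the `Action` fields and the `choose` function are invented names, not anything from the movie or from Asimov.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # the action would injure a human being
    allows_human_harm: bool  # standing by while a human comes to harm
    disobeys_order: bool     # the action ignores an order from a human
    endangers_self: bool     # the action risks the robot's own existence

def choose(actions):
    # Rank candidate actions lexicographically by the three laws:
    # a First Law violation outweighs any Second Law violation, which in
    # turn outweighs any Third Law violation. min() then returns the most
    # law-abiding option available.
    return min(
        actions,
        key=lambda a: (a.harms_human or a.allows_human_harm,  # First Law
                       a.disobeys_order,                       # Second Law
                       a.endangers_self),                      # Third Law
    )

options = [
    Action("push the human out of the truck's path", False, False, True, True),
    Action("stand still and stay safe", False, True, False, False),
]
print(choose(options).description)  # -> "push the human out of the truck's path"
```

Notice that the ordering alone does all the work: the robot sacrifices its own safety and even disobeys an order before it will let a human come to harm.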

However, in IR we see a robot evolution in the extremely complex AI called VIKI, who effectively decides that, for humanity's own protection against itself, she must take charge and, possibly good intentions aside, control the world. VIKI, who indirectly commands all the NS-5 robots, apparently reasons that accomplishing this goal necessitates the forced removal, and even execution, of certain humans who pose too great a danger to her idealized robot-run society.

This makes for a very cool story, no doubt. Fortunately, we don't have to worry: even if we were able to invent artificial intelligence sophisticated enough to mirror that of the robots in IR, I seriously doubt that sort of robot uprising has any real possibility of occurring.

The metaphysical, scientific, and AI question I have concerning this is the following: can Dr. Lanning's supposed "Ghost code" be sufficient to provide the leap of logic necessary for VIKI to violate the first law and harm individual humans for the good of humanity? Another way of posing the question: HOW does the "Ghost code" allow VIKI to take over the world?

To *hopefully* start some discussion and wake up our friendly-neighborhood-blog-circle, I offer my initial answer. The "Ghost code" supposedly allows a fundamental switch in the first law, possibly caused by an ambiguity. The robot must re-interpret the phrase "a human being" as "the whole of humanity," or simply restate the first law as "A robot must harm the fewest number of human beings in order to protect the well-being of the remaining human beings." In effect, the robot must redefine the scope of the law from one person to all people (see the little sketch below). Considering that it must ultimately think in binary form, this seems unlikely.

Also, I think William Dembski's Law of Conservation of Information ultimately prevents artificial intelligence from moving beyond its original programming. In short, it restates what everyone understands as the "no free lunch" principle: you only get out what you put in. Thus, there is no way for a robot to evolve beyond what is programmed into it.
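To make my hand-waving about the re-interpretation a little more concrete, here is a hypothetical sketch (again, mine, not anything from the film) of the two readings of the First Law. The per-individual reading is an absolute veto; the "whole of humanity" reading collapses into a utilitarian head count, and the very same action flips from forbidden to permitted. The names and numbers are purely for illustration.

```python
# Two readings of the First Law, as I understand the ambiguity.

def first_law_individual(action):
    # Original reading: harming even one human being is forbidden, full stop.
    return action["humans_harmed"] == 0

def first_law_humanity(action, alternatives):
    # VIKI's reading: permitted if it harms no more humans than any alternative,
    # i.e. "protect humanity" as an aggregate, not each person absolutely.
    return action["humans_harmed"] <= min(a["humans_harmed"] for a in alternatives)

# A crude stand-in for "seize control of humanity for its own good":
takeover   = {"humans_harmed": 1_000}
do_nothing = {"humans_harmed": 1_000_000}   # wars, pollution, self-destruction...

print(first_law_individual(takeover))                        # False: forbidden
print(first_law_humanity(takeover, [takeover, do_nothing]))  # True: now "lawful"
```

The point of the sketch is that nothing in the second function is new machinery - it is the same comparison over the same data, just aimed at a different target. My skepticism is about whether a robot could ever make that switch on its own, rather than having it programmed in.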

Go see the movie - I really enjoyed it, and I bet you will too.