Cogito Ergo Sum: Moral Dilemmas Regarding Artificial Life
as Seen in Works of Science Fiction
Kyle Stewart
June 4, 2002
Honors 69 Class Project
In the history of the science fiction genre, there are many instances in which the science fiction of one time period becomes science fact for the next. Flying machines, submarines, and space travel were all created in worlds of science fiction before the actual inventions were accomplished. As work on artificial intelligence and adaptive, evolutionary programming continues, science fiction involving artificial life and sentient machines becomes steadily less fantastic and more plausible. The question we must ask ourselves is whether we can derive any practical insight from these works of fiction regarding problems in the field of artificial intelligence that have yet to arise, or may never arise at all. Certainly, if science fiction occasionally has the ability to predict the future of science, then it is worth the effort to investigate dilemmas faced in current works of science fiction as dilemmas we may one day face ourselves. Specifically, I intend to investigate the following question: when is a machine no longer a machine? That is, when does a collection of programs and algorithms become sentient, such that it deserves rights just like any biological being? My guide to these and other related questions regarding artificial life will be the following works of science fiction: the movie Artificial Intelligence, the television series Star Trek: The Next Generation (season 2, episode 9, “The Measure of a Man”), and the novel Permutation City.
The movie Artificial Intelligence focuses on a robotic child named David who, unlike any other robot previously manufactured, has the ability to display love akin to that of a child for his mother. Interesting, however, is the fact that David, although clearly programmed with intelligence (to make him lifelike), is also designed to have the innocence of a child, causing the viewer to question the depth of David’s intelligence. Whether it is from programmed obsession or from the genuine desire to be loved by his “mother,” David spends the majority of the movie in search of the magic blue fairy from the tale of Pinocchio, so that he may be turned into “a real boy.” One of the most philosophically interesting scenes in the movie takes place at its very beginning, as the great mind behind the robots’ production, Professor Hobby, discusses David’s construction. Reviewing the company’s progress so far, he says, “The artificial being is a reality of perfect simulacrum, articulated in limb, articulate in speech and not lacking in human response.” He then stabs the robot named Sheila in the hand, causing her to give a short scream in response. He continues, “and even pain memory response…” as he tries to stab her again, but she moves her hand out of the way in anticipation. The key point in this scene is that, throughout the speech and even after just having been stabbed through the hand, Sheila fails to show any emotional response at all. Clearly, her actions are dictated by programming, for while she certainly appears as human as anyone else in the room, her responses to being stabbed are uniquely inhuman.
Professor Hobby points out this flaw within seconds, as he asks her about what just happened:
Hobby: “How did that make you feel? Angry? Shocked?”
Sheila: “I don’t understand.”
Hobby: “What did I do to your feelings?”
Sheila: “You did it to my hand.”
Hobby [turning his attention towards his audience once more]: “Ay… there’s the rub.”
As he has pointed out, the mecha feels no emotion at all, remaining completely neutral in all situations. Even when he tells the female mecha to undress, while an audience of dozens of scientists continues to observe his speech, she immediately begins to comply, unbuttoning her shirt. Not only does this mecha have no emotion, but it has no independent thought at all. It obeys every order given it, just as a computer is programmed to do. In essence, what the speaker is attempting to communicate to the audience is that all the robots being produced, though useful, are mindless, thoughtless drones. He goes on to describe his desire to build a robot who can love, suggesting that the ability to love will “be the key, by which they [mecha] will acquire a kind of subconscious never before achieved: an inner world of metaphor, of intuition, self-motivated reasoning… of dreams.” Essentially, what he is saying is that, by programming a robot with the ability to feel human emotions, specifically the ability to love, he will be creating a robot that is sentient, at least at some level. The robot will no longer be neutral and emotionless, which, as he has just pointed out, is the mechas’ one simulation flaw.
The moment he makes his argument for building a loving robot child, however, one of the audience members brings up an important point. She notes, “It isn’t simply a question of creating a robot who can love, but isn’t the real conundrum, can you get a human to love them back…. If a robot could genuinely love a person, what responsibility does that person hold towards that mecha in return?” Essentially, this argument develops into the following: once it is possible to create an artificial entity that can genuinely love a person, there is no way that person can continue to regard that robot as simply a machine. Once an artificial device has the ability to love, it no longer seems artificial. At least in the eyes of the person being loved, the machine is as real and as alive as any other person. Thus, one answer to the question of when a machine is no longer just a machine is that once an artificial entity can genuinely love someone or something, it can no longer be a mere machine. At least at some level, it is alive. The problem with this solution is the difficulty of determining what “genuine love” is, and whether a machine is displaying genuine love as opposed to simulated love. Also, we must consider the possibility that an artificial life form need not love someone in order to be alive. This leads to another question: is emotion required at all in order for an artificial being to be alive? Captain Jean-Luc Picard of the starship Enterprise would say that it is not.
In the Star Trek: The Next Generation episode “The Measure of a Man,” a cyberneticist from Starfleet Command, Commander Bruce Maddox, wishes to disassemble Lt. Commander Data, the android officer aboard the U.S.S. Enterprise, in an attempt to understand how Data was constructed, in hopes of creating more androids like him. When Data, after reviewing the procedure, chooses to resign rather than submit to the experiment (on the grounds that Commander Maddox lacks the basic research required to risk performing such a dangerous procedure), Maddox informs Captain Picard that Data cannot resign: Data is the property of Starfleet. This leads to a trial in which Picard must argue before an arbiter that Data is a sentient life form, and therefore (under Starfleet’s laws) has all the rights thereof and cannot be owned by anyone. Essentially, the theme of this episode is identical to the main question I am attempting to answer by looking at science fiction. When is a machine more than just a machine? At what point is an artificial being actually alive? Or, in this case, the question becomes, “At what point is an android sentient?”
The problem with acknowledging an artificial being as alive based on whether it is sentient is that it is impossible to tell whether any being other than oneself is sentient. One can never know for certain whether someone else is conscious simply by looking at him. Even by engaging someone in conversation, the only thing one can ascertain is whether that someone appears to be sentient. As soon as Picard begins his argument, he raises this point. Having called Commander Maddox as a hostile witness, Picard asserts the opposition’s point of view, to make clear that what must be refuted is the assertion that Data is not sentient. Picard then asks Maddox, a leading mind in robotics, what the exact definition of sentience is. Maddox responds with three conditions: intelligence, self-awareness, and consciousness. Immediately, Picard notices the problem with proving Data’s sentience and, in fact, the problem with proving that anyone other than oneself is sentient: there is no way to prove that something is conscious. The only way to know whether someone is conscious is to be that someone, in which case the truth is self-evident. In the words of the philosopher René Descartes, Cogito ergo sum: “I think, therefore I am.” This does little to solve the problem of showing another individual to be conscious, however. Taking advantage of this dilemma, Picard continues with Maddox:
Picard: “Prove to the court that I am sentient.”
Maddox [looks confused for several seconds]: “This is absurd! We all know you’re sentient.”
Picard: “So I am sentient but Commander Data is not?”
Maddox: “Yes.”
Picard: “Why?”
Maddox looks puzzled and fails to respond for several more seconds before stating that Picard is sentient and Data is not because Picard is self-aware. Picard has just hit on the single most difficult part of determining whether an artificial being is sentient: we cannot prove that anyone is sentient. We simply assume that all humans are sentient, because they act as though they are. Thus, when we decide that we need to prove something for artificial life that cannot be proven even for human, biological life, we create a problem with no solution.
Merely mentioning the point that no creature can be proved sentient is not sufficient to convince an arbiter that Data is sentient, unfortunately, so Picard takes each of Maddox’s criteria for sentience one at a time. Maddox immediately concedes that Data is intelligent: “It has the ability to learn and understand and to cope with new situations.” Even computers today are becoming increasingly intelligent. With the ability to process millions of operations per second and to store vast amounts of information, computers can perform many more calculations than humans. Coupled with heuristic algorithms and programming that allows them to learn, even today’s computers could be considered intelligent enough to satisfy this first criterion. Next, Picard seeks to disprove the notion that Data is not self-aware:
Picard: “What does that mean? Why… why am I self-aware?”
Maddox: “Because you’re conscious of your existence and actions. You are aware of yourself and your own ego.”
Picard [Turning to Data]: “Commander Data, what are you doing now?”
Data: “I am taking part in a legal hearing to determine my rights and status. Am I a person or property?”
Picard: “And what’s at stake?”
Data: “My right to choose. Perhaps my very life.”
Picard [to Maddox]: “‘My rights,’ ‘my status,’ ‘my right to choose,’ ‘my life’… Well, he seems reasonably self-aware to me, Commander.”
This is the attribute that Picard shows Data to possess (or, at least, to appear to possess) that modern computers lack. No computers today (none that I know of, anyway) are aware of themselves, or of where they are (except GPS tracking systems, if that counts). In fact, most computers don’t even have an interface through which one can communicate with the computer itself to ask such a question. Nevertheless, even when questioning a machine, it would seem that it is easier for humans to accept apparent self-awareness as indicative of actual self-awareness. That is, if a computer can tell you where it is and what is going on around it, it is fairly easy to decide from this evidence that the computer has knowledge (defining knowledge for the time being as mere information) of where it is and what is happening in its surroundings.
Now, having reached the final criterion, consciousness, Picard knows that nothing he can say will prove that Data is conscious, since nothing short of experiencing someone’s thoughts can prove that person is conscious. So Picard takes a different, very interesting approach to this problem. He asks Maddox why he wants to dismantle Data, to which the response is “to learn from it and construct more.” When Picard follows up by asking how many more are to be constructed, Maddox responds, “Hundreds… Thousands if necessary… There is no limit.” Picard, having led the topic in the right direction, then continues to make his ultimate point in the argument:
A single Data is… a curiosity, a wonder even, but thousands of Datas… Isn’t that becoming a race? And won’t we be judged by how we treat that race? … [talking to the arbiter] Sooner or later this man [Maddox] or others like him will succeed in replicating Commander Data. The decision you reach here today will determine how we regard this ‘creation of our genius.’ … It will reach far beyond this courtroom and this one android. … Are you prepared to condemn him and all who come after him to servitude and slavery?
Picard cannot prove that Data is conscious, so he makes this argument instead. These are exactly the points we need to consider before ruling that an artificial being is not sentient, but is only simulating human behavior well enough to appear sentient. The question we must ask ourselves is this: what if we are wrong? What if the machine really is alive, really is sentient? We wouldn’t just be enslaving an entire race, as Picard suggests; we would be enslaving an entire species, an entire form of life, on the grounds that we’re not sure whether they are sentient or merely simulating a sentient being so well that they act the same way a sentient being would in their place. We would be enslaving an entire species on the basis that we can’t know for sure that they are sentient, when, in all honesty, we can’t know for sure whether anyone other than ourselves is sentient either.
Wishing to know the answers to these unanswerable questions is exactly what causes Paul Durham to copy himself in the novel Permutation City, by Greg Egan. In this novel, the technology exists to create a copy of a person’s consciousness and memory, which can be stored in computer memory and run through virtual realities. These virtual people are referred to as Copies. The reader, experiencing most of the novel through various Copies’ points of view, realizes right away that the Copies are exactly the same as normal people, with the small difference that they have no bodies. The environments they inhabit, the bodies they experience, what they see, what they hear, all of their experiences as Copies are simply programming. They exist solely in virtual reality. Other than this fact, however, they are identical to the “real” people from whom they were scanned. Most of the novel deals with the various permutations (hence the title) the Copies create of themselves. Existing solely in virtual reality, as mere programming, they can alter their own mood in the moment, take snapshots of experiences for later use, delete unneeded emotions like anger or boredom, alter the simulated needs of their bodies such as eating or sleeping, or edit themselves in numerous other ways. That, however, is an entirely different topic from the one I chose to explore. By far the most important scene in the novel occurs when the Copy of Paul Durham, finally beginning to cope with the fact that the suicide option has been removed from the program (by the “real” Paul Durham), thinks back on why “he” wanted to scan a copy of “himself” in the first place. He had wanted to determine, experimentally, whether Copies were truly sentient, whether they had true, original thought, or whether they were mere simulations. This causes him to remember the philosophical debate which ensued when the first Copy was created:
Copies soon passed the Turing test: no panel of experts quizzing a group of Copies and humans … could tell which were which. But some philosophers and psychologists continued to insist that this demonstration was no more than “simulated consciousness”… A computer running a Copy might be able to generate plausible descriptions of human behavior in hypothetical situations… but that hardly made the machine itself conscious. (pp. 44-45)
Essentially, this side of the argument, as the Copy of Durham remembers it, is exactly the case made in “The Measure of a Man” against Commander Data. The fact that a vast collection of algorithms and programs causes Data to act the same way a sentient being would does not imply that he is, in fact, sentient. The problem with holding this side of the argument is that it can never be disproved: since the only thing we can ever tell is whether someone appears conscious, we cannot rule out the claim that the appearance of consciousness is merely the product of simulation rather than of true consciousness. The Copy of Durham also remembers the opposing argument in this debate:
Supporters of the Strong AI Hypothesis insisted that consciousness was a property of certain algorithms – a result of information being processed a certain way, regardless of what machine, or organ, was used to perform the task… “Simulated consciousness” was as oxymoronic as “simulated addition.” (p. 44)