The Ice Man Cometh:
Lt. Commander Data and the Turing Test

Dr. James F. Sennett
McNeese State University

draft -- please do not use or quote without permission
draft date: September 1996


Sections

  1. Introduction
  2. Demythologizing the Turing Test
  3. Revision #1: Raising the Stakes
  4. Revision #2: The Ice Man Hypothesis
  5. Revision #3: Coming out of the Closet
  6. The Turing Test and Behaviorism
  7. RTT and the Problem of Other Minds

“It is morally impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act.”

--Rene Descartes1

“Perhaps it won’t be all that long before scientists begin to wonder why anyone ever bothered to distinguish artificial intelligence from the ‘real’ thing.”

--David H. Freedman2

1. Introduction

Is Data a person? Data, as many of my readers undoubtedly know, is the lovable and quite impressive (albeit purely fictitious) android on the television and movie series Star Trek: The Next Generation (STNG). Data is a most remarkable invention. He is a commissioned officer in Star Fleet (the exploratory and military wing of the United Federation of Planets). He carries on conversations flawlessly. He enters into relationships with others. He has friends. He ponders, hopes, and is sometimes confused and bewildered. He even dreams. To top it all off, this conscious mental activity is provided via an Asimov-esque positronic brain featuring web-linked neural networks to warm the cockles of any connectionist’s heart.3

Once we assume the coherence of the claim that such an android is theoretically possible, the question of its metaphysical status becomes quite intriguing. That Data is not a human being is both obvious and irrelevant.4 The powerful and debatable question is, is Data a person? Does Data hold that same metaphysical status that renders human beings worthy of such privileges as moral rights, economic and political freedoms, and expectation of just treatment?5

It is not surprising that several episodes of the STNG television series dealt directly with the question of Data’s metaphysical status. In “The Measure of a Man” he is put on trial to determine if he is a free, sentient being or the property of the state. At issue is the question of whether or not Data can resign his commission in order to avoid a transfer to a post where he will be dismantled and studied. The verdict by Star Fleet’s judge advocate general is that Data is indeed free, sentient, and deserving of protection of his moral and legal rights. In “The Offspring” (my personal favorite), Data builds another android of identical sophistication to himself. He names her “Lal” and refers to her as his daughter. When Star Fleet insists that she be separated from Data and taken to Star Fleet Research for training and study, Data’s captain Jean-Luc Picard risks his career to defy the order, charging that the move constitutes “ordering a man to turn his child over to the state.”6

One of the most intriguing of these episodes is entitled “Thine Own Self.” Due to an electronic overload of his neural pathways (the most common of android maladies), Data develops the android equivalent of amnesia. He does not know his name. He does not know where he is from or anything about his life. He does not even know that he is an android. In this condition he wanders into a village on a pre-industrial planet, where the villagers are puzzled by his strange appearance (pale skin, yellow eyes), his strange speech (void of contractions, stilted, scientific, unemotional), and his incredible strength (he lifts with ease an anvil that has fallen off its platform onto a blacksmith’s leg).

Data is examined by the village medicine woman, who purports to know the explanation for his strange appearance and behaviors. “You,” she declares emphatically, “are an Ice Man.” The Ice Men are rumored to live in the vast frozen mountain wastes many miles from the village. Of course, no one from the village has ever been to the mountains or seen an Ice Man, but their existence is, in the words of the medicine woman, “common knowledge.” Everyone (perhaps even Data himself) accepts this explanation as the most plausible one available – until Data is struck in the face by a pickax and his “flesh” tears away to reveal his metallic superstructure and electronic circuitry. At that point the villagers label him a “monster” and eventually kill him. (Fortunately, this is a condition that is not always fatal on Star Trek.)

This episode raises some fascinating questions about the proper application of the concept of personhood. Prior to discovering his mechanical nature, the villagers accept Data univocally as a person – a rather strange person, perhaps, but a person nonetheless. Furthermore, they seem to have arrived at this conclusion by that age old, tried and true empirical principle: if it looks like a duck, walks like a duck, and quacks like a duck, it must be a duck. Data, a highly sophisticated – yet man-made – machine, had succeeded in convincing a whole village of people that he was one of them.

When viewed from this perspective, the episode reminds us of that most famous of computer/human face-offs, the Turing Test. This test, devised by computer pioneer Alan Turing, was designed to determine whether or not a computer is capable of human-like cognition. (Or, to put the point in the more positivistic tones Turing preferred, the question of whether or not a computer could pass the test should replace the less informative, more emotional question, “Can this computer think?”)

The Turing Test is a variation on what Turing calls “the imitation game.” This game pits a person (the “interrogator”) against two other people, a man and a woman, all three of whom sit at teletype machines in three different rooms. The interrogator seeks to determine which person is the man and which is the woman simply by asking questions over the teletype. The man’s task is to cause the interrogator to identify wrongly; the woman’s task is to help the interrogator by answering truthfully. Turing then transforms this game into the Turing Test with two simple but profound questions:

What will happen when a machine takes the part of [the man] in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original ‘Can machines think?’ (p. 54)

That is, Turing proposes that a machine that can convince an interrogator that it is a human being, even faced with genuine human competition, will have done all that can be expected of a machine to satisfy whatever criterion we wish to suggest that qualifies human beings as genuinely cognitive beings.
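
For readers who like their thought experiments executable, the shape of the game can be put in code. The following Python sketch is purely illustrative – none of these names, interfaces, or numbers comes from Turing – but it captures the structure just described: two hidden responders behind neutral labels, an interrogator who sees only typed replies (here, an assumed object with ask and identify methods), and Turing’s substitution of a machine into the deceiver’s seat.

    import random

    def imitation_game(interrogator, deceiver, truth_teller, rounds=5):
        # Hide the two responders behind neutral labels, in random order,
        # so the interrogator can rely on nothing but the typed replies.
        parties = [("deceiver", deceiver), ("truth_teller", truth_teller)]
        random.shuffle(parties)
        labels = {"X": parties[0], "Y": parties[1]}

        transcript = []
        for _ in range(rounds):
            question = interrogator.ask(transcript)
            answers = {lab: responder(question)
                       for lab, (_, responder) in labels.items()}
            transcript.append((question, answers))

        # The interrogator names the label she believes hides the deceiver;
        # return whether she identified correctly.
        guess = interrogator.identify(transcript)
        return labels[guess][0] == "deceiver"

    # Turing's substitution: seat a machine where the deceiving human sat.
    # If the interrogator errs about as often as before, the machine has
    # done all the game can ask of it.
    #
    #   human_trials   = [imitation_game(judge, man, woman) for _ in range(100)]
    #   machine_trials = [imitation_game(judge, machine, woman) for _ in range(100)]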


2. Demythologizing the Turing Test

The Turing Test has motivated a virtual mountain of paper in response over the last half century. These responses range from declarations that the Test marks the ultimate solution to the mind/body problem to charges that the Test is nothing but science fiction musing, with little or no philosophical import. Be all that as it may, the relative import of the Test as devised by Turing is not my primary concern here. Rather, I want to argue that the figure of Data, and particularly his adventures in the episode “Thine Own Self,” suggest a number of revisions to the Turing Test that would turn it into a much more interesting philosophical tool. That is, the conceptual possibility of a Data-sophisticated android can aid us in sharpening up and focusing this classic thought experiment. In essence, I am proposing that the Turing Test be demythologized – reinterpreted to speak to present day concerns in light of the tremendous advances in cybernetic technology over the last several decades, and in light of the sharpening of our imaginations and intuitions regarding where such technology could conceivably go. The Ice Man Cometh – and the Turing Test must adapt to receive him.

The standard picture of the Turing Test shared in the literature today already represents a significant revision of the test from the way Turing presents it in the passage quoted above. Rather than an interrogator attempting to distinguish between a machine and a man, the version I will call the Received Test (RT) pits an interrogator against a single responder, and the interrogator must try to determine whether the responder is human or machine.7 The RT is clearly an improvement over the original game suggested by Turing. Given the chance to make direct comparisons, and the knowledge that one and only one responder is a human, the interrogator may well be able to see distinctions that will tip the hand of one responder or the other. But without such advantages, one might more easily mistake the answers of a machine for those of a human – and this based on the standard intuitions at play in our assessment of others with whom we interact as persons. But even the RT does not go far enough in making the Turing Test the most effective thought experiment it can be for sharpening our intuitions. I believe the adventures of Data in “Thine Own Self” suggest at least three other revisions that can be made.

Before looking at these proposed revisions, however, it will be helpful to say more about the notion of a Data-sophisticated android (DSA). It is important to understand that I am not concerned with whether or not current cybernetic technology suggests that DSA’s are within the reaches of future technology. It is enough for my purposes that the idea of a DSA is coherent, so as to allow the character of Data to move unimpeded through the series and our minds. My purpose here is to examine the concept of personhood, given the apparent coherence of the idea of a DSA. Given that this is a conceptual, and not an empirical, study, the question of whether or not there ever will be – or even could be – any DSA’s (in a scientific or epistemological, not a metaphysical, sense) is of no concern to me.


3. Revision #1: Raising the Stakes

The first revision to the RT I suggest is obvious from the fact that I am utilizing the Turing Test to examine the question “Is Data a person?” The original test, and even the RT, are tests of cognition or intelligence. They concern the question of whether or not a machine can think (or is thinking) in the same way that humans think. But there may well be conceptual room for an object that thinks just like human beings think, but does not qualify metaphysically as a person. At the very least, it is not obvious that humanesque cognitive capacities constitute a sufficient condition for personhood.8

However, as the development of this paper will make clear, I believe that my Revised Turing Test (RTT), by expanding the range of behaviors that the interrogator observes, also expands the metaphysical category that can be demonstrated by those behaviors. I believe that the Turing Test can be revised with regards to its purpose – that it can be a test not just for cognition but for personhood as well. The villagers did not take Data merely to be a thing that thinks. They took him to be a person. They understood him to possess will, desire, self-awareness, purposeful deliberation and decision making capacities. They saw him as conscious, free, sentient, and morally responsible. And they arrived at this conclusion (or, more accurately, made this tacit assumption) on the basis of their observation of and interaction with his complex behavioral patterns.

Todd Moody notes that a similar expansion of the RT is already tacitly at work in much of the Turing Test literature:

Although Turing himself limited his remarks to the property called ‘intelligence’ and the question of whether machines can ‘think,’ others have been eager to generalize his conclusions to cover any and all mental states and properties. According to this extended thesis, the Turing Test-passing computer may be said to be not just intelligent but conscious, with a mental life, ideas, belief, desires, and whatever else goes with having a mind. Put another way, the thesis is that the ability to pass the Turing Test is a logically sufficient condition for having a mind (p. 78).

My first revision actually differs from this revision of the test by claiming that a DSA, by passing the RTT, may well qualify as a person. While I believe that the test also gives as good reason as we could ask for to assume consciousness, mental states, and the like of DSA’s, this will not be my primary focus (though I will touch on it briefly in section 7).

It is important here to note the connections between the concept of personhood and that of human being. As I have said before, the fact that Data is not a human being is both obvious and irrelevant. The concept of nonhuman persons is certainly coherent. However, the fact remains that the only objects known to exist that are uncontroversially persons are human beings. And this fact is quite relevant to the question of whether or not an object would be taken to be a person in the RTT (since, as Revision #3 will propose, the RTT requires face to face confrontation between interrogator and person candidate). The villagers would never have taken Data to be a person if he had not looked like a human being. If he had been a talking box or had resembled a coyote that speaks, gestures, and reasons, he might have been taken as a god or a monster. But he would not have been accepted as a being of identical metaphysical status. And, as I mentioned above, their assessment of him as a person was overturned when it was discovered that he was not human.

I believe that any DSA encountering human beings on Earth today would be in the same boat. Without human appearance, its chances of being taken as a person and treated like one are slim to none. (We live, after all, in a society that has only recently advanced to the realization that other human beings who differ from a given type of human being in significant ways nonetheless count as persons!) Perhaps some day DSA’s will be developed and human encounters with them will become commonplace enough to broaden our intuitions to the point that we accept them as persons even though we know that they are not human. Then we may be poised to begin accepting as persons even machines or other organisms that do not look human. But until such a time, biochauvinism will reign supreme, and the concept of a person will remain closely linked with that of human beings.9

Exactly what do I have in mind when I suggest that the RTT is a test for personhood? The claim might be seen as asserting at least three distinct theses, one epistemological, one ontological, and one moral:

ET: Since we are justified in believing that other human beings are persons, we would be justified in believing any DSA that passed the RTT to be a person.

OT: Since human beings are persons, any DSA that passes the RTT is also a person.

MT: Since human beings, qua persons, should be treated as rights-bearers, Kantian ends, autonomous agents, etc., so should any DSA that passes the RTT.

All three of these are stated in non-skeptical terms, assuming that human beings indeed satisfy the category in question. There are more modest versions of each that might work as well. Most notably, ET could be toned down to

MET: We are as justified in believing that any DSA that passes the RTT is a person as we are in believing that other human beings are persons.

I believe that ET, OT, and MT are all true, and that my present project offers good reasons for accepting all three. Perhaps it offers the best support for MT, slightly less for ET, and slightly less still for OT. (Notice that neither MT nor ET entails that DSA’s are persons.) Nonetheless, I believe that even the case for OT is pretty strong.10 At the very least, I believe that my project offers overwhelming support for MET, and I would be satisfied if this were all my hearer granted me (especially since I believe that it is relatively easy to argue that MET entails MT). Each of these four theses should be kept in mind throughout the paper, however, and references to them will be made when appropriate for clarification.

One final word is in order. While my primary concern in this paper is personhood, that concern naturally and unavoidably entangles itself with related concerns, such as consciousness, mind, and cognition. Of the relationships between personhood and these others I will say only this: I take it as essentially uncontroversial that having good reason to believe that a given entity with which one is interacting is a person is likewise good reason for believing that it is conscious, has a mind, and engages in cognitive activity in the same sense in which each of these is true of normally functioning human beings. Furthermore, I take the reverse to be true – that the combination of mind, consciousness and cognition like that possessed by humans is good reason to consider an entity to be a person. (I say combination particularly because, as was seen above, I want to leave logical room for the possibility of an entity that possesses human cognitive capacities but does not count as a person.)11


4. Revision #2: The Ice Man Hypothesis

There is one clear, simple, obvious reason why the people of the village accepted Data as a person rather than a computer: they had no conception of a computer at all, much less one on the DSA level. Unlike a Turing interrogator, they did not have what I will call the “Android Hypothesis” available as a possible explanation for Data’s strange appearance, speech, and behavior; viz., “This person-like but somewhat aberrant behavior is exhibited by an android and not a human being.” They were not trying to decide if Data were man or machine – they did not even comprehend that such a choice could face them.

But lack of the Android Hypothesis alone is insufficient to explain their taking Data to be a person. They could have simply categorized him as some other animal, or (as they later did) as a monster, or some other entity – or they could have simply opted for not knowing what he was. Their taking Data as a person – as belonging to the same metaphysical category as themselves – is explained both by the absence of the Android Hypothesis and by their invoking of what I will call the Ice Man Hypothesis. When faced with the task of explaining Data’s aberrant characteristics, the medicine woman postulated that, while a person, Data was a person of a far different sort – from a far off land with strange customs, bizarre lifestyles, and uncanny abilities. Even with no empirical evidence that there were any such persons, let alone that Data was one of them, this conjecture – this Ice Man Hypothesis – was curiously satisfying to the medicine woman, the villagers, and (again, perhaps) even to Data himself.

The Ice Man Hypothesis is possible precisely because the concept of personhood, like any sufficiently rich concept, is broad and open-ended. Responding to the question “What is typical of persons?” Jane English produces a litany of characteristics, which she classifies under psychological factors, rational factors, social factors, and (unfortunately!) biological factors. But then she offers the following insightful observation:

Now the point is not that this list is incomplete, or that you can find counter-instances to each of its points. [...] There is no single core of necessary and sufficient features which we can draw upon with the assurance that they constitute what really makes a person: there are only features that are more or less typical (p. 297).

English’s point is similar to that made by Strawson, who speaks of a central class of predicates that seem to form a core for the idea of personhood, but which is itself of undetermined membership – only typical cases may be cited.12 The concept is capable of accommodating a wide range of aberrant and deviant characteristics, as long as a central core of features is present to a satisfactory degree. In other words, qualifying as a person is not so simple as satisfying an uncontroversial list of necessary and sufficient conditions. It is a balancing act between characteristics thought to be typical and those thought to be atypical.
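
The contrast between checklist definitions and cluster concepts is easy to render concretely. In this toy Python sketch (the features, weights, and threshold are all invented for illustration, not drawn from English or Strawson), personhood is judged by how much of a weighted core of typical features is present, so an atypical trait or two need not disqualify a candidate:

    CORE_FEATURES = {
        "converses":            3.0,
        "reasons":              3.0,
        "shows_emotion":        2.0,
        "acts_purposefully":    2.0,
        "forms_relationships":  2.0,
        "human_appearance":     1.0,
        "biological_makeup":    1.0,   # the villagers' eventual sticking point
    }

    def person_score(observed):
        # Sum the weights of whatever typical features are actually observed.
        return sum(weight for feature, weight in CORE_FEATURES.items()
                   if feature in observed)

    def seems_person(observed, threshold=9.0):
        # No necessary and sufficient conditions: atypical traits do not
        # disqualify, so long as enough of the core is present.
        return person_score(observed) >= threshold

    # Data in the village exhibits everything but the biology.
    data = {"converses", "reasons", "shows_emotion",
            "acts_purposefully", "forms_relationships", "human_appearance"}
    print(seems_person(data))   # True: the core outweighs the deviations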

If the concept of a person is as English and Strawson depict it – and I believe it to be quite clear that it is – then two facts about Data give rise to the Ice Man Hypothesis. First, Data clearly demonstrates a sufficient array of characteristics typical of persons, and these are related to one another in sufficiently characteristic ways. Hence, his personhood is not questioned. Nevertheless, Data also clearly demonstrates a vast array of characteristics that, though not challenging the central core of personhood characteristics, nonetheless distinguish him from familiar persons in a way that cries out for explanation.

Enter the following proposal – the Ice Man Hypothesis: there are [or, at least, it is possible that there are] persons that display characteristics and behaviors very different from any persons with whom I am acquainted. The villagers accept this hypothesis implicitly, and so naturally respond to the juxtaposition of typical core person properties and atypical non-core person properties in Data by applying the proposal to his case – “You are an Ice Man.” In the absence of the Android Hypothesis, the villagers are faced with three possible assessments of Data’s status: “you are an Ice Man” (the Ice Man Hypothesis), “you are not a person at all” (the Monster Hypothesis), and “we don’t know what you are” (the Null Hypothesis). They univocally settle on the first.

Hence, my second suggested revision of the RT: eliminate the Android Hypothesis and see if the Ice Man Hypothesis emerges. Do not ask the interrogator to determine whether she is interacting with a human being or a computer. Simply ask her to interact and give her assessment. If she is engaging a DSA or some other sufficiently advanced machine, she will undoubtedly encounter characteristics and behaviors that differ from her normal expectation of social interaction. However, if such differences lie outside the core set of person characteristics, or if she encounters a sufficiently counterbalancing set of core characteristics, she is likely to invoke something like the Ice Man Hypothesis – “this person is really different” – rather than the Monster Hypothesis or the Null Hypothesis or even the Android Hypothesis. By explicitly offering the Android Hypothesis as an option to the interrogator prior to the test, we introduce a pollutant that sets the interrogator up to weigh atypical characteristics more heavily than she otherwise would. She will tend to magnify non-core differences and treat them as she would core differences under normal circumstances. So eliminating the specific mention of the Android Hypothesis as an option would force the interrogator to come up with this option on her own, or to weigh the Ice Man Hypothesis against it and the other options on their own merits. If, when faced with a conglomeration of typical and atypical characteristics and behaviors, the interrogator invokes the Ice Man Hypothesis rather than one of the others, this will count at least as prima facie evidence that the DSA encountered by the interrogator qualifies as a person.13 It will, at the very least, offer strong support for MET.
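
The “pollutant” claim can likewise be made vivid with a toy model. In the Python sketch below (the thresholds and the doubling factor are invented for illustration and carry no empirical weight), the same observed mix of core and deviant characteristics yields the Ice Man Hypothesis when the interrogator is left to her own devices, but the Android Hypothesis once that option has been planted in advance:

    def assess(core, deviation, android_hypothesis_offered=False):
        # Priming the interrogator with the Android Hypothesis inflates
        # the weight she gives to non-core oddities.
        if android_hypothesis_offered:
            deviation *= 2.0

        if core < 3.0:
            # Too little of the core: not a person at all, or no verdict.
            return "Monster Hypothesis" if deviation > 5.0 else "Null Hypothesis"
        if deviation > core:
            return ("Android Hypothesis" if android_hypothesis_offered
                    else "Null Hypothesis")
        return "Ice Man Hypothesis"   # a person, just a very different one

    # The same observations under the two framings:
    print(assess(core=10.0, deviation=6.0))                                   # Ice Man Hypothesis
    print(assess(core=10.0, deviation=6.0, android_hypothesis_offered=True))  # Android Hypothesis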

Moody suggests that a situation like that required by this revision compromises the entire idea of the Turing Test. His point is related to an interesting experience he had while debating the Turing Test on the Internet:

At one point I was accused of inconsistency. How could I question the adequacy of the Turing Test when the very electronic medium that we were using closely resembled it? Wasn’t I attributing all manner of mental states to my interlocutors simply in virtue of exchanging typed text with them? Indeed I was, but only because a number of other presumptions were already firmly in place. It was (and is) part of my general knowledge that there are no programs that come close to passing the Turing Test, much less engaging in philosophical discourse. The very idea of a test presupposes some plausible doubt in the matter. Where there was no doubt, I could not be said to be testing anyone. The same reply goes to those who claim that we are always Turing Testing each other in our everyday interactions. This is false, unless we employ a much looser and vaguer sense of the Turing Test than what Turing himself had in mind (p. 96, emphasis mine).

Perhaps I am employing a “much looser and vaguer sense of the Turing Test” than Turing intended, and I don’t really care whether I am or not. But it seems clear that Moody’s objection misses a couple of points that are captured by the Ice Man Hypothesis revision. First, sufficiently aberrant conditions can raise doubt even when there is initially none. This is illustrated by the fact that the villagers’ assessment of Data changed radically when they discovered that he was not a biological being. If one of Moody’s interlocutors were suddenly to begin responding to his queries with completely inappropriate replies, he would begin wondering if he were interacting with a human being any more – or ever had been. My argument is that interaction with a DSA sufficient to stave off doubts is in its own right important evidence at least for MET if not ET or OT.

Second, the whole purpose of the RTT (and of the RT and original test, I believe) is not to fool someone into thinking that a computer is really a human (as suggested by Moody’s insistence that there must be doubt), but to demonstrate whether or not the computer satisfies a certain criterion (whether for personhood or cognition). The best test for whether or not it does so is whether or not it is taken as a person or a cognitive being. Given that sufficiently aberrant interaction will not fool people whether they originally have any doubts or not, such doubts seem unnecessary for the test to do its job. Besides, as I have suggested above, a certain amount of doubt may well prove counterproductive and polluting to the experiment. Initial doubts can lead one to weigh aberration more heavily than she ought. Given the fluidity of personhood properties, there will be a wide variety of property manifestations that would be allowed and even unnoticed in the absence of doubt, but would be (wrongly) taken as significant if there were initial doubts causing one to notice them too readily.


5. Revision #3: Coming out of the Closet

In the original Turing Test, as well as in the RT, the interrogator never has face to face contact with the one she questions. The subject of her interrogations is always hidden, packed away in a closet or cubicle and seated at (or attached to!) a teletype device. (In the interest of demythologizing the Turing Test, I will hereafter refer to the connection between interrogator and candidate, including its terminals, as the “network.”) One reason for this anonymity is obvious. It is unlikely that Turing or any of his near contemporaries ever envisioned that a computer would be developed that could fool someone into thinking it was a human while in direct visual contact with it. The DSA possibility, of course, gives reason to reexamine such an assumption. But a better reason (and perhaps more to the point) for the anonymity is that, DSA’s notwithstanding, there may well be machines that can emulate human verbal behavior flawlessly, but look nothing like humans. Visual contact with such machines would cloud the interrogator’s judgment and prevent a fair assessment of the machine’s cognitive capacities, given the close connection assumed to exist between persons and humans that was discussed in section 3 above. So the computer goes into the closet.

But the DSA scenario frees us from such constraints. Were Data or one of his ilk to be built, then the computer could come out of the closet and go toe to toe with its interlocutor. This is, indeed, what happens in “Thine Own Self.” Data is taken to be a person not simply because his verbal behaviors suggest humanesque cognitive capacities, but also because he looks, acts, and responds like a human being – at least he does so within acceptable parameters given the core properties. So the third suggested revision of the RT is to bring the computer out of the closet. A computer capable of emulating a more complete range of human behaviors is one much more likely to be taken as a person and for exactly the same reasons that humans are routinely taken as persons. A decloaked DSA would have opportunity to emulate a sufficient number of core person properties and thus win a favorable assessment regarding its personhood.

By coming out of the closet, the computer must interact with its interrogator in various ways, including gestures, facial expressions and other body language. It must be able to report and respond accurately to sensory stimuli. It must also be able to engage in genuine conversation, not merely respond to a set of predetermined questions. Its computer status would be revealed were it simply to abide quietly until approached with an inquiry. It would need to be able to generate conversations and other forms of human interaction – not merely participate in them along prescribed guidelines.14

Notice, then, that there is a sense in which a DSA out of the closet would have a more difficult time being taken as a person than if it remained hidden. All things being equal, many people would take demonstration of intelligence and cognition alone as sufficient to signify personhood (though, as I have mentioned above, it is far from clear that this is actually the case). Absent the Android Hypothesis, it may be likely that a computer could win personhood assessment from the interrogator just through cognitive interchange on the network, whereas that same computer would be incapable of convincing her in direct contact. This would be the case were its noncognitive behaviors and characteristics aberrant enough to override the prima facie assessment suggested by the cognitive capacities. When the DSA comes out of the closet, it must be capable of sufficiently supplementing its cognitive skills or it runs the risk of failing the RTT.

This revision suggests that the RTT would be (at best) a sufficient, but not a necessary test for personhood. While bodily features and behaviors are certainly quite suggestive of, and may well be part of a package of, sufficient conditions for personhood, it seems a serious mistake to suggest that they are necessary conditions for personhood. Putnam’s brain in a vat, for example, is a conceptual possibility. Were it properly connected to mechanical or electronic devices allowing it to communicate with and gather sensory data from the world around it, it would most likely display behaviors that would strongly move us to regard it as a person. Also, if the concepts of God and disembodied spirits are coherent at all (as many have taken them to be), they would certainly count as persons. More to the point, if DSA’s are a conceptual possibility, then certainly a computer with all the cognitive, sensory, and interactive capacities of a DSA, but without the humanoid form and attendant behaviors, is also a conceptual possibility. If such a machine were in the cubicle attached to the network, it might very well exhibit interactive behaviors sufficient to convince anyone it was a person. Outside the closet, it would have a tougher time.15


6. The Turing Test and Behaviorism

The RTT may be seen as highlighting a feature that many have accused the Turing Test of manifesting all along – a commitment to a behavioristic model of mind. After all, the RTT purports to test for personhood solely on the basis of displayed verbal and nonverbal behavior patterns. But this line of thinking is a mistake. While Turing was almost certainly a behaviorist, it is not the case that the Turing Test, in any of its varieties, entails or even suggests behaviorism. What the test does suggest is a similar, though weaker, claim.

In the words of Dale Jacquette, “The most fundamental insight of the Turing Test is that, to be recognized for scientific or philosophical purposes, intelligence [or personhood] must be behaviorally communicated from source to judge” (p. 70, emphasis mine). In other words, the scientist and philosopher must use behavioral phenomena as the only available empirical evidence for judgments regarding intelligence (or personhood). But the understanding of why such judgment must be grounded in behavior is what separates behaviorism from other theories of mind. The behaviorist takes such behavior to be the mental events (or, more accurately, to stand in the place of such events in a more accurate, empirically pure description of the world). But even if one denies this claim, she must still rely on behavior as the only public evidence that mental events are occurring at all. Even the functionalist and connectionist, who will desire to map mental events onto types of processes, can only associate such processes with given mental events via behavioral indicators (verbal or otherwise) that such events are indeed taking place. A commitment to the view that the existence of mental events is communicated behaviorally if at all is not a commitment to behaviorism as a theory of mind.

In fact, evidence for the consistency of the Turing Test with non-behavioristic models of mind is suggested by the work of Turing himself. The kind of computer Turing envisioned possibly passing his original test was that modeled on (appropriately enough) the Turing Machine. But the outputs of Turing Machines are not simply functions of inputs; rather, they are functions of the combination of inputs and inner states of the machine. A machine in state S1 given input I will respond with output O1; the same machine in state S2, given I, will respond with output O2. (For example, a vending machine that requires a dime for output would move to a different internal state if a nickel is inserted – a state in which one more nickel is required for output.16) Such a model is functionalist, not behaviorist. And this structure of internal states causing outputs and other internal states is both inconsistent with behaviorism (which does not recognize the efficacy of internal states in explaining behaviors) and consistent with a Turing Test evaluation of cognition (or personhood). Such consistency is seen in the fact that it may well be (probably is) the case that some internal state manipulation is required if core cognition or person properties are to be emulated. The fact that behaviors may be sufficient to demonstrate cognition or personhood does not entail or even suggest that such behaviors are constitutive of or can provide replacement for any internal states functioning in the input to output process.17
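
The vending-machine illustration (credited to Lillegard in note 16) translates directly into code. In this minimal Python sketch of mine – a toy rendering, not anything from Turing or Lillegard – the response to a coin is a function of the input and the machine’s internal state, which is exactly the structure behaviorism cannot capture: the same input produces different outputs on different occasions.

    class VendingMachine:
        # Price: one dime, or two nickels. The output depends on the pair
        # (state, input), not on the input alone.

        def __init__(self):
            self.state = "S1"          # S1: nothing deposited yet

        def insert(self, coin):
            if self.state == "S1":
                if coin == "dime":
                    return "item"      # S1 + dime -> output, state unchanged
                if coin == "nickel":
                    self.state = "S2"  # S1 + nickel -> new internal state
                    return None
            elif self.state == "S2":
                if coin == "nickel":
                    self.state = "S1"  # S2 + nickel -> output, reset
                    return "item"
            return None                # anything else is ignored

    m = VendingMachine()
    print(m.insert("nickel"))   # None -- the machine merely changes state
    print(m.insert("nickel"))   # item -- same input, different output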

In his seminal paper “Minds, Brains, and Programs,” John Searle seems to have confused these two distinct positions. He claims that

in much of AI there is a residual behaviorism or operationalism. Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. (p. 371)

Searle is here addressing specifically the thesis he terms “strong AI” – the view that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (p. 353, emphasis his). But the key expression here is “given the right programs.” This expression reveals that, even in strong AI, there is commitment to some mechanism beyond (or behind) strictly behavioral patterns that explains or constitutes mental activity. The notion of programs suggests internal states of the computer playing causal roles in its outputs. Again, this is the critical feature that separates functionalism from behaviorism.

But Searle errs further when he ties this supposed commitment of strong AI to behaviorism in with the Turing Test: “The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic ...” (p. 371). Granted, Turing’s desire that his test replace (rather than answer) the question “Can machines think?” suggests a view of mental phenomena that is (progressively) empiricist, positivist, operationalist, and behaviorist. But Turing’s test transcends his philosophical musings regarding its import. Perhaps Turing was “unashamedly behavioristic and operationalistic,” but his test need not be so; and, I suggest, is not so. The test, especially in its RTT form, may simply (perhaps most helpfully) be seen as a heuristic device, employing the only criterion we have – sophisticated and complex patterns of verbal and non-verbal behavior – to judge the presence of intelligence or existence of a person. The question of what constitutes or guarantees such presence is still open and debatable.18


7. RTT and the Problem of Other Minds

Dale Jacquette has noted a “recent trend among mechanists to disown the Turing Test ... ” (p. 72). Rather than surrendering the dream of intelligent machines, they alter the conception of intelligence and speak of types of intelligence that the Turing Test could not possibly measure. Jacquette does an admirable job of thwarting this attempted end run, then lays down a solemn warning.

But there is a price to be paid for giving up Turing’s Test. The fundamental idea of the test is that intelligence can be recognized in verbal and problem-solving behavior in conversation or question-and-answer interchange. If this idea is wrong, then there is no solution to solipsism. Then we can have no justification for believing that there are other human minds. (p. 72)

As stated, Jacquette’s argument is a blatant non sequitur. Perhaps he meant to say “intelligence can only be recognized in verbal ... behavior ....” And most likely “this idea” refers to the conviction behind the Turing Test that verbal communication is a reliable indicator of intelligence. In other words, I think Jacquette’s claim is that the Turing Test utilizes the only empirical evidence there is for intelligence, and abandoning it leaves nowhere to turn for rational assertion of intelligence, even in other human beings. But even in this reconstructed form Jacquette’s argument is an overstatement. Certainly a deaf, mute, and illiterate human being is still able to exhibit behavior that suggests intelligence. Strawson’s core properties undoubtedly include many noncognitive properties, and a sufficient dosage of these might override the absence or minimal demonstration of “verbal and problem-solving behavior.”

However, when adapted to accommodate the RTT, I believe that Jacquette’s warning has real bite. The recognition of mind and the judgment of personhood in other human beings can only be made on the basis of observation of a sufficient display of core person characteristics – the very observations that are utilized in the RTT. If this criterion is rejected as reason for calling DSA’s persons, then a very serious solipsism really does threaten.

Traditionally, the problem of other minds has been motivated by questions like, “How do I know that the human being interacting with me has a mind, and is not simply an automaton or machine?” Now, given the conceptual possibility of DSA’s, the question must be changed. Even if we know that the entity with which we interact is a machine, not a human being, we must still ask how we can tell if it has a mind or not. Heretofore it has been taken that association with a human body is sufficient, and perhaps darn near necessary, for behaviors to indicate the presence of a mind. But the possibility of DSA’s forces us to see the body requirement as arbitrary at best, so the problem of other minds arises anew.

I will close with the following claim: whatever criterion allows us to bypass the underdetermination of behavioral evidence and rationally assume that the behaviors of other human beings indicate that they are persons must equally allow us to bypass such underdetermination in the case of DSA’s and rationally assume that they, too, are persons. Any rejection of such a parallel is arbitrary and unjustified, given the nature of the problem of other minds; viz., that it is only overcome without solipsism by allowing rational bridging of the underdetermination gap from a certain level of cognition-indicating behavior to the belief in other minds. No relevant difference exists between the relevant behaviors of human beings and DSA’s to warrant acceptance of the former and rejection of (or even suspension of judgment regarding) the latter.

Unless one is willing to bite the biochauvinistic bullet and insist that it is a metaphysical necessity that only human beings are persons, there is no way to reject the RTT criterion for personhood and escape solipsism. For reasons I have already outlined, and others I do not have time or space to mention, I find biochauvinism to be a conceptually indefensible position. Since I am quite positive that solipsism is false, I am left with the fascinating – if somewhat unsettling – conclusion that the RTT presents a sufficient test, and the only test we can hope for, for MET at least – and perhaps also for MT, ET, and even OT.

In short, if Data is not a person, then I don’t know who is.19


Notes

1Descartes, p. 116.

2Freedman, p. 890.

3The possibility of constructing such neural nets out of inorganic materials, as in Data’s case, forces a distinction between two concepts that are often conflated: biochauvinism and neural chauvinism. The former designates a conviction that no inorganic entity could generate mental activity; the latter, that no non-neural entity could do so. Connectionism may well be teaching us that neural chauvinism is in fact justified. However, if neuronal systems sufficient to produce humanesque mental activity can be constructed from inorganic matter, then neural chauvinism is an independent concept from biochauvinism and neither entails nor even suggests it. In fact, if such inorganic neuronal systems are possible (and the connectionists are right), then biochauvinism is false. (I am assuming here that connectionism is not eliminativist and therefore is consistent with the concept of mental activity at all.)

4Another concept toward which the various Star Trek series embolden our intuitions is that of non-human biological persons – though, of course, lovers of C. S. Lewis and J. R. R. Tolkien have long been endeared to the idea!

5This question, of course, raises the more fundamental – and at least as debatable – question: what is it about human beings by virtue of which they hold such status? However, I will not concern myself directly with this larger question in the present paper. I am justified in this bracketing because I believe that a corollary to the thesis of this paper is that any plausible theory of what makes human beings persons will also qualify Data as a person. Or, to put the point contrapositively, any theory of persons that disqualifies Data as a person will be implausible for other reasons. Any theory sufficiently rich and broad to escape clear counterexample will also permit Data into personhood.

6All STNG television episodes © 1987-1994 by Paramount Pictures.

7Moody notes that “in his subsequent commentary Turing himself begins to treat the game as if it were a game in which a human tries to decide whether she is interacting with another human or a machine. It is interesting, however, that the transition to what is now called the Turing Test was not explicitly noted by Turing himself” (p. 77).

8In one STNG episode Data argues for rights of protection and self-determination for a group of machines called “Exocomps,” sheerly on the basis that they have acted in a self-preserving manner. While this action (involving rewriting of programming instructions to avoid destructive behavior) clearly demonstrates some humanesque cognitive capacities, my intuition is that the audience is never quite convinced to sympathize with the Exocomps as we do with Data in “Measure of a Man” or with Lal in “The Offspring.” I believe that a number of the ways in which Data is similar – and the Exocomps dissimilar – to humans accounts for these disparate intuitions. Several of these ways are delineated in this paper.

9One may wonder, though, if human appearance would prejudice judgment too much in favor of personhood – if the standard inclination would be to accept anyone who seemed human as a person. This may be correct; however, I believe that what it would take to “seem human” to this extent involves more than a human appearance. A store mannequin has a human appearance. However, no one with more than a cursory encounter with one would take it to be human. In fact, the next two revisions I suggest both address just this dimension – that a wide range of complex behavior patterns must be observed in order for one to be taken as a person. If the interrogator takes persons and humans to be coextensive, then she is much more likely to consider a human-looking object lacking the sufficient range of behaviors as neither human nor a person than to assume that the human appearance alone, or accompanied by only a few humanesque behaviors, qualifies the object as a human being and therefore a person. (There are, after all, mannequins and other dolls that move, “speak,” and even shake hands.)

10Cf. especially my revision of Jacquette’s argument in section 7 below.

11For further testimony to the closeness of these concepts, see the passages from English and Strawson referred to in Section 4 and its notes below.

12See Strawson, chapter 8, especially pp. 108ff.

13It is interesting to note at this point that most of the villagers in “Thine Own Self” turn out to be biochauvinists (see note 3 above). They revert to the Monster Hypothesis (or, perhaps more accurately, the Null Hypothesis) when Data’s internal makeup is revealed. His failure to exhibit the proper biological characteristics compromises the Ice Man Hypothesis in their estimations. However, not all villagers react this way. Some – including the one who came to know Data best – are not dissuaded. They are so overwhelmed by the display of core characteristics in their encounters with Data that they are able to treat biological makeup as a peripheral characteristic – even though they would have (no doubt) previously taken it as a core characteristic – and continue to think of Data as a person.

14This suggested revision makes the RTT similar to what Stevan Harnad has called the “Total Turing Test.” (“Minds, Machines, and Searle,” Journal of Experimental and Theoretical Artificial Intelligence 1 (1989). Quoted and discussed in Moody, pp. 99-100.) The Total Turing Test bases judgment of a computer’s cognitive capacities on the full range of capabilities the computer has, and not just its question and answer skills. But coming out of the closet would cause the test to exceed even the goals of the Total Turing Test, as I understand it. Harnad seems to suggest that the full range of human capabilities would be necessary for a machine to simulate human cognitive capacities on a scale so as to fool an interrogator. But, of course, any machine that could so emulate that full range would demonstrate not only cognitive capacities, but a complete enough subset of Strawson’s core properties to be taken not simply as a thinking thing, but as a person. This would especially be the case were the interchange with such a machine taking place in a setting in which the Android Hypothesis either was unavailable or had not been openly suggested to the interrogator, giving the Ice Man Hypothesis full opportunity to come to the fore. So all three suggested revisions work together in a very rich version of the test.

15Though, of course, in a world freed from the biochauvinism alluded to in section 3 above, even this machine could come out of the closet.

16I owe this illustration to Norman Lillegard (p. 30).

17This point should not be read as a suggestion that DSA’s would be functionalist machines. Data is almost certainly not a functionalist machine (though Lillegard assumes that he is – pp. 29ff), but rather a connectionist machine. The implications of this difference are subtle but important (see Sennett, pp. 198-200).

18Moody suggests that strong AI differs significantly from classical behaviorism in that it restricts the range of behavior relevant to mind – a view he dubs “textual I/O [input/output] behaviorism,” but which he acknowledges that the literature calls “Turing machine functionalism” (p. 83). So seen, strong AI is actually a narrower view than classical behaviorism. Rather than being consistent with non-behavioristic theories of mind, it is inconsistent with even garden variety behaviorism. Like Searle, Moody overlooks the critical role played by programming in the strong AI thesis, which sets it apart from behaviorism in several respects. This difference is reflected in the aforementioned fact that the literature opts for the term “Turing machine functionalism” rather than Moody’s suggested “textual I/O behaviorism.” Functionalism is not behaviorism.

19Thanks to the participants of the 1995 Louisiana Philosophy Convention and the 1996 meetings of the Southern Society for Philosophy and Psychology for very insightful and useful discussions of previous versions of this paper. Thanks also to my colleague Todd Furman for helpful comments, and to my former student Laura Miller, whose philosophy of mind research project at Palm Beach Atlantic College in the Spring of 1995 helped inspire several of the ideas in this paper.


Works Cited

Descartes, Rene. Discourse on Method, Part V. The Philosophical Works of Descartes, vol. I, translated by Elizabeth S. Haldane and G. R. T. Ross. Cambridge: Cambridge University Press, 1911.

English, Jane. “Abortion and the Concept of a Person.” Canadian Journal of Philosophy 5 (1975). Reprinted as “Abortion: Beyond the Personhood Argument” in Louis P. Pojman and Francis J. Beckwith, eds., The Abortion Controversy: A Reader. Boston: Jones and Bartlett, 1994; pp. 295-304. Citations are from the reprint.

Freedman, David H. “A Romance Blossoms Between Gray Matter and Silicon.” Science 265 (August 12, 1994), p. 890.

Hofstadter, Douglas R., and Daniel C. Dennett, eds. The Mind’s I: Fantasies and Reflections on Self and Soul. Toronto: Bantam Books, 1981.

Jacquette, Dale. “Who's Afraid of the Turing Test?” Behavior and Philosophy 21 (1993): 61-73.

Lillegard, Norman. “No Good News for Data.” Cross Currents 44 (1994): 28-42.

Moody, Todd C. Philosophy and Artificial Intelligence. Englewood Cliffs, NJ: Prentice Hall, 1993.

Searle, John. “Minds, Brains, and Programs.” The Behavioral and Brain Sciences 3 (1980). Reprinted in Hofstadter and Dennett, pp. 353-373. Citations are from the reprint.

Sennett, James. “Requiem for an Android? A Response to Lillegard.” Cross Currents 46.2 (1996): 195-215.

Strawson, P. F. Individuals: An Essay in Descriptive Metaphysics. Garden City, NY: Anchor Books, 1963.

Turing, Alan. “Computing Machinery and Intelligence,” Mind 59 (1950). Reprinted in Hofstadter and Dennett, pp. 53-67. Citations are from the reprint.

