Readings
Texts
- John Searle, "Minds, Brains, and Programs" (pdf, from last time)
- John Searle, "Watson Doesn't Know It Won on 'Jeopardy!'" (pdf, from last time)
- Margaret Boden, "Escaping from the Chinese Room" (from last time)
- John Searle, "Is the Brain a Digital Computer?" (from last time)
- Ned Block, "The Mind as the Software of the Brain" (section 4)
- Fred Dretske, "If You Can't Make One, You Don't Know How It Works"
- Fred Dretske, "Minds, Machines, and Money: What Really Explains Behavior"
Notes
- The Chinese Room: Criticisms and Replies (from last time)
- Boden's Reply (from last time)
- Searle's Argument (from last time)
- Block's Reply
- Dretske's Compass
Synopsis
Please note that we are officially one day behind schedule, hence the long reading list. We'll try to catch up in the next couple of days.
Today we took up various of the responses to Searle's Chinese Room Thought Experiment, so permit me to add to our discussion of it a bit.
In considering the above news report about the IBM computer Watson beating the legendary Ken Jennings at Jeopardy!, we must ask: does Watson understand the questions it is answering? This is the challenge Searle presents with the justly famous Chinese Room Thought Experiment.
I'm grateful to a former student of Minds and Machines for finding and recommending the following (3+ minute) segment from the BBC: The Chinese Room Experiment - The Hunt for AI - BBC, which provides a neat summary of the challenge the Chinese Room Thought Experiment poses.
The genius of Searle's Chinese Room Thought Experiment is to make vivid a point we can well appreciate from our work constructing Turing Machines: No function computable by a Turing Machine could be cognitive, since Turing Machines operate solely by the rule-governed manipulation of strings of symbols. But nothing like that, so the argument goes, could exhibit original intentionality.
Searle's Chinese Room Thought Experiment presents us with a fairly serious challenge: How can intentionality emerge from the rule-governed manipulation of strings of symbols? Searle's Chinese Room Thought Experiment puts Searle himself in the position of being a rule-governed manipulator of strings of (Chinese) symbols, none of which Searle understands.
The fact that neither Searle-in-the-Chinese-Room nor any combination of Searle with the Chinese Room understands Chinese even while passing the Turing Test in Chinese is a serious matter indeed. For if Brentano is correct and intentionality is the mark of the mental in such a way that cognitive functions are intentional, yet no intentional function is Turing Machine Computable, then no cognitive function is Turing Machine Computable, either. Since there are only countably infinitely many Turing Machine Computable functions compared to the uncountably infinite totality of functions, it is entirely possible that cognitive functions escape Turing Machine Computability. The problem is not just that the Turing Test is too weak--other, less complicated arguments can be given for rejecting the thesis that the perfect imitation of intelligence is intelligence. Rather, the Chinese Room Thought Experiment seems to show that cognitive functions, distinguishable from other functions by their intentionality, are not Turing Machine Computable at all. In light of the Church-Turing Thesis and Dretske's Dictum, the most we can say is that if we are meat machines, we cannot understand our own minds. We cannot build minds unless we duplicate exactly what we already have using exactly the same materials and structures that evolved in us. Such duplication is not entirely without its merits, but it gets us no nearer to understanding minds than we were when we began. Moreover, it suggests that there is something crucial about the particular substance and structure of us that makes it impossible to replicate cognitive functionality in, say, silicon.
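Our exercises constructing Turing Machines make it easy to see what "rule-governed manipulation of strings of symbols" amounts to in practice. Here is a minimal sketch in Python (the rule table is invented purely for illustration) of a Turing Machine computing the unary successor function; notice that nothing in the mechanism ever consults what the symbols mean:

```python
# A minimal Turing Machine sketch (illustrative only; the rule table is
# made up for this example). The machine never consults what its symbols
# mean: it just looks up (state, symbol) pairs and follows the rules.

def run_tm(rules, tape, state="start", head=0, blank="_", halt="halt"):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rules for unary successor: scan right past the 1s, append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),   # keep scanning right
    ("start", "_"): ("1", "R", "halt"),    # write a 1 on the first blank
}

print(run_tm(rules, "111"))  # prints "1111"
```

The machine's entire competence is exhausted by its rule table; it is precisely this sort of purely syntactic engine that Searle argues can never, by itself, exhibit original intentionality.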
To be sure, the very simplicity of Searle's Chinese Room Thought Experiment lends it considerable force against the proposition that any effective procedure relying solely on the rule-governed manipulation of strings of symbols could exhibit original intentionality. Nevertheless, in proposing the Chinese Room Thought Experiment, Searle met not a few counter-arguments, as you may recall from our discussion today.
Without rehearsing our entire discussion, then, we have:
- The Systems Reply
- The Robot Reply
- The Brain Simulator Reply
- The Combination Reply
- The Other Minds Reply
- The Many Mansions Reply
Focusing on, first, the ideas that motivated and made prima facie plausible each reply to the Chinese Room Thought Experiment and, second, the reasons Searle gave for rejecting each reply upon careful consideration goes a considerable distance toward grasping Searle's main point in constructing the Chinese Room Thought Experiment. The Chinese Room passes, by hypothesis, the Turing Test, yet it exhibits no original intentionality (Searle does not in the least bit understand Chinese) because the rule-governed manipulation of strings of symbols, much like what a Turing Machine does, is a purely syntactic operation carried out irrespective of the semantics of the symbols so manipulated. Since their semantics is no part of the symbol manipulations performed by Searle in the Chinese Room, the room cannot exhibit original intentionality. The symbols only mean something to the native Chinese speakers outside the room, whose own original intentionality, in understanding what the symbols mean, confers upon them derived intentionality.
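To see how stark the gap between syntax and semantics is, consider a toy version of the Chinese Room as a lookup procedure. In the following sketch (the rule book is wholly made up, and the English glosses are provided only for the reader), the program, like Searle in the room, matches shapes of input strings to shapes of output strings; the glosses play no role whatsoever in the computation:

```python
# A toy "Chinese Room" rule book. The lookup is pure syntax: input shapes
# are matched to output shapes. The English glosses in the comments are
# for the reader only; the program never uses them.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(squiggles: str) -> str:
    # Like Searle with his rule book: no Chinese required, only matching.
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # default: "Please repeat."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

Deleting the glosses changes nothing about what the program does; they correspond to the derived intentionality conferred on the symbols by the speakers outside the room.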
Now, a curious feature of Searle's challenge to traditional artificial intelligence is his view that intentionality is fundamentally a biological phenomenon. That is, Searle insists that our brains have the capacity to underwrite intentional states because they have specific causal features which depend, ultimately, on the particular stuff out of which the brain is made.
One might imagine resurrecting Putnam's Multiple Realizability Argument against Type-Physicalism. That is, one might argue that Searle's emphasis on the particular causal features of our neurobiology makes him into something of a Type-Physicalist, so that nothing which fails to have our special and specific neurobiology can have mental states since mental states are intentional yet intentionality is biological in a uniquely human sense.
This criticism would not be quite right, however, nor is it part of Boden's Reply. In particular, Searle can and does admit that mental states are multiply realizable. Crucially for Searle, though, the other stuff that realizes mental states must have mostly the same causal features as the biological stuff which underwrites our mental states. So intelligence can be realized in other substances, but those substances must be similar in specific causal respects to our substance. It is not the case that intelligence can be constructed out of any old thing, but it's also not the case that we are the only thing that can be intelligent. A puzzle for Searle is just how similar the other stuff must be to our stuff to have enough of the same causal features to secure intentionality.
A better question, and the one Boden asks, is why the causal capacities Searle thinks underwrite intentionality are features of mere substance as opposed to features of a substance having a specific structure. After all, it seems that the causal capacities a thing has are more a matter of its structure than the material out of which it is made. Try playing billiards with cubes instead of spheres. Even if the cubes are made of the same stuff as the spheres, you won't have much of a game.
In short, it's fine for Searle to say
The brain, as far as its intrinsic operations are concerned, does no information processing. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. (John Searle, "Is the Brain a Digital Computer?")
Yet even if we agree that the rule-governed manipulation of strings of symbols will never yield intentionality, we still need an explanation of why processes other than neurobiological ones cannot yield it.
Before turning to their arguments, though, I should like to briefly revisit the puzzling philosophical notion of intentionality. Here is a passage from the first chapter (pp. 5-8) of John Haugeland's "Mind Design II: Philosophy, Psychology, Artificial Intelligence" (Cambridge, Mass.: MIT Press, 1997) which might help:
"Intentionality", said Franz Brentano (1874/1973), "is the mark of the mental". By this he meant that everything mental has intentionality, and nothing else does (except in a derivative or second-hand way), and, finally, that this fact is the definition of the mental. 'Intentional' is used here in a medieval sense that harks back to the original Latin meaning of "stretching toward" something; it is not limited to things like plans and purposes, but applies to all kinds of mental acts. More specifically, intentionality is the character of one thing being "of" or "about" something else, for instance by representing it, describing it, referring to it, aiming at it, and so on. Thus, intending in the narrower modern sense (planning) is also intentional in Brentano's broader and older sense but much else is as well, such as believing, wanting, remembering, imagining, fearing, and the like.
Intentionality is peculiar and perplexing. It looks on the face of it to be a relation between two things. My belief that Cairo is hot is intentional because it is about Cairo (and/or its being hot). That which an intentional act or state is about (Cairo or its being hot, say) is called its intentional object. (It is this intentional object that the intentional state stretches toward.) Likewise, my desire for a certain shirt, my imagining a party on a certain date, my fear of dogs in general, would be "about"--that is, have as their intentional objects--that shirt, a party on that date, and dogs in general. Indeed, having an object in this way is another way of explaining intentionality; and such having seems to be a relation, namely between the state and its object.
But, if it's a relation, it's a relation like no other. Being-inside-of is a typical relation. Now notice this: if it is a fact about one thing that it is inside of another, then not only that first thing, but also the second has to exist: X cannot be inside of Y, or indeed be related to Y in any other way, if Y does not exist. This is true of relations quite generally; but it is not true of intentionality. I can perfectly well imagine a party on a certain date, and also have beliefs, desires and fears about it, even though there is (was, will be) no such party. Of course, those beliefs would be false, and those hopes and fears unfulfilled; but they would be intentional--be about, or "have", those objects--all the same.
It is this puzzling ability to have something as an object, whether or not that something actually exists, that caught Brentano's attention. Brentano was no materialist: he thought that mental phenomena were one kind of entity, and material or physical phenomena were a completely different kind. And he could not see how any merely material or physical thing could be in fact related to another, if the latter didn't exist; yet every mental state (belief, desire, and so on) has this possibility. So intentionality is the definitive mark of the mental...
Many material things that aren't intentional systems are nevertheless about other things - including, sometimes, things that don't exist. Written sentences and stories, for instance, are in some sense material; yet they are often about fictional characters and events. Even pictures and maps can represent nonexistent scenes and places. Of course, Brentano knew this... But [he] can say that this sort of intentionality is only derivative. Here's the idea: sentence inscriptions--ink marks on a page, say--are only about anything because we (or other intelligent users) mean them that way. Their intentionality is second-hand, borrowed or derived from the intentionality that those users already have.
So, a sentence like "Santa lives at the North Pole", or a picture of him or a map of his travels, can be about Santa (who, alas, doesn't exist), but only because we can think that he lives there, and imagine what he looks like and where he goes. It's really our intentionality that these artifacts have, second-hand, because we use them to express it. Our intentionality itself, on the other hand, cannot likewise be derivative: it must be original. (Original, here, just means not derivative, not borrowed from somewhere else. If there is any intentionality at all, at least some of it must be original; it can't all be derivative.)
The problem for mind design is that artificial intelligence systems, like sentences and pictures, are also artifacts. So it can seem that their intentionality too must always be derivative--borrowed from their designers or users, presumably--and never original. Yet, if the project of designing and building a system with a mind of its own is ever really to succeed, then it must be possible for an artificial system to have genuine original intentionality, just as we do. Is that possible?
Think again about people and sentences with their original and derivative intentionality, respectively. What's the reason for that difference? Is it really that sentences are artifacts, whereas people are not, or might it be something else? Here's another candidate. Sentences don't do anything with what they mean: they never pursue goals, draw conclusions, make plans, answer questions, let alone care whether they are right or wrong about the world; they just sit there, utterly inert and heedless. A person, by contrast, relies on what he or she believes and wants in order to make sensible choices and act efficiently; and this entails, in turn, an ongoing concern about whether those beliefs are really true, those goals really beneficial, and so on. In other words, real beliefs and desires are integrally involved in a rational, active existence intelligently engaged with its environment. Maybe this active, rational engagement is more pertinent to whether the intentionality is original or not than is any question of natural or artificial origin.
I find Haugeland's discussion of intentionality illuminating, with the necessary caveat that intentionality is such a fundamental notion that virtually any discussion of it will involve a certain amount of handwaving. It is, to be sure, a slippery concept, as any essentially foundational notion no doubt will prove to be.
Let us return now to Boden's reply to Searle's Chinese Room Thought Experiment so as to set the stage for Searle's subsequent analysis of the brain. Even if we do not endorse Boden's argument that the Chinese Room qua Robot exhibits some minimal intentionality, it is much harder to dismiss her challenge to Searle's position that the brain is an organ which has evolved to realize intentional states. Thus, for Searle, biochemical processes underwrite intentionality in a way that the rule-governed manipulation of strings of symbols cannot.
Boden's challenge to this view is that what is important about the brain is not its biochemical processes per se but what those biochemical processes do in transmitting information. In a manner of speaking, Searle recapitulates the Type-Physicalist's error (at least, according to Putnam's Multiple Realizability Argument). If intentionality is the mark of the mental, as Brentano reminds us, then restricting intentional states to our peculiar biochemical processes restricts minds to only those things that enjoy similar composition.
To be sure, that is not all of Boden's response to Searle. She argues that, properly understood, the Chinese Room as Searle conceives it necessarily includes some, albeit minimal, original intentionality.
Focus, though, on Boden's criticisms of Searle, which begin by noting Searle's apparently peculiar insistence that, while the brain with all its masses of interconnected neurons provides original intentionality (and, thus, cognition itself), nothing but the brain--not even something very like the brain, recalling the Brain Simulator Reply Searle discusses and Block revives--does. Thus Searle's challenge can be seen as unduly anthropocentric, in something of an echo of the Multiple Realizability Argument.
Searle, naturally, wants to meet the charge of excessive anthropocentrism. To do this he carefully clarifies and defends his position, but see our notes for more on his argument. The broad outline of his argument is that syntax is not intrinsic to physics, it is only read into physics by observers who confer at most derived intentionality by their own original intentionality. Yet if syntax is not intrinsic to physics, syntax enjoys no causal powers beyond those rendered by observers. As a result, the rule-governed manipulation of strings of symbols, in whatever physical way it is realized, cannot bear the causal relationships it must to exhibit original intentionality. Viewing the brain computationally, then, is only possible if one freely commits the homunculus fallacy--in effect begging the question by hypothesizing intentionality-rich sub-minds, if you will. Further, while we can view the brain as a computer in the same sense in which nearly any physical process can be harnessed to suit computational needs, it is not intrinsically an information processor, which it would have to be to cash the check computationalism (the view assumed by virtually the entire Cognitive Science community that the brain is a natural kind of computer) has written. In this way, Searle's response to Boden can be taken as a criticism of computationalism itself. And it is an important one. If something is a computer only insofar as it is viewed as such, then computers (and, with them, computational states) are not natural kinds--they are not part of nature at all.
We will take up Searle's responses to Boden next time, however.
We continued today by contrasting Boden's and Block's responses to the Chinese Room Thought Experiment--her minimal-intentionality reply and his resurrection of the Systems Reply--with Dretske's response, so as to set the latter out. To be sure, Dretske's is an intriguing way to respond to the Chinese Room Thought Experiment. Instead of objecting to the experiment by arguing (as Boden and Block do) that it doesn't show what Searle takes it to show, concede that it does, but argue that this simply means we haven't got the right raw materials out of which to build a mind!
That is to say, it is important to recognize that Boden's criticism--that the Chinese Room necessarily evinces some intentionality, however rudimentary--and Block's criticism--that, properly conceived, the Systems Reply is correct insofar as Searle-in-the-Chinese-Room is an English-understanding system implementing a Chinese-understanding system without being aware of that fact--are both critical responses to the Chinese Room Thought Experiment. Thus, they both argue that the thought experiment does not show what Searle takes it to show, namely that intentionality cannot arise from the rule-governed manipulation of strings of symbols or, equivalently, that a machine can appear to understand without actually understanding what it is doing. They do this by arguing that intentionality can so arise (Boden) or that understanding and awareness of understanding are two different things (Block), contra Searle.
Another approach is to grant the Chinese Room Thought Experiment in its entirety. Accept Searle's claims about what the Chinese Room Thought Experiment is supposed to show, but take it as a challenge to be met. If intentionality cannot arise from the rule-governed manipulation of strings of symbols, what must we add to computation to secure intentionality? How, in other words, might we go about grounding the symbols used in those rule-governed manipulations that constitute computation? This is Dretske's approach, which, as you can imagine, has made him popular reading in roboticist circles.
The task of laying the groundwork for understanding Dretske's answer to the Chinese Room Thought Experiment is somewhat non-trivial. We must distinguish between 'intentionally'--as in, acting with a goal or purpose--'intentionality'--as in, the aboutness or directedness sentences (derivatively) and mental states (originally) enjoy--and 'intensionality', which refers to the failure of substitutivity salva veritate of co-referring terms or co-extensional predicates in linguistic contexts (such contexts are sometimes called 'opaque contexts' when they exhibit this failure). Granted, intensionality-with-an-s is a technical notion.
To frame Dretske's response, however, let us briefly revisit Dretske's Dictum and our discussion of it:
You don't understand it if you don't know how to build it.
Now, perhaps, is as good a time as any to review and clarify this assertion. Here is what Dretske himself says in explaining it (from "If You Can't Make One, You Don't Know How It Works"):
There are things I believe that I cannot say - at least not in such a way that they come out true. The title of this essay is a case in point. I really do believe that, in the relevant sense of all the relevant words, if you can't make one, you don't know how it works. The trouble is I do not know how to specify the relevant sense of all the relevant words.
I know, for instance, that you can understand how something works and, for a variety of reasons, still not be able to build one. The raw materials are not available. You cannot afford them. You are too clumsy or not strong enough. The police will not let you.
I also know that you may be able to make one and still not know how it works. You do not know how the parts work. I can solder a snaggle to a radzak, and this is all it takes to make a gizmo, but if I do not know what snaggles and radzaks are, or how they work, making one is not going to tell me much about what a gizmo is. My son once assembled a television set from a kit by carefully following the instruction manual. Understanding next to nothing about electricity, though, assembling one gave him no idea of how television worked.
I am not, however, suggesting that being able to build one is sufficient for knowing how it works. Only necessary. And I do not much care about whether you can actually put one together. It is enough if you know how one is put together. But, as I said, I do not know how to make all the right qualifications. So I will not try. All I mean to suggest by my provocative title is something about the spirit of philosophical naturalism. It is motivated by a constructivist's model of understanding. It embodies something like an engineer's ideal, a designer's vision, of what it takes to really know how something works. You need a blueprint, a recipe, an instruction manual, a program. This goes for the mind as well as any other contraption. If you want to know what intelligence is, or what it takes to have a thought, you need a recipe for creating intelligence or assembling a thought (or a thinker of thoughts) out of parts you already understand.
In speaking of parts one already understands, I mean, of course, parts that do not already possess the capacity or feature one follows the recipe to create. One cannot have a recipe for cake that lists a cake, not even a small cake, as an ingredient. One can, I suppose, make a big cake out of small cakes, but recipes of this sort will not help one understand what a cake is (although it might help one understand what a big cake is). As a boy, I once tried to make fudge by melting fudge in a frying pan. All I succeeded in doing was ruining the pan. Don't ask me what I was trying to do - change the shape of the candy, I suppose. There are perfectly respectable recipes for cookies that list candy (e.g., gumdrops) as an ingredient, but one cannot have a recipe for candy that lists candy as an ingredient. At least it will not be a recipe that tells you how to make candy or helps you understand what candy is. The same is true of minds. That is why a recipe for thought cannot have interpretive attitudes or explanatory stances among the eligible ingredients - not even the attitudes and stances of others. That is like making candy out of candy - in this case, one person's candy out of another person's candy. You can do it, but you still will not know how to make candy or what candy is.
In comparing a mind to candy and television sets I do not mean to suggest that minds are the sort of thing that can be assembled in your basement or in the kitchen. There are things, including things one fully understands, things one knows how to make, that cannot be assembled that way. Try making Rembrandts or $100 bills in your basement. What you produce may look genuine, it may pass as authentic, but it will not be the real thing. You have to be the right person, occupy the right office, or possess the appropriate legal authority in order to make certain things. There are recipes for making money and Rembrandts, and knowing these recipes is part of understanding what money and Rembrandts are, but these are not recipes you and I can use. Some recipes require a special cook.
This is one (but only one) of the reasons it is wrong to say, as I did in the title, that if you cannot make one, you do not know how it works. It would be better to say, as I did earlier, that if you do not know how to make one, or know how one is made, you do not really understand how it works.
Perhaps a simpler way to make the same point is that just as you can drive a car without understanding how a car works, you can use your mind without understanding it. To understand the mind, just as in understanding the car, you have to understand how it's built--how, that is, all the parts and pieces fit together and function in such a way as to get you down the road (both car and mind!). You don't need actually to be able to build one, of course, but you do need to know how it's built to truly understand it.
This, as we have said, is something of an engineering constraint, one which has largely been ignored by psychology and psychiatry. Its value is recognized, however, by cognitive psychologists and cognitive neuroscientists. We may see them as friends, even if, with an unappealing hubris, they are often quick to sweep complicated problems under the rug while claiming to have solved them.
Now, it is important to emphasize (again) that Dretske has a very different strategy than Boden or Block. Where Boden and Block put their efforts into showing how the Chinese Room Thought Experiment does not show what Searle thinks it shows--in each case by arguing that we do find intentionality in the Chinese Room even if we might have to look very hard for it--Dretske's response grants Searle's basic point: We won't have created artificial intelligence if we go about it along anything like the lines of the Chinese Room construction.
Rather, we have to take the challenge of figuring out how to efficiently compute cognitive (that is to say, by Brentano's Thesis, intentional) functions seriously. (This is why I say that Dretske has something of an engineering approach to philosophy.) That is, Dretske grants that the Chinese Room is a problem, but it is a problem he thinks we can solve. Dretske's strategy is to concede Searle's point in the Chinese Room Thought Experiment but to argue that original intentionality is neither rare nor special.
First, we need to approach this problem as an engineer might and figure out how to build a mind out of simpler elements we already understand. Since original intentionality is a unique property of minds--not to mention being a uniquely problematic feature for those wishing to build minds--it would be helpful if the elements of our design exhibited original intentionality themselves, giving us something of a leg-up in our project.
Notice that building a mind out of elements having original intentionality is not itself a problem for understanding the mind, since our goal is understanding the mind, not understanding original intentionality. Along the way, of course, we will come to a better understanding of intentionality if all goes as it should.
Yet is it especially easy to find basic elements that exhibit original intentionality?
Dretske thinks so; it is as easy as going to the local hardware store to buy a compass. He thinks original intentionality is woven into the fabric of the universe in virtue of the causal relation. The lowly compass, indicating as it does magnetic north in virtue of its causal relationship to the earth's magnetic field, and thereby being about magnetic north (in an admittedly attenuated sense), exhibits original intentionality. To be sure, this is not the full-blown, tough-nut-to-crack intentionality our cognitive functions enjoy, yet it is enough, Dretske thinks, to respond to Searle's skeptical challenge.
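Dretske's idea can be modeled, very loosely, in code. The sketch below (the numbers and function names are hypothetical, my own invention rather than anything drawn from Dretske's text) treats indication as reliable causal covariation: the needle's reading carries information about the field because the field causes the reading, whether or not anyone interprets it:

```python
# A loose sketch of Dretske-style natural indication (hypothetical numbers).
# The needle's angle causally covaries with the local magnetic field; on
# Dretske's view, that reliable causal covariation is what lets the reading
# be "about" magnetic north, interpreters or no interpreters.

import random

def field_direction() -> float:
    """Direction of magnetic north, in degrees (the world's contribution)."""
    return random.uniform(0.0, 360.0)

def needle_angle(field: float, jitter: float = 2.0) -> float:
    """The needle tracks the field, give or take mechanical noise."""
    return (field + random.uniform(-jitter, jitter)) % 360.0

field = field_direction()
reading = needle_angle(field)
# The reading indicates the field because the field caused it, not because
# anyone assigns it a meaning.
print(f"field = {field:.1f} degrees, needle reads {reading:.1f} degrees")
```

On this picture the compass's aboutness, however attenuated, is original rather than derived: it does not borrow its indication from any user's interpretation.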
The particulars of Dretske's argument that the compass exhibits original intentionality can be found in the handout on Dretske's Argument. Suffice it to say that Dretske has an intriguing argument which bears further scrutiny.
We will continue with our discussion of Dretske's response to Searle next time.