Robot Intentionality I: The Chinese Room

Minds and Machines, Spring 2026

Readings

Texts

  • John Searle, "Minds, Brains, and Programs" (pdf; optional peer commentary after p. 424)

Notes

  • The Chinese Room

Lectures

John Searle: Strong Artificial Intelligence

John Searle: The Chinese Room Argument

Synopsis

Let us briefly summarize the optimistic case we have made for meeting the engineering obligation posed by Dretske's Dictum--a case which also, by the way, underwrites the assumptions made by Cognitive Science today.

Turing proved that there exists a Universal Turing Machine, a Turing Machine which computes any function computable by any Turing Machine. This result, which is sometimes called “Turing's Theorem”, demonstrates the possibility of generalized computing machines. The modern digital computer is one manifestation of the practical applicability of Turing's Theorem developed along the lines specified by the brilliant John von Neumann, among others.

It also turns out that every effective procedure we conjure to compute functions computes the same functions as Turing Machines, which strongly suggests that Turing Machines can be used in place of other effective procedures. It is, of course, a strong suggestion only. We have no way of knowing whether some clever person will invent an effective procedure which computes a function that cannot be computed by a Turing Machine. Nevertheless, the Church-Turing Thesis, as it is called, gives us hope that any procedure which can be boiled down to basic, finite, determinate, and repeatable operations can also be implemented by a Universal Turing Machine.
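The two results above can be made concrete in a few lines of code. Below is a minimal Turing Machine interpreter in Python (a sketch; the function and machine names are my own, not standard): the machine itself is just data, a transition table handed to the interpreter, which is the essential trick behind Turing's Universal Machine.

```python
# A minimal Turing Machine interpreter. The machine is just data (a
# transition table) that the interpreter executes step by step -- the
# essential idea behind Turing's Universal Machine.

def run_tm(table, tape, state="q0", halt="halt", blank="_"):
    """Run the TM described by `table` on `tape` (a string of symbols).

    table maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right).
    """
    tape = list(tape)
    head = 0
    while state != halt:
        symbol = tape[head] if 0 <= head < len(tape) else blank
        new_symbol, move, state = table[(state, symbol)]
        # Grow the tape as needed so the head never falls off either end.
        if head >= len(tape):
            tape.append(blank)
        elif head < 0:
            tape.insert(0, blank)
            head = 0
        tape[head] = new_symbol
        head += move
    return "".join(tape).strip(blank)

# A tiny machine: unary successor. Scan right over the strokes, then
# write one more stroke and halt.
successor = {
    ("q0", "1"): ("1", +1, "q0"),    # skip existing strokes
    ("q0", "_"): ("1", +1, "halt"),  # append one stroke and halt
}

print(run_tm(successor, "111"))  # prints 1111 (unary 3 -> 4)
```

Any machine expressible as such a table can be fed to the same interpreter, which is the sense in which one machine can compute whatever any other machine computes.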

If the operations of the human brain can be similarly boiled down, then it ought to be possible to implement those operations on a Universal Turing Machine. After all, it is well-documented that neurons are complicated kinds of switches performing in unison various specialized tasks. Prima facie, there is no reason Turing Machines could not accomplish the same tasks, perhaps even better than their biological counterparts.
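The claim that neurons are "complicated kinds of switches" traces to McCulloch and Pitts, who showed that idealized threshold neurons suffice for Boolean operations. A toy sketch in Python (the weights and thresholds are illustrative choices, not biological data):

```python
# An idealized McCulloch-Pitts neuron: fire (output 1) exactly when the
# weighted sum of inputs reaches the threshold. The weights and
# thresholds below are illustrative, not biological measurements.

def neuron(weights, threshold):
    def fire(*inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

AND = neuron([1, 1], threshold=2)   # fires only if both inputs fire
OR  = neuron([1, 1], threshold=1)   # fires if either input fires
NOT = neuron([-1],   threshold=0)   # an inhibitory input suppresses firing

# Networks of such switches can compute any Boolean function -- e.g. XOR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # prints [0, 1, 1, 0]
```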

Eagerly adopting these results, the contemporary cognitive sciences as a whole have assumed not merely functionalism--the view that the mind is fundamentally a result of what the brain is doing--but computationalism--the view that all of the cognitive functions the brain implements are ultimately efficiently computable functions. This is a startlingly narrow and specific assumption, to be sure.

Yet the question--are cognitive functions efficiently Turing Machine computable?--clearly presupposes an answer to a prior question:

What are cognitive functions?

Enter the philosopher of psychology Franz Brentano (1838-1917). In his 1874 book Psychology from an Empirical Standpoint, Brentano sought to distinguish mental phenomena:

Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation, something is presented, in judgment something is affirmed or denied, in love loved, in hate hated, in desire desired and so on.

This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.

From Brentano we get the thesis and the slogan,

Intentionality is the mark of the mental.

These passages are admittedly confusing. To sort out intentionality, first note that the term “intentionality” is a philosophical term-of-art. It is not to be confused with intending in the sense of “She intended to hurt his feelings” or “It was her intention to hurt his feelings.”

Second, contrast the statement

The cat is in the cat-tree.

with the belief-ascription,

Ashley believes that the cat is in the cat-tree.

and both of these with

The state-of-affairs of the cat's being in the cat-tree.

The statement and the belief are about or represent the state-of-affairs of the cat's being in the cat-tree. That is, statements and beliefs are intentional insofar as they are about, reach out to, or are directed upon, objects and the relations into which they (the objects) enter--the aforementioned states-of-affairs. Absent someone who understands the statement, “the cat is in the cat-tree”, of course, the statement is just a meaningless string of symbols or marks. Thus written and spoken sentences (utterances, generally) have at most derived intentionality. Beliefs and other mental states, however, have original intentionality. There is nothing further required for mental states to be intentional than that they be mental states.

Brentano's thesis recast in our terms, then, is the claim that a function is cognitive only if it exhibits original intentionality, where original intentionality is understood to be the relationship the function has in representing the objects in the world and their states-of-affairs. Intentionality, however, is an extremely puzzling property. Borrowing from--and supplementing, just a bit--Michael Thau's excellent Consciousness and Cognition, four paradoxes emerge from intentionality. Although the same paradoxes trouble other cognitive functions if Brentano is right, let us cast them as problems for belief specifically.

  1. Beliefs can represent in absentia. How do we account for my believing that extraterrestrial aliens are responsible for UFO sightings if there are no extraterrestrial aliens? In that case I am mistaken, of course. But even mistaken, if intentionality is a relation between a belief and the content of the belief, or the state-of-affairs the belief represents, then how can my belief have any content if there are no extraterrestrial aliens? Beliefs are representational, even when there is no thing being represented. Yet if representation is a relation, there must be something to stand in the relation. This is what Brentano is talking about when he uses the apparently absurd phrase, the 'inexistence of an object'.
  2. Beliefs can represent indeterminately. Contrast my belief that the cat is in the cat-tree with my belief that some cat or other is in the cat-tree. In the first case, my belief is about a specific, and specifiable, animal. In the second case, my belief is about some specific yet un-specifiable or indeterminate animal. Or consider another cognitive function, my desire to own a cat. The content of my desire is not even a specific cat, but it is some cat. Again, if representation is a relation, there must be some specific, determinate thing that stands in the relation.
  3. Beliefs can represent differentially. My belief that Frank has a degree in psychology is perfectly consistent with my belief that the manager of B&J's Pizza does not have a degree, despite the fact that Frank is the manager of B&J's Pizza. Thus my beliefs represent the same object, Frank, in different--in this case, even incompatible--ways.
  4. Beliefs can represent mistakenly. My belief that there is a horse in the horse-trailer represents the state-of-affairs of a horse's being in the horse-trailer even if the furry brown ear I glimpsed through the dusty window, and which caused me to believe that there is a horse in the horse-trailer, is in fact attached to a cow. My belief, in short, represents a horse, but the object so represented is not a horse. It is a cow.

A theory of intentionality has the difficult task of making sense of these apparent paradoxes. Nevertheless, Brentano's thesis boils down to the claim that cognitive functions exhibit original intentionality. That is, somewhat loosely speaking, they are representational without requiring interpretation, as a sentence, a drawing, or even a picture would.

With Brentano's Thesis in hand, we next took up the first of a series of skeptical challenges to computationalism: Searle's Chinese Room Thought Experiment.

The genius of Searle's Chinese Room Thought Experiment is to make vivid a point we can well appreciate from our work constructing Turing Machines: No function computable by a Turing Machine could be cognitive, since Turing Machines operate by the rule-governed manipulation of strings of symbols. But nothing like that, so the argument goes, could exhibit understanding since it is devoid of original (uninterpreted) intentionality.

To prove the point Searle puts himself in the machine's position of being programmed to pass the Turing Test and asks, what would I understand in this context?

Searle imagines himself in a locked room where he is given pages with Chinese writing on them. He does not know Chinese. He does not even recognize the writing as Chinese per se. To him, these are meaningless squiggles. But he also has a rule-book, written in English, which dictates just how he should group the Chinese pages he has with any additional Chinese pages he might be given. The rules in the rule-book are purely formal. They tell him that a page with squiggles of this sort should be grouped with a page with squiggles of that sort but not with squiggles of the other sort. The new groupings mean no more to Searle than the original ordering. It's all just symbol-play, so far as he is concerned. Still, the rule-book is very good. To the Chinese-speaker reading the Searle-processed pages outside the room, whatever is in the room is being posed questions in Chinese and is answering them quite satisfactorily, also in Chinese.

The analogy, of course, is that a machine is in exactly the same position as Searle.

The machine's rulebook (or program), assiduously followed as computers will do, can get it to pass the Turing Test without the least smidgen of understanding of the questions being answered or even that it is being tested in the first place.
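The purely formal character of the rule-book can be made vivid in code. Below, a hypothetical "rule-book" is nothing but a table pairing input strings with output strings; the program matches shapes and never consults meanings (the entries are invented placeholders, not real Chinese):

```python
# The rule-book as pure syntax: a table pairing input shapes with output
# shapes. The program compares symbols, never meanings -- to it, the
# questions are just "squiggles of this sort". (The entries below are
# invented placeholders standing in for Chinese characters.)

RULE_BOOK = {
    "squiggle-squoggle?": "squoggle-squiggle.",
    "wiggle-waggle?":     "waggle-wiggle.",
}

def chinese_room(page):
    # Purely formal matching: group this shape with that shape.
    # Unknown shapes get a default scribble back.
    return RULE_BOOK.get(page, "squiggle.")

print(chinese_room("squiggle-squoggle?"))  # prints squoggle-squiggle.
```

To a reader outside the room the exchange may look fluent; inside, nothing consults a meaning at any point, which is exactly Searle's complaint.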

Moreover, if Searle is correct, no amount of redesigning will ever result in a machine which understands what it is doing, since no matter how clever or complicated the rule-book, it is still just a rule-book. Yet if a machine cannot, in principle, understand what it is doing, then it cannot be intelligent.

Searle's Argument

  1. If it is possible for computers to be intelligent, then computers can understand what it is that they are doing.
  2. If computers operate according to the rule-governed manipulation of strings of symbols, then computers cannot understand what it is that they are doing.
  3. Computers operate according to the rule-governed manipulation of strings of symbols.
  ∴ 4. Computers cannot understand what it is that they are doing. (2 & 3)
  ∴ 5. It is not possible for computers to be intelligent. (1 & 4)
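The validity of the argument is itself a matter of form, and so can be checked mechanically--an irony worth savoring, since the check is pure symbol manipulation. A brute-force truth-table check in Python, with I for "computers can be intelligent", U for "computers can understand what they are doing", and R for "computers operate by rule-governed symbol manipulation":

```python
from itertools import product

# Truth-table check that Searle's argument is formally valid.
#   I: computers can be intelligent
#   U: computers can understand what they are doing
#   R: computers operate by rule-governed symbol manipulation
# Premises: I -> U,  R -> not U,  R.   Conclusion: not I.

def implies(p, q):
    return (not p) or q

valid = all(
    (not i)                                          # conclusion holds...
    for i, u, r in product([True, False], repeat=3)
    if implies(i, u) and implies(r, not u) and r     # ...whenever all premises do
)
print(valid)  # prints True
```

Validity, of course, is the easy part; the controversy is over whether premise 2 is true.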

The fact that neither Searle-in-the-Chinese-Room nor any combination of Searle with the Chinese room understands Chinese even while passing the Turing Test in Chinese is a serious matter indeed. We cannot build machines with minds as computationalism proposes we should be able to do unless we duplicate exactly what we already have using exactly the same materials and structures that evolved in us. Such duplication is not entirely without its merits, but it gets us no nearer to artificial intelligence than we were when we began. Moreover, it suggests that there is something crucial about the particular substance and structure of us that makes it impossible to replicate cognitive functionality in, say, silicon.

Searle neatly summarizes and clarifies his argument as follows:

By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question-and-answer fashion, and I begin with that old chestnut of a question:

"Could a machine think?"

The answer is, obviously, yes. We are precisely such machines.

"Yes, but could an artifact, a man-made machine, think?"

Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.

"OK, but could a digital computer think?"

If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.

"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"

This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.

"Why not?"

Because the formal symbol manipulations by themselves don't have any [understanding]; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such [understanding] as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have [understanding] (a man), and we program him with the formal program, you can see that the formal program carries no additional [understanding]. It adds nothing, for example, to a man's ability to understand Chinese.

To be sure, the Chinese Room Thought Experiment did not itself go unchallenged when Searle presented it. Opposition was incisive and fierce, if not decisive. Searle himself discusses these responses and his answers to them in his Minds, Brains, and Programs. We take up these and other responses next time.