Readings
Texts
- Alan Turing, "Computing Machinery and Intelligence" (pdf)
- Alan Turing, et al., "Can Automatic Calculating Machines Be Said to Think?" (from the Turing Digital Archives)
Notes
Synopsis
The notion at which we have arrived--through threads firstly philosophical and secondly mathematical--that intelligence can ultimately be boiled down to computational processes presents an incredible opportunity, but also two crucial questions:
First, the opportunity.
Our project in this course is to understand as best we can the mind. But what is it to understand the mind? Today I drew attention to Fred Dretske's engineering obligato, which I dubbed 'Dretske's Dictum':
You don't understand it if you don't know how to build it.
where 'it', in our case, refers to the mind. To be sure, Dretske's Dictum places a kind of engineering requirement on any claim to understand the mind, subject to careful qualification. Here is what Dretske himself says about it.
There are things I believe that I cannot say - at least not in such a way that they come out true. The title of this essay is a case in point. I really do believe that, in the relevant sense of all the relevant words, if you can't make one, you don't know how it works. The trouble is I do not know how to specify the relevant sense of all the relevant words.
I know, for instance, that you can understand how something works and, for a variety of reasons, still not be able to build one. The raw materials are not available. You cannot afford them. You are too clumsy or not strong enough. The police will not let you.
I also know that you may be able to make one and still not know how it works. You do not know how the parts work. I can solder a snaggle to a radzak, and this is all it takes to make a gizmo, but if I do not know what snaggles and radzaks are, or how they work, making one is not going to tell me much about what a gizmo is. My son once assembled a television set from a kit by carefully following the instruction manual. Understanding next to nothing about electricity, though, assembling one gave him no idea of how television worked.
I am not, however, suggesting that being able to build one is sufficient for knowing how it works. Only necessary. And I do not much care about whether you can actually put one together. It is enough if you know how one is put together. But, as I said, I do not know how to make all the right qualifications. So I will not try. All I mean to suggest by my provocative title is something about the spirit of philosophical naturalism. It is motivated by a constructivist's model of understanding. It embodies something like an engineer's ideal, a designer's vision, of what it takes to really know how something works. You need a blueprint, a recipe, an instruction manual, a program. This goes for the mind as well as any other contraption. If you want to know what intelligence is, or what it takes to have a thought, you need a recipe for creating intelligence or assembling a thought (or a thinker of thoughts) out of parts you already understand.
-Fred Dretske, “If You Can't Make One, You Don't Know How It Works”
Perhaps we will discover that Dretske's Dictum places too great a burden on our investigations and so close the semester by rejecting it. Nevertheless, it is a good place to start, and it puts us in exactly the right frame of mind to understand the endeavors of Cognitive Science today.
Second, we find two crucial questions emerging from these discussions:
1. What is intelligence?
and,
2. What is computation?
As it happens, A.M. Turing provides answers to both questions.
Our study of his work began today with the puzzle of intelligence.
Turing's key insight on the puzzle of intelligence, which we will have many opportunities to critically examine, can be simply put as the proposition that
The perfect imitation of intelligence is intelligence.
That is, in his unusually accessible essay "Computing Machinery and Intelligence", Turing suggests that the question of whether a machine is intelligent is hopelessly ill-formed and perhaps unanswerable in that form. After all, there are many things we might mean by 'intelligent', and it is not clear what all of them have to do with one another. Psychometricians wrestle with this problem to this very day! Turing, however, proposed that we replace the question of machine intelligence with an imitation game in which an interrogator questions a machine and a person by teletype (today: a chatroom) to determine which is which. This is a strictly behavioral test: if the interrogator cannot identify the machine any more reliably than chance, then the machine is behaviorally indistinguishable from a person insofar as 'verbal' behavior is concerned.
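The structure of the imitation game can be sketched as a tiny simulation. This is a hedged illustration only: the function names, the canned respondents, and the question asked are my own stand-ins, not Turing's specification.

```python
import random

def imitation_game(judge, machine, human, rounds=100):
    """Run a text-only imitation game. Each round, the judge reads the
    responses of two hidden parties (order shuffled) and guesses which
    index belongs to the machine. Returns the fraction of rounds in
    which the judge guessed wrong, i.e. was fooled."""
    fooled = 0
    for _ in range(rounds):
        players = [("machine", machine), ("human", human)]
        random.shuffle(players)  # hide which respondent is which
        # The judge sees only the transcripts, never the labels.
        transcripts = [respond("How was your day?") for _, respond in players]
        guess = judge(transcripts)  # index of the suspected machine
        if players[guess][0] != "machine":
            fooled += 1
    return fooled / rounds

# Toy parties: both give the same canned answer, so the transcripts
# carry no information and no judge can beat chance.
echo = lambda question: "Fine, thanks for asking."
guessing_judge = lambda transcripts: random.randrange(len(transcripts))

fooled_rate = imitation_game(guessing_judge, machine=echo, human=echo,
                             rounds=1000)  # hovers around 0.5
```

Note that the test is strictly behavioral by construction: the judge's only evidence is the transcripts, never anything "behind" them.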
It is enough, in other words, to satisfy any questions about 'intelligence' we could meaningfully ask. (It bears noting that psychometrics, the study of the measurement of psychological phenomena, has thus far borne out his suspicion: We really have no idea what intelligence is, apart from its being some sort of 'X' factor to which all the tests so far developed seem to point.)
All that said, let us be absolutely clear what we mean by the significance of the Turing Test for machine intelligence.
It would be a mistake to argue that passing the Turing Test is both necessary and sufficient for intelligence. That is, it would be a mistake to assert that
1. X passes the Turing Test if, and only if, X is intelligent.
(1) is clearly false. Consider that (1), logically speaking, is a bi-conditional, where a bi-conditional is defined (or can be defined) as simply the conjunction of two distinct conditionals:
1a. If X passes the Turing Test, then X is intelligent. (The Turing Test, we say, is according to this proposition a sufficient condition on, or suffices for, intelligence. Passing the Turing Test means having intelligence.)
1b. If X is intelligent, then X passes the Turing Test. (The Turing Test, we say, is according to this proposition a necessary condition on, or necessary for, intelligence. Failing the Turing Test means failing to have intelligence.)
(1b), though, is false. For if the question of intelligence is at all meaningful, one can be intelligent without passing the Turing Test. There are lots of reasons why an intelligent being might not pass the Turing Test. Inability to type would be one reason; speaking another language would be another.
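For reference, the decomposition behind (1), (1a), and (1b) can be written out in propositional logic, with P standing for 'X passes the Turing Test' and Q for 'X is intelligent':

```latex
% P: X passes the Turing Test;  Q: X is intelligent.
% (1) is the biconditional, which unpacks into the conjunction of two conditionals:
(P \leftrightarrow Q) \;\equiv\; (P \rightarrow Q) \land (Q \rightarrow P)
% (1a): P -> Q  (passing suffices for intelligence)
% (1b): Q -> P  (passing is necessary for intelligence)
% Rejecting (1b) refutes the biconditional (1) while leaving (1a) an open question.
```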
The interesting question, from our standpoint, is whether passing the Turing Test is a sufficient condition on intelligence, or, equivalently, suffices for intelligence--i.e., is (1a) true?
There are two broad skeptical responses to the assertion (1a) that passing the Turing Test suffices for intelligence: the Turing Test is too strong a condition on intelligence, or the Turing Test is too weak a condition to suffice for intelligence.
The Turing Test is Too Strong (aka, The Problem of False Negatives).
First, one might argue, as researchers in computer science and robotics sometimes do, that the Turing Test is simply too strong a condition on intelligence because intelligence is not always expressed verbally. Witness the Mars rovers Spirit and Opportunity. The rovers would fail the Turing Test miserably, yet it can be argued that they have behavioral capacities on a par with insect-level intelligence. Intelligence evolved gradually, and so too will intelligence in our machines. It may be a very long time before we can create machines that can reliably pass the Turing Test, yet we should not for that reason refuse to count the fantastically complicated and sophisticated behavioral repertoire of their predecessors as intelligence.
In short, this objection holds that the Turing Test is too strong in the sense that it excludes as unintelligent many things that should rightfully be considered intelligent. Perhaps dog and dolphin lovers will agree.
The Turing Test is Too Weak (aka, The Problem of False Positives).
A much more common objection to the Turing Test, at least within the philosophical community, is that it is simply too weak. According to this objection, the Turing Test could in principle yield false positives: things that pass the test but ought not be considered intelligent. Many of these objections bear careful scrutiny, as we shall see.
Whether the Turing Test is too strong, too weak, or, as a robotic Goldilocks might wish, just right, it is clear that Turing has done much to help us sharpen the debate over the possibility of artificial intelligence.
Here are some useful links for further reading about Alan Turing and the Turing Test:
- The Stanford Encyclopedia Entry on the Turing Test
- The Loebner Prize (no longer being run, it seems!)
- Andrew Hodges' Turing Pages
Next time we dive into a thicket of much thornier questions: What is the nature of computation? (We are, after all, considering the possibility of artificial intelligence, which is ultimately the test of computationalism. We can't make any progress in understanding computationalism without first knowing what to count as a computation!) And what are the capacities and, crucially, limitations of computation? This part of the course takes a decidedly technical, perhaps even mathematical turn that some students find off-putting. Rest assured, as past classes have amply demonstrated, this material can be mastered, and mastered quite readily and thoroughly given patience and time.