Readings
Texts
- Dan Dennett, "True Believers (Introduction)"
- Dan Dennett, "True Believers: The Intentional Strategy and Why It Works"
- Dan Dennett, "True Believers (Postscript)"
- Dan Dennett, "Cognitive Wheels: The Frame Problem of AI"
- Fred Dretske, "If You Can't Make One, You Don't Know How It Works" (from last time)
- Fred Dretske, "Minds, Machines, and Money: What Really Explains Behavior" (from last time)
Notes
- Dretske's Compass (from last time)
- The Frame Problem (SEP)
Synopsis
Following up on our previous discussion, if Dretske's argument is successful, then the problem of intentionality dissolves. That is, if the lowly $1.98 compass exhibits original intentionality, then our problem in building minds is not that of building something that exhibits original intentionality. We can already do that, and quite well. Instead, our problem is that the original intentionality exhibited by the compass is substantially less 'feature-rich', if you will, than it would need to be to be useful in building minds. Let me explain.
The original intentionality exhibited by the compass is a function of its causal relation to the Earth's magnetic field. Yet this causal relation is constant: In the absence of stronger magnetic fields, the compass will always and consistently point to the magnetic north. Indeed, in the 'presence' of stronger magnetic fields, it will point to their 'magnetic north', if you will. Thus the compass is veridical. It always points to the magnetic north, which is of course why the compass is such a useful instrument for navigation.
Beliefs and other intentional states of mind are not, however, veridical. Unlike compasses, beliefs can be mistaken. The cow in the field might, late in the evening, cause me to form the belief that there is a horse in the field. Worse, why does the state-of-affairs of the cow's being in the field cause me to believe that there is a cow in the field, if it does, yet not cause me to believe that there is either a cow or a horse in the field? In short, a belief can misrepresent a state-of-affairs, and if it can, it is not at all clear in virtue of what it represents that state-of-affairs at all. To further complicate matters, consider that desires are never directed at existing states-of-affairs. Rather, the desire that there be a horse in the field represents the non-existent state-of-affairs of a horse being in the field, but it does not thereby misrepresent, even though it is strictly speaking false that there is a horse in the field. Mental representations can be in error, but being in error is not always the same thing as misrepresenting.
Intentionality as it applies to mental states goes far beyond the simple causal, veridical relation that secures, Dretske argues, the compass' original intentionality. Dretske thinks he can secure a richer intentional basis to allow for misrepresentation by introducing the notion of a natural function--an indicator, that is, which indicates the presence of an F, quite apart from our reading it as indicating the presence of an F, and even if it is caused to indicate the presence of an F by the presence of a G.
Question: Do Dretske's natural functions, which make original intentionality off-the-shelf cheap, provide the right sort of intentionality Brentano insisted was characteristic of all mental states?
After all, the pervasive natural relationships between states of affairs--typically causal, as between the ordinary compass and magnetic north--are veridical in the absence of extraordinary circumstances. The compass always points to magnetic north so long as there is no confounding magnetic source nearby. (Note that I said 'magnetic north' here, because true north and magnetic north can and, importantly, do diverge significantly: the magnetic north pole has wandered considerably in recent history, drifting from the Canadian Arctic toward the geographic pole, and the Earth's field has even reversed polarity entirely over geological time.)
Yet a desire, for example, is presumably a natural relationship between one state of affairs--a complex of neural states--and some desired state of affairs: money, say, or food, or shelter. But by the nature of desire, that second state of affairs, the one forming the content of the desire, does not obtain. One does not desire what one has. So the original intentionality characteristic of cognitive functions is decidedly, in some cases necessarily, and perhaps also almost gleefully, not veridical.
Now, Dretske provides an especially clear way of understanding the challenge intentionality--also brought out by Searle's Chinese Room Thought Experiment--poses for artificial intelligence and, ultimately, understanding the mind. Under the assumption of computationalism, consider the analogy between a person and a vending machine:
| A Person | A Vending Machine |
| --- | --- |
| Functions according to neurophysiological states determined by the formation of beliefs and desires. | Functions according to electromechanical states determined by buttons pressed and coins inserted. |
| Beliefs have intrinsic properties and extrinsic properties. | Coins have intrinsic properties and extrinsic properties. |
| A belief's intrinsic (biochemical) properties include the person's neurophysiological state in having that belief. | A coin's intrinsic (material) properties include its size, shape, weight, material, and electrical characteristics. |
| A belief's extrinsic (intentional) properties include the state-of-affairs it is about. | A coin's extrinsic (economic) properties include its value. |
We assume that--and would like to be able to explain just how--a belief's extrinsic properties play a causal role in the person's behavior. My belief that the cat is on the mat, for example, causes me to step elsewhere.
Yet it appears that what is relevant to, and all that is relevant to, the production of behavior are the belief's intrinsic, biochemical properties. That is, following the analogy through, we recognize that the coin's value--which indeed fluctuates day-to-day--has nothing to do with the behavior of the vending machine. All that matters for the behavior of the vending machine are the coin's material properties. Anything with the same material properties the vending machine checks for will be counted as a coin by the vending machine--hence the possibility of 'slugs', or cheats, for vending machines.
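The point can be made vivid with a toy sketch (the function name and the tolerances are my own illustration; the conductivity figure is made up): a vending machine's acceptance test consults only the coin's intrinsic, material properties. The coin's economic value appears nowhere in the test, which is exactly why a materially identical slug passes.

```python
def accepts(diameter_mm, mass_g, conductivity):
    """Accept anything matching a US quarter's material profile.

    Note what is absent: no parameter for the coin's value. Only
    intrinsic, measurable properties figure in the machine's behavior.
    """
    return (abs(diameter_mm - 24.26) < 0.1 and    # diameter check
            abs(mass_g - 5.67) < 0.05 and          # mass check
            abs(conductivity - 0.30) < 0.02)       # hypothetical figure

genuine_quarter = (24.26, 5.67, 0.30)
slug = (24.26, 5.67, 0.30)  # economically worthless, materially identical

print(accepts(*genuine_quarter))  # True
print(accepts(*slug))             # True: value never entered into it
```

The machine's behavior is fixed entirely by what it can physically measure; the coin's extrinsic, economic property does no causal work.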
Extrinsic properties like intentional or representative relationships are irrelevant to the production of behavior in persons and vending machines alike, or so it seems. If so, then the fact that my belief that the cat is on the mat is about the cat's being on the mat has no bearing on my behavior, contrary to almost everyone's pre-theoretic intuition.
This is a further challenge to which Dretske must respond, and the remainder of his article is devoted to explaining how the intentional properties of a belief can bear on its causal relations in the mind.
Next we examined Dennett's argument that rather than meet Searle's Chinese Room Thought Experiment head-on, as it were, as Boden, Block, and Dretske do in their various ways, we can simply sidestep it.
Characterized only somewhat sarcastically, if Dretske thinks intentionality is cheaply had, Dennett thinks intentionality can be had on the cheap.
The point is that for Dennett there is no problem of original intentionality, because intentionality is in the eye of the beholder. Well, not entirely, but almost, which is why Dennett likes to call himself a quasi-realist with respect to intentionality.
In what sense could intentionality be in the eye of the beholder? Following Dennett, distinguish three 'stances' we might take (or strategies we might employ) in explaining the behavior of a complex system (an organism, say, or a mechanism, if that distinction still makes any sense).
- Taking the Physical Stance, we say the car failed to start because the rapid chemical process involved in hydrocarbon combustion failed to occur.
- Taking the Design Stance, we say the car failed to start because the ignition system failed to deliver spark to the cylinders.
- Taking the Intentional Stance, we say that the car hates its owner.
Of course, so described the Intentional Stance seems patently absurd, despite the fact that alarmingly many people do in fact take the Intentional Stance with respect to complicated machines like cars and computers.
Yet suppose, Dennett asks, that you are in fact able to explain and predict something's behavior by attributing intentional states (beliefs, desires, etc.) to it. Suppose furthermore that this is the only way to provide explanations and predictions, because for whatever reason the Physical and Design stances aren't available to take. Then, because we must take the Intentional Stance and because we can successfully take the Intentional Stance, the thing has intentionality, so far as any of us should care. Witness how we explain and predict each other's behavior! Consider further how we explain and predict our very own behavior!
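The Intentional Stance as a prediction strategy can be sketched in a few lines (the names and the tiny belief/desire format here are my own illustration, not Dennett's): attribute beliefs and desires, assume rationality, and predict the action the agent believes will get it what it wants.

```python
def predict(desires, beliefs):
    """Predict a rational agent's action from attributed states.

    desires: set of outcomes the agent wants
    beliefs: dict mapping available actions to the outcomes the
             agent believes they produce
    Returns the first action believed to yield a desired outcome.
    """
    for action, outcome in beliefs.items():
        if outcome in desires:
            return action
    return None  # no attributed belief connects an action to a desire

# We know nothing of the agent's physics or internal design,
# yet the strategy yields a prediction:
beliefs = {"open fridge": "food", "open window": "fresh air"}
print(predict({"food"}, beliefs))  # open fridge
```

The point of the sketch is methodological: the predictive success of this strategy, not any inner mechanism, is what the stance trades on.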
Next time we take up the Frame Problem, which is a different sort of problem with intentionality altogether.