The History of Artificial Intelligence

Readings

Texts

  • John Haugeland, "Artificial Intelligence: The Very Idea" (pdf)
  • Terry Bisson, "They're Made Out of Meat"

Notes

  • Buchanan's Chronology of AI Developments
  • Development of the Computer (pdf)

Synopsis

To put it crudely, our assumption this semester--and the assumption of all of the cognitive sciences, including neuroscience and psychology--is that we are meat machines busily flapping our meat at one another. Understanding how mere meat machines might be able to reason, however, requires a brief foray into developments in mathematics and science, starting with the Copernican Revolution.

To put it more precisely, there are uncountably many functions, just as there are uncountably many real numbers, as Cantor proved. There are infinitely many of them, but there are so many that they cannot be put into a list (that is, into a one-to-one correspondence with the natural numbers). Thus there is an uncountably enormous completed infinite totality of functions. There are far fewer computable functions: they can be listed (put into a one-to-one correspondence with the natural numbers). So there are infinitely many computable functions, but at most countably many. Thus the difference between the class of functions generally and the class of computable functions specifically is the same as the difference between the number of real numbers and the number of natural numbers, respectively. If you find the notion of infinities of ever-increasing size stacked one upon the other perplexing, welcome to the club. Remember that Cantor's work on the nature of infinity was roundly rejected by the mathematical community of his day, contributing to a lifetime of depression and despair. Happily, he has since been vindicated: his work is now seen as a significant and no longer controversial contribution to mathematics.
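Cantor's diagonal trick can be made concrete in a few lines of code (a sketch of my own, not drawn from the readings): given any purported enumeration of functions on the natural numbers, we can always construct a function that is missing from the list, which is why the functions cannot all be enumerated.

```python
def diagonal(enumeration):
    """Given a purported enumeration of functions on the naturals,
    where enumeration(i) returns the i-th function, build a function
    that differs from the i-th function at input i. It therefore
    cannot appear anywhere on the list."""
    return lambda n: enumeration(n)(n) + 1

# A hypothetical enumeration: the i-th function multiplies by i.
enum = lambda i: (lambda n: i * n)

g = diagonal(enum)
# g disagrees with enum(i) at input i for every i:
print(g(3), enum(3)(3))  # 10 versus 9
```

The same construction works no matter how clever the enumeration is, which is exactly why the totality of functions outruns any list.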

The upshot for us is that we have to understand how we justify going from functionalism, the view (put pithily) that we are meat machines, to the much narrower thesis of computationalism, the view that among all the kinds of meat machines possible, we are a very specific kind: we are meat computers. It is thus one thing to say that mental states are functions of the organism, quite another to claim that mental states are computed functions of the organism. The audacity of the claim is made manifest in the fact that the class of computable functions is such a tiny subset of the totality of functions.

So we began today by briefly reviewing the thread of argument I've been laying out that begins with Plato and ends with Functionalism. Our question now, however, is how do we get from Functionalism to Computationalism? How, that is, do we go from saying that the mind is what the brain does to asserting that what the brain does is to compute? For this we follow another thread altogether from the history of mathematics that begins with the Copernican Revolution and ends with Computationalism, which we might also call the Fundamental Hypothesis of Traditional Artificial Intelligence:

Thought consists of the rule-governed manipulation of (strings of) symbols.
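To see what "rule-governed manipulation of (strings of) symbols" amounts to in practice, here is a toy string-rewriting system (my own illustrative sketch, with made-up rules): it applies rules purely by matching patterns, with no regard whatsoever for what the symbols might mean.

```python
# Hypothetical rewrite rules: replace "AB" with "C", and "CA" with "B".
RULES = [("AB", "C"), ("CA", "B")]

def rewrite(s, rules=RULES, max_steps=100):
    """Repeatedly apply the first matching rule to the string until
    no rule applies (or a step limit is reached). The procedure is
    purely mechanical: it consults only the shapes of the symbols."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                break
        else:
            return s  # no rule applies: halt
    return s

print(rewrite("ABA"))  # "ABA" -> "CA" -> "B"
```

Whether such blind manipulation could ever count as thought is, of course, precisely the question raised below.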

Haugeland, "Artificial Intelligence: The Very Idea", does an excellent job explaining the history of this idea, including i) Galileo's astonishing intellectual leap in recognizing that the lines, points, and planes of Euclidean Geometry can be reinterpreted to be, not about spatial relations (viz., earth measure), but about other kinds of magnitudes including velocity and acceleration, ii) Descartes' brilliant synthesis of geometry and algebra in analytic geometry to show how systems of equations and operations on them can be variously interpreted depending on the domain of inquiry, and iii) Hobbes' proclamation that "By ratiocination, I mean computation." That is,

When a man reasoneth, hee does nothing else but conceive a summe totall, from Addition of parcles; or conceive a remainder, from Subtraction of one summe from another: which (if it be done by Words) is conceiving of the consequence of the names of all the parts, to the name of the whole; or from the names of the whole and one part, to the name of the other part... Out of which we may define (that is to say determine,) what that is, which is meant by this word Reason, when we reckon it amongst the Faculties of the mind. For Reason, in this sense, is nothing but Reckoning (that is, Adding and Subtracting) of the Consequences of generall names agreed upon, for the marking and signifying of our thoughts; I say marking them, when we reckon by ourselves; and signifying, when we demonstrate, or approve our reckonings to other men.

--Thomas Hobbes, Leviathan, 1651. (Quoted in Tim Crane, "The Mechanical Mind", p. 130.)

Following Haugeland, however, we pause to note that the fundamental assumption of cognitive science today, so-called computationalism, gives rise to what he calls "the Paradox of Mechanical Reason":

Reasoning (on the computational model) is the manipulation of meaningful symbols according to rational rules (in an integrated system). Hence there must be some sort of manipulator to carry out those manipulations. There seem to be two basic possibilities: either the manipulator pays attention to what the symbols and rules mean or it doesn't. If it does pay attention to the meanings, then it can't be entirely mechanical--because meanings (whatever exactly they are) don't exert physical forces. On the other hand, if the manipulator does not pay attention to the meanings, then the manipulations can't be instances of reasoning--because what's reasonable or not depends crucially on what the symbols mean.

In a word, if a process or system is mechanical, it can't reason; if it reasons, it can't be mechanical.

--John Haugeland, "Artificial Intelligence: The Very Idea", p. 39.

To fully understand the Paradox of Mechanical Reason, however, we need to understand mechanism and we need to have some account of intelligence. For both these problems, we turn to none other than Alan Turing, who did more than anyone to frame the issues.