Fisher, M. 1983. "A Note on Free Will and Artificial Intelligence." Philosophia 13: 75-80.
In two recent books[1] the claim is made that the collision between the concepts of free action and of scientific knowledge can be avoided, thanks to new understanding of the former afforded by the new discipline of artificial intelligence (AI). Is this really so?
Both writers believe that the great merit of AI models and theories of mental activity is that they provide explanations of how a number of apparently simple but actually highly complex mental acts are possible at all. The problem of freedom and determinism, for Boden, is accordingly "not so much whether a common-sense claim (that people are free, for example) is true, as whether one can offer an acceptable philosophical analysis of it," though she allows the possibility that no such analysis may be available; "perhaps it is incoherent or inadequate in some way, in which case one will be committed to denying it."[2] She mentions four ways in which even the crude AI models available today can illuminate the question of how the concept of freedom is to be analysed.
(1) Free action is self-determined action. AI models help us to understand what exactly may be meant by this: "There is a clear distinction between programs ... that have within them something describable as a reflexive representation of their own range of action, and programs ... that do not. There is a clear difference also between procedures that are, and those that are not, crucially influenced at particular points by reference to specific aspects of this inner model of the system as a whole."[3]
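The distinction Boden draws can be made concrete with a minimal sketch (not from either book; the class names and the fallback behaviour are invented for illustration): one agent maps situations straight to actions, while the other consults an explicit representation of its own range of action before committing to one.

```python
class ReflexAgent:
    """Maps situations directly to actions; contains no self-model."""
    def __init__(self, table):
        self.table = table

    def act(self, situation):
        return self.table[situation]


class SelfModellingAgent:
    """Keeps a reflexive representation of its own range of action
    and is 'crucially influenced at particular points' by it."""
    def __init__(self, table, capacities):
        self.table = table
        # Inner model of the system as a whole: which actions the agent
        # represents itself as currently able to perform.
        self.self_model = {"available_actions": set(capacities)}

    def act(self, situation):
        candidate = self.table[situation]
        # The decision procedure consults the self-model: an action not
        # represented as within the agent's range is not selected.
        if candidate in self.self_model["available_actions"]:
            return candidate
        return "do_nothing"
```

The point of the sketch is only that the second agent's behaviour depends on a modifiable representation of itself, so changing the self-model changes what it does even when the situation is unchanged.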
(2) Free agents must be capable of being bound by moral laws which are universalizable and are grounded in the interests of persons. AI models make clearer what it is in persons that makes possible their employment of the concept of universalizability. Appreciation of morality also requires the ability to grasp morally relevant analogies; AI models help us understand how such analogical thinking is possible.
(3) A free agent sometimes does not do something which she[4] could have done, and can be blamed or praised for it accordingly. Existing AI models explicate the concepts of attending, of inattention, and of taking care, and hence this notion of what one could have done, central to an agent's freedom.
(4) The libertarian idea that a free agent always can falsify any prediction made of what she will do can be built into an AI model, and that will enable us to understand exactly how far, and why, this is true.
Sloman's claims are more sweeping. I quote: "people are increasingly designing programs which, instead of blindly doing what they are told, build up representations of alternative possibilities and study them in some detail before choosing. This is just the first step towards real deliberation and freedom of choice."[5] He looks forward to "systems which, instead of always taking decisions on the basis of criteria explicitly programmed into them (or specified in the task), try to construct their own goals, criteria and principles, for instance by exploring alternatives and finding which are most satisfactory to live with." He looks forward to the kind of self-modifying program which "could acquire not only new facts and new skills, but also new motivations; that is desires, dislikes, principles, and so on." And he flatly concludes: "If this is not having freedom, and being responsible for one's own development and actions, then it is not at all clear what else could be desired under the name of freedom."[6]
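Sloman's "first step" can likewise be sketched in miniature (the function and data names here are invented, not drawn from his book): instead of executing a fixed instruction, the program builds explicit representations of alternative possibilities, studies each against its current criteria, and chooses; a self-modifying system of the kind he envisages would additionally be able to add to or reweight those criteria rather than having them fixed in advance.

```python
def deliberate(alternatives, criteria):
    """Score each represented alternative against weighted criteria
    and return the one found most satisfactory."""
    def score(alt):
        return sum(weight * feature(alt) for feature, weight in criteria)
    return max(alternatives, key=score)


# Alternatives are explicit representations (here, simple dicts),
# not merely branches in a fixed flow of control.
alternatives = [
    {"name": "route_a", "speed": 3, "safety": 1},
    {"name": "route_b", "speed": 1, "safety": 3},
]

# Criteria are (feature, weight) pairs; a system that "constructs its
# own goals, criteria and principles" would modify this list itself.
criteria = [
    (lambda a: a["speed"], 1.0),
    (lambda a: a["safety"], 2.0),
]

choice = deliberate(alternatives, criteria)
```

Whether such a program, however elaborated, thereby possesses freedom rather than merely simulating deliberation is precisely the question the paper goes on to press.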
Neither Boden nor Sloman clearly states the problem which the new understanding provided by AI is going to help us solve, presumably because they assume its general character is sufficiently well known, as indeed it is. But when the conflict between free action and scientific knowledge is stated in its starkest form it is not at all clear how the new ideas of AI can help to remove it.
The conflict can be described with maximum brevity as the collision between the necessity demanded by scientific knowledge and the possibility demanded by free action. I shall first try to state the reason for this conflict more clearly. If scientific knowledge is possible, what is known must be capable of being stated in the form of law-like generalizations, each of which is necessary. We need not go into the difficulties attending attempts to explicate the necessity involved; it is enough to remark that whatever kind it is it possesses an associated possibility such that, if something is necessary, it is not possible that it should not be (or occur). Hence if scientific knowledge of human actions is possible, there must be law-like generalizations, possessing this necessity, capable of explaining such actions. (The generalizations need not, of course, mention the actions; they may, probably will, mention quite other items. But it must be true that no alteration in the action could occur without an alteration in the items mentioned in the laws.) In this fairly precise sense, then, the possibility of scientific understanding of human actions requires the existence of laws explaining them such that, if such an action occurs, it is impossible that any other should have occurred.[7]
Wiggins argues elaborately that the truth of determinism is incompatible with any statement of the form "The agent x could at t have done other than she did at t," if "could have done" is taken to at least imply that what could have been done at t was not historically impossible at t; historical impossibility at t applies to all events A at all times earlier than t such that what occurred was A, and historical inevitability (necessity) is the dual of historical possibility. Determinism, he maintains, entails that each event is historically inevitable at all times earlier than the time it occurs.[8] I think he is right, but I also think there is a more direct proof of the logical collision between the possibility of scientific knowledge and the possibility of free action. If at a given moment I do an action A freely, everyone agrees that it must at that moment be true that there are other things which I could do; there must be other possibilities. But if determinism is true then whatever happens at that moment is the one thing that can happen, it is impossible that any other thing should happen; so there are no other possibilities.
The brief and programmatic remarks made by Boden and Sloman encourage us to think that AI can provide us with some indication of how this conceptual conflict is to be resolved. Boden says nothing else about the problem, but I think a good deal of her discussion of various mental concepts gives clues to how she thinks this can happen. If the various concepts presupposed by that of free action, such as intention, purpose, taking means to an end, desire, preference, interest, planning, hope, fear, and many others no doubt, can be theoretically understood in the sense that programs can be written which simulate the exercise of such concepts, we may well reach a point in our developing scientific understanding of mind where we can say that we have a program which does essentially all that a person does when she acts freely. Boden apparently, and Sloman certainly, think that AI is already on the way to such understanding and that nothing is known to stand in the way of our reaching it. Perhaps they might argue, imitating hard determinists, that "in principle" such understanding can already be assumed to be possible, and conclude that, since the machines on which the programs will run will undoubtedly be physical systems, no less deterministic than bicycles or radios, the logical collision has already, at this "in principle" level, been shown to be avoidable. This is to put words into their mouths, but it seems the most reasonable filling-out of their optimistic sketches.
I have no reason to think there is anything standing in the way of the kind of theoretical understanding and simulation they seem to envisage; nor does it seem to me at all objectionable that such programs, when they are devised, will behave in a way from which it will be difficult, perhaps impossible, to withhold honorific descriptions like "conscious," "intelligent," "having a point of view," and so on. But I cannot see that such developments would, or that their possibility does, enable us to avoid the collision between the concepts of free action and of scientific knowledge. When such programs are developed we shall have artifacts which can think, deliberate, choose, and act. By hypothesis, what is true of their thought and action will be true of and essential to ours (since the program represents the best theoretical account of what is essential in our thought and action). So, if free action is a genuine possibility for them, it must be for us too. But why should we suppose that free action will be a genuine possibility for such programs? No doubt such programs will employ the concept of free actions, essentially as we have it, for no-one denies, I suppose, that that concept is quite essential to all our thought and action. But none of this goes any way at all towards showing that the program simulating all the essentials of our thought and action must possess the genuine possibility of free action. If the argument of the hard determinist, already sketched, is admitted, all the writing of such programs would prove is that they, too, must labour under the same unavoidable illusion that holds us captive, the illusion that there are real open possibilities.
Another attack upon the freedom vs. determinism impasse is suggested by some things Sloman says about the methods of philosophy and science.[9] He suggests that both natural sciences in their early phases, the human sciences now and in the foreseeable future, and philosophy, especially since Kant, are predominantly and rightly concerned with explanations of how things are possible. He grants that natural sciences, as they develop, turn gradually more and more to attempts to explain the limits upon the possibilities earlier studied, that is, to a search for laws setting limits to what is possible; but he maintains that there are several reasons why it is unlikely the human sciences "will discover new laws with predictive content and explanations of those laws, apart from ... trivial laws ... based on common sense," or "culture-bound regularities."[10] Later he modifies this to the view that human sciences will have to concentrate on the explanation of how familiar facts are possible "for some time yet."[11]
Though Sloman nowhere makes this claim, someone might try to strengthen what he says here to the thesis that the human sciences must, in principle, restrict their efforts to the search for explanations of how things are possible. If there were good reasons for believing in the necessity of such a restriction, it would follow that the absence or triviality of laws in the human sciences is not due to their youth: such laws could then not be expected at all. So the usual formulations of determinism would lose all plausibility.
I can see why Sloman, or anyone, might think that the human sciences ought to concentrate in practice upon the explanation of possibilities. The degree of complexity involved in large-scale social processes makes explaining possibilities a far more fruitful scientific strategy. Consideration of the role that scientific understanding of human phenomena plays in human life points in the same direction. But these considerations are practical. They bear upon what it may be practically feasible or most worthwhile for us to discover, and so, at most, upon whether it will ever be possible for us to get enough evidence to confirm the meta-scientific hypothesis of determinism. They surely do not bear upon the problem they are supposed to resolve: whether in the human sciences determinism is true.
I conclude that neither the possibility of writing programs which simulate, and so show we have explained, all the essential features of human thought and action, nor the interesting new perspective upon the methods and aims of science suggested by (though not only by) AI, does anything to reduce the discomfort we ought to feel as we contemplate the collision of our idea of scientific knowledge with our idea of free action, which if I am right remains, to change the metaphor, a philosophical disaster area.[12]
NOTES
1. Margaret Boden, Artificial intelligence and natural man (Harvester, Hassocks 1977) pp. 432-3; Aaron Sloman, The computer revolution in philosophy (Harvester, Hassocks 1978) pp. 206-7. Sloman argues similar claims at greater length in "Physicalism and the bogey of determinism," in S.C. Brown ed., Philosophy of psychology (Macmillan, London 1974). Several passages in Boden's Purposive explanation in psychology (Harvard University Press, Cambridge 1972) are also pertinent.
2. Boden, p. 432.
3. ibid.
4. I follow Boden, and for her excellent reasons, stated on pp. 423-4, in this usage.
5. Sloman, p. 266.
6. Sloman, pp. 266-7.
7. Further explication, from which I have learnt a lot, is found in D. Wiggins, "Towards a reasonable libertarianism," in T. Honderich ed., Essays on freedom of action (Routledge & Kegan Paul, London 1973) and "Freedom, knowledge, belief and causality," in G.N.A. Vesey ed., Knowledge and necessity (Macmillan, London 1970).
8. Wiggins, "Towards a reasonable libertarianism," Section III.
9. Sloman, ch. 3.
10. Sloman, p. 81.
11. Sloman, p. 82.
12. Helpful comments were made on an earlier draft by Patricia Baillie and Aaron Sloman.