Compatibilism and the Free Will Defense

The free will defense is a theistic strategy for rejecting a certain argument for the non-existence of God. The argument, sometimes called the “logical problem of evil,” insists that it is logically impossible for an omnipotent, omniscient, perfectly benevolent God to co-exist with evil. Since evil clearly exists, the argument goes, God does not. The free will defense responds by claiming that, since free will is very good indeed, a God with all the characteristics in question might co-exist with evil—provided that the evil was brought about by (other) free creatures and God could not have prevented the evil without making his creatures unfree. I think it is commonly believed (although, perhaps, seldom argued) that the free will defense only works if the creatures in question have a sort of freedom incompatible with determinism. I once held this belief. I now think that it is wrong.

I. The Problem of Evil, the Free Will Defense, and Compatibilism

The free will defense is best seen as a tactic in just one of the many skirmishes between atheists and theists. The atheist begins the skirmish by saying, “God, if there is a God, would have to be omnipotent, omniscient, and perfectly benevolent. But, necessarily, any perfectly benevolent being prevents any evil insofar as it is able and can foresee it. And, necessarily, an omniscient being could foresee all potential evil and an omnipotent one could eliminate all of it. So, necessarily, if God existed, there would be no evil: he would have prevented it. But there is clearly some evil in the world—I yelled at my spouse this morning, for instance—so, by modus tollens, God must not exist.”[1]

The free-will defender responds, “Wait a minute. I don’t buy your premise that ‘any perfectly benevolent being prevents any evil insofar as it is able and can foresee it.’ A perfectly benevolent being may allow some avoidable, foreseeable evil so long as a much greater good is produced thereby: a perfectly benevolent doctor may allow, or even cause, the ‘evil’ pain of a vaccination in order to bring about the much greater good of prolonged life.”

“Ok, I’ll grant you your modified premise,” the atheist says. “Let it be said rather that, necessarily, a perfectly benevolent being prevents all evil within its power to foresee and prevent, unless that evil brings about a greater good. But, necessarily, an all-powerful being can get any of these supposed greater goods without relying on the evil as an intermediary: a doctor may have to rely on the pain of a vaccination in order to prolong life, but surely God can administer the vaccination without the needle.”

“Not so fast,” replies the free-will defender. “The goods God is worried about here are not prolonged lives but creatures with the ability to make morally significant choices. Such creatures are very, very valuable, and if God needs to allow a bit of evil in order to have them, they are worth it. If God were to eliminate all evil, he would have to do it by keeping his creatures from acting evilly, which would require his taking away their free will. But were he to do that, they could not make morally significant choices and a very valuable good would be lost. So the existence of evil is not incompatible with the existence of God, since God may be forced to allow his creatures to do evil in order for them to be morally responsible, and this moral responsibility outweighs the evil his creatures commit.”[2]

I think it is often thought that the theist’s response will only work if moral responsibility is incompatible with determinism, the thesis that the laws of nature, conjoined with any proposition describing the state of the world at any instant, entail any true proposition whatsoever.[3] Here’s why. If compatibilism is true, that is, if moral responsibility is compatible with determinism, then (the argument goes) God could create free creatures in a deterministic universe. But clearly God gets to choose the laws and initial conditions for whatever universe he decides to create, and surely there is enough variation in the possible laws and initial conditions for him to create a universe that would contain creatures that met sufficient compatibilist conditions for moral responsibility and still never did any evil. So if compatibilism were true, God really could have gotten the goods—the morally responsible creatures—without allowing any evil, and the free-will defense fails.

I think this line of thought is mistaken. Or, at least, I think that there is a coherent and motivated compatibilist-theistic position on which this line of thought fails. In the remainder of this paper, I spell out this compatibilist position and show how it may be used in a variant of the free will defense. If I am right, then free-will defenders need not be incompatibilists: there is at least one compatibilist position that can use the free-will defense as well.

The compatibilist free-will defense I spell out involves two central theses. The first is a thesis about moral responsibility; the second, a thesis about divine foreknowledge and deliberation. Let’s begin with the thesis about moral responsibility.

II. Manipulation and Moral Responsibility

Moral responsibility first. Suppose that Jim generally enjoys whatever conditions you think are sufficient for him to be morally responsible. Jim is now considering joining a certain cult. He takes his decision very seriously, and will only join the cult if he becomes entirely convinced that its religious claims are true. As it stands, the objective probability of Jim’s joining the cult if he continues his life as normal is fairly low, say .09. However, Jim is thinking about going “out into the wilderness” for 40 days and nights to commune with nature and deliberate whether to join. If he goes out into the wilderness and has certain experiences while there, thanks to his psychological makeup, the objective probability of his joining the cult will become .9. He decides, on his own, to go out into the wilderness; while there, he has the experiences that raise the objective probability of his joining the cult, which he then joins after the 40 days.

It seems plain to me that Jim is morally responsible for joining the cult. His decision was indeed influenced, and significantly so, by experiences he had and over which he had no control. Nonetheless, this seems to be the sort of thing that happens all the time. Many of us begin our career, our week, or our day significantly unlikely to perform given acts, but as time goes on things happen that change our beliefs, attitudes, or dispositions. None of this seems by itself any reason to think we are not fully responsible for our actions.

On the other hand, if Jim’s experiences were engineered by a covert manipulator who was actively trying to get him to join the cult, we would feel that he was less culpable for his decision. Imagine that Bob, a brilliant member of the cult, is apprised both of Jim’s intention to go into the wilderness and of his general psychological makeup. As a result, he knows that if Jim has such-and-such experiences in the wilderness, he is more likely to join the cult. Bob then covertly manufactures these experiences for Jim, with the result that the probable outcomes of Jim’s deliberation change accordingly and he joins the cult. In this case, we are inclined to hold Jim far less morally responsible for joining the cult.

The degree to which his responsibility is mitigated is a function of both how much the probability of his joining the cult goes up and how high it is at the end of the process. If Bob’s manipulations brought Jim from a .25 probability to a .5 probability, we would probably hold Jim more responsible for his decision than we would have if Bob’s manipulations had either brought him from a .25 probability to a .75 probability or from a .5 probability to a .75 probability. And, when the manipulation results in Jim’s joining the cult with probability 1, we cease to hold him morally responsible at all.

These considerations suggest a certain kind of historical constraint on moral responsibility:

No Manipulation Constraint (NoMan): If a manipulator M arranges matters in order to get S to A, and if M’s so arranging matters raises the objective probability that S A-s from m to n, then S’s moral responsibility for A-ing is mitigated as a function of m and n,

where, if n=1, S is not morally responsible at all for A-ing.
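
To get a feel for how such a mitigation function might behave, consider one toy candidate, offered purely as an illustration of the schema (nothing here commits us to any particular function): let S’s residual responsibility be scaled by the factor

\[ f(m, n) = \frac{1 - n}{1 - m}, \]

the ratio of the objective chance that S refrains from A-ing after the manipulation to that chance beforehand. This factor has exactly the profile NoMan demands: it equals 1 when m = n (no effective manipulation, no mitigation), it shrinks both as n rises and as the gap between m and n widens, and it falls to 0 as n approaches 1. On this toy measure, Jim retains a factor of (1 − .5)/(1 − .25) ≈ .67 of his responsibility in the .25-to-.5 case above, .5 in the .5-to-.75 case, and roughly .33 in the .25-to-.75 case, matching the comparative judgments just described.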

NoMan, however, neglects two important features of Bob’s manipulation of Jim. First, the manipulation was invisible to the manipulated agent and was of a sort the agent wouldn’t have wanted. If Bob had simply talked with Jim and tried to convince him to join the cult, and if Jim had found Bob’s arguments persuasive, NoMan would count Jim as less responsible than if his decision had been made without Bob’s input. But this case is surely different from the one under consideration, where Jim has no idea what Bob is up to.[4]

A second problem is that NoMan will mitigate Jim’s moral responsibility even if the success of Bob’s manipulations were the result of wild luck. Suppose that Bob mistakenly believes that Jim values kindness and charity and knows that people who value kindness and charity will be heavily influenced by a certain sort of experience. What Bob doesn’t know is that Jim couldn’t care less about kindness and charity but instead values hard work and rugged individualism. Fortunately for Bob, it turns out that, in precisely the sort of circumstances Jim has put himself in, the sort of experiences that will influence a kindness-and-charity-valuing person to join the cult will also convince a hard-work-and-rugged-individualism-valuing person to join. It is at least less clear that, if Bob has been Gettiered in his belief that the manipulations in question would raise Jim’s probability of joining the cult, Jim’s moral responsibility would be mitigated.

These considerations suggest that whether moral responsibility is mitigated depends, in part, on what the manipulator and the manipulated know. I propose that these considerations should lead us to accept a condition like the following:

No Known Manipulation Constraint (NoKnowMan): If, unbeknownst to S, a manipulator M arranges matters in order to get S to A, and if M knows that the objective probability of S A-ing given his so arranging matters is n and that the objective probability of S A-ing given his not so arranging matters is m, then S’s moral responsibility for A-ing is mitigated as a function of m and n.

And, once again, if n is 1, then S is not morally responsible for A-ing at all.

Note that NoKnowMan may describe just one of many ways a manipulator might mitigate someone’s moral responsibility. If Bob knows that his manipulations will alter the likelihood of Jim's joining the cult, but believes the outcome of his manipulations will leave Jim with a 78% chance of joining, when the manipulations will in fact leave Jim with an 80% chance of so joining, we ought not conclude that Jim is just as morally responsible for his joining as he would be had there been no manipulation whatsoever. NoKnowMan, however, will not underwrite this judgment, since Bob does not know the precise effect his manipulations will have on the probability that Jim joins the cult. Some other principle will be in play. Spelling out a fully general no-manipulation condition that can underwrite our intuitions in all sorts of cases is a tricky business, and one I won't engage in here. Fortunately for our purposes nothing more general than NoKnowMan is needed.[5]

III. Divine Foreknowledge and Deliberation

NoKnowMan is the first thesis I need for a compatibilist free-will defense. The second is a thesis about divine foreknowledge. It is this: when making decisions, God is able to “bracket,” or exclude from consideration, certain things which he knows and which might be relevant to the decision in question.

If God is omniscient, then God knows everything: that is, God knows every true proposition and no false ones. If we suppose that God is in time and that propositions about the future have no truth value, this foreknowledge might not amount to very much. If, however, we think either that God is outside of time or that propositions have their truth-values eternally, we must say that God's omniscience implies his knowing in exact detail how the future is going to be.[6] Let’s say that God has complete foreknowledge if (a) every proposition is either true or false and (b) God knows all the true ones and disbelieves all the rest.
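
The definition can be given a rough formal gloss, if only to make vivid that it covers future contingents as much as past and present truths. (The notation \(K_G\) for “God knows that” and \(D_G\) for “God disbelieves,” i.e., believes the negation of, is introduced here merely for convenience.)

\[ \text{God has complete foreknowledge} \iff \forall p\, \big[ (p \text{ is true} \lor p \text{ is false}) \wedge (p \rightarrow K_G\, p) \wedge (\neg p \rightarrow D_G\, p) \big] \]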

We may presume that when God acts, he acts for reasons. We may also presume that when God deliberates about what to do, he considers various reasons for performing various acts open to him, and chooses one act on the basis of those reasons.

It seems obvious that certain sorts of propositions will not be available to God as reasons for certain decisions, and this even if God knows those propositions. Suppose God is deliberating about whether to send a big rainstorm to wipe out the Earth’s population. Suppose further that God has complete foreknowledge; then either the proposition that God does send such a rainstorm or the proposition that God does not send such a rainstorm is true, and God knows which one it is. Suppose it is in fact the proposition that God sends the rainstorm. It seems clear that God cannot cite his knowledge that he is going to send a rainstorm as a reason for his sending it. If we ask God after the fact, “Why did you send the rainstorm?” we will be rightly skeptical of his status as a rational agent if he responds, “Because I knew I was going to.”

There seems to be a constraint on rational action to the effect that agents have not acted fully rationally if their reasons for acting as they did simply cite the way they in fact act. Suppose I am a time traveler and remember my younger self watching my older self buying a cappuccino at a certain time. It is now that time (again, from my perspective) and I walk up and buy that cappuccino. You can ask me why I did so, and I can cite a number of reasons consistent with my having acted rationally. I may say, “I was thirsty,” or “I was tired and cappuccinos help me stay awake.” But if I simply say, “I remembered having done it, so I did it again,” you will regard my action as not really motivated—not done for any good reason. That’s not to say that I’m irrational, as I might be if I had instead bought and drunk a can of paint. But somehow my reasons for acting aren’t quite the sort we want an ideal rational agent to have.