Jeffrey’s Challenge[1]

Mark Kaplan

Indiana University

I. JEFFREY’S CHALLENGE

In 1953, Richard Rudner published a paper called “The Scientist Qua Scientist Makes Value Judgments.”[2] His argument for the claim that makes up his paper’s title was roughly this. Scientists are in the business of (among other things) deciding what hypotheses they should accept—where accepting a hypothesis is compatible with not being fully confident (not as confident as, say, one is in the truth of a simple tautology) that the hypothesis is true. A scientist’s decision as to whether she ought to accept a hypothesis is a consequential one. There can be serious consequences should a scientist accept a hypothesis and the hypothesis turn out to be false. So, before she decides whether she ought to accept H, it behooves a scientist to look at what the consequences of her accepting H might be, evaluate (from a moral and prudential point of view) the significance of these consequences, and decide whether she ought to accept H in the light of that information. It is thus part of the business of a scientist qua scientist to make value judgments.

It is easy to see the power of this line of thought. Suppose a scientist confronts the following decision problem:[3]

            H       not-H
Option A   $10        x
Option B    $5       $5

Suppose that she is, as her evidence warrants her being, confident, but not fully confident, that H. Then, before she decides whether to accept H, she will want to know what x is. For there can be no question but that she must accept that, if H is true, then A will have a better outcome than B if chosen, and so (it would seem) that A is the option she should choose. Thus, it would seem that, if she accepts H, she will have no choice but to conclude (since it will then follow from what she accepts) that A is the option she should choose. But she will not want to choose option A if x is really dire—if x consists, say, in the slaughter of hundreds of innocent people. After all, she is not fully confident that H is true, and thus that choosing A won’t result in x. Thus her decision whether to accept H must, insofar as she is rational, turn (in part) on what x is and how good or bad she deems x to be. So, insofar as it is her business qua scientist to decide whether to accept H, it is her business qua scientist to make value judgments.

In 1956 Richard Jeffrey published a response to Rudner’s article.[4] In it, he argued that what the scientist accepts (or, at least, what she accepts qua scientist) cannot possibly be consequential in the way Rudner thought. The only thing, by way of opinion with respect to H, that is consequential for what she should choose is how much confidence she invests in the truth of H (how great a degree of belief she has in H, how subjectively probable H is for her). And determining how much confidence she ought to invest in H is something she is free to do on purely epistemic grounds, without regard for prudential or moral consequences.

Jeffrey’s account of how the scientist should solve her decision problem was an application of Bayesian decision analysis:[5]

1.  The scientist should determine, on purely epistemic grounds, how much confidence to invest in H and in not-H, making sure that the amounts of confidence she invests conform to the axioms of the probability calculus (for she opens her position to criticism if they do not).

2.  She should determine the subjective utility for her of the three possible outcomes of the options before her: $10, $5, and x—that is, the intensity of her preference for $5 relative to the best outcome, $10, and the worst, x. This comes down to determining the degree of confidence, r, she would have to have in H for her to be indifferent between $5 and an option that gives her $10 if H and x if not-H.[6] [In so doing she determines that u($10) (her subjective utility for $10) = 1, u(x) = 0, and u($5) = r.]

3.  She should prefer A to B if and only if A has greater expected subjective utility for her than B—i.e., if and only if her degree of confidence in H is greater than r. And she should prefer B to A if and only if B has greater expected subjective utility for her than A—i.e., if and only if her degree of confidence in H is less than r.[7] (The arithmetic behind this comparison is spelled out just below.)
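To spell out that arithmetic (a reconstruction in notation of my own choosing, not Jeffrey’s: write p for the scientist’s degree of confidence in H), note first that the indifference condition in step 2 fixes u($5) = r, since the option that gives her $10 if H and x if not-H has, at degree of confidence r, expected utility r · u($10) + (1 − r) · u(x) = r. The expected subjective utilities of the two options are then

EU(A) = p · u($10) + (1 − p) · u(x) = p

EU(B) = p · u($5) + (1 − p) · u($5) = r

so that A has the greater expected subjective utility for her just in case p > r, and B just in case p < r, exactly as step 3 says.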

Naturally, the more dire she deems x to be, the closer r will be to 1—the more confident she will need to be in H for her to (be entitled to) prefer A to B. Thus it is, on this analysis, that if x is sufficiently dire, it will be rational for the scientist to prefer option B even though she is extremely confident in H, extremely confident that A will have the better outcome. It is an eminently reasonable verdict, according to which her degree of confidence in H is consequential as advertised—but consequential in a way that is perfectly compatible with her having arrived at that degree of confidence on purely epistemic grounds.[8]
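An illustrative assignment of numbers (my own, not drawn from the texts under discussion) makes the point vivid. Suppose the scientist’s degree of confidence in H is p = 0.99, but x is so dire that she would be indifferent between $5 and the riskier option only at r = 0.999. Then

EU(A) = 0.99 < 0.999 = EU(B)

and it is option B she should prefer, extreme though her confidence in H may be.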

Where then does the seemingly powerful line of thought in favor of Rudner’s surprising conclusion go wrong? Jeffrey did not explicitly address this question. This is perhaps because Rudner was himself less than explicit about how exactly he was reasoning to his conclusion. He did not, for example, actually lay out the line of thought I supplied him above. But, given the Bayesian analysis and supposing that the line of thought I offered Rudner is one he would have been happy to claim as his own, it is clear how his line of thought goes wrong. It goes wrong with the assumption that the scientist’s accepting that A will have the better outcome if chosen amounts to accepting that she should choose A. In particular, it goes wrong with the assumption that, if A will have the better outcome if chosen, then the scientist should choose A.

On the Bayesian analysis, this last assumption is false. What determines whether the scientist should choose A is not whether it will have the better outcome if chosen; it is whether A bears the maximum subjective expected utility for her. On reflection, of course, this diagnosis of Rudner’s error seems right. After all, suppose the scientist’s evidence does not warrant her having a degree of confidence in H as great as r: that is, suppose her evidence does not warrant her being confident enough in H to be willing to risk the lives of hundreds of innocents on H’s being true, for the prospect of being $5 better off. Suppose that, accordingly, she chooses option B. Suppose she later becomes fully confident that H is true—fully confident that she did not choose the option that offered the better outcome if chosen. Who thinks that she should then look back at her choice and say, “I should have done otherwise”?

The moral one would have expected Jeffrey to have drawn, given the foregoing, is that, Rudner to the contrary, there is no reason to think that the decisions a scientist makes as to what she ought to accept need to answer to any concerns other than purely epistemic ones. She is free to accept H, and thus that A will have the better outcome if chosen, without any fear of thereby committing herself to any verdict whatsoever as to which of the two options in her decision problem she should choose.

But the moral Jeffrey in fact drew was not at all what one might have expected. The moral he drew from how Rudner had gone wrong is that it is no part of the business of a scientist to decide what hypotheses to accept.

Jeffrey reasoned this way. Consider what we learn in coming to appreciate Rudner’s mistake. We learn that whether our scientist accepts H is not consequential in the way Rudner imagines. It is, rather, how confident she is in H that is consequential. Yet the latter is consequential in a way completely compatible with our scientist’s determining on purely epistemic grounds how much confidence she ought to invest in H.[9] So, if (as seems plausible) a scientist is in the business of settling, by appeal to purely epistemic considerations, on states of opinion that can, if the occasion demands it, serve to inform decision making, then what we have learned is that a scientist is in the business of determining how much confidence she ought to invest in hypotheses. But that much granted, it is hard to see what part, in the business of a scientist, the acceptance of hypotheses could possibly play. Once we know how confident our scientist is in H’s truth, there doesn’t seem to be anything of import about her qua scientist to be found out by asking, “Yes, but does she accept H?”

Now, it should be clear that there is nothing in the foregoing diagnosis of where Rudner goes wrong, or in the moral Jeffrey drew from his critique of Rudner’s paper, that would be weakened had Rudner made his case in terms of how a rational inquirer should determine what she ought to believe, instead of making the case (as he did) in terms of how a scientist should determine what she ought to accept. Indeed, in a paper published some fourteen years after the first, Jeffrey made clear that he understood his moral to apply as much to belief as it does to acceptance. “[I am] not disturbed that our ordinary notion of belief is only vestigially present in the notion of degree of belief,” he wrote. “I am inclined to think that Ramsey sucked the marrow out of the ordinary notion, and used it to nourish a more adequate view.”[10] That is, he made clear that he was issuing a serious challenge to those whose epistemologies would trade in belief or acceptance: the challenge to say why we should suppose that talk of belief or talk of acceptance cottons onto anything that isn’t already better taken care of by talk of graded confidence. It is the challenge to say how our account of a rational inquirer would be any the poorer were we to pursue that account strictly in terms of probabilistically constrained states of confidence; how a rational inquirer’s ability to fulfill her duties as an inquirer would be in any way compromised were she to attend to nothing, by way of an attitude she has adopted (or might adopt) towards a hypothesis, apart from how confident she is (or might be) in the truth of that hypothesis.

II. WHAT IT TAKES TO MEET JEFFREY’S CHALLENGE

What does it take to meet Jeffrey’s challenge? This much seems obvious. To meet the challenge, one has to identify an attitude that, unlike states of graded confidence, does not admit of degree: one either has that attitude towards a hypothesis or one does not. Whether a person has that attitude towards a hypothesis has got to be of methodological importance—it has to make a difference to how she fulfills her duties as an inquirer. It also has to make sense that an epistemology would concern itself with the attitude in question. That is, it has to make sense that the propriety of an inquirer’s having (or failing to have) that attitude towards a given hypothesis be a legitimate subject of epistemic appraisal. And this attitude has to co-exist happily with the Bayesian decision analysis, including Bayesian Probabilism: the doctrine that an inquirer opens her states of graded confidence to criticism if those states are not suitably constrained by the axioms of probability.

But does one have to identify an attitude that will answer faithfully to all (or even to most of) what we (or, at least, we philosophers) have been accustomed to say (or have regarded as being intuitive to say) about beliefs? We have already seen enough to know that the answer has to be “No.” After all, what we have been accustomed to say, about what beliefs are and when it is rational to have them, is bound up with our conception of what work beliefs do for us, what import they have for us. And, if the Bayesian analysis of Rudner’s error is correct, then that conception—in particular, the conception that what we believe is consequential for our decision making, in that we should do what we believe will turn out best—is fundamentally mistaken. With it, so is the notion that one hasn’t explained a person’s behavior until one has established that it is something the agent believed would best satisfy her desires at the time (or, at least, believed was at least as likely as any of her alternatives to maximize the satisfaction of her desires at the time); also the notion that one of the reasons we want our beliefs to be true is that we act on them (we do what we believe will have the best outcome given the truth of our beliefs), and so things will go badly for us if our beliefs are not true.[11]

Indeed, if the Bayesian analysis is correct, then a significant amount of the work that we have been using our talk of categorical belief to do is actually done much better by talk of graded confidence. And if that is so, there can be no question of identifying an attitude, meeting the obvious conditions cited above, that will be faithful to what we have been accustomed to say about belief and rational belief. What we have been accustomed to say is bound to conflict (as we have already seen it conflict) with the Bayesian way of thinking about things—with the idea that graded states of confidence are open to criticism unless they answer to the axioms of probability, and with the idea that our choices are open to criticism unless we choose in such a way as to maximize our subjective expected utility.[12]

Does this mean, then, that Jeffrey’s challenge simply cannot be met? Not at all. It is entirely compatible with the Bayesian analysis’ being correct that there is still something important that we have been using our talk of categorical belief to talk about. It is entirely compatible with the Bayesian analysis’ being correct that we can separate out some use we have been making of “belief” that picks out a genuine attitude that, whatever the relation it bears to graded states of confidence, does not itself admit of grades—an inquirer either has that attitude towards a hypothesis or she does not—and where it is of real methodological moment whether she has it or not. And if that is something we can do, then, it seems to me that we will have thereby done all we need to do to meet Jeffrey’s challenge.[13]