1. Diagnostic Reasoning


In chapter 8, we explored the conceptual reject-the-norm arguments of Cohen and Gigerenzer that held that subjects who neglected base rates were not making an error. Base rate neglect occurs when subjects are trying to come to a conditional probability judgment (e.g., given that a subject tests positive on a drug test, what is the probability he has drugs in his system?). Subjects who neglect the base rate typically take the inverse conditional probability (the probability that the test will be positive given that the subject has drugs in his system) to be the conditional probability they’re after. So suppose a subject is told that a test is 80% accurate (i.e., if S is positive, the test will say so 80% of the time; and if S is negative, the test will say so 80% of the time). The subject who suffers from base rate neglect will judge that if someone tests positive (negative) there is an 80% chance that they are positive (negative). But simply because the probability of P given Q is 80%, it doesn’t follow that the probability of Q given P is 80%. The probability that S is pregnant given that she has had sex is not the same as the probability that S has had sex given that she is pregnant.

The standard way to solve such problems is with Bayes’ Rule: P(C/S) = P(S/C) × P(C) / {[P(S/C) × P(C)] + [P(S/~C) × P(~C)]}. As a mathematical identity, Bayes’ Rule is, of course, true. But a mathematical formula isn’t by itself a reasoning strategy. A reasoning strategy is a cognitive representation of a rule we can often characterize in terms of four elements: (a) the cues used to make the judgment; (b) the formula for combining the cues to make the judgment; (c) the target of the judgment (i.e., what it’s about); and (d) the range of objects (states, properties, processes, etc.), defined by detectable cues, about which the rule makes judgments that are thought to be reliable. So we can characterize a Bayesian reasoning strategy as follows:

1. Cues: Conditional Probability of Q given P; Prior Probability of P; Conditional Probability of Q given not-P
2. Formula: P(P/Q) = P(Q/P) × P(P) / {[P(Q/P) × P(P)] + [P(Q/~P) × P(~P)]}
3. Target: Conditional Probability of P given Q
4. Range: Indefinite

The first three features are self-explanatory, but we should say something about the range of the Bayesian reasoning strategy. It is indefinite, in the same sense that the range of deductive logic is indefinite: As long as the problem facing a reasoner has the right sort of formal structure, it can be about anything.
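To make the strategy concrete as a computation, here is a minimal sketch applied to the drug-test example above. This illustration is ours, not the authors’; the function name is our own, and since the example leaves the base rate unspecified, the 5% figure below is a hypothetical assumption.

```python
def bayes_posterior(prior, hit_rate, false_alarm_rate):
    """Bayes' Rule: P(P/Q) computed from the strategy's three cues.

    prior            -- P(P), the prior (base rate)
    hit_rate         -- P(Q/P), e.g., P(positive test / drugs in system)
    false_alarm_rate -- P(Q/~P), e.g., P(positive test / no drugs)
    """
    numerator = hit_rate * prior
    denominator = numerator + false_alarm_rate * (1 - prior)
    return numerator / denominator

# An "80% accurate" test (hit rate 80%, false alarm rate 20%) with a
# hypothetical 5% base rate of drug use:
print(bayes_posterior(prior=0.05, hit_rate=0.80, false_alarm_rate=0.20))
# ~0.17 -- far from the 80% that the base rate neglecter would judge.
```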

So far, we have two ways to solve diagnosis problems. We can neglect base rates (which seems to involve confusing a conditional probability with its inverse) or we can apply Bayes’ Rule. As we have argued (in chapter 8), neglecting base rates leads to errors on highly significant problems. So we should avoid that reasoning strategy if possible. But there is considerable evidence that subjects don’t find it easy to use the Bayesian reasoning strategy. For example, the study by Casscells, Schoenberger, and Grayboys (1978), even though flawed (see our discussion in chapter 8, section 1), suggests that the faculty and staff at Harvard Medical School had a difficult time using Bayes’ Rule. This is disturbing. Consider, first, that medical doctors are, as a group, very intelligent; second, they (unlike most people) have been introduced to Bayes’ Rule in their studies (at least, we hope they have); third, medical doctors are faced with diagnosis problems all the time; and fourth, these problems are highly significant for medical doctors. They have very weighty moral and prudential reasons to be as accurate as they can be in drawing conclusions about their patients’ health on the basis of medical tests. And surely most doctors must know that diagnosis problems are highly significant. When it comes to implementing a reasoning strategy, one would think that these conditions are about as ideal as one can realistically hope for. So if the faculty and staff at Harvard Medical School can’t get diagnosis problems right, this suggests there’s trouble.

Gigerenzer and Hoffrage describe three physicians who dropped out of an experiment in which they were asked to engage in diagnostic reasoning. One university professor “seemed agitated and affronted by the test and refused to give numerical estimates.” The professor said, “This is not the way to treat patients. I throw all these journals [with statistical information] away immediately. One can’t make a diagnosis on such a basis. Statistical information is one big lie” (Hoffrage and Gigerenzer 2004, 258). We can’t help but worry about this doctor’s patients. These are people who might have a serious disease and who need to make treatment decisions. Surely, they would benefit from a clear idea of the likelihoods facing them.

From our perspective, these results strongly suggest that the Bayesian reasoning strategy (as represented above) is not particularly tractable for most people. For most people, the start-up costs are high (i.e., it’s hard to learn) and the benefits are low (i.e., it’s hard to successfully apply to cases given the cognitive resources most of us bring to such problems). It is worthwhile to investigate whether there is some other reasoning strategy that avoids the inaccuracies of base rate neglect and that also avoids the high costs of the Bayesian strategy. Fortunately, Gigerenzer and Hoffrage (1995) have shown how to dramatically improve people’s reasoning on diagnosis problems without a lot of complicated statistical training. It turns out that people do much better on these sorts of problems when they are framed in terms of frequencies rather than probabilities. The best way to see this is with an example. Here are two mathematically equivalent formulations of a diagnosis problem:

Probability format. The probability of breast cancer is 1% for women at age forty who participate in routine screening. If a woman has breast cancer, the probability is 80% that she will get a positive mammography. If a woman does not have breast cancer, the probability is 9.6% that she will also get a positive mammography. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer? ___%.

Frequency format. 10 out of every 1,000 women at age forty who participate in routine screening have breast cancer. 8 of every 10 women with breast cancer will get a positive mammography. 95 out of every 990 women without breast cancer will also get a positive mammography. Here is a new representative sample of women at age forty who got a positive mammography in routine screening. How many of these women do you expect to actually have breast cancer? ___ out of ___.

People with no training in statistics tended to do much better on problems in the latter frequency format. Gigerenzer and Hoffrage report that 16% of subjects faced with probability formats got the Bayesian answer, while 46% of subjects faced with frequency formats got the Bayesian answer (693). These results suggest an obvious reasoning strategy: When faced with a diagnosis problem framed in terms of probabilities, people should learn to represent and solve the problem in a frequency format. The frequency format solution to this (or any) diagnosis problem would involve five steps (adapted from Gigerenzer and Hoffrage 1995):

1. Draw up a hypothetical population of 1,000. (Literally, draw a rectangle that represents 1,000 people.)
2. Base rate cut: How many (of 1,000) have the disease? Answer: 10 (1% of 1,000).
3. Hit rate cut: How many of those with the disease will test positive? Answer: 8 (test sensitivity is 80%). (In a corner of the rectangle, color in the space representing the 8 true positives.)


4. False alarm cut: How many of those (990) without the disease will test positive? Answer: 95 (9.6% of 990). (In another corner of the rectangle, color in the space representing the 95 false positives.)
5. Comparison step: What’s the fraction of true positives (8) among the positives (8 + 95)? Answer: 8/103, or about 7.8%.
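In code, the five steps amount to a couple of multiplications and a division. Here is a minimal sketch of the procedure just described (our illustration; the function and variable names are ours):

```python
def frequency_format(base_rate, hit_rate, false_alarm_rate, population=1000):
    """Solve a diagnosis problem by the five-step frequency format."""
    # Steps 1-2: hypothetical population and base rate cut.
    with_disease = round(base_rate * population)                  # 10
    without_disease = population - with_disease                   # 990
    # Step 3: hit rate cut (true positives).
    true_positives = round(hit_rate * with_disease)               # 8
    # Step 4: false alarm cut (false positives).
    false_positives = round(false_alarm_rate * without_disease)   # 95
    # Step 5: comparison step.
    return true_positives / (true_positives + false_positives)

# The mammography problem:
print(frequency_format(0.01, 0.80, 0.096))  # 8/103, about 0.078
```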

There is no mystery why subjects have an easier time with the frequency format than the probability format. First, the frequency format makes the base rate information transparent. Second, the frequency format requires performing a much easier calculation.

The calculation for the probability format: .01 × .8 / [(.01 × .8) + (.99 × .096)]

The calculation for the frequency format: 8/(8 + 95)
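A quick check (our arithmetic) that the two calculations deliver the same answer:

```python
prob = (.01 * .8) / ((.01 * .8) + (.99 * .096))  # probability format
freq = 8 / (8 + 95)                              # frequency format
print(round(prob, 3), round(freq, 3))            # 0.078 0.078
```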

Studies like the ones cited here provide a lot of evidence for thinking that people can reason better about frequencies than they can about probabilities (Gigerenzer et al. 1999). So here is a piece of advice that drops out naturally from our naturalistic epistemological theory: When tackling diagnosis problems, repackage the problem-task so that it will (for many people) naturally trigger a cognitive mechanism that will quickly and reliably get the Bayesian answer. By framing diagnosis problems in terms of frequencies rather than probabilities, people can reason about significant problems more reliably.

The start-up costs of adopting and implementing the frequency format are not negligible. One must learn to frame a diagnosis problem in terms of idealized populations and frequencies, and one must learn to apply the format’s five steps to problems. The reliability of the frequency format is considerably higher than that of neglecting the base rate; and we can confidently assert (having taught undergraduates both strategies) that the frequency format is significantly easier to learn to use than the Bayesian strategy. Should everyone learn to use the frequency format? This is very much an empirical issue, but we suspect not. Certainly any person whose profession involves drawing inferences from diagnostic tests (whether for disease or drug use) who cannot easily apply the Bayesian reasoning strategy should learn to use frequency formats (which is the recommendation of Gigerenzer and Hoffrage 1995). If there are institutions, policies, or practices in place that make it highly unlikely that people will suffer because of the mistakes of experts involved in diagnosing important conditions, it is not clear that everyone would need to go to the trouble of learning frequency formats. There might be good reasons for people to do so (e.g., to understand how highly reliable tests for rare conditions can generate many more false positives than true positives, to check on the diagnostic judgments of experts, etc.). But given the evidence we have reviewed, it seems unlikely that we are in a situation in which the risk of poor diagnosis is very low. If this is right, then it would behoove just about everyone who has the potential to get a serious disease, or who has a loved one who has the potential to get a serious disease, to understand how frequency formats work.

Here is a natural objection to the advice that (some) people adopt frequency formats: “Anyone who uses the frequency format is really computing Bayes’ Rule. Both are computing the same function: given a set of inputs, Bayes’ Rule and the frequency format will have the same answer as an output. So this advice provides no grounds for rejecting Bayes’ Rule.” (One might respond that they aren’t the same function, since they presuppose different views about probability. While this might be a legitimate objection, we intend to focus on what we think is a more serious problem with the argument.) The problem with this objection is that it confuses two things that must be kept distinct: Bayes’ Rule as a mathematical identity and Bayes’ Rule as a reasoning strategy (as a psychological process). As a mathematical identity, Bayes’ Rule is true. But most people can’t use the Bayesian reasoning strategy very well. So even though (in some sense) these two strategies compute the same formula, for reasons of computational difficulty, the Bayesian reasoning strategy just isn’t as good as the frequency format. In fact, the frequency format is quite different from the Bayesian strategy (described above). There are a number of different ways we might characterize the frequency format. But Gigerenzer and Hoffrage (1995) introduce it primarily as a means of improving doctors’ reasoning about diagnostic inferences. Narrowing its range in this way, we can characterize it as follows:

1. Cues: Base rate of disease; hit rate of the test; false positive rate of the test
2. Formula: true positives/total positives
3. Target: The likelihood that someone who tests positive for a disease actually has the disease
4. Range: Medical diagnoses based on medical tests

There are various ways one might try to extend the range of this reasoning strategy. (For example, one might extend it to apply to drug and alcohol tests.) While extending the strategy’s range would make it a more robust reasoning strategy, it is a thoroughly empirical claim whether or not this would improve it. This will depend in part on how well a reasoner can be expected to employ the more robust reasoning strategy; and it will also depend on how significant those extra reasoning problems are likely to be for the reasoner. On our view, it might well be that given the range of reasoning problems most people expect to face, the full Bayesian reasoning strategy is not worth the trouble. It is possible that the only significant reasoning problems most people are likely to face that require Bayesian reasoning are diagnosis problems (e.g., medical and drug tests). In that case, when it comes to offering normative guidance, the mathematical question of whether the frequency format calculation is identical to the Bayesian one is near enough irrelevant. The relevant issue is which of the two clearly different reasoning strategies people should adopt.

We suspect that many epistemologists will want to raise a version of the triviality objection: “Why does this example exhibit the superiority of your naturalistic theory over any other (remotely plausible) epistemological theory? Conjoin the empirical results discussed above with an epistemological theory. If the theory is remotely plausible, it will hold that under normal circumstances, for any diagnosis problem, the justified belief is delivered by Bayes’ Rule. So any plausible view can recommend the frequency format. Given our cognitive abilities, the frequency format will lead people to reason to justified beliefs better than alternative reasoning strategies.” This objection explicitly relies on the distinction between Bayes’ theorem as a mathematical identity and as a reasoning strategy. But it does so by divorcing from epistemology the issue of what reasoning strategy to adopt. The objection suggests that any plausible epistemological theory (foundationalist, coherentist, reliabilist, pragmatist, contextualist) will be consistent with any reasonable normative guidance about reasoning one might offer on the basis of psychological findings. But if this is really true, then how reason-guiding could these theories possibly be? If the practical normative content of all these very different theories is something like “Adopt justified beliefs, but we have no resources to tell you how to do this,” then these theories are like the financial advisor who takes his commission after offering the advice “Buy low and sell high.” This describes a desirable state of affairs, but it’s hardly guidance.
