2.1. Epistemic benefits


One might think that the benefits of reasoning will be a function of the accuracy of our judgments. Yes, but it will be a very complicated function. Accuracy by itself is cheap. What’s dear when it comes to reasoning is accuracy about significant problems. We will discuss the issue of significance in detail in chapter 6. Significance on our view is a property of a problem for a person—so a reasoning problem can be more or less significant for a person. The excellent reasoner will tend to focus on significant reasoning problems, even if those problems are difficult to solve. As a result, the excellent reasoner will often decide to execute a reasoning strategy that is not among the most reliable strategies available to her. Suppose an excellent reasoner is charged with making decisions about whether a potential parolee is likely to commit another violent crime. She will adopt the best reasoning strategy she can for that problem, even though tackling an easier problem (“At every 10-second interval, how many people are there in this room?”) will get her more true judgments. So in practice, the call for the excellent reasoner to tackle significant problems will mean that she will not maximize accuracy in her judgments. She will not come to maximize the overall reliability (or truth-ratio) of her beliefs.

The Costs and Benefits of Excellent Judgment 85
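The trade-off above can be made concrete with a toy calculation. The numbers and the significance-weighting scheme below are illustrative assumptions, not anything the authors propose:

```python
# Illustrative sketch: raw accuracy (truth-ratio) versus a hypothetical
# significance-weighted benefit. All figures are made up for the example.

def raw_accuracy(true_judgments, total_judgments):
    """Truth-ratio: the fraction of judgments that are true."""
    return true_judgments / total_judgments

def weighted_benefit(true_judgments, total_judgments, significance):
    """Toy measure: accuracy scaled by the significance of the problem."""
    return raw_accuracy(true_judgments, total_judgments) * significance

# Counting heads in a room: almost always right, but trivial.
counting = weighted_benefit(true_judgments=99, total_judgments=100,
                            significance=0.01)

# Parole prediction: far harder, but highly significant.
parole = weighted_benefit(true_judgments=70, total_judgments=100,
                          significance=1.0)

# The strategy with the higher truth-ratio delivers the smaller benefit.
assert raw_accuracy(99, 100) > raw_accuracy(70, 100)
assert parole > counting
```

On any such weighting, the reasoner who picks problems by significance will not end up with the highest attainable truth-ratio, which is exactly the point of the parole example.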

Significance also has a role to play in the decision to act on judgments. Certain significant problems are such that certain sorts of errors are more costly than others. In these cases, one might come to a judgment but act “as if” that judgment were mistaken. For example, in an environment in which social institutions have broken down and a significant minority of the population are armed, the epistemically excellent reasoner might well judge of any particular person who is not obviously armed that he or she is probably not armed. But she might act on the assumption that everyone is armed. (This sort of example is different from the ducking reflex. With the reflex, it isn’t obvious whether the ducker acquires a belief. Further, even if she does, it isn’t the result of higher-order processing over which she has any control. Applied epistemology is relevant only to those reasoning problems about which one has some control over how to reason.)
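The “act as if” point is a standard expected-cost asymmetry, and a short worked example may help. The probability and cost figures here are hypothetical, chosen only to show the structure of the case:

```python
# Illustrative sketch: a reasoner judges that any given person is
# probably NOT armed, yet the cheaper course of action is to behave
# as if everyone is armed. Probabilities and costs are hypothetical.

def expected_costs(prob_armed, cost_of_fatal_error, cost_of_precaution):
    """Return the expected cost of each course of action."""
    # Acting as if unarmed: pay the catastrophic cost when the person
    # actually is armed.
    act_as_if_unarmed = prob_armed * cost_of_fatal_error
    # Acting as if armed: pay a small precaution cost every time.
    act_as_if_armed = cost_of_precaution
    return act_as_if_unarmed, act_as_if_armed

# Judgment: probably not armed (p = 0.2, well below 0.5)...
unarmed_cost, armed_cost = expected_costs(prob_armed=0.2,
                                          cost_of_fatal_error=1000,
                                          cost_of_precaution=5)

# ...but the action that minimizes expected cost assumes everyone is armed.
assert armed_cost < unarmed_cost
```

Because one kind of error is vastly more costly than the other, the judgment (“probably not armed”) and the rational action (“treat as armed”) come apart, just as in the passage above.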

Our account of significance will not yield a notion of epistemic benefit that can be represented by units along a single dimension. The fact that we can’t accurately assign units of epistemic benefit to a reasoning strategy might seem like a serious problem for any cost-benefit approach to epistemology. And it would be if cost-benefit analyses had to exhaustively identify the benefits of good reasoning along a single dimension. But as we have already argued, even deeply imperfect cost-benefit analyses can be useful. So we propose to identify the benefits of a reasoning strategy in terms of its reliability. We can measure the reliability of a reasoning strategy; and this tracks reasonably well (in most cases) the real benefits of reasoning. Reliability is a measurable surrogate that stands in for a reasoning strategy’s epistemic benefits.

Some might worry about such an obviously flawed cost-benefit approach playing such a central role in applied epistemology. Two points should help assuage this worry. First, we begin our cost-benefit analysis by recognizing that what we’re counting as the benefit of reasoning is only a stand-in, and more importantly, by recognizing the way in which this stand-in is flawed (i.e., it ignores significance). Given that we include in our theory an account of significance, the applied epistemologist can readily identify those cases in which reliability is likely to closely gauge the real benefits of reasoning and those cases in which it is not likely to closely gauge the real benefits of reasoning. As a result, we can decide to trust some particular analysis (when the surrogate tracks the real benefits), and we can decide to ignore or amend another analysis (when the surrogate does not track the real benefits). The second reason not to worry too much about this flaw in our cost-benefit approach arises out of our view about what is the central task of applied epistemology: to suggest reasoning strategies that are tractable, robustly reliable, and focused on problems that tend to be highly significant. We can explore each of these three factors independently: We can begin by noting what sorts of problems tend to be highly significant for people, and then we can search for reasoning strategies that are both tractable and robustly reliable on those problems. (In fact, this is essentially what we do in chapter 9.)
