4.3. Adopting more reliable, more expensive reasoning strategies


One might adopt a reasoning strategy that brings more benefits but is more costly than the old strategy. Such a change will sometimes (but not always) lead to better reasoning (top right quadrant of Figure 3.5). Suppose Test Taker is currently expending c resources on reasoning strategy D; but he has the time and energy to employ c1 resources on these problems (Figure 3.6). At this expenditure of resources, E is the most reliable reasoning strategy available. Now should Test Taker quit D (at cost c) in favor of E (at cost c1)? The answer is, of course, it all depends. If he were to switch to E, he would increase his reliability on these particular reasoning problems. But whether this change leads to better overall reasoning all depends on whether the gain in reliability in this portion of the test more than offsets the loss of reliability that results from spending fewer resources on the other portion of the test. Although this may seem odd to say, the most locally reliable reasoning strategy is not always the best overall reasoning strategy. That’s because given resource limitations, the optimization of global reliability often requires that local reliability not be optimized.
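To make the tradeoff concrete, here is a toy calculation with hypothetical numbers (ours, not the authors’). Suppose the test has two equally weighted sections, Test Taker has 10 units of effort in total, and reliability on the second section depends on whatever effort is left over for it:

\[
\begin{aligned}
\text{Section 1: } & D \text{ costs } c = 4 \text{ and is } 70\% \text{ reliable; } E \text{ costs } c_1 = 7 \text{ and is } 80\% \text{ reliable.}\\
\text{Section 2: } & \text{the leftover effort yields } 85\% \text{ reliability with } 6 \text{ units, but only } 60\% \text{ with } 3 \text{ units.}\\
\text{Switch to } E: \; & \tfrac{1}{2}(0.80) + \tfrac{1}{2}(0.60) = 0.70. \qquad
\text{Keep } D: \; \tfrac{1}{2}(0.70) + \tfrac{1}{2}(0.85) = 0.775.
\end{aligned}
\]

On these made-up numbers, the locally superior strategy E lowers overall reliability, which is exactly the possibility described above.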

All reasoning strategies have opportunity costs (i.e., what is forgone by not devoting resources to the best available alternative). The devotion of cognitive resources to one problem typically prevents or hinders us from spending time and energy on something else. Our point here is akin to one made years ago by Simon (1982) about satisficing and about bounded rationality in general: sometimes it is better to adopt reasoning and decision procedures that are good and cheap rather than great and expensive. We take Simon’s point to be that from the perspective of prudential rationality, one ought not always use the ideal (in the sense of “maximally accurate”) reasoning strategy. Given the choice between reasoning in an ideal fashion about X, or using the same energy to reason very well (but less than ideally) about X and taking your kid fishing, most problems we face aren’t so significant that it would be worth missing out on the fishing trip. But there is a stronger point to make. When a reasoner has a choice between two tractable reasoning strategies (i.e., reasoning strategies she can actually employ), sometimes the reasoner ought to adopt the cheaper and less reliable strategy—even from a purely epistemic perspective. This will occur when the opportunity cost comes in the form of a forsaken epistemic benefit. In these cases, part of the cost of devoting those resources to one problem is not having made any headway on some other problem.

There are many examples of highly reliable reasoning strategies that come with high costs. For example, any epistemological theory that recommends Bayes’ Rule for updating belief is recommending a reasoning strategy that is more reliable but also more expensive than the ones a reasoner is likely using. Another high-cost prescription is that reasoners’ knowledge should be closed under entailment (Cherniak 1986). Many have made the point that strategies that are in practice impossible to implement cannot enhance epistemic excellence. But given our discussion of opportunity costs, there’s a brash lesson to draw: a reasoning strategy that is more costly and more reliable, but not so costly that it can’t be used, still does not necessarily enhance epistemic excellence. By expending more resources on a new and improved reasoning strategy, one inevitably takes resources that could be used elsewhere. And if those extra resources could be better used elsewhere, then one could be a better reasoner if one retained the less reliable reasoning strategy and used the extra resources more effectively.
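For reference, the rule in question is the standard one (this formulation is ours, not the authors’):

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}.
\]

Even in this simple two-hypothesis form, applying it requires keeping track of a prior and two likelihoods for every belief one updates, which is part of what makes it more expensive than the shortcuts reasoners typically rely on.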

Ameliorative Psychology recommends a number of reasoning strategies that would likely be more expensive to implement and execute than the reasoning strategies most people currently employ. For example, deciding to use frequency formats or a consider-the-opposite strategy (see chapter 9) comes with nonnegligible start-up costs. And even if we ignore start-up costs, these strategies are likely to be more expensive to execute than most reasoners’ default strategies. As a result, it is not inevitable that such reasoning strategies will make someone a better reasoner—even though such strategies are more reliable than reasoners’ actual reasoning strategies and can in practice be implemented and executed.
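As a rough illustration of what a frequency format involves (the numbers here are hypothetical and ours, not an example from chapter 9), the reasoner restates a diagnostic problem in counts rather than percentages:

\[
\begin{aligned}
&\text{Of } 1000 \text{ people, } 10 \text{ have the condition and } 8 \text{ of them test positive;}\\
&\text{of the } 990 \text{ who do not, } 99 \text{ also test positive. So}\\
&P(\text{condition} \mid \text{positive}) \approx \frac{8}{8 + 99} \approx 0.07.
\end{aligned}
\]

Learning to recast problems this way takes training and practice up front, which is the kind of start-up cost at issue here.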
