3. The cost-benefit imperative
Our cost-benefit approach to epistemology takes elapsed time to be a surrogate
for epistemic costs and reliability to be a surrogate for epistemic benefits.
This approach has at least two important virtues. The first is that our
surrogates (time and reliability) are measurable. This means that the central
components of applied epistemology (or at least their rough approximations)
can be empirically determined in the following sense: (a) The cost-benefit
curve is determined by observed outcomes in performance; and (b)
the curve can then be successfully used as a basis for predictions of performance.
So the central theoretical components of applied epistemology—
or at least rough approximations of them—can in principle be tested for
accuracy, rather than for their ability to stand up to imagined counterexamples.
The second virtue of our approach is that our surrogates roughly
track the properties of interest (the costs and benefits of reasoning). As a
result, the central tool of applied epistemology is reasonably accurate.
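To make (a) and (b) concrete, here is a minimal sketch of how a cost-benefit curve could be estimated from observed performance and then used predictively. The data points and the saturating functional form are illustrative assumptions, not results from any study discussed here.

```python
# A toy illustration of (a) and (b): estimate a cost-benefit curve from
# observed (time, reliability) outcomes, then use it to predict performance
# at an unobserved level of effort. The data points and the saturating
# functional form are assumptions made purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def reliability_curve(t, r_max, k):
    """Reliability as a function of time spent: rises quickly, then flattens."""
    return r_max * (1.0 - np.exp(-k * t))

# Hypothetical observed outcomes: minutes spent per case vs. observed hit rate.
time_spent = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
hit_rate = np.array([0.50, 0.60, 0.66, 0.69, 0.70])

# (a) The curve is determined by observed outcomes in performance.
params, _ = curve_fit(reliability_curve, time_spent, hit_rate, p0=[0.7, 0.5])

# (b) The curve is then used as a basis for predicting performance.
print(f"Predicted hit rate at 12 minutes per case: {reliability_curve(12.0, *params):.2f}")
```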
At this point, one might point out that successful SPRs have typically
been introduced without the explicit use of the formal machinery of cost-benefit
analysis we have introduced here. So one might wonder whether
this machinery is really required to address the efficient allocation of cognitive
resources. Are we shooting a mosquito with a bazooka? We don’t think
so, for two reasons. First, cost-benefit curves are good teaching tools. When
repairing individual reasoning strategies, it is helpful for the individual
to see, in stark and unapologetic terms, how poorly they are performing,
even by their own lights. And nothing does that like a curve—even if the
curve does not capture everything of value. Cost-benefit curves are at once
painfully accessible and mercifully impersonal. Consider the Goldberg
Rule we introduced in chapter 2. This rule predicts whether a psychiatric
patient is neurotic or psychotic on the basis of an MMPI profile. When
tested on a set of 861 patients, the Goldberg Rule had a 70% hit rate;
clinicians’ hit rates varied from a low of 55% to a high of 67%. We can set
out the choice between the Goldberg Rule and clinical prediction in terms
of what their cost-benefit curves might look like (see Figure 5.1).
The cost-benefit curve for the Goldberg Rule is very steep and hits its
near-maximum reliability after a rather modest expenditure of resources.
That’s because it doesn’t require many resources to achieve a high degree
of accuracy; spending more resources (by checking whether one has
plugged in the proper values and done the arithmetic correctly) is likely to
bring very small increments in reliability. Clinical prediction requires
greater resources than the Goldberg Rule but never achieves its reliability.
This point can be made vivid with a curve.
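To see how such a figure might be drawn, the following sketch plots two hypothetical curves that respect only the qualitative facts just reported: the Goldberg Rule rising steeply toward a 70% hit rate and then flattening, and clinical prediction consuming more resources while topping out between 55% and 67%. All specific numbers and functional forms are assumptions made for illustration.

```python
# A sketch of what Figure 5.1 might look like, with made-up numbers that only
# respect the qualitative facts reported above: the Goldberg Rule climbs
# steeply to roughly a 70% hit rate and then flattens, while clinical
# prediction consumes more resources and tops out lower (55% to 67%). Every
# specific value and functional form here is an illustrative assumption.
import numpy as np
import matplotlib.pyplot as plt

effort = np.linspace(0.0, 10.0, 200)  # cognitive resources expended (arbitrary units)
goldberg = 0.70 * (1.0 - np.exp(-2.0 * effort))    # steep rise, flat near 70%
clinical = 0.67 * (1.0 - np.exp(-0.25 * effort))   # slower rise, lower ceiling

plt.plot(effort, goldberg, label="Goldberg Rule")
plt.plot(effort, clinical, label="Clinical prediction")
plt.xlabel("Resources expended")
plt.ylabel("Reliability (hit rate)")
plt.legend()
plt.show()
```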
There is a second way that an explicit cost-benefit approach can be
useful. It can help to bring a certain kind of discipline to reasoners. Recall
the selective defection findings. Those who are given the Goldberg Rule
and allowed to selectively defect from it end up reasoning less reliably than
the rule itself; and many who know about the interview effect nonetheless
insist on doing unstructured interviews and drawing conclusions from
them. This hapless defection is typically undesirable. If we have two strategies
for solving a problem and one is more reliable, it is folly to use the
less reliable strategy to correct the more reliable one. There are some very
limited situations in which defection is warranted (see our discussion in
chapter 2). We can represent the costs and benefits of selective defection
with a cost-benefit curve, which might have something like the shape
shown in Figure 5.2. This curve suggests that with minimal cognitive resources
(i.e., those resources necessary to find the relevant scores, do the
simple arithmetic, and determine whether the sum is or is not greater than
45), the reasoner can attain a 70% accuracy rate. But by devoting more resources
to the problem (i.e., by using information from the MMPI profile
to try to improve on the Goldberg Rule’s prediction), the reasoner
degrades his reliability. This finding—that there is a point beyond which
the additional effort associated with considering more information degrades
performance—was also found in the interview effect (chapter 2). So when our cognitive limitations tempt us with a reasoning strategy that is both
subjectively seductive and systematically defective, it is time to lean on a
cognitive prosthetic. Cost-benefit analysis is that cognitive prosthetic.
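For concreteness, the decision procedure just described can be written out in a few lines. The text here says only that the relevant MMPI scores are combined by simple arithmetic and the result compared to 45; the particular scales and signs in the sketch below follow the standard statement of Goldberg's rule and, like the example profile, should be read as illustrative assumptions.

```python
# A minimal sketch of the decision procedure described above: find the
# relevant MMPI scale scores, do the simple arithmetic, and check the result
# against 45. This section says only that a sum of relevant scores is compared
# to 45; the particular linear combination below (L + Pa + Sc - Hy - Pt)
# follows the standard statement of Goldberg's rule and should be treated as
# an assumption for illustration, as should the example profile.

def goldberg_rule(profile, cutoff=45.0):
    """Classify a patient as 'psychotic' or 'neurotic' from an MMPI profile."""
    score = (profile["L"] + profile["Pa"] + profile["Sc"]
             - profile["Hy"] - profile["Pt"])
    return "psychotic" if score > cutoff else "neurotic"

# Hypothetical profile (T scores); following the rule takes almost no effort.
patient = {"L": 50, "Pa": 65, "Sc": 70, "Hy": 60, "Pt": 55}
print(goldberg_rule(patient))  # 50 + 65 + 70 - 60 - 55 = 70 > 45 -> "psychotic"
```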
We understand the temptations of defection. We know what it’s like
to use a reasoning strategy of proven reliability when it seems to give an
answer not warranted by the evidence. It feels like you’re about to make an
unnecessary error. And maybe you are. But in order to make fewer errors
overall, we have to accept that we will sometimes make errors we could
have corrected—errors that we recognized as errors before we made them
but made them nonetheless (Einhorn 1986). (Of course, the point is that
in these situations, more often than not, what we think will be an error
in fact won’t be.) People often lack the discipline to adhere to a superior
strategy that doesn’t ‘‘feel’’ right. Reasoning in a way that sometimes
‘‘feels’’ wrong takes discipline. And one way to impose that discipline is to
think about applied epistemology in terms of costs and benefits. All reasoning
strategies will lead to error (costs). When it comes to deciding
whether to defect from an SPR, defection typically brings more errors at greater
effort; its net benefits are outweighed by the net benefits of diffident
acquiescence to the SPR. When it comes to learning to reason better,
discipline is essential; and we can think of no more effective way to impose
that discipline than with a cost-benefit approach to applied epistemology.
Robyn Dawes (2001) notes that piloting an airplane by sight can give
you the powerful impression that you are flying right-side up when you
are actually upside down. So pilots learn to fly by instruments. This takes
some doing, but pilots are repaid with longer lives. To the extent that
individuals appreciate the benefits of averting costly error, they need to fly
by cost-benefit instruments. Adhering to a cost-benefit analysis may feel
wrong, but then again so does flying by instruments at first.
We fear this discussion may have made us appear like unduly strict
disciplinarians, so let us end on a gentler note. It would be irresponsible
for any applied epistemology that announces the importance of efficiency
to ignore the controllable inefficiency of reasoners. And we worry about
just how far discipline can really go in saving reasoners from ill-fated
temptation. For certain sorts of reasoning problems, applied epistemology
might well recommend strategies—reasoning or otherwise—that are designed
to develop and foster healthy reasoning dispositions. Why labor in
the construction of reasoning rules designed to correct our errant cognitive
impulses when we can cultivate actors who are not seduced by those
detrimental impulses in the first place? This point is reflected in good
advice for parents: It is probably less costly to cultivate your child’s many
interests—keeping them very busy and focused on satisfying activities—
than to teach them how to control or correct their drug addictions later.
Similarly, when thinking about the various ways people reason badly, it
may be easier to cultivate new habits than to revise how we reason. If S’s
reasoning rashly discounts the future, rather than force her to adopt new
reasoning strategies, it might be more effective to set up an automatic
withdrawal from her bank account into a retirement fund that comes with
stiff penalties for early withdrawals. Suppose S is tempted to become an
active stock-market trader because she takes a short-term rise in her portfolio
to be evidence of her financial cleverness. Rather than fill her head
with a lot of theories and statistics, it is probably easier to force her to
focus on time horizons of at least 10 years in the stock market and thereby
avoid the temptation of active trading. And what about an academic
department tempted by the interview effect? Take the money the department
spends to send interviewers to conferences and transfer it into research
accounts for junior faculty. The result will likely be better hires and
happier, more productive junior faculty.