4. A practical framework for improved reasoning
While we have not yet fully spelled out the normative framework that
supports the prescriptions of Ameliorative Psychology, we have described
a broadly cost-benefit approach to epistemology that takes significant
truths to be a primary benefit. Even with this sketchy theoretical framework
in hand, we can begin to piece together a unified approach to thinking
about applied epistemology, or the ways in which people’s reasoning
can be improved.
Applied epistemology is essentially about second-order reasoning
strategies. It concerns thinking about how we can better think about the
world. Our view takes applied epistemology to involve a cost-benefit approach
to thinking about how we ought to allocate cognitive resources and
replace old reasoning strategies with new ones. Getting clear about the
nature of the costs and benefits of reasoning is a tricky issue, one we will
address in chapter 5. For now, we can introduce this cost-benefit approach
with an artificial but familiar epistemic challenge: an aptitude test. Suppose
a test has two different parts, and a Test Taker is disposed to apply
different reasoning strategies to each part of the test. We can define a crude
notion of epistemic benefits in this particular setting in terms of correct
answers, and we can define a notion of epistemic costs in terms of
elapsed time. Very roughly, Test Taker’s reasoning on the test is better to
the extent he gets more right answers in a shorter amount of time. (This
view has obvious problems; we introduce it here only for illustrative
purposes.)
Suppose Test Taker is using strategy A on the verbal section of the test
and strategy B on the quantitative section of the test. We can represent
these two strategies using cost-benefit curves that plot the total number of
right answers each strategy can be expected to generate as a function of the
time spent on it. We
will assume that n is the total number of problems on each section of
the test (see Figure 3.1). The cost-benefit curves have a particular kind
of shape—a rapid increase with a steady leveling off. This leveling off
represents a reasoning strategy’s diminishing marginal utility: each additional
unit of resources expended on the strategy brings a steadily smaller benefit.
This is why cost-benefit curves typically rise steeply and then flatten out
rather than climbing in a straight line. Reasoning, like most of life, is full of examples
of diminishing marginal returns. For instance, if we were to spend eighteen
more years lovingly polishing this book, it would end up being only
slightly better than it is.
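The shape of these curves is easy to make concrete. The sketch below is our own minimal illustration, not anything taken from Figure 3.1: we simply assume a saturating curve of the hypothetical form n(1 − e^(−kt)) for the expected number of right answers after t minutes, with made-up values for n and k, to show how each additional block of time buys fewer additional answers.

```python
import math

def expected_right_answers(t, n=30, k=0.15):
    """Illustrative cost-benefit curve: expected right answers after t minutes.

    n is the number of problems on the section and k controls how quickly the
    strategy saturates. The form n * (1 - e^(-k*t)) is an assumption chosen
    only to reproduce the shape described in the text: a rapid initial rise
    that levels off toward n.
    """
    return n * (1 - math.exp(-k * t))

# Diminishing marginal returns: each extra ten minutes adds fewer right answers.
for t in (10, 20, 30, 40):
    gain = expected_right_answers(t) - expected_right_answers(t - 10)
    print(f"minutes {t - 10:2d} to {t:2d}: +{gain:.1f} expected right answers")
```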
From a cost-benefit perspective, then, the obvious question to ask is:
What is the best way to distribute Test Taker’s finite resources to these two
reasoning strategies? The most effective allocation, the one that would
maximize expected reliability (or accuracy), would be the one that made
the marginal expected reliability (MER) of both reasoning strategies equal.
The marginal expected reliability of a reasoning strategy given some quantity
of resources expended on that strategy is basically the benefit one gets
from the last resource expended on that reasoning strategy. If on Figure
3.2, the cost expended on A is ca, then the MER of that reasoning strategy
at that cost is given by the slope of the tangent to the cost-benefit curve at ca: Δy/Δx. If
Test Taker has (ca + cb) resources, then to maximize his right answers, he
should devote ca resources to strategy A and cb resources to strategy B. At
those points, the MER of both cost-benefit curves is identical. If Test Taker
were to devote fewer than ca resources to A and more than cb resources
to B, he would lose net reliability—he’d lose more truths sliding down A’s
cost-benefit curve than he would gain by moving up B’s cost-benefit curve.
Figure 3.2. Optimizing resource allocation: Equalizing
marginal expected accuracy.
The same general point would hold if Test Taker were to devote greater
resources to A and fewer to B.
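To see why equalizing MER is the right target, a small numerical sketch can help. It is ours, not the authors’: we assume hypothetical saturating curves for strategies A and B and a budget of sixty minutes, then search every whole-minute way of splitting that budget between the two sections. The best split is the one at which the two marginal values come out (nearly) equal.

```python
import math

def benefit(t, n, k):
    """Expected right answers from t minutes on a section (illustrative form)."""
    return n * (1 - math.exp(-k * t))

def marginal(t, n, k, dt=1e-4):
    """Marginal expected reliability: extra right answers per extra minute."""
    return (benefit(t + dt, n, k) - benefit(t, n, k)) / dt

# Hypothetical curves for strategy A (verbal) and strategy B (quantitative).
A = dict(n=30, k=0.20)
B = dict(n=30, k=0.10)
TOTAL = 60  # total minutes available for the whole test

# Brute-force search over every whole-minute split of the budget.
best = max(range(TOTAL + 1),
           key=lambda ta: benefit(ta, **A) + benefit(TOTAL - ta, **B))
print("best split:", best, "minutes on A,", TOTAL - best, "minutes on B")
print("MER of A at the optimum:", round(marginal(best, **A), 3))
print("MER of B at the optimum:", round(marginal(TOTAL - best, **B), 3))
# At the optimal split the two marginal values are (nearly) equal, which is
# the equal-MER condition described in the text; with whole-minute splits the
# match is approximate.
```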
In order to think clearly about applied epistemology, it is important
to recognize that reliability is a resource-dependent notion. How reliable a
reasoning strategy is depends on the resources expended on it. This insight
is built right into the cost-benefit curves: A reasoning strategy’s reliability
is a function of the amount of resources devoted to it. To see why the
resource dependence of reliability is important to applied epistemology,
consider the example depicted in Figure 3.3. Suppose there are three strategies
available to Test Taker for solving the quantitative problems on the
aptitude test. Among these three strategies, which is the most reliable?
That’s a poorly framed question (sort of like, “Is Larry taller than?”). At
low costs (e.g., at c), D is the most reliable strategy; at high costs (e.g., at
c1), E is the most reliable strategy. In this case, there is no strategy that is
more reliable at all costs. There is, in short, no strategy that dominates all
other strategies. Now suppose also that the line at c represents the maximum
resources Test Taker can employ on these problems. So for all
attainable possibilities, strategies C and D dominate strategy E. Further,
strategy D dominates strategy C. Given this set of options, it is clear that D
is the epistemically best strategy Test Taker can employ. If he is currently
using strategy C or E, by switching to D, he can attain the same level of
reliability more cheaply, or he can attain greater reliability at the same cost.
Figure 3.3. Resource-dependence of accuracy.
(There is a problem here about individuating reasoning strategies. At c on
Figure 3.3, it’s not clear it makes sense to say that E is even implemented.
The question of whether a reasoning strategy has in fact been implemented
at a particular point along the cost-benefit curve is a tricky one, and one
that probably does not always admit of a definite answer. It can only be
adequately addressed by examining the details of how it is employed by a
reasoner in a particular context.)
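The point that which strategy is most reliable depends on the resource level can also be put numerically. In the sketch below the three curves are hypothetical stand-ins for C, D, and E; their parameters are our assumption, chosen only so that the curves cross as they do in Figure 3.3, with D ahead on a small time budget and E ahead on a large one.

```python
import math

# Hypothetical cost-benefit curves for the three quantitative strategies.
# The parameters are assumptions chosen so that the curves cross, as in
# Figure 3.3: D does best on a small time budget, E overtakes it on a large one.
def curve_C(t): return 20 * (1 - math.exp(-0.10 * t))
def curve_D(t): return 25 * (1 - math.exp(-0.12 * t))
def curve_E(t): return 30 * (1 - math.exp(-0.03 * t))

def most_reliable(budget):
    """Return the strategy with the most expected right answers at this budget."""
    scores = {"C": curve_C(budget), "D": curve_D(budget), "E": curve_E(budget)}
    return max(scores, key=scores.get)

print(most_reliable(10))   # low budget: D
print(most_reliable(120))  # high budget: E
```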
There is one more item to note when doing applied epistemology. So far,
our discussions of the cost of reasoning strategies have focused on the resources
(represented by the time) it takes to execute a reasoning strategy. But
we have ignored a very important class of costs—start-up costs. These are
costs associated with adopting new reasoning strategies. Such costs include
search costs (the cost of searching for more reliable reasoning strategies) and
implementation costs (the cost of learning to use, and then deploying, a new
strategy). Our discussion of replacing C with D has assumed that D incurs no
start-up costs. But this is unrealistic. So let’s suppose that there are start-up
costs (s) associated with replacing C with D, as depicted in Figure 3.4. Now,
even though D dominates C when start-up costs are ignored, it does not once
they are counted. In fact, Test Taker might become a worse reasoner by
replacing C with D. One obvious way this might happen is if paying the start-up
costs for adopting D is simply beyond Test Taker’s means. In that case, he has
traded in a reasoning strategy (C) that gives him some right answers for
another (D) that he can’t even use—so he gets no right answers.
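A simple way to picture the effect of start-up costs, on our illustrative model: suppose the first s units of Test Taker’s budget are consumed by finding and learning the new strategy, so that only what remains produces answers. The numbers below are made up; the point is only that when s exceeds the budget, the nominally better strategy yields nothing at all.

```python
import math

def benefit_with_startup(t, s, n=25, k=0.12):
    """Expected right answers when the first s units of the budget are spent
    finding and learning the new strategy (the start-up cost). Only the
    remaining t - s units produce answers; the curve itself is illustrative."""
    usable = max(0.0, t - s)
    return n * (1 - math.exp(-k * usable))

BUDGET = 10
print(round(benefit_with_startup(BUDGET, s=0), 1))   # no start-up cost: ~17.5
print(round(benefit_with_startup(BUDGET, s=15), 1))  # start-up exceeds budget: 0.0
```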
Start-up costs tend to be a conservative epistemic force—they give
default or current reasoning strategies a built-in advantage when it comes
to epistemic excellence (Sklar 1975). A number of philosophers accommodate
start-up costs in their accounts of belief-change. For example, the
so-called conservation of belief is people’s tendency not to change their beliefs without substantial reason (Harman 1986). One reason for this
conservatism is start-up costs. But it is important to understand that the
relative importance of start-up costs depends on the time frame in
which we make our epistemic judgments. For example, suppose Sam is
faced with a stack of 200 applications that must be ranked within 24 hours,
and he is comfortable with his current reasoning strategy. The start-up
costs associated with any alternative reasoning strategy for ranking those
200 dossiers in the next 24 hours may be so high that Sam can’t do better
than use his current strategy. In other words, by the time Sam found a
better strategy and learned how to use it, he would not have the resources
to actually rank the dossiers. So even if some other strategy is clearly more
reliable than the one Sam uses, that’s no help if Sam can’t find, learn, and
execute the strategy in a timely fashion. But now suppose we take a longer
view. Suppose we ask what strategy Sam should use on the dossiers he will
face every year for the next 30 years. In this case, the start-up costs associated
with adopting a new strategy might be easily borne. Further, the
start-up costs might be insignificant next to the long-term execution costs
of the competing strategies. If the new strategy were significantly easier to
use than the old, in the long run, it might be cheaper to pay the start-up
costs and adopt the new strategy.
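The long-run point can be made with back-of-the-envelope arithmetic. The figures below are entirely hypothetical: suppose Sam’s current strategy costs him twenty hours per yearly batch of dossiers, the new one would cost twelve, and switching carries a one-time start-up cost of forty hours of search and training. Over a single batch the switch is a bad bargain; over thirty years it clearly pays.

```python
# Break-even sketch for Sam's case; all of the numbers are assumed.
startup = 40        # one-time hours to find and learn the new strategy
old_per_year = 20   # hours per yearly batch with the current strategy
new_per_year = 12   # hours per yearly batch with the new strategy

for years in (1, 30):
    old_total = years * old_per_year
    new_total = startup + years * new_per_year
    decision = "switch" if new_total < old_total else "keep the current strategy"
    print(f"{years} year(s): old = {old_total}h, new = {new_total}h -> {decision}")
```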
We now have in hand some very basic tools of applied epistemology—
cost-benefit curves, start-up costs, and marginal expected reliability. This
approach to applied epistemology provides new insights and useful categories
for understanding reasoning excellence. One insight yielded by this
cost-benefit approach to epistemology is that there are four (and only
four) ways one can become a better reasoner. This fourfold, exhaustive
characterization of “improved reasoning” is (we believe) original, and it
raises practical possibilities for improved reasoning that have been largely
overlooked in the epistemological literature.
A good way to introduce the Four Ways is to focus on Test Taker’s
approach to the aptitude test. Three of the four ways one can become a better
reasoner are represented in Figure 3.5. This figure represents four possible
outcomes of replacing one reasoning strategy with another. The horizontal
dimension represents the costs of the new strategy as compared to the old
one (higher vs. same or lower); and the vertical dimension represents the
benefits of the new strategy at that cost compared to the old one (greater vs.
same or less). The first two ways one can become a better reasoner involve
adopting new reasoning strategies that bring greater benefits—more right
answers (or, in more realistic cases, more significant truths). Let’s consider
some illustrations of the Four Ways to better reasoning.