3. The potential unavailability of objective reasons
The significance of a problem is determined by the strength of the reasons
one has for devoting resources to it. But often people, through no fault of
their own, don’t have access to those reasons. We often reasonably believe
that a problem is significant when it’s not. Trying to predict what gift a
spouse would enjoy for an anniversary might seem a fairly significant
problem, deserving to be pondered in considerable detail. But it’s probably
not if the spouse runs off with the neighbor a week before the anniversary.
Further, people will often, through no fault of their own, be correct about
whether a problem is significant but wrong about why. Stich (1990) offers the
example of a person who reasons to a true belief about when her plane
leaves but who doesn’t know the plane is doomed to crash. The
problem of finding out when the flight left was significant, but not for
the reason she thought. The fact that the significance of particular problems
will sometimes (or perhaps regularly) be unavailable to reasoners
might seem like a serious problem for our view. We claim to be offering
a normative epistemological theory that provides reasoning guidance. A
central aspect of our theory is that reasoners should focus on significant
problems. But we admit that reasoners will often not know which are the
significant problems. So how can our theory offer people useful reasoning
advice?
Even though a reasoner might not have a good sense of what problems
are most significant for him to tackle, this does not undermine our
theory. First, any theory that takes significance to be important will have
this problem; and any theory that does not take significance to be important
will be incapable of making positive normative recommendations.
So this worry is an unfortunate feature of the human condition, not a
weakness of our view. Second, recall the role that significance is supposed
to play in our epistemological theory. It directs reasoners to be prepared to
spend more resources improving or replacing reasoning strategies whose
range, as a general rule, tends to include significant reasoning problems. (It
might also direct reasoners to avoid spending resources on problems that
are negatively significant.) So the notion of significance plays a regulative
role in guiding the research of a prescriptive epistemology. A priority for
epistemology is to develop excellent reasoning strategies (i.e., reliable and
tractable for ordinary reasoners) that can be used on significant problems.
The fact that individual reasoners will sometimes be quite mistaken about
which problems facing them are the significant ones does not undermine
the epistemological project of recommending excellent reasoning strategies
(i.e., strategies that are robustly reliable, tractable, and applicable to
significant problems).
The call to allocate our cognitive resources to significant problems
places specific demands on the excellent reasoner. The most important demand concerns setting priorities. The priorities of the excellent reasoner
(and more generally, of the wise person) are set so that they may serve as a
means to human flourishing. Sometimes the excellent reasoner must replace
hot, spontaneous judgment with the cool administration of our
epistemic priorities. We might prioritize our projects so that they keep us
happily occupied. We might place a priority on family and friends because
their well-being matters to us. But when our interests are stable and
healthy, we don’t have to explicitly arrange these priorities—our interests
spontaneously direct us to significant problems. Our decision to have
children is more often the spontaneous result of a loving relationship than
it is the issue of a cold calculation that it will pay off in the long run. A
toned body can just as easily be the result of pleasant sport as it can be the
joyless consequence of scheduled maintenance. And like the beautiful dust
on a butterfly’s wings, these spontaneous interests result from natural ends
that subtly sculpt our lives. When determining the significance of the problems
we face, we should attend to these contours.
We should emphasize that a successful epistemological tradition will
not demand that the responsibility for reasoning excellence be shouldered
entirely by individuals. The well-ordered social presence of a reason-guiding
epistemology should promote the proper distribution of epistemic responsibility.
Institutions can make it more likely that individuals will act responsibly,
through, for example, proper training, institutional procedures,
a well-designed system of incentives, or formal or informal sanctions.
The objection we are considering is an instance of a more general
worry about our theory. Strategic Reliabilism is a theory that sets forth the
conditions of reasoning excellence. This theory also holds out the promise
of an applied component, which will include reasoning advice we have
strong empirical reason to think is good reasoning advice. At the moment,
the practical content of Strategic Reliabilism is limited by the current state
of our well-tested, empirical knowledge about what sorts of reasoning
strategies are robustly reliable, tractable, and focused on significant matters.
Although limited, our view still recommends a number of specific strategies
that most people should adopt (e.g., frequency formats for diagnosis problems,
sketched below; the consider-the-opposite strategy to counteract overconfidence; and
others to be discussed in chapter 9). But we cannot guarantee that people
will follow this advice—some will not follow our advice because they have
never been introduced to it, others because they decide to ignore it. But
these possibilities are no objection to our theory. Our theory provides
useful advice—but that doesn’t mean it provides advice that everyone
can always use no matter what. An analogy might be helpful. The owner’s
manual for a car provides useful advice. It doesn’t follow that everyone,
regardless of their skill or knowledge, can use that advice profitably.
There is another aspect to the owner’s manual analogy. If there exists
only one copy of a Chevy Vega owner’s manual and it is locked in a vault
in Detroit, it is not available enough to be genuinely useful to Vega owners
(who are likely to need genuine help). Similarly, if the advice of Strategic
Reliabilism is to be restricted to highly specialized journals, then it will not
be available enough to be genuinely useful. That is why our view takes
seriously the idea that epistemology, like any science, ought to be a
well-ordered social system (Kitcher 2001). A well-ordered social system for
epistemology would have at least two features. First, in order to achieve its
ameliorative potential, epistemology should be organized so that it provides
a way to effectively communicate its established findings, particularly
its practical advice, to appropriate audiences. Second, in order to minimize
the risk of promoting harmful or mistaken findings, epistemology
should be organized so that whatever findings are communicated widely
have passed rigorous empirical scrutiny.
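To give a concrete sense of the practical advice we have in mind, here is a minimal sketch of the frequency-format strategy mentioned above (after Gigerenzer and Hoffrage). The numbers are illustrative assumptions, loosely modeled on the familiar mammography problem, not figures from this chapter:

```python
# Frequency formats: re-describe a probabilistic diagnosis problem as
# counts of concrete cases. All numbers below are illustrative assumptions.

base_rate = 0.01         # P(disease): 1% of patients
sensitivity = 0.80       # P(positive test | disease)
false_positive = 0.096   # P(positive test | no disease)

# Probability format: Bayes' theorem applied directly.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
ppv_bayes = (base_rate * sensitivity) / p_positive

# Frequency format: imagine 1,000 concrete patients.
n = 1000
sick = round(n * base_rate)                     # 10 patients are sick
true_pos = round(sick * sensitivity)            # 8 of them test positive
false_pos = round((n - sick) * false_positive)  # 95 healthy patients also test positive

# The answer is now a simple ratio of counts: 8 of 103 positives are sick.
ppv_freq = true_pos / (true_pos + false_pos)

print(f"probability format: P(disease | positive) = {ppv_bayes:.3f}")  # ~0.078
print(f"frequency format:   {true_pos} of {true_pos + false_pos} positives are sick")
```

Both formats yield the same answer, but the frequency version turns a base-rate problem that unaided reasoners routinely get wrong into a count that most find easy to read off.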
Recognizing the importance of significance in epistemology opens up
a pair of empirical issues that are perhaps deserving of more study: First,
what sorts of problems are significant that people tend to think are not
significant and so perhaps reason poorly or not enough about? For example,
people tend to unduly discount the future, as when they overvalue
small current increments of money compared to their compounded value
in the future. And second, what sorts of problems are not significant (or
perhaps ‘‘negatively’’ significant) that people tend to believe are significant
and so perhaps spend too much time and energy on? For example, people
tend to focus unduly on vivid, low-probability risks at the expense of pallid
but much higher-probability risks. Given the empirical nature of significance,
no theory can guarantee that significant problems are psychologically
available to us. The best our theory can do is to recommend strategies
that will improve our reasoning about matters of significance.
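Both tendencies can be made concrete with a little arithmetic. The following sketch uses hypothetical figures of our own choosing:

```python
# Two hypothetical illustrations; the figures are our own, not the text's.

# (1) Discounting the future: a small current increment of money versus its
# compounded value. $1,000 left to grow at 7% per year for 30 years:
principal, rate, years = 1_000.0, 0.07, 30
future_value = principal * (1 + rate) ** years
print(f"${principal:,.0f} today vs. ${future_value:,.0f} in {years} years")
# -> $1,000 today vs. $7,612 in 30 years

# (2) Vivid versus pallid risks: expected annual loss = probability * cost.
vivid_loss = (1 / 1_000_000) * 1_000_000    # dramatic hazard: rare but costly
pallid_loss = 0.01 * 10_000                 # dull hazard: common, modest cost
print(f"expected loss, vivid risk:  {vivid_loss:.0f}")    # 1
print(f"expected loss, pallid risk: {pallid_loss:.0f}")   # 100
```

On these made-up numbers the pallid risk is a hundred times worse in expectation, yet it is the vivid one that typically commands our attention.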