6. Counterexamples, counterexamples
A number of counterexamples against reliabilist theories of justification depend
on a disconnect between the reliability of a particular belief-forming
mechanism and the subject’s evidence for trusting that mechanism. To take a
classic case, a reasoner might have a perfectly reliable clairvoyant belief-forming
mechanism but no evidence for trusting it—in fact she might have
positive reasons for not trusting it (BonJour 1980, Putnam 1983). The reliable
clairvoyant case raises hard problems for Strategic Reliabilism (as do other
examples of this sort). According to Strategic Reliabilism, what would it be for
the reliable clairvoyant to reason in an excellent fashion when she has reasons
not to trust her clairvoyant powers? And more generally, how does Strategic Reliabilism handle cases in which a reasoning strategy is reliable (or unreliable)
and the subject has strong reason to believe the opposite?
There are many examples that are going to be hard cases for Strategic
Reliabilism, and this includes cases in which there is a disconnect between
the reliability of a reasoning strategy and the subject’s evidence for trusting
it. The strength of Strategic Reliabilism does not reside in the ease with
which it can be applied to cases in order to make straightforward, univocal
epistemic judgments. The strength of Strategic Reliabilism is its reason-guiding
capacity. Strategic Reliabilism provides a framework for identifying
and developing excellent reasoning strategies—robustly reliable
reasoning strategies for tackling significant problems. This is reversed for
theories of SAE. A theory of SAE is supposed to apply to
cases in order to determine whether particular beliefs are justified or not.
But theories of SAE don’t provide much in the way of useful reason-guiding
resources (a point we have harped on endlessly in this book). And
so we are content to admit that there will be plenty of hard cases in which
a reasoner uses a number of different reasoning strategies and Strategic
Reliabilism takes some of them to be excellent and others to be less so. The
fact that Strategic Reliabilism does not always yield a simple, univocal
normative judgment is a problem only if epistemic judgments of reasoning
excellence must always be simple and univocal. But people reason in wonderfully
complex and varied ways. Why should we expect our assessments
of every instance of human reasoning to be simple?
Although we have admitted that the strength of Strategic Reliabilism is
not its ready applicability to particular cases, we should not overstate this
point. There is no principled reason why we can’t apply Strategic Reliabilism
to very complicated cases. There are, however, two thoroughly
practical reasons why the application of Strategic Reliabilism can be difficult.
First, in order to apply Strategic Reliabilism to (say) the clairvoyant
case, we need to know a lot about what reasoning strategies the clairvoyant
is using. The SAE literature tends to ignore this, except to say that by
hypothesis the subject’s clairvoyance is reliable. But we are not told much
about how the clairvoyance works or about the nature of the clairvoyant’s
second-order reasoning strategies about whether to trust her clairvoyant
powers. The SAE literature does not give details about such reasoning
strategies because the theories of SAE, including process reliabilism, are
theories of justification; and justification is a property of belief tokens.
Details about the workings of the clairvoyant’s reasoning strategies are
irrelevant to theories of SAE. But even if we are given lots of details about how the clairvoyant is reasoning, there is a second reason Strategic Reliabilism
can be practically difficult to apply. The assessment of a particular
reasoning strategy employed by the clairvoyant depends on many factors
we might not know. For example, we would need to know the reliability
scores of the clairvoyant’s reasoning strategy; and if we wanted to make
relative judgments, we’d need to know the reliability scores of its competitor
strategies. (We would need to know more about these strategies as
well—their robustness, their costs and the significance of the problems in
their ranges.) There is no principled reason we couldn’t find out about
these matters. But in the absence of detailed information about them, it will be
very difficult to apply Strategic Reliabilism to particular cases. Strategic
Reliabilism is hard to apply, but not because Strategic Reliabilism is so
abstract it cannot be applied to real cases. The reason Strategic Reliabilism
is hard to apply is that we need to know a lot in order to apply it.