10. The generality problem

Your view, Strategic Reliabilism, seems to fall victim to the generality

problem. The generality problem arises because there is more than one way to

characterize the belief-forming mechanism that produces a particular belief.

Some of these characterizations will denote a reliable process, whereas other

characterizations will not. Without some way of deciding which of these

processes to count as the one that produced the belief, the reliabilist runs the

risk of having to say that such a belief is both justified (because it was

produced by a reliable mechanism) and unjustified (because it was produced

by an unreliable mechanism). And that’s absurd (Goldman 1979; Feldman

1985). Here is Richard Feldman’s characterization of the problem:

The fact that every belief results from a process token that is an instance of

many types, some reliable and some not, may partly account for the initial

attraction of the reliability theory. In thinking about particular beliefs one

can first decide intuitively whether the belief is justified and then go on to

describe the process responsible for the belief in a way that appears to make

the theory have the right result. Similarly, of course, critics of the theory can

describe processes in ways that seem to make the theory have false consequences.

For example, Laurence BonJour has proposed as counter-examples

to the reliability theory cases in which a person believes things as a result of

clairvoyance. In his examples, clairvoyance is a reliable process but the person

has no reason to think that it is reliable. BonJour claims that the reliability

theory has the incorrect consequence that the person’s beliefs are

justified. He assumes, however, that the relevant process type is clairvoyance.

If one instead assumes that the relevant type is "believing something as

a result of a process one has no reason to trust" the reliability theory seems

to have different implications for these cases (1985, 160).

So how can Strategic Reliabilism overcome the generality problem?

In thinking about how Strategic Reliabilism handles the generality problem,

it will be useful to consider a particular example. Suppose that

whenever S is faced with the task of making predictions about human

performance, she always uses what we might call the human performance

predictor (HPP): She considers only the two lines of evidence she believes

are most predictive, weighs them equally, and predicts that higher scores

will be more highly correlated with better performance. In some sense, this

is a meta-strategy, since it is a strategy for formulating strategies for making

predictions about human performance. Now S is faced with some admissions

problems, so she uses HPP: She considers only the two lines of evidence

she deems most predictive (say, high school rank and test score

rank), weighs them equally, and predicts that the best students will be those

with the highest scores. We have already seen this reasoning strategy—it is

ASPR (chapter 4, section 1). HPP and ASPR are nested reasoning strategies:

ASPR’s range (i.e., admissions problems) is a proper subset of HPP’s range.
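To make the recipe concrete, here is a minimal sketch of the HPP/ASPR style of prediction. It assumes only what the text describes: pick the two cues judged most predictive, weight them equally, and favor the higher combined score. The function name, cue names, and numbers are illustrative inventions, not the authors' own.

```python
# Illustrative sketch of the HPP/ASPR recipe described above. The cue names,
# numbers, and function name are hypothetical; only the unit-weighting idea
# (two cues, weighted equally, higher combined score wins) comes from the text.

def unit_weight_predict(a, b, cues):
    """Predict which of two candidates will perform better by summing the two
    most predictive cues with equal weights and favoring the higher total."""
    score = lambda person: sum(person[cue] for cue in cues)
    return "first" if score(a) >= score(b) else "second"

# ASPR is HPP applied to admissions problems: the two cues are high school rank
# and test score rank (expressed here as percentiles, so higher is better).
jones = {"hs_rank": 92, "test_rank": 88}
smith = {"hs_rank": 85, "test_rank": 90}

print(unit_weight_predict(jones, smith, cues=("hs_rank", "test_rank")))
# -> "first": predict that Jones will be the more successful student
```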

Now suppose that after having used these nested strategies to make a

prediction about an admissions problem, S comes to believe that Jones

will be a more successful student than Smith. Suppose further that ASPR is

very reliable (i.e., it makes a high percentage of true predictions on admissions

problems), but the more general HPP is not (i.e., while it leads to

reliable predictions on admissions problems, it leads to very unreliable

predictions on other sorts of human prediction problems). The classical

reliabilist about justification is faced with a problem. S’s belief was the

product of a reliable belief-forming process (ASPR), and so on reliabilist

grounds is justified. But S’s belief was also the product of an unreliable

belief-forming process (HPP), and so on reliabilist grounds is unjustified.

The reliabilist seems committed to claiming that S’s belief that Jones will

be a more successful student than Smith is both justified and unjustified.

Contradiction.
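The contradiction can be put in miniature with made-up numbers. Reliability here is simply the proportion of true predictions a process makes over its range; the track records below are hypothetical and serve only to show how one and the same prediction can belong both to a reliable process (ASPR) and to an unreliable one (HPP).

```python
# Toy illustration, with invented track records, of how the same prediction
# falls under two processes with different reliabilities -- the source of the
# classical reliabilist's contradictory verdicts.

def reliability(track_record):
    """Reliability as the proportion of a process's predictions that were true."""
    return sum(track_record) / len(track_record)

# Hypothetical outcomes: True = a correct prediction.
admissions = [True] * 9 + [False]         # ASPR's range: 90% correct
other_human = [True] * 3 + [False] * 7    # the rest of HPP's range: 30% correct

print(reliability(admissions))                # 0.9 -> ASPR looks reliable
print(reliability(admissions + other_human))  # 0.6 -> HPP looks unreliable

# S's belief about Jones and Smith appears in both track records, so the
# classical reliabilist must call it justified (via ASPR) and unjustified (via HPP).
```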

Goldman (1986) tries to solve the generality problem by arguing that

the correct way to characterize the mechanism that produces a belief token

is in terms of the narrowest causally operative process involved in its

production. Thus, Goldman would argue that S’s belief is justified, since

the narrowest causally operative process involved in its production (i.e.,

ASPR) is reliable. On the other hand, if ASPR had been unreliable and the

more general HPP had been reliable, Goldman would deem the belief

unjustified. For our purposes, what’s right about Goldman’s suggestion is

that any form of reliabilism need only countenance psychologically real, causally operative processes. But if we take reliabilism to be a theory about

epistemic excellence rather than a theory about epistemic justification (i.e.,

if we accept Strategic Reliabilism instead of classical reliabilism), we can

simply avoid the generality problem altogether.

How is that?

Strategic Reliabilism aims to assess reasoning processes rather than belief

tokens. Suppose it is possible for a belief token to be produced by a reliable

process (on one characterization) and by an unreliable process (on a

different characterization). We can pass a positive judgment on the first

process and a negative judgment about the second process. There is no

need for the reliabilist about excellence to demand a unique characterization

of the process that produces a belief token. To take the example

spelled out above, the strategic reliabilist might judge S’s use of ASPR to

have been epistemically excellent, though this will depend on the reliability

and ease of use of competitor strategies. On the other hand, the strategic

reliabilist might judge S’s use of HPP not to have been epistemically

excellent (though this again will depend on the quality of the competition).

It is trivial that different reasoning strategies can have different,

incompatible epistemic properties. So there is no need for the Strategic

Reliabilist to demand a unique characterization of the process that produces

a belief token. And so there is no generality problem.

We should note that Earl Conee and Richard Feldman take the

generality problem to be devastating to classical process reliabilism:

In the absence of a brand new idea about relevant types, the problem looks

insoluble. Consequently, process reliability theories of justification and

knowledge look hopeless (1998, 24).

So if our view is able to overcome the generality problem, apparently this

is news.

But it still seems that the generality problem raises a worry about Strategic

Reliabilism. After all, a theory of epistemic excellence should tell us whether

S’s reasoning to the belief that Jones will be a more successful student than

Smith was excellent or was not excellent. To do that, the theory needs to

decide whether S’s reasoning was excellent because the belief was the result of

a reliable process (ASPR) or not excellent because the belief was the result of

an unreliable process (HPP). So it would appear that the generality problem

arises in a slightly new guise for Strategic Reliabilism.

This is not right. We take epistemic excellence to be a property of a

temporal process that’s dedicated to the achievement of certain specific

goals. If we want to know whether a state (i.e., a belief) was the result of an

epistemically excellent reasoning process, then it’s important to specify

what reasoning process we mean to assess. If we specify the reasoning

narrowly, so that the belief is the result of ASPR, then the reasoning is

excellent. If we specify the reasoning broadly, so that the belief is the result

of HPP, then the reasoning is not excellent. If we want to know whether

the entire voluntary reasoning process, involving both predictors, was

excellent, then there is no single, univocal, uncomplicated assessment. In

some ways it was excellent, and in some ways it was not. We can describe

in quite a bit of detail the precise ways in which the reasoning was excellent

and the precise ways in which it was not. But our theory yields no

single, univocal, uncomplicated assessment of this episode of reasoning.

And surely, that is a virtue of our theory.

But isn’t it odd for you to simply say that there are episodes of reasoning that

are in some ways excellent, and in other ways not? You don’t seem inclined to

say much about the epistemic quality of the reasoning in general. Resting

content with this conclusion might reasonably strike one as stubbornly unambitious

and perversely indolent.

There are two points to make against this worry. First, accurate theories

about complicated subjects will sometimes yield complicated judgments.

While the desire for simplicity is understandable, the advice often attributed

to Einstein seems apt: theories should be as simple as possible, but no

simpler. Second, from our perspective, epistemology is a forward-looking

enterprise. So while epistemology inevitably involves passing judgments

about the epistemic quality of people’s reasoning and beliefs, evaluating the

past is not the main point of epistemology. The main point of epistemology

is to offer clear, usable criteria for epistemic excellence that will yield

judgments about the relative quality of competing reasoning strategies. So

going back to the example, the fundamental issue for us is not whether

there is some way to characterize S’s reasoning so that we may pass simple

epistemic judgments. The real issue for epistemology to address is: What

are the epistemically better ways S might reason about significant issues

(and, of course, what makes those reasoning strategies better)?

But this still seems problematic. Besides insisting that an account of a process

be "psychologically real," you do not favor any particular way of individuating belief-forming mechanisms when it comes to passing judgments of epistemic

excellence. But a reasoning episode might involve dozens, or even hundreds, of

such processes. Do you really want to say that for some reasoning episodes, every

psychologically real belief-forming mechanism has its own epistemic worth?

Well, yes. There is no theoretical problem with this result. Some might

worry that this result will make epistemology impossibly complex. It’s true

that it might take a superhuman effort to actually try to evaluate all the

processes that went into the production of a single belief. But it’s also true

that as a practical matter, there is seldom a need to evaluate all the processes

that went into producing a belief. Our efforts have typically been

directed at voluntary reasoning strategies—strategies reasoners can choose

to use or not to use. That’s not to say that involuntary reasoning processes

should be completely ignored. In fact, in our view, epistemology must pay

closer attention to such processes. For example, a practical epistemology

will offer voluntary reasoning strategies that correct involuntary reasoning

processes (e.g., don’t trust your visual color experiences in artificial light).
