9. Abuse worries


You advocate the increased use of SPRs. But some SPRs depend for their

success on not being widely known. For example, the details of the credit scoring models used by financial institutions are kept secret so that people

cannot ‘‘play’’ them by engaging in activities solely for the purpose of improving

their scores. Expanding the use of SPRs, particularly covert SPRs,

leaves open the possibility of significant abuse. It is not hard to envision

scenarios in which governments use SPRs to identify and persecute people

whose political or religious views are out-of-favor, or in which (say) insurance

companies use SPRs to identify people with health risks in order to restrict

their access to life or health insurance.

Before we get too worked up about the potential abuses of SPRs, we must

remember that honest policy assessment is comparative. We must compare

the threat of the increased use of SPRs to the threat posed by expert

judgment. Perhaps those suspicious of SPRs suppose that, while expert

judgment is inferior in accuracy, it is also less prone to abuse. But this is by

no means obvious. As Robyn Dawes has pointed out many times, expert

judgment is more mysterious, more covert and less available to public

inspection than SPRs (e.g., Dawes 1994). SPRs are in principle publicly

available and they come with reliability scores—they do not suffer from

overconfidence. When a bank loan officer or a parole board member

makes a decision, third parties typically do not know what evidence they

took to be most important or how they weighed it. Indeed, most of us are

considerably worse at identifying the main factors involved in our reasoning

than we believe (Nisbett and Wilson 1977). The loan officer who

makes relatively more and better loans to white males than to minorities

or women in the same financial situation might insist that he doesn’t take

race or gender into account. And unless we had pretty good evidence,

provided, for instance, by an explicit model, who could doubt him? Dawes

gives a terrific example of the sorts of abuses that can be avoided with

more objective SPRs.

A colleague of mine in medical decision making tells of an investigation he

was asked to make by the dean of a large and prestigious medical school to

try to determine why it was unsuccessful in recruiting female students. My

colleague studied the problem statistically ‘‘from the outside’’ and identified

a major source of the problem. One of the older professors had cut back on

his practice to devote time to interviewing applicants to the school. He

assessed such characteristics as ‘‘emotional maturity,’’ ‘‘seriousness of interest

in medicine,’’ and ‘‘neuroticism.’’ Whenever he interviewed an unmarried

female applicant, he concluded she was ‘‘immature.’’ When he

interviewed a married one, he concluded she was ‘‘not sufficiently interested

in medicine,’’ and when he interviewed a divorced one, he concluded she was ‘‘neurotic.’’ Not many women were positively evaluated on these

dimensions. . . . (Dawes 1988, 219)

This example makes clear that ‘‘expert’’ judgment is no defense against

bias and discrimination.
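The contrast drawn above (an explicit model is inspectable, a human judge is not) can be made concrete with a minimal sketch. The predictor names and weights below are illustrative assumptions, not any institution's actual credit-scoring model; the point is only that an explicit SPR writes its evidence and weights down, so third parties can verify what does and does not enter the score.

```python
# Hypothetical predictor weights for a loan-repayment score.
# These names and values are illustrative assumptions only.
WEIGHTS = {
    "income_to_debt_ratio": 0.5,
    "years_at_current_job": 0.3,
    "prior_defaults": -0.7,
}

def spr_score(applicant: dict) -> float:
    """Linear SPR: a weighted sum of explicitly listed predictors."""
    return sum(w * applicant[name] for name, w in WEIGHTS.items())

def audit_predictors(forbidden: set) -> bool:
    """A third party can check that, e.g., race or gender never enters
    the score, because every predictor is listed in WEIGHTS."""
    return WEIGHTS.keys().isdisjoint(forbidden)

print(audit_predictors({"race", "gender"}))  # True: neither is a predictor
```

No comparable audit is available for the loan officer's private weighing of the evidence, which is the asymmetry Dawes presses.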

We are badly in need of some cost-benefit judgment here. We know

that well designed SPRs are more accurate than expert judgment. (For a

treatment explicitly sensitive to the threat of SPR abuse, see Monahan, submitted.)

Using SPRs will lead to fewer errors in parole decisions, clinical psychiatric

diagnosis, medical diagnosis, college admission, personnel selection,

and many more domains of life. While SPRs can be abused, expert judgment

may leave even greater potential for abuse. In the absence of some reasonable

evidence for thinking that SPRs bring more serious costs than expert

judgment, the case for SPRs is straightforward. For those who insist on

holding out, it might be useful to imagine the situation reversed. Suppose we

had found that experts are typically more reliable than the best SPRs. Would

it be reasonable to insist on using SPRs because of an ill-defined concern

about the potential abuse of expert judgment?

Strategic Reliabilism does not recommend SPRs because they are secret

(when they are secret). It recommends SPRs because they are the tools most

likely to (say) discriminate a person who will default on a loan from one

who won’t. Any procedure for making high stakes decisions comes with the

potential of harmful errors. In the case of SPRs, we can reasonably expect

certain kinds of errors. An undertrained or overworked credit-scoring

employee might make a keystroke error, or a troubled employee might

willfully enter incorrect information. A sensitive application of our view to

a social institution would recognize the potential for such errors and would

recommend the implementation of corrective procedures. Nothing in

Strategic Reliabilism supports using SPRs irresponsibly—just the opposite.
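The corrective procedures recommended above can be as simple as validating entries before the SPR scores them, so that a keystroke error or a willfully incorrect entry is flagged rather than silently scored. The field names and plausible ranges below are illustrative assumptions, not a real institution's checks.

```python
# Plausible ranges for each input field; values outside them are flagged
# for human review before the SPR is applied. Ranges are assumptions.
PLAUSIBLE_RANGES = {
    "age": (18, 100),
    "annual_income": (0, 10_000_000),
    "prior_defaults": (0, 50),
}

def validate_record(record: dict) -> list:
    """Return the fields whose values are missing or fall outside
    their plausible range; an empty list means the record may be scored."""
    errors = []
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            errors.append(field)
    return errors

# An income entered with an extra keystroke is caught before scoring:
print(validate_record(
    {"age": 35, "annual_income": 4_000_000_000, "prior_defaults": 0}))
```

Double-entry of high-stakes fields, or logging who entered what, would serve the same corrective role.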

Still, what about the possibility of abuse that comes with SPRs being used

for dastardly ends? Here we come to the limits of what epistemology can

do. A monster like Hitler might employ SPRs to reason in an excellent

manner. And that possibility is of course frightening. But it is no objection

to our epistemological theory that it doesn’t have the resources to condemn

the wicked. Physics and chemistry don’t either. And neither do the traditional

theories of SAE. That is a job for moral and political theory.

There is another issue that may be an appropriate concern. If an SPR

appeals to factors an individual cannot control, there is potential for serious

abuse. For example, we can imagine an SPR that uses variables that

appeal to race in making (say) credit decisions. Now, as a matter of fact, it turns out that the best models we have appeal to past behavior: ‘‘In a

majority of situations, an individual’s past behavior is the best predictor of

future behavior. That doesn’t mean that people are incapable of changing.

Certainly many of us do, often profoundly. What it does mean is that no

one has yet devised a method for determining who will change, or how or

when . . . But if we are responsible for anything, it is our own behavior.

Thus, the statistical approach often weights most that for which we have

the greatest responsibility’’ (Dawes 1994, 105). But if someday a successful

SPR does discriminate along questionable dimensions, it is always an open

moral question whether we should use it.
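Dawes's point, that the statistical approach weights most heavily the behavior for which we are most responsible, can be sketched with a toy default-risk score. The predictors and weights below are illustrative assumptions, not an actual lending model: the two past-behavior variables dominate the interviewer's impression.

```python
# Toy default-risk SPR. Weights are illustrative assumptions only;
# past-behavior predictors carry most of the weight, as Dawes describes.
TOY_WEIGHTS = {
    "prior_defaults": 0.6,        # past behavior
    "late_payments": 0.3,         # past behavior
    "interview_impression": 0.1,  # clinical judgment, weighted least
}

def default_risk(applicant: dict) -> float:
    """Weighted sum: higher score means higher predicted risk of default."""
    return sum(w * applicant[k] for k, w in TOY_WEIGHTS.items())

# Past behavior dominates: a spotless record with a poor interview scores
# lower risk than a history of defaults with a glowing interview.
clean = {"prior_defaults": 0, "late_payments": 0, "interview_impression": 1.0}
risky = {"prior_defaults": 2, "late_payments": 3, "interview_impression": 0.0}
print(default_risk(clean) < default_risk(risky))  # True
```

Whether such a model's dimensions are morally acceptable remains, as the text says, an open moral question independent of the model's accuracy.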
