2. Gigerenzer’s conceptual reject-the-norm argument
Gerd Gigerenzer begins his conceptual reject-the-norm argument by
noting that there are a number of different interpretations of the standard
axioms of probability (1996). He then argues that on a frequentist interpretation
of probability, subjects’ answers to many of the HB problem-tasks
are not errors. The frequency interpretation of probability states that
the probability of an attribute A is the relative frequency with which A
occurs in an unlimited sequence of events. So when subjects are given a
problem-task that involves assigning a probability to a single event (e.g.,
the probability that this patient has a disease), Gigerenzer argues that from
a frequentist perspective, such probability statements are meaningless. So
far so good.
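To fix ideas, the frequency view can be put in its standard limiting-frequency form: if n_A is the number of occurrences of A among the first n events in the sequence, then

P(A) = lim_{n→∞} n_A / n.

A single case supplies no such sequence, which is why, on this view, a probability assigned to it is not merely unknown but undefined.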
At this point, one might suppose that Gigerenzer’s argument is going
to turn empirical. After all, everyone admits that it is an interesting and
important question how these subjects represent the problem to themselves.
But Gigerenzer never attempts to argue that experimental subjects
are in fact consistently interpreting probability statements in a frequentist
way. This is ironic because when it comes to another HB problem-task,
Hertwig and Gigerenzer (1999) argue that in evaluating subjects, it is
essential to know how they are understanding the problem. (To be fair,
Gigerenzer often argues that ‘‘the mind is a frequentist.’’ But by this
he seems to mean that our minds are set up to solve problems framed
in terms of frequencies and not probabilities [e.g., Gigerenzer 1991].
Gigerenzer does not argue that subjects interpret probability statements in
a frequentist way; for example, he does not offer any evidence for thinking
that subjects take single-event probability statements to be meaningless.)
Putting aside worries about how subjects understand single-event
probabilities, it is worth exploring the normative assumptions behind
Gigerenzer’s frequentist arguments. As far as we know, Gigerenzer has not spelled out his epistemological presuppositions in any detail. So it might be
useful to look at what he has to say about the single-event probability
problems. After introducing frequentist and subjectivist views of probability,
Gigerenzer argues:
I will stop here and summarize the normative issue. A discrepancy between
confidence in single events and relative frequencies in the long run is not an
error or a violation of probability theory from many experts’ points of view.
(Gigerenzer 1991, 88–9, emphasis added)
In discussing the well-known Linda problem, where a significant percentage
of subjects deem the probability of a conjunction to be higher than
the probability of one of the conjuncts, Gigerenzer argues:
For a frequentist, this problem has nothing to do with probability theory.
Subjects are asked for the probability of a single event (that Linda is a bank
teller), not for frequencies. . . .
To summarize the normative issue, what is called the ‘‘conjunction fallacy’’
is a violation of some subjective theories of probability, including Bayesian
theory. It is not, however, a violation of the major view of probability, the
frequentist conception. (Gigerenzer 1991, 92)
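The rule that the ‘‘conjunction fallacy’’ label presupposes is an elementary consequence of the axioms: for any events A and B,

P(A & B) ≤ P(A),

since every case in which both A and B occur is a case in which A occurs. Ranking the conjunction (bank teller and active feminist) above the conjunct (bank teller) violates this inequality on any interpretation that assigns probabilities to the two statements at all; Gigerenzer’s point is that the frequentist assigns them none.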
In discussing base rate neglect, Gigerenzer’s line is the same. Given certain
conceptions of probability, subjects’ answers are not a violation of probability
theory, and so not an error.
[S]ubjects were asked for the probability of a single event, that is, that ‘‘a
person found to have a positive result actually has the disease.’’ If the mind
is an intuitive statistician of the frequentist school, such a question has no
necessary connection to probability theory. (Gigerenzer 1991, 93)
So how does Gigerenzer handle base rate neglect? He notes that subjects
are asked for a single-event probability: Does a particular patient have a
disease? On a frequentist view of probability, it makes no sense to assign
probabilities to single events, so this question is meaningless. For a frequentist,
therefore, this problem is akin to the problem of deciding
whether to wear blue socks or red socks—probability doesn’t give us an
answer. No matter how the subject responds, that response is not a violation
of probability. As a result, subjects’ answers are not errors in the
sense that they are not violations of probability. But keep in mind,
Gigerenzer does not try to make the case that subjects are understanding
the problems in any particular way. The reason subjects’ answers are not
errors is that there is some interpretation of probability on which subjects’ answers are not a violation of the axioms of probability. In order for
Gigerenzer’s frequentist argument to work, he must be assuming something
like the following:
Gigerenzer’s Normative Assumption: If there are a number of different
‘‘legitimate’’ ways to solve a problem and a subject’s answer is not an error
on at least one of those ways of solving the problem, then regardless of how
the subject understands the problem, the subject’s answer is not an error.
Putting aside obvious worries about this formulation (including what
counts as a ‘‘legitimate’’ solution to a problem), we can grant that Gigerenzer’s frequentist argument shows that subjects given HB problems that ask them to assign probabilities to single events cannot in some sense make ‘‘errors.’’
And we can perfectly well spell out the sense of ‘‘error’’ that is meant:
Subjects’ answers are not violations of probability given some particular
conception (or conceptions) of probability. Rather than repeat this mouthful
every time we want to talk about this particular sort of error, let’s say
that Gigerenzer’s frequentist argument shows that for certain HB problems,
those that ask subjects to assign probabilities to single events, subjects cannot
make G-errors (or Gigerenzer-errors).
Granting that subjects don’t make G-errors leaves the most important
normative issues untouched. When someone neglects base rates and reasons
on the basis of a diagnostic test to the conclusion that he very likely
has cancer, there are lots of errors he hasn’t made. He hasn’t made a
G-error or violated the laws of logic; he isn’t guilty of a spelling or a
grammatical mistake; he hasn’t made a mistake by unwittingly engaging in
activity that is criminal or immoral; he hasn’t violated the rules of chess or
made a stupid move by failing to protect his queen; and he has not made
any errors of geometry or calculus. There is a galaxy of errors he hasn’t
made. But it doesn’t follow that he has reasoned well; nor does it follow
that he hasn’t made some other sort of error. Gigerenzer’s frequentist
argument leaves open the possibility that Gigerenzer will win the battle but
lose the war. That is, it is open to us to grant him the conclusion that
subjects don’t make G-errors but still argue that they reason poorly and
make significant errors.
To be fair, Gigerenzer is responding to a tradition that holds that
subjects are making errors because they suffer from ‘‘probability blindness’’
(Piattelli-Palmarini 1994, 130–32). Piattelli-Palmarini suggests the
following ‘‘probabilistic law: Any probabilistic intuition by anyone not specifically
tutored in probability calculus has a greater than 50 percent chance
of being wrong’’ (1994, 132). Here is perhaps the clearest articulation from
Kahneman and Tversky of what makes a subject’s answer an error:
‘‘The presence of an error of judgment is demonstrated by comparing
people’s responses either with an established fact . . . or with an accepted
rule of arithmetic, logic, or statistics’’ (1982, 493). Given these views
about what counts as an error, it is natural that Gigerenzer should have
focused on G-errors: on whether there is some interpretation such that
subjects’ answers do not violate the laws of probability. But if philosophers
have any useful role to play in Ameliorative Psychology, it is to critically
evaluate the epistemological assumptions underlying disputes about
normative matters. In this case, we suggest that these assumptions be
jettisoned.
This normative debate about whether to count subjects’ answers as
errors culminated in a somewhat heated exchange (Kahneman and Tversky
1996; Gigerenzer 1996). From our perspective, however, this debate takes
place within unnaturally narrow normative constraints. The parties to the
debate take the main issue to be whether subjects have violated the laws of
probability. Gigerenzer thinks that the mind is a frequentist, and given a
frequentist interpretation of probability, subjects often do not violate the
laws of probability (1991, 1996). Kahneman and Tversky argue that given
how subjects understand the problems (i.e., they don’t deem single-event
probability statements meaningless), subjects do violate the laws of probability
(1996). While there are any number of moves each side of this
debate can make, we will proceed by breaking free of the debate’s narrow
normative confines. This is a strategic decision. It is not based on the
assumption that this debate cannot proceed productively within its narrow
normative limits. Instead, our strategy depends on realizing that the
issue we’re most interested in is the quality of subjects’ reasoning; and that
is an issue we can address with Strategic Reliabilism. In other words,
Strategic Reliabilism provides us a framework for thinking about relative
reasoning excellence, which is typically what we’re most concerned about
when assessing a subject’s reasoning; and this framework will often allow
us to resolve disagreements about how to evaluate a particular episode of
reasoning. In some cases, when a normative disagreement has become
stuck on an issue other than the relative excellence of a subject’s reasoning,
Strategic Reliabilism can help us to break the stalemate. We bypass the
narrow issue on which we are stuck and focus on what we take to be the
main issue: How well are subjects reasoning?
The fundamental problem with Gigerenzer’s frequentist argument is
that people can make extraordinarily serious errors of reasoning that wouldn’t count as G-errors. For example, when a man tests positive for
prostate cancer, he wants to know whether he has prostate cancer. In order
to make decisions, he might want to ask: Given the positive result, what
are the chances that I actually have prostate cancer? In such a situation, it
is hard to imagine anyone seriously pointing out that frequentists would
deem this a meaningless question—or worse yet, explaining to the patient
that he is a frequentist, and so his own question can have no meaning for
him. If a doctor tells his patient that he has a 99% chance of having cancer,
the patient is surely going to have some sort of understanding of what is
being said. (It is unlikely, for example, that he will react with glee.) And
that understanding will play a role in what might well be life-or-death
decisions. When medical practitioners make diagnoses and ignore base
rates, people who are highly vulnerable will end up acting on misleading
information. And those actions will too often lead to horrible results—to
tragically mistaken decisions to treat or not treat a condition or to deep
psychological trauma. And no amount of philosophical pussyfooting can
change that. How are we to understand what sort of error the subject
makes and why his reasoning is less than excellent? We will discuss the
answer provided by Strategic Reliabilism in chapter 9. Our discussion will
make clear the irony of our critique of Gigerenzer’s conceptual reject-the-norm
argument: Gigerenzer perfectly well understands and accepts our
central contention—that base rate neglect involves poor reasoning and
some kind of error, even if it is not a G-error.
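To see concretely what is at stake for the patient, consider a worked example with purely hypothetical numbers (they are ours, not drawn from any study discussed here). Suppose the base rate of the cancer among men who are screened is 1%, the test detects 90% of actual cases, and it returns a false positive for 9% of men without the disease. Bayes’ theorem then gives

P(cancer | positive) = (0.90 × 0.01) / (0.90 × 0.01 + 0.09 × 0.99) ≈ 0.09.

A reasoner who neglects the 1% base rate and attends only to the 90% hit rate will treat a positive result as close to conclusive, when in fact roughly nine positive results in ten come from men who do not have the disease. Whatever label one attaches to that mistake, it is the sort of error that matters to the patient.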