
9 The neuroscience of ethics
In the preceding chapters, we considered difficult questions con-
cerning the ethical permissibility or desirability of various ways of
intervening into the minds of human beings. In examining these
questions, we took for granted the reliability of the ethical theories,
principles and judgments to which we appealed. But some thinkers
have argued that the sciences of the mind are gradually revealing
that we cannot continue to do so. Neuroscience and social psy-
chology, these thinkers claim, show that our ethical judgments
are often, perhaps even always, unjustified or irrational. These sci-
ences are stripping away the layers of illusion and falsehood with
which ethics has always clothed itself. What lies beneath these
illusions? Here thinkers diverge. Some argue for a revisionist view,
according to which the lesson of the sciences of the mind is that all
moral theories but one are irrational; on this revisionist view, the
sciences of the mind provide decisive support for one particular
ethical theory. Some argue for an eliminativist view, according to
which the sciences of the mind show that all moral theories and
judgments are unjustified. In this chapter, we shall assess these twin
challenges.
How is this deflation of morality supposed to take place? The
neuroscientific challenge to ethics focuses upon our intuitions.
Neuroscience, its proponents hold, shows that our moral intuitions
are systematically unreliable, either in general or in some particular
circumstances. But if our moral intuitions are systematically unre-
liable, then morality is in serious trouble, since moral thought is, at
bottom, always based upon moral intuition. Intuitions play different
roles, and are differentially prominent, in different theories. But no
moral theory can dispense with intuitions altogether. Each owes its
appeal, in the final accounting, to the plausibility of one or more
robust intuitions. Understanding the ways in which the assault on


ethics is supposed to work will therefore require understanding the
role of intuitions in ethical thought.
ethics and intuitions
Many moral philosophers subscribe to the view of moral thought and
argument influentially defended by John Rawls (1971). Rawls argued
that we test and justify moral theories by seeking what he called
reflective equilibrium between our intuitions and our explicit the-
ories. What, however, is an intuition? There is no universally
accepted definition in the literature. Some philosophers identify
intuitions with intellectual seemings: an irrevocable impression
forced upon us by consideration of a circumstance, which may or
may not cause us to form the corresponding belief – something akin
to a visual seeming, which normally causes a belief, but which may
sometimes be dismissed as an illusion (Bealer 1998).
Consider, for example, the intuition provoked by a famous
demonstration of the conjunction fallacy (Tversky and Kahneman
1983). In this experiment, subjects were required to read the fol-
lowing description:
Linda is thirty-one years old, single, outspoken, and very bright.
She majored in philosophy. As a student, she was deeply concerned
with issues of discrimination and social justice, and also
participated in anti-nuclear demonstrations.
Subjects were then asked to rank a list of statements about Linda in
order of their probability of being true, from most to least likely. The
original experiment used eight statements, but five of them were
filler. The three statements of interest to the experimenters were the
following:
(1) Linda is active in the feminist movement.
(2) Linda is a bank teller.
(3) Linda is a bank teller and is active in the feminist movement.

A large majority of subjects ranked statement (3) as more
probable than statement (2). But this can’t be right; (3) can’t be more
probable than (2) since (3) can be true only if (2) is true as well. A
conjunction of two propositions cannot be more likely than either of
its conjuncts (indeed, conjunctions are usually less probable than
their conjuncts). Now, even after the conjunction fallacy is explained
to people, and they accept its truth, it may nevertheless go on
seeming as if – intellectually seeming – (3) is more probable than (2).
Even someone as mathematically sophisticated as Stephen Jay Gould
was vulnerable to the experience:
I know that the third statement is least probable, yet a little
homunculus in my head continues to jump up and down, shouting
at me – ‘but she can’t just be a bank teller; read the description.’
(Gould 1988)
In other words, the description provokes in us an intellectual
seeming, an intuition, which we may then go on to accept or – as in
this case, though much less often – to reject.
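The probabilistic point behind the fallacy can be checked with a few lines of arithmetic. The numbers below are invented purely for illustration; the logical point holds for any choice of probabilities:

```python
# The conjunction rule behind the "Linda" problem.
# All probabilities here are assumed values for illustration only.

p_teller = 0.05                   # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.60    # assumed P(feminist | bank teller)

# P(teller AND feminist) = P(teller) * P(feminist | teller),
# so the conjunction can never exceed either of its conjuncts.
p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller  # true whatever values we plug in
print(f"P(2) = {p_teller:.2f}, P(3) = {p_both:.2f}")
```

Since the conditional probability is at most 1, the product can never exceed `p_teller`; the intuition that statement (3) is more probable than (2) survives this demonstration, which is precisely what makes it an intellectual seeming rather than a belief.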
There is some controversy about this definition of intuitions,
but it will suffice for our purposes. In what follows, I shall identify
intuitions with spontaneous intellectual seemings. Intuitions are
spontaneous in the sense that they arise unbidden as soon as we
consider the cases that provoke them. They are also, typically, stub-
born: once we have them they are relatively hard to shift. Intuitions
may be given up as false after reflection and debate, but even then we
do not usually lose them, not, at least, all at once.
In moral thought, intuitions are often characterized as ‘‘gut feel-
ings.’’ This is slightly misleading, inasmuch as it might be taken to
suggest that intuitions are lacking in cognitive content. But it does

capture the extent to which moral intuitions (especially) are indeed
typically deeply affective. Contemplating (say) the events at Abu
Ghraib, or the execution of a hostage in Iraq, the indignation I feel
powerfully expresses and reinforces my moral condemnation of the
actions. For many other scenarios, real and imaginary, which I judge to
be wrong, the affective response is much weaker, so much weaker that
I may not even be conscious of it. However, as Damasio’s work on
somatic markers indicates, it is likely that even in these cases my
judgment is guided by my somatic responses: measurements of my skin
conductance, heart rate and other autonomic systems, would probably
indicate heightened activity, of precisely the kind involved in affective
responses. Many, if not all, moral intuitions should be considered
both cognitive and affective, with the affective component having a
powerful tendency to cause or to reinforce the corresponding belief.
To intuit that an act is right (wrong) is not, however, neces-
sarily to go on to form the belief that the act is right (wrong). It’s quite
possible for people to have moral intuitions which do not correspond
to their moral beliefs (just as we can experience an optical illusion, in
the full knowledge that it is an illusion). Nevertheless, moral intui-
tions are normally taken to have very strong evidential value. An
intuition normally causes the corresponding belief, unless the agent
has special reason to think that their intuition is, on this occasion,
likely to be unreliable. Intuitions are usually taken to have justifi-
catory force, and, as a matter of fact, typically lead to the formation of
beliefs that correspond to them.
Intuitions play an important role in many, perhaps most, areas
of enquiry. But they are especially central to moral thought. Accord-
ing to Rawls, we test a moral theory by judging the extent to which it

accords with our intuitions (or our considered moral judgments – we
shall consider possible differences between them shortly). Theory
construction might begin, for instance, by simply noting our intuitive
responses to a range of uncontroversial moral cases, and then making
a first attempt at systematizing them by postulating an overarching
principle that apparently explains them all. Thus, we might begin
with judgments that are overwhelmingly intuitive, like the following:
It is wrong to torture babies for fun;
Giving to charity is usually praiseworthy;
Stealing, lying and cheating are almost always wrong.
What principle might explain all these judgments? One possibility is
a simple utilitarian principle, such as the principle formulated by
Jeremy Bentham, the father of utilitarianism. According to Bentham,
it is ‘‘the greatest happiness of the greatest number that is the
measure of right and wrong;’’ that is, an action is right when it
produces more happiness for more people than any alternative. It is
therefore wrong to torture babies because the harm it causes them is
so great; giving to charity is right, on the other hand, because it tends
to increase happiness.
Once we have our moral principle in hand, we can test it
by attempting to formulate counterexamples. Typically, a good
counterexample is a case, real or imaginary, in which an action is
wrong – intuitively wrong – even though it does not violate the moral
principle under examination. If we can discover such a counter-
example, we have (apparently) shown that the moral principle is
false. Our principle is not, after all, in harmony with our intuitions,
and therefore we have not yet reached reflective equilibrium.
Are there counterexamples to Bentham’s simple utilitarian

principle? Plenty. There are many cases, some of them all too real, in
which an action which maximizes happiness nevertheless seems to
be wrong. Indeed, even actions like torturing babies for fun could
turn out to be mandated by the principle. Suppose that a group of
people is so constituted that they will get a great deal of pleasure out
of seeing a baby tortured. The pain caused to the baby might be
outweighed by the pleasure it causes the onlookers, especially if
there are very many of them and they experience a great deal of
pleasure. In response to counterexamples like this, we continue the
search for reflective equilibrium by refining our moral principles to
try to bring them into harmony with our intuitions. For instance, we
might look to a more sophisticated consequentialist principle – that
is, a principle that, like Bentham’s, bases judgments of right or wrong
on the consequences of actions. Alternatively, we might look to a
deontological principle, according to which people have rights which
must not be violated – such as the right to freedom from torture – no
matter the consequences. Mixed theories, and character-based
theories, have also been developed by many thinkers.
The search for reflective equilibrium is therefore the search for
a principle or set of principles that harmonizes, and presumably
underlies, our intuitions, in much the same way as the search for
grammatical rules is (according to many linguists) the making
explicit of rules that competent language users employ implicitly.
However, though intuitions guide this search, they are not taken to
be sacrosanct by proponents of reflective equilibrium. It may be that
a moral principle is itself so intuitively plausible that when it con-
flicts with a single-case intuition, we ought to keep the principle
rather than modify it. Moreover, intuitions may be amenable to

change, at least at the margins; we may find that our intuitions
gradually fall into line with our moral theory. Even if they don’t, it
may be that we ought to put up with a certain degree of disharmony.
The conjunction fallacy is obviously a fallacy: reflection on it, as well
as probability theory, confirms this. We should continue to regard it
as a fallacy no matter the degree of conflict with our intuitions in
cases like ‘‘Linda the bank teller.’’ Similarly, it may be that the best
moral theory will clash with some of our moral intuitions. Never-
theless – and this is the important point here – moral theory con-
struction begins from, and continues indispensably to refer to, our
moral intuitions from first to (almost) last. The best moral
theory will systematize a great many of our moral intuitions; ideally
it will itself be intuitive, at least on reflection.
Some theorists seek to avoid reliance on intuitions. One way
they have sought to do so is by referring, in the process of attempting
to reach reflective equilibrium, not to intuitions but to ‘‘considered
moral judgments’’ instead. This tack won’t work: if considered moral
judgments are something different to intuitions – in some philoso-
phers’ work, they seem to be much the same thing – then we can
only reach them via intuitions. If they are not intuitions, then our
considered moral judgments are nothing more than the judgments
we reach after we have already begun to test our intuitions against
our moral principles; in other words, when our judgments have
already reached a (provisional) harmony with a moral principle. Some
utilitarians, such as Peter Singer (1974), suggest that their preferred
moral theory avoids reliance on intuitions altogether. They reject
intuitions as irrational prejudices, or the products of cultural indoc-
trination. However, it is apparent – as indeed our first sketch of

a justification for utilitarianism made clear – that utilitarianism
itself is just as reliant upon intuitions as is any other moral theory
(Daniels 2003). Singer suggests that we reject intuitions in favor of
‘‘self-evident moral axioms’’ (1974: 516). But self-evidence is itself
intuitiveness, of a certain type: an axiom is self-evident (for an
individual) if that axiom seems true to that individual and their
intuition in favor of that axiom is undefeated. Hence, appeal to self-
evidence just is appeal to intuition.
The great attraction of utilitarianism rests upon the intuitive-
ness of a principle like Bentham’s, which rests, itself, on the intui-
tiveness of the claim that pains and pleasures are, respectively and
ceteris paribus, good and bad. No moral theory seems likely to be
able to dispense with intuitions, though different theories appeal to
them in different ways. Some give greater weight to case-by-case
intuitions, as deontologists may do, and as everyday moral thought
seems to (DePaul 1998). Others, like utilitarianism, rest the justifi-
catory case on one big intuition, a particular moral principle taken to
be itself so intuitive that it outweighs case-by-case intuitions (Pust
2000). Whatever the role intuitions play in justifying their principles
or their case-by-case judgments, all moral theories seem to be based
ultimately upon moral intuition.
It is this apparently indispensable reliance of moral reflection
upon intuition that leaves it open to the challenges examined here. In
a sense, these challenges build upon Singer’s (indeed, we shall see
that Singer himself has seized upon them as evidence for his view):
they provide, or are seen as providing, evidence for the claim that
intuitions are indeed irrational. But in its more radical form, the
challenge turns against consequentialism, in all its varieties, just as

much as rival moral theories: if our intuitions are systematically
unreliable guides to moral truths, if they fail to track genuine, or
genuinely moral, features of the world, then all moral theories are in
deep trouble.
the neuroscientific challenge to morality
There are many possible challenges to our moral intuitions, and
thence to the rationality of moral judgments. They come, for
instance, from psychology (Horowitz 1998) and from evolutionary
considerations (Joyce 2001; 2006). These challenges all take a similar
form: they adduce evidence for the claim that our intuitions are
prompted by features of our mind/brain that, whatever else can be
said for them, cannot be taken to be reliable guides to moral reality.
Here I shall focus on two versions of this challenge to our intuitions,
an argument from neuroscience, and an argument from social psy-
chology. First, the argument from neuroscience.
In a groundbreaking study of the way in which brains process
moral dilemmas, Joshua Greene and his colleagues found significant
differences in the neural processes of subjects, depending upon
whether they were considering personal or impersonal moral
dilemmas (Greene et al. 2001). A personal moral dilemma is a case
which involves directly causing harm or death to someone, whereas
an impersonal moral dilemma is a case in which harm or death
results from less direct processes. For instance, Greene and collea-
gues used variations on the famous trolley problem (also considered
in Chapter 5) as test dilemmas. The first version of this problem is an
impersonal variant of the dilemma, whereas the second is a personal
variant:
(1) Imagine you are standing next to railway tracks, when you see an
out-of-control trolley hurtling towards you. If the trolley continues
on its current path, it will certainly hit and kill five workers who are

in a nearby tunnel. You cannot warn them in time, and they cannot
escape from the tunnel. However, if you pull a lever you can divert
the trolley to a sidetrack, where it will certainly hit and kill a single
worker. Assume you have no other options available to you that
would save the five men. Should you pull the lever?
(2) Imagine that this time you find yourself on a bridge over the
railway tracks when you see the trolley hurtling toward a group of
five workers. The only way to prevent their certain deaths is for you
to push the fat man standing next to you into its path; this will stop
the trolley, but the man will die. It’s no use you leaping into its
path; you are too light to stop the trolley. Should you push the
fat man?
The judgments of Greene’s subjects were in line with those of
most philosophers: the great majority judged that in the first case it is
permissible or even obligatory to pull the lever, but in the second it is
impermissible to push the fat man. Now, from some angles these
judgments are prima facie inconsistent. After all, there is a level of
description – well captured by consequentialism – in which these
cases seem closely similar in their morally relevant features. In both,
the subject is asked whether he or she should save five lives at the
cost of one. Yet most people have quite different intuitions with
regard to the two cases: in the first, they think it is right to save the
five, but in the second they believe it to be wrong.
Most philosophers have responded to these cases in the tradi-
tional way described by Rawls: they have sought a deeper moral
principle that would harmonize their intuitions. For instance, the
following Kantian principle has been suggested: it is wrong to use
people as a means to others’ ends. The idea is this: in pushing the

fat man into the path of the trolley, one is using him as a means
whereby to prevent harm to others, since it is his bulk that will stop
the trolley. But in pulling the lever one is not using the man on the
tracks as a means, since his presence is not necessary to saving the
lives of the five. Pulling the lever would work just as well if he were
absent, so we do not use him. Unfortunately, this suggestion fails.
Consider the looping track variant of the problem (Thomson 1986). In
this variant, pulling the lever diverts the trolley onto the alternative
track, but that track loops back onto the initial track, in such a
manner that were it not for the presence of the solitary worker, the
trolley would end up killing the five anyway. In that case, diverting
the trolley saves the five, but only by using the one worker as a
means: were it not for his presence, the strategy wouldn’t work.
Nevertheless, most people have the intuition that it is permissible to
pull the lever.
Greene and colleagues claim that their results cast a radically
different light on these dilemmas. They found that when subjects
considered impersonal dilemmas, regions of the brain associated
with working memory showed a significant degree of activation,
while regions associated with emotion showed little activation. But
when subjects considered personal moral dilemmas, regions asso-
ciated with emotion showed a significant degree of activity, whereas
regions associated with working memory showed a degree of activity
below the resting baseline (Greene et al. 2001). Why? The authors
plausibly suggest that the thought of directly killing someone is
much more personally engaging than is the thought of failing to help
someone, or using indirect means to harm them.
In their original study Greene and his co-authors explicitly

deny that their results have any direct moral relevance. Their con-
clusion is ‘‘descriptive rather than prescriptive’’ (2001: 2107). How-
ever, it is easy to see how their findings might be taken to threaten
the evidential value of our moral intuitions. It might be suggested
that the high degree of emotional involvement in the personal moral
dilemmas clouds the judgment of subjects. It is, after all, common-
place that strong emotions can distort our judgments. Perhaps the
idea that the subjects would themselves directly cause the death of a
bystander generates especially strong emotions, which cause them
to judge irrationally in these cases. Evidence for this suspicion is
provided by the under-activation of regions of the brain associated
with working memory. Perhaps subjects do not properly think
through these dilemmas. Rather, their distaste for the idea of killing
prevents them from rationally considering these cases at all (Sinnott-
Armstrong 2006).
The case for the claim that Greene’s results have skeptical
implications for morality has recently been developed and defended
by Peter Singer (2005) himself. For Singer, Greene’s results do not
merely explain our moral intuitions; they explain them away. Sing-
er’s case rests upon the overwhelmingly likely hypothesis that these
responses are the product of our evolutionary history (echoing here
Greene’s (2005; forthcoming) own latest reinterpretation of his
results). He suggests that it is likely that we feel a special repugnance
for direct harms because these were the only kinds of harms that
were possible in our environment of evolutionary adaptation.
Understanding the evolutionary origins of our intuitions undermines
them, Singer claims, not in the sense that we cease to experience
them, but in the sense that we see that they have no moral force:

What is the moral salience of the fact that I have killed someone
in a way that was possible a million years ago, rather than in a
way that became possible only two hundred years ago? I would
answer: none.
(Singer 2005: 348)
Since it is an entirely contingent fact that we respond more strongly to
some kinds of killing than others, a fact produced by our evolutionary
history and the relatively recent development of technologies for
killing at a distance, these intuitions are shown to be suspect, Singer
suggests. As Greene himself has put it, ‘‘maybe this pair of moral
intuitions has nothing to do with ‘some good reason’ and everything
to do with the way our brains happen to be built’’ (2003: 848).
Singer suggests, moreover, that the case against the intuitions
prompted by personal and impersonal moral dilemmas can be gen-
eralized, to cast doubt on moral intuitions more generally. If the
neuroscientific evidence suggests that moral intuitions are the pro-
duct of emotional responses, and it is plausible that these responses
are themselves the product of our evolutionary history, and not the
moral structure of the world, then all our moral intuitions ought to
be suspect, whether or not we possess any direct neuroscientific
evidence to demonstrate their irrationality. After all, the cognitive
mechanisms we have as the result of our evolutionary history are not
designed to track moral truths; they are designed to increase our
inclusive fitness (where inclusive fitness means, roughly, our success
in increasing the proportion of copies of our genes in the next gen-
eration). Evolution at best ignores moral truth, and at worst rewards
downright selfishness. So we cannot expect our evolved intuitions to
be good guides to moral truth.

Singer finds further evidence of the irrationality of intuitions
in psychology; specifically, in the work of Jonathan
Haidt, the source of the second challenge to morality we shall
examine here. Over the past decade, Haidt (2001; 2003; Haidt et al.
1993) has been developing what he calls the social intuitionist model
(SIM) of moral judgments. The model has two components: the first
component centres upon the processes by which moral judgments
are formed; the second centres on their rationality. The process claim
is that moral judgments are the product of intuition, not reasoning:
certain situations evoke affective responses in us, which give rise to
(or perhaps just are) moral intuitions, which we then express as
moral judgments. The rationality claim is that since moral judg-
ments are the product of emotions, they neither are the product of
rational processes nor are they amenable to rational influence.
Haidt takes the process claim to constitute evidence for the
rationality claim. Because moral judgments are intuition driven,
they are not rational. Haidt suggests that the processes which drive
moral judgments are arational. Our judgments are proximately pro-
duced by our emotional responses, and differences in these responses
are the product of social and cultural influences; hence moral judg-
ments differ by social class and across cultures. We take ourselves to
have reasons for our judgments, but in fact these reasons are post hoc
rationalizations of our emotional responses. We neither have reasons
for our judgments, nor do we change them in the face of reasons
(Haidt speaks of the ‘‘moral dumbfounding’’ he encounters, when he
asks subjects for their reasons for their moral judgments. They laugh,
shake their heads, and express surprise at their inability to defend
their views – but they do not alter them). Hence, moral judgments are

not rational. On the contrary, our moral intuitions often conflict
with the moral theories we ourselves endorse.
If Haidt is right, then the SIM provides powerful evidence in
Singer’s favor: it seems to show that our moral intuitions are
rationally incorrigible, and that they frequently clash with our best
moral theories. Of course, Singer only wants to go so far with the
SIM. He wants to use it to clear the ground for an alternative, non-
intuition-based, moral theory, not to use it to cast doubt on morality
tout court. Haidt does not consider a non-intuition-based alternative;
nothing he says therefore conflicts with Singer’s claim that the SIM
is a problem for his opponents, and not for him. We, however, have
already seen that there are good grounds to doubt that the sceptical
challenge can be contained in the manner Singer suggests. It is
simply false to think that any moral theory, Singer’s utilitarianism
included, can dispense with intuitions. If the challenge to intuitions
cannot be headed off, morality itself is in trouble.
responding to the deflationary challenge
The challenge from neuroscience (and related fields) to morality has
the following general form:
(1) Our moral theories, as well as our first-order judgments and principles
are all based, more or less directly, upon our moral intuitions.
(2) These theories, judgments and principles are justified only insofar as
our intuitions track genuinely moral features of the world.
(3) But our moral intuitions are the product of cognitive mechanisms
which evolved under non-moral selection pressures, and therefore
cannot be taken to track moral features of the world; hence
(4) Our moral theories, judgments and principles are unjustified.
This argument, whether it is motivated by concerns from
psychology or from neuroscience, casts doubt upon our ability to
know moral facts. It is therefore a direct challenge to our moral

epistemology. It is also an indirect challenge to the claim that there
are any moral facts to be known: if all our evidence for moral facts is
via channels which cannot plausibly be taken to give us access to
them, we have little reason to believe that they exist at all.
In this section, I shall evaluate the neuroscientific evidence
against the value of intuitions; the argument that since our intuitions
reflect the morphology of our brains, and that morphology developed
under non-moral selection pressures, we ought to dismiss these
intuitions in favor of those that are less affectively charged. I shall
delay a consideration of the argument from social psychology, resting
on Haidt’s work on moral dumbfounding, until a later section.
Singer’s strategy is to cast doubt, first, on a subset of our moral
intuitions, and then to generalize the suspicion. Some of our intui-
tions, he argues, are irrational, as Greene’s evidence demonstrates.
Evolution gives us an explanation of why we have such irrational
responses: our moral responses evolved under non-moral selection
pressures, and therefore cannot be taken to be reliable guides to
moral truth. But, Singer suggests, since all our intuitions are equally
the product of our evolutionary history, the suspicion ought to be
generalized. All our intuitions ought to be rejected, whether we have
direct evidence for their irrationality or not. How strong is this
argument? I shall argue that though Singer is surely right in thinking
that some of the intuitions provoked by, say, trolley cases are irra-
tional, and that evolutionary considerations explain why we have
them, some of our intuitions escape condemnation. If that’s right,
then of course the generalization strategy must fail: some of our
intuitions are (for all that Singer, Greene and Haidt have shown)
reliable, and we can refer to them in good conscience.

Greene’s claim, endorsed by Singer, is that because our differ-
ential responses to trolley cases are the product of our affective
states, they are not rational, and therefore ought to be rejected as
guides to action. As Singer puts it:
If, however, Greene is right to suggest that our intuitive responses
are due to differences in the emotional pull of situations that