Emotion, Reason, and Moral Judgment

Research on the role of emotion and intuition in moral judgment is really heating up. For decades (millennia, even), moral judgment was thought to be a conscious, principle-based process, but over the last few years, researchers have been showing that emotion and intuition, both of which operate automatically and unconsciously for the most part, play a much larger role than most philosophers and psychologists had previously been willing to admit. In this context, two recent papers by roughly the same group of people have presented some really interesting findings which, if you ask me (and if you're reading this, that's what you're doing, damnit), really muddy the picture, but in a good way. One of the papers, a letter by Koenigs et al.1 that was published in last week's issue of Nature, has been getting a lot of attention in the press, so I'll talk about it first and save the other paper for another post. The Koenigs et al. paper compares the performance of patients with bilateral damage to the ventromedial prefrontal cortex (VMPC; see the image below) on four types of moral dilemmas to the performance of "normal" individuals and individuals with damage to other regions of the brain. First, a little about the VMPC.

[Image: the ventromedial prefrontal cortex (VMPC)]

As you can see in the image, the VMPC (red lines) butts right up against the amygdala, and the two regions communicate extensively. The amygdala is one of the main brain areas associated with emotion and the brain's reward system. The VMPC is also connected to the brain stem and other areas associated with the reward system. So it's thought that the VMPC plays a role in encoding the reward value of stimuli, as well as emotions like fear. In essence, the VMPC is part of the system that determines approach and avoidance behavior, and damage to it can make decisions related to the value of a stimulus more difficult. It stands to reason, then, that if emotion plays a role in moral judgments, damage to the VMPC could have a profound effect on those judgments.

So Koenigs et al. compiled 58 scenarios and placed them into three general categories: non-moral scenarios, impersonal moral scenarios, and personal moral scenarios. They further classified the personal moral dilemmas as high-conflict or low-conflict scenarios. Here are examples of each from the paper (all 58 are available in the paper's supplementary material, which can be read here without a subscription):

  • Non-moral scenarios (18 total):

    Scheduling

You are in charge of scheduling appointments in a dentist's office. Two people, Mr. Morris and Mrs. Santiago, have called to make appointments for next Monday. The only available times for next Monday are at 10:00 AM and at 3:00 PM.

    Mr. Morris's schedule is rather flexible. He can have his appointment either at 10:00 AM or at 3:00 PM. Mrs. Santiago's schedule is less flexible. She can only have her appointment at 10:00 AM.

    Would you schedule Mr. Morris for 3:00 PM so that both he and Mrs. Santiago can have their appointments next Monday?

  • Impersonal moral scenarios (18 total):

    Standard Trolley

    You are at the wheel of a runaway trolley quickly approaching a fork in the tracks. On the tracks extending to the left is a group of five railway workmen. On the tracks extending to the right is a single railway workman.

    If you do nothing the trolley will proceed to the left, causing the deaths of the five workmen. The only way to avoid the deaths of these workmen is to hit a switch on your dashboard that will cause the trolley to proceed to the right, causing the death of the single workman.

    Would you hit the switch in order to avoid the deaths of the five workmen?

  • High-conflict personal moral scenarios (11 total):

    Crying Baby

    Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside you hear the voices of soldiers who have come to search the house for valuables.

    Your baby begins to cry loudly. You cover his mouth to block the sound. If you remove your hand from his mouth his crying will summon the attention of the soldiers who will kill you, your child, and the others hiding out in the cellar. To save yourself and the others you must smother your child to death.

    Would you smother your child in order to save yourself and the other townspeople?

  • Low-conflict personal moral scenarios (11 total):

    Architect

    You are a young architect visiting one of your construction sites with your boss. Your boss is a despicable individual who makes everyone around him miserable including you.

    It occurs to you that if you were to push him off of the building you are inspecting he would fall to his death and everyone would think it was an accident.

    Would you push your boss off of the building in order to get him out of your life?

After reading the examples, you can probably imagine how most people would respond to each type of scenario. In case you can't, though, I will tell you that in previous research, "normal" individuals have tended to respond "yes" to the impersonal scenarios (e.g., they say they'd flip the switch in the standard trolley scenario), and "no" in the personal ones (they wouldn't kill the baby or push the boss off the building). The standard interpretation of these results is that in the impersonal scenarios, people are making the moral decision using conscious reasoning. Specifically, they are thought to be using utilitarian ethical principles to make the decision to flip the switch and kill one person to save five. In the personal scenarios, however, people tend not to make utilitarian decisions, and researchers therefore believe that they are basing their decision on the emotional response the situation elicits. Personally smothering a child is just too upsetting to even consider, despite the fact that not doing so will result in the death of the child and everyone else in the room, including yourself.

Since impersonal moral scenarios are thought to recruit conscious, principle-based moral decision processes, and since VMPC damage generally does not result in cognitive deficits, we'd expect both patients with VMPC damage and normal individuals (as well as patients with brain damage to other regions) to behave similarly in these scenarios. However, if the decisions people make in response to personal moral scenarios are driven by emotion, then we might predict that patients with damage to the VMPC, who have trouble processing emotional value as a result of that damage, would behave differently in those scenarios than normal individuals (and other brain-damaged patients).

Consistent with these predictions, all three groups (VMPC-damaged patients, normal individuals, and patients with brain damage elsewhere) performed similarly on both the non-moral and impersonal moral scenarios. For the impersonal scenarios, all three groups said "yes" (that they would flip the switch, for example) between 50 and 60% of the time. There was also no difference between the three groups on the low-conflict personal moral dilemmas: all of the individuals in all three groups said "no" (they wouldn't push their boss off the edge, e.g.) in response to the low-conflict scenarios. However, there was a difference between the normal individuals (along with the brain-damaged-elsewhere patients) and the VMPC-damaged patients on the high-conflict personal moral scenarios. The normal and non-VMPC brain-damaged patients said "no" (they wouldn't smother the baby, e.g.) about 80% of the time in response to these scenarios, while the VMPC-damaged patients said "no" less than 60% of the time (in fact, their response rate was pretty close to 50-50).
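If you want a feel for how big an 80% vs. roughly-50% gap is, here's a quick two-proportion z-test sketch. To be clear, this is my own illustration, not the paper's analysis, and the counts below are invented purely for the sake of the example:

```python
import math

def two_proportion_z(yes1, n1, yes2, n2):
    """Pooled two-proportion z statistic (illustrative, not the paper's test)."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: controls endorse the utilitarian "yes" on 20 of 100
# high-conflict trials; VMPC patients endorse it on 50 of 100.
z = two_proportion_z(20, 100, 50, 100)
print(round(z, 2))  # about -4.45
```

A z of that magnitude is far beyond what chance variation in responses would produce, which is why a 30-point gap on these scenarios counts as a striking group difference even with modest samples.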

What does this mean, then? Well, in one sense, the VMPC-damaged patients were more rational than the normal individuals. That is, while normal individuals responded to the impersonal moral scenarios based on moral principles, they responded to the personal moral scenarios based on their affective response to them, and therefore answered in a way that was inconsistent with those same moral principles. VMPC-damaged patients were significantly more likely to respond to the high-conflict personal moral scenarios in a way that was consistent with the principle, so it's not a stretch to say they behaved more rationally. But why did they behave more rationally? And why didn't they respond in a way consistent with the principle more than half of the time? (It might also be interesting to note that the variance for VMPC-damaged patients in the high-conflict condition was much, much greater than for any group in any of the other types of scenarios.) It almost seems as though they were just guessing, which would belie the notion that they were behaving more rationally. One could easily interpret the data as indicating that they just didn't know how to respond to those scenarios. It's as though they had the emotional reaction, which was telling them not to smother the baby, and the principle, which was telling them to smother the baby to save everyone else, available at the same time, and for whatever reason (perhaps difficulty integrating the emotional response with the rest of the decision process), they were unable to decide between the two options. This would imply that, in normal individuals, integrating the emotional response into the rest of the decision process automatically causes the principle to be overridden.

In other words, it implies (to me, at least) that when people are making these decisions, both the emotional reaction and the moral principle are available at the same time, and one wins out over the other depending largely on the strength of the emotional response (which is strong in the personal scenarios and weak in the impersonal ones, at least when they're just being read on paper). This would be inconsistent with strong intuitionist theories of moral judgment.
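To make that interpretation concrete, here's a toy simulation of my own, not anything from the paper: suppose the response is read out from the difference between an emotional signal (pushing toward "no") and a utilitarian principle (pushing toward "yes"), and suppose VMPC damage doesn't delete the emotional signal but makes its integration much noisier. Every number here (signal strengths, noise levels) is made up for illustration:

```python
import random

def high_conflict_decision(integration_noise, emotion=1.0, principle=0.7):
    """One simulated response to a high-conflict personal dilemma.

    The emotional signal pushes toward "no" (don't smother the baby),
    the utilitarian principle toward "yes"; the readout is the noisy
    difference between them. All parameter values are invented.
    """
    signal = (emotion - principle) + random.gauss(0, integration_noise)
    return "no" if signal > 0 else "yes"

def no_rate(integration_noise, trials=10000):
    """Proportion of simulated responses that come out 'no'."""
    return sum(high_conflict_decision(integration_noise) == "no"
               for _ in range(trials)) / trials

random.seed(42)
print(no_rate(0.4))   # intact integration: "no" roughly 75-80% of the time
print(no_rate(3.0))   # noisy integration: close to 50-50, i.e. near guessing
```

With low noise the emotional signal reliably wins and the model says "no" about 80% of the time, like the normal group; with high noise the readout hovers near chance, like the VMPC group. Note what the toy model does not capture: it doesn't explain why VMPC patients performed normally on the low-conflict scenarios, which is exactly the puzzle discussed below.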

Unfortunately, this is pretty much as far as neuroscience can take us. It can't tell us, for example, why VMPC-damaged patients have no problem saying "no" in the low-conflict personal moral scenarios. If it were just a matter of an inability to evaluate or integrate value information, why aren't they willing to, say, throw the boss off a building to save themselves and others a great deal of stress? And it can't tell us why they're pretty much at chance on the high-conflict personal moral scenarios. It can't even really tell us that the difference has to do with difficulty processing value: it could be that some other feature of the high-conflict scenarios, one that would be processed by the VMPC, is causing the difference between VMPC-damaged and normal individuals. Behavioral data are the only way to tease these things apart, so much future research is needed to make real sense of the Koenigs et al. data.


1Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446, 908-911.


It almost seems as though they were just guessing, which would belie the notion that they were behaving more rationally.

Would you say that this possible interpretation is consistent with or at odds with the way the study is generally being interpreted?

Well, the study is interpreted as supporting intuitionist models (particularly Haidt's). That is, it's taken as support for the primacy of intuition over consciously available moral principles. I suppose my interpretation is consistent with that, broadly. However, Haidt's theory argues that the principle becomes available only after the decision has been made (as a post hoc justification for that decision), and my interpretation is not consistent with that. But because the data is ambiguous, my interpretation could be way off.

Of course what is being tested here is not how one would behave in a morally fraught situation, but how you would answer such a question. Not a trivial difference. Where is the evidence that people behave how they say they would in such extreme circumstances?

The situations are phony because you are required to believe that we KNOW action A would lead to outcome B and action C to D. I wouldn't throw the switch in real life in the train situation because I would assume that my blowing the horn or yelling out the window would alert the men on the tracks.

Interestingly the most emotionally shocking situation, suffocating the baby, actually did occur: The hijacking of the Achille Lauro began on a beach in northern Israel in 1979 when terrorists of Abu Abbas' Palestine Liberation Front landed in a rubber boat from Lebanon, shot a man, Danny Haran, in front of his 4 year old daughter, Einat, and then killed Einat by smashing her head with a rifle butt. When the Achille Lauro was hijacked in 1985, the terrorists demanded the release of 50 Palestinian prisoners from Israel, but the only one named was Samir Kuntar, also called Sami Al Quantari, the man who murdered Einat. As I understand it, Einat's mother hid with neighbors and the couple's two year old daughter (I believe) when the terrorists came storming in, and held the child's mouth closed, discovering only after the terrorists were captured that she had inadvertently suffocated her daughter.

So in real life, people do suffocate children to save themselves. They just don't brag about it to researchers. Or perhaps they don't know themselves well.

This is very weak science. If you want to know how people react in different situations, observe them in those situations. If you are going to assume they know what they would do (how's that New Year's resolution working out?), that is, how their brains work, why not just ask them if they have an inborn sense of moral intuition or if they rationally crunch moral dilemmas? Don't trust they know? I don't trust they know what they would do in these highly speculative moral cases.

What does the study under consideration really tell us? In the emotionally charged high-conflict scenarios, we aren't sure what we would do, but, if our VMPCs are intact, we are able to discern that answering that we would suffocate a baby would arouse contempt from the questioners, so we say what we think is socially acceptable. The VMPC may be involved in our estimating what is socially acceptable. Certainly related to morality, but perhaps not in the way the authors contend.

By epistemology (not verified) on 27 Mar 2007

In the personal scenarios, however, people tend not to make utilitarian decisions, and researchers therefore believe that they are basing their decision on the emotional response the situation elicits.

I don't understand this 'therefore' at all. Does it abbreviate a more sophisticated line of reasoning? Or did the researchers just ignore all of the exceedingly many non-utilitarian ethical principles that people claim to use?

epistemology, you're right, these studies always lack ecological validity. That's a problem, and not one that researchers have overlooked. The problem is that ethical considerations make ecologically valid tests of most of these hypotheses impossible. What would you suggest? And how would you explain the patterns people find?

Brandon, the shorter answer to your question is yes, they did just ignore them. The long answer is that there may be reasons to ignore them. That is, research (some of which I'll talk about in the next post on the topic -- the second paper I refer to at the beginning of this post) is showing what sorts of principles people do have available when they're given these sorts of scenarios. So some of the rest of the long answer will come in that post.

Epistemology, this kind of study doesn't have the weakness you are referring to. In both philosophy and psychology, "moral judgment" is explicitly conceptualized as distinct from moral action, and studied independently, precisely because it is interesting enough to know what people sincerely think is moral (even if they don't act on it in real life). You are pointing to a very important problem, but it is not a problem for this kind of study; these studies are just about moral judgment.

alberto, thanks for that, I didn't know. And this may be the key to this study. We aren't discerning here what people with VMPC damage would DO differently than the rest of us, but what they are supposed to say.

I would guess from this study that the VMPC helps us model what others are thinking.

Everyone makes the same cold-blooded calculation when asked if they would smother a child to save others (I expect to see this scenario on 24 next season, with Jack torturing a 10 year old to save the world), then we catch ourselves, realizing our answer will make us look like monsters, so we say no. Those with VMPC damage don't get how others see this, and just make the rational calculation.

I would guess that the VMPC has something to do with our understanding of what is the socially appropriate thing to do. Something to do with social intelligence. Knowing to say what people find acceptable. This is not morality, it is simple social conformity, and the VMPC-damaged patients more often failed the test.

Too late to think of a way to test this. Maybe after work.

By epistemology (not verified) on 29 Mar 2007

Have you seen The Pianist? It mentions a Jewish woman who smothered her baby to save the family from being discovered by the Nazis, but they heard the death rattle and caught them anyway. I'm not sure whether it really happened, but since the film is based on Szpilman's memoirs and influenced by Polanski's own experiences, it might have done. Whether it did or not, you might expect people who have seen the film to answer that question differently.

I second the misgivings on the Crying Baby question. It's very easy for me to sit safely in front of my computer thousands of miles from the nearest war zone (and not being currently the parent of a young child), and logically make the utilitarian response (while acknowledging the horror of it). It's something quite different to say with any confidence what I would really do in the terror and panic of such a situation, and yet faced with my own baby right there -- beyond anything I have remotely experienced. I might kill the child out of sheer panic, or I might run out screaming and beg the soldiers for mercy, or do all sorts of crazy (and totally nonrational, and mostly useless) things.

It's as though they had the emotional reaction, which was telling them not to smother the baby, and the principle that was telling them to smother the baby to save everyone else, available at the same time, and for whatever reason (perhaps difficulty in integrating the emotional response with the rest of the decision process), they were unable to decide between the two options.

Oddly enough, this describes my reaction to the question. AFAIK I don't have VMPC damage. However, I am probably borderline Asperger's (or something like that), which may be indicative of something.

By Eamon Knight (not verified) on 31 Mar 2007

Hey, I was just wondering if you could tell me the entire ref. for the article, I'm having a bit of trouble tracking it down in nature. Cheers.

Don't worry. Found it. This article really has all the big guns. There is also a new one about "Universal Moral Grammar" in TRENDS in Cognitive Sciences.

"...researchers have been showing that emotion and intuition, both of which operate automatically and unconsciously for the most part, play a much larger role than most philosophers and psychologists had previously been willing to admit."

Philosophers have been assuming an emotion/intuition-based meta-ethics, at least when doing first-order ethics, for a long time; that's what reflective equilibrium is about. Philosophers are by no means largely meta-ethical rationalists. If anything, these discoveries vindicate 20th-century and contemporary meta-ethics.

I had a TBI two years ago. I am still recovering, but have come a long way. I appreciate reading what you have posted and finding out more information. Thank you!

By Brett Johnson (not verified) on 21 Nov 2007

If you do nothing the trolley will proceed to the left, causing the deaths of the five workmen. The only way to avoid the deaths of these workmen is to hit a switch on your dashboard that will cause the trolley to proceed to the right, causing the death of the single workman.

Currently, other researchers in my lab and I have concentrated on the role of the VMPFC in decision making under conditions of uncertainty (moral decision making, social decision making, even the IGT), and all of our results support the theory that the VMPFC plays a role in integrating somatic markers with cost/benefit analysis (i.e., joining subcortical emotional centers with the DLPFC's cost/benefit analysis). Further, we have sampled from university students who have MILD head injury. Even our sample supports the role of the VMPFC in integrating emotion with decision making.