Cognitive Load and Moral Judgment

I've been posting about moral cognition for a while now, whenever a new and interesting result pops up, and I'm pretty sure I've said before, though it bears repeating, that every time I read another article on moral cognition, I come away more confused than I was before reading it. Part of the problem, I think, stems from a tendency toward theoretical extremes. For most of the history of moral psychology, empirical or otherwise, some form of "rationalism" dominated: the view that there are ethical rules in our heads, and that moral judgment involves applying those rules to the situation to be judged. More recently, moral psychologists have argued that moral judgment may be much less "rational" than previously believed. In the extreme view, represented most explicitly by Jonathan Haidt and his "social intuitionist" theory, rational justifications for moral judgments are largely, if not entirely, post hoc rationalizations, while the actual decision-making process is driven by emotion and automatically activated "intuitions."

These extremes are, I've long felt, partly a result of methodological considerations. In particular, the moral dilemmas most often used in moral psychology research tend to admit either a rule-based solution (most often, a utilitarian one) or an "emotional" solution (which can, it should be noted, be framed in terms of "deontological" rules). Much of the research, then, centers on when and under what circumstances people are more likely to make the rule-based or the emotional decision. There is no in-between solution, and so you're left with a sharp distinction between the two kinds of choices.

For example, consider the classic trolley-footbridge problems. In the trolley problem, participants are told that a runaway trolley is bearing down on five unsuspecting people hanging out on the track (like idiots). The participant can flip a switch, causing the trolley to switch tracks, but doing so will result in the trolley striking a single person working on this new track. In most studies, the vast majority of participants indicate that they would flip the switch, killing the one poor bastard but saving the five idiots hanging out on the track. This is the proper utilitarian choice, and is generally treated as the "rational" or rule-based one. In the footbridge problem, the trolley is yet again bearing down on five oblivious people hanging out on the track, but this time the participant is told that he or she is standing on a footbridge over the tracks, and the only way to prevent the trolley from hitting the five people is to throw the poor bastard standing next to you into the trolley's path, resulting in that one person dying but the five people being saved. The ultimate dilemma in the footbridge version is the same as in the standard trolley problem, because you're sacrificing one to save five, but in this version participants almost always say that they won't push the guy off the bridge. This is usually treated as the "emotional" or "intuitive" decision.

In addition to their tendency to cause researchers to see moral judgment in terms of theoretical extremes, these sorts of moral dilemmas have a whole host of problems, not the least of which is the fact that they're incredibly unrealistic. How many of us are ever going to be in a position to stop a trolley from killing five people, and how likely is it that throwing a guy off a bridge is going to stop the damn thing anyway? Furthermore, the two versions of the problem differ in a bunch of potentially important ways, making interpretation of people's decisions difficult. Despite these and many other problems, such dilemmas continue to be widely used, for reasons that I must admit escape me.

The most recent example comes from an in-press paper by Greene et al. (1), which is an admirable attempt to develop a dual process theory of moral judgment. Dual process theories, which are becoming increasingly common in cognitive science, involve two distinct types of processes or systems, one of which is usually automatic and "intuitive" or heuristic-based (and may be influenced by emotion), and the other of which is more "rational" and deliberate. Since these two types of processes line up nicely with the two posited types of moral decision processes, Greene et al. see a dual process theory as a potential bridge between the "rationalist" and "intuitionist" camps in moral psychology. On their view, when people make rule-based (e.g., utilitarian) decisions, they're using the rational (often denoted System 2) process, and when they make the "emotional" (e.g., non-utilitarian) decision, they're using the "intuitive" (or System 1) process.

If this dual process theory is correct, then interfering with one of the two systems should selectively interfere with the corresponding decision type, without affecting the other. To test this, then, Greene et al. provided participants with moral dilemmas like the footbridge problem (other dilemmas included the "crying baby" problem, in which a crying baby will alert a hostile enemy to the position of several people) in one of two conditions: a cognitive load condition or a control condition. In the cognitive load condition, participants had to perform a digit selection task while they were making the moral decision: digits scrolled across the screen, and participants had to indicate whenever the digit on screen was a 5. This task increases the participants' cognitive load (hence the condition name), making it more difficult to cognitively process other information. Thus, it should selectively interfere with System 2 processes, but not affect System 1 processes. The prediction, then, is that the cognitive load condition will interfere with utilitarian judgments, but not the "intuitive" ones.
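
To make that prediction concrete, here's a minimal sketch of the pattern of decision times the dual process account implies. This is my own toy illustration, not anything from the paper: the numbers and the simple additive load penalty are purely hypothetical, and the only point is that the penalty hits System 2 (utilitarian) decisions while leaving System 1 (intuitive) decisions untouched.

```python
# Toy illustration of the dual process prediction (hypothetical numbers).
# Assumption: cognitive load adds a fixed time cost only when the deliberate
# System 2 process is doing the work, i.e., for utilitarian decisions.

BASE_RT = 4.0        # hypothetical baseline decision time, in seconds
LOAD_PENALTY = 1.5   # hypothetical slowdown when load interferes with System 2

def predicted_rt(decision: str, under_load: bool) -> float:
    """Predicted decision time under the dual process account."""
    uses_system_2 = decision == "utilitarian"
    return BASE_RT + (LOAD_PENALTY if (uses_system_2 and under_load) else 0.0)

for under_load in (False, True):
    for decision in ("utilitarian", "non-utilitarian"):
        condition = "load" if under_load else "control"
        print(f"{condition:8s} {decision:16s} {predicted_rt(decision, under_load):.1f}s")
```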

And this is essentially what Greene et al. found. While the cognitive load manipulation didn't affect the rate of utilitarian responding (around 60% in both conditions), it did selectively influence the amount of time it took to make a decision. That is, in the cognitive load condition, participants took longer to make utilitarian than non-utilitarian decisions, while there was no such difference in decision time in the control condition. This suggests that the cognitive load made the "rational," System 2 decisions more difficult, but didn't affect the intuitive, System 1 decisions, consistent with the dual process theory.
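
Spelling out what "selectively influence the amount of time" means: the key quantity is an interaction, the utilitarian-minus-non-utilitarian difference in decision time under load compared to the same difference in the control condition. Here's a rough sketch of that contrast with made-up means; these are not Greene et al.'s numbers or their actual analysis, just the shape of the reported pattern.

```python
# Entirely hypothetical mean decision times (seconds), chosen only to mimic
# the shape of the reported pattern; the real values are in the paper.
mean_rt = {
    ("control", "utilitarian"): 4.1,
    ("control", "non-utilitarian"): 4.0,
    ("load", "utilitarian"): 5.6,
    ("load", "non-utilitarian"): 4.0,
}

def util_minus_nonutil(condition: str) -> float:
    """Utilitarian minus non-utilitarian decision time within one condition."""
    return mean_rt[(condition, "utilitarian")] - mean_rt[(condition, "non-utilitarian")]

# The interaction is a difference of differences: a positive value means load
# slowed utilitarian decisions more than it slowed non-utilitarian ones.
interaction = util_minus_nonutil("load") - util_minus_nonutil("control")
print(f"control diff: {util_minus_nonutil('control'):+.1f}s")
print(f"load diff:    {util_minus_nonutil('load'):+.1f}s")
print(f"interaction:  {interaction:+.1f}s")
```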

I think there are two potential problems with these data. The first, and most obvious, is that the cognitive load didn't affect the rate of utilitarian responding. One would think that if the cognitive load is interfering with System 2 processing, it would reduce System 2 responding. The second is that the study doesn't include a corresponding manipulation that should interfere with System 1 responding.

My suspicion for some time has been that both intuitive and cognitive processes, that is, both System 1 and System 2, are involved in both types of responses to footbridge-like dilemmas, with one or the other system dominating. For example, in the recent HPV vaccine debate in Texas, one group of people objecting to making the vaccine mandatory essentially argued that doing so would make teenage girls more promiscuous. Given that there is no evidence that such policies increase adolescent sexual behavior, I'm convinced that what happens is that people have an emotional reaction to the association between the HPV vaccine, sex, and children, and this limits the information that they are able to use as input for System 2 processing. I suspect that something similar happens in the moral dilemmas often used in moral psychology research. That is, the type of emotional reaction people have to a dilemma influences what information they will, and will not, consider. If that's the case, then simply manipulating cognitive or emotional reactions will not provide a clear picture of what's going on when people make moral decisions, and we'll be stuck with extremes, even if both extremes are used, as is the case in dual process theories. It may be, for example, that manipulating different emotions, or focusing people's attention on particular information in the dilemmas, can result in patterns of behavior similar to (or perhaps even clearer than) the one Greene et al. observed in their study. Until more rigorous studies are conducted, hopefully with better stimuli than these silly moral dilemmas, I'm just going to be confused about exactly what might be going on.


1Greene, J.D., Morelli, S.A., Lowenberg, K., Nystrom, L.E., & Cohen, J.D. (in press). Cognitive load selectively interferes with utilitarian moral judgment. Cognition.


Good summary. Good critique. I think your idea about how the dual processor for morality works is a strong one.

By wes anderson (not verified) on 01 Jan 2008

The trolley problem is supposed to be a model for very common moral dilemmas, which pit consequentialist against deontological considerations: e.g., questions about taxation (whether people have a right to the money they earn), capital punishment, and so on. At least in its philosophical use, it seems justified as a model.

By Neil Levy (not verified) on 01 Jan 2008

Neil, I imagine it works just fine for some philosophical discussions, because that's what it was designed for, but as a tool for empirical psychology, it's pretty shitty, because:

a.) They're only alignable in a few relations, namely one of the causal relations (being run over by the trolley) and the consequences (5 vs. 1 dead). Everything else is different in potentially important ways.
b.) Most notably, switching the track of the trolley is a yes-or-no option. There is no obvious third alternative. Throwing a guy off a bridge, however, is not the only alternative in the footbridge problem. You could throw yourself off. You could wonder whether a body can stop the trolley (switching the tracks will almost certainly work, right?).
c.) While it's all well and good for philosophers debating two distinct types of moral theories -- utilitarian and deontological -- to contrast these two in a problem, it makes little sense for psychologists to be wedded to these two alternatives without first at least establishing that they are, in fact, the two main types of moral judgments that people make.

I take the point about (c). I don't know what you're getting at with (a). But (b) is wrong: the philosophical examples, and the empirical literature I've read (including Greene's own earlier iterations of these experiments), exclude the option of throwing yourself off and stipulate that you know that the body will stop the trolley. The man is a fat man: you're too thin to stop the trolley.

By Neil Levy (not verified) on 02 Jan 2008

Neil, with (a), I'm getting at what's necessary in a stimulus set to conduct a carefully controlled experiment. If there are all sorts of differences between two contrasted stimuli, it is difficult if not impossible to tell which of those differences is resulting in differences in responses.

With (b), you're right, some do use the "fat man" version, which, of course, would only work with thin participants, and further raises the question (and I guarantee you participants will think of this), "Am I strong enough to throw a man who is fat enough to stop a trolley off a bridge?" And the question of whether even a very large man could stop the trolley would still remain in the minds of many (and again, from experience, I can tell you that they'll raise this question).

I agree that they'll ask these questions: it's a familiar experience in teaching this kind of problem. But another familiar experience seems relevant too: my undergraduates typically feel a strong tension between the ways they are disposed to respond to these dilemmas. They have responses similar to those Haidt reports, under the heading of 'dumbfounding'.

I know, I know, confirmation bias. Still, it is suggestive (to me) that one is a genuine contrast case to the other.

Neil, I think there's something genuinely contrastive about them, I just think it's better to use more controlled versions of that and other types of contrasts if you're going to develop comprehensive theories of moral judgment. I've been using scenarios that we derived from Tetlock's taboo tradeoff work, and we've had some success with them, but we're still looking for more alignable and, preferably, more mundane scenarios.

This reminds me of Joan Baez's response to similar questions intended to challenge her public commitment to pacifism. She said questions about whether she would shoot one person in order to save another were irrelevant to her, because if she tried to shoot someone she would probably end up causing some accident that would kill everyone involved. Insisting on people's answering forced-choice moral questions probably doesn't tell us that much about the actual moral decisions people make, because there are so few pure moral decisions -- nearly always, there are issues like how well you can shoot, how good you are at quieting babies, and how quickly or confidently you could throw a person off a bridge.

Chris, kudos to you for noting that the trolley problem's two versions "differ in a bunch of potentially important ways." Greene jumps on the emotional differences stemming from the up-close and personal nature of pushing someone off a footbridge. But suppose we change the story so that to drop the fat man, you pull a lever from far away and open a hidden trap door. I'll bet subjects would still be much more reluctant to drop the fat man than to throw the trolley track switch.

Hello Chris,

Outstanding article, very lucid. I think you are absolutely correct to reject such strong dichotomies when it comes to the nature of moral decision making. This is especially true in that all moral decisions will be made in a practical setting, which for me means you cannot cleanly cut apart the emotional from the rational. The practical world does not contain handy little theoretical vacuums. In practice (which is what we are talking about here; i.e. what people do when they make decisions), the emotions inform reason and reason informs emotions. However, it does seem that in certain contexts either the employment of reason or emotion will be primary. For example, your basic fight or flight response is an emotion primary response. My point being, it seems the context more than the theoretical make up will be what determines the type of responses people give.

On a side note, I also think things like the trolley problem are a rather silly way to generate an understanding of our moral psychologies. Nevertheless, these sorts of problems do seem to have some pedagogical value. I am wondering if anyone else agrees?

By John L. Dell (not verified) on 11 Jan 2008

Well, I'm not a psychologist of any sort but it seems very likely to me that people make moral decisions based in part on implicit calculations of the probability of a favourable outcome from their action, so positing any kind of wildly unlikely dilemma like the fat-man-footbridge one is going to give you useless answers. I don't think adding levers and trapdoors to the scenario helps very much.

I guess this is what your comment is saying, basically, but I was surprised you didn't mention the probability thing-- it just seems so obvious to me that in the crying-baby scenario, for example, what you do depends on the probability that you'll all get caught if you do nothing, and the reason people hesitate is partly to do with their minds not really believing in the 100% certainty stipulation.

Maybe it would be better to do this research by showing people movies where the characters are facing similar dilemmas.