Good to the bone; adducing honesty via imaging

Update: See Ed Yong.

Randall Parker points me to a new paper from Joshua Greene which describes the neurological responses of individuals when they do, or don't, lie in situations where lying might be in their self-interest. From EurekAlert!:

The research was designed to test two theories about the nature of honesty - the "Will" theory, in which honesty results from the active resistance of temptation, and the "Grace" theory in which honesty is a product of lack of temptation. The results of this study suggest that the "Grace" theory is true, because the honest participants did not show any additional neural activity when telling the truth.
...
Using fMRI, Greene found that the honest individuals displayed little to no additional brain activity when reporting their prediction of the coin toss. However, the dishonest participants' brains were most active in control-related brain regions when they chose not to lie. These control-related brain regions include the dorsolateral prefrontal cortex and the anterior cingulate cortex, and previous research has shown that these regions are active when an individual is asked to lie.
...
"When the honest people leave money on the table, you don't see anything special or extra going on in their brains at all," says Greene. "Whereas, when the dishonest people leave money on the table, that's when you saw the most robust control network activation."

If neuroscience is able to identify lies by peering into the brain of the liar, it will be important to distinguish between activity in the brain when lying and activity caused by the temptation to lie. Greene says that eventually it may be possible to detect lies by looking at someone's brain activity, although a lot more work must be done before this is possible.

Will fMRI really be better than the various other physiological indicators used in contemporary lie detector tests? What's the error rate? The false positive rate is a killer here. Nice quote for a press release, but we'll see; color me skeptical. Nevertheless, I am intrigued by the idea that people of diverse ethical orientations may have strong cognitive biases in particular directions which naturally result in neurological patterns we can discern. A few years back Jonathan Haidt made a splash by mooting the idea of average moral differences between populations, but we don't need to go that far; behavior genetics has long shown us that there's a large heritable component to the decisions we make which might seem to have a moral or ethical valence.
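To make concrete why false positives are the killer, here is a rough back-of-the-envelope Bayes calculation. This is a minimal sketch; the 5% lie rate, 90% sensitivity, and 10% false-positive rate below are purely illustrative assumptions, not figures from the paper or from any real fMRI study:

    # Illustrative only: why false positives swamp a lie detector when lies are rare.
    # All numbers are assumed for the example, not taken from Greene & Paxton.
    base_rate = 0.05        # assumed fraction of statements that are actually lies
    sensitivity = 0.90      # assumed P(flagged | lie)
    false_positive = 0.10   # assumed P(flagged | truthful)

    # Total probability of a flag, then Bayes' theorem for P(lie | flagged)
    p_flagged = sensitivity * base_rate + false_positive * (1 - base_rate)
    p_lie_given_flag = sensitivity * base_rate / p_flagged

    print(f"P(lie | flagged) = {p_lie_given_flag:.2f}")  # ~0.32

Under those assumptions roughly two out of three flagged statements would come from honest people, which is why the base rate and the false-positive rate matter as much as anything the scanner picks up.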

In any case, for thousands of years philosophers have speculated about whether humans are innately good or bad, from Rousseau and Hobbes to Xun Zi and Mencius. The time for speculation is over, as experimental philosophers are looking into the empirical distribution of human moral intuition, as opposed to surveying the reflections of their philosophically oriented colleagues. Instead of one moral sense, it seems much more likely that humans exhibit plasticity in their behavior as well as differences in modal response to a given circumstance. In other words, morality is situational, but the distribution of responses may vary quite a bit from person to person given the same situations. Attempting to drill down on the neuroscientific map of this phenomenon is one avenue of exploration, but genetics will probably get in on the action at some point. Intelligent people will also perhaps fine-tune their model of how "free will" works, though much of this research will be irrelevant to the majority.

Cite: Greene, J.D., & Paxton, J.M. Patterns of neural activity associated with honest and dishonest moral decisions. Proceedings of the National Academy of Sciences (not yet online).

More like this

So, I thought it was a fascinating study. It's interesting that the paper mentions lie detection in only one paragraph towards the end of the discussion. It's almost a throwaway, and it focuses mainly on limitations. The full bit:

Although our present focus is on the cognitive neuroscience of honesty and dishonesty, our findings and methods may be of interest to researchers studying brain-based lie detection (14), in part because the present study is arguably the first to establish a correlation between patterns of neural activity and real lying. However, the present experiment has several notable limitations that deserve attention. First, the model we have developed has not been tested on an independent sample, and therefore its probative value remains unknown. Second, our task design does not allow us to identify individual lies. Third, our findings highlight the challenge in distinguishing lying from related cognitive processes such as deciding whether to lie. Finally, it is not known whether our task is an ecologically valid model for real-world lying. For example, the neural signature of real prepared lies (28) may look different from the patterns observed in association with lying here. Bearing these limitations in mind, our findings may suggest new avenues for research on brain-based lie detection. For example, our findings suggest that interrogations aimed at eliciting indecision about whether to lie, rather than lies per se, may be more effective, provided that the goal is to assess the trustworthiness of the subject rather than the veracity of specific statements.

The next step would be to locate people who seem to have no discomfort with telling gross lies, and to determine whether they are distinguishable from people who have no problem with truth-telling.

If we can detect a struggle between competing desires (to lie and to tell the truth) but not determine whether honesty or dishonesty rules when there is no struggle, this technique has some fundamental limitations.