Getting accurate about "my" brain: modularity, consciousness, and the evolution of hypocrisy

Psychologist Robert Kurzban's new book promises to explain Why Everyone (Else) Is a Hypocrite. It's a bold promise, and I was skeptical when first invited to review it. But Kurzban delivered - hilariously, entertainingly so. Although, since I agree with almost everything he writes, I may not be the most objective of critics. (FYI: this is a long review, so if you're short on time, you can skip to the end of the post and watch the author's short video trailer about the book. Cheers.)

For starters, Kurzban has convinced me to be more careful when I talk and write about my brain (and yours). Kurzban's key point is that our brains are modular. As a result of this modularity, different cognitive units can know or believe inconsistent things at the same time. Kurzban illustrates this idea with the Adelson illusion - in my opinion, one of the best optical illusions ever. Here it is:

[Image: Adelson's checkershadow illusion - a checkerboard with a cylinder casting a shadow, squares A and B labeled]

Believe it or not, square A is exactly the same color as square B!

Most people have a difficult time believing this - I certainly do. But Adelson's illusion is a simple application of lightness (or color) constancy: our visual system is designed to feed us not just raw data, but a useful interpretation of what we see. This auto-correction process allows the pages of a book to appear "white" in both bright sunlight and leafy shade, despite the change in the amount of light reflected from the pages. (You can prove to yourself that your visual system is deceiving you by framing squares A and B with your fingers, to isolate them from the rest of the image and establish a common reference color.)
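
Or, if you'd rather not trust your fingers, you can read the pixel values straight out of the image file, bypassing your visual system entirely. Here's a minimal Python sketch of my own (assuming you have the Pillow library installed and a local copy of the illusion saved as checkerillusion.jpg; the coordinates below are placeholders - pick the centers of squares A and B by eye in your own copy):

```python
# A minimal sketch: sample the raw pixel colors of squares A and B,
# sidestepping the visual system's lightness-constancy correction.
from PIL import Image

img = Image.open("checkerillusion.jpg").convert("RGB")

square_a = img.getpixel((110, 110))  # hypothetical center of square A
square_b = img.getpixel((160, 185))  # hypothetical center of square B

print("Square A:", square_a)
print("Square B:", square_b)
print("Same color:", square_a == square_b)
```

Run against the actual image (with the right coordinates), this prints identical RGB triples for both squares - and the squares will still look different to you afterward.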

I blogged about the Adelson checkerboard illusion on the old BioE several years ago. This is what I said at the time:

I've seen this principle illustrated before, but can usually force my brain to accept "reality." On this one, I just couldn't make myself accept that the squares are the same color, until I took off my glasses.

Robert Kurzban would (justifiably) give me an F for those sentences. Why? Because they're totally nonsensical. What on earth did I mean by suggesting that "I" could force "my brain" to accept reality? How could "I" be something different from "my brain" - or, in my very next sentence, "myself"? Aren't "I," "myself," and "my brain" all the same thing? And who the heck was writing those nonsensical sentences - "I," or "my brain," or "that portion of my brain which blogs"?

Before you report me (or Kurzban) to the "Blog" of "Unnecessary" Quotation Marks, let me explain why he's convinced me to be more careful when writing and talking about cognitive functions. Even those of us familiar with neuroscience often use casual shorthand that is misleading and inaccurate - sort of like the infamous claim that a brain region "lights up" in an fMRI scan. That's what I did here: when I wrote that I could "force my brain to accept reality," I meant (and you probably understood me to mean) that part of my brain - the conscious part, which is able to communicate verbally with you - understands that the two checkerboard squares are the same shade of gray. But another part, which is adapted to ensure lightness constancy, does not accept that interpretation of what my eyes are seeing. Different modules of my brain disagree, which is what makes illusions dramatic - even now, some nonverbal parts of my brain continue to "think" the squares must be different colors! So "I" haven't "forced" my brain to accept anything. "I" don't even speak for all the parts of my brain!
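
To make that concrete, here's a toy sketch of my own (my illustration, not anything from Kurzban's book - the numbers and logic are invented): two "modules" receive the exact same gray value, but only one of them corrects for shadow context, so they return inconsistent answers. Nothing in the architecture forces them to reconcile.

```python
# Toy illustration: two modules, same input, inconsistent outputs.
def constancy_module(gray, in_shadow):
    """Estimates the surface's 'real' shade, compensating for illumination -
    like the visual module that insists square B is lighter than square A."""
    return gray * 1.6 if in_shadow else gray

def raw_module(gray, in_shadow):
    """Reports the raw value, ignoring context - like the verbal module
    that has read Adelson's explanation and believes the squares match."""
    return gray

# Squares A and B: identical raw gray values, different contexts.
square_a = (120, False)  # in direct light
square_b = (120, True)   # in the cylinder's shadow

print("Constancy module:", constancy_module(*square_a), "vs", constancy_module(*square_b))
print("Raw module:", raw_module(*square_a), "vs", raw_module(*square_b))
# One module concludes A != B; the other concludes A == B.
# Both are "me," and nothing requires them to agree.
```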

Visual illusions illustrate that our brains are not singular entities with one fully informed consensus output, but rather collections of what Kurzban calls "modules." We are conscious of the workings of some, but not all, of our modules. Recall the bizarre symptoms of split-brain patients: when the left and right hemispheres of their brains are shown different objects, the right hemisphere, which doesn't usually control speech, can't "say" what it has seen - but the hand under its control can pick the correct image out of a deck of cards. Meanwhile, the patient's left hemisphere has no idea why the hand has picked that image, and makes up some justification - some totally incorrect justification - that it honestly thinks is true. It has to reconcile the inconsistency somehow. As Kurzban puts it, "my claim is that this unnatural separation [in split brain patients] is exactly analogous to natural separations in normal brains."

Since I have a modular brain, when I say "I" or "myself," I must really be speaking for a dominant part of my brain - a part that is both communicative and conscious, which is why it is able to tell you what it thinks. But I don't mean all of my brain - there's still that part that is convinced the squares are different colors, after all. Kurzban says, "no part of the brain can, at one and the same time, also be a whole brain. . . any theory that requires a little-brain-in-a-big-brain is wrong. The explanation for all the things that the big brain does is going to be an explanation of how lots of little modules - none of which has all the capabilities of the brain as a whole - work together." There are some parts of my brain of which I'm completely unaware. (When I say "I," I don't mean those parts - who knows what they're thinking about right now, or why! I just hope it's not Justin Bieber.)

It's important to note that there is nothing wrong with my brain just because I'm fooled by the Adelson illusion, or have thought processes of which I'm not consciously aware. (Maybe there are other things wrong with my brain - but that's another post.) From an evolutionary perspective, because maintaining lightness constancy is helpful in natural environments, it's perfectly understandable that I have a visual module in my brain that stubbornly and universally applies it to the Adelson illusion. And there's no evolutionary reason why that part of my brain has to agree with the other parts of my brain - it just isn't necessary for me to resolve the conflict. Kurzban even offers persuasive evidence that in certain circumstances, it's not merely evolutionarily understandable, it's actually adaptive to be ignorant. Kurzban makes a West Wing analogy: the President's press secretary may not want to know certain things, because once she does, she can't convincingly deny them. (I've known many professors who don't write the final exam until after the last day of class, just to ensure that their students can't weasel any clues out of them.) In some social circumstances, then, it may be beneficial to have certain modules of your brain remain oblivious of certain facts known to other modules - you'll be more effective at PR if you believe what you're saying.

The fact that our casual vernacular maps badly onto how we actually think - that when we say "I" we don't mean our whole brain, but only the talking part and/or the conscious part - is not news to a neuroscientist. (Kurzban notes, "I think [Daniel] Dennett got it right when he said that while there is some sense in which we all reject dualism, nonetheless, as he puts it, 'the persuasive imagery of the Cartesian Theater keeps coming back to haunt us - laypeople and scientists alike - even after its ghostly dualism has been denounced and exorcised.'") But counterintuitive ideas about cognition bear repeating, because most people don't think we think that way. For example, I saw a talk just last year about Libet's famous experiments, which (put very briefly and simplistically) showed that the brain activity preparing a voluntary wrist-flick appears before subjects report any conscious desire to move. The talk was framed around the presumption that the audience would find Libet's results bizarre and surprising - you're going to move before you decide to! you have no free will! - and most of them did find it bizarre. But I was baffled: why on earth would we expect different results? We experience a conscious desire to move because our brains send impulses - neural activity is how our thoughts and desires are encoded. If the desire preceded the impulses, we'd be disembodied ghosts in the cortical machine. So I fully agree with Kurzban when he asks, somewhat exasperatedly: "A recent headline in Wired magazine, discussing a study similar to Libet's, read: 'Brain Scanners Can See Your Decisions Before You Make Them.' Why is this news? . . . there's just no scenario in which the sense of deciding comes before brain activity. It just can't happen that way because all deciding just is brain activity."

That's why "I can usually force my brain to accept reality" makes no sense at all: I am my brain, not something outside of or superior to it.

So I promise to pay more attention to how I talk about the brain. Sloppy shorthand is easy, but it's misleading - and if you consistently frame mental activity in terms of different modules rather than one singular whole, it becomes easier to explain common human failings like misjudging one's skill, falling off the diet wagon, and even, yes, being a hypocrite, which is where Kurzban concludes his book. Unfortunately, his coverage of hypocrisy, while plausible, is experimentally thin; don't expect to set the book down knowing how to convince other people they're being hypocrites (good luck with that), or even how to avoid hypocrisy yourself. But if you take Kurzban's message to heart, you may better appreciate why people are hypocrites - and particularly, why they genuinely believe that they support particular policies or politics for reasons that don't make any logical sense. Debunking logical fallacies in such situations may not change the conviction of the responsible module. Consider the Adelson illusion: one module will go right on believing that the two squares are different shades of gray, no matter what kind of evidence you lay on the table. You can SHOW that part of the brain that it's wrong, and as soon as you take your fingers or the white paper away, the squares look like different colors again! Logic doesn't matter.

This is one reason why I believe that better science education and literacy alone can't make as big a social policy difference as many scientists hope. Even if you explain to some people why the scientific bases for their beliefs are wrong - why evolution really happened, or why vaccines don't cause autism, or why global warming is real - they'll just seize on some new explanation, replacing one debunked rationale with another, and another. They genuinely believe that they are right, and because brains are modular, and because not all the modules communicate well, a brain will just keep rationalizing what "it" feels and believes, whether or not it knows why it feels that way - just like the split-brain patient makes up spurious explanations for what his right hand is doing. It's depressing, but come on, we've all seen it happen.

One last, very important point. I've made Kurzban sound sarcastic, nitpicky, and intolerant of sloppy descriptions of how we think. He's all those things, but he's also HILARIOUS. This book reads like a blog, not a scientific paper or a New Yorker essay - he's picky about his cognitive models, but he drops asides, frivolous footnotes, and pop-culture references everywhere. I enjoyed this book so much I ripped through it in a day, often laughing out loud and periodically reading snarky comments to my boyfriend. I can only imagine how much fun it must be to have him as a professor - UPenn psych students, you're lucky. (Just be careful not to say anything like "I tried to make my brain study for the test, but. . . ")

Highly recommended. (I wish it were longer, had more experimental support for the hypocrisy section, and concluded with more prescriptive advice on how this model of the mind can inform more effective political and educational communication - but I can't not recommend it just because there's not enough of it. A very good read.)

More:

Information from Princeton University Press

The preview trailer: [embedded video]

Why Everyone (Else) Is a Hypocrite on Amazon

FYI: I received a review copy of this book from the publisher, but was not compensated for writing this review. (Trust me, if I thought the book was awful, I'd have said so. I don't have enough time to waste it on crappy books, and I doubt you do either.)
