Clocks and Clouds

I recently had a short article in Wired on the danger of getting too enthralled with our empirical tools, which leads us to neglect everything those tools can't explain:

A typical experiment in functional magnetic resonance imaging goes like this: A subject is slid into a claustrophobia-inducing tube, the core of a machine the size of a delivery truck. The person is told to lie perfectly still and perform some task -- look at a screen, say, or make a decision. Noisy superconducting magnets whir. The contraption analyzes the magnetic properties of blood to determine the amount of oxygen present, operating on the assumption that more-active brain cells require more-oxygenated blood. It can't tell what you're thinking, but it can tell where you're thinking it.

Functional MRI has been used to study all sorts of sexy psychological properties. You've probably seen the headlines: "Scientists Discover Love in the Brain!" and "This Is Your Brain on God!" Such claims are often accompanied by a pretty silhouette of a skull, highlighted with splotches of primary color. It's like staring at a portrait of the soul. It's also false. In reality, huge swaths of the cortex are involved in every aspect of cognition. The mind is a knot of interconnections, so interpreting the scan depends on leaving lots of stuff out, sifting through noise for the signal. We make sense of the data by deleting what we don't understand.

What's disappointing here isn't just that these early fMRI studies are overhyped or miss important facts. It's that this mistake is all too familiar. Time and time again, an experimental gadget gets introduced -- it doesn't matter if it's a supercollider or a gene chip or an fMRI machine -- and we're told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn't it? We soon realize that those pretty pictures are incomplete and that we can't reduce our complex subject to a few colorful spots. So here's a pitch: Scientists should learn to expect this cycle -- to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.

Look at genetics: When the Human Genome Project was launched in the early 1990s, it was sold as a means of finally making sense of our DNA by documenting the slight differences that encode our individuality. But that didn't happen. Instead, the project has mostly demonstrated that we are more than a text, and that our base pairs rarely explain anything in isolation. It has forced researchers to focus on the much broader study of how our genes interact with the environment.

This same story plays out over and over -- only the nouns change. Once upon a time, physicists thought they had the universe mostly solved, thanks to their fancy telescopes and elegant Newtonian equations. But then came a century of complications, from the theory of relativity to the uncertainty principle; string theorists, in their attempts to reconcile ever-widening theoretical gaps, started talking about 11 dimensions. Dark matter remains a total mystery. We used to assume that it was enough to understand atoms -- the bits that compose the cosmos -- but it's now clear that these particles can't be deciphered in a vacuum.

Not surprisingly, this is exactly what neuroscientists are coming to grips with. In the mid-'90s, Marcus Raichle started wondering about all the mental activity exhibited by subjects between tasks, when they appeared to be doing nothing at all. Although Raichle's colleagues discouraged him from trying to make sense of all this noisy activity -- "They told me I was wasting my time," he says -- his team's work led to the discovery of what he calls the default network, which has since been linked to a wide range of phenomena, from daydreaming to autism. However, it can't be accurately described with the kind of distinct spots of a typical fMRI image. There's too much to see: It's a network of colorful complexity. Thanks to the work of Raichle and others, neuroscience now has a mandate to forgo the measurement of local spikes in blood flow in favor of teasing apart the vast electrical loom of the cortex. God and love are nowhere to be found -- and most of the time we have no idea what we're looking at. But that confusion is a good sign. The brain isn't simple; our pictures of the brain shouldn't be, either.

Karl Popper, the great philosopher of science, once divided the world into two categories: clocks and clouds. Clocks are neat, orderly systems that can be solved through reduction; clouds are an epistemic mess, "highly irregular, disorderly, and more or less unpredictable." The mistake of modern science is to pretend that everything is a clock, which is why we get seduced again and again by the false promises of brain scanners and gene sequencers. We want to believe we will understand nature if we find the exact right tool to cut its joints. But that approach is doomed to failure. We live in a universe not of clocks but of clouds.

So how do we see the clouds? I think the answer returns us to the vintage approach of the Victorians. Right now, the life sciences follow a very deductive model, in which researchers begin with a testable hypothesis, and then find precisely the right set of tools to test their conjecture. Needless to say, this has been a fantastically successful approach. But I wonder if our most difficult questions will require a more inductive method, in which we first observe and stare and ponder, and only then theorize. (This was the patient process of Darwin and his 19th-century peers.) After all, the power of our newest neuroscientific tools (such as those associated with the connectome) is that they allow us to observe the brain directly, without the frame of a conjecture. (The problem with such conjectures is that they force us to sort the noise from the signal before we really understand what we're looking at, which helps explain why entities like the default network were ignored for so many years.) Such an approach might seem anachronistic, but when it comes to deciphering the intractable mysteries of the brain, it might be necessary. The human cortex is the most complex object in the universe: Before we can speculate about it, we need to see it, even if we don't always understand what we're looking at.


So we dismiss empirical observation?

By erikthebassist on 08 Jun 2010

Or do we accept that the scientific method, based on empiricism, has brought us to the moon and doubled our lifespans? It has lifted the veil of ignorance, so that while we don't necessarily know our place in the universe, we at least know that it exists. Empiricism has shaped the world you live in, so keep blogging on your computer and the internet, both based on technology derived from quantum mechanics, the ultimate method for understanding nature's workings, and keep questioning the science that gives you a voice.

By erikthebassist on 08 Jun 2010

I've looked at clouds from both sides now, from up and down, and still somehow it's clouds' illusions I recall. I really don't know clouds at all.

By J Collins on 09 Jun 2010

From the perspective of a working scientist, it seems that it's _journalists_ who over-hype new technology. MRI is treated with a decent amount of skepticism within the field, and there are many people who don't use it because they think it has nothing to offer them. The research papers themselves rarely (but sometimes!) overhype the results -- there's strong internal pressure against doing so, as overselling your results is a sure way of getting trashed in the peer review process.

Media reports, however, are rarely so cautious. I've read many overblown reports, tracked back to the original paper, and found something much milder. I understand that some of this has to do with university press releases ... though the fact that so much science journalism is just a reworked press release is in itself an indictment.

Great article. I think that you will be interested in a recently developed set of methods for fMRI analysis, which use machine-learning methods to analyse how distributed parts of the brain jointly encode information, as opposed to just seeing which particular part of the brain lights up. This allows much richer analyses, for example looking at how neural representational structure relates to behavioural performance.

Here are a couple of review articles:
http://www.dartmouth.edu/~raj/papers/raizada_kriegeskorte_IJIST_review_…
http://compmem.princeton.edu/NormanEtAlTICS.pdf

This approach of looking at how multiple parts of the brain jointly encode information closely parallels the trend you point out in genetics. Instead of asking "What's the gene for X?", people these days use machine-learning tools to investigate how multiple genes jointly encode a trait. There is no single gene for X (except for a handful of monogenic conditions such as Huntington's), just as there is no single brain area that "does task X". Both approaches involve trying to explore high-dimensional datasets, which is difficult but worth it. See, e.g., http://www.ncbi.nlm.nih.gov/pubmed/18097463
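To make the "no single gene for X" point concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is simulated and assumed for the sake of the example (the genotypes, the trait, and the effect sizes are all made up); it simply contrasts the old one-variant-at-a-time question with a model that lets many variants encode the trait jointly.

```python
# Purely illustrative: simulated genotypes and a polygenic trait, not real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people, n_snps = 500, 200

# Genotypes coded 0/1/2 (copies of the minor allele), all simulated.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)

# Assume the trait is polygenic: many small genetic contributions plus noise.
true_weights = rng.normal(0.0, 0.2, size=n_snps)
trait = genotypes @ true_weights + rng.normal(0.0, 1.0, size=n_people)

# The old question, "What's the gene for X?": test one variant at a time.
best_single_r2 = max(
    np.corrcoef(genotypes[:, j], trait)[0, 1] ** 2 for j in range(n_snps)
)

# The joint question: how do all the variants encode the trait together?
joint_r2 = cross_val_score(Ridge(alpha=10.0), genotypes, trait,
                           cv=5, scoring="r2").mean()

print(f"best single-variant R^2:         {best_single_r2:.3f}")
print(f"joint model cross-validated R^2: {joint_r2:.3f}")
```

On data generated this way, the best single variant typically explains only a small fraction of the variance, while the joint model recovers much more of it; that is the same qualitative point made above.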

These pattern-based approaches can be, and usually are, every bit as hypothesis-driven as any other type of analysis. There also exist more inductive multivariate fMRI analysis approaches, such as ICA, which try to let interesting structure bubble up spontaneously from large datasets, but the multivoxel pattern analyses of brain activation that I am talking about are not like that. Instead, they ask a question, e.g. how does the multidimensional similarity space of neural activation patterns relate to people's behavioural performance?
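As a toy illustration of the kind of question just described, here is a short, hypothetical sketch of a representational-similarity-style analysis in Python. All of the data are simulated (the "behavioural" dissimilarities and the "voxel" patterns are invented for the example), so this is a sketch of the general idea rather than anyone's actual pipeline: it asks whether the geometry of multivoxel activation patterns tracks a behavioural similarity structure.

```python
# Toy sketch of a representational similarity analysis (RSA); all data simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_voxels = 8, 100

# A latent "stimulus space" assumed to be shared by brain and behaviour.
latent = rng.normal(size=(n_conditions, 2))

# Behavioural dissimilarities between conditions: latent distances plus judgement noise.
n_pairs = n_conditions * (n_conditions - 1) // 2
behavioural_rdm = pdist(latent) + rng.normal(0.0, 0.2, size=n_pairs)

# Multivoxel patterns: the latent structure projected into voxel space, plus scanner noise.
projection = rng.normal(size=(2, n_voxels))
patterns = latent @ projection + rng.normal(0.0, 1.0, size=(n_conditions, n_voxels))

# Neural representational dissimilarity matrix: 1 - correlation between condition patterns.
neural_rdm = pdist(patterns, metric="correlation")

# Does the geometry of the neural patterns track the behavioural geometry?
rho, _ = spearmanr(neural_rdm, behavioural_rdm)
print(f"Spearman correlation between neural and behavioural RDMs: {rho:.2f}")
```

In a real analysis the patterns would come from estimated fMRI responses and inference would use permutation tests; the point here is only the shape of the question, with pattern similarity on one side and behaviour on the other.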

"Noisy superconducting magnets whir."

FYI - superconducting magnets in MR systems are silent. The pulsing gradient coils that make most of the noise are non-superconducting electromagnets that operate at or near room temperature.

@erikthebassist... did you even read the article? Where on earth do you get "dismiss empirical observation" from? Jonah's exact words on the scientific method are as follows: "Needless to say, this has been a fantastically successful approach." That doesn't sound much like a dismissal, does it? Suggesting a possible adaptation or expansion on those methods isn't exactly "questioning the science", I'd call it "furthering" science. Try reading and thinking before YOU so quickly dismiss something.

I agree totally, but there are two more points that need to be made. One involves marketing parading as science. New gizmos are often oversold as revelatory mainly to serve researcher bids for research grants. The second point has to do with our preoccupation with technological applications. Truly basic research virtually does not exist today. When some new gizmo participates in a "breakthrough," that advance is always at the engineering level; we miss the fact that it has not brought greater fundamental understanding because we are so enthralled with technological applications. This leads us to mistake engineering-level insight for fundamental understanding.

By Vic Comello on 10 Jun 2010

... but even clouds can be reduced to their chemical structure.

You're right that everything is more complicated than we would like to think, and because of that important insight, I think you've provided a needed perspective to humble the claims of science and technology. But here and elsewhere I see a tendency in your writing to conflate that point with another -- that these so-called "clouds" (DNA, the brain, etc.) are fundamentally irreducible.

These are two different claims, and I think it wise to stay away from the second.

Jonah,

I really like your writing and am constantly inspired by your work. I am encouraged to see a science journalist discuss the limitations of the reductionist approach, and fMRI specifically. I do think, though, that in the spirit of observing and staring and pondering, we have to have something to stare at... which isn't just the big picture, but the pieces the reductionist approach provides us. I think when we illuminate the issues with reductionist science and the temptation to overstate the conclusions thereof, we should propose a marriage of the reductionist and integrative approaches and not go too far in the other direction (not that I think you have, necessarily, but I think it's worth mentioning).
Another issue common to neuroscience that I'd like to see discussed is "the curse of averages". You touch on this when you mention "noise", but I'd love to hear what you have to say about the blessings and curses of individual variation to neuroscience research...

A wonderfully cogent and concise piece.
I might add that late 19th/early 20th century philosopher Charles S. Peirce proposed the notion of "abduction". It seems like very much what you are speaking of.
Lovely work.

I'm not sure the default network is the best example here, as, in fact, it can be, and is, accurately described with the kind of distinct spots of a typical fMRI image.

Indeed, the first study to demonstrate the default network just collated distinct spots of deactivation across several studies. Subsequent 'default network' experiments have used nothing but established neuroimaging paradigms.

The innovation here was not a new use of technology but simply using it to ask a novel question. Raichle covers this well in a recent review article.
