Can Specific Eye Movements Aid in Complex Problem Solving?

Over at OmniBrain, Steve has a great summary of a recent article by Thomas and Lleras(1) on embodied cognition/perceptual symbol systems and problem solving. I recommend reading Steve's summary before going on with this post, but in case you're really lazy, here's the abstract:

Grant and Spivey (2003) proposed that eye movement trajectories can influence spatial reasoning by way of an implicit eye-movement-to-cognition link. We tested this proposal and investigated the nature of this link by continuously monitoring eye movements and asking participants to perform a problem-solving task under free-viewing conditions while occasionally guiding their eye movements (via an unrelated tracking task), either in a pattern related to the problem's solution or in unrelated patterns. Although participants reported that they were not aware of any relationship between the tracking task and the problem, those who moved their eyes in a pattern related to the problem's solution were the most successful problem solvers. Our results support the existence of an implicit compatibility between spatial cognition and the eye movement patterns that people use to examine a scene.

I find this study's results fascinating, and more than a bit surprising, for several reasons, which I'll get to in a moment, but first let me tell you a bit about the problem they used. It's a problem that those of us who spend a significant portion of our time studying analogy are intimately familiar with -- the Duncker radiation problem, which, as originally formulated, read like this(2):

Your problem is to find out how to apply a certain kind of x-rays, high intensities of which destroy organic tissue, in order to cure a man from a tumor within his body (for instance, in his stomach). (p. 669)

Or more recently(3):

A tumor was located in the interior of a patient's body. A doctor wanted to destroy the tumor with rays. The doctor wanted to prevent the rays from destroying healthy tissue. As a result the high-intensity rays could not be applied to the tumor along one path. However, high-intensity rays were needed to destroy the tumor. So applying one low-intensity ray would not succeed. (p. 311)

This is an incredibly difficult problem for most people to solve spontaneously (i.e., without being given any sort of hint). Duncker identified three spontaneously produced solutions, but most researchers have focused on one, and as it's the solution that Thomas and Lleras use in their study, I'll just describe it. The solution involves using more than one (in Duncker's example, 2, but in most examples, more than 2) weak rays from different directions that meet in the middle. The idea is that as they pass through the healthy tissue on their way to the tumor from different directions, they will be too weak to do any harm, but when they meet in the middle, their concentrated effect will kill the tumor cells. In Duncker's most comprehensive study(4), only two out of forty-two participants, or 4.8%, spontaneously produced this solution.
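(A quick aside on that 2 out of 42: a rate that low comes with very wide error bars, which is worth keeping in mind when comparing it to the percentages below. Here's a back-of-the-envelope sketch in Python -- the 2/42 count is Duncker's, but the confidence-interval calculation is purely my own illustration, not anything from the papers.)

```python
# Sketch: how uncertain is a 2-out-of-42 spontaneous solution rate?
# The count comes from Duncker; the exact (Clopper-Pearson) interval is
# just an illustration of how wide the plausible range is.
from scipy.stats import binomtest

solvers, n = 2, 42
result = binomtest(solvers, n)
ci = result.proportion_ci(confidence_level=0.95, method="exact")

print(f"observed rate: {solvers / n:.1%}")              # ~4.8%
print(f"95% exact CI:  {ci.low:.1%} to {ci.high:.1%}")  # roughly 0.6% to 16%
```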

In a classic study on analogical reasoning, Gick and Holyoak gave people an analogous story to help them solve the problem. The story involved a general who wanted to attack the fortress of a ruthless dictator. The fortress had many roads radiating out of it in several directions, but the dictator had the roads mined. The mines would explode if a large force walked over them, but if a small force walked over them, they wouldn't explode. The general felt he needed his entire large force to take the fort, but couldn't send the force down one road without setting off the mines. So he divided his forces into several small forces, and sent each one down one of the many roads leading to the fortress. They all met at the fortress, and were able to take it.

The idea was that if participants read the radiation problem after reading the story about the general taking the fortress, they would recognize that the two problems were analogous, and use the general's solution to solve the radiation problem. In Gick and Holyoak's study, participants spontaneously (i.e., without reading the story about the general) produced the weaker rays on multiple routes solution between 0 and 20% of the time. When they were given the analogy without being told that it could be used to help solve the radiation problem, they solved it between 30 and 40% of the time. That's a pretty nice improvement, but still the majority of the participants couldn't come up with the solution on their own. Only when they were given hints or specifically told that the general's story could help them solve the problem did more than 50% of participants produce the multiple routes solution.

Interestingly, in one condition Gick and Holyoak(5) gave participants a diagram (either with or without the analogy) meant to illustrate the multiple routes solution (p. 18):

[Gick and Holyoak's diagram illustrating the multiple routes solution]

When the diagram was presented alone, without a hint that it might aid in solving the radiation problem, only 7% of participants produced the multiple routes solution. So the diagram was no help whatsoever. In fact, when participants received the general's story and the diagram, they did worse than if they'd just been given the story (without a hint, that is).

Why am I telling you all of this? Because in Thomas and Lleras' study, participants who made eye movements that suggested the multiple routes solution (if you don't know what I'm talking about, go read Steve's post, ya lazy bum) spontaneously produced the multiple routes solution about 50% of the time after 19 eye tracking trials. Fifty percent! When I read that, my first thought was (pardon my French), "Well fuck me!" How the hell did they get participants to solve it 50% of the time without a hint? Short of presenting multiple different and explicit analogs, no one gets that kind of solution rate without a hint, and Thomas and Lleras gave them no hints or explicit analogs whatsoever. Just eye movements.
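If you want a feel for how big a jump that is, here's a rough comparison in Python. Important caveat: the group sizes below are made-up placeholders (I'm not pulling the actual Ns from the paper); only the approximate rates -- roughly 50% with the solution-related eye movements versus something under 10% spontaneously -- come from the discussion above.

```python
# Illustrative comparison only: the cell counts below are hypothetical,
# chosen to match the approximate rates discussed in the post (~50% vs ~5-10%),
# NOT the actual group sizes from Thomas and Lleras (2007).
from scipy.stats import fisher_exact

guided_solved, guided_n = 15, 30   # hypothetical guided-eye-movement group (~50%)
free_solved, free_n = 3, 30        # hypothetical spontaneous/control group (~10%)

table = [[guided_solved, guided_n - guided_solved],
         [free_solved, free_n - free_solved]]
odds_ratio, p = fisher_exact(table)

print(f"odds ratio ~ {odds_ratio:.1f}, p = {p:.4f}")  # ~9.0, p well under .01
```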

Thomas and Lleras argue that their finding suggests an internal perceptual simulation facilitated by the eye movements. What makes their finding and this explanation even more surprising, though, is that an explicit perceptual clue -- Gick and Holyoak's diagram -- had absolutely no impact on the production of spontaneous solutions. I mean, maybe if they'd been given it 19 times (the number of eye-tracking trials needed to get participants to 50%), they might have figured it out, but we'll never know (who's going to run that study? not I). For now, it seems that the argument is that internal perceptual simulations (whatever the hell those are) are more powerful than external perceptual clues. Weird.

So for now, color me skeptical about the theoretical explanation. It seems to me that after you get what amounts to 19 analogies, perceptual or not, explicit or implicit, you've been so beaten over the head with the solution that if you don't get it, you're not trying very hard. Sure, participants reported they didn't see the connection between the two tasks, and I'm willing to believe that, consciously, they weren't aware of it. But as recent research has shown(6), analogical mappings are often conducted automatically and implicitly. And if you give them 19 shots at it, how can they not be?

The real test of a perceptual symbols/embodied cognition explanation of problem solving would, I think, look like this. You'd run everyone in an eye-tracker, using any of the various conditions in which people have been shown to solve the radiation problem with the multiple routes solution at relatively high rates (e.g., multiple analogies), with and without hints, as well as in a control condition in which they have to solve it spontaneously. Put a picture of the tumor surrounded by healthy tissue on the screen, and monitor their eye movements. If people are using perceptual simulations facilitated by eye movements to solve the problem, then participants who solve the problem (in any condition) should make eye movements consistent with the multiple routes solution on their own (and this would mean more than just looking at the skin, as in the Grant and Spivey study mentioned in the abstract above -- that data is pretty much irrelevant to the question at hand).
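To make that concrete, here's a toy sketch of the kind of scoring I have in mind for "eye movements consistent with the multiple routes solution." None of this comes from Grant and Spivey or Thomas and Lleras -- the geometry, the 45-degree direction bins, and the "three or more distinct directions" criterion are all invented for illustration.

```python
# Toy scoring sketch: does a fixation sequence suggest the multiple routes solution?
# Everything here (coordinates, thresholds, the >= 3 directions rule) is invented
# for illustration; it is not the scoring used in the actual studies.
import math

TUMOR = (0.0, 0.0)    # center of the displayed tumor
SKIN_RADIUS = 1.0     # radius of the surrounding healthy-tissue boundary

def inward_crossing_directions(fixations):
    """Collect the directions (45-degree bins) from which successive fixations
    cross the skin boundary inward toward the tumor -- i.e., candidate 'rays'."""
    directions = set()
    for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]):
        d1 = math.hypot(x1 - TUMOR[0], y1 - TUMOR[1])
        d2 = math.hypot(x2 - TUMOR[0], y2 - TUMOR[1])
        if d1 > SKIN_RADIUS >= d2:  # this saccade crossed the skin, heading inward
            angle = math.degrees(math.atan2(y1 - TUMOR[1], x1 - TUMOR[0])) % 360
            directions.add(int(angle // 45))
    return directions

def multiple_routes_consistent(fixations, min_directions=3):
    return len(inward_crossing_directions(fixations)) >= min_directions

# Hypothetical fixation sequence that approaches the tumor from three sides:
fixations = [(2, 0), (0.2, 0), (0, 2), (0, 0.2), (-2, 0), (-0.2, 0)]
print(multiple_routes_consistent(fixations))  # True
```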


1Thomas, L.E., & Lleras, A. (2007). Moving eyes and moving thought: On the spatial compatibility between eye movements and cognition. Psychonomic Bulletin & Review, 14, 663-668.
2Duncker, K. (1926). A qualitative (experimental and theoretical) study of productive thinking (solving of comprehensible problems). Journal of Genetic Psychology, 33, 642-708.
3Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-355.
4Duncker, K. (1945). On problem solving. Psychological Monographs, 58(5, Whole No. 270).
5Gick, M.L., & Holyoak, K.J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1-38.
6Day, S.B., & Gentner, D. (2007). Nonintentional analogical inference in text comprehension. Memory & Cognition, 35(1), 39-49.


Out of curiosity, what are the other two spontaneously produced solutions? They are not obvious to me, and I could not find anything revealing on Google.

By Michael Poole (not verified) on 24 Sep 2007

Michael, don't worry, they're not obvious to anyone. In Duncker's original study, the other general types of solution involved going around or otherwise avoiding the healthy tissue (e.g., through surgery), or altering the sensitivity of the healthy and/or cancerous cells to the rays, e.g., by making the healthy tissue more resistant, allowing the strong rays to be used, or by making the tumor more sensitive, allowing weaker rays to be used. Those are the three general types of solutions that Gick and Holyoak identify (in the 1980 paper) in Duncker's 1926 and 1945 papers.

Unfortunately, because they're old, the Gick and Holyoak papers and Duncker's papers are not available online, but if you'd like a copy, send me an email.

That would explain why people tested on various skills appear to benefit by visualising tasks before carrying them out.

Alex, that makes a good point. I wonder if those who solved the problem without visual prompting (eye movement exercises) were visualizing the solution differently than those who couldn't solve the problem. How do you measure that?

The problem with the pre-test instruction given to the test-takers in the Grant and Spivey (2003) experiment (and subsequently in Thomas and Lleras' study) is that it relies on the technical knowledge of the test-takers regarding lasers/radiation.

This is the original instruction from Thomas and Lleras' paper:
"Given a human being with an inoperable stomach tumor, and lasers which destroy organic tissue at sufficient intensity, how can one cure the person with these lasers and, at the same time, avoid harming the healthy tissue that surrounds the tumor?"

Whereas the Gick & Holyoak version is more detailed in its purpose and the background info provided with the problem:

"A tumor was located in the interior of a patient's body. A doctor wanted to destroy the tumor with rays. The doctor wanted to prevent the rays from destroying healthy tissue. As a result the high-intensity rays could not be applied to the tumor along one path. However, high-intensity rays were needed to destroy the tumor. So applying one low-intensity ray would not succeed. (p. 311)"

Thomas and Lleras' study simply assumes that the "multiple routes solution" is the absolute best of all the solutions. The fact is that the other groups' solutions too might have their own significance.

For example: A test-taker with no knowledge of cancer radiotherapy or lasers might very well think, "There is going to be some damage to the healthy tissue anyhow, so why don't I restrict that damage to just one area of the body by shooting the laser at the tumor along a single trajectory, rather than going all round the body and ending up burning more healthy tissue?"
Such a point of view cannot be written off as "wrong".

(The present study used undergraduate students and eliminated all those who had some previous knowledge of the problem)

The real question to be answered is whether people really make use of the visual cues from the "eye tracking" task to at least work out the problem.
Solving it the way a 'radiation oncologist' does is a totally different aspect of it.

And even if the ones who solve the problem are moving their eyes left to right more frequently, you don't know if it's because of perceptual-motor simulation or because of similar effects demonstrated in eye movements during recall, where simulation theories cannot account for the results.

While I think a lot of the embodiment theories are very interesting, I think they have a lot of work to do in terms of finding unequivocal results, almost without exception.

The number I found interesting was 2/42. Are numbers that low really typical?

Well, since this is an unintuitive result (to me), the natural question to ask is: what kind of model can predict properties of that number? The obvious tests to start with would be correlation with spatial IQ results, correlation with solution finding for problems of different types, and so forth...

PERCEPTUAL SYMBOL SYSTEMS???

How about just good old-fashioned priming!

John, I suspect that the PPS people would be perfectly happy with priming as an explanation for this data, because it would imply that a purely perceptual-motor/embodied action representation can prime solutions in what would ordinarily be considered a purely cognitive, "amodal" (the biggest single-word red herring of all time) problem-solving process.

What I would want, and what PPS/embodied people have to give at this point in order to be taken seriously, is something over and above black-box abstractions like "simulation." What they need is a mechanism, and a model. Otherwise, what makes a finding like this support perceptual/embodied systems theories other than that there's something perceptual/embodied about it?

"something over and above black-box abstractions":

Deb Roy. (2005). Grounding Words in Perception and Action: Insights from Computational Models. Trends in Cognitive Sciences, 9(8):389-96.

Deb Roy. (2005). Semiotic Schemas: A Framework for Grounding Language in Action and Perception. Artificial Intelligence, 167(1-2):170-205.

A, I've read Roy's TiCS paper, but not her AI paper. I assume they're fairly similar, but if I'm wrong, let me know. I really like the stuff that Roy discusses in the TiCS paper, but it's not what I am talking about in my previous comment for two reasons. The first is that, in essence, what the models Roy discusses do is integrate perceptual and motor information into amodal symbols (e.g., state spaces defined computationally, propositionally-defined logical relations, etc.), and much of the work (e.g., in the contextual meaning and action-verb meaning models she describes) is done by the amodal parts of the systems. That's great, and exactly what I and most cognitive scientists would say is going on, but it's not really PPS.

The second is that there's nothing in those models that even approaches a model of simulation, as it is defined (if you can say it is defined) in PPS theories.

" what the models...do is integrate perceptual and motor information into amodal symbols ...and much of the work ...is done by the amodal parts of the systems."

"That's great, and exactly what I and most cognitive scientists would say is going on"

If this is what most cognitive scientists believe is going on, then embodied views of cognition have already won out. And having just read a review of Pinker's new book, he even seems to have come over.

I agree with you that particular popular theoretical models along these lines are very woolly (e.g., Barsalou or Lakoff), but the general approach (that concepts, lexemes, parts of speech, etc. are grounded in perceptual and motor representations) is becoming mainstream. This was not the case 5 years ago.

Certainly, part of the problem is one of terminology. "Embodiment" means very different things to metaphor wackos and mirror neuron zealots but the general "approach" is not hollow...particular theories are relatively new and time will tell if they are valid or not.

p.s. Deb is a dude. :)

Eh, if you had said to me 30+ years ago, at the height of functionalism, that it was news that perception and motor programs were taken seriously, I might have been surprised, but for the most part, this has been mainstream cog psy for a few decades. I mean, there's a special issue of JML coming out sometime in the near future on eye movements and language, and it has nothing to do with PPS or anything of the sort (well, most of it doesn't), it's just about how eye movements and language interact. And that's just common sense.

I don't think embodiment is hollow at all, but I think that when it's taken to its extreme, even in PPS, it's pretty empty. It yields cool experiments, though, even if I have trouble figuring out how theory and the data go together.

The footnotes don't work.

By Peter Lund (not verified) on 30 Sep 2007

There are two things I don't understand:

1) how did they find enough participants who a) didn't already know how irradiation therapy works and b) weren't, errm... stupid?

2) Why on Earth did so few people solve it? The diagram and the military analogy should have made it blindingly obvious!

PS: You can also to a certain extent push internal organs around and squeeze and stretch them to minimize the absorbed dosage in the healthy tissue. Does that count as a (partial) fourth solution?

By Peter Lund (not verified) on 30 Sep 2007
