How we know what someone else can see

Developmental psychologists since Piaget have been interested in how well children can take the perspective of another. Piaget's laboratory had a large table with elaborate models on top; children who could explain what the scene looked like from the viewpoint of a doll seated at the table, rather than from their own, were said to be at a later developmental stage.

But understanding whether a doll can "see" something doesn't always literally require taking her perspective. Take a look at this simple arrangement of objects on a table:


You don't have to imagine yourself sitting at the doll's place in order to decide whether she can see the key. All you have to do is trace an imaginary line from the doll's eyes to the object in question. But line tracing won't tell you if the key is on the doll's right or left. For that, you have to imagine sitting in the place of the doll. Children typically are able to do the "can the doll see X?" task at a younger age than the "is the key to the doll's left or right?" task.
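The line-tracing strategy is simple enough to state as an algorithm: an object is visible from the doll's position exactly when the straight line from her eyes to the object crosses no obstruction. Here is a minimal sketch in Python, treating the tabletop as a 2-D plane and obstructions as line segments; the function names and coordinates are ours, purely for illustration:

```python
def ccw(a, b, c):
    """True if points a, b, c are in counterclockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4."""
    return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)

def can_see(viewer, target, occluders):
    """Line tracing: the target is visible if the sight line from the
    viewer to the target crosses no occluding edge on the table."""
    return not any(segments_intersect(viewer, target, a, b) for a, b in occluders)
```

For example, a viewer at (0, 0) looking at a key at (4, 0) cannot see it if a wall runs from (2, -1) to (2, 1), but can if the wall sits off to the side, from (2, 1) to (2, 3). Note that this check says nothing about left or right; that is exactly why the second task seems to demand something more.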

However, though there has been an abundance of research on when children can do these tasks, little work has been done on how people, whether children or adults, actually do them. It might be that even though line-tracing is possible, people always place themselves in the perspective of another. Or there may be a completely different explanation.

Pascal Michelon and Jeffrey Zacks have devised a set of experiments to help uncover exactly how we perform these tasks. In one experiment, they showed college students a set of photos like the example above, asking them to indicate either whether the highlighted object was visible, or whether it was to the left or right of the doll. Participants were instructed to respond as quickly as possible, and reaction times were measured.

In a new experiment, they substituted an asterisk symbol for the doll, like this:


Again, reaction times were measured. If the participants are imagining themselves in the position of the asterisk, then the farther the table is rotated from their own perspective, the longer it should take them to do the task. But if they are simply imagining a line from the asterisk to the object, then the orientation of the table shouldn't matter. Here are the results:


As you can see, reaction times for the left-right task grew slower as the table was rotated farther from the participants' own viewpoint, while reaction times for the "can the asterisk see it" task were the same regardless of table orientation.
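The two competing predictions about rotation can be sketched as toy models. The intercepts and slopes below are illustrative placeholders, not values fitted to Michelon and Zacks's data:

```python
def rt_perspective_taking(rotation_deg, base=600.0, ms_per_degree=3.0):
    """Perspective-taking: reaction time grows with how far the table
    is rotated away from the participant's own viewpoint."""
    return base + ms_per_degree * abs(rotation_deg)

def rt_line_tracing(rotation_deg, base=600.0):
    """Line tracing: a sight line can be traced at any orientation,
    so table rotation should have no effect on reaction time."""
    return base
```

The left-right results match the first model (slower with more rotation), while the visibility results match the second (flat across rotations).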

But if the participants were tracing a line from the asterisk to the objects to do the "can the asterisk see it" task, then the longer the distance from the asterisk to the object, the longer the reaction times should be. Here's a chart showing the relationship between object distance and reaction time for both tasks:


For the "can the asterisk see it" task, the results are as expected: faster reaction times when the object is nearer. But the results were similar for the left-right task, which at first glance doesn't make sense: if participants are imagining what they see from the perspective of the asterisk, it shouldn't take longer to decide if objects that are farther away are to the left or to the right.
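The distance prediction for line tracing can also be written as a toy model, again with illustrative parameter values rather than anything estimated from the study:

```python
import math

def rt_line_tracing_distance(viewer, obj, base=600.0, ms_per_unit=40.0):
    """If visibility is judged by tracing a sight line from the viewer
    to the object, reaction time should grow with the length of that
    line. The intercept and slope here are placeholders."""
    return base + ms_per_unit * math.dist(viewer, obj)
```

Under this model a nearby object should always be judged faster than a distant one, which is what the visibility data show; the puzzle is why the left-right data showed the same pattern.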

But take a look back at the diagram with the asterisk.


Due to the arrangement of the table, objects that are closer to the asterisk, like the scissors or the pencil, are also more dramatically to the left or right compared to, say, the key. In fact, when they analyzed the data taking this into account, the distance advantage for the left-right task disappeared. It appears that participants were using line-tracing for the visibility task and perspective-taking for the other task.
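The confound is easy to see with a little geometry: in this arrangement, an object close to the asterisk lies at a larger angle off the asterisk's midline than a distant one, and so is more dramatically to the left or right. A quick sketch, with coordinates that are purely illustrative:

```python
import math

def angle_from_midline(viewer, facing_deg, obj):
    """Angle in degrees between the viewer's facing direction and the
    direction to the object; larger means more clearly left or right."""
    dx, dy = obj[0] - viewer[0], obj[1] - viewer[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # Wrap the difference into [-180, 180) and take its magnitude.
    return abs((bearing - facing_deg + 180) % 360 - 180)
```

With the asterisk at the origin facing along the positive y-axis, a near object at (1, 1) sits 45 degrees off the midline, while a far object at (1, 5) sits only about 11 degrees off, so distance and lateral offset are confounded unless the layout is controlled.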

To confirm this suspicion, Michelon and Zacks created a new experiment, with a new arrangement of objects that allowed them to control not only the distance from the asterisk, but also the distance of each object from the midline. Here's a sample arrangement of the table:


As before, they found that reaction times were slower for the left-right task as the table was rotated farther from the viewer's perspective. However, now the reaction time advantage for near objects was much more dramatic for the visibility task compared to the left-right task.


Michelon and Zacks argue that these experiments offer substantial evidence that we use at least two different methods to understand the perspective of others. When we are trying to decide whether someone else can see what we can see, these experiments suggest that we use the line-tracing method, but when we're trying to understand the relative positions of objects, we use the more cognitively demanding perspective-taking approach.

Michelon, P., & Zacks, J.M. (2006). Two kinds of visual perspective taking. Perception & Psychophysics, 68(2), 327-337.

