Embodied Language and Expertise

One of the more sophisticated theories in embodied cognition is Lawrence Barsalou's perceptual symbol systems theory. It is, in essence, an updated version of the "ideas as images" position of the British empiricists and of the mental imagery theories of the seventies1. The basics of the theory are really quite simple. Here's a short description from the abstract of the paper linked above:

During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components - not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems.

OK, maybe it doesn't sound so simple when Barsalou puts it that way. In essence, perceptual symbol systems theory says that concepts are represented by perceptual content, within sensorimotor systems (e.g., the visual system). This theory is meant to contrast with "amodal symbol systems" theories that have dominated cognitive science pretty much since the cognitive revolution. In these theories, knowledge representation is not entirely perceptual (amodal means that it's not associated with a sensory modality). For example, many amodal symbol systems theories treat conceptual content as propositional, much like language (as in classic AI, for example).
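To make the contrast concrete, here's a toy sketch in Python. To be clear, this is my own illustration, not a formalism from Barsalou or his critics: the Simulator class and its numbers are invented. The idea is just that an amodal system stores arbitrary, modality-free symbols, while a PSS "simulator" stores schematic sensorimotor patterns and re-enacts ("simulates") them on demand.

```python
# Toy contrast between amodal and perceptual representations.
# (My own illustration; not Barsalou's formalism.)

from dataclasses import dataclass, field

# Amodal representation: arbitrary symbols with no sensory content,
# like a clause in a logic-based knowledge base.
amodal_rep = ("PURR", "CAT-01")  # predicate + token

@dataclass
class Simulator:
    """Schematic records of one perceptual component (e.g., 'purr'),
    organized around a common frame, that can re-enact it later."""
    component: str
    modality: str                      # e.g., 'auditory', 'visual', 'motor'
    stored_patterns: list = field(default_factory=list)

    def store(self, pattern):
        # Selective attention extracts a schematic sensorimotor pattern
        # from experience and stores it.
        self.stored_patterns.append(pattern)

    def simulate(self):
        # Top-down reactivation: blend the stored patterns into one
        # simulation (a stand-in for partially reactivating sensory areas).
        n = len(self.stored_patterns)
        return [sum(vals) / n for vals in zip(*self.stored_patterns)]

purr = Simulator("purr", "auditory")
purr.store([0.9, 0.1, 0.4])   # toy activation patterns from two experiences
purr.store([0.7, 0.3, 0.2])
print(purr.simulate())         # -> roughly [0.8, 0.2, 0.3], a novel re-enactment
```

The blending step is the point: on this view a simulation is a re-enactment built from many stored experiences, not a replay of any single one, which is how simulators can produce limitless simulations of a component.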

The debate between the perceptual symbol systems theorists and "amodal symbol systems" theorists has been pretty intense, and will likely continue to be for some time. I'll talk a little bit about this debate at the end of this post. The best thing about perceptual symbol systems theory (PSS from now on; I'm getting tired of typing that) is not the theory itself, but the really cool experiments it has inspired. For example, Barsalou himself had people describe rolled-up lawns2. Who, other than a PSS theorist, would have thought of having people describe rolled-up lawns? (The purpose was to show that people describe properties of lawns, like roots, that they rarely mention when lawns are not rolled up.) The coolest experiments, though, have probably been on language comprehension, so I thought I'd describe a few of those.

Let's start with an experiment by Arthur Glenberg and Michael Kaschak that demonstrates what they call the "action-sentence compatibility effect"3. In their first experiment, Glenberg and Kaschak presented participants with sentences, some of which made sense, and some of which were nonsense (e.g., "Boil the air"). The sentences that made sense were of two types: "toward sentences," such as "open the drawer," which implied movement toward the body, and "away sentences," like "close the drawer," which implied movement away from the body. The participants were asked to judge the sensibility of the sentences, and indicated their response by pressing one of three buttons, placed so that one was furthest from the body, one was in the middle, and one was closest. Participants started with their hands on the middle button, and the key manipulation was whether the far button (requiring an away movement) or the near button (requiring a toward movement) was the response for sensible sentences.

According to PSS, our representations of the sentences should utilize our sensorimotor systems -- the same sensorimotor systems that are involved in making the toward or away movements. Representing the toward movement in the sentence "open the drawer" should therefore interfere with movements away from the body, and representing the away movement in the sentence "close the drawer" should interfere with movements toward the body. PSS thus predicts that reaction times should be longer when the movement required to indicate that the sentence makes sense is inconsistent with the movement in the sentence (i.e., when moving to the closest button for away sentences and moving to the furthest button for toward sentences). Here are their results for three different types of sentences (their Figure 1, p. 560):

[Figure 1 from Glenberg and Kaschak (2002, p. 560): reaction times for the three sentence types.]

As you can see, the prediction was confirmed for all three types of sentences: response movements consistent with the movements in the sentence yielded faster reaction times than inconsistent response movements.
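If it helps to see the logic of the design, here's a minimal Python sketch. The condition structure (sentence direction crossed with response direction) follows Glenberg and Kaschak; the reaction times and effect sizes are invented, purely to display the predicted compatibility pattern.

```python
# A minimal sketch of the action-sentence compatibility design and the
# PSS prediction. The 2x2 condition structure follows Glenberg & Kaschak;
# the numbers are hypothetical, just to show the predicted pattern.

import itertools, random

random.seed(0)
BASE_RT = 1600           # hypothetical baseline judgment time, in ms
COMPATIBILITY_COST = 90  # hypothetical slowdown on incompatible trials

def simulated_rt(sentence_dir, response_dir):
    """Return a fake reaction time: slower when the movement needed to
    respond conflicts with the movement implied by the sentence."""
    rt = BASE_RT + random.gauss(0, 30)   # trial-to-trial noise
    if sentence_dir != response_dir:     # incompatible trial
        rt += COMPATIBILITY_COST
    return rt

for sent_dir, resp_dir in itertools.product(["toward", "away"], repeat=2):
    trials = [simulated_rt(sent_dir, resp_dir) for _ in range(200)]
    mean_rt = sum(trials) / len(trials)
    tag = "compatible" if sent_dir == resp_dir else "incompatible"
    print(f"sentence={sent_dir:6s} response={resp_dir:6s} "
          f"({tag:12s}): {mean_rt:5.0f} ms")
```

Running it, the two incompatible cells come out about 90 ms slower than the two compatible cells, which is the qualitative shape of the effect in their Figure 1.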

The next experiment, by Zwaan et al.4, may be even cooler than the action-sentence compatibility effect study. In their study, participants first heard a sentence that implied an object moving in a particular direction -- either toward them ("The shortstop hurled the ball at you") or away ("You hurled the softball at the shortstop"). In the critical conditions, the participants then viewed two images of baseballs, each presented for 500 ms and separated by a 175 ms delay. In one condition, the first ball was smaller than the second, which gave the impression of the ball moving toward the participant; in the other condition, the first ball was larger than the second, suggesting motion away from the participant. Participants were then instructed to indicate whether the two images they'd seen were the same (in control conditions, two different objects were presented). As in the Glenberg and Kaschak study, the prediction was that participants would be faster to verify that the two images were the same when the motion implied by the change in the size of the ball (either small then large, or large then small) was consistent with the motion in the sentence. Consistent with this prediction, reaction times for both toward and away sentences were faster when the implied visual motion matched the motion in the sentence.
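Here's a small sketch of the trial structure, if that helps. The timings (500 ms per image, 175 ms blank) come from the description above; the function and its names are my own illustrative scaffolding, not anything from Zwaan et al.'s materials.

```python
# A sketch of one critical trial in the Zwaan et al. ball study.
# Timings are from the paper as described above; the rest is illustrative.

IMAGE_MS = 500
DELAY_MS = 175

def make_trial(sentence_direction, first_ball, second_ball):
    """Build one critical trial. A growing ball (small then large) implies
    motion toward the viewer; a shrinking ball implies motion away."""
    implied = "toward" if first_ball < second_ball else "away"
    return {
        "sequence": [("ball", first_ball, IMAGE_MS),
                     ("blank", None, DELAY_MS),
                     ("ball", second_ball, IMAGE_MS)],
        "task": "same object or different?",
        "consistent": implied == sentence_direction,
    }

# "The shortstop hurled the ball at you" implies motion toward you.
trial = make_trial("toward", first_ball=2.0, second_ball=3.0)
print(trial["consistent"])  # True -> PSS predicts a faster 'same' response
```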

It gets even cooler, though. Stanfield and Zwaan5, as well as Zwaan et al.6 (different et al.), conducted experiments in which participants read a sentence (e.g., "The woman put the umbrella in the closet") that implied the target object (in this case, the umbrella) had a particular shape (in this case, closed). Participants were then shown images of objects that either matched the implied shape (a picture of a closed umbrella) or didn't (an open umbrella), and were asked whether the object had been in the sentence. Of course, in the control condition, the objects weren't in the sentence. Reaction times were again the variable of interest, and the prediction, as in the previous studies, was that matched shapes would produce faster reaction times than mismatched shapes. Once again, the results were consistent with the prediction, and thus with PSS.

Finally, the coolest study of all, because it involves sports. In a paper published last month in the Psychonomic Bulletin & Review, Holt and Beilock7 looked at the role of expertise in how we represent sentences perceptually. They used two types of participants: college and high school hockey players (the experts), and people with no hockey experience (the novices). All of the participants read hockey-related (e.g., "The referee saw the hockey helmet on the bench") and non-hockey sentences (e.g., "The child saw the balloon in the air"), and as in the Stanfield and Zwaan experiment, were subsequently shown a picture of an object and asked to indicate whether it had been in the sentence. Also as in the Stanfield and Zwaan experiment, the pictures of objects from the sentences (in the control condition, the pictures were of objects not in the sentence) were either the same shape as the objects in the sentence, or a different shape. Since I don't know a hell of a lot about hockey, I'll just show you the sentences and images from Holt and Beilock's Appendix A, so that you can see the matches and mismatches yourself.

[Holt and Beilock's Appendix A: the hockey sentences with their matching and mismatching pictures.]

For the non-hockey sentences, both the hockey players and the novices showed the same result: matches produced faster reaction times than mismatches. For the hockey sentences, though, only the hockey players were faster for matches than mismatches, because hockey-specific knowledge was required to know the shape of the object in the hockey sentences. In a second experiment, they replicated this result with football players and football novices. This time, they used sentences in which the people described were either performing everyday actions (e.g., praying) or football-specific actions (e.g., a player blocking a kick). All participants showed the match-mismatch effect for everyday actions, but only the football experts showed the effect for football-specific actions. Once again, expertise influenced how people represented the sentences perceptually (according to PSS).
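The predicted pattern here is really an interaction between expertise and sentence type, which is easy to lay out in a few lines of Python. As before, the effect size below is invented; only the qualitative pattern comes from the study.

```python
# A sketch of the predicted pattern in Holt & Beilock's expertise study:
# everyone shows a match advantage for everyday sentences, but only the
# experts show it for domain-specific (hockey) sentences. The magnitude
# is hypothetical; only the pattern is the point.

MATCH_ADVANTAGE_MS = 60  # hypothetical RT benefit for matching pictures

def predicted_match_effect(group, sentence_type):
    """Predicted RT(mismatch) - RT(match), per PSS plus expertise:
    you only simulate the object's shape if you know the domain."""
    if sentence_type == "everyday":
        return MATCH_ADVANTAGE_MS          # everyone has this knowledge
    if sentence_type == "hockey" and group == "expert":
        return MATCH_ADVANTAGE_MS          # domain knowledge required
    return 0                               # novices: no shape to simulate

for group in ("expert", "novice"):
    for sentence_type in ("everyday", "hockey"):
        effect = predicted_match_effect(group, sentence_type)
        print(f"{group:6s} / {sentence_type:8s} sentences: "
              f"match advantage = {effect} ms")
```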

Those are all fun experiments, right? In each case, the authors argue that the results indicate that our representations of the sentences occur in the same perceptual and motor systems with which we perceive the objects and perform the actions in the sentences. I can definitely buy that conclusion. I'm not sure how you could explain these results without recourse to representations that contain perceptual and motor information (and thus, by implication, utilize perceptual and motor systems, or are at least associated with those systems). However, the authors also argue that these results are incompatible with "amodal symbol systems" theories. That I can't buy. Only extreme versions of "amodal symbol systems" theories, versions to which I doubt anyone adheres, would argue that our representations of sentences, or anything else, are generally devoid of perceptual content. Theories of knowledge representation that posit amodal content simply say that it's not all perceptual. I suspect that many such theories could explain these results. In fact, that's the major problem for PSS, just as it was for the mental imagery theories of the 1970s: any given result can be explained by both PSS theories and "amodal symbol systems" theories.

Now, the common argument, used frequently by Barsalou himself, is that "amodal symbol systems" theories (I'm using the scare quotes 'cause I hate that name, by the way) can probably explain these results, but they can't predict them. In science, a theory that predicts data is generally viewed more favorably than one that can only explain it post hoc. In this case, though, I don't think that heuristic applies. "Amodal symbol systems" theorists probably wouldn't predict these results, not because they couldn't, if they thought about it, but because their focus is elsewhere. Since PSS theorists focus so much on perception and the motor system, they're more likely to make predictions about perceptual and motor effects. In other words, it's not that "amodal symbol systems" theories can't predict these results; it's that no one using "amodal symbol systems" theories ever really thought to do so. Of course, that's why PSS is so valuable, even if it's empirically indistinguishable from many "amodal symbol systems" theories: it has caused researchers to focus on phenomena that they otherwise would have ignored. As a result, it's produced really cool experiments with interesting results. And that's a good thing.


1If you want to learn about the mental imagery work in the 70s, I recommend starting with Kosslyn, S. M. (1978). Imagery and internal representations. In E. Rosch & B. Lloyd (Eds.), Cognition and categorization. Hillsdale, NJ: Erlbaum. Just about anything by Kosslyn from the 70s and early 80s would work as well.
2Barsalou, L.W., Solomon, K.O., & Wu, L.L. (1999). Perceptual simulation in conceptual tasks. In M.K. Hiraga, C. Sinha, & S. Wilcox (Eds.), Cultural, typological, and psychological perspectives in cognitive linguistics: The proceedings of the 4th conference of the International Cognitive Linguistics Association, Vol. 3 (209-228). Amsterdam: John Benjamins.
3Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558-565.
4Zwaan, R. A., Madden, C. J., Yaxley, R. H., & Aveyard, M. E. (2004). Moving words: Dynamic representations in language comprehension. Cognitive Science, 28, 611-619.
5Stanfield, R. A., & Zwaan, R. A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153-156.
6Zwaan, R. A., Stanfield, R. A., & Yaxley, R. H. (2002). Language comprehenders mentally represent the shape of objects. Psychological Science, 13, 168-171.
7Holt, L.E., & Beilock, S.L. (2006). Expertise and its embodiment: Examining the impact of sensorimotor skill expertise on the representation of action-related text. Psychonomic Bulletin & Review, 13(4), 694-701.


It seems that a significant fraction of cog sci experiments use reaction time as the measurement of brain activity. But has anyone actually tried to figure out what is involved in "harder" tasks taking more time? In other words, how do we actually make decisions, on a neurological level? How does the brain "know" that in one case it has a satisfactory result after 1.3s and in another case after 1.4s? Is there a network of neurons dedicated to looking at other networks and checking if they've converged on a result?

Well, sure, most RT tasks are designed to test hypotheses about how and why two or more tasks are different from each other.