What sound are you hearing? It may depend on the words you read

Listen to this short audio clip:

The clip plays two notes that are two full octaves apart. That's a greater range than many people can produce vocally. It should be easy for anyone to tell the difference between these two notes, even when heard in isolation, right?

Not necessarily.

A team led by Ulrich Weger has found a scenario where people make systematic errors judging these two very different notes. While most people get the notes right most of the time, by introducing a wrinkle into the testing, Weger's team could reliably induce errors and slower response times.

They asked 20 undergraduate students to listen to these notes repeatedly, responding as quickly as possible to identify the notes. What was the wrinkle? First they had to read a word on a computer screen and make a judgment about it. Each word had either a good meaning (e.g. kiss) or a bad meaning (e.g. dead). 150 milliseconds after the volunteer responded with "good" or "bad," one of the two tones was played through headphones. Respondents had to press the "1" key if they heard the high-frequency tone, and the "5" key if they heard the low-frequency tone.
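To make the procedure concrete, here's a rough schematic of a single trial in Python. This is only my sketch of the sequence described above, not the authors' actual experiment code; the example words beyond "kiss" and "dead" and the show_word, play_tone, and get_keypress helpers are hypothetical placeholders.

```python
import random
import time

# Schematic of one trial as described in the post (not the authors' code):
# a valenced word is judged "good" or "bad"; 150 ms after that response,
# a high or low tone plays and the participant presses "1" (high) or "5" (low).
# Word lists beyond "kiss"/"dead" and the I/O helper functions are placeholders.

POSITIVE_WORDS = ["kiss", "smile", "gift"]   # "kiss" is from the post
NEGATIVE_WORDS = ["dead", "poison", "loss"]  # "dead" is from the post

def run_trial(show_word, play_tone, get_keypress):
    """Run one prime-then-tone trial; the three helper functions are assumed."""
    # 1. Present a randomly chosen positive or negative word and collect the judgment.
    valence = random.choice(["positive", "negative"])
    word = random.choice(POSITIVE_WORDS if valence == "positive" else NEGATIVE_WORDS)
    show_word(word)
    get_keypress(valid_keys=["good", "bad"])

    # 2. Wait 150 ms after the word judgment, then play a high or low tone.
    time.sleep(0.150)
    tone = random.choice(["high", "low"])
    tone_onset = time.perf_counter()
    play_tone(tone)

    # 3. Record which key was pressed ("1" = high, "5" = low) and the reaction time.
    response = get_keypress(valid_keys=["1", "5"])
    reaction_time = time.perf_counter() - tone_onset
    correct = (tone == "high") == (response == "1")
    return {"valence": valence, "tone": tone, "correct": correct, "rt": reaction_time}
```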

How'd they do? Here are the results:

[Figure: error rates for identifying the high- and low-pitched tones after positive and negative words]

After reading positive words, the students made significantly fewer errors identifying the high-pitch tone than the low-pitch tone. After reading negative words, the results were reversed: students were significantly more accurate identifying low-pitched tones than high-pitched tones, even though the tones were two full octaves apart! A similar pattern was found in reaction times: they were faster to identify the high-pitched tone after positive words, and slower to identify it after negative words.

Since these error rates were relatively low, the researchers repeated the experiment with tones that were more similar -- about five notes apart on a traditional scale. The results were the same.
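For a rough sense of scale, here's a bit of interval arithmetic in equal temperament, where each semitone multiplies frequency by the twelfth root of two. The post doesn't give the actual frequencies used, and "about five notes apart" could mean five semitones or five scale steps, so the specific intervals below are just illustrative assumptions.

```python
# Interval arithmetic in 12-tone equal temperament: each semitone multiplies
# frequency by 2**(1/12), so an octave (12 semitones) exactly doubles it.

def interval_ratio(semitones: int) -> float:
    """Frequency ratio spanned by an interval of the given number of semitones."""
    return 2 ** (semitones / 12)

print(interval_ratio(24))  # two full octaves: ratio of 4.0 (first experiment)
print(interval_ratio(5))   # five semitones (a perfect fourth): ~1.33
print(interval_ratio(7))   # seven semitones (a perfect fifth), if "five notes" means scale steps: ~1.50
```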

Weger's team says their results are an example of affective priming. Good and bad concepts are metaphorically linked to higher and lower positions in space, just as high and low notes are. Reading the positive words led the students to think of "high" things, which made it more difficult for them to identify the low notes.

The researchers were careful not to use the terms "high" and "low" anywhere in the study, so they argue the results show that the students share a cultural bias to see negative words and low notes as physically low in space, and positive words and high notes as physically high in space.

Weger, U.W., Meier, B.P., Robinson, M.D., Inhoff, A.W. (2007). Things are sounding up: Affective influences on auditory tone perception. Psychonomic Bulletin & Review, 14(3), 517-521.

Comments

Could it be that we tend to say positive words in a high-pitched voice ("Good work!") and negative words in a low-pitched voice ("Bad dog!")? The results do not necessarily show that the high/low metaphor is involved.

In nature, larger humans have deeper voices. Larger humans are associated with being stronger and therefore more dangerous when crossed. Instinctively, children are adored and protected, with their cute little high-pitched voices.

In the animal kingdom, we have more to fear from the low growl of the bear than the high twitter of the songbird.

I venture to say that there is an evolutionary role in this high=good low=bad association.

I've never really liked affective priming as an explanation for these sorts of things. Just because you're "feeling down" or things are "looking up" doesn't mean that there aren't also negative associations with high and low. Besides, I should hope that most of our preferences are derived from experience and not language. There seem to be too many leaps, from good and bad to spatial location and from there to music; it all feels a bit tenuous. I can agree that high pitches are associated with good things and therefore possibly good words, but I don't think it's for spatial reasons. I agree with the previous two commenters that there are definitely other things at work.

It would be interesting to repeat this with a group of piano, flute and saxophone players. For all three instruments the notes on the page correspond spatially to pitch (higher on the staff means a higher pitch), but the direction your fingers move doesn't necessarily correspond. Flutes have higher pitch when the leftmost keys are pressed, and pianos when the rightmost are pressed. Saxophones have higher pitch for the topmost keys. Are there any instruments where higher pitch is lower spatially? Pipe organs?

One thing I'd like to know is whether all languages use "high" and "low" for pitch in the same way that English does. If that usage is universal, it might suggest that there is some innate reason for the connection rather than a purely cultural one.

Just because they did not identify the tones as either high or low does not mean that the informants did not think of them in these terms and "label" them, so to speak. One needs to think of other ways of controlling this factor.

Darth Vader = Bad.
Cute rodenty cartoon characters = Good.

Don't forget about "motherese" -- the exaggerated, high-pitched way that people speak to babies and small pets. I'd also like to see the results for a culture that doesn't use the same spatial metaphors as we do. These results are interesting, but I'd be cautious in interpreting the findings.