Surgery on conscious patients reveals sequence and timing of language processing

THINKING of and saying a word is something that most of us do effortlessly many times a day. This involves a number of steps - we must select the appropriate word, decide on the proper tense, and also pronounce it correctly. The neural computations underlying these tasks are highly complex, and whether the brain performs them all at the same time, or one after the other, has been a subject of debate.

This debate has now apparently been settled, by a team of American researchers who had the rare opportunity to investigate language processing in conscious epileptic patients undergoing surgery. In today's issue of the journal Science, the researchers report that the brain processes lexical, grammatical and phonological information in a well-defined sequence that lasts less than half a second, and that a single language centre known as Broca's Area is involved in all these tasks.

Broca's Area is named after the French physician Pierre Paul Broca, who identified the brain region during post-mortem examinations of two patients who had lost the ability to speak after suffering strokes. These patients were still able to understand the speech of others perfectly well, and the area Broca identified - which is located in the inferior frontal gyrus of the left hemisphere - was later found to control the throat and tongue muscles required for the production of speech. It has therefore long been assumed to be involved solely in speech production.

It is now known that Broca's Area is also involved in other aspects of speech. But the faculty of speech cannot be studied in animals, and the resolution of techniques used to investigate the living human brain, such as functional neuroimaging, is too low to examine brain activity in any great detail. So, since it was discovered, little progress has been made towards understanding the precise role of this part of the brain in language. Neurosurgeons can probe the brain using electrodes placed onto the surface of the cerebral cortex, but the operations requiring this procedure are not performed very often.

Using a variation of the technique pioneered by Wilder Penfield in the 1930s, Ned Sahin and his colleagues implanted electrode arrays into the brains of three epileptic patients undergoing pre-surgical evaluation. During the procedure, the patients were shown words on a computer screen, and asked to either silently repeat them or perform various inflections. For example, if they saw the phrase "Yesterday they ____" followed by "to walk", the patient would mentally utter "Yesterday they walked" and then press a button to indicate that they had completed the task.


The electrodes were implanted in and around Broca's Area, enabling the researchers to record the neural activity associated with language processing at high spatial and temporal resolution. This revealed that the neural signature of the language task consisted of three distinct components, which were separated in space and time, and found consistently in all three patients.

The three phases of electrical activity were all recorded from within Broca's Area, but at different times and in distinct subregions separated from each other by several millimeters. The first component occurred at around 200 milliseconds (ms) after presentation of each word. It was found to be larger for infrequently used words than for common ones, but was not sensitive to word length, suggesting that it corresponds to word identification. Broca's Area is not normally associated with identifying words, but it has previously been shown to be activated at this timescale in response to lexical information delivered to it from other language areas.

The second component of the signature was recorded at 320 ms after word presentation, in a region at the back of Broca's Area. Activity recorded from this region was found to be modulated during trials requiring the patients to subvocally produce the past tense forms of verbs or to convert nouns between the singular and the plural, but not during trials in which the word was just repeated. It therefore seems to be involved in processing grammatical, but not lexical, aspects of language.

The final component was recorded at around 450 ms, in yet another distinct subregion. This signal was the same during trials in which the patients read the word as it was presented, or uttered a sentence containing it in the present tense (for verbs) or the singular form (for nouns). However, it differed in trials involving conversion of a verb to the past tense, or of a noun to its plural. These inflections generate outputs which sound different from the others - for the past tense, one must select an appropriate suffix, such as "-ed", and decide how it is pronounced (the suffix sounds different in "handed" and "walked", for example), as well as whether the verb is regular ("played") or irregular ("bought"). These differences led the researchers to conclude that this third component corresponds to the processing of phonological information.

According to the classical neurological model of language, Broca's Area is involved in speech production, and Wernicke's Area, which is located in the temporal lobe, is required for speech comprehension. This study shows that Broca's Area is subdivided into functionally distinct regions which are involved in sequential processing of different aspects of language, and that its role in language is far more extensive than previously thought. It adds to earlier evidence that Broca's Area is involved in both speech production and comprehension. Future work using these techniques may reveal more of its fine-grained structure, and provide further clues about its involvement in speech.


Sahin, N., et al. (2009). Sequential Processing of Lexical, Grammatical, and Phonological Information Within Broca's Area. Science 326: 445-449. DOI: 10.1126/science.1174481.


Thank you for this account. It feels so odd to have such accepted tenets challenged so comprehensively but it's also exciting for so many other reasons.

By Evidence Matters (not verified) on 16 Oct 2009 #permalink

I love when my RSS feed cuts off part of the headline. This one read "Surgery on conscious patients reveals". I was so surprised to find out that the rest of the sentence wasn't "That surgery hurts."

Great article on a fascinating study. Any time I read about electrodes in the brains of epileptic patients, I think of Libet. Care to tie the 0.5 seconds to verbalize with Libet's 0.5 seconds for conscious recognition?

Well, really very interesting in every way - it's a milestone. Language, spoken aloud or not, is likely a sophisticated behaviour in every sense, and its articulations are strongly motivated movements of the body. It is the empire of the senses, and so, presumably, is Broca's Area ...
The thought that it's dangerous to talk on a mobile phone while driving suggests as much ...
We should be aware that some human languages use speech components that rely extensively on phonological resemblance, also called opposition, to convey a logical - actually semantic - relation or variation. That is, two words, say "gira" and "kira", differing in a voiced [g] versus an unvoiced [k], might respectively mean "to have" and "to be rich". Obviously, one entails the other.
The study of Broca's Area hasn't yet explained all of language's secrets ...

Fascinating article - lucky to find patients to run this experiment on! I actually know someone with a brain tumor who sometimes loses the ability to talk - they can understand things fine but just can't make the words right. On one occasion they couldn't say numbers, and had to hold up fingers instead, as if they understood numbers but couldn't remember the words associated with them.

@Cole: Libet et al used patients' subjective reports of when they became aware of the stimuli, so, given the timescales involved, that leaves room for error. And the question of how accurate they are is further complicated by the fact that it takes another fraction of a second for the patients to verbalize their awareness! Assuming accuracy, though, the events measured in both studies can be thought of as the brain's output. They both involve some degree of prior cortical processing, so I think that in both cases this is what's reflected in the short delays. Consciousness is always about the "now", but of course we can only ever perceive reality through our brain's construction of it. We don't become aware of events in the outside world when they actually happen, but slightly later, because our perception of them is based on neural processing. How the brain encodes time is certainly intriguing, but I think the answers will elude us for a long time yet.

@CaptainSkellet: Very interesting. This seems to reinforce the old dichotomy between Broca's and Wernicke's areas, but clearly language processing is far more complicated than that. Both areas are likely involved in multiple functions, along with many other areas, and multiple parallel streams of information must converge before a word can be uttered.