Neuroscientists ID Part of the Brain Devoted to Processing Speech


A team of NYU neuroscientists has identified a part of the brain exclusively devoted to processing speech, helping to settle a long-standing debate about role-specific neurological functions.


A team of New York University neuroscientists has identified a part of the brain exclusively devoted to processing speech. Its findings point to the superior temporal sulcus (STS), located in the temporal lobe, and help settle a long-standing debate about role-specific neurological functions.

“We now know there is at least one part of the brain that specializes in the processing of speech and doesn’t have a role in handling other sounds,” explains David Poeppel, the paper’s senior author, a professor in NYU’s Department of Psychology and Center for Neural Science.

The study, which appears in the journal Nature Neuroscience, sought to address a decades-old uncertainty—and dispute—in neural science: are there certain regions of the brain exclusively dedicated to managing speech, thereby ignoring other sounds, such as music or animal noises?

To address this question, the researchers conducted a series of experiments in which the study’s subjects listened to speech as well as to other types of “environmental” sounds that ranged from fireworks to ping pong to dogs barking.

To ensure that the subjects were responding to speech sounds rather than to a language that was already familiar to them, the researchers used recorded German-language words—which none of the subjects understood—rather than English ones. In this way, the method aimed to solely measure the brain’s detection of speech, which involves listening and speaking, rather than language, which involves constructing and understanding sentences.

To further disguise the origins of both the speech and environmental sounds, the researchers developed a series of audio “quilts”—sound segments, ranging from 30 to 900 milliseconds, in which words or natural sounds were chopped up and reordered. With this method, the researchers could help ensure that the study’s subjects were responding only to audio cues rather than guessing their origins—and thus possibly activating parts of the brain not relevant to sound detection.
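The quilting idea can be illustrated with a minimal sketch: chop a waveform into fixed-length segments and reorder them at random. This is only an illustration of the general concept; the study's actual quilting algorithm also matched segment boundaries to avoid audible discontinuities, which is omitted here, and the function name and parameters below are hypothetical.

```python
import numpy as np

def make_sound_quilt(signal, sample_rate, segment_ms, seed=None):
    """Chop a sound into fixed-length segments and reorder them at random.

    A simplified sketch of 'quilting'; the original study's method also
    smoothed segment junctions, which this version does not do.
    """
    rng = np.random.default_rng(seed)
    seg_len = int(sample_rate * segment_ms / 1000)  # samples per segment
    n_segs = len(signal) // seg_len
    # Split the usable portion into equal segments, dropping any remainder
    segments = signal[: n_segs * seg_len].reshape(n_segs, seg_len)
    order = rng.permutation(n_segs)  # random reordering of the segments
    return segments[order].reshape(-1)

# Example: quilt a 1-second, 16 kHz tone using 30 ms segments
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
quilt = make_sound_quilt(tone, sr, segment_ms=30, seed=0)
```

Because the segments are merely permuted, the quilt preserves the local acoustic statistics of the source while scrambling its longer-range structure, so listeners cannot recover what the sound originally was.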

During these procedures, the researchers gauged subjects’ neurological responses—in multiple parts of the brain—using functional magnetic resonance imaging (fMRI).

The results showed expected activity in response to all types of sounds in the temporal lobe’s auditory cortex. However, moving further down in this region—to the STS—the results showed activity only in detecting speech sounds, suggesting that this part of the brain is reserved for spotting the spoken word.

The study’s other co-authors were Tobias Overath, Josh H. McDermott, and Jean Mary Zarate—all NYU post-doctoral fellows at the time of the study.

This work was supported, in part, by a grant from the National Institutes of Health (2R01DC05660).

Press Contact

James Devitt
(212) 998-6808