New York University

NYU Researchers Identify a Key Mechanism in the Brain’s Computation of Sound Location

June 29, 2010

New York University researchers have identified a mechanism the brain uses to help process sound localization. Their findings, which appear in the latest edition of the journal PLoS Biology, focus on how the brain computes the different arrival times of a sound at each ear to estimate the location of its source.

Animals can locate the source of a sound by detecting microsecond (one millionth of a second) differences in arrival time at their two ears. The neurons encoding these differences—called interaural time differences (ITDs)—receive a message from each ear. After receiving these messages, or synaptic inputs, they perform a microsecond computation to determine the location of the sound source. The NYU scientists found that one reason these neurons are able to perform such a rapid and sensitive computation is because they are extremely responsive to the input’s “rise time”—the time it takes to reach the peak of the synaptic input.
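To give a sense of the scales involved, the geometry can be sketched numerically. This is a minimal illustration using the standard far-field approximation ITD = d·sin(θ)/c; the head width is an assumed round number, not a figure from the study:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_WIDTH = 0.03        # m, rough interaural distance for a gerbil (assumed)

def itd_seconds(azimuth_deg):
    """Approximate interaural time difference for a distant source at the
    given azimuth (0 deg = straight ahead, 90 deg = directly to one side),
    using the simple far-field model ITD = d * sin(theta) / c."""
    return HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# Even for a source directly to one side, the difference is under
# 100 microseconds for a head this size, which is why these neurons
# must resolve microsecond-scale timing.
print(itd_seconds(90.0) * 1e6)  # ITD in microseconds
```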

Existing theories have held that the biophysical properties of the two inputs are identical—that is, messages coming from each ear are rapidly processed at the same time and in the same manner by neurons.
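That classic picture, in which the neuron simply reads out the coincidence of two identically shaped inputs, can be sketched in a toy model. The Gaussian input shape and all numbers below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

dt = 1e-6                       # 1 microsecond time step
t = np.arange(0.0, 2e-3, dt)    # 2 ms window

def epsp(delay, width=2e-4):
    """Toy Gaussian-shaped synaptic potential arriving `delay` seconds
    after a reference time of 0.5 ms (illustrative only)."""
    return np.exp(-((t - 5e-4 - delay) ** 2) / (2.0 * width ** 2))

def peak_response(itd):
    """Classic coincidence-detector readout: the peak of the summed
    inputs, which is largest when the two ears' messages coincide."""
    return (epsp(0.0) + epsp(itd)).max()

aligned = peak_response(0.0)    # the two messages arrive together
offset = peak_response(3e-4)    # a 300-microsecond interaural difference
```

In this idealized scheme the summed peak falls off smoothly as the interaural delay grows, so the cell's firing reports arrival-time coincidence alone; the study described below argues the real computation also depends on input shape.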

The NYU researchers challenged this theory by focusing on the nature of the neurons and their inputs: specifically, how sensitive the neurons are to differences in the inputs' rise times, and how much those rise times differ between the messages arriving from each ear.

Buoyed by predictions from computer modeling work, the researchers examined this process in gerbils, which are good candidates for study because they process sounds in a similar frequency range to humans and with apparently similar neural architecture.

Their initial experimental findings came from recordings of the gerbil neurons responsible for this task, made while the synaptic pathways were stimulated directly. The researchers found that the rise times of the synaptic inputs coming from the two ears differ: messages from the ipsilateral ear rise faster than those driven by the contralateral ear. (The brain has two groups of neurons that perform this computation, one in each hemisphere; ipsilateral messages come from the same-side ear and contralateral messages from the opposite-side ear.) They also found that the messages from each ear arrive at different times. This was not surprising, since the distance from these neurons to each ear is not symmetric; other researchers had assumed such asymmetry existed, but it had never been measured and reported before this study. Given this newfound complexity in the way sound information reaches these neurons, the researchers concluded that the neurons could not process it in the way previously theorized.

Key insights into how these neurons actually process sound from both ears came from the computer model. Its results showed that the neurons perform the computation differently than neuroscientists had previously proposed: they not only encode the coincidence in arrival time of the two messages from each ear, but also detect features of the input's shape that are more directly related to the time scale of the computation itself than the features proposed in earlier studies.

“Some neurons in the brain respond to the net amplitude and width of summed inputs—they are integrators,” explained Pablo Jercog and John Rinzel, two of the study’s co-authors. “However, these auditory neurons respond to the rise time of the summed input and care less about the width. In other words, they are differentiators—key players on the brain’s calculus team for localizing a sound source.”
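The integrator-versus-differentiator distinction can be illustrated with a toy numerical sketch. The trapezoidal inputs and time constants below are assumptions chosen for illustration, not the study's stimuli:

```python
import numpy as np

dt = 1e-6                       # 1 microsecond time step
duration = 1e-3                 # 1 ms input pulse
t = np.arange(0.0, duration, dt)

def pulse(rise, fall=2e-4):
    """Toy trapezoidal input: linear rise to a peak of 1, a plateau, then
    a linear fall. Peak and total duration are the same for every rise
    time, so only the upstroke differs (illustrative only)."""
    up = np.clip(t / rise, 0.0, 1.0)
    down = np.clip((duration - t) / fall, 0.0, 1.0)
    return np.minimum(up, down)

fast, slow = pulse(rise=1e-4), pulse(rise=3e-4)

# "Integrator" readout: area under the input. It barely changes
# when only the rise time varies.
area_fast, area_slow = fast.sum() * dt, slow.sum() * dt

# "Differentiator" readout: the steepest upstroke. It changes
# threefold between the two inputs.
slope_fast = np.diff(fast).max() / dt
slope_slow = np.diff(slow).max() / dt
```

An area-based readout sees these two inputs as nearly identical, while a slope-based readout separates them cleanly, which is the sense in which rise-time-sensitive neurons act as differentiators.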

Jercog is a former graduate student at NYU’s Department of Physics and Center for Neural Science and now a post-doctoral fellow at Columbia University; Rinzel is a professor in the Center for Neural Science and the Courant Institute of Mathematical Sciences.

The study’s other authors were: Dan Sanes, a professor in NYU’s Department of Biology and Center for Neural Science; Gytis Svirskis, a former researcher at the Center for Neural Science; and Vibhakar Kotak, a research associate professor at the Center for Neural Science.

This Press Release is in the following Topics:
Research, Arts and Science, Courant Institute of Mathematical Sciences, Faculty

Type: Press Release

Press Contact: James Devitt | (212) 998-6808


