Excerpt from LMU press release of May 30, 2018:
Benedikt Grothe and his research group study the neuronal processing mechanisms that enable the mammalian auditory system to localize sounds in space. In their latest study, the researchers take a closer look at the impact of context on sound localization, and demonstrate that the human auditory system dynamically adjusts its response when stimuli are presented in sequence. The results call into question the conventional view that the system serves primarily to localize sound sources with very high precision, based on physical differences between the same sound as perceived by the two ears. “Our study will lead to a paradigm change in the understanding of spatial hearing,” Grothe states. The findings appear in the online journal Scientific Reports.
We perceive sounds because the pressure changes they set up in the inner ear are transduced into electrical signals by sensory nerve cells; these signals are then transmitted and processed via several way stations on the route to the auditory cortex. According to the generally accepted model, the processing system localizes a sound source in space by measuring the difference between the times of arrival of the sound at the ipsilateral ear (the one closer to the source) and the contralateral ear. The mammalian auditory system can detect timing differences on the order of a few microseconds. This feat is made possible, in part, by a novel mechanism that involves precisely timed inhibition of neuronal firing at intermediate stages of processing, as a recent paper published in Nature Communications by a team led by Grothe and Michael Pecka has shown. However, the studies on which the timing-difference model is based used isolated single tones. When the stimulus consists of two sounds in succession, one observes ‘curious adaptation processes’, says Grothe. “In essence, we find it more difficult to accurately localize the second sound.”
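The interaural time-difference computation described above can be illustrated with a toy cross-correlation estimator, a common engineering analogue of delay-line models of binaural hearing (the actual neural circuit relies on timed inhibition, not literal cross-correlation). The sample rate, tone frequency, noise level, and 300 µs delay below are invented for the illustration:

```python
import numpy as np

fs = 200_000  # samples/s; a high rate is needed to resolve microsecond delays (assumed)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal

# Toy stimulus: a windowed tone burst reaching the left ear first.
itd_true = 300e-6  # 300 microseconds, within the human range of roughly +/- 700 us
delay_samples = int(round(itd_true * fs))

rng = np.random.default_rng(0)
burst = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)
left = burst + 0.01 * rng.standard_normal(t.size)
right = np.roll(burst, delay_samples) + 0.01 * rng.standard_normal(t.size)

# Cross-correlate the two "ear" signals and take the lag of maximal correlation.
lags = np.arange(-t.size + 1, t.size)
xcorr = np.correlate(right, left, mode="full")
itd_est = lags[np.argmax(xcorr)] / fs

print(f"estimated ITD: {itd_est * 1e6:.0f} us")  # recovers roughly 300 us
```

A positive lag here means the sound reached the left ear first, so the source lies on the listener's left; the sign and magnitude of the peak lag together encode the azimuth.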