Hearing-impaired artificial neural networks replicate speech recognition deficits of hearing-impaired humans

Mark R. Saddler1,2,3, Jenelle Feather1,2,3, Andrew Francl1,2,3, Josh H. McDermott1,2,3,4

1 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA; 2 McGovern Institute for Brain Research, Massachusetts Institute of Technology, USA; 3 Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, USA; 4 Program in Speech and Hearing Biosciences and Technology, Harvard University, USA

Background: Damage to peripheral auditory structures is known to alter cochlear signal processing. However, relating these changes to the real-world listening difficulties of humans with hearing loss has remained a challenge. Computational models capable of performing real-world tasks could provide insight. Artificial neural networks optimized to perform auditory recognition tasks from simulated cochlear input have recently been shown to replicate aspects of human auditory behavior. Here, we extend this approach to investigate how outer hair cell (OHC) and auditory nerve fiber (ANF) loss can account for difficulties recognizing speech in noisy environments.

Methods: We trained deep neural networks to recognize words from simulated healthy cochlear representations of speech in noise. We then simulated OHC and ANF loss in the cochlear model and measured the effects on network performance. To investigate how plasticity in the central auditory system might allow hearing-impaired listeners to adapt to their damaged cochleae, we also trained networks with impaired cochlear input.
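The two peripheral manipulations can be illustrated with a toy sketch. This is not the model used in the study (which employed a detailed simulation of the auditory nerve); here OHC loss is reduced to its two signature effects, broadened frequency tuning and elevated thresholds, applied to a simple Gaussian filterbank over a magnitude spectrum. All parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_ohc_loss(spectrum, center_bins, healthy_bw_bins=2.0,
                      ohc_loss=0.0, max_threshold_db=20.0):
    """Toy peripheral filterbank. `ohc_loss` in [0, 1]; 0 = healthy.
    OHC loss broadens each filter and elevates its threshold."""
    bw = healthy_bw_bins * (1.0 + 3.0 * ohc_loss)   # broader tuning
    threshold_db = max_threshold_db * ohc_loss       # elevated threshold
    bins = np.arange(len(spectrum))
    out = np.zeros(len(center_bins), dtype=float)
    for i, c in enumerate(center_bins):
        w = np.exp(-0.5 * ((bins - c) / bw) ** 2)
        w /= w.sum()                                 # unit-area filter
        level_db = 10.0 * np.log10(w @ spectrum + 1e-12)
        out[i] = max(level_db - threshold_db, 0.0)   # dB above threshold
    return out
```

Training one network on the healthy output of such a front end and another on its impaired output corresponds, in miniature, to the two training conditions described above.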

Results: Networks with either OHC or ANF loss introduced at test time replicated the behavioral deficits of hearing-impaired listeners: speech recognition performance was degraded (especially at low signal-to-noise ratios, SNRs) and the fluctuating masker benefit was reduced. In most cases, optimizing networks to handle impaired peripheral input produced remarkably unimpaired performance, provided sounds were presented at an audible level.
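The fluctuating masker benefit mentioned above is conventionally defined as the recognition improvement in a temporally modulated masker relative to a stationary masker at the same SNR; normal-hearing listeners exploit the masker's dips, and hearing-impaired listeners do so less. A minimal sketch of the computation, using hypothetical accuracy values for illustration only (not results from the study):

```python
import numpy as np

def fluctuating_masker_benefit(acc_fluctuating, acc_stationary):
    """FMB at each SNR: proportion correct in a fluctuating (modulated)
    masker minus proportion correct in a stationary masker, matched SNR."""
    return np.asarray(acc_fluctuating) - np.asarray(acc_stationary)

# Hypothetical proportion-correct values at SNRs of [-9, -6, -3, 0] dB:
normal = fluctuating_masker_benefit([0.55, 0.75, 0.88, 0.95],
                                    [0.20, 0.50, 0.80, 0.93])
impaired = fluctuating_masker_benefit([0.25, 0.45, 0.70, 0.85],
                                      [0.15, 0.40, 0.68, 0.84])
```

A reduced FMB, as in the impaired condition sketched here, is the pattern the impaired networks reproduced.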

Conclusion: The results suggest that a perfectly plastic central auditory system could almost fully compensate for peripheral changes caused by outer hair cell or auditory nerve fiber loss. Our model illustrates how deep learning can provide insight into both normal and abnormal sensory function.

Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured the effects of simulated OHC loss (broader frequency tuning and elevated thresholds) and ANF loss (reduced fidelity of temporal coding) on network speech recognition in noise.