Modeling Normal and Impaired Hearing With Artificial Neural Networks Optimized for Ecological Tasks

Authors: Mark Saddler¹, Torsten Dau¹, Josh McDermott²

¹Technical University of Denmark
²Massachusetts Institute of Technology

Background: Computational models that perform real-world tasks using simulated cochlear input could help link the peripheral effects of hearing loss to their real-world perceptual consequences. Artificial neural networks optimized separately for sound localization and recognition tasks have been shown to account for many aspects of normal-hearing human behavior. Here, we extend this approach to hearing loss using a model jointly optimized for multiple tasks.

Methods: We trained a single model to localize and recognize speech, voices, and environmental sounds from simulated auditory nerve representations of naturalistic scenes. Once trained, we compared the model’s speech recognition and spatial hearing to those of humans. We also measured psychoacoustic thresholds from the model by training linear classifiers to make binary judgments using the model’s learned features. To investigate the perceptual consequences of hearing loss, we altered the model’s peripheral input and measured the effects of these alterations on behavior. Different types of hearing loss were simulated by manipulating the functionality and number of hair cells and auditory nerve fibers.
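The linear-probe procedure described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the feature matrix here is random synthetic data standing in for the trained model's learned representations, and the binary labels mimic a simple psychoacoustic judgment (e.g., signal present vs. absent). Only the linear readout is fit; the "model" features are treated as frozen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained network's learned features:
# each row is the feature vector produced for one stimulus.
n_trials, n_features = 200, 32
features = rng.normal(size=(n_trials, n_features))

# Synthetic binary judgment: the label depends (noisily) on one
# feature dimension, mimicking a feature that carries
# task-relevant information about the stimulus.
labels = (features[:, 0] + 0.5 * rng.normal(size=n_trials) > 0).astype(int)

# The network itself stays frozen; only a linear classifier is
# trained on its features to make the binary judgment.
probe = LogisticRegression()
probe.fit(features[:100], labels[:100])

# Held-out probe accuracy indexes how much task-relevant
# information the learned features contain; sweeping stimulus
# difficulty and finding criterion accuracy yields a threshold.
accuracy = probe.score(features[100:], labels[100:])
print(f"held-out probe accuracy: {accuracy:.2f}")
```

In practice a threshold would be estimated by varying a stimulus parameter (level, interaural difference, etc.) and locating the value at which probe performance reaches a criterion, but that sweep is omitted here for brevity.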

Results: When equipped with healthy cochleae, the model accounted for several aspects of binaural speech perception in humans with normal hearing, reproducing effects of noise, reverberation, and spatial separation between speech and noise. When healthy cochleae were replaced with damaged cochleae, the model’s performance characteristics resembled those of humans with hearing loss: speech recognition was degraded (especially at low SNRs) and spatial release from masking was reduced. Psychoacoustic thresholds measured from the model similarly reproduced patterns of normal and impaired human hearing.

Conclusions: Machine-learning-based models that generate behavior from simulated auditory nerve input predict some aspects of hearing-impaired behavior and have promise for linking peripheral damage to its perceptual consequences.


  • Hi Mark,
    I watched your talk but didn’t get a chance to ask a question. Well done! The model outcomes and analysis framework were all very interesting. Have you published the hearing-impairment results anywhere, or are you planning to publish them soon? I found your latest preprint and will read it soon, but I didn’t see anything hearing-loss related there.
    Thank you,
    Fotis