Computational models of the auditory system

Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and listening comfort.

Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition
(A) Signals are processed with a binaural processing stage and converted to phoneme probabilities using a DNN; the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).

In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.
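As a toy sketch of the core idea (not the study's implementation), the degradation of DNN phoneme posteriors between a clean and a noisy signal can be quantified with a frame-wise KL divergence; the posteriors below are invented purely for illustration:

```python
import numpy as np

def posterior_degradation(p_clean, p_noisy, eps=1e-12):
    """Mean frame-wise KL divergence between clean and noisy phoneme
    posteriors -- a simple stand-in for a posterior-degradation measure."""
    p = np.clip(p_clean, eps, 1.0)
    q = np.clip(p_noisy, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

# Toy posteriors: 3 frames x 4 phoneme classes (rows sum to 1).
clean = np.array([[0.90, 0.05, 0.03, 0.02]] * 3)
noisy = np.array([[0.40, 0.30, 0.20, 0.10]] * 3)

print(posterior_degradation(clean, clean))  # 0.0: identical posteriors
print(posterior_degradation(clean, noisy))  # > 0: posteriors degraded
```

A larger degradation value would then map to a poorer predicted SI; the actual mapping used in the study is not reproduced here.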

Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Estimated distortion component of hearing impairment from the model's prediction errors as a function of the average hearing loss.

Attenuation-based model predictions of speech recognition thresholds, such as those from FADE, appear to enable estimation of the supra-threshold distortion component of hearing impairment.

Hearing-impaired artificial neural networks replicate speech recognition deficits of hearing-impaired humans
Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured effects of simulated OHC loss (broader frequency tuning and elevated thresholds) and ANF loss (reduced fidelity of temporal coding) on network speech recognition in noise.

We developed a deep learning model of hearing loss by training artificial neural networks to recognize words in noise from simulated auditory nerve input.

Comparing phonemic information transmission with cochlear implants between human listeners and an end-to-end computational model of speech perception
A finite element model of the implanted cochlea, showing the voltage spread caused by stimulation from the implanted electrode.

A finite element model of a cochlea, a computational model of the auditory nerve, and an automatic speech recognition neural network were combined to replicate CI speech perception patterns.

Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss
Figure 1. Stimulus spectra of standard (blue) and target (orange) vowel-like stimuli with ΔF2 = 2%, and the corresponding model responses of AN fibers and IC band-enhanced (BE) cells, which are excited by fluctuations within a band of modulation frequencies.

Formant-frequency difference limens in listeners with normal hearing or mild sensorineural hearing loss were estimated using models of the neural fluctuation profiles of neurons in the inferior colliculus.

Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus
VCN microcircuit structure of a pair of bushy cells. Green lines show the excitatory inputs originating from ANFs. Blue lines represent inhibitory inputs from DS cells while red lines are inhibitory inputs from TV cells. Yellow lines represent the gap junction connections between the bushy cells.

This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells, using biophysically detailed neural network models of bushy-cell microcircuits.

A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus
Top: SFIE model receiving input from a single frequency. Bottom: a cross-frequency model receiving an array of excitatory and inhibitory inputs from multiple frequencies. Right: model responses to two opposite directions of frequency chirp.

We present a cross-frequency extension of the SFIE model, receiving excitatory and inhibitory inputs from multiple frequency channels, to account for the sensitivity of inferior colliculus neurons to the direction of fast frequency chirps.

A computational single-fiber model of electric-acoustic stimulation
The model consists of a phenomenological model of acoustic stimulation (Bruce et al., 2018; green) and an integrate-and-fire model of electric stimulation (Joshi et al., 2017; orange). The model output is simulated spike times. Different methods of coupling the two models were investigated (red).

We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation which can be used to investigate peripheral electric-acoustic interaction.
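A minimal sketch of the integrate-and-fire half of such a coupling (all parameters are invented for illustration, not those of Joshi et al., 2017; the combined electric-plus-acoustic drive is likewise purely illustrative):

```python
import numpy as np

def lif_spike_times(current, dt=1e-5, tau=1e-3, resistance=1e7, v_thresh=0.01):
    """Minimal leaky integrate-and-fire neuron: membrane voltage integrates
    the input current (A) and resets to 0 on crossing threshold (V).
    Returns spike times in seconds."""
    v = 0.0
    spikes = []
    for i, i_in in enumerate(current):
        v += (dt / tau) * (-v + resistance * i_in)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = 0.0
    return spikes

# Toy combined drive: an "electric" pulse plus a sinusoidally modulated
# "acoustic" component, summed at the input (one conceivable coupling).
t = np.arange(0.0, 0.02, 1e-5)
electric = np.where((t > 0.005) & (t < 0.015), 2e-9, 0.0)
acoustic = 1e-9 * (1.0 + np.sin(2 * np.pi * 500.0 * t))
spike_times = lif_spike_times(electric + acoustic)
```

Summing currents is only one of several possible coupling points; comparing such choices is exactly what the study investigates.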

“Ear in the Clouds” – A web app supporting computational models for auditory-nerve and midbrain responses

A cloud-based web app provides an accessible tool for simulation and visualization of population responses of model auditory-nerve and midbrain neurons.

The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study
Average thresholds for an amplitude-modulation detection task (N = 11) at different modulation rates (left: 16 Hz; middle: 32 Hz; right: 64 Hz). Carrier frequency: 500 Hz.

Computer simulations support the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and degrades temporal envelope processing, and show that some tasks are more sensitive to this loss than others.

Correlates of linguistic processing in the frequency following response to naturalistic speech
A) Latency of the frequency following response to natural speech estimated by the complex linear model. B) Magnitudes of the model coefficients at 11 ms. C) Phases of the model coefficients at 11 ms.

Our findings suggest that the frequency following response tracking the fundamental frequency of voiced speech plays an active role in the rapid and continuous processing of spoken language.
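As a side note on the method, a response latency can be recovered from the phase slope of complex model coefficients across frequency. A minimal synthetic illustration (the 11 ms delay matches the figure, but the frequency grid and the pure-delay coefficients are assumptions of this sketch, not the study's data):

```python
import numpy as np

true_latency = 0.011                    # 11 ms, as in the figure
freqs = np.arange(100.0, 400.0, 20.0)   # assumed frequency grid (Hz)

# Synthetic unit-gain, pure-delay coefficients: exp(-2*pi*j*f*tau).
coeffs = np.exp(-2j * np.pi * freqs * true_latency)

# The latency is minus the slope of the unwrapped phase, divided by 2*pi.
phase = np.unwrap(np.angle(coeffs))
slope = np.polyfit(freqs, phase, 1)[0]  # radians per Hz
latency = -slope / (2.0 * np.pi)
print(latency)  # recovers ~0.011 s
```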

Audiological classification performance based on audiological measurements and Common Audiological Functional Parameters (CAFPAs)

Towards a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, and thereby offer the potential to combine different audiological databases.

Use of air conduction thresholds to predict bone conduction asymmetry and air-bone gap

This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.

The Panoramic ECAP Method: modelling the electrode-neuron interface in cochlear implant users

The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy and improving speech perception outcomes.

Predicting abnormal hearing difficulty in noise in ‘normal’ hearers using standard audiological measures

This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.

Random Forest Classification to Predict Response to High-Definition Transcranial Direct Current Stimulation Therapy for Tinnitus

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.

Computational modelling of the human auditory brainstem response to natural speech

The computational model consists of three main parts (auditory nerve, inferior colliculus and cochlear nuclei). The figure shows the input (natural speech) and the neural outputs at the different levels.

Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine learning model is trained on real-world fitting data to predict a user's individual gain from audiometric and other subject-related data, such as age, gender, and acoustic environment.

Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.

Computational Audiology: new ways to address the global burden of hearing loss
Source: https://www.stripepartners.com/our_writing_article/the-age-of-the-ear/

Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
