Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.
Continue reading: Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort
(A) Signals are processed by a binaural processing stage and converted to phoneme probabilities by a DNN; the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).
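The back end of such a pipeline can be illustrated with a short sketch. The following minimal Python example is a hypothetical illustration, not the published BAPSI implementation: the function names, the KL-divergence degradation measure, and the calibration constants are all assumptions. It scores how far noisy phoneme posteriors drift from clean ones and maps that score to an SRT.

```python
# Minimal sketch of a degradation-based SI back end (hypothetical
# names and constants, not the published BAPSI implementation).
import numpy as np

def posterior_degradation(p_clean, p_noisy, eps=1e-12):
    """Mean KL divergence between clean and noisy phoneme posteriors.

    p_clean, p_noisy: arrays of shape (frames, phonemes), rows sum to 1.
    """
    p = np.clip(p_clean, eps, 1.0)
    q = np.clip(p_noisy, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

def predict_srt(degradation, slope=8.0, offset=-6.0):
    """Map the degradation measure to an SRT in dB SNR.

    slope/offset are placeholder constants that would be fitted
    to subjective SRT data.
    """
    return slope * degradation + offset

# Random posteriors standing in for DNN outputs on clean/noisy speech
rng = np.random.default_rng(0)
clean = rng.dirichlet(np.ones(40), size=200)
noisy = rng.dirichlet(np.ones(40), size=200)
print(predict_srt(posterior_degradation(clean, noisy)))
```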
Attenuation-component-based model predictions of speech recognition thresholds, such as those of FADE, appear to enable estimation of the supra-threshold distortion component of hearing impairment.
Continue reading: Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured the effects of simulated outer hair cell (OHC) loss (broader frequency tuning and elevated thresholds) and auditory nerve fiber (ANF) loss (reduced fidelity of temporal coding) on the networks' speech recognition in noise.
A finite element model of the cochlea, a computational model of the auditory nerve, and an automatic speech recognition neural network were combined to replicate CI speech perception patterns.
Continue reading: Comparing phonemic information transmission with cochlear implants between human listeners and an end-to-end computational model of speech perception
Figure 1. Stimulus spectra of standard (blue) and target (orange) vowel-like stimuli with ΔF2 = 2%, and the corresponding model responses of AN fibers and IC band-enhanced (BE) cells, which are excited by fluctuations within a band of modulation frequencies.
Formant-frequency difference limens in humans with normal hearing or mild sensorineural hearing loss were estimated from models of the neural fluctuation profiles of neurons in the inferior colliculus.
Continue reading: Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss
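A common way to turn such model responses into a predicted difference limen is a signal-detection back end. The sketch below is a hedged illustration: the multi-channel d' formula and the d' = 1 criterion are standard signal-detection assumptions, and the function names and toy numbers are mine, not the paper's. It computes d' between trial-by-trial fluctuation profiles and interpolates the ΔF2 at which d' reaches the criterion.

```python
import numpy as np

def profile_dprime(std_trials, tgt_trials):
    """Multi-channel d' from response profiles (trials x channels):
    difference of mean profiles scaled by pooled per-channel variability."""
    m = tgt_trials.mean(0) - std_trials.mean(0)
    s = np.sqrt(0.5 * (std_trials.var(0) + tgt_trials.var(0)))
    return float(np.sqrt(np.sum((m / np.maximum(s, 1e-12)) ** 2)))

def estimate_dl(delta_f2, dprimes, criterion=1.0):
    """Interpolate the ΔF2 (%) where d' crosses the criterion
    (assumes d' increases monotonically with ΔF2)."""
    return float(np.interp(criterion, dprimes, delta_f2))

# Toy d' values standing in for model-derived discriminability
delta = np.array([0.5, 1.0, 2.0, 4.0])   # tested ΔF2 in percent
d = np.array([0.3, 0.7, 1.4, 2.8])
print(estimate_dl(delta, d))             # ΔF2 at d' = 1, here ≈ 1.43%
```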
VCN microcircuit structure of a pair of bushy cells. Green lines show the excitatory inputs originating from ANFs. Blue lines represent inhibitory inputs from D-stellate (DS) cells, while red lines are inhibitory inputs from tuberculoventral (TV) cells. Yellow lines represent the gap-junction connections between the bushy cells.
This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells, using biophysically detailed neural network models of bushy-cell microcircuits.
Continue reading: Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus
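The core mechanism, electrical coupling between two cells, can be demonstrated with a far simpler model than the biophysically detailed one used in the study. Below is a toy two-cell sketch in Python; all parameters and the leaky integrate-and-fire simplification are assumptions for illustration only. Each "bushy cell" receives independent Poisson ANF drive, and a gap-junction term pulls the two membrane potentials toward each other.

```python
# Toy two-cell sketch (not the paper's biophysical model): leaky
# integrate-and-fire "bushy cells" coupled by a gap junction.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 0.2                       # time step and duration (s)
n_steps = int(T / dt)
tau = 5e-3                              # membrane time constant (s)
v_rest, v_th, v_reset = -65.0, -50.0, -65.0  # mV
g_gap = 0.1                             # dimensionless coupling strength
v = np.full(2, v_rest)
spikes = [[], []]

for i in range(n_steps):
    # Independent Poisson ANF drive (hypothetical rate and EPSP size)
    drive = 15.0 * (rng.random(2) < 400.0 * dt)
    # Gap-junction current: each cell is pulled toward the other cell
    gap = g_gap * (v[::-1] - v)
    v += dt / tau * (v_rest - v + gap) + drive
    for c in range(2):
        if v[c] >= v_th:
            spikes[c].append(i * dt)
            v[c] = v_reset

print(len(spikes[0]), len(spikes[1]))   # spike counts of the two cells
```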
Top: an SFIE model receiving input from a single frequency channel; bottom: a cross-frequency model receiving an array of excitatory and inhibitory inputs from multiple frequency channels. Right: model responses to two opposite directions of frequency chirp.
We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation, which can be used to investigate peripheral electric-acoustic interaction.
Continue reading: A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus
The model consists of a phenomenological model of acoustic stimulation (Bruce et al., 2018; green) and an integrate-and-fire model of electric stimulation (Joshi et al., 2017; orange). The model's output is simulated spike times. Different methods of coupling the two models were investigated (red).
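One simple coupling scheme, offered here purely as an illustration (the refractory-merge rule and its 0.7 ms constant are my assumptions, not necessarily one of the couplings investigated in the paper), is to merge the two models' spike trains on a single fiber while enforcing a shared refractory period:

```python
# Illustrative coupling sketch: merge acoustically and electrically
# evoked spike times on one fiber, discarding any spike that falls
# inside the refractory period following the previous spike.
def merge_spike_trains(acoustic, electric, refractory=0.7e-3):
    """acoustic, electric: iterables of spike times in seconds."""
    merged, last = [], -float("inf")
    for t in sorted(list(acoustic) + list(electric)):
        if t - last >= refractory:
            merged.append(t)
            last = t
    return merged

# The electric spike at 3.2 ms is suppressed by the acoustic spike at 3 ms
print(merge_spike_trains([0.001, 0.003, 0.010], [0.0032, 0.011]))
```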
Computer simulations support the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and impairs temporal envelope processing, and show that some tasks are more sensitive to this loss than others.
Continue reading: The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study
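Selective fiber loss of this kind is often simulated by down-weighting fiber classes in a population response. The toy Python sketch below (fiber rates, class labels, and survival fractions are all invented for illustration, not the study's model) shows the idea: synaptopathy is modeled as reduced survival of low- and medium-spontaneous-rate fibers, which shrinks the summed envelope-following response.

```python
import numpy as np

def population_response(fiber_rates, survival):
    """Weighted sum of per-class mean rates.

    fiber_rates: dict class -> (time,) mean firing rate
    survival:    dict class -> surviving fraction in [0, 1]
    """
    return sum(survival[k] * r for k, r in fiber_rates.items())

t = np.linspace(0, 1, 1000)
env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))      # 4 Hz toy envelope
rates = {"low": 30 * env, "med": 60 * env, "high": 100 * env}

healthy = population_response(rates, {"low": 1.0, "med": 1.0, "high": 1.0})
synaptopathic = population_response(rates, {"low": 0.2, "med": 0.5, "high": 1.0})
print(healthy.mean(), synaptopathic.mean())      # reduced summed response
```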
(A) Latency of the frequency-following response to natural speech, estimated by the complex linear model. (B) Magnitudes of the model coefficients at 11 ms. (C) Phases of the model coefficients at 11 ms.
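A complex linear model of this kind can be sketched compactly: fit per-frequency complex coefficients relating stimulus to response, then read latency off the phase slope (group delay). The Python example below is a hedged illustration with invented function names and a toy delayed-noise "response"; the paper's actual estimator will differ in detail.

```python
import numpy as np

def complex_coefficients(stimulus, response, fs, nfft):
    """Per-frequency transfer coefficients H(f) = Sxy(f) / Sxx(f)."""
    X = np.fft.rfft(stimulus, nfft)
    Y = np.fft.rfft(response, nfft)
    H = (np.conj(X) * Y) / np.maximum(np.abs(X) ** 2, 1e-12)
    return np.fft.rfftfreq(nfft, 1.0 / fs), H

def latency_from_phase(f, H, fmin=100.0, fmax=300.0):
    """Group delay: latency = -dφ/dω over the FFR-dominant band."""
    sel = (f >= fmin) & (f <= fmax)
    phi = np.unwrap(np.angle(H[sel]))
    slope = np.polyfit(2 * np.pi * f[sel], phi, 1)[0]
    return -slope  # seconds

fs = 16000
stim = np.random.default_rng(2).standard_normal(fs)   # 1 s of noise
resp = np.roll(stim, int(0.011 * fs))                 # delayed by 11 ms
f, H = complex_coefficients(stim, resp, fs, nfft=len(stim))
print(latency_from_phase(f, H))                       # ≈ 0.011 s
```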
Towards a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, and thereby offer the potential to combine different audiological databases.
Continue reading: Audiological classification performance based on audiological measurements and Common Audiological Functional Parameters (CAFPAs)
The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy and improving speech perception outcomes.
Continue reading: The Panoramic ECAP Method: modelling the electrode-neuron interface in cochlear implant users
The computational model consists of three main stages (auditory nerve, cochlear nuclei, and inferior colliculus). The figure shows the input (natural speech) and the neural outputs at the different levels.
Continue reading: Computational modelling of the human auditory brainstem response to natural speech
A machine learning model is trained on real-world fitting data to predict a user's individual gain from audiometric and other subject-related data, such as age, gender, and the acoustic environments encountered.
Continue reading: Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach
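As a hedged illustration of this kind of approach (the feature set, the random-forest choice, and the synthetic data below are all assumptions, not the study's actual pipeline), one can regress per-frequency gains on audiogram and subject features with scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 500
# Features: audiogram thresholds at 6 frequencies, age, gender code
X = np.hstack([
    rng.uniform(0, 80, (n, 6)),          # dB HL thresholds
    rng.uniform(18, 90, (n, 1)),         # age in years
    rng.integers(0, 2, (n, 1)),          # gender (0/1)
])
# Synthetic targets: gains loosely proportional to thresholds
y = 0.4 * X[:, :6] + rng.normal(0, 3, (n, 6))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:1]))              # predicted gains for one subject
```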
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test development procedure for digits-in-noise hearing tests in various languages and for different target populations.
Continue reading: Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test
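Digits-in-noise tests typically estimate the speech recognition threshold with a simple one-up one-down adaptive track. The sketch below shows that core procedure; the step size, trial count, averaging rule, and the simulated listener are illustrative choices, not Aladdin's specification.

```python
import numpy as np

def run_dint_track(present_triplet, n_trials=24, start_snr=0.0, step=2.0):
    """One-up one-down SNR track converging on 50% triplet recognition.

    present_triplet(snr) -> True if the digit triplet was repeated
    correctly. Returns the SRT as the mean SNR of the last 20 trials
    (one common averaging convention).
    """
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = present_triplet(snr)
        track.append(snr)
        snr += -step if correct else step   # harder after correct
    return float(np.mean(track[-20:]))

# Toy listener with a true SRT of -8 dB SNR (logistic psychometric fn)
rng = np.random.default_rng(4)
listener = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr + 8.0)))
print(run_dint_track(listener))             # estimate near -8 dB SNR
```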
Looking for questions. Here's an idea: collect the problems in audiology that need AI solutions. (Instead of solutions looking for a problem, we are looking for genuine problems looking for a solution.) …
Computational audiology, the augmentation of traditional hearing health care by digital methods, has the potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
Continue reading: Computational Audiology: new ways to address the global burden of hearing loss
In cases of severe or profound hearing impairment, rehabilitation can be provided by a cochlear implant (CI) that directly stimulates the auditory nerve via acoustically modulated electrical current pulses. The…