The role of computational auditory models in auditory precision diagnostics and treatment
Auditory modeling is indispensable for precision diagnostics and individualized treatment
Computational models of the auditory system
Simulated binaural neural networks show that sharp spatial and frequency tuning is needed to accurately localize sound sources in azimuth.
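For readers unfamiliar with the underlying cue, the hedged Python sketch below (assuming NumPy; a toy stimulus, not the study's network) estimates the interaural time difference that supports azimuthal localization by locating the cross-correlation peak between the two ear signals.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd_s=0.7e-3):
    """Estimate the interaural time difference from the cross-correlation peak
    within a physiologically plausible lag range (about +/- 700 microseconds)."""
    max_lag = int(max_itd_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.sum(left * np.roll(right, -lag)) for lag in lags]
    return lags[int(np.argmax(xcorr))] / fs

# Toy stimulus: a noise burst reaching the right ear about 300 microseconds later.
fs = 44100
noise = np.random.default_rng(0).standard_normal(fs // 10)
delay = int(round(300e-6 * fs))
left, right = noise, np.concatenate([np.zeros(delay), noise[:-delay]])
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e6:.0f} microseconds")
```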
Modeling speech perception in hidden hearing loss using stochastically undersampled neuronal firing patterns
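The "stochastic undersampling" idea can be illustrated with a toy simulation. The sketch below (NumPy assumed; all parameters are arbitrary placeholders, not the study's model) encodes a slow envelope in Poisson spike trains and shows how removing most fibers degrades the population representation.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                                           # 1 ms bins
t = np.arange(0, 0.5, 1 / fs)                       # 500 ms stimulus
envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))    # slow 4 Hz modulation, range 0..1

def population_response(n_fibers, peak_rate=100.0):
    """Summed Poisson spike counts per bin for a population of model fibers."""
    rate_per_bin = peak_rate * envelope / fs        # expected spikes per bin per fiber
    return rng.poisson(rate_per_bin, size=(n_fibers, t.size)).sum(axis=0)

healthy = population_response(n_fibers=200)
undersampled = population_response(n_fibers=20)     # e.g. 90 % fiber loss

# Correlation with the stimulus envelope as a crude fidelity measure:
# the undersampled population carries a noticeably noisier envelope code.
for name, response in [("healthy", healthy), ("undersampled", undersampled)]:
    r = np.corrcoef(envelope, response)[0, 1]
    print(f"{name:12s} envelope correlation: {r:.2f}")
```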
PTA is more informative than speech-ABR for predicting aided behavioural measures.
Detection of current shunts with a ladder-network model
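A ladder network here refers to a resistive model of the electrode array. As a hedged sketch (NumPy assumed; component values are illustrative rather than taken from the paper), the code below solves the node voltages of such a ladder and shows how a low-resistance shunt at one node changes the voltage profile.

```python
import numpy as np

def ladder_voltages(n_nodes, r_series, r_to_ground, inject_node, current=1e-3):
    """Node voltages of a resistive ladder network, solved by nodal analysis.
    r_to_ground gives one resistance per node; a low value at a node models a shunt."""
    G = np.zeros((n_nodes, n_nodes))
    g_s = 1.0 / r_series
    for i in range(n_nodes - 1):                    # longitudinal (series) resistors
        G[i, i] += g_s
        G[i + 1, i + 1] += g_s
        G[i, i + 1] -= g_s
        G[i + 1, i] -= g_s
    G[np.diag_indices(n_nodes)] += 1.0 / np.asarray(r_to_ground)   # transversal resistors
    injected = np.zeros(n_nodes)
    injected[inject_node] = current
    return np.linalg.solve(G, injected)

n = 16
normal = ladder_voltages(n, 500.0, np.full(n, 10e3), inject_node=7)
r_shunted = np.where(np.arange(n) == 10, 500.0, 10e3)              # shunt at electrode 10
shunted = ladder_voltages(n, 500.0, r_shunted, inject_node=7)
print("voltage change at the shunted electrode (V):", normal[10] - shunted[10])
```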
Test-retest analysis of aggregated audiometry testing data using Jacoti Hearing Center self-testing application
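Test-retest analyses of this kind typically summarise paired threshold differences. The sketch below (NumPy assumed; the thresholds are made-up placeholders, not Jacoti data) computes per-frequency bias and Bland-Altman-style limits of agreement.

```python
import numpy as np

# Paired thresholds (dB HL) from a first test and a retest; rows are listeners,
# columns are audiometric frequencies. Values here are made-up placeholders.
test   = np.array([[10, 15, 20, 35], [ 5, 10, 25, 40], [15, 20, 30, 50]], float)
retest = np.array([[10, 20, 20, 30], [ 5,  5, 25, 45], [10, 20, 35, 50]], float)
freqs  = [500, 1000, 2000, 4000]                    # Hz

diff = retest - test
for j, f in enumerate(freqs):
    bias = diff[:, j].mean()
    sd = diff[:, j].std(ddof=1)
    # Bland-Altman-style 95 % limits of agreement: bias +/- 1.96 * SD of the differences.
    print(f"{f:5d} Hz: bias {bias:+.1f} dB, limits of agreement "
          f"[{bias - 1.96 * sd:+.1f}, {bias + 1.96 * sd:+.1f}] dB")
```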
Towards the development of a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, thereby offering the potential to combine different audiological databases.
Applying biophysical auditory periphery models for real-time applications and studies of hearing impairment
This study used machine learning to identify normal-hearing listeners with and without tinnitus based on their ABRs.
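As a hedged sketch of this kind of pipeline (scikit-learn assumed; the waveforms, features, and labels below are random placeholders rather than real ABR data), one might extract simple peak features from each ABR and train a classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def abr_features(waveform, fs):
    """Crude placeholder features: amplitude and latency of the largest peak."""
    peak = int(np.argmax(waveform))
    return [waveform[peak], peak / fs]

fs = 20_000                                           # 10 ms epochs sampled at 20 kHz
waveforms = rng.normal(scale=0.1, size=(80, 200))     # placeholder "ABR" recordings
labels = rng.integers(0, 2, size=80)                  # 1 = tinnitus, 0 = control (random here)

X = np.array([abr_features(w, fs) for w in waveforms])
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, labels)
print("training accuracy on placeholder data:", model.score(X, labels))
```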
A system that predicts and identifies neural responses to overlapping speech sounds mimics human perception.
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy and improving speech perception outcomes.
This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.
A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
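A minimal sketch of such an analysis, assuming scikit-learn and random placeholder predictors rather than the study's actual features, estimates classification accuracy with cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder predictors (e.g. baseline questionnaires, audiometry, demographics)
# and a binary "responder" label; the study's actual features are not reproduced here.
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
accuracy = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {accuracy.mean():.2%} (chance level with random labels)")
```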
The computational model consists of three main stages (auditory nerve, cochlear nuclei, and inferior colliculus). The figure shows the input (natural speech) and the neural outputs at each level.
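The staged structure can be sketched as a simple function composition. The placeholder stages below (NumPy assumed) merely stand in for the detailed biophysical stages of the actual model.

```python
import numpy as np

# Skeletal stand-ins for the three stages; the real model uses detailed
# biophysical/phenomenological descriptions rather than these toy operations.
def auditory_nerve(signal):
    return np.maximum(signal, 0.0)                            # placeholder: half-wave rectification

def cochlear_nuclei(an_rates):
    return np.convolve(an_rates, np.ones(8) / 8, mode="same") # placeholder smoothing

def inferior_colliculus(cn_rates):
    return np.diff(cn_rates, prepend=cn_rates[0])             # placeholder onset emphasis

speech = np.random.default_rng(0).standard_normal(16_000)     # stand-in for natural speech
ic_output = inferior_colliculus(cochlear_nuclei(auditory_nerve(speech)))
print(ic_output.shape)
```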
A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric data and other subject-related information, such as age, gender, and the acoustic environment.
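A hedged sketch of this idea, assuming scikit-learn and entirely synthetic fitting records (the half-gain-like placeholder target is not the real prescription data), predicts per-band gain from an audiogram plus age and gender.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic fitting records: thresholds at four frequencies (dB HL), age, and a gender code.
n_fittings = 500
thresholds = rng.uniform(0, 80, size=(n_fittings, 4))
age = rng.uniform(18, 90, size=(n_fittings, 1))
gender = rng.integers(0, 2, size=(n_fittings, 1))
X = np.hstack([thresholds, age, gender])

# Placeholder target: per-band gain loosely tied to threshold (a half-gain-like rule plus
# noise), standing in for the preferred gains observed in real fitting data.
y = 0.4 * thresholds + rng.normal(scale=3.0, size=thresholds.shape)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
new_user = np.array([[30, 40, 55, 70, 65, 1]])       # audiogram, age 65, gender code 1
print("predicted per-band gain (dB):", model.predict(new_user).round(1))
```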
This work presents a CASA model of attentive voice tracking.
Computational modelling allowed us to explore the effects of non-invasive brain stimulation on cortical processing of speech.
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.
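Digits-in-noise tests typically estimate the speech reception threshold with an adaptive track. The sketch below (NumPy assumed; a generic 1-up/1-down staircase with a simulated listener, not the Aladdin procedure itself) illustrates the principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_listener(snr_db, srt_db=-8.0, slope_db=1.5):
    """Probability of a correct digit-triplet response (logistic psychometric function)."""
    return 1.0 / (1.0 + np.exp(-(snr_db - srt_db) / slope_db))

# Simple 1-up/1-down adaptive track with 2 dB steps: it converges near the 50 %-correct point.
snr, step, track = 0.0, 2.0, []
for _ in range(25):
    correct = rng.random() < simulated_listener(snr)
    track.append(snr)
    snr += -step if correct else step

srt_estimate = np.mean(track[5:])          # discard the first trials, average the rest
print(f"estimated speech reception threshold: {srt_estimate:.1f} dB SNR")
```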
Adaptation of the auditory nerve to electrical stimulation can best be described by a power law or a sum of exponentials.
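The two candidate descriptions can be written down and compared directly. The sketch below (NumPy and SciPy assumed; the "adaptation" data are synthetic placeholders, not the study's recordings) fits both forms to a toy rate-versus-time curve and compares the residuals.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    """Adaptation modelled as a power-law decay: r(t) = a * t**(-b) + c."""
    return a * t ** (-b) + c

def sum_of_exponentials(t, a1, tau1, a2, tau2, c):
    """Adaptation modelled as the sum of two exponential decays."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

# Placeholder adaptation data: firing rate declining during sustained stimulation.
rng = np.random.default_rng(0)
t = np.linspace(0.002, 0.3, 60)                                    # seconds
rate = 300 * np.exp(-t / 0.01) + 120 * np.exp(-t / 0.1) + 50 + rng.normal(0, 3, t.size)

p_pow, _ = curve_fit(power_law, t, rate, p0=[20, 0.4, 40], maxfev=10000)
p_exp, _ = curve_fit(sum_of_exponentials, t, rate, p0=[250, 0.02, 100, 0.2, 40], maxfev=10000)
for name, model, p in [("power law", power_law, p_pow),
                       ("sum of exponentials", sum_of_exponentials, p_exp)]:
    rss = np.sum((rate - model(t, *p)) ** 2)
    print(f"{name:20s}  residual sum of squares: {rss:.1f}")
```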
Looking for questions: here's an idea. Collect the problems in audiology that need AI solutions (instead of solutions looking for a problem, we are looking for genuine problems looking for a solution)…
Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
In cases of severe or profound hearing impairment, rehabilitation can be provided by a cochlear implant (CI) that directly stimulates the auditory nerve via acoustically modulated electrical current pulses. The…
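As a hedged, single-channel sketch of this principle (NumPy assumed; a crude envelope detector and a generic CIS-like pulse train, not any manufacturer's coding strategy), the code below modulates a fixed-rate pulse train with the envelope of a toy acoustic signal.

```python
import numpy as np

fs = 16_000
t = np.arange(0, 0.05, 1 / fs)
acoustic = np.sin(2 * np.pi * 150 * t) * np.sin(2 * np.pi * 5 * t) ** 2   # toy band-limited signal

# Envelope extraction: rectify and smooth (a crude stand-in for filterbank + envelope detection).
rectified = np.abs(acoustic)
envelope = np.convolve(rectified, np.ones(80) / 80, mode="same")          # about 5 ms smoothing

# Fixed-rate pulse train whose amplitude follows the envelope (single CIS-like channel).
pulse_rate = 900                                                          # pulses per second
pulse_times = np.arange(0, t[-1], 1 / pulse_rate)
pulse_amplitudes = np.interp(pulse_times, t, envelope)
print("first pulse amplitudes:", np.round(pulse_amplitudes[:8], 3))
```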