The Clarity project is running a series of machine learning challenges to revolutionise signal processing in hearing aids.
Machine Learning applied to problems in audiology
Simulated binaural neural networks show that sharp spatial and frequency tuning is needed to accurately localize sound sources in the azimuth direction.
AI-assisted Diagnosis for Middle Ear Pathologies
Modeling speech perception in hidden hearing loss using stochastically undersampled neuronal firing patterns
Speech perception by hearing aid (HA) users was evaluated in a database comprising up to 45 hours of testing of their aided ability to recognize syllabic constituents of speech and words in meaningful sentences, in both quiet and masked (eight-talker babble) conditions.
Pure-tone audiometry (PTA) is more informative than the speech-evoked auditory brainstem response (speech-ABR) for predicting aided behavioural measures.
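As a point of reference for how PTA summaries are typically derived, here is a minimal sketch of the classic three-frequency pure-tone average (500, 1000 and 2000 Hz). The function name and the example thresholds are illustrative, not taken from the study above.

```python
def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000)):
    """Three-frequency pure-tone average (PTA) in dB HL.

    `thresholds_db_hl` maps test frequency (Hz) to the measured
    threshold (dB HL); `freqs` selects the frequencies to average.
    """
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# Illustrative audiogram with a mild sloping loss.
thresholds = {500: 20, 1000: 25, 2000: 30, 4000: 55}
pta = pure_tone_average(thresholds)  # (20 + 25 + 30) / 3 = 25.0
```

Other conventions (e.g. four-frequency averages including 4000 Hz) are in clinical use; the `freqs` parameter makes the choice explicit.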
This study used machine learning to identify normal-hearing listeners with and without tinnitus based on their ABRs.
A system that predicts and identifies neural responses to overlapping speech sounds mimics human perception.
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
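The clinical quantity behind such a prediction task is the air-bone gap: the air-conduction threshold minus the bone-conduction threshold at each frequency, with large gaps suggesting a conductive component. The sketch below computes that gap and flags frequencies against a criterion; the 15 dB cut-off and all names are illustrative assumptions, not the study's model.

```python
def air_bone_gap(ac_db_hl, bc_db_hl):
    """Air-bone gap per frequency: AC threshold minus BC threshold (dB).

    Both arguments map frequency (Hz) to threshold (dB HL).
    """
    return {f: ac_db_hl[f] - bc_db_hl[f] for f in ac_db_hl}

def flag_conductive(gaps, criterion_db=15):
    """Return frequencies whose gap meets an (assumed) clinical criterion."""
    return sorted(f for f, g in gaps.items() if g >= criterion_db)

# Illustrative thresholds: a low-frequency conductive component.
ac = {500: 45, 1000: 40, 2000: 35}
bc = {500: 20, 1000: 25, 2000: 30}
gaps = air_bone_gap(ac, bc)          # {500: 25, 1000: 15, 2000: 5}
flagged = flag_conductive(gaps)      # [500, 1000]
```

A machine learning model for this task would, in effect, learn to infer such bone-conduction abnormalities when only the air-conduction side of this calculation is available.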
A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
The computational model consists of three main parts (auditory nerve, inferior colliculus and cochlear nuclei). The figure shows the input (natural speech) and the neural outputs at the different levels.
Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.
A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric and other subject-related data, such as age, gender, and acoustic environment.
This work presents a CASA model of attentive voice tracking.
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin)-project aims to create a fully automatic test development procedure for digit-in-noise hearing tests in various languages and for different target populations.
The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computational infrastructure we need in order to fully enjoy the benefits and mitigate the associated barriers.
Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.
Looking for questions: here's an idea. Collect the problems in audiology that need AI solutions. Instead of solutions looking for a problem, we are looking for genuine problems in search of a solution…
Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.