A journey around the world of AI and Audiology in 80 slides!
Machine Learning applied to problems in audiology
Machine learning models identify altered spontaneous brain connections in sound tolerance disorders
In this episode, Brent Edwards from NAL and Stefan Launer from Sonova take us through their careers and share lessons and perspectives on the development of hearing technology. We discuss how technology development is becoming more holistic, design thinking, standardization, and what is needed to get to new service models and innovation.
Automated Speech Recognition (ASR) for the deaf and communication on equal terms regardless of hearing status. Episode 2 with Dimitri Kanevsky, Jessica Monaghan and Nicky Chong-White. Moderator: Jan-Willem Wasmann…
Active Learning in the auditory domain. A round table with Bert de Vries, Josef Schlittenlacher and Dennis Barbour. Moderator: Jan-Willem Wasmann. Josef Schlittenlacher is…
The model should conduct the experiment because, at least in theory, it knows best which condition will be most informative for constraining its free parameters.
Here we explore the potential for using machine learning to detect hearing loss from children's speech.
Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.
In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.
Attenuation-component-based model predictions of speech recognition thresholds, such as those from FADE, seem to facilitate estimation of the supra-threshold distortion component of hearing impairment.
Using a Naive Bayes classifier, we showed that twelve different activities could be classified above chance.
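As a rough illustration of the technique named above, here is a minimal Gaussian Naive Bayes classifier on synthetic "activity" feature vectors. The features, activity labels, and data are invented placeholders; the study's actual features, twelve-class data, and implementation are not shown here.

```python
import math
import random
from collections import defaultdict

def fit_gnb(X, y):
    """Estimate per-class feature means, variances, and class priors."""
    by_class = defaultdict(list)
    for x, label in zip(X, y):
        by_class[label].append(x)
    model = {}
    n = len(X)
    for label, rows in by_class.items():
        cols = list(zip(*rows))
        means = [sum(c) / len(c) for c in cols]
        variances = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
                     for c, m in zip(cols, means)]
        model[label] = (means, variances, len(rows) / n)
    return model

def predict_gnb(model, x):
    """Pick the class with the highest log posterior (naive independence)."""
    best_label, best_score = None, -math.inf
    for label, (means, variances, prior) in model.items():
        score = math.log(prior)
        for v, m, var in zip(x, means, variances):
            score += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Synthetic example: two activities separated along both feature dimensions.
random.seed(0)
X = ([[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)] +
     [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(50)])
y = ["sitting"] * 50 + ["walking"] * 50
model = fit_gnb(X, y)
print(predict_gnb(model, [0.1, -0.2]))  # near the "sitting" cluster
print(predict_gnb(model, [3.2, 2.8]))   # near the "walking" cluster
```

The naive independence assumption (each feature modelled separately per class) is what keeps the model simple enough to train on modest behavioural datasets.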
Training k-nearest neighbor classifiers to predict intelligibility, social state and participant perception of a listening task
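A k-nearest-neighbour classifier of the kind mentioned above can be sketched in a few lines. The feature names (SNR, speech level) and difficulty labels below are invented for illustration, not the study's actual variables.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training points closest to `query`.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbours = sorted(
        train,
        key=lambda item: math.dist(item[0], query),  # Euclidean distance
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy data: (SNR in dB, speech level in dB SPL) -> perceived task difficulty.
train = [
    ((-5.0, 60.0), "hard"),
    ((-3.0, 65.0), "hard"),
    ((0.0, 62.0), "hard"),
    ((5.0, 65.0), "easy"),
    ((8.0, 60.0), "easy"),
    ((10.0, 70.0), "easy"),
]
print(knn_predict(train, (7.0, 63.0)))   # -> "easy"
print(knn_predict(train, (-4.0, 62.0)))  # -> "hard"
```

k-NN makes no parametric assumptions, which suits small listening-task datasets, though in practice features on different scales (dB SNR vs dB SPL) would usually be normalised before computing distances.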
Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
The Clarity project is running a series of machine learning challenges to revolutionise signal processing in hearing aids.
Simulated binaural neural networks show that sharp spatial and frequency tuning is needed to accurately localize sound sources in the azimuth direction.
AI-assisted Diagnosis for Middle Ear Pathologies
Modeling speech perception in hidden hearing loss using stochastically undersampled neuronal firing patterns
Speech perception by hearing aid (HA) users has been evaluated in a database that includes up to 45 hours of testing their aided abilities to recognize syllabic constituents of speech, and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.
PTA is more informative than speech-ABR for predicting aided behavioural measures.
This study used machine learning to identify normal-hearing listeners with and without tinnitus based on their ABRs.
A system that predicts and identifies neural responses to overlapping speech sounds mimics human perception.
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
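The study's specific machine learning methods are not detailed here; as a stand-in, the sketch below fits ordinary least squares predicting a bone conduction (BC) threshold from the air conduction (AC) threshold at the same frequency. All threshold values are invented toy data.

```python
def fit_line(xs, ys):
    """Closed-form least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Toy AC thresholds (dB HL) and corresponding BC thresholds (dB HL).
ac = [10, 20, 30, 40, 50, 60]
bc = [8, 15, 22, 30, 36, 45]
slope, intercept = fit_line(ac, bc)

# Predict BC at an AC threshold of 35 dB HL; a large gap between the
# measured and predicted BC would flag a possible conductive component.
predicted_bc = slope * 35 + intercept
print(round(predicted_bc, 1))  # -> 26.0
```

A real model would use thresholds at multiple frequencies and a nonlinear learner, but the idea is the same: learn the typical AC-to-BC mapping, then flag cases that deviate from it.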
A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
The computational model consists of three main parts (auditory nerve, inferior colliculus and cochlear nuclei). The figure shows the input (natural speech) and the neural outputs at the different levels.
Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.
A machine learning model is trained on real-world fitting data to predict the user's individual gain based on audiometric and further subject-related data, such as age, gender, and the acoustic environments.
This work presents a CASA model of attentive voice tracking.
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test development procedure for digits-in-noise hearing tests in various languages and for different target populations.
The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to consider what computational infrastructure we need in order to fully realize the benefits and mitigate the associated barriers.
Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.
Looking for questions. Here's an idea: collect the problems in audiology that need AI solutions. (Instead of solutions looking for a problem, we are looking for genuine problems in search of a solution.)…
Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.