Machine Learning applied to problems in audiology

Prediction of speech recognition by hearing-aid users: the syllable-constituent, contextual model of speech perception

Speech perception by hearing-aid (HA) users has been evaluated using a database that includes up to 45 hours of testing of their aided ability to recognize the syllabic constituents of speech, and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.
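
The article's own model is not reproduced on this page, but a common way to relate constituent (phoneme) scores to word and sentence scores is the Boothroyd–Nittrouer j- and k-factor formulation; the sketch below illustrates that general idea and should not be read as the authors' implementation.

```python
# Hedged sketch: Boothroyd-Nittrouer style relations between constituent and
# word/sentence recognition. These are textbook formulas, NOT necessarily the
# model used in the article; parameter values are illustrative only.

def p_word_from_constituents(p_constituent: float, j: float = 2.5) -> float:
    """Probability of recognizing a whole word given the probability of
    recognizing its syllabic constituents; j is the effective number of
    statistically independent constituents."""
    return p_constituent ** j

def p_with_context(p_no_context: float, k: float = 1.3) -> float:
    """Boost in recognition provided by sentence context (k-factor)."""
    return 1.0 - (1.0 - p_no_context) ** k

if __name__ == "__main__":
    p_c = 0.8                                   # measured constituent score
    p_w = p_word_from_constituents(p_c)
    print(f"predicted word score in isolation: {p_w:.2f}")
    print(f"predicted word score in sentences: {p_with_context(p_w):.2f}")
```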

Use of air conduction thresholds to predict bone conduction asymmetry and air-bone gap

This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
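
As a rough illustration of this kind of prediction task, a regression model could map air-conduction thresholds of both ears (plus interaural differences) to an air-bone gap estimate; the sketch below uses synthetic placeholder data and scikit-learn, and is not the study's pipeline.

```python
# Hedged sketch (not the study's code): predicting the air-bone gap (dB)
# from air-conduction thresholds of both ears. Data, feature layout and
# model choice are placeholders for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, freqs = 300, [250, 500, 1000, 2000, 4000]            # subjects, test frequencies (Hz)

ac_left = rng.normal(30, 15, size=(n, len(freqs)))       # AC thresholds, dB HL
ac_right = rng.normal(30, 15, size=(n, len(freqs)))
X = np.hstack([ac_left, ac_right, ac_left - ac_right])   # add interaural asymmetry features
y = rng.normal(10, 8, size=n)                            # placeholder air-bone gap (dB)

model = GradientBoostingRegressor(random_state=0)
mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {mae.mean():.1f} dB")
```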

Random Forest Classification to Predict Response to High-Definition Transcranial Direct Current Stimulation Therapy for Tinnitus

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
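
The study's features and preprocessing are not detailed here; the sketch below only illustrates the general recipe of cross-validating a random forest classifier on responder versus non-responder labels, using placeholder data.

```python
# Hedged sketch of the general approach (random forest classification of
# treatment response); the features and data are placeholders, not the
# study's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))      # e.g. baseline tinnitus and audiometric measures
y = rng.integers(0, 2, size=120)    # responder vs non-responder (placeholder labels)

clf = RandomForestClassifier(n_estimators=500, random_state=1)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"mean cross-validated accuracy: {acc.mean():.2%}")
```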

Computational modelling of the human auditory brainstem response to natural speech

The computational model consists of three main stages (auditory nerve, cochlear nucleus, and inferior colliculus), taking natural speech as input and producing simulated neural responses at each of these levels.

Dynamically Masked Audiograms with Machine Learning Audiometry

Figure: Final masked AMLAG results for one participant (127) with a left cochlear implant and no residual hearing. Red diamonds denote unheard tones and blue pluses denote heard tones. The most intense tones at lower frequencies in the left ear were effectively masked.

Dynamically masked audiograms achieve accurate estimates of true thresholds and reduce test time compared with current clinical masking procedures.
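
Machine-learning audiometry of this kind models the audiogram as a probabilistic heard/not-heard boundary over frequency and level; the sketch below shows that idea with a Gaussian-process classifier on synthetic responses, and omits the dynamic masking of the non-test ear that is the subject of the article.

```python
# Hedged sketch: estimate a threshold as the level where P(heard) = 0.5 in a
# probabilistic model over (frequency, level). Data are synthetic; this is an
# illustration of the idea, not the AMLAG implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
freqs = rng.uniform(np.log2(250), np.log2(8000), 150)   # log2 frequency of each tone
levels = rng.uniform(-10, 90, 150)                      # presentation level, dB HL
true_thresh = 20 + 10 * (freqs - np.log2(250))          # hypothetical sloping loss
heard = (levels > true_thresh).astype(int)              # simulated responses

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[1.0, 20.0]))
gpc.fit(np.column_stack([freqs, levels]), heard)

# Threshold estimate at 1 kHz: level where P(heard) crosses 0.5.
grid = np.column_stack([np.full(200, np.log2(1000)), np.linspace(-10, 90, 200)])
p_heard = gpc.predict_proba(grid)[:, 1]
print("estimated threshold at 1 kHz:", grid[np.argmin(np.abs(p_heard - 0.5)), 1], "dB HL")
```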

Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric and other subject-related data, such as age, gender, and the acoustic environments the user encounters.
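
As a minimal sketch of such a mapping, the example below learns per-frequency gain from an audiogram, age, and gender using a simple ridge regression on synthetic placeholder data; the study's actual features, targets, and model choice may differ.

```python
# Hedged sketch (not the paper's model): mapping audiogram, age and gender to
# per-frequency hearing-aid gain. All data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
n = 500
thresholds = rng.uniform(0, 80, size=(n, 6))        # dB HL at 6 audiometric frequencies
age = rng.uniform(18, 90, size=(n, 1))
gender = rng.integers(0, 2, size=(n, 1))            # encoded 0/1 (placeholder)
X = np.hstack([thresholds, age, gender])

# Placeholder target: fitted gain (dB) at the same 6 frequencies, roughly
# proportional to the hearing loss (stand-in for real fitting data).
y = 0.4 * thresholds + rng.normal(0, 3, size=(n, 6))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
model = Ridge().fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, model.predict(X_te), multioutput="raw_values")
print("MAE per frequency (dB):", mae.round(1))
```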

Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.
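
One building block of any digits-in-noise test, regardless of language, is estimating the speech reception threshold (SRT) from responses at different signal-to-noise ratios; the sketch below fits a logistic psychometric function to placeholder scores and is a generic illustration rather than the Aladdin procedure itself.

```python
# Hedged sketch: fit a psychometric function to percent-correct scores versus
# SNR and read off the SRT (SNR at 50% correct). Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, slope):
    """Logistic psychometric function for digit-triplet recognition."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

# Placeholder measurements: SNR (dB) and proportion of triplets correct.
snr = np.array([-14, -12, -10, -8, -6, -4])
p_correct = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])

(srt, slope), _ = curve_fit(psychometric, snr, p_correct, p0=(-9.0, 1.0))
print(f"estimated SRT: {srt:.1f} dB SNR, slope: {slope:.2f} per dB")
```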

The critical role of computing infrastructure in computational audiology

Figure: The nine stages of the machine learning workflow.

The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computing infrastructure we need in order to fully enjoy the benefits and mitigate the associated barriers.

Speech recognition apps for the hearing impaired and deaf

Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests provide a representative measure of these apps' ability to convert speech into text.
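
A standard metric for this kind of evaluation is word error rate (WER), computed as the word-level edit distance between the app's transcript and a reference; the sketch below is a generic implementation, not the project's specific evaluation protocol.

```python
# Hedged sketch: word error rate (WER) via word-level Levenshtein distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the boy ran to school", "the boy ran to the pool"))  # 0.4
```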

Computational Audiology: new ways to address the global burden of hearing loss

Image source: https://www.stripepartners.com/our_writing_article/the-age-of-the-ear/

Computational audiology, the augmentation of traditional hearing health care by digital methods, has the potential to dramatically advance audiological precision and efficiency and thereby address the global burden of hearing loss.
