Machine Learning applied to problems in audiology

Model-based selection of most informative diagnostic tests and test parameters
Figure: The goal is to find the model instance (the folder from shelf y) that has the highest likelihood of having generated the experimental data set d. The stimulus that leads to the smallest variance of the parameter estimates is presented next; the process is repeated until the termination criterion is met.

The model should conduct the experiment because, at least in theory, it knows best which condition will be the most informative for constraining its free parameters.
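
This selection loop lends itself to a compact illustration. Below is a minimal sketch for a one-parameter model (a detection threshold with a logistic psychometric function); the parameter grid, candidate levels, and variance criterion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of model-based stimulus selection (all values illustrative).
import numpy as np

thresholds = np.linspace(0, 80, 81)                  # candidate parameter values (dB)
prior = np.ones_like(thresholds) / len(thresholds)   # uniform prior over model instances

def p_heard(level, threshold, slope=0.5):
    """Logistic psychometric function: P('heard' | level, threshold)."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

def bayes_update(prior, level, heard):
    """Posterior over thresholds after observing one response."""
    like = p_heard(level, thresholds) if heard else 1.0 - p_heard(level, thresholds)
    post = prior * like
    return post / post.sum()

def expected_posterior_var(prior, level):
    """Posterior variance of the threshold estimate, averaged over both responses."""
    total = 0.0
    for heard in (True, False):
        like = p_heard(level, thresholds) if heard else 1.0 - p_heard(level, thresholds)
        p_resp = np.sum(prior * like)        # predictive probability of this response
        post = bayes_update(prior, level, heard)
        mean = np.sum(post * thresholds)
        total += p_resp * np.sum(post * (thresholds - mean) ** 2)
    return total

# Present the stimulus that minimizes the expected variance of the estimate.
candidate_levels = np.arange(0, 81, 5)
next_level = min(candidate_levels, key=lambda lv: expected_posterior_var(prior, lv))
```

After each response, `bayes_update` replaces the prior, and the selection step is repeated until the termination criterion (e.g., a variance floor) is met.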


Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.
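
As a rough sketch of how such a network can be structured, the PyTorch snippet below maps noisy log-magnitude spectra to a per-frequency gain mask with a GRU; the layer sizes and the mask-based design are assumptions for illustration, not the architecture evaluated in the study.

```python
# Illustrative recurrent masking network for noise reduction (not the study's model).
import torch
import torch.nn as nn

class MaskingGRU(nn.Module):
    def __init__(self, n_bins=257, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_bins, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_bins)

    def forward(self, log_mag):            # (batch, frames, frequency bins)
        h, _ = self.rnn(log_mag)
        return torch.sigmoid(self.out(h))  # per-bin gain in [0, 1]

model = MaskingGRU()
noisy = torch.randn(1, 100, 257)           # toy input: 1 utterance, 100 frames
mask = model(noisy)                        # multiply with the noisy magnitude
                                           # spectrogram; resynthesize with noisy phase
```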

Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition
Figure: (A) Signals are processed by a binaural stage and converted to phoneme probabilities by a DNN; the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).

In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.
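
The sketch below shows one way a degradation measure over phoneme posteriors could be computed, assuming the DNN outputs frame-wise phoneme probabilities; the KL divergence used here is an illustrative stand-in for the model's actual metric.

```python
# Illustrative degradation measure between clean and degraded phoneme posteriors.
import numpy as np

def posterior_degradation(p_clean, p_degraded, eps=1e-10):
    """Mean per-frame KL divergence between two (frames x phonemes) posteriorgrams."""
    p = np.clip(p_clean, eps, 1.0)
    q = np.clip(p_degraded, eps, 1.0)
    return np.mean(np.sum(p * np.log(p / q), axis=1))
```

A mapping from such a degradation score to an SRT would then be fitted on reference data.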

Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Figure: Estimated distortion component of hearing impairment, derived from the model's prediction errors, as a function of average hearing loss.

Attenuation-based model predictions of speech recognition thresholds, such as those produced by FADE, appear to enable an estimation of the supra-threshold distortion component of hearing impairment.
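
The core idea follows Plomp's attenuation/distortion framework: an attenuation-only model predicts an SRT from the audiogram, and the residual to the measured SRT estimates the supra-threshold distortion. The toy example below shows only this residual step, not the study's machine learning pipeline.

```python
# Illustrative residual computation (values are made up).
def distortion_component(srt_measured_db, srt_predicted_db):
    """Estimated supra-threshold distortion (dB): measured minus attenuation-only SRT."""
    return srt_measured_db - srt_predicted_db

# A measured SRT of -2 dB SNR against an attenuation-only prediction of -5 dB SNR
# suggests a distortion component of about 3 dB.
print(distortion_component(-2.0, -5.0))  # 3.0
```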

Automatic detection of human activities from accelerometer sensors integrated in hearables
Figure 1. A: Data are streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Classification results using a Naïve Bayes classifier and a 5-fold cross-validation procedure.

Using a Naïve Bayes classifier, we showed that twelve different activities could be classified above chance.
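
A minimal sketch of this setup, assuming windowed accelerometer features and scikit-learn's GaussianNB as the Naïve Bayes classifier; the data below are random placeholders, so the score only illustrates the procedure.

```python
# Naïve Bayes activity classification with 5-fold cross-validation (toy data).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))       # e.g., mean/variance per accelerometer axis
y = rng.integers(0, 12, size=600)   # twelve activity classes

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(scores.mean())                # chance level would be 1/12 ≈ 0.083
```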

A classification approach to listening effort: combining features from the pupil and cardiovascular system
Figure: Schematic of the experimental set-up, including the placement of the participant, the loudspeakers, and the two observers.

Training k-nearest neighbor classifiers to predict intelligibility, social state, and participant perception of a listening task.
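
A minimal sketch of one such classifier, assuming pupil and cardiovascular features are concatenated per trial; the feature dimensions, labels, and choice of k are illustrative. Standardizing the features first matters because k-NN is distance-based.

```python
# k-NN classification of combined pupil and cardiovascular features (toy data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))      # e.g., pupil dilation stats + heart-rate features
y = rng.integers(0, 2, size=200)   # e.g., low vs. high intelligibility condition

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)).fit(X, y)
```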

A data-driven decision tree for diagnosing somatosensory tinnitus
Figure: Overview of the decision tree for diagnosing somatosensory tinnitus.

Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
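
A sketch of how such a tree can be derived from survey responses with scikit-learn; the questions, labels, and depth limit below are placeholders, not the published decision tree.

```python
# Fitting and printing a small diagnostic decision tree (toy survey data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 5))   # yes/no answers to five survey questions
y = rng.integers(0, 2, size=300)        # somatosensory tinnitus: yes/no

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=[f"q{i}" for i in range(5)]))
```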


Prediction of speech recognition by hearing-aid users: the syllable-constituent, contextual model of speech perception

Speech perception by hearing aid (HA) users was evaluated using a database that includes up to 45 hours of testing of their aided ability to recognize syllabic constituents of speech and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.


Use of air conduction thresholds to predict bone conduction asymmetry and air-bone gap

This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
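
A sketch of the prediction task, assuming per-frequency air conduction thresholds for both ears as features and a binary air-bone-gap label; the gradient boosting classifier is an assumed stand-in, not necessarily the study's method.

```python
# Predicting a bone conduction abnormality from air conduction thresholds (toy data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.normal(40, 15, size=(500, 12))   # AC thresholds: 6 frequencies x 2 ears (dB HL)
y = rng.integers(0, 2, size=500)         # abnormal air-bone gap: yes/no

clf = GradientBoostingClassifier().fit(X, y)
```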


Random Forest Classification to Predict Response to High-Definition Transcranial Direct Current Stimulation Therapy for Tinnitus

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
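
A sketch of this setup with scikit-learn's RandomForestClassifier; the predictors below are placeholders, and the reported 82.41% accuracy comes from the study's own data, not from this toy example.

```python
# Random forest prediction of treatment response (toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))     # e.g., baseline tinnitus and audiometric measures
y = rng.integers(0, 2, size=100)   # responder vs. non-responder

print(cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5).mean())
```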


Computational modelling of the human auditory brainstem response to natural speech

The computational model consists of three main parts (auditory nerve, cochlear nuclei, and inferior colliculus). The figure shows the input (natural speech) and the neural outputs at the different levels.

Dynamically Masked Audiograms with Machine Learning Audiometry
Figure: Final masked AMLAG results for one participant (127) with a left cochlear implant and no residual hearing. Red diamonds denote unheard tones; blue pluses denote heard tones. The most intense tones at lower frequencies in the left ear were effectively masked.

Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.
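
Machine learning audiograms of this kind typically model P(heard) over the frequency-intensity plane with a probabilistic classifier and read the threshold off the 50% contour. The sketch below uses a Gaussian process classifier on toy data; the dynamic-masking step (choosing a contralateral masker level on each trial) is omitted.

```python
# Probabilistic audiogram estimation from sparse tone responses (toy data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(-1, 1, 120),    # normalized log-frequency
                     rng.uniform(-1, 1, 120)])   # normalized intensity
y = (X[:, 1] > 0.1 * X[:, 0]).astype(int)        # toy 'heard' responses

gpc = GaussianProcessClassifier().fit(X, y)
p_heard = gpc.predict_proba(np.array([[0.0, 0.0]]))[:, 1]   # P(heard) at one point
```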


Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric and other subject-related data, such as age, gender, and acoustic environment.
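
A sketch of the regression task, assuming per-frequency gains are predicted from audiogram thresholds plus subject covariates; the features, targets, and random forest regressor below are illustrative stand-ins for the actual model and its real-world fitting data.

```python
# Predicting per-frequency hearing aid gain from audiogram and covariates (toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
audiogram = rng.normal(45, 15, size=(1000, 8))   # thresholds at 8 frequencies (dB HL)
covariates = rng.normal(size=(1000, 3))          # e.g., age, gender, environment (encoded)
X = np.hstack([audiogram, covariates])
gain = 0.4 * audiogram + rng.normal(scale=2.0, size=audiogram.shape)  # toy gain targets

model = RandomForestRegressor(n_estimators=100).fit(X, gain)
```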


Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Aladdin project (Automatic LAnguage-independent Development of the Digits-In-Noise test) aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.

The critical role of computing infrastructure in computational audiology
Figure: The nine stages of the machine learning workflow.

The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computing infrastructure we need in order to fully enjoy the benefits and mitigate the associated barriers.


Speech recognition apps for the hearing impaired and deaf

Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.
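
One standard metric for such benchmarking is the word error rate (WER), sketched below via edit distance; whether the project uses WER or another measure is an open assumption here.

```python
# Word error rate: edit distance (substitutions + insertions + deletions) / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                     # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                     # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the boy ran home", "the boy run home"))  # 0.25
```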

Computational Audiology: new ways to address the global burden of hearing loss
Source: https://www.stripepartners.com/our_writing_article/the-age-of-the-ear/

Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
