Machine Learning applied to problems in audiology

Using CARFAC-JAX, a fast, differentiable model of the human cochlea, to efficiently fit personalized hearing loss

Implemented in JAX, CARFAC not only computes quickly but is also differentiable with respect to its parameters, which enables fast and efficient fitting of personalized models of hearing impairment.
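To illustrate the underlying idea, here is a toy sketch (a stand-in, not the real CARFAC model): a single "hearing loss" parameter is fitted to simulated measurements by gradient descent. The gradient is written out by hand; in CARFAC-JAX, JAX's automatic differentiation (`jax.grad`) would supply it for the full cochlear model.

```python
# Toy sketch of gradient-based fitting (NOT the real CARFAC model):
# a single "hearing loss" parameter is fitted so that a stand-in
# cochlear input/output function matches simulated measurements.

def toy_cochlea(level_db, loss_db):
    # Stand-in input/output function: compressive growth, reduced by loss.
    return 0.6 * (level_db - loss_db)

def fit_loss(measurements, steps=500, lr=0.5):
    """Fit loss_db by gradient descent on the mean squared error."""
    loss_db = 0.0
    for _ in range(steps):
        # Hand-written gradient of the MSE w.r.t. loss_db; with
        # CARFAC-JAX, jax.grad would provide this automatically.
        grad = sum(2.0 * (toy_cochlea(lvl, loss_db) - target) * (-0.6)
                   for lvl, target in measurements) / len(measurements)
        loss_db -= lr * grad
    return loss_db

# Simulate a listener with 30 dB loss and recover the parameter.
data = [(lvl, toy_cochlea(lvl, 30.0)) for lvl in (40, 60, 80)]
fitted = fit_loss(data)
```

The same pattern, scaled up to thousands of parameters, is what makes a differentiable cochlear model attractive for personalization.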


From business need to model monitoring: Optimizing manufacturing of custom hearing aids with machine learning

This presentation introduces a framework for developing and deploying cloud-based machine-learning models, in the context of a project to optimize the manufacturing of custom (in-the-ear style) hearing aids through improved detection of electroacoustic test failures caused by feedback.


Neural Networks Hear You Loud and Clear: Hearing Loss Compensation Using Deep Neural Networks

This study investigates the use of deep neural networks to compensate for hearing loss, showing promising results in speech intelligibility and perceived quality.


Podcast Episode 3: A holistic perspective on hearing technology

In this episode, Brent Edwards from NAL and Stefan Launer from Sonova take us through their careers and share lessons and perspectives on the development of hearing technology. We discuss how technology development is becoming more holistic, design thinking, standardization, and what is needed to get to new service models and innovation.

The goal is to find the model instance (a folder from shelf y in the figure) that has the maximum likelihood of having generated the experimental data set d. The stimulus that leads to the smallest variance of the parameter estimates is presented next. This process is repeated until a termination criterion is met.

Model-based selection of most informative diagnostic tests and test parameters

In theory, the model itself should conduct the experiment, because it knows best which condition will be most informative for constraining its free parameters.
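The loop described above can be sketched with a toy example (illustrative only: the logistic psychometric function, the grids, and the fixed trial budget are assumptions, not the paper's actual choices). Candidate model instances are threshold values; each trial presents the stimulus whose expected posterior variance is smallest, then updates the posterior with the observed response.

```python
import math
import random

def p_detect(level, threshold, slope=1.0):
    # Toy logistic psychometric function: detection probability of a tone.
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

def update(post, thresholds, level, detected):
    # Bayes update of the posterior over candidate thresholds.
    like = [p_detect(level, t) if detected else 1.0 - p_detect(level, t)
            for t in thresholds]
    post = [p * l for p, l in zip(post, like)]
    z = sum(post)
    return [p / z for p in post]

def variance(post, thresholds):
    mean = sum(p * t for p, t in zip(post, thresholds))
    return sum(p * (t - mean) ** 2 for p, t in zip(post, thresholds))

def expected_variance(post, thresholds, level):
    # Expected posterior variance if `level` were presented next.
    p_yes = sum(p * p_detect(level, t) for p, t in zip(post, thresholds))
    ev = 0.0
    for detected, p_resp in ((True, p_yes), (False, 1.0 - p_yes)):
        ev += p_resp * variance(update(post, thresholds, level, detected),
                                thresholds)
    return ev

def run(true_threshold, n_trials=20, seed=0):
    rng = random.Random(seed)
    thresholds = list(range(0, 81, 2))   # candidate model instances ("folders")
    post = [1.0 / len(thresholds)] * len(thresholds)
    levels = list(range(0, 81, 5))       # candidate stimuli
    for _ in range(n_trials):            # termination criterion: trial budget
        level = min(levels,
                    key=lambda s: expected_variance(post, thresholds, s))
        detected = rng.random() < p_detect(level, true_threshold)
        post = update(post, thresholds, level, detected)
    return sum(p * t for p, t in zip(post, thresholds))

estimate = run(true_threshold=42.0)
```

After a handful of trials the selected stimuli cluster near the listener's true threshold, which is exactly where the responses are most informative.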


Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.

(A) Signals are processed with a binaural processing stage and converted to phoneme probabilities using a DNN; the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).

Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition

In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.

Estimated distortion component of hearing impairment from the model's prediction errors as a function of the average hearing loss.

Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning

Attenuation-based model predictions of speech recognition thresholds, such as those from FADE, appear to enable estimation of the supra-threshold distortion component of hearing impairment.

Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Classification results using Naïve Bayes and a 5-fold cross-validation procedure.

Automatic detection of human activities from accelerometer sensors integrated in hearables

Using a Naïve Bayes classifier, we showed that twelve different activities could be classified above chance.
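As a self-contained illustration of the method (synthetic data, two activities instead of twelve; all feature values are invented, not taken from the study), here is Gaussian Naïve Bayes with 5-fold cross-validation in plain Python:

```python
import math
import random
import statistics

def make_data(rng, n_per_class=50):
    # Synthetic accelerometer features per window: (mean magnitude, variance).
    # "walking" moves more than "sitting"; numbers are made up for illustration.
    data = []
    for label, (mu_mag, mu_var) in (("sitting", (1.0, 0.05)),
                                    ("walking", (1.4, 0.40))):
        for _ in range(n_per_class):
            data.append(([rng.gauss(mu_mag, 0.1), rng.gauss(mu_var, 0.08)],
                         label))
    rng.shuffle(data)
    return data

def train_gnb(samples):
    # Gaussian Naive Bayes: per-class mean and std of each feature.
    model = {}
    for label in {lab for _, lab in samples}:
        feats = [x for x, lab in samples if lab == label]
        model[label] = [(statistics.mean(col), statistics.stdev(col) + 1e-6)
                        for col in zip(*feats)]
    return model

def predict(model, x):
    def log_like(params):
        return sum(-((v - m) ** 2) / (2 * s * s) - math.log(s)
                   for v, (m, s) in zip(x, params))
    return max(model, key=lambda lab: log_like(model[lab]))

def cross_validate(data, k=5):
    # k-fold cross-validation: train on k-1 folds, test on the held-out fold.
    fold = len(data) // k
    accs = []
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        model = train_gnb(train)
        accs.append(sum(predict(model, x) == y for x, y in test) / len(test))
    return sum(accs) / k

accuracy = cross_validate(make_data(random.Random(1)))
```

With well-separated synthetic classes the cross-validated accuracy is far above the 50% chance level, mirroring the above-chance result reported for the twelve real activities.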

Schematic of the experimental set-up, including the placement of the participant, the loudspeakers, and the two observers.

A classification approach to listening effort: combining features from the pupil and cardiovascular system

We trained k-nearest-neighbor classifiers to predict intelligibility, social state, and participants' perception of a listening task.
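A minimal k-nearest-neighbor sketch of this kind of classification (the two features, their values, and the "high"/"low" effort labels are synthetic stand-ins, not the study's data):

```python
import math
import random

def knn_predict(train, x, k=3):
    # Majority vote among the k nearest training points (Euclidean distance).
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

rng = random.Random(7)

def sample(effort):
    # Synthetic two-feature trials: (pupil dilation, heart-rate variability).
    # Values are invented for illustration, not taken from the study.
    base = (0.8, 0.3) if effort == "high" else (0.3, 0.7)
    return ([rng.gauss(base[0], 0.08), rng.gauss(base[1], 0.08)], effort)

train = [sample(label) for label in ("high", "low") * 30]
test = [sample(label) for label in ("high", "low") * 10]
accuracy = sum(knn_predict(train, x) == y for x, y in test) / len(test)
```

Combining pupil and cardiovascular features into one feature vector, as the study does, is exactly what the distance computation here operates on.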

Overview of the decision tree to diagnose somatosensory tinnitus

A data-driven decision tree for diagnosing somatosensory tinnitus

Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.


Prediction of speech recognition by hearing-aid users: the syllable-constituent, contextual model of speech perception

Speech perception by hearing-aid (HA) users has been evaluated using a database that includes up to 45 hours of testing of their aided ability to recognize syllabic constituents of speech, and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.


Random Forest Classification to Predict Response to High-Definition Transcranial Direct Current Stimulation Therapy for Tinnitus

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.


Computational modelling of the human auditory brainstem response to natural speech

The computational model consists of three main parts (auditory nerve, inferior colliculus and cochlear nuclei). The figure shows the input (natural speech) and the neural outputs at the different levels.

Final masked AMLAG results for one participant (127) with a left cochlear implant and no residual hearing. Red diamonds denote unheard tones and blue pluses denote heard tones. The most intense tones at lower frequencies in the left ear were effectively masked.

Dynamically Masked Audiograms with Machine Learning Audiometry

Dynamically masked audiograms achieve accurate estimates of true thresholds and reduce test time compared with current clinical masking procedures.


Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine-learning model is trained on real-world fitting data to predict a user's individual gain from audiometric and further subject-related data, such as age, gender, and acoustic environments.
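As a minimal sketch of this kind of prediction (synthetic data; the half-gain rule, the small age effect, and all coefficients are assumptions for illustration, not the paper's model), ordinary least squares fits gain from hearing loss and age:

```python
import random

def fit_linear(X, y):
    # Ordinary least squares via the normal equations, solved with
    # Gaussian elimination (stdlib only; real systems would use a library).
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    n = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(n)]
    for i in range(n):                           # forward elimination
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    w = [0.0] * n
    for i in reversed(range(n)):                 # back substitution
        w[i] = (b[i] - sum(A[i][j] * w[j]
                           for j in range(i + 1, n))) / A[i][i]
    return w

rng = random.Random(3)
# Synthetic fittings: gain roughly follows a half-gain rule plus an age effect.
X, y = [], []
for _ in range(200):
    loss, age = rng.uniform(20, 80), rng.uniform(20, 90)
    X.append((loss, age))
    y.append(0.5 * loss + 0.05 * age + rng.gauss(0.0, 1.0))

weights = fit_linear(X, y)  # [intercept, loss coefficient, age coefficient]
```

A production model would use richer features (audiogram per frequency, environment classes) and a nonlinear learner, but the workflow of fitting predicted gain to real fitting data is the same.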


Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.

The nine stages of the machine learning workflow.

The critical role of computing infrastructure in computational audiology

The rise of new digital tools for collecting data on scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computing infrastructure we need in order to fully realize the benefits and mitigate the associated barriers.


Speech recognition apps for the hearing impaired and deaf

Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.


Computational Audiology: new ways to address the global burden of hearing loss

Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
