Prediction of speech recognition by hearing-aid users: the syllable-constituent, contextual model of speech perception

Speech perception by hearing-aid (HA) users has been evaluated using a database that includes up to 45 hours of testing of their aided ability to recognize syllabic constituents of speech and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.

Administration by telephone of US National Hearing Test to 150,000+ persons

The U.S. National Hearing Test has now been taken by over 150,000 people, and this extensive database provides reliable estimates of the distribution of hearing loss among people who voluntarily take a digits-in-noise test by telephone.
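
The summary does not describe the test procedure itself, but digits-in-noise tests of this kind typically estimate a speech reception threshold with an adaptive SNR track. The sketch below illustrates a generic one-up/one-down track run on a simulated listener; the step size, trial count, and psychometric function are illustrative only and are not taken from the National Hearing Test.

```python
import math
import random

def simulated_listener(snr_db, srt_true=-8.0, slope=1.5):
    """Toy psychometric function: probability of repeating a digit triplet correctly."""
    p_correct = 1.0 / (1.0 + math.exp(-(snr_db - srt_true) / slope))
    return random.random() < p_correct

def run_adaptive_track(n_trials=25, start_snr=0.0, step_db=2.0):
    """One-up/one-down SNR track: lower the SNR after a correct response,
    raise it after an error, converging near the 50%-correct threshold."""
    snr, snrs = start_snr, []
    for _ in range(n_trials):
        snrs.append(snr)
        snr += -step_db if simulated_listener(snr) else step_db
    return sum(snrs[4:]) / len(snrs[4:])   # average the later, converged trials

print(f"Estimated speech reception threshold: {run_adaptive_track():.1f} dB SNR")
```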

Audiological classification performance based on audiological measurements and Common Audiological Functional Parameters (CAFPAs)

Towards the development of a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, and thereby offer the potential to combine different audiological databases.
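
As a rough illustration of that kind of comparison (not the study's actual data, features, or classifier), the sketch below trains the same classifier once on CAFPA-like features and once on conventional measurement features and compares cross-validated accuracy; all values are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
findings = rng.integers(0, 3, size=n)        # placeholder class labels (audiological findings)
cafpa_like = rng.random((n, 10))             # stand-in for CAFPA values (dimension arbitrary)
measurement_like = rng.random((n, 12))       # stand-in for combined audiological measurements

for name, X in [("CAFPA features", cafpa_like), ("measurement features", measurement_like)]:
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X, findings, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
```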

The Panoramic ECAP Method: modelling the electrode-neuron interface in cochlear implant users

The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy and improving speech perception outcomes.

Predicting abnormal hearing difficulty in noise in ‘normal’ hearers using standard audiological measures

This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.
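The general modelling setup can be sketched as follows, assuming synthetic placeholder data: otoacoustic-emission levels and audiometric thresholds are concatenated into one feature matrix, and a classifier is evaluated on its ability to predict the self-report. The model choice and feature dimensions here are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 120
oae_levels = rng.normal(5, 6, (n, 8))        # e.g. OAE levels at several frequencies
thresholds = rng.uniform(-5, 20, (n, 6))     # audiometric thresholds (dB HL)
X = np.hstack([oae_levels, thresholds])
reports_difficulty = rng.integers(0, 2, n)   # self-report: 1 = difficulty hearing in noise

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(clf, X, reports_difficulty, cv=5, scoring="roc_auc").mean()
print(f"Cross-validated ROC AUC: {auc:.2f}")
```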

Preliminary evaluation of the Speech Reception Threshold measured using a new language-independent screening test as a predictor of hearing loss

We developed a new, automated, language-independent speech-in-noise screening test and evaluated its performance in 150 subjects against the WHO criteria for slight/mild and moderate hearing loss, observing an accuracy above 80%, with areas under the ROC curves of 0.83 and 0.89, respectively.
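
For orientation, the sketch below shows how such an evaluation is typically computed: the measured speech reception threshold (SRT) serves as a continuous predictor of WHO-defined hearing loss, and performance is summarised by the area under the ROC curve plus sensitivity and specificity at a decision threshold. The data, the PTA cutoff, and the SRT cutoff are synthetic placeholders, not the study's values.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 150
pta_db = rng.uniform(0, 70, n)                       # better-ear pure-tone average (dB HL)
srt_db = 0.3 * pta_db - 10 + rng.normal(0, 3, n)     # toy SRT correlated with PTA

has_loss = pta_db > 25.0              # illustrative cutoff standing in for a WHO grade
auc = roc_auc_score(has_loss, srt_db)

srt_cutoff = -2.0                     # illustrative screening decision threshold
flagged = srt_db > srt_cutoff
sensitivity = flagged[has_loss].mean()
specificity = (~flagged[~has_loss]).mean()
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```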

Random Forest Classification to Predict Response to High-Definition Transcranial Direct Current Stimulation Therapy for Tinnitus

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
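A minimal sketch of the named approach, with random placeholder data standing in for the study's predictors: a random forest classifier is trained to separate responders from non-responders and scored with cross-validated accuracy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 100
X = rng.random((n, 8))               # placeholder baseline features
y = rng.integers(0, 2, size=n)       # 1 = treatment responder, 0 = non-responder

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"Cross-validated accuracy: {acc:.1%}")
```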

Computational modelling of the human auditory brainstem response to natural speech

The computational model consists of three main stages: the auditory nerve, the cochlear nuclei, and the inferior colliculus. Given natural speech as input, it simulates the neural responses at each of these levels.
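
The sketch below is only a structural toy, not the study's model: it chains three rectify-and-smooth stages to mimic the auditory nerve → cochlear nucleus → inferior colliculus cascade, turning a speech-like input into progressively slower "neural" outputs.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.random.default_rng(3).normal(size=t.size) * (1 + np.sin(2 * np.pi * 4 * t))

def stage(x, cutoff_hz):
    """Half-wave rectify then low-pass filter: a crude stand-in for one neural stage."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="lowpass")
    return lfilter(b, a, np.maximum(x, 0.0))

an_out = stage(speech_like, 1000.0)   # auditory-nerve-like stage
cn_out = stage(an_out, 300.0)         # cochlear-nucleus-like stage
ic_out = stage(cn_out, 100.0)         # inferior-colliculus-like stage
print("IC-stage output, first samples:", ic_out[:5])
```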

Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric data and additional subject-related data, such as age, gender, and the acoustic environment.
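
As an illustration of the prediction task (with synthetic data and an arbitrary model choice, not the study's), the sketch below regresses a single gain value on audiometric thresholds plus age and gender and reports a cross-validated error.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 500
thresholds = rng.uniform(10, 80, (n, 6))       # audiogram at six frequencies (dB HL)
age = rng.uniform(50, 90, (n, 1))
gender = rng.integers(0, 2, (n, 1))
X = np.hstack([thresholds, age, gender])
gain_db = 0.4 * thresholds[:, 3] + rng.normal(0, 3, n)   # toy target: gain in one band

model = GradientBoostingRegressor(random_state=0)
mae = -cross_val_score(model, X, gain_db, cv=5, scoring="neg_mean_absolute_error").mean()
print(f"Cross-validated mean absolute error: {mae:.1f} dB")
```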

Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.

Transcranial alternating current stimulation with the theta- but not delta band modulates speech-in-noise comprehension

When we listen to speech, oscillatory activity in the auditory cortex entrains to the amplitude fluctuations of the speech signal. This entrainment can be influenced by non-invasive neurostimulation, which can thereby modulate the comprehension of speech in background noise.
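
The quantity at stake here is the slow amplitude envelope of speech. The sketch below extracts a broadband envelope from a noise placeholder signal and band-limits it to the theta range (roughly 4–8 Hz by common convention); the stimulation itself is not modelled.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample_poly

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
speech_like = np.random.default_rng(5).normal(size=t.size) * (1 + np.sin(2 * np.pi * 5 * t))

envelope = np.abs(hilbert(speech_like))                        # broadband amplitude envelope
env_fs = 100
envelope = resample_poly(envelope, up=1, down=fs // env_fs)    # envelope resampled to 100 Hz

b, a = butter(2, [4 / (env_fs / 2), 8 / (env_fs / 2)], btype="bandpass")
theta_envelope = filtfilt(b, a, envelope)                      # theta-band (~4-8 Hz) component
print("Theta-band envelope, first samples:", theta_envelope[:5])
```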

Speech recognition apps for the hearing impaired and deaf

Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.
