Real-Time Deep Neural Network to Remix Music for Cochlear Implant Users
Design and evaluation of a real-time audio source separation algorithm to remix music for cochlear implant users
Auditory modeling is indispensable for precision diagnostics and individualized treatment
Simulated binaural neural networks show that sharp spatial and frequency tuning is needed to accurately localize sound sources in the azimuth direction.
AI-assisted Diagnosis for Middle Ear Pathologies
Remote delivery of audiology care during COVID-19
Modeling speech perception in hidden hearing loss using stochastically undersampled neuronal firing patterns
Speech perception by hearing aid (HA) users was evaluated using a database that includes up to 45 hours of testing per listener of their aided ability to recognize syllabic constituents of speech and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.
The U.S. National Hearing Test has now been taken by more than 150,000 people, and this extensive database provides reliable estimates of the distribution of hearing loss among people who voluntarily take a digits-in-noise test by telephone.
Pure-tone audiometry (PTA) is more informative than speech-ABR for predicting aided behavioural measures.
Detection of current shunts with a ladder-network model
Test-retest analysis of aggregated audiometry testing data using Jacoti Hearing Center self-testing application
Towards a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, and thereby offer the potential to combine different audiological databases.
Applying biophysical auditory periphery models for real-time applications and studies of hearing impairment
This study used machine learning to identify normal-hearing listeners with and without tinnitus based on their ABRs.
A system that predicts and identifies neural responses to overlapping speech sounds mimics human perception.
Diotic and antiphasic digits-in-noise to detect and classify types of hearing loss
A simple at-home self-check to screen for aberrant loudness growth in hearing aid and cochlear implant users
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy and improving speech perception outcomes.
This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.
In the Musi-CI project, training methods are developed for CI users, primarily to enhance music enjoyment and secondarily to improve perception of everyday sounds and speech.
We developed a new, automated, language-independent speech-in-noise screening test and evaluated its performance in 150 subjects against the WHO criteria for slight/mild and moderate hearing loss, observing an accuracy above 80%, with areas under the ROC curves of 0.83 and 0.89, respectively.
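By way of illustration, the sketch below shows how such accuracy and area-under-ROC figures are computed for a binary screening decision; the synthetic scores, cohort size, and cut-off are assumptions for the example, not data from the study.

```python
# Hypothetical sketch: evaluating a binary hearing-loss screen against
# reference labels, in the style of the accuracy/AUC figures quoted above.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# y_true: 1 = hearing loss per WHO criteria, 0 = normal (synthetic labels)
y_true = rng.integers(0, 2, size=150)
# score: test output (e.g., speech reception threshold in dB SNR), here synthetic
score = y_true * 1.5 + rng.normal(0.0, 1.0, size=150)

auc = roc_auc_score(y_true, score)      # threshold-free discrimination
y_pred = (score > 0.75).astype(int)     # an assumed screening cut-off
acc = accuracy_score(y_true, y_pred)

print(f"AUC = {auc:.2f}, accuracy = {acc:.2%}")
```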
How much audiological data is needed for convergence? One year!
A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
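A minimal sketch of this kind of classifier follows; the features (tinnitus loudness, distress score, age), cohort size, and labels are invented for illustration and will not reproduce the reported 82.41% figure.

```python
# Hypothetical sketch: random forest predicting HD-tDCS treatment response
# (responder vs. non-responder) from pre-treatment features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 100  # hypothetical cohort size

# Invented features: tinnitus loudness (dB), distress score, age (years)
X = np.column_stack([
    rng.normal(60, 10, n),   # loudness
    rng.normal(40, 15, n),   # distress questionnaire score
    rng.normal(55, 12, n),   # age
])
y = rng.integers(0, 2, n)    # 1 = responded to treatment (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2%}")
```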
The computational model consists of three main stages: auditory nerve, cochlear nucleus, and inferior colliculus. The figure shows the input (natural speech) and the neural outputs at each level.
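Such a cascade can be read as a composition of stage functions, each transforming the previous stage's output. The sketch below shows only that composition pattern; the placeholder stages (rectification, smoothing, onset emphasis) stand in for the actual biophysical models.

```python
# Hypothetical sketch of a three-stage cascade: sound -> auditory nerve ->
# cochlear nucleus -> inferior colliculus. Each stage is a placeholder,
# not the real biophysical model.
import numpy as np

def auditory_nerve(sound):
    # Placeholder: half-wave rectification as a crude firing-rate proxy
    return np.maximum(sound, 0.0)

def cochlear_nucleus(an_rate):
    # Placeholder: short moving average (temporal integration)
    kernel = np.ones(5) / 5.0
    return np.convolve(an_rate, kernel, mode="same")

def inferior_colliculus(cn_rate):
    # Placeholder: emphasize rate increases (onset sensitivity)
    return np.maximum(np.diff(cn_rate, prepend=cn_rate[0]), 0.0)

t = np.linspace(0, 1, 16000)  # 1 s at 16 kHz
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
ic_out = inferior_colliculus(cochlear_nucleus(auditory_nerve(speech_like)))
print(ic_out.shape)
```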
A machine learning model is trained on real-world fitting data to predict a user's individual gain from audiometric data and other subject-related data, such as age, gender, and acoustic environments.
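A hedged sketch of that setup: a regression pipeline mapping an audiometric threshold plus categorical subject data onto a gain value. The column names, encoder, regressor, and synthetic data are assumptions for the example, not the actual fitting pipeline.

```python
# Hypothetical sketch: predicting individual hearing-aid gain (dB) from an
# audiometric threshold and subject-related data (age, gender, environment).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(2)
n = 200  # hypothetical number of fittings

df = pd.DataFrame({
    "thr_1kHz": rng.normal(45, 15, n),                    # threshold in dB HL
    "age": rng.normal(68, 10, n),
    "gender": rng.choice(["f", "m"], n),
    "environment": rng.choice(["quiet", "speech", "noise"], n),
})
gain_db = 0.4 * df["thr_1kHz"] + rng.normal(0, 3, n)      # synthetic target

pre = ColumnTransformer(
    [("cat", OneHotEncoder(), ["gender", "environment"])],
    remainder="passthrough",                              # numeric columns pass through
)
model = Pipeline([("pre", pre), ("reg", GradientBoostingRegressor())])
model.fit(df, gain_db)
print(model.predict(df.head(3)))
```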
This work presents a computational auditory scene analysis (CASA) model of attentive voice tracking.
Computational modelling allowed us to explore the effects of non-invasive brain stimulation on cortical processing of speech.
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.
When listening to speech, oscillatory activity in the auditory cortex entrains to its amplitude fluctuations. This entrainment can be influenced by non-invasive neurostimulation, which can thereby modulate comprehension of a speech signal in background noise.
Adaptation of the auditory nerve to electrical stimulation can best be described by a power law or a sum of exponents.
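For concreteness, the two candidate forms can be written as below, with r(t) the normalized response and a, a_i, t_0, beta, tau_i fitted constants; the exact parameterization used in the study may differ.

```latex
% Generic forms for adaptation of the normalized response r(t):
% power law vs. sum of exponentials (symbols are illustrative).
\begin{align}
  r_{\mathrm{pow}}(t) &= r_\infty + a\,(t + t_0)^{-\beta} \\
  r_{\mathrm{exp}}(t) &= r_\infty + \sum_{i=1}^{N} a_i \, e^{-t/\tau_i}
\end{align}
```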
Speech recognition software has become increasingly sophisticated and accurate thanks to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are representative measures of these apps' ability to convert speech into text.
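Speech-to-text performance is commonly summarized by word error rate (WER); the sketch below computes WER via word-level edit distance on a toy transcript pair. Treating WER as the project's metric is an assumption here, not a statement of its protocol.

```python
# Hypothetical sketch: word error rate (WER) between a reference sentence
# and a speech-recognition app's transcript, via Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```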