Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort
Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.
In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.
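A minimal sketch only, not the authors' pipeline: one way to turn per-frame phoneme posteriors from a DNN into a scalar intelligibility estimate is to measure how concentrated the posteriors are, e.g. via their normalized entropy. The function name, array shapes, and the random stand-in data below are assumptions made for illustration.

```python
import numpy as np

def intelligibility_index(phoneme_posteriors: np.ndarray) -> float:
    """Toy intelligibility index from DNN phoneme posteriors.

    phoneme_posteriors: array of shape (n_frames, n_phonemes), each row
    summing to 1 (softmax output of a hypothetical DNN acoustic model).
    Returns a value in [0, 1]: 1 means maximally confident (low entropy),
    0 means uniform posteriors (no phonetic evidence).
    """
    eps = 1e-12
    p = np.clip(phoneme_posteriors, eps, 1.0)
    frame_entropy = -(p * np.log(p)).sum(axis=1)   # per-frame entropy
    max_entropy = np.log(p.shape[1])               # entropy of a uniform posterior
    return float(1.0 - frame_entropy.mean() / max_entropy)

# Usage with random posteriors standing in for real DNN output
rng = np.random.default_rng(0)
logits = rng.normal(size=(200, 40))                # 200 frames, 40 phoneme classes
posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(intelligibility_index(posteriors))
```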
Predictions of speech recognition thresholds from attenuation-component-based models such as FADE appear to facilitate estimation of the supra-threshold distortion component of hearing impairment.
We developed a deep learning model of hearing loss by training artificial neural networks to recognize words in noise from simulated auditory nerve input.
A finite element model of a cochlea, a computational model of the auditory nerve, and an automatic speech recognition neural network were combined to replicate CI speech perception patterns.
Teenagers with bilateral cochlear implants (CI) often suffer from poor spatial hearing abilities; a set of multi-modal (audio-visual) and multi-domain training tasks (localisation, spatial speech-in-noise and spatial music) was designed by involving teenage CI users as co-creators during the development process.
Statistical analysis of how fitting parameters relate to speech recognition scores finds meaningful differences between the highest- and lowest-scoring tertiles of recipients.
In a first-of-its-kind study, we aimed to determine the accuracy and reliability of sound-level-monitoring earphones and the effect of smartphone feedback as an intervention to encourage safe listening among young people.
Using a Naive Bayes classifier, we showed that twelve different activities could be classified above chance.
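As an illustration of that kind of analysis (with a synthetic feature matrix standing in for the real activity data, which is an assumption), scikit-learn's GaussianNB can be cross-validated and compared against the 1/12 chance level:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per recording, e.g. acoustic or motion
# features from hearing-device sensors, with 12 activity labels.
rng = np.random.default_rng(1)
n_samples, n_features, n_classes = 600, 20, 12
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X[:, 0] += 0.5 * y   # inject a weak class-dependent shift so the toy data are learnable

scores = cross_val_score(GaussianNB(), X, y, cv=5)
chance = 1.0 / n_classes
print(f"mean accuracy {scores.mean():.2f} vs. chance {chance:.2f}")
```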
This ongoing study investigates how effortful listening becomes when neurocognitive mechanisms must be engaged to process speech distorted by a CI vocoder, using EEG and pupillometry during an auditory digit working-memory task.
Training k-nearest neighbor classifiers to predict intelligibility, social state, and participant perception of a listening task.
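A hedged sketch of such a setup, assuming hypothetical listening-situation features and three invented categorical targets (none of these names or label sets come from the original work):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical ecological-momentary-assessment features (e.g. sound level,
# SNR estimate, own-voice activity) and three separate categorical targets.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 8))
targets = {
    "intelligibility": rng.integers(0, 3, 400),   # low / medium / high
    "social_state":    rng.integers(0, 2, 400),   # alone / with others
    "task_perception": rng.integers(0, 3, 400),   # easy / moderate / hard
}

# One k-NN classifier per target, scored by 5-fold cross-validation.
for name, y in targets.items():
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy {acc:.2f}")
```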
Formant-frequency difference limens in humans with normal hearing or mild sensorineural hearing loss were estimated based on models of neural fluctuation profiles of neurons in the inferior colliculus.
This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells by using biophysically detailed neural network models of bushy cell microcircuits.
We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation which can be used to investigate peripheral electric-acoustic interaction.
A physiologically based model can simulate some dichotic vowel fusion percepts in human listeners.
A cloud-based web app provides an accessible tool for simulation and visualization of population responses of model auditory-nerve and midbrain neurons.
We present an automated program for aligning stimulus-response phonemes collected from speech testing in order to visualize speech perception errors in individuals with hearing loss.
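The core of such a program is an alignment between the presented and repeated phoneme sequences. The following is a generic edit-distance (dynamic-programming) sketch of that step, not the authors' implementation; the phoneme symbols in the usage line are invented for illustration.

```python
def align_phonemes(stimulus, response):
    """Minimal edit-distance alignment of two phoneme sequences.

    Returns a list of (stimulus_phoneme, response_phoneme) pairs, with None
    marking insertions/deletions, from which confusion counts can be tallied.
    """
    n, m = len(stimulus), len(response)
    # dp[i][j] = edit distance between stimulus[:i] and response[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if stimulus[i - 1] == response[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution
    # Trace back through the table to recover the alignment.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (stimulus[i - 1] != response[j - 1]):
            pairs.append((stimulus[i - 1], response[j - 1])); i -= 1; j -= 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            pairs.append((stimulus[i - 1], None)); i -= 1
        else:
            pairs.append((None, response[j - 1])); j -= 1
    return pairs[::-1]

# "bat" heard as "pat": the alignment exposes the /b/ -> /p/ confusion.
print(align_phonemes(["b", "ae", "t"], ["p", "ae", "t"]))
```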
Difficulties integrating binaural cues and understanding speech in noise among blast-exposed Service members with audiometric thresholds within clinical norms are most likely due to sensory rather than cognitive deficits, as indicated by a poorer signal-to-noise ratio in the neural encoding of sound in the peripheral auditory system.
Computer simulation supports the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and impairs temporal envelope processing, and shows that some tasks are more sensitive to this deficit than others.
Our findings suggest that the frequency following response tracking the fundamental frequency of voiced speech plays an active role in the rapid and continuous processing of spoken language.
Cortical tracking of the ignored speaker plays a functional role in speech processing.
We introduce a new generative model of selective attention during cocktail party listening, and treat selective attention as an inference problem.
This study developed application software that periodically tracks the course of Meniere's disease in patients through automated, binaural audiometric tests.
The hyperacusis group differs significantly from the control and tinnitus groups in behavioral tests, but not in electrophysiological tests.
Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
An RCT undertaken in a US population indicates the value of a CBT internet intervention for reducing tinnitus distress and its comorbidities and for managing the anxiety associated with the pandemic.