We present an Alexa skill that performs a speech-in-noise listening test with matrix
sentences. The skill is evaluated with four subject groups and in three different acoustic
conditions.
The goal is to find the model instance (folder from shelf y) that is most likely to have generated the experimental data set d. The stimulus expected to yield the smallest variance of the parameter estimates is presented next, and the process is repeated until the termination criterion is met.
In theory, the model itself should conduct the experiment, because it knows best which condition will be most informative for constraining its free parameters.
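As an illustration of this model-driven procedure, the sketch below implements a minimal Bayesian variant: a grid of candidate model instances (here, candidate SRTs under a logistic psychometric function) is scored by likelihood, and the next stimulus is the SNR that minimises the expected posterior variance of the parameter estimate. The grids, the psychometric function, and its slope are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

SRT_GRID = np.linspace(-10.0, 10.0, 81)    # candidate model instances (dB SNR)
SNR_LEVELS = np.linspace(-15.0, 15.0, 31)  # candidate stimulus conditions (dB SNR)

def p_correct(snr, srt, slope=0.15):
    """Psychometric function of a model instance: P(correct | SNR)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt)))

def posterior(log_like):
    """Normalised posterior over the model grid from accumulated log-likelihoods."""
    p = np.exp(log_like - log_like.max())
    return p / p.sum()

def update(log_like, snr, correct):
    """Add one trial's evidence to every model instance."""
    p = p_correct(snr, SRT_GRID)
    return log_like + np.log(p if correct else 1.0 - p)

def next_stimulus(log_like):
    """Choose the SNR that minimises the expected posterior variance of the SRT."""
    post = posterior(log_like)
    best_snr, best_var = None, np.inf
    for snr in SNR_LEVELS:
        exp_var = 0.0
        for correct in (True, False):
            p = p_correct(snr, SRT_GRID)
            like = p if correct else 1.0 - p
            p_outcome = post @ like              # predictive probability of outcome
            new_post = post * like
            new_post /= new_post.sum()
            mean = new_post @ SRT_GRID
            exp_var += p_outcome * (new_post @ (SRT_GRID - mean) ** 2)
        if exp_var < best_var:
            best_snr, best_var = snr, exp_var
    return best_snr
```

Starting from `log_like = np.zeros_like(SRT_GRID)`, alternating `next_stimulus` and `update` runs the adaptive loop until a termination criterion (e.g., posterior variance below a bound) is met.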
Fig. 1 Map of England by Government Office Regions, showing prevalence rates of self-reported hearing loss across eight waves of the English Longitudinal Study of Ageing (ELSA). This work by Dialechti Tsimpida is licensed under a Creative Commons Attribution 4.0 International License.
(A) Signals are processed with a binaural processing stage and converted to phoneme probabilities by a DNN; the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).
In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.
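The abstract does not specify how the degradation of the phoneme probabilities is quantified; one plausible stand-in, sketched below, compares clean and noisy DNN posteriorgrams with a symmetric KL divergence. The function name and the choice of divergence are assumptions for illustration only.

```python
import numpy as np

def degradation_score(p_clean, p_noisy, eps=1e-12):
    """Mean symmetric KL divergence between clean and noisy phoneme
    posteriorgrams (arrays of shape [frames, phonemes]); larger values
    indicate stronger degradation, i.e. lower predicted intelligibility."""
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=1)
    return float(np.mean(0.5 * (kl(p_clean, p_noisy) + kl(p_noisy, p_clean))))
```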
Predictions of speech recognition thresholds from attenuation-component-based models such as FADE appear to enable an estimation of the supra-threshold distortion component of hearing impairment.
Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured the effects of simulated outer hair cell (OHC) loss (broader frequency tuning and elevated thresholds) and auditory nerve fiber (ANF) loss (reduced fidelity of temporal coding) on network speech recognition in noise.
We developed a deep learning model of hearing loss by training artificial neural networks to recognize words in noise from simulated auditory nerve input.
A finite element model of a cochlea, a computational model of the auditory nerve, and an automatic speech recognition neural network were combined to replicate CI speech perception patterns.
Teenagers with bilateral cochlear implants (CIs) often suffer from poor spatial hearing abilities. A set of multi-modal (audio-visual) and multi-domain training tasks (localisation, spatial speech-in-noise, and spatial music) was designed with teenage CI users involved as co-creators during the development process.
Statistical analysis of how fitting parameters relate to speech recognition scores finds meaningful differences between the highest- and lowest-scoring tertiles of recipients.
In a first-of-its-kind study, we aimed to determine the accuracy and reliability of sound-level-monitoring earphones and the effect of smartphone feedback as an intervention to encourage safe listening among young people.
Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Results of classification with a Naïve Bayes classifier and a 5-fold cross-validation procedure.
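A minimal version of the classification step in panel C could look like the sketch below, assuming summary features have already been extracted from the hearable streams; the feature matrix and activity labels here are placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Placeholder data: one row of summary features (e.g. RMS level, spectral
# centroid) per recording window, with an activity label for each window.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = np.repeat(["walking", "office"], 100)

scores = cross_val_score(GaussianNB(), X, y, cv=5)  # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f} (sd {scores.std():.2f})")
```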
Example trial of the auditory Sternberg paradigm. The upper row shows the screen presented to the participant during each phase. The lower row illustrates the sound presented to the participant during each phase.
This ongoing study uses EEG and pupillometry during an auditory digit working-memory task to investigate how effortful listening becomes when neurocognitive mechanisms must be engaged while listening to speech distorted by a CI vocoder.
Figure 1. Stimulus spectra of standard (blue) and target (orange) vowel-like stimuli with ΔF2=2% and the corresponding model responses of AN fibers and IC band-enhanced cells (BE), which are excited by fluctuations within a band of modulation frequencies.
Formant-frequency difference limens in humans with normal hearing or mild sensorineural hearing loss were estimated from models of neural fluctuation profiles of neurons in the inferior colliculus.
VCN microcircuit structure of a pair of bushy cells. Green lines show the excitatory inputs originating from ANFs. Blue lines represent inhibitory inputs from DS cells while red lines are inhibitory inputs from TV cells. Yellow lines represent the gap junction connections between the bushy cells.
This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells by using biophysically detailed neural network models of bushy cell microcircuits.
Top: SFIE model receiving input from a single frequency. Bottom: a cross-frequency model receiving an array of excitatory and inhibitory inputs from multiple frequencies. Right: model responses to two opposite directions of frequency chirp.
We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation which can be used to investigate peripheral electric-acoustic interaction.
The model consists of a phenomenological model of acoustic stimulation (Bruce et al., 2018, green) and an integrate-and-fire model of electric stimulation (Joshi et al., 2017, orange). The model outputs simulated spike times. Different coupling methods between the two models were investigated (red).
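For readers unfamiliar with the electric branch, the sketch below shows a generic leaky integrate-and-fire neuron producing spike times from an input current; it is a textbook simplification for orientation only, not the Joshi et al. (2017) model or the coupling schemes investigated here.

```python
import numpy as np

def lif_spike_times(current, dt=1e-5, tau=5e-4, v_thresh=1.0, v_reset=0.0):
    """Generic leaky integrate-and-fire neuron: integrate an input current
    (arbitrary units) and return spike times in seconds."""
    v, spikes = 0.0, []
    for i, drive in enumerate(current):
        v += (dt / tau) * (drive - v)   # leaky integration toward the drive
        if v >= v_thresh:               # threshold crossing -> emit a spike
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)
```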
A cloud-based web app provides an accessible tool for simulation and visualization of population responses of model auditory-nerve and midbrain neurons.
Results for V1 with both a cochlear implant (CI) and a hearing aid (HA) (top), CI only (middle), and HA only (bottom), in response to a set of 30 sentences extracted from the Speech Banana auditory training app.
We present an automated program for aligning stimulus-response phonemes collected in speech testing, in order to visualize speech perception errors in individuals with hearing loss.
Average scores for age-matched controls (C:AM), high-performing, blast-exposed SMs (B:Hi) and low-performing, blast-exposed SMs (B:Lo) groups. Figure panels show results for tests used to classify subjects, clinical audiometry metrics, and selective behavioral and electrophysiological metrics.
Difficulties integrating binaural cues and understanding speech in noise among blast-exposed Service members with audiometric thresholds within clinical norms are most likely due to sensory, not cognitive, deficits, as indicated by a poorer signal-to-noise ratio in the neural encoding of sound in the peripheral auditory system.
Computer simulation supports the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and impairs temporal envelope processing, and shows that some tasks are more sensitive to this deficit than others.
A) Latency of the frequency following response to natural speech estimated by the complex linear model. B) Magnitudes of the model coefficients at 11 ms. C) Phases of the model coefficients at 11 ms.
Our findings suggest that the frequency following response tracking the fundamental frequency of voiced speech plays an active role in the rapid and continuous processing of spoken language.
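The complex linear model itself is not reproduced here; as a simplified, hedged stand-in, the latency in panel A can be thought of as the lag that maximises the correlation between the stimulus fundamental waveform and the EEG, as sketched below. The function name and the lag search range are illustrative assumptions.

```python
import numpy as np

def ffr_latency_ms(f0_wave, eeg, fs, max_lag_ms=30):
    """Estimate FFR latency as the lag (in ms) maximising the correlation
    between the fundamental waveform and the EEG; a simplification of the
    complex linear model used in the original work."""
    n_lags = int(max_lag_ms * fs / 1000)
    n = len(f0_wave) - n_lags
    scores = [abs(np.corrcoef(f0_wave[:n], eeg[lag:lag + n])[0, 1])
              for lag in range(n_lags)]
    return int(np.argmax(scores)) * 1000 / fs
```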
A) Flowchart of the "Meniere Calendar" application. B) Final psychometric function fitted to data from a normal-hearing participant (left) and a Meniere's patient (right). C) Screenshots of the application's environment.
This study developed an application that periodically tracks the course of Meniere's disease in patients through automated, binaural audiometric tests.
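Panel B of the figure shows a fitted psychometric function; a minimal fitting routine in that spirit, assuming a logistic function and invented detection data, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(level, threshold, slope):
    """Logistic psychometric function: P(detection) versus level in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

# Invented detection proportions from one automated audiometric run.
levels = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # dB HL
p_detect = np.array([0.02, 0.10, 0.35, 0.80, 0.95, 0.99])

(threshold, slope), _ = curve_fit(psychometric, levels, p_detect, p0=[25.0, 0.2])
print(f"estimated threshold: {threshold:.1f} dB HL, slope: {slope:.2f}")
```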
Grand averaged auditory cortical evoked potential waveforms of the groups (Control/Blue; Tinnitus/Orange; Hyperacusis/Purple) generated by 500 Hz and 2000 Hz Narrow-Band Noise (NBN) stimuli across intensities.
Topics found with Latent Dirichlet Allocation on 100k Reddit posts. The top rectangle of each block gives the topic's name, assigned from the algorithm's output: the lemmatised words shown in the middle rectangle. The lower rectangle displays the proportion of messages that mention the topic.
Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
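A compact version of such a topic-modelling pipeline, using scikit-learn rather than whatever toolkit the authors used, could be sketched as follows; the example posts are invented and the number of topics is arbitrary.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented stand-ins for lemmatised Reddit posts about tinnitus.
posts = [
    "ear ringing worse after loud concert",
    "white noise machine help me sleep",
    "doctor say hearing test normal but ringing persist",
    "masking app reduce my tinnitus at night",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Top words per topic, analogous to the lemmatised words in the figure.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```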
The association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa
An RCT undertaken in a US population indicated the value of an internet-based CBT intervention for reducing tinnitus distress and tinnitus comorbidities, and for managing the anxiety associated with the pandemic.