The goal is to find the model instance (the "folder" from "shelf" y) that has the maximum likelihood of having generated the experimental data set d. The stimulus expected to yield the smallest variance of the parameter estimates is presented next, and the process is repeated until the termination criterion is met.
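The model-selection step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the "shelf" is a hypothetical grid of Gaussian models differing only in their mean, and each candidate is scored by its log-likelihood given the data set d.

```python
import math

# Hypothetical shelf of candidate model instances: each "folder" is a Gaussian
# with a different mean; we score each by its log-likelihood given the data d.
def log_likelihood(data, mu, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def best_model(data, candidate_means):
    # Maximum-likelihood selection over the shelf of candidates.
    return max(candidate_means, key=lambda mu: log_likelihood(data, mu))

d = [1.9, 2.1, 2.0]           # toy experimental data set (invented values)
shelf = [0.0, 1.0, 2.0, 3.0]  # candidate model instances
print(best_model(d, shelf))   # -> 2.0
```

The subsequent step, choosing the next stimulus to minimise the expected variance of the parameter estimates, would wrap this selection in a loop over candidate stimuli; only the likelihood-maximisation step is shown here.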
Here we explore the potential for using machine learning to detect hearing loss from children's speech.
Detecting hearing loss from children’s speech using machine learning
Fig.1 Map of England by Government Office Regions, showing prevalence rates of self-reported hearing loss in eight Waves of the English Longitudinal Study of Ageing (ELSA). This work by Dialechti Tsimpida is licensed under a Creative Commons Attribution 4.0 International License.
Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.
Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort
(A) We process signals with a binaural processing stage. The signal is converted to phoneme probabilities using a DNN, and the degradation of these probabilities is used to predict binaural speech intelligibility (SI). (B) Results for subjective data, our model (BAPSI), and baseline models in terms of speech recognition threshold (SRT).
Attenuation-component-based predictions of speech recognition thresholds from models such as FADE appear to enable an estimation of the supra-threshold distortion component of hearing impairment.
Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured effects of simulated OHC loss (broader frequency tuning and elevated thresholds) and ANF loss (reduced fidelity of temporal coding) on network speech recognition in noise.
Teenagers with bilateral cochlear implants (CI) often suffer from poor spatial hearing abilities; a set of multi-modal (audio-visual) and multi-domain training tasks (localisation, spatial speech-in-noise and spatial music) was designed by involving teenage CI users as co-creators during the development process.
Designing the BEARS (Both Ears) virtual reality training suite for improving spatial hearing abilities in teenage bilateral cochlear implantees
Graphic representation of how differences in PCA components are reflected in MAPs.
In a first of its kind study, we aimed to determine the accuracy and reliability of sound-level monitoring earphones and the effect of smartphone feedback as an intervention to encourage safe listening use among young people.
Sound-level monitoring earphones with smartphone feedback as an intervention to promote healthy listening behaviors in young adults
Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Classification results using a Naïve Bayes classifier with a 5-fold cross-validation procedure.
Using a Naïve Bayes classifier, we showed that twelve different activities could be classified above chance.
Automatic detection of human activities from accelerometer sensors integrated in hearables
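The classification approach above can be illustrated with a minimal Gaussian Naïve Bayes classifier. This is a sketch under invented assumptions: the feature values below (mean and variance of acceleration magnitude per window) are made up, and the 5-fold cross-validation step is omitted for brevity.

```python
import math
from collections import defaultdict

# Minimal Gaussian Naive Bayes for activity classification from
# accelerometer features (feature values are invented for illustration).
def fit(X, y):
    stats, by_class = {}, defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-6
                 for col, m in zip(zip(*rows), means)]
        stats[c] = (math.log(n / len(y)), means, vars_)  # log prior + stats
    return stats

def predict(stats, x):
    def score(c):
        prior, means, vars_ = stats[c]
        return prior + sum(-0.5 * math.log(2 * math.pi * v)
                           - (xi - m) ** 2 / (2 * v)
                           for xi, m, v in zip(x, means, vars_))
    return max(stats, key=score)

# Toy windows: [mean |acceleration|, variance of |acceleration|].
X = [[0.1, 0.02], [0.12, 0.03], [0.9, 0.4], [0.95, 0.5]]
y = ["sitting", "sitting", "walking", "walking"]
model = fit(X, y)
print(predict(model, [0.11, 0.025]))  # -> sitting
print(predict(model, [0.92, 0.45]))   # -> walking
```

In practice the per-class means and variances would be estimated on each training fold and accuracy averaged across the five held-out folds.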
Example trial of the auditory Sternberg paradigm. The upper row shows the screen presented to the participant during each phase. The lower row illustrates the sound presented to the participant during each phase.
This ongoing study uses EEG and pupillometry during an auditory digit working-memory task to investigate how effortful listening becomes when neurocognitive mechanisms must be engaged while listening to a speech signal distorted by a CI vocoder.
Assessing listening effort, using EEG and pupillometry, in response to adverse listening conditions and memory load
Schematic showing experimental set-up including the placement of the participant, loudspeakers and two observers.
Training k-nearest neighbor classifiers to predict intelligibility, social state and participant perception of a listening task
A classification approach to listening effort: combining features from the pupil and cardiovascular system
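A k-nearest-neighbour classifier of the kind trained above can be sketched in a few lines. The feature vectors and labels here are invented placeholders (e.g. a pupil-dilation measure and a heart-rate measure), not the study's data.

```python
import math
from collections import Counter

# Minimal k-nearest-neighbour classifier; each feature vector could hold,
# e.g., pupil dilation and heart rate (values below are invented).
def knn_predict(train_X, train_y, x, k=3):
    # Sort training points by Euclidean distance to the query.
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    # Majority vote among the k nearest neighbours.
    votes = Counter(yi for _, yi in dists[:k])
    return votes.most_common(1)[0][0]

X = [[0.2, 60], [0.25, 62], [0.8, 90], [0.85, 95], [0.9, 88]]
y = ["low effort", "low effort",
     "high effort", "high effort", "high effort"]
print(knn_predict(X, y, [0.3, 63]))  # -> low effort
```

With heterogeneous features such as these, each dimension would normally be standardised first so that the distance metric is not dominated by the larger-scaled feature.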
Figure 1. Stimulus spectra of standard (blue) and target (orange) vowel-like stimuli with ΔF2=2% and the corresponding model responses of AN fibers and IC band-enhanced cells (BE), which are excited by fluctuations within a band of modulation frequencies.
Formant-frequency difference limens in humans with normal hearing or mild sensorineural hearing loss were estimated based on models of the neural fluctuation profiles of neurons in the inferior colliculus.
Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss
VCN microcircuit structure of a pair of bushy cells. Green lines show the excitatory inputs originating from ANFs. Blue lines represent inhibitory inputs from DS cells while red lines are inhibitory inputs from TV cells. Yellow lines represent the gap junction connections between the bushy cells.
This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells, using biophysically detailed neural network models of bushy-cell microcircuits.
Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus
Top: SFIE model receiving input from a single frequency. Bottom: a cross-frequency model receiving an array of excitatory and inhibitory inputs from multiple frequencies. Right: model responses to two opposite directions of frequency chirp.
A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus
The model consists of a phenomenological model of acoustic stimulation (Bruce et al., 2018, green) and an integrate-and-fire model of electric stimulation (Joshi et al., 2017, orange). The model output is simulated spike times. Different coupling methods between both models were investigated (red).
We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation, which can be used to investigate peripheral electric-acoustic interaction.
We present an automated program for aligning stimulus-response phonemes collected from speech testing, in order to visualize speech perception errors in individuals with hearing loss.
Visualization of speech perception errors through phoneme alignment
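Stimulus-response phoneme alignment of this kind is typically done with dynamic programming. The sketch below is a generic Needleman-Wunsch-style aligner, not the authors' program; the phoneme symbols and scoring weights are illustrative assumptions.

```python
# Minimal stimulus-response phoneme alignment via dynamic programming
# (Needleman-Wunsch style); symbols and scores are illustrative only.
def align(stim, resp, gap=-1, match=1, mismatch=-1):
    n, m = len(stim), len(resp)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if stim[i - 1] == resp[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back to recover aligned pairs; '-' marks an insertion/deletion.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if stim[i - 1] == resp[j - 1] else mismatch):
            pairs.append((stim[i - 1], resp[j - 1])); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            pairs.append((stim[i - 1], "-")); i -= 1
        else:
            pairs.append(("-", resp[j - 1])); j -= 1
    return pairs[::-1]

# "cat" /k ae t/ heard with the final consonant deleted:
print(align(["k", "ae", "t"], ["k", "ae"]))
# -> [('k', 'k'), ('ae', 'ae'), ('t', '-')]
```

From such aligned pairs, substitution, insertion, and deletion counts per phoneme can be tallied and visualized as a confusion matrix.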
Average scores for age-matched controls (C:AM), high-performing, blast-exposed SMs (B:Hi) and low-performing, blast-exposed SMs (B:Lo) groups. Figure panels show results for tests used to classify subjects, clinical audiometry metrics, and selective behavioral and electrophysiological metrics.
Difficulties integrating binaural cues and understanding speech in noise among blast-exposed Service members with audiometric thresholds within clinical norms most likely reflect sensory rather than cognitive deficits, as indicated by a poorer signal-to-noise ratio in the neural encoding of sound in the peripheral auditory system.
Functional hearing and communication deficits (FHCD) in blast-exposed service members with normal to near-normal hearing thresholds
Average thresholds for amplitude modulation detection task (N = 11) at different modulation rates (left: 16 Hz, middle: 32 Hz, right: 64 Hz). Carrier frequency: 500 Hz.
Computer simulations support the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and impairs temporal envelope processing, and show that some tasks are more sensitive to this loss than others.
The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study
A) Latency of the frequency following response to natural speech estimated by the complex linear model. B) Magnitudes of the model coefficients at 11 ms. C) Phases of the model coefficients at 11 ms.
We introduce a new generative model of selective attention during cocktail party listening, and treat selective attention as an inference problem.
Using active inference to model selective attention during cocktail party listening
A: Flowchart of the "Meniere Calendar" application. B: Final psychometric functions fitted to data from a normal-hearing participant (left) and a Meniere's patient (right). C: Screenshots of the application's environment.
This study developed an application that periodically tracks the course of Meniere's disease in patients through automated, binaural audiometric tests.
Systematic monitoring of Meniere’s disease: A smartphone-based approach for the periodical assessment of audiometric measures and fluctuating symptoms
Grand averaged auditory cortical evoked potential waveforms of the groups (Control/Blue; Tinnitus/Orange; Hyperacusis/Purple) generated by 500 Hz and 2000 Hz Narrow-Band Noise (NBN) stimuli across intensities.
The hyperacusis group differs significantly from the control and tinnitus groups in behavioral tests, but not in electrophysiological tests.
Behavioral and electrophysiological evaluation of loudness growth in clinically normal hearing tinnitus patients with and without hyperacusis
Topics found with Latent Dirichlet Allocation on 100k Reddit posts. The top rectangle of each block gives the topic's name, assigned from the algorithm's output: the lemmatised words shown in the middle rectangle. The lower rectangle displays the proportion of messages that mention the topic.
Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
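The topic-modeling step can be illustrated with a toy collapsed Gibbs sampler for LDA. The corpus, topic count, and hyperparameters below are invented for illustration; a real analysis of 100k posts would use an optimized library implementation.

```python
import random
from collections import defaultdict

# Toy collapsed Gibbs sampler for Latent Dirichlet Allocation.
def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = random.Random(seed)
    vocab = {w for d in docs for w in d}
    V = len(vocab)
    n_dk = [[0] * n_topics for _ in docs]               # doc-topic counts
    n_kw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    n_k = [0] * n_topics
    z = []                                              # topic of each token
    for di, d in enumerate(docs):                       # random initialisation
        zd = []
        for w in d:
            k = rng.randrange(n_topics)
            zd.append(k); n_dk[di][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
        z.append(zd)
    for _ in range(iters):                              # Gibbs sweeps
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]                           # remove token's count
                n_dk[di][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                weights = [(n_dk[di][t] + alpha) *
                           (n_kw[t][w] + beta) / (n_k[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]  # resample
                z[di][wi] = k
                n_dk[di][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return n_kw

# Invented mini-corpus of lemmatised "posts":
docs = [["tinnitus", "ringing", "ear"], ["sleep", "night", "tinnitus"],
        ["ringing", "ear", "noise"], ["sleep", "anxiety", "night"]]
topics = lda_gibbs(docs)
for t in topics:  # top words characterise each inferred topic
    print(sorted(t, key=t.get, reverse=True)[:2])
```

The per-topic word counts returned here correspond to the lemmatised word lists in the figure's middle rectangles, and the document-topic counts (not returned in this sketch) to the message proportions in the lower rectangles.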
An RCT undertaken in a US population indicated the value of an internet-based CBT intervention both for reducing tinnitus distress and its comorbidities and for managing the anxiety associated with the pandemic.
Outcomes and experiences of delivering an internet-based intervention for tinnitus during the COVID-19 pandemic