Here you will find all abstracts submitted to VCCA2021:

Hearing test using smart speakers: Speech audiometry with Alexa
Illustration of the differences between the clinical test and speech audiometry at home, which can be performed with a smart speaker.

We present an Alexa skill that performs a speech-in-noise listening test with matrix sentences. The skill is evaluated with four subject groups and in three different acoustic conditions.
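The abstract does not specify the adaptive rule; matrix tests commonly estimate the speech recognition threshold (SRT) by adapting the SNR after each sentence. A minimal sketch, assuming a simple 1-down/1-up staircase (the skill's actual procedure may differ):

```python
def adaptive_track(respond, start_snr=0.0, step_db=2.0, n_trials=20):
    """Run a simple 1-down/1-up adaptive track over SNR (dB).

    respond(snr) -> True if the sentence was repeated correctly.
    Returns the list of presented SNRs; the later values hover
    around the 50%-correct point (the SRT).
    """
    snr = start_snr
    track = []
    for _ in range(n_trials):
        track.append(snr)
        if respond(snr):
            snr -= step_db   # correct -> make it harder (lower SNR)
        else:
            snr += step_db   # incorrect -> make it easier (higher SNR)
    return track

# Usage: a deterministic "listener" who is correct above -6 dB SNR.
track = adaptive_track(lambda snr: snr > -6.0, start_snr=4.0)
srt_estimate = sum(track[-8:]) / 8   # average over the final oscillation
```

In practice the track would be driven by the subject's spoken responses, and the step size would shrink after reversals; the fixed-step rule above only illustrates the convergence principle.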

Model-based selection of most informative diagnostic tests and test parameters
The goal is to find the model instance (folder from shelf y) that has the maximum likelihood to have generated the experimental data set d. The stimulus that leads to the smallest variance of parameter estimates is presented next. The process is repeated until the termination criterion is met.

The model should conduct the experiment because, at least in theory, it knows best which condition will be the most informative for confining its free parameters.
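The selection loop described in the caption can be sketched as Bayesian active testing over a grid of candidate model instances; everything below (the logistic observation model, the parameter grid, the variance criterion) is an illustrative assumption, not the authors' implementation:

```python
import math
import random

def psychometric(x, theta, slope=1.0):
    """Probability of a correct response at stimulus level x, threshold theta."""
    return 1.0 / (1.0 + math.exp(-slope * (x - theta)))

def expected_posterior_variance(x, thetas, posterior):
    """Expected variance of theta after presenting stimulus x, averaged over
    the two possible outcomes weighted by their predictive probabilities."""
    ev = 0.0
    for outcome in (1, 0):
        post = []
        for th, p in zip(thetas, posterior):
            lik = psychometric(x, th) if outcome == 1 else 1 - psychometric(x, th)
            post.append(p * lik)
        z = sum(post)                      # predictive probability of this outcome
        if z == 0:
            continue
        post = [p / z for p in post]
        mean = sum(th * p for th, p in zip(thetas, post))
        ev += z * sum((th - mean) ** 2 * p for th, p in zip(thetas, post))
    return ev

def run_adaptive(true_theta, stimuli, thetas, n_trials=25, seed=1):
    rng = random.Random(seed)
    posterior = [1.0 / len(thetas)] * len(thetas)   # flat prior over instances
    for _ in range(n_trials):
        # Present the stimulus expected to shrink the posterior variance most
        x = min(stimuli, key=lambda s: expected_posterior_variance(s, thetas, posterior))
        outcome = 1 if rng.random() < psychometric(x, true_theta) else 0
        lik = [psychometric(x, th) if outcome else 1 - psychometric(x, th) for th in thetas]
        posterior = [p * l for p, l in zip(posterior, lik)]
        z = sum(posterior)
        posterior = [p / z for p in posterior]
    return sum(th * p for th, p in zip(thetas, posterior))   # posterior mean

grid = [i * 0.5 for i in range(-10, 11)]
estimate = run_adaptive(2.0, stimuli=grid, thetas=grid)
```

The termination criterion from the caption would replace the fixed trial count, e.g. stopping once the posterior variance falls below a tolerance.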

Prevalence statistics of hearing loss in adults: Harnessing spatial big data to estimate patterns and trends
Fig.1 Map of England by Government Office Regions, showing prevalence rates of self-reported hearing loss in eight Waves of the English Longitudinal Study of Ageing (ELSA). This work by Dialechti Tsimpida is licensed under a Creative Commons Attribution 4.0 International License.

Harnessing spatial big data to estimate patterns and trends of hearing loss


Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

Transient noise reduction using a deep recurrent neural network improves subjective speech intelligibility and comfort.

Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition
(A) We process signals with a binaural processing stage. The signal is converted to phoneme probabilities using a DNN. The degradation of these is used to predict binaural SI. (B) Results for subjective data, our model (BAPSI) and baseline models in terms of speech recognition threshold (SRT).

In this study, we show that phoneme probabilities from a DNN can produce good estimates of speech intelligibility when combined with a blind binaural processing stage.

Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Estimated distortion component of hearing impairment from the model's prediction errors as a function of the average hearing loss.

Attenuation-component-based model predictions of speech recognition thresholds, such as those of FADE, appear to enable an estimation of the supra-threshold distortion component of hearing impairment.

Hearing-impaired artificial neural networks replicate speech recognition deficits of hearing-impaired humans
Deep neural networks were trained to recognize spoken words from simulated auditory nerve representations. We measured effects of simulated OHC loss (broader frequency tuning and elevated thresholds) and ANF loss (reduced fidelity of temporal coding) on network speech recognition in noise.

We developed a deep learning model of hearing loss by training artificial neural networks to recognize words in noise from simulated auditory nerve input.

Comparing phonemic information transmission with cochlear implants between human listeners and an end-to-end computational model of speech perception
A finite element model of the implanted cochlea, showing the voltage spread caused by stimulation of the implanted electrode.

A finite element model of a cochlea, a computational model of the auditory nerve, and an automatic speech recognition neural network were combined to replicate CI speech perception patterns.

Designing the BEARS (Both Ears) virtual reality training suite for improving spatial hearing abilities in teenage bilateral cochlear implantees
A teenage cochlear implant user trying one of the BEARS games at one of the participatory design sessions we organised during the past two years.

Teenagers with bilateral cochlear implants (CIs) often suffer from poor spatial hearing abilities; a set of multi-modal (audio-visual), multi-domain training tasks (localisation, spatial speech-in-noise, and spatial music) was designed with teenage CI users involved as co-creators throughout development.

How variation in cochlear implant performance relates to differences in MAP parameters
Graphic representation of how differences in PCA components are reflected in MAPs.

Statistical analysis of how fitting parameters relate to speech recognition scores finds meaningful differences between the highest- and lowest-scoring tertiles of recipients.

Sound-level monitoring earphones with smartphone feedback as an intervention to promote healthy listening behaviors in young adults
dbTrack (Westone) technology

In a first-of-its-kind study, we aimed to determine the accuracy and reliability of sound-level monitoring earphones and the effect of smartphone feedback as an intervention to encourage safe listening among young people.

Automatic detection of human activities from accelerometer sensors integrated in hearables
Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Results of a classification using a Naïve Bayes classifier and a 5-fold cross-validation procedure.

Using a Naive Bayes classifier, we showed that twelve different activities could be classified above chance.
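As a toy illustration of the classification step (the features and activity labels below are hypothetical, not the study's twelve activities), a Gaussian Naive Bayes classifier fits per-class feature statistics and picks the class with the highest log-posterior:

```python
import math
import random

def train_gnb(X, y):
    """Fit per-class feature means/variances plus class priors."""
    model = {}
    for c in set(y):
        rows = [x for x, lbl in zip(X, y) if lbl == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-6)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (n / len(y), means, varis)
    return model

def predict_gnb(model, x):
    """Return the class with the highest log-posterior under Gaussian likelihoods."""
    def logpost(c):
        prior, means, varis = model[c]
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return lp
    return max(model, key=logpost)

# Synthetic "accelerometer features" (e.g. mean magnitude, variance) for two
# hypothetical activities: walking (more movement) vs. sitting.
rng = random.Random(0)
X = [[rng.gauss(1.0, 0.1), rng.gauss(0.8, 0.1)] for _ in range(50)] + \
    [[rng.gauss(0.2, 0.1), rng.gauss(0.1, 0.1)] for _ in range(50)]
y = ["walking"] * 50 + ["sitting"] * 50
model = train_gnb(X, y)
accuracy = sum(predict_gnb(model, x) == lbl for x, lbl in zip(X, y)) / len(y)
```

The study's 5-fold cross-validation would wrap this train/predict pair in five train-test splits rather than scoring on the training data as done here for brevity.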

Assessing listening effort, using EEG and pupillometry, in response to adverse listening conditions and memory load.
Example trial of the auditory Sternberg paradigm. The upper row shows the screen presented to the participant during each phase. The lower row illustrates the sound presented to the participant during each phase.

Using EEG and pupillometry during an auditory digit working-memory task, this ongoing study investigates how effortful listening becomes when neurocognitive mechanisms must be engaged while listening to a speech signal distorted by a CI vocoder.

A classification approach to listening effort: combining features from the pupil and cardiovascular system
Schematic showing experimental set-up including the placement of the participant, loudspeakers and two observers.

We train k-nearest neighbor classifiers to predict intelligibility, social state, and participants' perception of a listening task.
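A minimal sketch of the k-nearest neighbor step (the feature names and labels below are hypothetical stand-ins for the study's pupil and cardiovascular features):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), lbl)
        for row, lbl in zip(train_X, train_y)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: [normalized pupil dilation, heart-rate feature]
train_X = [[0.20, 0.90], [0.30, 0.80], [0.80, 0.30],
           [0.90, 0.20], [0.25, 0.85], [0.85, 0.25]]
train_y = ["low effort", "low effort", "high effort",
           "high effort", "low effort", "high effort"]
label = knn_predict(train_X, train_y, [0.82, 0.28])
```

In the combined-features setting, each feature would typically be standardized first so that pupil and cardiovascular measures contribute on comparable scales.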

Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss
Figure 1. Stimulus spectra of standard (blue) and target (orange) vowel-like stimuli with ΔF2=2% and the corresponding model responses of AN fibers and IC band-enhanced cells (BE), which are excited by fluctuations within a band of modulation frequencies.

Formant-frequency difference limens in humans with normal hearing or mild sensorineural hearing loss were estimated using models of the neural fluctuation profiles of neurons in the inferior colliculus.

Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus
VCN microcircuit structure of a pair of bushy cells. Green lines show the excitatory inputs originating from ANFs. Blue lines represent inhibitory inputs from DS cells while red lines are inhibitory inputs from TV cells. Yellow lines represent the gap junction connections between the bushy cells.

This study explores the effects of inhibition and gap junctions on the synchrony enhancement seen in ventral cochlear nucleus bushy cells, using biophysically detailed neural network models of bushy-cell microcircuits.

A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus
Top: the SFIE model receiving input from a single frequency. Bottom: a cross-frequency model receiving an array of excitatory and inhibitory inputs from multiple frequencies. Right: model responses to two opposite directions of frequency chirp.

We present a computational model in which cross-frequency excitatory and inhibitory inputs to an SFIE-type model account for the sensitivity of inferior colliculus neurons to the direction of fast frequency chirps.

A computational single-fiber model of electric-acoustic stimulation
The model consists of a phenomenological model of acoustic stimulation (Bruce et al., 2018; green) and an integrate-and-fire model of electric stimulation (Joshi et al., 2017; orange). The model output is simulated spike times. Different methods of coupling the two models were investigated (red).

We present a computational model of auditory nerve fiber responses elicited by combined electric and acoustic stimulation which can be used to investigate peripheral electric-acoustic interaction.


“Ear in the Clouds” – A web app supporting computational models for auditory-nerve and midbrain responses

A cloud-based web app provides an accessible tool for simulation and visualization of population responses of model auditory-nerve and midbrain neurons.

Visualization of speech perception errors through phoneme alignment
Results for V1 with both cochlear implant (CI) and hearing aid (HA) (top), CI only (middle), and HA only (bottom), in response to a set of 30 sentences extracted from the Speech Banana auditory training app.

We present an automated program for aligning stimulus and response phonemes collected in speech testing, in order to visualize speech perception errors in individuals with hearing loss.
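Stimulus-response phoneme alignment of this kind is typically a global edit-distance alignment; a minimal sketch assuming unit substitution/indel costs (the program's actual scoring may differ):

```python
def align_phonemes(stim, resp, gap="-"):
    """Globally align stimulus and response phoneme sequences
    (Needleman-Wunsch with unit costs), so that substitutions,
    insertions and deletions can be tallied and visualized."""
    n, m = len(stim), len(resp)
    # DP table of edit costs
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if stim[i - 1] == resp[j - 1] else 1
            D[i][j] = min(D[i - 1][j - 1] + sub,
                          D[i - 1][j] + 1,      # deletion
                          D[i][j - 1] + 1)      # insertion
    # Backtrace to recover the aligned phoneme pairs
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           D[i][j] == D[i - 1][j - 1] + (0 if stim[i - 1] == resp[j - 1] else 1):
            pairs.append((stim[i - 1], resp[j - 1])); i -= 1; j -= 1
        elif i > 0 and D[i][j] == D[i - 1][j] + 1:
            pairs.append((stim[i - 1], gap)); i -= 1   # phoneme dropped
        else:
            pairs.append((gap, resp[j - 1])); j -= 1   # phoneme inserted
    return pairs[::-1]

# "cat" /k ae t/ heard as /k ae s/ (a final-consonant substitution):
alignment = align_phonemes(["k", "ae", "t"], ["k", "ae", "s"])
```

Each aligned pair then maps directly onto the visualization: matches, substitutions, and gap-paired phonemes (insertions/deletions) can be color-coded per sentence.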

Functional hearing and communication deficits (FHCD) in blast-exposed service members with normal to near-normal hearing thresholds
Average scores for age-matched controls (C:AM), high-performing, blast-exposed SMs (B:Hi) and low-performing, blast-exposed SMs (B:Lo) groups. Figure panels show results for tests used to classify subjects, clinical audiometry metrics, and selective behavioral and electrophysiological metrics.

Difficulties integrating binaural cues and understanding speech in noise among blast-exposed Service members with audiometric thresholds within clinical norms are most likely due to sensory, not cognitive, deficits, as indicated by a poorer signal-to-noise ratio in the neural encoding of sound in the peripheral auditory system.

The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study
Average thresholds in an amplitude modulation detection task (N = 11) at different modulation rates (left: 16 Hz; middle: 32 Hz; right: 64 Hz). Carrier frequency: 500 Hz.

Computer simulations support the hypothesis that cochlear synaptopathy selectively damages low-spontaneous-rate auditory nerve fibers and thereby degrades temporal envelope processing, and show that some tasks are more sensitive to this deficit than others.
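For reference, the sinusoidally amplitude-modulated (SAM) stimulus used in such detection tasks (a 500 Hz carrier with modulation rates of 16-64 Hz) can be generated as follows; duration, sampling rate, and depth below are illustrative choices:

```python
import math

def sam_tone(fc=500.0, fm=16.0, depth=1.0, dur=0.5, fs=16000):
    """Sinusoidally amplitude-modulated (SAM) tone: an fc-Hz carrier whose
    envelope fluctuates at the modulation rate fm with the given depth."""
    n = int(dur * fs)
    return [
        (1.0 + depth * math.sin(2 * math.pi * fm * t / fs))
        * math.sin(2 * math.pi * fc * t / fs)
        for t in range(n)
    ]

# 500 Hz carrier, 16 Hz modulation, full modulation depth:
signal = sam_tone(fc=500.0, fm=16.0, depth=1.0)
```

Detection thresholds are then measured by reducing `depth` until the modulated tone can no longer be distinguished from an unmodulated carrier.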

Correlates of linguistic processing in the frequency following response to naturalistic speech
A) Latency of the frequency following response to natural speech estimated by the complex linear model. B) Magnitudes of the model coefficients at 11 ms. C) Phases of the model coefficients at 11 ms.

Our findings suggest that the frequency following response tracking the fundamental frequency of voiced speech plays an active role in the rapid and continuous processing of spoken language.

Systematic monitoring of Meniere’s disease: A smartphone-based approach for the periodical assessment of audiometric measures and fluctuating symptoms
A) Flowchart of the "Meniere Calendar" application, B) Final psychometric function fitted to data from a normal hearing participant (left) and a Meniere's patient (right). C) Screenshots of the application's environment

This study developed application software that periodically tracks the course of Meniere's disease in patients through automated, binaural audiometric tests.
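A psychometric function like the one in panel B is commonly fitted by maximum likelihood; a minimal grid-search sketch with illustrative detection data (not the app's implementation):

```python
import math

def fit_psychometric(levels, n_correct, n_trials):
    """Maximum-likelihood fit of a logistic psychometric function
    p(level) = 1 / (1 + exp(-slope * (level - threshold)))
    by a simple grid search over threshold and slope."""
    best, best_ll = None, -math.inf
    for thr10 in range(0, 801):            # thresholds 0..80 dB in 0.1 dB steps
        thr = thr10 / 10.0
        for slope10 in range(1, 31):       # slopes 0.1..3.0 per dB
            slope = slope10 / 10.0
            ll = 0.0
            for x, k, n in zip(levels, n_correct, n_trials):
                p = 1.0 / (1.0 + math.exp(-slope * (x - thr)))
                p = min(max(p, 1e-9), 1 - 1e-9)        # guard log(0)
                ll += k * math.log(p) + (n - k) * math.log(1 - p)
            if ll > best_ll:
                best, best_ll = (thr, slope), ll
    return best

# Hypothetical detection data around a 40 dB threshold:
levels    = [30, 35, 40, 45, 50]
n_correct = [1, 3, 5, 9, 10]
n_trials  = [10, 10, 10, 10, 10]
threshold, slope = fit_psychometric(levels, n_correct, n_trials)
```

A production fit would use a continuous optimizer and add lapse/guess parameters, but the grid search keeps the likelihood machinery explicit.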

Behavioral and electrophysiological evaluation of loudness growth in clinically normal hearing tinnitus patients with and without hyperacusis
Grand averaged auditory cortical evoked potential waveforms of the groups (Control/Blue; Tinnitus/Orange; Hyperacusis/Purple) generated by 500 Hz and 2000 Hz Narrow-Band Noise (NBN) stimuli across intensities.

The hyperacusis group differs significantly from the control and tinnitus groups in behavioral tests, but not in electrophysiological tests.

What can we learn about tinnitus from social media posts?
Topics found with Latent Dirichlet Allocation on 100k Reddit posts. The top rectangle of each block is the topic's name assessed through the algorithm's output: the lemmatised words shown in the middle rectangle. The proportion of messages that mention the topic is displayed in the lower rectangle.

Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
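The abstract does not give implementation details; as an illustration of the technique, a compact collapsed Gibbs sampler for LDA applied to toy "posts" (all data hypothetical, pure Python):

```python
import random

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA on tokenised documents.
    Returns the vocabulary and the topic-word count table, from which
    each topic's top words can be read off."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    widx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # Count tables: topic-word, doc-topic, topic totals, token assignments
    nkw = [[0] * V for _ in range(n_topics)]
    ndk = [[0] * n_topics for _ in docs]
    nk = [0] * n_topics
    z = []
    for d, doc in enumerate(docs):          # random initial assignments
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            nkw[k][widx[w]] += 1; ndk[d][k] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                  # remove this token's counts
                nkw[k][widx[w]] -= 1; ndk[d][k] -= 1; nk[k] -= 1
                # Resample a topic proportional to the collapsed posterior
                weights = [(ndk[d][t] + alpha) * (nkw[t][widx[w]] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k                  # restore counts for new topic
                nkw[k][widx[w]] += 1; ndk[d][k] += 1; nk[k] += 1
    return vocab, nkw

docs = [
    ["tinnitus", "ringing", "sleep", "ringing"],
    ["ringing", "sleep", "noise", "tinnitus"],
    ["hearing", "aid", "masking", "hearing"],
    ["masking", "hearing", "aid", "noise"],
]
vocab, topic_word = lda_gibbs(docs, n_topics=2)
```

At the study's scale (100k posts), an optimized library implementation with lemmatisation and stop-word removal would replace this sketch, but the sampler shows what the topic/word rectangles in the figure summarize.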

A data-driven decision tree for diagnosing somatosensory tinnitus
Overview of the decision tree to diagnose somatosensory tinnitus

Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
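As an illustration of learning such a tree from survey answers (the questions, answers, and labels below are hypothetical, not the study's diagnostic criteria), a minimal information-gain tree learner:

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)) if c)

def build_tree(X, y, features):
    """Recursively grow a decision tree on yes/no survey answers
    by choosing the question with the highest information gain."""
    if len(set(y)) == 1:
        return y[0]                       # pure leaf
    if not features:
        return max(set(y), key=y.count)   # majority leaf
    def gain(f):
        yes = [lbl for x, lbl in zip(X, y) if x[f]]
        no = [lbl for x, lbl in zip(X, y) if not x[f]]
        rem = (len(yes) * entropy(yes) + len(no) * entropy(no)) / len(y)
        return entropy(y) - rem
    f = max(features, key=gain)
    rest = [g for g in features if g != f]
    yes_split = [(x, lbl) for x, lbl in zip(X, y) if x[f]]
    no_split = [(x, lbl) for x, lbl in zip(X, y) if not x[f]]
    if not yes_split or not no_split:
        return max(set(y), key=y.count)
    return (f, build_tree(*zip(*yes_split), rest), build_tree(*zip(*no_split), rest))

def classify(tree, x):
    """Follow yes/no branches until a leaf label is reached."""
    while isinstance(tree, tuple):
        f, yes_branch, no_branch = tree
        tree = yes_branch if x[f] else no_branch
    return tree

# Hypothetical survey answers (feature names are illustrative only):
X = [
    {"neck_pain": True,  "can_modulate": True},
    {"neck_pain": True,  "can_modulate": True},
    {"neck_pain": False, "can_modulate": True},
    {"neck_pain": False, "can_modulate": False},
    {"neck_pain": True,  "can_modulate": False},
    {"neck_pain": False, "can_modulate": False},
]
y = ["somatosensory", "somatosensory", "somatosensory", "other", "other", "other"]
tree = build_tree(X, y, ["neck_pain", "can_modulate"])
prediction = classify(tree, {"neck_pain": True, "can_modulate": True})
```

On real survey data one would also prune the tree and validate on held-out patients, which is where the reported 80%+ accuracy figure would come from.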


Examining the association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa

Outcomes and experiences of delivering an internet-based intervention for tinnitus during the COVID-19 pandemic
Outline of the RCT

An RCT undertaken in a US population indicated the value of an internet-based CBT intervention for reducing tinnitus distress and its comorbidities, and for managing the anxiety associated with the pandemic.
