Searching auditory phenotypes beyond audiometry from a large clinical dataset

We present a preliminary analysis of a large clinical auditory dataset collected at Rigshospitalet University Hospital in Copenhagen (Denmark) between 1995 and 2022. The dataset contains about 300,000 audiometric records, including pure-tone audiometry, speech audiometry, and acoustic reflex thresholds, which we cluster with a Gaussian-mixture-model algorithm to search for novel auditory phenotypes.
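
As a rough illustration of the clustering step, the sketch below fits Gaussian mixture models to synthetic audiometric feature vectors and selects the number of components by BIC. The feature layout, preprocessing, and model-selection criterion are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative GMM clustering of audiometric profiles (not the authors' pipeline).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for the real records: pure-tone thresholds (dB HL) at six
# frequencies plus a speech score and an acoustic reflex threshold.
X = rng.normal(loc=[20, 25, 30, 40, 50, 55, 80, 90], scale=10, size=(1000, 8))
X_scaled = StandardScaler().fit_transform(X)

# Fit mixtures with 2-8 components and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(X_scaled)
          for k in range(2, 9)]
best = min(models, key=lambda m: m.bic(X_scaled))

labels = best.predict(X_scaled)   # candidate phenotype label per record
print(best.n_components, np.bincount(labels))
```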

Developing a Novel Hearing Aid Framework Using Machine Learning and a Model of the Impaired Cochlea

This project proposes a hearing aid framework based on an Artificial Neural Network (ANN) and the Cascade of Asymmetric Resonators with Fast-Acting Compression (CARFAC) model.

Developing novel electrical stimulation strategies for cochlear implant users based on a model of the healthy human cochlea

This paper shows that a novel electrical stimulation strategy based on a model of the healthy cochlea can produce neurograms that resemble the simulated neural activity of a healthy cochlea more closely than those produced by a current cochlear implant coding algorithm (ACE).
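
To make the comparison concrete, here is a hedged sketch of scoring how closely a strategy's neurogram matches a normal-hearing reference. Plain Pearson correlation over the time-frequency matrix stands in for whatever similarity measure the paper actually uses (for instance a neurogram similarity index), and all arrays are synthetic.

```python
# Toy neurogram comparison: higher correlation = closer to the healthy reference.
import numpy as np

def neurogram_similarity(neurogram, reference):
    """Pearson correlation between two (channels x time) firing-rate matrices."""
    return np.corrcoef(neurogram.ravel(), reference.ravel())[0, 1]

rng = np.random.default_rng(0)
reference = rng.random((64, 200))                    # healthy-cochlea neurogram
candidate = reference + 0.2 * rng.random((64, 200))  # new strategy (closer match)
baseline = rng.random((64, 200))                     # stand-in for an ACE-like output

print(neurogram_similarity(candidate, reference),
      neurogram_similarity(baseline, reference))
```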

Using CARFAC-JAX, a fast, differentiable model of the human cochlea, to efficiently fit personalized hearing loss

When implemented in JAX, CARFAC not only computes quickly but can also be differentiated with respect to its parameters. This enables fast and efficient fitting of personalized models of hearing impairment.
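
The sketch below illustrates the general workflow this enables: differentiate a loss through a cochlear model and update its parameters by gradient descent. It uses a hypothetical toy_cochlea stand-in rather than the actual CARFAC-JAX API, so only the pattern, not the interface, should be taken from it.

```python
# Toy gradient-based fitting of hearing-loss parameters with JAX.
# `toy_cochlea` is a hypothetical stand-in for a differentiable cochlear
# model such as CARFAC-JAX; it is not that library's actual API.
import jax
import jax.numpy as jnp

def toy_cochlea(signal, params):
    # Hypothetical model: per-channel gain followed by power-law compression.
    channels = jnp.abs(jnp.outer(params["gain"], signal)) + 1e-6
    return channels ** params["exp"][:, None]

def loss(params, signal, target_response):
    # Match the model's output to a listener-specific target response.
    return jnp.mean((toy_cochlea(signal, params) - target_response) ** 2)

signal = jnp.sin(jnp.linspace(0.0, 100.0, 1600))
true_params = {"gain": jnp.array([0.5, 0.8, 1.2]),
               "exp": jnp.array([0.3, 0.4, 0.5])}
target = toy_cochlea(signal, true_params)

# Fit from a generic starting point by plain gradient descent.
params = {"gain": jnp.ones(3), "exp": jnp.full(3, 0.5)}
grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):
    grads = grad_fn(params, signal, target)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

print(loss(params, signal, target))
```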

An Automated Digits-In-Noise Hearing Test Using Automatic Speech Recognition and Text-To-Speech: A Proof-of-Concept Study

The study shows that digits-in-noise tests work reasonably well when Automatic Speech Recognition and Text-To-Speech are used in place of a conventional user interface and pre-recorded stimuli.
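
A hedged sketch of the adaptive logic such a test could use is shown below. The TTS presentation and ASR scoring are replaced by a simulated listener so the example runs stand-alone, and the 1-up/1-down, 2 dB staircase is a common digits-in-noise procedure rather than necessarily the one used in this study.

```python
# Sketch of an adaptive digits-in-noise (DIN) run.  In the real test,
# text-to-speech renders the digit triplet in noise and automatic speech
# recognition scores the spoken response; both stages are simulated here.
import math
import random

def simulated_trial(snr_db, srt_db=-8.0, slope=1.0):
    """Stand-in for TTS presentation + ASR scoring: True if the (simulated)
    listener repeats the whole triplet correctly at this SNR."""
    p_correct = 1.0 / (1.0 + math.exp(-(snr_db - srt_db) * slope))
    return random.random() < p_correct

def run_din_test(n_trials=24, start_snr_db=0.0, step_db=2.0):
    snr, track = start_snr_db, []
    for _ in range(n_trials):
        correct = simulated_trial(snr)
        track.append(snr)
        snr += -step_db if correct else step_db   # 1-up/1-down SNR staircase
    return sum(track[4:]) / len(track[4:])        # SRT estimate (skip warm-up)

print(f"estimated SRT: {run_din_test():.1f} dB SNR")
```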

Perception of the NAO Robot via Self-Assessments and Behavioural Analyses When Conducting Audiological Tests

An evaluation of human-robot interaction during various psychophysical auditory tests showed that participants enjoyed and favoured the humanoid robot more than the standard computer interface.

Curating routinely collected hearing-health data to facilitate the equitable re-use of NHS data for translational research using the OMOP framework

The National Institute for Health and Care Research (NIHR) Health Informatics Collaborative (HIC) for Hearing Health has developed a pipeline that allows hospitals to standardise patient audiogram data, making it easier to combine these data in a research database.
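
For orientation, the sketch below maps a single pure-tone threshold into an OMOP CDM MEASUREMENT-style row. The field names follow the OMOP MEASUREMENT table, but the concept IDs and lookup are placeholders, not the vocabulary the NIHR HIC pipeline actually uses.

```python
# Illustrative mapping of one audiogram point into an OMOP MEASUREMENT-style row.
# Field names follow the OMOP CDM; all concept IDs below are placeholders.
from datetime import date

# Hypothetical lookup: (frequency in Hz, ear) -> standard concept ID.
AUDIOGRAM_CONCEPTS = {(1000, "right"): 40480000, (1000, "left"): 40480001}
DB_HL_UNIT_CONCEPT = 44777000   # placeholder unit concept for dB HL
EHR_TYPE_CONCEPT = 32817        # placeholder measurement-type concept

def audiogram_point_to_measurement(person_id, freq_hz, ear, threshold_db,
                                   measured_on):
    return {
        "person_id": person_id,
        "measurement_concept_id": AUDIOGRAM_CONCEPTS[(freq_hz, ear)],
        "measurement_date": measured_on.isoformat(),
        "value_as_number": threshold_db,
        "unit_concept_id": DB_HL_UNIT_CONCEPT,
        "measurement_type_concept_id": EHR_TYPE_CONCEPT,
    }

print(audiogram_point_to_measurement(123, 1000, "right", 35, date(2024, 5, 1)))
```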

Big Data Analysis on Coupling Selection and Model Development of a Data Driven Dome Proposer

We study a neural-network-based approach to acoustic coupling selection in hearing devices that significantly enhances user acceptance by incorporating previously overlooked factors such as user experience and contralateral hearing loss.
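
As a minimal sketch of what a data-driven dome proposer could look like, the example below trains a small neural-network classifier on synthetic fitting features that include prior user experience and contralateral hearing loss. The feature set, class labels, and data are illustrative assumptions, not the study's model.

```python
# Toy dome/coupling proposer: an MLP over synthetic fitting features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Features: low/high-frequency PTA (dB HL), prior hearing-aid experience (0/1),
# contralateral PTA (dB HL) -- illustrative choices only.
X = np.column_stack([
    rng.uniform(10, 70, n),
    rng.uniform(20, 90, n),
    rng.integers(0, 2, n),
    rng.uniform(10, 90, n),
])
# Toy labelling rule: 0 = open dome, 1 = closed dome, 2 = custom mould.
y = np.digitize(X[:, 0] + 0.3 * X[:, 3], [45, 75])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0)).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```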

From business need to model monitoring: Optimizing manufacturing of custom hearing aids with machine learning

This presentation will introduce a framework for developing and deploying cloud-based machine learning models, used in a project aimed at optimizing the manufacturing of custom (in-the-ear style) hearing aids through improved detection of electroacoustic test failures caused by feedback.

Big data and data standards in audiology

The special session aims to assess the state of the art of data standards available in audiology and to discuss their potential, the associated challenges, and the steps required for successful standardization. In the featured talk, we will introduce the openEHR approach and discuss input from the VCCA community, collected in a survey conducted beforehand. A panel discussion will bring together experts offering perspectives from audiology, data science/artificial intelligence, and database infrastructure.

Comparative Analysis of EEG-Based Sound Location Decoding between Real and Virtual Listening

This study compares EEG-based decoding of sound source locations between real (free-field) and virtual (non-individual HRTF) listening environments, finding differences in decoding accuracy and latency that suggest weaker neural representations and processing delays in certain virtual settings.
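
A toy sketch of the decoding step is given below: a cross-validated linear classifier applied to flattened EEG epochs with an injected location-dependent pattern. The synthetic data and classifier choice are illustrative, not the study's analysis pipeline.

```python
# Toy EEG sound-location decoding with a cross-validated linear classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times, n_locations = 400, 32, 50, 4

y = rng.integers(0, n_locations, n_epochs)          # source-location labels
X = rng.normal(size=(n_epochs, n_channels, n_times))
# Inject a weak location-dependent pattern so decoding is above chance.
pattern = rng.normal(size=(n_locations, n_channels, n_times))
X += 0.3 * pattern[y]

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X.reshape(n_epochs, -1), y, cv=5)
print("decoding accuracy: %.2f (chance = %.2f)" % (scores.mean(), 1 / n_locations))
```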
