Why Computational Audiology?

The purpose of this online forum is to share knowledge about computational audiology. We hope to bring together experts from different disciplines, such as AI and audiology, to stimulate innovations for hearing-impaired people everywhere. We publish blog articles about current developments, highlight ongoing projects and publications by research groups, and aim to facilitate discussion. In addition, the website is used to promote events related to computational audiology.

Anybody can leave a comment or share content to publish on the forum, after providing a name and valid email address. Comments only appear on the website after approval by the moderators. Content can be rejected if it is not within the scope of the forum or considered disrespectful. The tone of the forum is respectful and pleasantly informal.

What is computational audiology?

Computational audiology is the augmentation of traditional hearing health care by digital methods, including artificial intelligence and machine learning.

News & Agenda

Donders Institute and Radboud University open a third ICAI lab.

The AI for Neurotech Lab aims to develop machine learning solutions for brain reading and writing, to restore sensory and cognitive functions. These solutions could include hearing tools for the deaf ...

Research Topic: Digital Hearing Healthcare

The Frontiers Research Topic “Digital Hearing Healthcare,” edited by Qinglin Meng, Jing Chen, Changxin Zhang, Dennis Barbour and Fan-Gang Zeng, is now open for submissions.

VCCA2020 on-demand

Register now to get access to the on-demand version of the VCCA2020 conference. A clean version of all recordings is available until November 1.

Blogs and projects about computational audiology

Below you will find blogs about computational audiology, as well as featured talks and presentations of ongoing projects submitted for the VCCA2020 conference. An overview of the VCCA2020 conference program is provided here. Further background can be found in the virtual goodiebag.

A system that predicts and identifies neural responses to overlapping speech sounds mimics human perception.
Diotic and antiphasic digits-in-noise to detect and classify types of hearing loss
A simple at-home self-check to screen for aberrant loudness growth in hearing aid and cochlear implant users
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
The Panoramic ECAP Method models patient-specific electrode-neuron interfaces in cochlear implant users, and may provide important information for optimizing efficacy
This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise
During the Musi-CI training, methods are developed for CI users, primarily to enhance music enjoyment and secondarily to improve perception
We developed a new, automated, language-independent speech-in-noise screening test and evaluated its performance in 150 subjects against the
How much audiological data is needed for convergence? One year!
A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.