Why Computational Audiology?

The purpose of this online forum is to share knowledge about computational audiology. We hope to bring together experts from different disciplines, such as AI and audiology, in order to stimulate innovations for hearing-impaired people everywhere. We publish blog articles about current developments, highlight ongoing projects and publications by research groups, and aim to facilitate discussion. In addition, the website is used to promote events related to computational audiology.

Anybody can leave a comment or share content to publish on the forum after providing a name and a valid email address. Comments appear on the website only after approval by the moderators. Content may be rejected if it falls outside the scope of the forum or is considered disrespectful. The tone of the forum is respectful and pleasantly informal.

What is computational audiology?

Computational audiology is the augmentation of traditional hearing health care by digital methods, including artificial intelligence and machine learning.

News & Agenda

Donders Institute and Radboud University open a third ICAI lab.

The AI for Neurotech Lab aims to develop machine learning solutions for brain reading and writing, to restore sensory and cognitive functions. These solutions could include hearing tools for the deaf ...

Research Topic: Digital Hearing Healthcare

The Frontiers Research Topic “Digital Hearing Healthcare,” edited by Qinglin Meng, Jing Chen, Changxin Zhang, Dennis Barbour and Fan-Gang Zeng, is now open for submissions.

VCCA2020 on-demand

Register now to get access to the on-demand version of the VCCA2020 conference. A clean version of all recordings is available until November 1.

Blogs and projects about computational audiology

Below you will find blogs about computational audiology, as well as featured talks and presentations of ongoing projects submitted to the VCCA2020 conference. An overview of the VCCA2020 conference program is provided here. Further background can be found in the virtual goodie bag.

A random forest classifier can predict response to high-definition transcranial direct current stimulation treatment for tinnitus with 82.41% accuracy.
The computational model consists of three main parts (auditory nerve, inferior colliculus and cochlear nuclei). The figure shows the input …
Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.
A machine learning model is trained on real-world fitting data to predict the user's individual gain based on audiometric and …
This work presents a CASA model of attentive voice tracking.
Computational modelling allowed us to explore the effects of non-invasive brain stimulation on cortical processing of speech.
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test development procedure for digit-in-noise …
When listening to speech, oscillatory activity in the auditory cortex entrains to the amplitude fluctuations. The entrainment can be influenced …
Adaptation of the auditory nerve to electrical stimulation can best be described by a power law or a sum of …
Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine …
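One of the excerpts above mentions that auditory-nerve adaptation can be described by a power law. As a minimal illustrative sketch, not the actual model from that project, and with all data and parameter values invented, a power-law decay y = a · t^(−b) can be fitted by linear regression in log-log space:

```python
import numpy as np

def fit_power_law(t, y):
    """Fit y = a * t**(-b) by linear regression in log-log coordinates."""
    log_t, log_y = np.log(t), np.log(y)
    slope, intercept = np.polyfit(log_t, log_y, 1)
    return np.exp(intercept), -slope  # estimates of a and b

# Synthetic "adaptation" data generated from a known power law
# (purely hypothetical values, for demonstration only).
t = np.linspace(1.0, 100.0, 50)
a_true, b_true = 2.0, 0.5
y = a_true * t ** (-b_true)

a_hat, b_hat = fit_power_law(t, y)
print(round(a_hat, 3), round(b_hat, 3))  # recovers a=2.0, b=0.5
```

On noiseless data the log-log fit recovers the parameters exactly; with real neural recordings one would compare this fit against alternatives (such as a sum of exponentials) using goodness-of-fit measures.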