8:00 (London), 9:00 (Amsterdam), 17:00 (Sydney), 3:00 (New York)


Tobias Goehring & Jan-Willem Wasmann


ZoomRoom A
Chair: Tobias Goehring

8:10 (London), 9:10 (Amsterdam), 17:10 (Sydney), 3:10 (New York)

50 MIN

KEYNOTE 1: Prof Brian CJ Moore – Time-efficient hearing tests and their use in the fitting of hearing aids
30 min

FEATURED 1: Dr Maartje Hendrikse – Virtual audiovisual environments for hearing aid evaluation (and fitting)
20 min

SWITCH-OVER BREAK >>> Join Parallel session 15 min


9:15 (London), 10:15 (Amsterdam), 18:15 (Sydney), 4:15 (New York)

90 MIN

ZoomRoom A
Chair P1: Karina de Sousa

ZoomRoom B
Chair P2: Joaquin Valderrama-Valenzuela

P1: Hearing loss detection, monitoring & prevalence

Detecting hearing loss from children’s speech using machine learning

Hearing test using smart speakers: Speech audiometry with Alexa

Evaluation of multivariate classification algorithms for hearing loss detection through a speech-in-noise test

Model-based selection of most informative diagnostic tests and test parameters

Examining the association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa

Prevalence statistics of hearing loss in adults: Harnessing spatial big data to estimate patterns and trends

P2: Listening effort, behaviour & intervention

A classification approach to listening effort: combining features from the pupil and cardiovascular system


Assessing listening effort, using EEG and pupillometry, in response to adverse listening conditions and memory load

Automatic detection of human activities from accelerometer sensors integrated in hearables


Sound-level monitoring earphones with smartphone feedback as an intervention to promote healthy listening behaviors in young adults

How variation in cochlear implant performance relates to differences in MAP parameters

Designing the BEARS (Both Ears) virtual reality training suite for improving spatial hearing abilities in teenage bilateral cochlear implantees

SWITCH-OVER BREAK >>> Join Main session 15 min



ZoomRoom A
Chair: Jan-Willem Wasmann

Discussion on devices: Alan Archer-Boyd
Discussion on services: Saima Rajasingam

11:00 (London), 12:00 (Amsterdam), 20:00 (Sydney), 6:00 (New York)

60 MIN

KEYNOTE 2: Prof Nicholas Lesica – Harnessing the power of AI to combat the global burden of hearing loss: opportunities and challenges
30 min

DISCUSSION 1 (Break-out room):

Hearing devices of the future

30 min
Overcoming barriers of stigma, logistics, costs and efficacy

DISCUSSION 2 (Break-out room):

Hearing services of the future

30 min
Ensuring wide and equitable access to hearing healthcare

LUNCH-BREAK /// Relax & Chat in Main session 60 min

12:00 (London), 13:00 (Amsterdam), 21:00 (Sydney), 7:00 (New York)



ZoomRoom A
Chair: Prof Waldo Nogueira

13:00 (London), 14:00 (Amsterdam), 22:00 (Sydney), 8:00 (New York)

120 MIN

Introduction by Prof Waldo Nogueira

FEATURED 2: Dr Raul Sanchez-Lopez – Hearing deficits and auditory profiling: data-driven approaches towards personalized audiology
20 min


FEATURED 3: Dr Niels Pontoppidan – Learning from audiological data collected in the lab and the real world
20 min

T1: Dr Rob Eikelboom
T2: Sarah Nee and Prof Michael Marschollek
T3: Dr Filiep Vanpoucke
T4: Dr Eugen Kludt
T5: Prof Rick Neitzel



COFFEE BREAK >>> Join Main session 30 min


ZoomRoom A
Chair: TBD

15:30 (London), 16:30 (Amsterdam), 0:30 (Sydney), 10:30 (New York)

50 MIN

KEYNOTE 3: Prof Josh McDermott – New models of human hearing via deep learning
30 min

FEATURED 4: Dr Josef Schlittenlacher – Machine learning for models of auditory perception
20 min

BREAK >>> Join Parallel session 10 min


16:30 (London), 17:30 (Amsterdam), 1:30 (Sydney), 11:30 (New York)

75 MIN

ZoomRoom A
Chair P1: TBD

ZoomRoom B
Chair P2: TBD

P3: Deep learning applications and models

Comparing phonemic information transmission with cochlear implants between human listeners and an end-to-end computational model of speech perception
Hearing-impaired artificial neural networks replicate speech recognition deficits of hearing-impaired humans
Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning
Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition
Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

P4: Interventions and diagnosis of tinnitus

Outcomes and experiences of delivering an internet-based intervention for tinnitus during the COVID-19 pandemic
A data-driven decision tree for diagnosing somatosensory tinnitus
What can we learn about tinnitus from social media posts?
Behavioral and electrophysiological evaluation of loudness growth in clinically normal hearing tinnitus patients with and without hyperacusis
Systematic monitoring of Meniere’s disease: A smartphone-based approach for the periodical assessment of audiometric measures and fluctuating symptoms

DINNER-BREAK /// Relax & chat >>> Join Main session 45 min

17:45 (London), 18:45 (Amsterdam), 2:45 (Sydney), 12:45 (New York)


ZoomRoom A
Chair: Alexander J Billig

18:30 (London), 19:30 (Amsterdam), 3:30 (Sydney), 13:30 (New York)

50 MIN

KEYNOTE 4: Prof Mounya Elhilali – Auditory salience
30 min

FEATURED 5: Dr Simone Graetzer – Clarity: machine learning challenges for improving hearing aid processing of speech in noise
20 min

SWITCH-OVER BREAK >>> Join Parallel session 10 min


19:30 (London), 20:30 (Amsterdam), 4:30 (Sydney), 14:30 (New York)

90 MIN

ZoomRoom A
Chair P5: Alexander J Billig

ZoomRoom B
Chair P6: Gerard Encina-Llamas

P5: Auditory attention and processes

Using active inference to model selective attention during cocktail party listening
Cortical tracking of a distractor speaker modulates the comprehension of a target speaker
Correlates of linguistic processing in the frequency following response to naturalistic speech
The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study
Functional hearing and communication deficits (FHCD) in blast-exposed service members with normal to near-normal hearing thresholds
Visualization of speech perception errors through phoneme alignment

P6: Computational auditory modelling

“Ear in the Clouds” – A web app supporting computational models for auditory-nerve and midbrain responses
Predicting fusion of dichotic vowels in normal hearing listeners with a physiologically-based model
A computational single-fiber model of electric-acoustic stimulation
A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus
Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus
Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss


21:00 (London), 22:00 (Amsterdam), 6:00 (Sydney), 16:00 (New York)

15 MIN


21:15 (London), 22:15 (Amsterdam), 6:15 (Sydney), 16:15 (New York)

90 MIN


We would like to take advantage of the virtual format of this conference by creating sessions that mix different talk and discussion formats.
'Bringing AI and Audiology together'
'The broader picture at the moment, in the world of telemedicine, is that 10 years progress has occurred in just a week'


The Virtual Conference on Computational Audiology is hosted and/or supported by: