Session 1 / Sydney


MS teams Sydney A
Chair: Jessica Monaghan

8:00 (Sydney), 0:00 (Pretoria / Amsterdam),  15:00  (California)

15 MIN


MS teams Sydney A
Chair: Jessica Monaghan

8:15  (Sydney), 0:15  (Pretoria / Amsterdam),  15:15  (California)

60 MIN

KEYNOTE 1: Prof Malcolm Slaney
Machine Learning for Audition
40 min

FEATURED 1: Dr Mohsen Imani
Brain-inspired Hyperdimensional Computing for Real-time and Robust Health Monitoring
20 min

BREAK >>> 15 min

Special Session A

MS teams Sydney A
Chair: Viji Easwar

9:30 (Sydney), 1:30 (Pretoria / Amsterdam),  16:30 (California)

60 MIN

A: Emerging trends for paediatric challenges

09:30-09:45 (Sydney), 01:30-01:45 (Pretoria / Amsterdam), 16:30-16:45 (California)
A1 Patrick Wong: Predicting Language Outcomes through Neural Speech Encoding

09:45-10:00 (Sydney), 01:45-02:00 (Pretoria / Amsterdam), 16:45-17:00 (California)
A2 Al-Rahim Habib: Artificial intelligence to triage ear disease in rural and remote areas

10:00-10:15 (Sydney), 02:00-02:15 (Pretoria / Amsterdam), 17:00-17:15 (California)
A3 Collette McKay: Using fNIRS to Evaluate Infant Hearing

10:15-10:30 (Sydney), 02:15-02:30 (Pretoria / Amsterdam), 17:15-17:30 (California)
A4 Viji Easwar: Using EEG to Assess Speech Audibility with Hearing Aids


BREAK >>> 30 min

Special Session B

MS teams Sydney A
Chair: Nicky Chong-White

11:00 (Sydney), 3:00 (Pretoria / Amsterdam),  18:00 (California)

60 MIN

B: Inclusive Design and Assistive Technology

11:00-11:15 (Sydney), 03:00-03:15 (Pretoria / Amsterdam), 18:00-18:15 (California)
B1 Andrew Bellavia: Auracast: When Designing For Everyone is Designing For Accessibility

11:15-11:30 (Sydney), 03:15-03:30 (Pretoria / Amsterdam), 18:15-18:30 (California)
B2 Nicky Chong-White: Enhancing Communication with Live Captioning and Apple AirPods Pro

11:30-11:45 (Sydney), 03:30-03:45 (Pretoria / Amsterdam), 18:30-18:45 (California)
B3 Cassie Hames: See Me: Transforming User Experience on Public Transport for Everyone

11:45-12:00 (Sydney), 03:45-04:00 (Pretoria / Amsterdam), 18:45-19:00 (California)
B4 Jessica Korte: How to Engage Culturally Deaf People with Assistive Technology

Lunch BREAK >>> 60 min

Contributed talks C1:

Innovations in Auditory Perception and Speech Understanding

MS teams Sydney A
Chair: Arun Sebastian

13:00 (Sydney), 5:00 (Pretoria / Amsterdam),  20:00 (California)

60 MIN

C1: Innovations in Auditory Perception and Speech Understanding

13:00-13:12 (Sydney), 05:00-05:12 (Pretoria / Amsterdam), 20:00-20:12 (California)
C1-1 Shangqiguo Wang: Distinguishing difficulties in speech understanding due to hearing loss and cognitive decline

13:12-13:24 (Sydney), 05:12-05:24 (Pretoria / Amsterdam), 20:12-20:24 (California)
C1-2 David Meng: Neural tracking of linguistic information as a measure of speech understanding in noise

13:24-13:36 (Sydney), 05:24-05:36 (Pretoria / Amsterdam), 20:24-20:36 (California)
C1-3 Kumar Seluakumaran: Measuring frequency selectivity (FS) in normal-hearing and mild sensorineural hearing loss clinical subjects using FS audiogram

13:36-13:48 (Sydney), 05:36-05:48 (Pretoria / Amsterdam), 20:36-20:48 (California)
C1-4 Mathew Croteau: Enhancing reliable monaural cues for sound lateralisation using CROS

Continental BREAK >>> Switch from Sydney to Pretoria session 150 min


Intro & Kick-off

MS teams Pretoria A
Chair: Karina de Sousa

 08:30 (Pretoria / Amsterdam), 16:30 (Sydney),  23:30 (California)

15 MIN


MS teams Pretoria A
Chair: Karina de Sousa

 08:45 (Pretoria / Amsterdam), 16:45 (Sydney),  23:45 (California)

60 MIN

KEYNOTE 2: Prof Sarah Verhulst
Personalised and neural-network-based closed-loop systems for augmented hearing
40 min

More details on the presented models are found here:

FEATURED 2: Luke Meyer
Humanoid Robot as an Audiological Interface?
20 min

BREAK >>> Join Parallel Special Sessions C & D 15 min

Special Session C 

10:00 (Pretoria / Amsterdam), 18:00 (Sydney),  1:00 (California)

60 MIN

MS teams Pretoria A
Chair: Trevor Cox 

Special Session C: Cadenza Challenge: Improving music for those with a hearing loss

Scott Bannister1, Alinka E. Greasley1, Gerardo Roa Dabike2, Trevor J. Cox2, Bruno M. Fazenda2, Rebecca R. Vos2, Simone Graetzer2, Jennifer L. Firth3, William M. Whitmer3, Michael A. Akeroyd3, Jon P. Barker4

1School of Music, University of Leeds, UK; 2Acoustics Research Centre, University of Salford, UK; 3School of Medicine, University of Nottingham, UK; 4Department of Computer Science, University of Sheffield, UK

How can we process and remix music so that it sounds better for those with a hearing loss? The Cadenza project is advancing our understanding of what music personalised for someone with a hearing loss should sound like. We have been running a sensory panel with hearing aid users to develop metrics of music audio quality. At VCCA2023, we will demonstrate a listening test using the scales arising from this panel. We are also running machine learning challenges to catalyse new signal processing that improves listening experiences for hearing aids and consumer devices. We will outline one of the live challenges and demonstrate the baseline software system. Input from discussions at VCCA will help shape future work in Cadenza.

BREAK >>> Join Parallel sessions 30 min

PARALLEL Sessions C2 & C3

11:30 (Pretoria / Amsterdam), 19:30 (Sydney),  2:30 (California)

60 MIN

MS teams Pretoria A
Chair: Tobias Goehring

MS teams Pretoria B
Chair: Gloria A Araiza Illan

C2: Cochlear Implants and Related Technologies

C3: Auditory Skills and Perception

11:30-11:42 (Pretoria / Amsterdam), 19:30-19:42 (Sydney), 02:30-02:42 (California)
C2-1 Tom Gajecki: End-to-end deep denoising for cochlear implants
C3-1 David A Fabry: Enhancing Human Auditory Function with Artificial Intelligence and Listener Intent

11:42-11:54 (Pretoria / Amsterdam), 19:42-19:54 (Sydney), 02:42-02:54 (California)
C2-2 Francois Guerit: A “hilltop” approach for tackling channel interactions in Cochlear Implant users
C3-2 Iordanis Thoidis: Having choices for enhancing voices: Target speaker extraction in noisy multi-talker environments using deep neural networks

11:54-12:06 (Pretoria / Amsterdam), 19:54-20:06 (Sydney), 02:54-03:06 (California)
C2-3 Lelia A Erscoi: Gathering Ecological Data to Assess Real-life Benefits of Cochlear Implants
C3-3 Kars Tjepkema: Receiver-in-ear hearing aid comparison on music perception for hearing-impaired listeners

12:06-12:18 (Pretoria / Amsterdam), 20:06-20:18 (Sydney), 03:06-03:18 (California)
C2-4 Marieke M.W. ten Hoor: Associates of Talker Size in Cochlear Implant Stimulation
C3-4 Lana Biot: The acoustic change complex predictive validation (ACCEPT) study to predict speech perception

12:24-12:36 (Pretoria / Amsterdam), 20:24-20:36 (Sydney), 03:24-03:36 (California)
C2-5 Floris Rotteveel: Electric pitch perception with cochlear implants: Using real-life sounds to get back on the right track
C3-5 Juergen Otten: Cortical speech tracking with different lip synchronization algorithms in virtual environments

12:36-12:48 (Pretoria / Amsterdam), 20:36-20:48 (Sydney), 03:36-03:48 (California)
C2-6 Annesya Banerjee: Neural network models clarify the role of plasticity in cochlear implant outcomes
C3-6 Franklin Y Alvarez Cardinale: Objective Measurement of Speech Intelligibility with the Spike Activity Mutual Information Index (SAMII)

12:48-13:00 (Pretoria / Amsterdam), 20:48-21:00 (Sydney), 03:48-04:00 (California)
C2-7 Clément Gaultier: Recovering speech intelligibility for cochlear implants in noisy and reverberant situations using multi-microphone deep learning algorithms
C3-7 Alexis D Deighton MacIntyre: Cortical tracking of speech: Effects of intelligibility and spectral degradation

Lunch BREAK >>> 60 min

PARALLEL Sessions C4 & C5

14:00 (Pretoria / Amsterdam), 22:00 (Sydney),  5:00 (California)

60 MIN

MS teams Pretoria A
Chair: Charlotte Vercammen

MS teams Pretoria B
Chair: Alessia Pagliolonga

C4: Hearing Tests and Diagnostics

C5:  Innovations in Hearing Care

14:00-14:12 (Pretoria / Amsterdam), 22:00-22:12 (Sydney), 05:00-05:12 (California)
C4-1 Brian CJ Moore: Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks
C5-1 Amelie Hintermaier: PrimeHA: not a hearing aid

14:12-14:24 (Pretoria / Amsterdam), 22:12-22:24 (Sydney), 05:12-05:24 (California)
C4-2 Tahereh Afghah: Development of a novel self-assessment hearing loss and communication disability tool (HEAR-COMMAND Tool) based on the ICF standard
C5-2 Clara B Iplinsky: Hearing loops: old school for current communication issues

14:24-14:36 (Pretoria / Amsterdam), 22:24-22:36 (Sydney), 05:24-05:36 (California)
C4-3 Chen Xu: Smartphone-based hearing tests for a Virtual Hearing Clinic: Influence of ambient noise on the absolute threshold and loudness scaling at home
C5-3 Anna Josefine Munch Sørensen: Method for continuous evaluation of hearing aid user key situations in the field

14:36-14:48 (Pretoria / Amsterdam), 22:36-22:48 (Sydney), 05:36-05:48 (California)
C4-4 Caitlin Frisby: Smartphone-facilitated in-situ audiometry for community-based hearing testing
C5-4 Eva Koderman: MindAffect’s EEG based Tone Audiometry Diagnostic System

14:54-15:06 (Pretoria / Amsterdam), 22:54-23:06 (Sydney), 05:54-06:06 (California)
C4-5 Soner Türüdü: Comparing Digits-in-Noise Test Implementations on Various Platforms with Normal Hearing Individuals
C5-5 Sarah E Hughes: Embedding the patient voice in artificial intelligence for hearing care: the role of patient-reported outcomes

15:06-15:18 (Pretoria / Amsterdam), 23:06-23:18 (Sydney), 06:06-06:18 (California)
C4-6 Gloria A Araiza Illan: Automated speech audiometry for children using Kaldi-NL automatic speech recognition
C5-6 Giulia Angonese: Towards an objective measurement of individual listening preferences: Trait consistency and state specificity

15:18-15:30 (Pretoria / Amsterdam), 23:18-23:30 (Sydney), 06:18-06:30 (California)
C4-7 Maartje M. E. Hendrikse: Evaluation of a new VR-based hearing device fine-tuning procedure
C5-7 Divan du Plessis: mHealth-Supported Hearing Health Training for Early Childhood Development Practitioners: An Intervention Study

Continental BREAK >>> Switch from Pretoria to Irvine (California) session 120 min


Intro & Kick-off

MS teams Irvine A
Chair: Kaye Wang

8:30 (California) 17:30 (Pretoria / Amsterdam), 1:30 (Sydney)

15 MIN


MS teams Irvine A
Chair: Kaye Wang

8:45 (California) 17:45 (Pretoria / Amsterdam), 1:45 (Sydney)

60 MIN

KEYNOTE 3: Prof Deliang Wang 
Neural Spectrospatial Filter 
40 min

FEATURED 3:  Greta Tuckute 
Driving and suppressing the human language network using large language models
20 min

BREAK >>> Join Special Session 15 min

Special Session E

MS teams Irvine A
Chair: Jan-Willem Wasmann

10:00 (California) 19:00 (Pretoria / Amsterdam), 3:00 (Sydney)

60 MIN

E: Large Language Models and Chatbots

Presentation – De Wet Swanepoel – AI chatbots in hearing healthcare

Dr. De Wet Swanepoel is Professor of Audiology at the University of Pretoria, South Africa, and adjunct professor in Otolaryngology - Head & Neck Surgery at the University of Colorado School of Medicine. He is Editor-in-Chief of the International Journal of Audiology and founder of the digital health company hearX Group.

Based upon: Swanepoel, D. W., Manchaiah, V., & Wasmann, J. W. A. (2023). The Rise of AI Chatbots in Hearing Health Care. The Hearing Journal, 76(4), 26-30.

Q&A & Panel discussion:
Moderator: Jan-Willem Wasmann, Panel: Greta Tuckute, Karrie Recker and Alan Truebot.

Greta Tuckute is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT. Greta’s research focuses on language processing in the brain: How do humans effortlessly extract meaning from text and speech? Her work merges neuroscience with artificial intelligence to investigate how the mind and brain process language.

Karrie Recker has a doctorate in Audiology from the University of Florida. She spent over 20 years as a researcher at Starkey, a hearing aid manufacturer in Minnesota, where she designed and executed numerous research studies. Dr. Recker also has experience writing for, and presenting to, a range of audiences. Most recently, she wrote an article on ChatGPT and the future of the hearing aid industry. She currently has 22 patents issued or pending.

Alan the Virtual Audiologist

Welcome to Alan, the virtual audiologist. Ask any questions related to audiology and Alan will do his best to provide an answer.

SWITCH-OVER BREAK >>> Join Parallel session 15 min


11:15 (California) 20:15 (Pretoria / Amsterdam), 4:15 (Sydney)

60 MIN

MS teams Irvine A
Chair: Seba Ausili

MS teams Irvine B
Chair: Anil Nagathil

C6: Novel Approaches in Hearing Assessment and Understanding

C7: Technological Innovations in Hearing Aid Development

11:15-11:27 (California), 20:15-20:27 (Pretoria / Amsterdam), 04:15-04:27 (Sydney)
C6-1 Andrew N Sivaprakasam: Towards an Open-Source Precision Audiological Diagnostics Core for Large-Scale Data Analysis
C7-1 Sarthak Mangla: IndivHear: An Individualized Adaptive Deep Learning-Based Hearing Aid

11:27-11:39 (California), 20:27-20:39 (Pretoria / Amsterdam), 04:27-04:39 (Sydney)
C6-2 Lina Motlagh Zadeh: Spatial release from masking predicts listening difficulty (LiD) in children
C7-2 Anil Nagathil: A WaveNet-based cochlear filtering and hair cell transduction model for applications in speech and music processing

11:39-11:51 (California), 20:39-20:51 (Pretoria / Amsterdam), 04:39-04:51 (Sydney)
C6-3 Brittany N Jaekel: Mapping measures of vocal reaction time, perceived task demand, and speech recognition to understand the benefits of on-demand processing in hearing aids
C7-3 Justin R Burwinkel: Using Automated Speech-to-Text AI to Evaluate Relative Benefit of Assistive Listening Systems in Real-World Environments

11:51-12:03 (California), 20:51-21:03 (Pretoria / Amsterdam), 04:51-05:03 (Sydney)
C6-4 Shagun Ajmera: Analyzing brain connections in sound tolerance disorders using fMRI and machine learning
C7-4 Emil Hansen: GameHear: gamification and real-time motion tracking for conditioned play audiometry

12:03-12:15 (California), 21:03-21:15 (Pretoria / Amsterdam), 05:03-05:15 (Sydney)
C6-5 Ivan Abraham: Within-network functional connectivity reduced for auditory & attention networks in the presence of hearing loss
C7-5 Francesco Ganis: Gamified Musical Training for Children with Auditory Nerve Deficiency and Cochlear Implants: a Case Study

LUNCH BREAK >>> Join Parallel session 60 min


13:15 (California) 22:15 (Pretoria / Amsterdam), 6:15 (Sydney)

60 MIN

MS teams Irvine A
Chair: Ingrid Gielow

MS teams Irvine B
Chair: Nikki Philpott

 C8: Advances in Auditory Training and Education

 C9: Novel Approaches in Hearing Enhancement and Health Management

13:15-13:27 (California), 22:15-22:27 (Pretoria / Amsterdam), 06:15-06:27 (Sydney)
C8-1 Ingrid Gielow: Artificial Intelligence for training Auditory Skills
C9-1 Artoghrul Alishbayli: Using auditory texture statistics for domain-neutral removal of background sounds

13:27-13:39 (California), 22:27-22:39 (Pretoria / Amsterdam), 06:27-06:39 (Sydney)
C8-2 Vívian A Vespero: Educational material for the orientation of elderly hearing-impaired individuals and communication partners + Preparation of videos for the orientation of hearing-impaired elderly individuals who use hearing aids
C9-2 Pierre H Guilleminot: Improvement of speech-in-noise comprehension using vibrotactile stimuli

13:39-13:51 (California), 22:39-22:51 (Pretoria / Amsterdam), 06:39-06:51 (Sydney)
C8-3 Karen M Gonzaga dos Santos: Digits-in-noise test in Brazilian Portuguese: preliminary study in schoolchildren
C9-3 Jiayue Liu: EEG as an Indicator for Perceptual Difficulties in Noise?

14:00-14:12 (California), 23:00-23:12 (Pretoria / Amsterdam), 07:00-07:12 (Sydney)
C8-4 Adriano Arrigo: Wikiversity and e-Audiology: developing a MOOC in Audiology
C9-4 Karenina S Calarga: Prevention of hearing loss and hearing health management based on data from software technology

14:12-14:24 (California), 23:12-23:24 (Pretoria / Amsterdam), 07:12-07:24 (Sydney)
C8-5 Hector Gabriel Corrale de Matos: Improving knowledge dissemination with Wikidata: potentialities of structured data in hearing health
C9-5 Jerusa Massola Oliveira: Datalog: monitoring tool for electronic devices

Continental BREAK >>> Switch from Irvine 1 to Sydney 2 60 min

MAIN SESSION 4 Sydney 2 

MS teams Sydney A2
Chair: Jessica Monaghan

8:20 (Sydney), 0:20 (Pretoria / Amsterdam),  15:20  (California)

45 MIN

KEYNOTE 4: Prof Antje Ihlefeld
Improving spatial quality for hearing-aid and cochlear-implant users
40 min

Q & A

BREAK >>> Join Panel 15 min

Panel discussion – Opportunities for AI to Advance Hearing Healthcare

Moderator: Padraig Kitterick, Panel: Brent Edwards, Andrew Dittberner, Dave Fabry, and Dennis Barbour

9:30 (Sydney), 1:30 (Pretoria / Amsterdam),  16:30  (California)

60 MIN


10:30 (Sydney), 2:30 (Pretoria / Amsterdam),  17:30  (California)

45 MIN


We would like to take advantage of the virtual aspects of this conference by creating sessions that consist of a mixture of different talk and discussion formats.
'It’s an exciting time to be organising VCCA, with so many new possibilities emerging in machine learning and AI. Computational Audiology has the potential to transform hearing care, enabling personalization, broadening accessibility, and aiding early intervention.'
(Quote Karina)
(Quote Kaye)
Join our CAN Slack channel to chat with other attendees in the #VCCA2023 Lobby
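Every slot in the program above is listed in the three host time zones (Sydney, Pretoria / Amsterdam, California). For attendees who want to double-check a conversion, here is a minimal sketch using Python's standard zoneinfo module; the IANA zone names and the example date are assumptions for illustration, not part of the official program:

```python
# Sketch for verifying the tri-zone times in the program above.
# Assumed IANA zones: Sydney -> Australia/Sydney,
# Pretoria / Amsterdam -> Africa/Johannesburg (UTC+2, matching Amsterdam's
# CEST in late June), California -> America/Los_Angeles.
from datetime import datetime
from zoneinfo import ZoneInfo

ZONES = {
    "Sydney": ZoneInfo("Australia/Sydney"),
    "Pretoria / Amsterdam": ZoneInfo("Africa/Johannesburg"),
    "California": ZoneInfo("America/Los_Angeles"),
}

def slot_in_all_zones(start: datetime) -> dict:
    """Render one session start time in each conference zone as HH:MM."""
    return {name: start.astimezone(tz).strftime("%H:%M")
            for name, tz in ZONES.items()}

# Example: the Day 1 opening at 8:00 Sydney time (the date is illustrative).
opening = datetime(2023, 6, 29, 8, 0, tzinfo=ZONES["Sydney"])
# Matches the "8:00 (Sydney), 0:00 (Pretoria / Amsterdam), 15:00 (California)"
# listing; note California is still on the previous calendar day.
print(slot_in_all_zones(opening))
```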
