VCCA2023 – Program
The scientific program of VCCA2023 combines interactive keynotes, featured and invited talks, and scientific contributions, highlighting the wide range of world-class research and hot topics in computational audiology.
When to participate
The program is organized in main blocks to allow participation from different time zones. The VCCA is a truly international event, uniting attendees from across the globe through our virtual platform and facilitating discussion and collaboration across time zones. To make planning easier, the session times for the full program are listed below, organized by block, for three major regions: Sydney, Australia; Pretoria, South Africa/Europe; and California, USA.
Session 1 / Sydney 1 (29th June):
- Sydney time (AEST): 8:00 AM – 3:15 PM
- Pretoria time (SAST) / Europe (CEST): 12:00 AM – 7:15 AM
- California time (PDT): 3:00 PM – 10:15 PM (28th June)
Session 2 / Pretoria (29th June):
- Sydney time (AEST): 4:30 PM – 11:15 PM
- Pretoria time (SAST) / Europe (CEST): 8:30 AM – 3:15 PM
- California time (PDT): 11:30 PM (28th June) – 6:15 AM (29th June)
Session 3 / California, US (29th June):
- Sydney time (AEST): 1:30 AM – 8:15 AM (30th June)
- Pretoria time (SAST) / Europe (CEST): 5:30 PM – 12:15 AM (29th-30th June)
- California time (PDT): 8:30 AM – 3:15 PM
Session 4 / Sydney 2 (30th June):
- Sydney time (AEST): 8:00 AM – 11:00 AM
- Pretoria time (SAST) / Europe (CEST): 12:00 AM – 3:00 AM
- California time (PDT): 3:00 PM – 6:00 PM (29th June)
We hope that these regional session times help attendees join the sessions according to their local time zones. The VCCA is dedicated to fostering a global community of hearing scientists, clinicians, and audiologists (Computational Audiology Network, CAN). You can register here.
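If your region is not listed above, you can convert the session times yourself. As a minimal sketch (the session start time below is taken from the Session 1 block; the target time zones are illustrative), Python's standard-library zoneinfo module handles the daylight-saving offsets for you:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Session 1 kick-off: 8:00 AM on 29 June 2023, Sydney time (AEST)
start = datetime(2023, 6, 29, 8, 0, tzinfo=ZoneInfo("Australia/Sydney"))

# Convert to the other two program regions (or substitute your own zone)
for zone in ("Africa/Johannesburg", "America/Los_Angeles"):
    local = start.astimezone(ZoneInfo(zone))
    print(f"{zone}: {local.strftime('%d %b %H:%M')}")
# Johannesburg: 29 Jun 00:00, Los Angeles: 28 Jun 15:00 —
# matching the Pretoria and California times listed above.
```

Replace the zone name with any IANA identifier (e.g. "Asia/Tokyo") to get your own local session times.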
Detailed program:
Session 1 / Sydney 1 (29th June):
Sydney (GMT+10) June 29th | Pretoria/EU (GMT+2) June 29th | California (GMT-7) June 28th | Session / Author and Title | Room |
08:00-08:15 | 00:00-00:15 | 15:00-15:15 | Intro and Kick-off | Main |
08:15-09:15 | 00:15-01:15 | 15:15-16:15 | Main Session 1 | Main |
Chair: Jessica Monaghan | ||||
08:15-08:55 | 00:15-00:55 | 15:15-15:55 | Keynote 1 | |
Malcolm Slaney: Machine Learning for Audition | ||||
08:55-09:15 | 00:55-01:15 | 15:55-16:15 | Featured talk 1 | Main |
Mohsen Imani: Brain-inspired Hyperdimensional Computing for Real-time and Robust Health Monitoring | ||||
09:15-09:30 | 01:15-01:30 | 16:15-16:30 | Break 15 minutes | |
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Special Sessions A & B | |
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Special Session A: Emerging trends in paediatric challenges | |
Chair: Viji Easwar | ||||
09:30-09:45 | 01:30-01:45 | 16:30-16:45 | Patrick Wong: Neural Speech Encoding to Predict Language Outcome from Infancy | |
09:45-10:00 | 01:45-02:00 | 16:45-17:00 | Al-Rahim Habib: DrumBeat.ai: Artificial intelligence to triage ear disease in rural and remote areas | |
10:00-10:15 | 02:00-02:15 | 17:00-17:15 | Collette McKay: Assessing infant hearing using fNIRS | |
10:15-10:30 | 02:15-02:30 | 17:15-17:30 | Viji Easwar: EEG to assess audibility of speech with hearing aids | |
10:30-11:00 | 02:30-03:00 | 17:30-18:00 | Break 30 minutes | |
11:00-12:00 | 03:00-04:00 | 18:00-19:00 | Special Session B: Inclusive Design and Assistive Technology | |
Chair: Nicky Chong-White | ||||
11:00-11:15 | 03:00-03:15 | 18:00-18:15 | Jessica Korte: How to Engage Culturally Deaf People with Assistive Technology | |
11:15-11:30 | 03:15-03:30 | 18:15-18:30 | Nicky Chong-White: Enhancing Communication with Live Captioning and Apple AirPods Pro | |
11:30-11:45 | 03:30-03:45 | 18:30-18:45 | Cassie Hames: See Me: Transforming User Experience on Public Transport for Everyone | |
11:45-12:00 | 03:45-04:00 | 18:45-19:00 | Andrew Bellavia: Auracast: When Designing For Everyone is Designing For Accessibility | |
12:00-13:00 | 04:00-05:00 | 19:00-20:00 | Lunch break 1 hour | |
13:00-13:48 | 05:00-05:48 | 20:00-20:48 | Contributed talks | |
13:00-13:48 | 05:00-05:48 | 20:00-20:48 | C1: Innovations in Auditory Perception and Speech Understanding | |
Chair: TBD | ||||
13:00-13:12 | 05:00-05:12 | 20:00-20:12 | Shangqiguo Wang: Distinguishing difficulties in speech understanding due to hearing loss and cognitive decline | |
13:12-13:24 | 05:12-05:24 | 20:12-20:24 | David Meng: Neural tracking of linguistic information as a measure of speech understanding in noise | |
13:24-13:36 | 05:24-05:36 | 20:24-20:36 | Kumar Seluakumaran: Measuring frequency selectivity (FS) in normal-hearing and mild sensorineural hearing loss clinical subjects using FS audiogram | |
13:36-13:48 | 05:36-05:48 | 20:36-20:48 | Mathew Croteau: Enhancing reliable monaural cues for sound lateralisation using CROS | |
Session 2 / Pretoria (29th June):
Sydney (GMT+10) June 29th | Pretoria/EU (GMT+2) June 29th | California (GMT-7) June 28th–29th | Session / Author and Title | Room |
16:30-16:45 | 08:30-08:45 | 23:30-23:45 | Intro and Kick-off | Main |
16:45-17:45 | 08:45-09:45 | 23:45-00:45 | Main Session 2 | Main |
Chair: Karina de Sousa | ||||
16:45-17:25 | 08:45-09:25 | 23:45-00:25 | Keynote 2 | Main |
Sarah Verhulst: Personalised and neural-network-based closed-loop systems for augmented hearing | ||||
17:25-17:45 | 09:25-09:45 | 00:25-00:45 | Featured talk 2 | Main |
Luke Meyer: Humanoid Robot as an Audiological Interface? | ||||
17:45-18:00 | 09:45-10:00 | 00:45-01:00 | Break 15 minutes | |
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Sessions C & D | |
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Session C: Cadenza Challenge: Improving music for those with a hearing loss | |
Chair: Trevor Cox | ||||
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Session D: AI-generated music | |
Chair: Vaibhav Srivastav from Hugging Face | ||||
19:00-19:30 | 11:00-11:30 | 02:00-02:30 | Break 30 mins | |
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | Contributed talks | |
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | C2: Cochlear Implants and Related Technologies | |
Chair: Tobias Goehring | ||||
19:30-19:42 | 11:30-11:42 | 02:30-02:42 | Tom Gajecki: End-to-end deep denoising for cochlear implants | |
19:42-19:54 | 11:42-11:54 | 02:42-02:54 | Francois Guerit: A “hilltop” approach for tackling channel interactions in Cochlear Implant users | |
19:54-20:06 | 11:54-12:06 | 02:54-03:06 | Lelia A Erscoi: Gathering Ecological Data to Assess Real-life Benefits of Cochlear Implants | |
20:06-20:18 | 12:06-12:18 | 03:06-03:18 | Marieke M.W. ten Hoor: Associates of Talker Size in Cochlear Implant Stimulation | |
20:18-20:24 | 12:18-12:24 | 03:18-03:24 | Break | |
20:24-20:36 | 12:24-12:36 | 03:24-03:36 | Floris Rotteveel: Electric pitch perception with cochlear implants: Using real-life sounds to get back on the right track | |
20:36-20:48 | 12:36-12:48 | 03:36-03:48 | Annesya Banerjee: Neural network models clarify the role of plasticity in cochlear implant outcomes | |
20:48-21:00 | 12:48-13:00 | 03:48-04:00 | Clément Gaultier: Recovering speech intelligibility for cochlear implants in noisy and reverberant situations using multi-microphone deep learning algorithms | |
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | C3: Auditory Skills and Perception | |
Chair: | ||||
19:30-19:42 | 11:30-11:42 | 02:30-02:42 | David A Fabry: Enhancing Human Auditory Function with Artificial Intelligence and Listener Intent | |
19:42-19:54 | 11:42-11:54 | 02:42-02:54 | Iordanis Thoidis: Having choices for enhancing voices: Target speaker extraction in noisy multi-talker environments using deep neural networks | |
19:54-20:06 | 11:54-12:06 | 02:54-03:06 | Kars Tjepkema: Receiver-in-ear hearing aid comparison on music perception for hearing-impaired listeners | |
20:06-20:18 | 12:06-12:18 | 03:06-03:18 | Lana Biot: The acoustic change complex predictive validation (ACCEPT) study to predict speech perception | |
20:18-20:24 | 12:18-12:24 | 03:18-03:24 | Break | |
20:24-20:36 | 12:24-12:36 | 03:24-03:36 | Juergen Otten: Cortical speech tracking with different lip synchronization algorithms in virtual environments | |
20:36-20:48 | 12:36-12:48 | 03:36-03:48 | Franklin Y Alvarez Cardinale: Objective Measurement of Speech Intelligibility with the Spike Activity Mutual Information Index (SAMII) | |
20:48-21:00 | 12:48-13:00 | 03:48-04:00 | Alexis D Deighton MacIntyre: Cortical tracking of speech: Effects of intelligibility and spectral degradation | |
21:00-22:00 | 13:00-14:00 | 04:00-05:00 | Lunch Break | |
22:00-23:30 | 14:00-15:30 | 05:00-06:30 | Contributed talks | |
22:00-23:30 | 14:00-15:30 | 05:00-06:30 | C4: Hearing Tests and Diagnostics | |
Chair: | ||||
22:00-22:12 | 14:00-14:12 | 05:00-05:12 | Brian CJ Moore: Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks | |
22:12-22:24 | 14:12-14:24 | 05:12-05:24 | Tahereh Afghah: Development of a novel self-assessment hearing loss and communication disability tool (HEAR-COMMAND Tool) based on the ICF standard | |
22:24-22:36 | 14:24-14:36 | 05:24-05:36 | Chen Xu: Smartphone-based hearing tests for a Virtual Hearing Clinic: Influence of ambient noise on the absolute threshold and loudness scaling at home | |
22:36-22:48 | 14:36-14:48 | 05:36-05:48 | Caitlin Frisby: Smartphone-facilitated in-situ audiometry for community-based hearing testing | |
22:48-22:54 | 14:48-14:54 | 05:48-05:54 | Break | |
22:54-23:06 | 14:54-15:06 | 05:54-06:06 | Soner Türüdü: Comparing Digits-in-Noise Test Implementations on Various Platforms with Normal Hearing Individuals | |
23:06-23:18 | 15:06-15:18 | 06:06-06:18 | Gloria A Araiza Illan: Automated speech audiometry for children using Kaldi-NL automatic speech recognition | |
23:18-23:30 | 15:18-15:30 | 06:18-06:30 | Maartje M. E. Hendrikse: Evaluation of a new VR-based hearing device fine-tuning procedure | |
22:00-23:30 | 14:00-15:30 | 05:00-06:30 | C5: Innovations in Hearing Care | |
Chair: | ||||
22:00-22:12 | 14:00-14:12 | 05:00-05:12 | Amelie Hintermaier: PrimeHA: not a hearing aid | |
22:12-22:24 | 14:12-14:24 | 05:12-05:24 | Clara B Iplinsky: Hearing loops: old school for current communication issues | |
22:24-22:36 | 14:24-14:36 | 05:24-05:36 | Anna Josefine Munch Sørensen: Method for continuous evaluation of hearing aid user key situations in the field | |
22:36-22:48 | 14:36-14:48 | 05:36-05:48 | Eva Koderman: MindAffect’s EEG based Tone Audiometry Diagnostic System | |
22:48-22:54 | 14:48-14:54 | 05:48-05:54 | Break | |
22:54-23:06 | 14:54-15:06 | 05:54-06:06 | Sarah E Hughes: Embedding the patient voice in artificial intelligence for hearing care: the role of patient-reported outcomes | |
23:06-23:18 | 15:06-15:18 | 06:06-06:18 | Giulia Angonese: Towards an objective measurement of individual listening preferences: Trait consistency and state specificity | |
23:18-23:30 | 15:18-15:30 | 06:18-06:30 | Divan du Plessis: mHealth-Supported Hearing Health Training for Early Childhood Development Practitioners: An Intervention Study | |
Session 3 / California, US (29th June):
Sydney (GMT+10) June 30th | Pretoria/EU (GMT+2) June 29th | California (GMT-7) June 29th | Session / Author and Title | Room |
01:30-01:45 | 17:30-17:45 | 08:30-08:45 | Intro and Kick-off | Main |
01:45-02:45 | 17:45-18:45 | 08:45-09:45 | Main Session 3 | Main |
Chair: Kaye Wang | ||||
01:45-02:25 | 17:45-18:25 | 08:45-09:25 | Keynote 3 | Main |
Deliang Wang: Neural Spectrospatial Filter | ||||
02:25-02:45 | 18:25-18:45 | 09:25-09:45 | Featured talk 3 | Main |
Greta Tuckute: Driving and suppressing the human language network using large language models | ||||
02:45-03:00 | 18:45-19:00 | 09:45-10:00 | Break 15 minutes | |
03:00-04:00 | 19:00-20:00 | 10:00-11:00 | Special Session E | |
03:00-04:00 | 19:00-20:00 | 10:00-11:00 | Special Session E: Large Language Models and Chatbots | |
Chair: Jan-Willem Wasmann | ||||
Adnan Boz: How to choose the right LLM integration for your project (15 minutes) | ||||
De Wet Swanepoel: AI chatbots in hearing healthcare (15 minutes) | ||||
Q&A & Panel discussion | ||||
04:00-04:15 | 20:00-20:15 | 11:00-11:15 | Break 15 minutes | |
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | Contributed talks | |
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | C6: Novel Approaches in Hearing Assessment and Understanding | |
04:15-04:27 | 20:15-20:27 | 11:15-11:27 | Andrew N Sivaprakasam: Towards an Open-Source Precision Audiological Diagnostics Core for Large-Scale Data Analysis | |
04:27-04:39 | 20:27-20:39 | 11:27-11:39 | Lina Motlagh Zadeh: Spatial release from masking predicts listening difficulty (LiD) in children | |
04:39-04:51 | 20:39-20:51 | 11:39-11:51 | Brittany N Jaekel: Mapping measures of vocal reaction time, perceived task demand, and speech recognition to understand the benefits of on-demand processing in hearing aids | |
04:51-05:03 | 20:51-21:03 | 11:51-12:03 | Shagun Ajmera: Analyzing brain connections in sound tolerance disorders using fMRI and machine learning | |
05:03-05:15 | 21:03-21:15 | 12:03-12:15 | Ivan Abraham: Within-network functional connectivity reduced for auditory & attention networks in the presence of hearing loss | |
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | C7: Technological Innovations in Hearing Aid Development | |
04:15-04:27 | 20:15-20:27 | 11:15-11:27 | Sarthak Mangla: IndivHear: An Individualized Adaptive Deep Learning-Based Hearing Aid | |
04:27-04:39 | 20:27-20:39 | 11:27-11:39 | Anil Nagathil: A WaveNet-based cochlear filtering and hair cell transduction model for applications in speech and music processing | |
04:39-04:51 | 20:39-20:51 | 11:39-11:51 | Justin R Burwinkel: Using Automated Speech-to-Text AI to Evaluate Relative Benefit of Assistive Listening Systems in Real-World Environments | |
04:51-05:03 | 20:51-21:03 | 11:51-12:03 | Break | |
05:03-05:15 | 21:03-21:15 | 12:03-12:15 | Emil Hansen: GameHear: gamification and real-time motion tracking for conditioned play audiometry | |
05:00-05:12 | 21:00-21:12 | 12:00-12:12 | Francesco Ganis: Gamified Musical Training for Children with Auditory Nerve Deficiency and Cochlear Implants: a Case Study | |
05:15-06:15 | 21:15-22:15 | 12:15-13:15 | Lunch Break | |
06:15-07:36 | 22:15-23:36 | 13:15-14:36 | Contributed talks | |
06:15-07:36 | 22:15-23:36 | 13:15-14:36 | C8: Advances in Auditory Training and Education | |
06:15-06:27 | 22:15-22:27 | 13:15-13:27 | Ingrid Gielow: Artificial Intelligence for training Auditory Skills | |
06:27-06:39 | 22:27-22:39 | 13:27-13:39 | Vívian A Vespero: Educational material for the orientation of elderly hearing-impaired individuals and their communication partners | |
06:39-06:51 | 22:39-22:51 | 13:39-13:51 | Vívian A Vespero: Preparation of videos for the orientation of hearing-impaired elderly individuals who use hearing aids | |
06:51-07:00 | 22:51-23:00 | 13:51-14:00 | Break | |
07:00-07:12 | 23:00-23:12 | 14:00-14:12 | Karen M Gonzaga dos Santos: Digits-in-noise test in Brazilian Portuguese: preliminary study in schoolchildren | |
07:12-07:24 | 23:12-23:24 | 14:12-14:24 | Adriano Arrigo: Wikiversity and e-Audiology: developing a MOOC in Audiology | |
07:24-07:36 | 23:24-23:36 | 14:24-14:36 | Hector Gabriel Corrale de Matos: Improving knowledge dissemination with Wikidata: potentialities of structured data in hearing health | |
06:15-07:24 | 22:15-23:24 | 13:15-14:24 | C9: Novel Approaches in Hearing Enhancement and Health Management | |
06:15-06:27 | 22:15-22:27 | 13:15-13:27 | Artoghrul Alishbayli: Using auditory texture statistics for domain-neutral removal of background sounds | |
06:27-06:39 | 22:27-22:39 | 13:27-13:39 | Pierre H Guilleminot: Improvement of speech-in-noise comprehension using vibrotactile stimuli | |
06:39-06:51 | 22:39-22:51 | 13:39-13:51 | Jiayue Liu: EEG as an Indicator for Perceptual Difficulties in Noise? | |
06:51-07:00 | 22:51-23:00 | 13:51-14:00 | Break | |
07:00-07:12 | 23:00-23:12 | 14:00-14:12 | Karenina S Calarga: Prevention of hearing loss and hearing health management based on data from software technology | |
07:12-07:24 | 23:12-23:24 | 14:12-14:24 | Jerusa Massola Oliveira: Datalog: monitoring tool for electronic devices | |
Session 4 / Sydney 2 (30th June):
Sydney (GMT+10) June 30th | Pretoria/EU (GMT+2) June 30th | California (GMT-7) June 29th | Session / Author and Title | Room |
08:30-09:15 | 00:30-01:15 | 15:30-16:15 | Main Session 4 | Main |
Chairs: Jessica Monaghan and Kaye Wang | ||||
08:30-09:15 | 00:30-01:15 | 15:30-16:15 | Keynote 4 | Main |
Antje Ihlefeld: Improving spatial quality for hearing-aid and cochlear-implant users. | ||||
09:15-09:30 | 01:15-01:30 | 16:15-16:30 | Break | |
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Panel discussion – Opportunities for AI to Advance Hearing Healthcare | |
Moderator: Padraig Kitterick, Panel: Brent Edwards, | ||||
10:30-11:15 | 02:30-03:15 | 17:30-18:15 | Prize giving, closing ceremony, and wrap-up | Main |