VCCA2023 – Program

The scientific program of VCCA2023 combines interactive keynotes, featured and invited talks, and contributed scientific presentations, highlighting the wide range of world-class research and hot topics in computational audiology. The detailed program is provided below.

When to participate

The program is organized in three main blocks to allow for participation from different time zones. The VCCA is a truly international event, uniting attendees from across the globe, and we are excited to connect professionals and enthusiasts through our virtual platform, facilitating meaningful discussions and collaboration. To ensure a seamless experience, we have provided the session times for the full program, organized by the main blocks, for three major regions: Sydney, Australia; Pretoria, South Africa/Europe; and California, USA.

Session 1 / Sydney 1 (29th June):

  • Sydney time (AEST): 8:00 AM – 3:15 PM
  • Pretoria time (SAST) / Europe (CEST): 12:00 AM – 7:15 AM
  • California time (PDT): 3:00 PM – 10:15 PM (28th June)

Session 2 / Pretoria (29th June):

  • Sydney time (AEST): 4:30 PM – 11:15 PM
  • Pretoria time (SAST) / Europe (CEST): 8:30 AM – 3:15 PM
  • California time (PDT): 11:30 PM – 6:15 AM (28th-29th June)

Session 3 / California, US (29th June):

  • Sydney time (AEST): 1:30 AM – 8:15 AM (30th June)
  • Pretoria time (SAST) / Europe (CEST): 5:30 PM – 12:15 AM (29th-30th June)
  • California time (PDT): 8:30 AM – 3:15 PM

Session 4 / Sydney 2 (30th June):

  • Sydney time (AEST): 8:00 AM – 11:00 AM
  • Pretoria time (SAST) / Europe (CEST): 12:00 AM – 3:00 AM
  • California time (PDT): 3:00 PM – 6:00 PM (29th June)

We hope that providing the session times for these regions will help our attendees join the sessions according to their local time zones. The VCCA is dedicated to fostering a global community of hearing scientists, clinicians, and audiologists (Computational Audiology Network, CAN). You can register here.
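If your location is not listed above, the block times convert directly from the listed time zones. Below is a minimal sketch of that conversion in Python, assuming the standard-library zoneinfo module; the helper name session_times is ours for illustration only and is not part of any VCCA tooling.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Illustrative helper (hypothetical, not official VCCA code): take a session
# start listed in Pretoria/EU time (SAST/CEST, UTC+2 in June) and report the
# corresponding local times for the three program regions.
def session_times(year: int, month: int, day: int, hour: int, minute: int) -> dict:
    pretoria = datetime(year, month, day, hour, minute,
                        tzinfo=ZoneInfo("Africa/Johannesburg"))
    zones = {
        "Sydney (AEST)": "Australia/Sydney",
        "Pretoria/EU (SAST/CEST)": "Africa/Johannesburg",
        "California (PDT)": "America/Los_Angeles",
    }
    return {label: pretoria.astimezone(ZoneInfo(tz)).strftime("%d %b %H:%M")
            for label, tz in zones.items()}

# Example: Session 2 kick-off, 29 June 2023 at 08:30 Pretoria time
print(session_times(2023, 6, 29, 8, 30))
# -> Sydney 29 Jun 16:30, Pretoria/EU 29 Jun 08:30, California 28 Jun 23:30
```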

Detailed program:

Session 1 / Sydney 1 (29th June):

  • Sydney time (AEST): 8:00 AM – 3:15 PM
  • Pretoria time (SAST) / Europe (CEST): 12:00 AM – 7:15 AM
  • California time (PDT): 3:00 PM – 10:15 PM (28th June)
Sydney (GMT+10), 29 June | Pretoria/EU (GMT+2), 29 June | California (GMT-7), 28 June | Session / Author and Title | Room
08:00-08:15 | 00:00-00:15 | 15:00-15:15 | Intro and Kick-off | Main
08:15-09:15 | 00:15-01:15 | 15:15-16:15 | Main Session 1 (Chair: Jessica Monaghan) | Main
08:15-08:55 | 00:15-00:55 | 15:15-15:55 | Keynote 1 – Malcolm Slaney: Machine Learning for Audition | Main
08:55-09:15 | 00:55-01:15 | 15:55-16:15 | Featured talk 1 – Mohsen Imani: Brain-inspired Hyperdimensional Computing for Real-time and Robust Health Monitoring | Main
09:15-09:30 | 01:15-01:30 | 16:15-16:30 | Break (15 minutes)
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Special Sessions A & B
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Special Session A: Emerging trends for paediatric challenges (Chair: Viji Easwar)
09:30-09:45 | 01:30-01:45 | 16:30-16:45 | Patrick Wong: Neural Speech Encoding to Predict Language Outcome from Infancy
09:45-10:00 | 01:45-02:00 | 16:45-17:00 | Al-Rahim Habib: DrumBeat.ai: Artificial intelligence to triage ear disease in rural and remote areas
10:00-10:15 | 02:00-02:15 | 17:00-17:15 | Collette McKay: Assessing infant hearing using fNIRS
10:15-10:30 | 02:15-02:30 | 17:15-17:30 | Viji Easwar: EEG to assess audibility of speech with hearing aids
10:30-11:00 | 02:30-03:00 | 17:30-18:00 | Break (30 minutes)
11:00-12:00 | 03:00-04:00 | 18:00-19:00 | Special Session B: Inclusive Design and Assistive Technology (Chair: Nicky Chong-White)
11:00-11:15 | 03:00-03:15 | 18:00-18:15 | Jessica Korte: How to Engage Culturally Deaf People with Assistive Technology
11:15-11:30 | 03:15-03:30 | 18:15-18:30 | Nicky Chong-White: Enhancing Communication with Live Captioning and Apple AirPods Pro
11:30-11:45 | 03:30-03:45 | 18:30-18:45 | Cassie Hames: See Me: Transforming User Experience on Public Transport for Everyone
11:45-12:00 | 03:45-04:00 | 18:45-19:00 | Andrew Bellavia: Auracast: When Designing For Everyone is Designing For Accessibility
12:00-13:00 | 04:00-05:00 | 19:00-20:00 | Lunch break (1 hour)
13:00-13:48 | 05:00-05:48 | 20:00-20:48 | Contributed talks
13:00-13:48 | 05:00-05:48 | 20:00-20:48 | C1: Innovations in Auditory Perception and Speech Understanding (Chair: TBD)
13:00-13:12 | 05:00-05:12 | 20:00-20:12 | Shangqiguo Wang: Distinguishing difficulties in speech understanding due to hearing loss and cognitive decline
13:12-13:24 | 05:12-05:24 | 20:12-20:24 | David Meng: Neural tracking of linguistic information as a measure of speech understanding in noise
13:24-13:36 | 05:24-05:36 | 20:24-20:36 | Kumar Seluakumaran: Measuring frequency selectivity (FS) in normal-hearing and mild sensorineural hearing loss clinical subjects using FS audiogram
13:36-13:48 | 05:36-05:48 | 20:36-20:48 | Mathew Croteau: Enhancing reliable monaural cues for sound lateralisation using CROS

Session 2 / Pretoria (29th June):

  • Sydney time (AEST): 4:30 PM – 11:15 PM
  • Pretoria time (SAST) / Europe (CEST): 8:30 AM – 3:15 PM
  • California time (PDT): 11:30 PM – 6:15 AM (28th-29th June)
Sydney (GMT+10), 29 June | Pretoria/EU (GMT+2), 29 June | California (GMT-7), 28-29 June | Session / Author and Title | Room
16:30-16:45 | 08:30-08:45 | 23:30-23:45 | Intro and Kick-off | Main
16:45-17:45 | 08:45-09:45 | 23:45-00:45 | Main Session 2 (Chair: Karina de Sousa) | Main
16:45-17:25 | 08:45-09:25 | 23:45-00:25 | Keynote 2 – Sarah Verhulst: Personalised and neural-network-based closed-loop systems for augmented hearing | Main
17:25-17:45 | 09:25-09:45 | 00:25-00:45 | Featured talk 2 – Luke Meyer: Humanoid Robot as an Audiological Interface? | Main
17:45-18:00 | 09:45-10:00 | 00:45-01:00 | Break (15 minutes)
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Sessions C & D
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Session C: Cadenza Challenge: Improving music for those with a hearing loss (Chair: Trevor Cox)
18:00-19:00 | 10:00-11:00 | 01:00-02:00 | Special Session D: AI-generated music (Chair: Vaibhav Srivastav, Hugging Face)
19:00-19:30 | 11:00-11:30 | 02:00-02:30 | Break (30 minutes)
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | Contributed talks
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | C2: Cochlear Implants and Related Technologies (Chair: Tobias Goehring)
19:30-19:42 | 11:30-11:42 | 02:30-02:42 | Tom Gajecki: End-to-end deep denoising for cochlear implants
19:42-19:54 | 11:42-11:54 | 02:42-02:54 | Francois Guerit: A “hilltop” approach for tackling channel interactions in Cochlear Implant users
19:54-20:06 | 11:54-12:06 | 02:54-03:06 | Lelia A Erscoi: Gathering Ecological Data to Assess Real-life Benefits of Cochlear Implants
20:06-20:18 | 12:06-12:18 | 03:06-03:18 | Marieke M.W. ten Hoor: Associates of Talker Size in Cochlear Implant Stimulation
20:18-20:24 | 12:18-12:24 | 03:18-03:24 | Break
20:24-20:36 | 12:24-12:36 | 03:24-03:36 | Floris Rotteveel: Electric pitch perception with cochlear implants: Using real-life sounds to get back on the right track
20:36-20:48 | 12:36-12:48 | 03:36-03:48 | Annesya Banerjee: Neural network models clarify the role of plasticity in cochlear implant outcomes
20:48-21:00 | 12:48-13:00 | 03:48-04:00 | Clément Gaultier: Recovering speech intelligibility for cochlear implants in noisy and reverberant situations using multi-microphone deep learning algorithms
19:30-21:00 | 11:30-13:00 | 02:30-04:00 | C3: Auditory Skills and Perception (Chair: TBD)
19:30-19:42 | 11:30-11:42 | 02:30-02:42 | David A Fabry: Enhancing Human Auditory Function with Artificial Intelligence and Listener Intent
19:42-19:54 | 11:42-11:54 | 02:42-02:54 | Iordanis Thoidis: Having choices for enhancing voices: Target speaker extraction in noisy multi-talker environments using deep neural networks
19:54-20:06 | 11:54-12:06 | 02:54-03:06 | Kars Tjepkema: Receiver-in-ear hearing aid comparison on music perception for hearing-impaired listeners
20:06-20:18 | 12:06-12:18 | 03:06-03:18 | Lana Biot: The acoustic change complex predictive validation (ACCEPT) study to predict speech perception
20:18-20:24 | 12:18-12:24 | 03:18-03:24 | Break
20:24-20:36 | 12:24-12:36 | 03:24-03:36 | Juergen Otten: Cortical speech tracking with different lip synchronization algorithms in virtual environments
20:36-20:48 | 12:36-12:48 | 03:36-03:48 | Franklin Y Alvarez Cardinale: Objective Measurement of Speech Intelligibility with the Spike Activity Mutual Information Index (SAMII)
20:48-21:00 | 12:48-13:00 | 03:48-04:00 | Alexis D Deighton MacIntyre: Cortical tracking of speech: Effects of intelligibility and spectral degradation
21:00-22:00 | 13:00-14:00 | 04:00-05:00 | Lunch Break
22:00-23:30 | 14:00-15:30 | 05:00-06:30 | Contributed talks
22:00-23:30 | 14:00-15:30 | 05:00-06:30 | C4: Hearing Tests and Diagnostics (Chair: TBD)
22:00-22:12 | 14:00-14:12 | 05:00-05:12 | Brian CJ Moore: Diagnosing Noise-Induced Hearing Loss Sustained During Military Service Using Deep Neural Networks
22:12-22:24 | 14:12-14:24 | 05:12-05:24 | Tahereh Afghah: Development of a novel self-assessment hearing loss and communication disability tool (HEAR-COMMAND Tool) based on the ICF standard
22:24-22:36 | 14:24-14:36 | 05:24-05:36 | Chen Xu: Smartphone-based hearing tests for a Virtual Hearing Clinic: Influence of ambient noise on the absolute threshold and loudness scaling at home
22:36-22:48 | 14:36-14:48 | 05:36-05:48 | Caitlin Frisby: Smartphone-facilitated in-situ audiometry for community-based hearing testing
22:48-22:54 | 14:48-14:54 | 05:48-05:54 | Break
22:54-23:06 | 14:54-15:06 | 05:54-06:06 | Soner Türüdü: Comparing Digits-in-Noise Test Implementations on Various Platforms with Normal Hearing Individuals
23:06-23:18 | 15:06-15:18 | 06:06-06:18 | Gloria A Araiza Illan: Automated speech audiometry for children using Kaldi-NL automatic speech recognition
23:18-23:30 | 15:18-15:30 | 06:18-06:30 | Maartje M. E. Hendrikse: Evaluation of a new VR-based hearing device fine-tuning procedure
22:00-23:18 | 14:00-15:18 | 05:00-06:18 | C5: Innovations in Hearing Care (Chair: TBD)
22:00-22:12 | 14:00-14:12 | 05:00-05:12 | Amelie Hintermaier: PrimeHA: not a hearing aid
22:12-22:24 | 14:12-14:24 | 05:12-05:24 | Clara B Iplinsky: Hearing loops: old school for current communication issues
22:24-22:36 | 14:24-14:36 | 05:24-05:36 | Anna Josefine Munch Sørensen: Method for continuous evaluation of hearing aid user key situations in the field
22:36-22:48 | 14:36-14:48 | 05:36-05:48 | Eva Koderman: MindAffect’s EEG based Tone Audiometry Diagnostic System
22:48-22:54 | 14:48-14:54 | 05:48-05:54 | Break
22:54-23:06 | 14:54-15:06 | 05:54-06:06 | Sarah E Hughes: Embedding the patient voice in artificial intelligence for hearing care: the role of patient-reported outcomes
23:06-23:18 | 15:06-15:18 | 06:06-06:18 | Giulia Angonese: Towards an objective measurement of individual listening preferences: Trait consistency and state specificity
23:18-23:30 | 15:18-15:30 | 06:18-06:30 | Divan du Plessis: mHealth-Supported Hearing Health Training for Early Childhood Development Practitioners: An Intervention Study

Session 3 / California, US (29th June):

  • Sydney time (AEST): 1:30 AM – 8:15 AM (30th June)
  • Pretoria time (SAST) / Europe (CEST): 5:30 PM – 12:15 AM (29th-30th June)
  • California time (PDT): 8:30 AM – 3:15 PM
Sydney (GMT+10), 30 June | Pretoria/EU (GMT+2), 29 June | California (GMT-7), 29 June | Session / Author and Title | Room
01:30-01:45 | 17:30-17:45 | 08:30-08:45 | Intro and Kick-off | Main
01:45-02:45 | 17:45-18:45 | 08:45-09:45 | Main Session 3 (Chair: Kaye Wang) | Main
01:45-02:25 | 17:45-18:25 | 08:45-09:25 | Keynote 3 – Deliang Wang: Neural Spectrospatial Filter | Main
02:25-02:45 | 18:25-18:45 | 09:25-09:45 | Featured talk 3 – Greta Tuckute: Driving and suppressing the human language network using large language models | Main
02:45-03:00 | 18:45-19:00 | 09:45-10:00 | Break (15 minutes)
03:00-04:00 | 19:00-20:00 | 10:00-11:00 | Special Session E: Large Language Models and Chatbots (Chair: Jan-Willem Wasmann) – Presentation 1: Adnan Boz, How to choose the right LLM integration for your project (15 minutes); Presentation 2: De Wet Swanepoel, AI chatbots in hearing healthcare (15 minutes); Q&A and panel discussion
04:00-04:15 | 20:00-20:15 | 11:00-11:15 | Break (15 minutes)
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | Contributed talks
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | C6: Novel Approaches in Hearing Assessment and Understanding
04:15-04:27 | 20:15-20:27 | 11:15-11:27 | Andrew N Sivaprakasam: Towards an Open-Source Precision Audiological Diagnostics Core for Large-Scale Data Analysis
04:27-04:39 | 20:27-20:39 | 11:27-11:39 | Lina Motlagh Zadeh: Spatial release from masking predicts listening difficulty (LiD) in children
04:39-04:51 | 20:39-20:51 | 11:39-11:51 | Brittany N Jaekel: Mapping measures of vocal reaction time, perceived task demand, and speech recognition to understand the benefits of on-demand processing in hearing aids
04:51-05:03 | 20:51-21:03 | 11:51-12:03 | Shagun Ajmera: Analyzing brain connections in sound tolerance disorders using fMRI and machine learning
05:03-05:15 | 21:03-21:15 | 12:03-12:15 | Ivan Abraham: Within-network functional connectivity reduced for auditory & attention networks in the presence of hearing loss
04:15-05:15 | 20:15-21:15 | 11:15-12:15 | C7: Technological Innovations in Hearing Aid Development
04:15-04:27 | 20:15-20:27 | 11:15-11:27 | Sarthak Mangla: IndivHear: An Individualized Adaptive Deep Learning-Based Hearing Aid
04:27-04:39 | 20:27-20:39 | 11:27-11:39 | Anil Nagathil: A WaveNet-based cochlear filtering and hair cell transduction model for applications in speech and music processing
04:39-04:51 | 20:39-20:51 | 11:39-11:51 | Justin R Burwinkel: Using Automated Speech-to-Text AI to Evaluate Relative Benefit of Assistive Listening Systems in Real-World Environments
04:51-05:03 | 20:51-21:03 | 11:51-12:03 | Break
05:03-05:15 | 21:03-21:15 | 12:03-12:15 | Emil Hansen: GameHear: gamification and real-time motion tracking for conditioned play audiometry
05:00-05:12 | 21:00-21:12 | 12:00-12:12 | Francesco Ganis: Gamified Musical Training for Children with Auditory Nerve Deficiency and Cochlear Implants: a Case Study
05:15-06:15 | 21:15-22:15 | 12:15-13:15 | Lunch Break
06:15-07:36 | 22:15-23:36 | 13:15-14:36 | Contributed talks
06:15-07:36 | 22:15-23:36 | 13:15-14:36 | C8: Advances in Auditory Training and Education
06:15-06:27 | 22:15-22:27 | 13:15-13:27 | Ingrid Gielow: Artificial Intelligence for training Auditory Skills
06:27-06:39 | 22:27-22:39 | 13:27-13:39 | Vívian A Vespero: Educational material for the orientation of the elderly hearing impaired and communication partners
06:39-06:51 | 22:39-22:51 | 13:39-13:51 | Vívian A Vespero: Preparation of videos for the orientation of hearing-impaired elderly individuals who use hearing aids
06:51-07:00 | 22:51-23:00 | 13:51-14:00 | Break
07:00-07:12 | 23:00-23:12 | 14:00-14:12 | Karen M Gonzaga dos Santos: Digits-in-noise test in Brazilian Portuguese: preliminary study in schoolchildren
07:12-07:24 | 23:12-23:24 | 14:12-14:24 | Adriano Arrigo: Wikiversity and e-Audiology: developing a MOOC in Audiology
07:24-07:36 | 23:24-23:36 | 14:24-14:36 | Hector Gabriel Corrale de Matos: Improving knowledge dissemination with Wikidata: potentialities of structured data in hearing health
06:15-07:24 | 22:15-23:24 | 13:15-14:24 | C9: Novel Approaches in Hearing Enhancement and Health Management
06:15-06:27 | 22:15-22:27 | 13:15-13:27 | Artoghrul Alishbayli: Using auditory texture statistics for domain-neutral removal of background sounds
06:27-06:39 | 22:27-22:39 | 13:27-13:39 | Pierre H Guilleminot: Improvement of speech-in-noise comprehension using vibrotactile stimuli
06:39-06:51 | 22:39-22:51 | 13:39-13:51 | Jiayue Liu: EEG as an Indicator for Perceptual Difficulties in Noise?
06:51-07:00 | 22:51-23:00 | 13:51-14:00 | Break
07:00-07:12 | 23:00-23:12 | 14:00-14:12 | Karenina S Calarga: Prevention of hearing loss and hearing health management based on data from software technology
07:12-07:24 | 23:12-23:24 | 14:12-14:24 | Jerusa Massola Oliveira: Datalog: monitoring tool for electronic devices

Session 4 / Sydney 2 (30th June):

  • Sydney time (AEST): 8:00 AM – 11:00 AM
  • Pretoria time (SAST) / Europe (CEST): 12:00 AM – 3:00 AM
  • California time (PDT): 3:00 PM – 6:00 PM (29th June)
Sydney (GMT+10), 30 June | Pretoria/EU (GMT+2), 30 June | California (GMT-7), 29 June | Session / Author and Title | Room
08:30-09:15 | 00:30-01:15 | 15:30-16:15 | Main Session 4 (Chairs: Jessica Monaghan and Kaye Wang) | Main
08:30-09:15 | 00:30-01:15 | 15:30-16:15 | Keynote 4 – Antje Ihlefeld: Improving spatial quality for hearing-aid and cochlear-implant users | Main
09:15-09:30 | 01:15-01:30 | 16:15-16:30 | Break
09:30-10:30 | 01:30-02:30 | 16:30-17:30 | Panel discussion – Opportunities for AI to Advance Hearing Healthcare (Moderator: Padraig Kitterick; Panel: Brent Edwards, …)
10:30-11:15 | 02:30-03:15 | 17:30-18:15 | Prize giving, closing ceremony, and wrap-up | Main