VCCA2023 – Program Highlights

The scientific program of VCCA2023 will combine interactive keynotes, featured and invited talks, and scientific contributions to highlight the wide range of world-class research and hot topics in computational audiology. Below is a selection of program highlights; for the complete schedule, continue to the full Conference program.

The program will be organized into three main blocks to allow for participation from different time zones.

Keynote speakers:


Dr. Malcolm Slaney
He is a scientist in Google’s Machine Hearing Research Group and an Adjunct Professor at Stanford CCRMA.


We are looking forward to an interactive session with Malcolm! Jessica will interview Malcolm about his career and his thoughts on how Google’s research aims to transform hearing and communication.

You may know Malcolm as one of the initiators of the Auditory Toolbox; hopefully he can reveal more about the latest models. Malcolm is a scientist in Google’s Machine Hearing Research Group. He is an Adjunct Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 30 years. He is a coauthor, with A. C. Kak, of the SIAM “Classics in Applied Mathematics” book “Principles of Computerized Tomographic Imaging”.


Prof. Dr. Sarah Verhulst
She is a Full Professor in Hearing Technology and leads the Hearing Technology lab at Ghent University.


In her keynote, Sarah will share insights into how computational auditory models and deep neural networks (DNNs) could lead to more precise diagnosis and DNN-based hearing aids.

Sarah’s interdisciplinary group works on auditory neuroscience, computational modeling, and hearing technologies. She leads several EU and FWO research projects on hearing diagnosis, treatment, and machine hearing. She is a member of the Belgian Young Academy of Sciences and a Fellow of the Acoustical Society of America.


Prof. Dr. DeLiang Wang
Professor in the Department of Computer Science & Engineering and the Center for Cognitive and Brain Sciences at The Ohio State University.

Title of presentation: Neural Spectrospatial Filter

DeLiang Wang received the B.S. and M.S. degrees from Peking (Beijing) University and the Ph.D. degree in 1991 from the University of Southern California, all in computer science. Since 1991, he has been with the Department of Computer Science & Engineering and the Center for Cognitive and Brain Sciences at The Ohio State University, where he is a Professor and University Distinguished Scholar. He received the U.S. Office of Naval Research Young Investigator Award in 1996, the 2008 Helmholtz Award from the International Neural Network Society, the 2007 Outstanding Paper Award of the IEEE Computational Intelligence Society, and the 2019 Best Paper Award of the IEEE Signal Processing Society. He is an IEEE Fellow and ISCA Fellow, and currently serves as Co-Editor-in-Chief of Neural Networks.

Dr. Antje Ihlefeld
Technical Lead for Spatial Audio Quality Research in VR/AR at Meta Reality Labs and visiting professor at the Neuroscience Institute at Carnegie Mellon University.

Title of presentation: Improving spatial quality for hearing-aid and cochlear-implant users.

Antje Ihlefeld applies principles of auditory neuroscience to immersive AR/VR technology. Prior to joining Reality Labs at Meta as Tech Lead for Auditory Perception, she was the principal investigator of a federally funded lab working on restoring hearing in individuals with profound hearing loss, and a professor of biomedical engineering. Antje is passionate about driving technological advances through science and maintains close ties with higher education. She is a visiting professor at the Neuroscience Institute at Carnegie Mellon University.


Featured talks:


Greta Tuckute is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT.

Title of presentation: Driving and suppressing the human language network using large language models

Greta’s research focuses on language processing in the brain: how do humans effortlessly extract meaning from text and speech? Her work merges neuroscience with artificial intelligence to investigate how the mind and brain process language.

Luke Meyer is a PhD candidate in the Department of Otorhinolaryngology, University Medical Centre Groningen, and the Kolff Institute, University of Groningen.

Title of presentation: Humanoid Robot as an Audiological Testing Interface

Luke’s PhD research explores the possibility of using a humanoid NAO robot as an alternative interface for hearing tests. His research combines robotics and human-robot interaction with audiology and psychophysical testing.

Dr. Mohsen Imani is an Assistant Professor in the Department of Computer Science at UC Irvine, where he directs the Bio-Inspired Architecture and Systems Laboratory (BIASLab). He works on practical problems in brain-inspired computing and machine learning.

Title of presentation: Brain-inspired Hyperdimensional Computing for Real-time and Robust Health Monitoring

Mohsen’s research goal is to design real-time, robust, and programmable computing platforms that can natively support a wide range of learning and cognitive tasks on edge devices. Dr. Imani received his Ph.D. from the Department of Computer Science and Engineering at UC San Diego and has published over 170 papers in top conferences and journals. His contributions have led to a new direction in brain-inspired hyperdimensional computing that enables ultra-efficient and real-time learning and cognitive support, and his research has helped initiate multiple industrial and governmental research programs. Dr. Imani’s work has been recognized with several awards, including the 2023 DARPA Young Faculty Award, the 2022 DARPA Riser Award, the Bernard and Sophia Gordon Engineering Leadership Award, and the Outstanding Researcher Award. He also received the Best Doctorate Research award from UCSD and several best paper awards and nominations at top conferences.

Special sessions:

Special Session A: Emerging Trends for Paediatric Challenges – Chair: Viji Easwar

  • Al-Rahim Habib: DrumBeat.ai: Artificial intelligence to triage ear disease in rural and remote areas
  • Patrick Wong: Neural Speech Encoding to Predict Language Outcome from Infancy
  • Colette McKay: Assessing infant hearing using fNIRS
  • Viji Easwar: EEG to assess audibility of speech with hearing aids

Special Session B: Inclusive Design and Assistive Technology – Chair: Nicky Chong-White

  • Jessica Korte: Engaging Culturally Deaf People with Assistive Technology: A Pathway to Inclusion
  • Nicky Chong-White: Breaking Communication Barriers: Live Captioning and Apple AirPods Pro
  • Cassie Hames: ‘See Me’: Elevating Public Transport Experience for All
  • Andrew Bellavia: Auracast: Exploring the Intersection of Universal Design and Accessibility

Special Session C: Cadenza Challenge: Improving music for those with a hearing loss – Chair: Trevor Cox
Scott Bannister1, Alinka E. Greasley1, Gerardo Roa Dabike2, Trevor J. Cox2, Bruno M. Fazenda2, Rebecca R. Vos2, Simone Graetzer2, Jennifer L. Firth3, William M. Whitmer3, Michael A. Akeroyd3, Jon P. Barker4

1School of Music, University of Leeds, UK; 2Acoustics Research Centre, University of Salford, UK; 3School of Medicine, University of Nottingham, UK; 4Department of Computer Science, University of Sheffield, UK

How can we process and remix music so that it sounds better for those with a hearing loss? The Cadenza project (https://cadenzachallenge.org/) is advancing our understanding of what music personalised for someone with a hearing loss should sound like. We have been running a sensory panel with hearing aid users to develop metrics of music audio quality. At VCCA2023, we will demonstrate a listening test using the scales arising from this panel. We are also running machine learning challenges to catalyse new signal processing to improve listening experiences for hearing aids and consumer devices. We will outline one of the live challenges and demonstrate the baseline software system. Input from discussions at VCCA will help shape future work in Cadenza.
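As a purely illustrative sketch (not the Cadenza baseline and not part of the challenge software), the Python snippet below applies a naive per-band gain to an audio signal, roughly the kind of frequency-dependent amplification a listener with a sloping high-frequency hearing loss might need. The band edges, gains, and test signal are hypothetical; only NumPy and SciPy are assumed.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100                            # sample rate in Hz
BAND_EDGES = [250, 1000, 4000, 8000]  # crossover frequencies in Hz (hypothetical)
GAINS_DB = [0, 5, 15, 20]             # per-band gain in dB for a sloping loss (hypothetical)

def bandpass(low, high, fs, order=4):
    # Design a Butterworth band-pass filter as second-order sections.
    return butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

def compensate(signal, fs=FS):
    # Split the signal into bands, apply a fixed gain per band, and sum.
    edges = [20] + BAND_EDGES
    out = np.zeros_like(signal)
    for lo, hi, gain_db in zip(edges[:-1], edges[1:], GAINS_DB):
        sos = bandpass(lo, hi, fs)
        out += sosfilt(sos, signal) * 10 ** (gain_db / 20)
    return out

if __name__ == "__main__":
    t = np.arange(FS) / FS
    music = 0.1 * np.sin(2 * np.pi * 440 * t)  # stand-in for a music excerpt
    print(compensate(music).shape)

A real system would go much further, for example per-listener fitting, dynamic range compression, and separating sources before remixing, which is the space the Cadenza challenges explore.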

Special Session E: Large Language Models and Chatbots – Chair: Jan-Willem Wasmann

In this special session, De Wet Swanepoel will update us on the latest developments in Large Language Models (LLMs) and how to apply them in practice. De Wet will guide us through the potential of AI chatbots in hearing healthcare. After the presentation, Jan-Willem Wasmann will lead a panel discussion with Karrie Recker and Greta Tuckute, providing an opportunity for questions and further discussion.
  • Presentation – De Wet Swanepoel – AI chatbots in hearing healthcare
  • Q&A and panel discussion
Dr. De Wet Swanepoel is Professor of Audiology at the University of Pretoria, South Africa, and adjunct professor in Otolaryngology-Head & Neck Surgery, University of Colorado School of Medicine. He is Editor-in-Chief of the International Journal of Audiology and founder of the digital health company hearX Group.

Up Next

Stay tuned for more details on the upcoming VCCA2024 and other Computational Audiology Network events. We look forward to your participation!

VCCA2023 Links