VCCA2023 – Program Highlights

The scientific program of VCCA2023 will combine interactive keynotes, featured and invited talks, and scientific contributions to highlight the wide range of world-class research and hot topics in computational audiology. We are still developing the program; more information will follow soon.

The program will be organized in three main blocks to allow for participation from different time zones. Registration will open soon.

Keynote speakers:


Dr. Malcolm Slaney
Scientist in Google’s Machine Hearing Research Group and Adjunct Professor at Stanford CCRMA.


We are looking forward to an interactive session with Malcolm! Jessica will interview Malcolm about his career and his thoughts on how Google’s research aims to transform hearing and communication.

You may know Malcolm as one of the initiators of the Auditory Toolbox; hopefully he can reveal more about the latest models. Malcolm is a scientist in Google’s Machine Hearing Research Group and an Adjunct Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 30 years. He is a coauthor, with A. C. Kak, of the SIAM “Classics in Applied Mathematics” book “Principles of Computerized Tomographic Imaging”.


Prof. Dr. Sarah Verhulst
Full Professor in Hearing Technology and head of the Hearing Technology lab at Ghent University.


In her keynote, Sarah will share insights into how computational auditory models and deep neural networks (DNNs) could lead to more precise diagnosis and DNN-based hearing aids.

Sarah’s interdisciplinary group works on auditory neuroscience, computational modeling, and hearing technologies. She leads several EU and FWO research projects on hearing diagnosis, treatment, and machine hearing. She is a member of the Belgian Young Academy of Sciences and a Fellow of the Acoustical Society of America.



Prof. Dr. DeLiang Wang
Department of Computer Science & Engineering and the Center for Cognitive and Brain Sciences at The Ohio State University.

Title of presentation: Neural Spectrospatial Filter

DeLiang Wang received the B.S. and M.S. degrees from Peking (Beijing) University and the Ph.D. degree in 1991 from the University of Southern California, all in computer science. Since 1991, he has been with the Department of Computer Science & Engineering and the Center for Cognitive and Brain Sciences at The Ohio State University, where he is a Professor and University Distinguished Scholar. He received the U.S. Office of Naval Research Young Investigator Award in 1996, the 2008 Helmholtz Award from the International Neural Network Society, the 2007 Outstanding Paper Award of the IEEE Computational Intelligence Society, and the 2019 Best Paper Award of the IEEE Signal Processing Society. He is an IEEE Fellow and ISCA Fellow, and currently serves as Co-Editor-in-Chief of Neural Networks.

Dr. Antje Ihlefeld
Technical Lead for Spatial Audio Quality Research in VR/AR at Meta Reality Labs and visiting professor at the Neuroscience Institute at Carnegie Mellon University.

Title of presentation: Improving spatial quality for hearing-aid and cochlear-implant users.

Antje Ihlefeld applies principles of auditory neuroscience to immersive AR/VR technology. Prior to joining Reality Labs at Meta as Tech Lead for Auditory Perception, she was the principal investigator of a federally funded lab working on restoring hearing in individuals with profound hearing loss, and a professor of biomedical engineering. Antje is passionate about driving technological advances through science and maintains close ties with higher education. She is a visiting professor at the Neuroscience Institute at Carnegie Mellon University.


Featured talks:


Greta Tuckute is a PhD candidate in the Department of Brain and Cognitive Sciences at MIT.

Title of presentation: Driving and suppressing the human language network using large language models

Greta’s research focuses on language processing in the brain: How do humans effortlessly extract meaning from text and speech? Her work merges neuroscience with artificial intelligence to investigate how the mind and brain process language.

More to be announced soon.

Special sessions:

More information will follow.