VCCA2022 – Program Highlights

The scientific program of VCCA2022 will combine keynote and featured talks with scientific contributions to highlight the wide range of world-class research and hot topics in computational audiology. Four special sessions will be held to showcase and discuss developments and applications of remote audiology, predictive coding, machine learning for hearing devices, and virtual reality for hearing research.

The program will be organised in blocks to allow for participation across different time zones. Register for free.


Thanks to the VCCA2022 Sponsors!

Keynote talks


Deep machine learning in audiology: Models of perception and automated listening tests

In this talk, I will provide two examples of how deep learning can be exploited in the context of audiology. First, models of speech perception will be introduced that aim at predicting speech intelligibility by using techniques from current speech technology. Second, automatic speech recognition is presented as a tool for conducting hearing tests. In a clinical setting, listeners’ responses are usually collected by a human supervisor, which is time-consuming and expensive. I will show results for speech audiometry using an automatic system in the clinic and briefly introduce an Alexa skill for performing a screening procedure at home.
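
To make the second idea concrete, below is a minimal, runnable sketch of an automated adaptive speech-in-noise test in which ASR replaces the human supervisor. A simulated listener stands in for the record-and-recognise step; in a real system, the response would be recorded and scored by a speech recogniser. All function names and parameters are illustrative assumptions, not the speaker's actual implementation.

    import math, random

    def respond(snr_db, srt_db=-6.0, slope=1.0):
        # Simulated listener: the probability of a correct repetition follows
        # a logistic psychometric function of the presentation SNR. In a real
        # test, this step would play the stimulus, record the listener's
        # answer, and score the ASR transcript against the target words.
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_db)))
        return random.random() < p_correct

    def adaptive_srt(n_trials=30, snr_db=0.0, step_db=2.0):
        # 1-up/1-down staircase converging on the 50%-correct point,
        # i.e. the speech reception threshold (SRT).
        track = []
        for _ in range(n_trials):
            snr_db += -step_db if respond(snr_db) else step_db
            track.append(snr_db)
        return sum(track[-10:]) / 10  # average the final trials as the estimate

    print(f"Estimated SRT: {adaptive_srt():.1f} dB SNR")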

Big Hearing Science: Making More from Data, Code, and Models

In the world of machine learning, artifacts like data, code, and predictive models are widely shared to accelerate research progress. This is the result of technical, institutional, and cultural investment in infrastructure. This keynote will discuss lessons and approaches from other disciplines that may support the next generation of data-intensive hearing science.

Prof. Stefan Launer (Sonova AG): Holistic hearing and health care in the context of digital health care

Hearing systems have entered the age of connectivity: wireless connectivity based on standard, widely available communication protocols has become a standard component of every hearing device. Connectivity has a wide-ranging impact on the devices and the functionalities available to patients, as well as on the way hearing health care services are delivered. Today we are expanding the scope of technology development beyond the classical aspects of hearing instruments to offer innovative approaches to hearing health care along the entire client journey. The ear is also turning into a hotspot for vital-sign monitoring, with various sensors integrated into ear-level devices. Moreover, hearing loss is strongly correlated with, for example, type II diabetes, an increased risk of cognitive decline, a higher risk of falls, and various other health issues. Overall, hearing aids are morphing into digital health agents serving the people most in need of a broader, more holistic approach to hearing and health care. In my presentation I will discuss this transformation of hearing aids into health agents in the context of current overall trends in digital health care.

The music of silence: Investigating the predictive brain with music and electrophysiology

The human brain rapidly learns regularities of our sensory environment, helping us to predict and prepare ourselves for future actions. Prediction mechanisms have an even more fundamental role, as they have been shown to shape perception itself. While the impact of prediction on perception has been observed in a myriad of scenarios, the underlying neural mechanisms remain under intense debate. One challenge is that most investigations have been carried out with simple sensory stimuli and it remains unclear how those findings apply to more complex and realistic scenarios. In this talk, I will discuss prediction mechanisms in the context of music perception. I will describe my research framework to assess the neural tracking of sounds with ecologically-valid stimuli and temporal response function analyses. Then, I will present a series of recent EEG and ECoG results on music perception that provide a provocative perspective on the role of predictions in auditory processing.
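
For readers unfamiliar with temporal response function (TRF) analyses, the sketch below shows the core computation: ridge regression from a time-lagged stimulus feature (such as the acoustic envelope) to a recorded neural channel. Synthetic data stand in for real EEG; all shapes and parameters are illustrative assumptions, not the speaker's pipeline.

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 64                                  # sampling rate (Hz)
    n = fs * 60                              # one minute of data
    stim = rng.standard_normal(n)            # stimulus envelope (stand-in)

    # Synthesise an "EEG" channel as the stimulus convolved with a known
    # kernel plus noise; the goal is to recover that kernel (the TRF).
    L = 16                                   # number of lags to model
    true_trf = np.exp(-np.arange(L) / 4.0) * np.sin(np.arange(L) / 2.0)
    eeg = np.convolve(stim, true_trf)[:n] + rng.standard_normal(n)

    # Lagged design matrix: X[t, k] = stim[t - k] for lags k = 0..L-1.
    X = np.column_stack([np.roll(stim, k) for k in range(L)])
    X[:L, :] = 0                             # discard wrapped-around samples

    # Ridge-regularised least squares: w = (X'X + lam*I)^(-1) X'y.
    lam = 1e2
    w = np.linalg.solve(X.T @ X + lam * np.eye(L), X.T @ eeg)

    print(f"TRF recovered, r = {np.corrcoef(true_trf, w)[0, 1]:.2f}")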

Virtual Reality in hearing research: achievements and future challenges

Virtual Reality (VR) has the potential to provide dynamic and immersive audio-visual experiences which are at the same time very realistic and highly controllable. Several successful attempts have been made in the past to create and validate VR versions of standard audiological tests, as well as to design and prototype new assessment procedures and technologies in order to obtain more meaningful and ecologically valid data. Similarly, work has been done on hearing training, i.e., on improving perceptual skills in tasks such as speech understanding and sound source localisation. Despite the potential of these approaches, several challenges remain open, and others have not yet been tackled.


Featured talks


Best Practices for Data Sharing in Computational Audiology: How to Accelerate Discovery?

It is widely believed that large repositories of shared scientific data will provide novel insights beyond the validation of research results by independent groups, but how will the field of computational audiology realize that goal? This talk will provide a brief overview of efforts by the National Institutes of Health (NIH) to encourage widespread access to shared data and, together with other advances in data science, accelerate the discovery of insights that improve the lives of millions of people with communication disorders.

Special sessions

Remote audiology

Prof. Erick Gallun, Dr. Ellen Peng, and Prof. De Wet Swanepoel

Karina De Sousa: Remote digits-in-noise testing to triage hearing loss

Vicky Zhang: Comparing speech communication in in-person and videoconference-based assessments in normal-hearing people: A pilot study

Liesbeth Gijbels: Considerations for Virtual Studies of Early Childhood Development

Esteban Lelo de Larrea-Mancera: Gamification effect (or rather, Using gaming technology to train auditory function)

Jo Evershed: Why and How to Gamify Auditory Research (Workshop)

Panel discussion

Predictive coding

Dr. Bernhard Englitz, Dr. Emma Holmes, and Prof. Floris de Lange

Emma Holmes: An introduction to predictive coding, active inference, and applications to speech perception

Floris de Lange: Predictive neural representations in music

Bernhard Englitz: Evidence for Predictive Coding in the Auditory System and its Use in Acoustic Filtering

General Discussion

Machine learning challenges to improve hearing devices

Prof. Jon Barker (University of Sheffield, UK), Prof. Trevor Cox (University of Salford, UK), Prof. Annamaria Mesaros (Tampere University, Finland), and Zehai Tu (University of Sheffield, UK)

In many machine learning domains (e.g., speech recognition, speech synthesis, scene classification), rapid advances have been made by adopting a ‘shared-task’ methodology, i.e., inviting teams to compete in open machine learning challenges with shared datasets and shared objectives. This session focuses on the challenges and opportunities of using this approach to bring advances to hearing device processing. The session will be a mix of presentations and group discussions. A set of lightning talks will present three current shared-task projects in the domains of speech, music, and environmental audio. These case studies will be used to motivate group discussion about directions and approaches for future hearing device machine learning task design. (A minimal sketch of how such challenges are typically scored appears after this session’s programme.)

Introduction

Jon Barker / Zehai Tu: The Clarity Project (claritychallenge.org): Machine Learning Challenges to Improve Speech in Noise for People with Hearing Loss

Trevor Cox: The Cadenza Project (cadenzachallenge.org): Machine-Learning Challenges to Improve Music Listening for People with Hearing Loss

Annamaria Mesaros: DCASE Challenges (dcase.community): Environmental Audio

Trevor Cox: Practical issues in running challenges

Discussion
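
As referenced in the session description above, here is a minimal sketch of the scoring side of a shared-task challenge: every team’s system is run over the same evaluation scenes and ranked with the same objective metric. The function names, data layout, and toy metric below are illustrative assumptions, not the actual Clarity, Cadenza, or DCASE evaluation code.

    import statistics

    def evaluate(submissions, scenes, metric):
        # Score each submitted system on the shared scenes and rank by the
        # mean of the shared metric (higher is better).
        leaderboard = {
            team: statistics.mean(
                metric(system(s["noisy"]), s["reference"]) for s in scenes
            )
            for team, system in submissions.items()
        }
        return dict(sorted(leaderboard.items(), key=lambda kv: -kv[1]))

    # Toy stand-ins: "signals" are lists of floats and the metric is a
    # negative mean-squared error, in place of perceptual metrics such as
    # intelligibility or audio-quality scores.
    def neg_mse(output, reference):
        return -statistics.mean((o - r) ** 2 for o, r in zip(output, reference))

    scenes = [{"noisy": [0.2, 0.5, 0.1], "reference": [0.0, 0.4, 0.0]}]
    submissions = {
        "baseline": lambda x: x,                   # unprocessed passthrough
        "team_a": lambda x: [v * 0.8 for v in x],  # crude attenuation
    }
    print(evaluate(submissions, scenes, neg_mse))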

Virtual reality for hearing research and auditory modeling in realistic environments

Dr. Axel Ahrens, Dr. Maartje Hendrikse, and Dr. Lorenzo Picinali

Lubos Hladek: On behavior during free conversations in a realistic audio-visual simulation of an underground station

Bhavisha Parmar: Measuring outcomes of VR-based spatial hearing interventions

Merle Gerken: Speech Recognition Predictions in Complex Auditory Environments for Listeners with Hearing Impairment

Maartje Hendrikse: Improving hearing device fitting with virtual reality

Abigail Kressner: Towards better predictions of speech intelligibility in cochlear implant recipients

Thibault Vincente: Predicting the effect of hearing impairment and presentation level on binaural speech intelligibility in noise

Panel discussion