Newsroom Computational Audiology, September 2023
We trust you enjoyed a restful summer break. In this edition, we introduce the VCCA2024 organisers, share our VCCA2024 survey, provide access to the VCCA2023 public recordings, and present the VCCA2023 conference report written by Jermy Pang & Nikki Philpott. Computational Audiology is now present on Wikipedia in Brazilian Portuguese, Dutch, and English. Please consider joining the ICASSP Machine Learning Challenge to improve music for those with a hearing loss, and don’t forget the final manuscript call for Trends in Digital Hearing Health and Computational Audiology.
We are pleased to announce the next Virtual Conference on Computational Audiology, VCCA2024. This will bring together hearing, audiology, AI and computational researchers from across the world. We are seeking feedback on preferred dates and time zones via our survey here: https://forms.gle/MMbxVCw7nDou2L3EA. The deadline for completing this survey is Saturday 30th September 2023.
This conference will be chaired by Simone Graetzer (University of Salford) with co-chairs Trevor Cox (also Salford) and Jan-Willem Wasmann (Radboud University Medical Center). Technical chairs will include Jon Barker (University of Sheffield), Michael Akeroyd and Graham Naylor (University of Nottingham), Alinka Greasley (University of Leeds) and John Culling (Cardiff University).
VCCA2024 will build on previous conferences to provide an interactive platform for the presentation and discussion of the latest research in Computational Audiology. Key areas of exploration will include the application of computational approaches to audiological interventions and hearing science, AI in audiology and hearing healthcare, auditory neuroscience, and acoustics to facilitate hearing. The programme will feature submitted talks, special sessions and keynote presentations.
Simone, Trevor and the technical chairs are from the Cadenza and Clarity projects. Both projects are running a series of machine learning challenges to improve the processing of sound for those with hearing loss. Cadenza is improving the audio quality of music from hearing aids or consumer devices; see elsewhere in the newsletter for more on the Cadenza ICASSP 2024 challenge that has just launched. Clarity is focussed on the intelligibility of speech processed by hearing aids. The prediction challenge series involves developing an objective metric for how intelligible speech is for someone with a hearing loss listening via a hearing aid. The results of the second prediction challenge (CPC2) were announced recently at Interspeech 2023 in Dublin. The enhancement challenge series involves developing software that enhances speech for the members of our listening panel of hearing aid wearers. The next enhancement challenges will start in 2024, including dynamic outdoor scenes with moving noise sources. The team is currently seeking feedback on the plan for this final round. To learn more and to contact the team, go to the website and join the Google Group.
We would love to hear your suggestions for VCCA2024! Please use our Google form to suggest improvements, additions, or any other ideas you have for our next conference. Your input is valuable in making our future events and activities even better, and it helps us select the date for VCCA2024!
We look back at a memorable event, due in large part to the diversity and richness of ideas, discussions, and contributions from all participants. Want to rewatch some of the material presented? We have been granted permission by several presenters to broadcast their talks on Computational Audiology TV. Check out the VCCA2023 public playlist.
1 National Acoustic Laboratories, Dharug Country, New South Wales, Australia
2 Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud university medical center, Nijmegen, The Netherlands
3 Cochlear Ltd, Mechelen, Belgium
The VCCA 2023 organising committee ACED the conference this year! It was action-packed, catering across three main sessions running on Sydney, Pretoria and California time. The topics were comprehensive and accommodated a broad selection of delegates. From improving music appreciation to paediatric considerations, applications of AI in hearing healthcare to enhancing client education about managing hearing loss, it led to a fantastic knowledge exchange across the three-day conference. Managing the technical support and navigating a long list of presentations was no small task, and it was the result of many weeks of collaboration between teams of researchers traversing various continents. The VCCA organisers and events team showcased such efficiency that delegates were shepherded appropriately and the concurrent sessions ran like clockwork. Presenters and delegates came from diverse, multidisciplinary backgrounds, including computer scientists, engineers, clinicians, and early-career researchers, which generated engaging discussions at Q&A time and in networking sessions via dedicated Slack channels.
This report would not be short if it included all the amazing talks; however, two presentations and the special sessions certainly deserve a shout-out:
Greta Tuckute (MIT, USA), a Featured Speaker and winner of this year’s Young Scientist Award, gave an inspiring presentation titled “Driving and suppressing the human language network using large language models”. Using “Transformer language models,” she showed that these accurate models of the brain’s language processing can be used to control brain activity in certain language-related areas, without invasive methods.
Prof Sarah Verhulst (Ghent University, Belgium) gave an excellent Keynote Talk on “Personalised and neural-network-based closed-loop systems for augmented hearing”. Her presentation took listeners on a journey from the foundations of auditory modelling and precision diagnostics, to clinical implications of future hearing technologies.
As part of a Special Session, the Cadenza team provided an interactive presentation on personalizing music to improve music listening experiences of people with hearing assistive technologies. The session led to a lively discussion with participants providing input and feedback that will help the Cadenza team move the needle for improved music perception in individuals with hearing loss.
Special Session A was entirely dedicated to four talks on emerging trends in paediatric hearing loss, from diagnosis to management. The talks centred on improving early detection and optimising intervention for young people, which is often challenging without objective means of accurately evaluating the listening experiences of the paediatric population.
Special Session B was on inclusive design and assistive technology, and, true to its title, the projects presented extended to other sensory disabilities. The potential to collaborate and cross-pollinate learnings is exciting, but the highlight of this session was certainly the call to action for more consumer and community involvement (or patient and public involvement, as it is known in other parts of the world). Including lived experience in the design and implementation of emerging assistive technology is an invigorating proposition.
A salient benefit of a virtual conference is certainly the ‘play-on-demand’ privilege. While it was not possible to watch all the sessions across the various time zones, the ability to access them at a time of one’s choosing is one we can all appreciate. It was a wonderfully inviting conference of intersecting disciplines, committed and united in improving hearing loss detection, holistic management, and multi-stakeholder involvement throughout the hearing loss journey. And with that, the authors are already sold on VCCA2024!
The conference report was originally written for ENT and Audiology News.
The Cadenza project is running an ICASSP Machine Learning Challenge to improve music for those with a hearing loss: https://cadenzachallenge.org/. The best entrants will be invited to present papers at ICASSP 2024 in Korea. Please enter or spread the word.
Someone with a hearing loss is listening to music via their hearing aids. The challenge is to develop a signal processing system that allows a personalised rebalancing of the music to improve the listening experience, for example by amplifying the vocals relative to the sound of the band. One approach would be to demix the music and then apply gains to the separated tracks to change the balance when the music is downmixed to stereo.
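The gain-based rebalancing step can be sketched as follows. This is a minimal illustration with hypothetical stem names and gain values, not the challenge baseline: it assumes the demix stage has already produced separated stereo stems.

```python
import numpy as np

def rebalance(stems, gains_db):
    """Rebalance separated stems and downmix back to stereo.

    stems: dict mapping stem name -> stereo array of shape (n_samples, 2)
    gains_db: dict mapping stem name -> gain in dB (missing stems get 0 dB)
    """
    mix = np.zeros_like(next(iter(stems.values())), dtype=float)
    for name, stem in stems.items():
        gain = 10.0 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear
        mix += gain * stem
    # Normalise only if the rebalanced mix would clip
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix /= peak
    return mix

# e.g. boost the vocals by 6 dB relative to the accompaniment:
# rebalance({"vocals": vocals, "band": band}, {"vocals": 6.0})
```

In practice the per-stem gains would come from the listener's personal preferences and audiogram, which is exactly the personalisation the challenge targets.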
What makes the demix more difficult than previous demix challenges?
The left and right signals you are working with are those picked up by a microphone at each ear while the person listens to a pair of stereo loudspeakers. This means the signal at each ear available for demixing is a combination of both the left and right stereo signals because of cross-talk. This cross-talk is strongest at low frequencies, where the wavelength is longest. As a result, the spatial distribution of an instrument in the microphone signals at the ears differs from that in the original left/right music signals. Stereo demix algorithms will need to be revised to allow for this frequency-dependent change. We will model the cross-talk using HRTFs (Head-Related Transfer Functions).
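The cross-talk described above can be sketched as a pair of convolutions with head-related impulse responses. This is an illustrative toy model only; the impulse responses and dictionary layout here are made up, and the challenge's own signal chain and HRTF sets will differ.

```python
import numpy as np

def ear_signals(left_src, right_src, hrirs):
    """Simulate the microphone signal at each ear for two loudspeakers.

    hrirs: dict keyed by (ear, speaker), e.g. ('L', 'right') is the
    impulse response from the right loudspeaker to the left ear.
    The ('L', 'right') and ('R', 'left') paths are the cross-talk terms.
    """
    left_ear = (np.convolve(left_src, hrirs[('L', 'left')]) +
                np.convolve(right_src, hrirs[('L', 'right')]))   # cross-talk
    right_ear = (np.convolve(right_src, hrirs[('R', 'right')]) +
                 np.convolve(left_src, hrirs[('R', 'left')]))    # cross-talk
    return left_ear, right_ear
```

Because each ear signal mixes both loudspeaker channels through frequency-dependent filters, a demixer trained on clean left/right studio stereo will see a different spatial picture at the ears, which is the revision the text calls for.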
In the long term, any demix approach will need to be causal and low latency. For ICASSP 2024, we are allowing both causal and non-causal approaches. For those attempting a low-latency solution, this is much more challenging!
Why this challenge?
We want to get more of the ICASSP community to consider diverse hearing and so allow those with a hearing loss to benefit from the latest signal processing advances.
Final Call for Manuscript Submission for Trends in Digital Hearing Health and Computational Audiology
Computational Audiology is now present on Wikipedia in Brazilian Portuguese, Dutch and English. The Portuguese article opens: “Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical care and scientific understanding of the auditory system.”
At VCCA2023, Hector Gabriel explained how Wikidata can be used to disseminate knowledge:
Help us expand our global outreach by contributing translations in the missing languages.
- Challenge accepted? 👇
- ¿Quién va a escribir la versión española?
- Türkçe sürümü kim yazacak?
- Qui écrira la version française?
- Wer wird die deutsche Version schreiben?
- Chi scriverà la versione italiana?
- Feel free to add your own language ;-)!
Stay tuned for more details on the upcoming VCCA2024 and other Computational Audiology Network events. We look forward to your participation!
- Register for updates (CAN newsletter)
- Computational Audiology TV (here you will find many recordings)
- Follow us on social media (LinkedIn)
- CAN Slack Channel