Future development of AI for enhancing music for listeners with hearing loss

Author: William Whitmer1

1Hearing Sciences – Scottish Section, Division of Clinical Neuroscience, University of Nottingham, Glasgow Royal Infirmary, Glasgow, Scotland

This panel session will discuss the future development of Artificial Intelligence (AI) for enhancing music listening for people with hearing loss. There is currently an accessibility gap in music listening for individuals with hearing loss. In the Cadenza project, the team is running signal processing challenges aimed at improving hearing aid technology specifically for music. Hearing aids generally employ strategies designed to enhance speech, which are often suboptimal for music. Hearing aid wearers often turn off their devices or do not use a dedicated music setting, because the existing technology degrades their music listening experience.

The panel will discuss emerging trends in AI-driven methodologies, such as sound source separation, equalisation and personalised audio rendering, that can be used to address the auditory challenges faced by this growing population. The discussion will be informed by lessons learned in the Cadenza project. Another topic will be the integration of AI algorithms into hearing aids for real-time adaptive music processing, offering customisable listening experiences tailored to individual hearing profiles. The panel will also consider the need for objective metrics that can be used to configure music processing algorithms when running listening tests is not feasible. Enhancing music listening for people with hearing loss is a multidisciplinary endeavour worth pursuing to improve quality of life across hearing abilities.