Authors: Nicky Chong-White¹, Jason Heeris¹, Georgie Kennedy²
¹National Acoustic Laboratories, Sydney, Australia
²Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
Background: Complaints of struggling to hear speech, especially in noisy environments, are not uncommon among people whose hearing thresholds fall within normal limits. These individuals often miss parts of conversations and become frustrated when audiometric tests indicate they have “normal” hearing. Few clinical tools are currently available to detect this hidden hearing loss. This project aims to predict whether a person perceives abnormal hearing difficulty in background noise using basic client information and standard audiological measures, without requiring additional equipment or tests.
Methods: Standard clinical measures of hearing function and responses to a detailed questionnaire about noise exposure and lifestyle were obtained from 1400 Australians aged 11 to 35 years (mean = 22.1 years) as part of the NAL iHEAR study. Participants were classified based on their responses to questions about hearing ability in different environments. Participants with hearing thresholds outside the normal range were excluded from the dataset.
Results: Shallow machine learning models taking the audiogram, otoacoustic emissions, age and gender as inputs achieved a classification accuracy of 74% for identifying hidden hearing loss (self-reported ‘difficulty in noise’). Deep learning models and additional questionnaire data as inputs are currently being investigated to improve these results.
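The shallow-model approach described above can be sketched as follows. This is a minimal illustration only: the feature layout (pure-tone thresholds, an OAE amplitude summary, age, gender), the synthetic data, and the choice of a random forest classifier are all assumptions for demonstration, not the actual NAL iHEAR dataset or modelling pipeline.

```python
# Hypothetical sketch of a shallow classifier predicting self-reported
# difficulty-in-noise from basic audiological features.
# All data below is synthetic; features and labels are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1400  # matches the cohort size reported in Methods

# Assumed feature layout: pure-tone thresholds (dB HL) at four test
# frequencies, an OAE amplitude summary (dB SPL), age (years), gender.
thresholds = rng.normal(5.0, 5.0, size=(n, 4))
oae = rng.normal(10.0, 3.0, size=(n, 1))
age = rng.uniform(11.0, 35.0, size=(n, 1))
gender = rng.integers(0, 2, size=(n, 1)).astype(float)
X = np.hstack([thresholds, oae, age, gender])

# Synthetic binary label ('difficulty in noise') loosely tied to the
# features so the toy model has some signal to learn.
signal = 0.1 * thresholds.sum(axis=1) - 0.2 * oae[:, 0] + 0.02 * age[:, 0]
y = (signal + rng.normal(0.0, 1.0, n) > np.median(signal)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In practice, performance on real clinical data would be estimated with cross-validation and reported alongside sensitivity/specificity rather than accuracy alone, since the two self-report classes are unlikely to be balanced.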
Conclusions: Preliminary results indicate that there is potential to predict self-reported difficulty hearing in noise using basic client information and standard audiological measures. The goal is to develop data-driven clinical assessment tools for audiologists to quickly identify clients who may have hidden hearing loss, so they can provide appropriate support and recommendations.