Evaluation of multivariate classification algorithms for hearing loss detection through a speech-in-noise test

Marta Lenatti1, Edoardo M. Polo2,3, Martina Paolini3, Maximiliano Mollura3, Marco Zanet1, Riccardo Barbieri3, Alessia Paglialonga1

1Consiglio Nazionale delle Ricerche (CNR), Istituto di Elettronica e di Ingegneria dell’Informazione e delle Telecomunicazioni (IEIIT), Milan, Italy
2DIAG, Sapienza University of Rome, Rome, Italy
3Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Milan, Italy

Background: Online speech-in-noise screening tests are becoming increasingly popular as a means to promote awareness and enable early identification of age-related hearing loss. To date, these tests are mainly based on the analysis of a single measure, the speech reception threshold (SRT). However, other features may be significant predictors of hearing loss. The aim of this study is to develop a hearing screening procedure that integrates a novel speech-in-noise test, which can be administered remotely to individuals of unknown language, with artificial intelligence (AI) algorithms that analyze features extracted from the test.

Methods: In addition to the SRT, estimated using a newly developed staircase procedure, our system extracted features such as the percentage of correct responses, average reaction time, and test duration from 177 tested ears (including 68 ears with slight/mild or moderate hearing loss). These features were fed into a collection of AI algorithms, including both explainable methods (XAI, e.g., Decision Trees) and conventional methods (e.g., Logistic Regression, Support Vector Machines), to train a multivariate classifier that identifies ears with hearing loss.
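As an illustration of this classification step, the following is a minimal sketch assuming synthetic data and scikit-learn; the feature names, value ranges, and labels are hypothetical and do not reproduce the authors' actual pipeline.

```python
# Minimal sketch of training a multivariate classifier on speech-in-noise test
# features (synthetic, hypothetical data; not the authors' actual pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_ears = 177  # number of tested ears reported in the abstract

# Synthetic stand-ins for the extracted features:
# SRT (dB SNR), % correct responses, mean reaction time (s), test duration (s), age (years)
X = np.column_stack([
    rng.normal(-8, 3, n_ears),
    rng.uniform(50, 100, n_ears),
    rng.normal(2.0, 0.5, n_ears),
    rng.normal(300, 60, n_ears),
    rng.uniform(20, 80, n_ears),
])
y = rng.integers(0, 2, n_ears)  # 1 = hearing loss, 0 = normal hearing (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Standardize the features and fit a logistic regression classifier
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]  # predicted probability of hearing loss
```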

Results: Our AI-based multivariate classifiers achieved better performance and sensitivity (e.g., Logistic Regression: AUC = 0.88; sensitivity = 0.80; specificity = 0.77) than a conventional univariate classifier based on the SRT alone (cut-off: -8.87 dB SNR; AUC = 0.82; sensitivity = 0.75; specificity = 0.81). According to the XAI methods, in addition to the SRT, other features such as the number of correct responses and age were relevant in identifying slight/mild or higher degrees of hearing loss.
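For reference, the performance comparison could be computed along these lines; this continues the synthetic-data sketch above, so the probabilities, labels, and SRT column are hypothetical, and the 0.5 probability threshold is an illustrative choice (only the -8.87 dB SNR cut-off comes from the abstract).

```python
# Sketch: AUC, sensitivity, and specificity for the multivariate classifier and
# for a univariate SRT cut-off (continues the previous synthetic-data sketch).
from sklearn.metrics import confusion_matrix, roc_auc_score

def sens_spec(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Multivariate classifier (0.5 probability threshold chosen for illustration)
auc_multi = roc_auc_score(y_test, probs)
sens_multi, spec_multi = sens_spec(y_test, probs >= 0.5)

# Univariate classifier: flag hearing loss when the SRT exceeds the cut-off
srt_test = X_test[:, 0]              # first synthetic column stands in for SRT
auc_srt = roc_auc_score(y_test, srt_test)
sens_srt, spec_srt = sens_spec(y_test, srt_test > -8.87)

print(f"multivariate: AUC={auc_multi:.2f}, sens={sens_multi:.2f}, spec={spec_multi:.2f}")
print(f"SRT cut-off:  AUC={auc_srt:.2f}, sens={sens_srt:.2f}, spec={spec_srt:.2f}")
```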

Conclusion: The proposed hearing screening procedure showed good performance in terms of hearing loss detection. Ongoing research includes the implementation of an icon-based module to assess additional features, specifically risk factors for hearing loss (e.g., noise exposure, diabetes), that will be validated on a large population.

This study was partially supported by Capita Foundation (project WHISPER, Widespread Hearing Impairment Screening and PrEvention of Risk, 2020 Auditory Research Grant).


  • New analyses have been carried out within the WHISPER (Widespread Hearing Impairment Screening and PrEvention of Risk) project. Specifically, the new insights address the training and evaluation of machine learning models for hearing loss detection through speech-in-noise testing, together with post-hoc explainability analysis (SHapley Additive exPlanations, SHAP; Partial Dependence Plots, PDPs; and Feature Permutation Importance) applied to models that are not natively explainable (e.g., Random Forests); see the sketch after this list.
  • Our Python code, together with a sample of synthetic data, is now available at https://github.com/lenattimarta/whisper_posthocXAI
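The repository above is the authoritative source for the analysis code; the following is only a minimal sketch of a post-hoc explainability workflow of this kind (SHAP, PDPs, and permutation importance applied to a Random Forest), assuming synthetic data and hypothetical feature names.

```python
# Minimal sketch of post-hoc explainability for a non-natively explainable model
# (Random Forest). Data and feature names are synthetic/hypothetical; see the
# WHISPER repository for the actual analysis code.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

rng = np.random.default_rng(0)
feature_names = ["SRT", "pct_correct", "reaction_time", "duration", "age"]
X = pd.DataFrame(rng.normal(size=(177, len(feature_names))), columns=feature_names)
y = rng.integers(0, 2, len(X))  # synthetic hearing loss labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP: per-sample, per-feature attributions (returned per class for classifiers)
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X)

# Partial Dependence Plots: average effect of single features on the prediction
PartialDependenceDisplay.from_estimator(rf, X, features=["SRT", "age"])

# Feature Permutation Importance: performance drop when a feature is shuffled
perm = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, perm.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```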