Authors: Sarthak Mangla*¹; Michael G. Heinz¹
¹Purdue University
Hearing aid benefit is often limited by the low accessibility of professional help in many parts of the world, by limitations in the individual fitting and diagnostic procedures of hearing devices, and by the tedious process of setting up hearing aids. Hearing support can also be inadequate due to limited human-machine interfaces for steering the device according to the patient's individual needs.
We present IndivHear, a deep reinforcement learning-based solution designed around the complex nature of hearing loss. We use a convolutional autoencoder with a BiLSTM between the encoder and decoder. The model is initialized with state-of-the-art noise-reduction weights and then undergoes a two-step individualized training process. First, the model is fine-tuned to fit the patient's hearing profile; this step is time-efficient because models trained for previous patients with similar hearing profiles serve as starting points. Next, the model is further adapted using Reinforcement Learning with Human Feedback (RLHF). The entire process is fully remote and self-administered, requiring no input from audiologists.
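The warm-start step above could be sketched as nearest-neighbor selection over patient hearing profiles. The audiogram format (thresholds in dB HL at standard audiometric frequencies) and the Euclidean distance metric below are illustrative assumptions, not the authors' stated implementation:

```python
# Hypothetical sketch of the warm-start step: pick the stored model of the
# previous patient whose audiogram is closest to the new patient's, and
# fine-tune starting from those weights. Profile format and distance metric
# are assumptions for illustration.

def nearest_profile(new_audiogram, patient_db):
    """Return the id of the patient whose audiogram is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(patient_db, key=lambda pid: dist(new_audiogram, patient_db[pid]))

# Thresholds (dB HL) at 250, 500, 1000, 2000, 4000, 8000 Hz -- illustrative.
db = {
    "p01": [10, 15, 20, 35, 50, 60],   # mild sloping loss
    "p02": [40, 45, 50, 55, 60, 65],   # moderate flat loss
}
new_patient = [15, 15, 25, 40, 55, 65]
init_from = nearest_profile(new_patient, db)  # fine-tune from this model
```

Any clustering or similarity measure over hearing profiles would serve the same purpose; the point is that fine-tuning starts from a nearby solution rather than from scratch.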
We introduce a novel metric, the Speech Quality Assessment (SQA), that allows the patient to grade sounds along three components: loudness, clarity, and intelligibility. In initial testing of IndivHear, the test subject rated the final model's output at or above the best SQA produced during the initial training phase. The IndivHear prototype costs around $70 for both ears and is optimized for real-time inference on any Android smartphone.
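A minimal sketch of how the three SQA components might combine into a single grade, assuming a 1-10 rating per component and equal weights (neither the scale nor the weighting is specified in the text):

```python
# Hypothetical SQA composite: weighted mean of the three patient ratings.
# The 1-10 scale and equal weights are assumptions for illustration.

def sqa_score(loudness, clarity, intelligibility, weights=(1/3, 1/3, 1/3)):
    ratings = (loudness, clarity, intelligibility)
    return sum(w * r for w, r in zip(weights, ratings))

score = sqa_score(loudness=7, clarity=8, intelligibility=9)
```

Non-uniform weights would let a patient emphasize, say, intelligibility over loudness without changing the interface.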
Reinforcement Learning with Human Feedback is generally effective for problems where it is hard to define a reward function that fully captures the desired behavior. Coupled with a state-of-the-art noise-reduction network architecture, this approach lets IndivHear adapt to the individualized hearing preferences of patients, as our initial testing indicates.
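The feedback loop can be illustrated by treating the patient's SQA rating as the reward signal and nudging a tunable parameter toward settings the patient rates higher. This bandit-style hill climb over a single hypothetical gain parameter is a stand-in for the actual RLHF procedure, whose details are not given in the abstract:

```python
import random

def rate(gain):
    """Stand-in for the patient's SQA rating; assumed to peak at gain = 6.0."""
    return 10.0 - (gain - 6.0) ** 2

def human_feedback_loop(gain, steps=200, step_size=0.5, seed=0):
    """Propose a perturbed setting; keep it only if the patient rates it higher."""
    rng = random.Random(seed)
    best = rate(gain)
    for _ in range(steps):
        candidate = gain + rng.uniform(-step_size, step_size)
        reward = rate(candidate)       # patient grades the new setting
        if reward > best:              # keep settings the patient prefers
            gain, best = candidate, reward
    return gain

tuned = human_feedback_loop(gain=2.0)  # converges toward the preferred setting
```

A real device would optimize many parameters at once with a learned policy, but the core idea is the same: human ratings substitute for a reward function that would be hard to write down.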