Authors: David A Fabry*1
Affiliations:
1 Starkey
Artificial Intelligence (AI), including rule-based machine-learning algorithms, has been used in hearing aids for the past decade to improve speech understanding via environment classification. Even the most sophisticated machine-learning systems have proven accurate only 80-85% of the time, because speech and music may each serve as either the signal of interest or background noise, complicating acoustic environment classification (AEC).
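As a minimal, illustrative sketch of why this ambiguity limits accuracy (the feature names, thresholds, and classes below are hypothetical, not any manufacturer's implementation), consider a simple rule-based AEC step in Python:

# Hypothetical rule-based AEC sketch; real systems use many more features
# and trained models rather than hand-set thresholds.

def classify_environment(level_db, modulation_depth, harmonicity):
    """Assign a coarse acoustic class from frame-level features."""
    if level_db < 40:
        return "quiet"
    if harmonicity > 0.7 and modulation_depth > 0.5:
        # Speech and music are both harmonic and strongly modulated, so this
        # rule alone cannot tell whether the signal is of interest or is
        # background, which helps explain the 80-85% accuracy ceiling.
        return "speech_or_music"
    if modulation_depth < 0.2:
        return "steady_noise"
    return "speech_in_noise"

# Example frame: moderately loud, strongly modulated, harmonic content.
print(classify_environment(level_db=65, modulation_depth=0.6, harmonicity=0.8))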
Recent advances in computational hardware and memory capacity have facilitated new AI applications, including deep neural network (DNN) architectures for processing and enhancing sound. This session will focus on AI and DNNs, specifically how new features based on these technologies are implemented in hearing aids for potential patient benefit.
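The core idea behind many DNN-based enhancement features can be sketched in a few lines; the sizes and random weights below are placeholders for illustration only, standing in for a trained network:

import numpy as np

# Illustrative mask-based enhancement: estimate a per-frequency mask from the
# noisy spectrum and scale each bin to suppress noise-dominated energy.
rng = np.random.default_rng(0)
n_bins = 128
W1 = rng.normal(size=(64, n_bins))
W2 = rng.normal(size=(n_bins, 64))

def enhance_frame(noisy_magnitudes):
    hidden = np.maximum(W1 @ noisy_magnitudes, 0.0)       # ReLU hidden layer
    mask = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))           # sigmoid mask in [0, 1]
    return mask * noisy_magnitudes                        # per-bin attenuation

noisy = np.abs(rng.normal(size=n_bins))
print(enhance_frame(noisy)[:5])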
In addition, results of recent studies will show how the combination of machine learning and "listener intent," via an additional user-activated AEC plus offsets, can optimize speech audibility or reduce background noise in challenging listening environments. Preference and speech intelligibility results will be compared for automatic AEC versus the user-activated "edge" mode, as well as for professionally optimized dedicated memories for specific listening environments versus "edge" mode.
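To make the "listener intent" idea concrete, the following sketch (parameter names and offset values are assumptions, not a vendor API) shows how a user-activated mode could layer offsets on top of the automatic AEC result:

# Hypothetical illustration of intent-driven offsets applied over AEC output.
BASE_SETTINGS = {"gain_offset_db": 0.0, "noise_reduction_db": 4.0}

def apply_listener_intent(aec_class, user_activated, intent="audibility"):
    """Return adjusted settings given the AEC class and the listener's request."""
    settings = dict(BASE_SETTINGS)
    if not user_activated:
        return settings  # automatic AEC settings only
    if intent == "audibility":
        settings["gain_offset_db"] += 3.0      # favor speech audibility
    elif intent == "comfort":
        settings["noise_reduction_db"] += 6.0  # favor comfort in noise
    if aec_class == "speech_in_noise":
        settings["noise_reduction_db"] += 2.0  # extra help in challenging scenes
    return settings

print(apply_listener_intent("speech_in_noise", user_activated=True, intent="comfort"))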
The session will also explore how hearing aid performance is on the verge of mimicking (and surpassing) human auditory performance to provide breakthrough advantages for persons with hearing loss. Audience participation is encouraged.