Investigating the Efficacy of the Automatic Classifier AutoSense Sky OS for Children with Cochlear Implants in a Virtual Reality Classroom and other Complex Acoustic Environments

Authors: Maartje Hendrikse1, André Goedegebure1, Kars Tjepkema1, Jantien Vroegop1

1Erasmus MC

Background: Modern hearing devices incorporate automatic classifiers that categorize acoustic environments and adjust device settings accordingly. Directional microphones can improve speech perception from the front in noisy conditions, but they attenuate sounds from other directions, so classifiers must activate them cautiously. The requirements for classifiers differ between children and adults because they encounter distinct listening situations. This study aims to evaluate the potential benefits of AutoSense Sky OS, a classifier designed specifically for children and used in Advanced Bionics cochlear implants (CIs).

Methods: A technical evaluation assessed how the pediatric and adult versions of AutoSense classified complex acoustic conditions. In a crossover intervention study with children with CIs (target N=12), each child tested two programs: one with AutoSense activated and one with it deactivated (microphone in omnidirectional mode). Using a new listening test in a virtual reality classroom setting, speech intelligibility from the front and sound detection from other directions were measured with both programs. During a two-week take-home period with each program, parents and children rated performance differences using questionnaires.

Results: The technical evaluation revealed no differences between the pediatric and adult versions of AutoSense. Preliminary results from the intervention study, based on 6 participants, showed differences in speech intelligibility from the front and in the detection of stimuli from other directions between the AutoSense-active and AutoSense-inactive programs. Children were enthusiastic about performing tests in virtual reality and were able to perform the combined tasks.

Conclusion: The new listening test in the virtual reality classroom effectively quantified the impact of AutoSense on speech intelligibility from the front and sound detection from other directions.