Quarterly update Q1 2021

Newsroom Computational Audiology, April 4

Author: Jan-Willem Wasmann, audiologist at the ENT department of the Radboud University Medical Center Nijmegen in the Netherlands.

Below you will find the latest news and developments in computational audiology: the VCCA2021 call for abstracts is open, the ARO symposium recording is now available, the winner of the Computational Audiology Award 2020 has been announced, an initiative for sharing online resources has started, and an overview of publications related to computational audiology from the first quarter of 2021.

Meet the heroes powering Computational Audiology. Design by Sigrid Polspoel.


VCCA2021 update

Abstract submission is now open for the 2nd Virtual Conference on Computational Audiology (VCCA), which will take place on 25 June 2021.

Abstract submission deadline: 1 May 2021

We anticipate strong interest in the field due to an outstanding scientific program with world-leading keynote speakers (Prof Brian CJ Moore, University of Cambridge; Prof Josh McDermott, MIT; Prof Mounya Elhilali, Johns Hopkins; Prof Nick Lesica, UCL) and featured talks by early-career researchers on hot topics.

ARO Symposium ‘Emerging Capabilities for Evaluating Human Hearing’

The recording of the ARO Symposium ‘Emerging Capabilities for Evaluating Human Hearing’ is now freely accessible to visitors of computationalaudiology.com via the ARO forum, along with an online hearing test demonstration.

Computational Audiology Award 2020

Leontien Pragt has won the Computational Audiology Award! She received the prize (a hearX Self Test Kit) for running an innovative AI speech-to-text program for the deaf, for her contributions to the computational audiology website, and for her creative presentation, including a live demonstration, at the VCCA2020 conference. Leontien told us she plans to use the hearX mobile audiometer to experiment with remote care. Once traveling is possible again, she hopes to contribute in Tanzania to AfriKNO, a Dutch-based, independent non-profit organization that structurally stimulates, supports, educates, and thereby improves public ENT (Ear, Nose, Throat) departments on the African continent. We thank the hearX Group for sponsoring this award.
Winner Computational Audiology Award: Leontien Pragt (right)

A hub to share online resources

We created a central hub to share resources related to computational audiology, including resources published on general platforms such as OSF, Zenodo, and GitHub. We are also collecting dedicated auditory toolboxes such as the Auditory Modeling Toolbox (AMT). The basic idea is to use computationalaudiology.com as a central hub for sharing resources that are useful to researchers and clinicians.

Purpose:

  • sharing research software, tools, and models
  • sharing best practices (data policies, software licensing), inspiring peers, and increasing transparency
  • facilitating cooperation across centers to increase sample sizes and strengthen the robustness of experimental evaluations
  • building a community that fosters effective collaboration and uses shared tools and data-sharing pipelines
Please contact us (via resources@computationalaudiology.com) if you would like to publish a blog about your project or share an online resource (e.g. a demonstration).


Other developments

  • Simone Graetzer and Trevor Cox launched the Clarity Challenge.
  • The CI Hackathon sparked interest in cochlear implant research and in innovation for cochlear implant recipients. The organizing team intends to share the results of the hackathon, and we look forward to hearing more from them.


Recent publications related to computational audiology

Did we miss a publication? Please send your suggestion to resources@computationalaudiology.com.

The publications below were found using the query [(Computational Audiology) OR ((Machine Learning) AND audiology)] in Google Scholar.

Pitathawatchai et al. (2021) used a deep neural network to impute missing data in audiograms collected in Thailand, a nice example of how state-of-the-art algorithms can now be applied anywhere. Wasmann et al. (2021) described a vision of how computational audiology can address the global burden of hearing loss, but also raised concerns about the need for new policy on AI, big data, and audiology. Sundgaard et al. (2021) used deep metric learning to diagnose otitis media from otoscopy images; their algorithm performs on par with clinical experts. The hearing aid research community can now use an open-source software platform that stimulates sustainable and reproducible research to improve assistive hearing systems: the open community software platform (openMHA) was developed by Kayser et al. (2021) and is available via Zenodo.
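To give a feel for what audiogram imputation means in the simplest possible terms, here is a minimal toy sketch that fills in a missing threshold by linear interpolation on a log-frequency scale. This is only an illustrative baseline, not the deep-learning method of Pitathawatchai et al.; the function name and data layout are our own assumptions.

```python
import math

def impute_audiogram(audiogram):
    """Fill missing thresholds (None) by linear interpolation on log2-frequency.

    audiogram: dict mapping frequency (Hz) -> threshold (dB HL), None if missing.
    Returns a new dict with the missing values filled in.
    """
    freqs = sorted(audiogram)
    known = [(math.log2(f), audiogram[f]) for f in freqs if audiogram[f] is not None]
    if len(known) < 2:
        raise ValueError("need at least two measured thresholds to interpolate")
    filled = dict(audiogram)
    for f in freqs:
        if filled[f] is None:
            x = math.log2(f)
            # nearest measured points below and above (clamped at the edges)
            lo = max((p for p in known if p[0] <= x), default=known[0])
            hi = min((p for p in known if p[0] >= x), default=known[-1])
            if lo[0] == hi[0]:
                filled[f] = lo[1]  # outside the measured range: clamp
            else:
                t = (x - lo[0]) / (hi[0] - lo[0])
                filled[f] = lo[1] + t * (hi[1] - lo[1])
    return filled

# Example: 500 Hz lies halfway between 250 Hz and 1000 Hz on a log axis,
# so its imputed threshold is the midpoint of 20 and 30 dB HL.
print(impute_audiogram({250: 20, 500: None, 1000: 30})[500])  # -> 25.0
```

Real imputation models, like the one in the paper, learn patterns across many audiograms rather than interpolating within a single one; this sketch only shows the shape of the problem.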

These and many more developments are published below.

Bibliography Q1 2021

Barbour, D. L., & Wasmann, J.-W. A. (2021). Performance and Potential of Machine Learning Audiometry. The Hearing Journal, 74(3), 40–43. https://doi.org/10.1097/01.HJ.0000737592.24476.88

Dotan, A., & Shriki, O. (2021). Tinnitus-like “hallucinations” elicited by sensory deprivation in an entropy maximization recurrent neural network. BioRxiv, 2021.01.11.426188. https://doi.org/10.1101/2021.01.11.426188

Hülsmeier, D., Schädler, M. R., & Kollmeier, B. (2021). DARF: A data-reduced FADE version for simulations of speech recognition thresholds with real hearing aids. Hearing Research, 404, 108217. https://doi.org/10.1016/j.heares.2021.108217

Jones, S. A., & Noppeney, U. (2021). Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex, 138, 1–23. https://doi.org/10.1016/j.cortex.2021.02.001

Kayser, H., Herzke, T., Maanen, P., Zimmermann, M., Grimm, G., & Hohmann, V. (2021). Open community platform for hearing aid algorithm research: Open Master Hearing Aid (openMHA). ArXiv:2103.02313 [Cs, Eess]. https://doi.org/10.5281/zenodo.4601604

Kim, S. (n.d.). AI-Driven Insights from AI-Driven Data—Hearing Review. Retrieved April 5, 2021, from https://www.hearingreview.com/hearing-products/hearing-aids/ai

Melgarejo-Nagata, H., & Cabanillas-Carbonell, M. (2021). Analysis of emerging technologies for the social inclusion of people with hearing disabilities: A review of the scientific literature from 2005–2020. Clinical Medicine, 08(03), 14.

Mepani, A. M., Verhulst, S., Hancock, K. E., Garrett, M., Vasilkov, V., Bennett, K., … & Maison, S. F. (2021). Envelope following responses predict speech-in-noise performance in normal hearing listeners. Journal of Neurophysiology. https://doi.org/10.1152/jn.00620.2020

Pitathawatchai, P., Chaichulee, S., & Kirtsreesakul, V. (2021). Robust machine learning method for imputing missing values in audiograms collected in children. International Journal of Audiology, 0(0), 1–12. https://doi.org/10.1080/14992027.2021.1884909

Ratnanather, J. T., Bhattacharya, R., Heston, M. B., Song, J., Fernandez, L. R., Lim, H. S., Lee, S.-W., Tam, E., Yoo, S., Bae, S.-H., Lam, I., Jeon, H. W., Chang, S. A., & Koo, J.-W. (2021). An mHealth App (Speech Banana) for Auditory Training: App Design and Development Study. JMIR MHealth and UHealth, 9(3), e20890. https://doi.org/10.2196/20890

Sanchez Lopez, R., Fereczkowski, M., Santurette, S., Dau, T., & Neher, T. (2021). Towards Auditory Profile-Based Hearing-Aid Fitting: Fitting Rationale and Pilot Evaluation. Audiology Research, 11(1), 10–21. https://doi.org/10.3390/audiolres11010002

Sanchez-Lopez, R., Dau, T., & Whitmer, W. M. (2021). Audiometric profiles and patterns of benefit: a data-driven analysis of subjective hearing difficulties and handicaps. International Journal of Audiology, 1–10. https://doi.org/10.1080/14992027.2021.1905890

Shekar, R. C. M. C., Belitz, C., & Hansen, J. H. L. (2021). Development of CNN-Based Cochlear Implant and Normal Hearing Sound Recognition Models Using Natural and Auralized Environmental Audio. 2021 IEEE Spoken Language Technology Workshop (SLT), 728–733. https://doi.org/10.1109/SLT48900.2021.9383550

Sundgaard, J. V., Harte, J., Bray, P., Laugesen, S., Kamide, Y., Tanaka, C., Paulsen, R. R., & Christensen, A. N. (2021). Deep metric learning for otitis media classification. Medical Image Analysis, 102034. https://doi.org/10.1016/j.media.2021.102034

Uhler, K., Hunter, S., & Gilley, P. M. (2021). Mismatched response predicts behavioral speech discrimination outcomes in infants with hearing loss and normal hearing. Infancy, 26(2), 327–348. https://doi.org/10.1111/infa.12386

Wang, Q., Qian, M., Yang, L., Shi, J., Hong, Y., Han, K., … & Wu, H. (2021). Audiometric Phenotypes of Noise-Induced Hearing Loss by Data-Driven Cluster Analysis and Their Relevant Characteristics. Frontiers in Medicine, 8. https://doi.org/10.3389/fmed.2021.662045

Wasmann, J.-W. A., & Barbour, D. L. (2021). Emerging Hearing Assessment Technologies for Patient Care. The Hearing Journal, 74(3), 44. https://doi.org/10.1097/01.HJ.0000737596.12888.22

Wasmann, J.-W. A., Lanting, C. P., Huinck, W. J., Mylanus, E. A. M., van der Laak, J. W. M., Govaerts, P. J., Swanepoel, D. W., Moore, D. R., & Barbour, D. L. (2021). Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age. Ear and Hearing, Publish Ahead of Print. https://doi.org/10.1097/AUD.0000000000001041

Wimalarathna, H., Ankmnal-Veeranna, S., Allan, C., Agrawal, S. K., Allen, P., Samarabandu, J., & Ladak, H. M. (2021). Comparison of machine learning models to classify Auditory Brainstem Responses recorded from children with Auditory Processing Disorder. Computer Methods and Programs in Biomedicine, 200, 105942. https://doi.org/10.1016/j.cmpb.2021.105942

Zhang, Q., Wang, D., Zhao, R., & Yu, Y. (2021). SoundLip: Enabling Word and Sentence-level Lip Interaction for Smart Devices. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(1), 43:1-43:28. https://doi.org/10.1145/3448087

Last edited April 16.