Newsroom Computational Audiology, April 23
Below you will find the latest news and developments in computational audiology: the VCCA2022 abstract deadline extended to May 1st, VCCA2022 registration now open, the launch of the CAN podcast, upcoming events, rumors, and an overview of publications related to computational audiology from the first quarter of 2022.
VCCA2022 Abstract Submission Deadline extended
The VCCA2022 submission deadline has been extended to May 1st. Please check the call for abstracts or submit directly here.
VCCA2022 program and registration
Registration for the VCCA2022 conference is open! The program is taking shape; please check the program highlights. In addition to keynotes, featured talks, and contributed talks/posters, we will have four special sessions covering highly relevant aspects of current research in computational audiology:
- Remote audiology, chaired by Prof. Erick Gallun, Dr. Ellen Peng, and Prof. De Wet Swanepoel
- Predictive coding, chaired by Dr. Bernhard Englitz, Dr. Emma Holmes, and Prof. Floris de Lange
- Machine learning for hearing devices, chaired by Zehai Tu, Prof. Jon Barker, and Prof. Trevor Cox
- Virtual reality for hearing research and auditory modeling in realistic environments, chaired by Dr. Axel Ahrens, Dr. Maartje Hendrikse, and Prof. Lorenzo Picinali
Computational Audiology Network podcast
We just launched the first two episodes of the Computational Audiology Network podcast. The show is hosted by Dennis Barbour and Jan-Willem Wasmann and covers topics relevant to researchers, clinicians, and engineers. The audiovisual version is published on Computational Audiology TV, while the audio-only version can be found on all major podcast platforms (iTunes, Spotify). Feel free to leave a rating or post a comment. Suggestions and questions can be directed to podcast@computationalaudiology.com.
Upcoming Events
6th International Conference on Cognitive Hearing Science for Communication
Full list of events in 2022
Rumors
In conjunction with the podcast, we are launching a series of challenges inspired by the novel ‘Ready Player One’ by Ernest Cline. We hope it will be serious fun. Rumor has it that the prize is magnificent. The first challenge is to find all auditory effects in the intro sound. The second challenge is to pass the adapted ASR (audio) Turing test. Here you can find the rules of the game and monitor the scoreboard.
Recent publications related to computational audiology
It is nice to see so many VCCA talks turn into papers and preprints. There have been multiple publications on machine learning in hearing healthcare and on special topics in computational audiology.
Machine Learning in hearing healthcare
In a paper in ENT & Audiology News, Jessica Monaghan and David Allen discuss how machine learning can be applied to the large collections of data gathered via hearing aids: to further optimize fittings, but also to extract information from distinct data sources or to enhance remote hearing healthcare (Monaghan & Allen, 2022).
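As a loose, hypothetical illustration of this idea (not Monaghan and Allen's method, and with entirely synthetic data), one could cluster logged hearing-aid usage to group wearers with similar acoustic lifestyles and point the fitting software toward different presets:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-ins for logged hearing-aid data: daily wear time (h),
# mean ambient level (dB SPL), fraction of time in speech-in-noise, and
# the user's own volume adjustments (dB).
logs = np.column_stack([
    rng.normal(9, 3, 500),       # wear time
    rng.normal(65, 8, 500),      # ambient level
    rng.uniform(0, 1, 500),      # speech-in-noise fraction
    rng.normal(0, 2, 500),       # volume adjustments
])

X = StandardScaler().fit_transform(logs)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Per-cluster averages could suggest different fitting presets, e.g.
# stronger noise reduction for a cluster that lives in noisy places.
for k in range(3):
    print(k, logs[clusters == k].mean(axis=0).round(1))
```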
Collecting data via automated audiometry
An example of big data collection with personal devices (>300 million hours of recordings) is the Apple Hearing Study, which aims to better understand the impact of nonoccupational sound exposure on health (Neitzel et al., 2022). The study also includes a number of app-based hearing tests. Many more self-administered approaches that assess pure-tone thresholds on par with clinical procedures are collected in a scoping review of automated audiometry (Wasmann et al., 2021). An overview of current automated audiometry approaches is listed here. An alternative approach, which determines hearing thresholds objectively from EEG using support vector machines, was developed by Djemai and Guerti (2022).
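Many of these apps automate some variant of the clinical staircase procedure. A minimal sketch of the modified Hughson-Westlake search, assuming a simulated listener rather than any of the cited implementations:

```python
import random

def simulated_response(level_db_hl: float, true_threshold: float = 35.0) -> bool:
    """Hypothetical stand-in for tone presentation plus a response button:
    the simulated listener hears levels at or above a noisy threshold."""
    return level_db_hl >= true_threshold + random.gauss(0.0, 2.0)

def hughson_westlake(start: float = 40.0, floor: float = -10.0,
                     ceiling: float = 110.0) -> float:
    """Modified Hughson-Westlake search: down 10 dB after a response,
    up 5 dB after a miss; threshold is the lowest level heard on two
    ascending presentations."""
    level = start
    ascending = False                 # only ascending-run responses count
    heard: dict[float, int] = {}
    while floor <= level <= ceiling:
        if simulated_response(level):
            if ascending:
                heard[level] = heard.get(level, 0) + 1
                if heard[level] >= 2:
                    return level      # threshold criterion met
            level -= 10
            ascending = False
        else:
            level += 5
            ascending = True
    return max(min(level, ceiling), floor)   # no response within limits

print(hughson_westlake())   # typically 35.0 for this simulated ear
```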
Predicting and assessing human speech recognition and machine speech recognition
Kim et al. (2021) showed that machine learning can predict supra-threshold behavior (speech recognition) from the pure-tone thresholds of a large group (12,697 subjects). Meanwhile, Roßbach et al. (2022) used deep-learning-based models for automatic speech recognition (ASR) to predict the speech recognition performance of hearing-impaired listeners.
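A rough sketch of the threshold-to-speech-score approach, using synthetic audiograms and a generic regressor rather than Kim et al.'s actual data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: pure-tone thresholds (dB HL) at six standard
# audiometric frequencies, and a speech discrimination score (% correct)
# loosely driven by the 0.5-2 kHz pure-tone average plus noise.
X = np.clip(rng.normal(40, 20, size=(2000, 6)), -10, 110)
pta = X[:, 1:4].mean(axis=1)
y = np.clip(100 - 0.9 * (pta - 20) + rng.normal(0, 8, 2000), 0, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.1f} percentage points")
```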
Tu et al. (2022a) used ASR to predict speech recognition by humans. Their method exploits the hidden representations of a convolutional neural network and was experimentally validated using signals from the CPC1 database.
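The mechanics of tapping a network's hidden representations can be sketched with a forward hook; note that Tu et al. use a trained speech recogniser and CPC1 signals, whereas the toy untrained CNN and random waveforms below are purely illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for an ASR acoustic model (untrained, so scores are
# meaningless; it only demonstrates how hidden activations are tapped).
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(64),
)

captured = {}
def tap(module, inputs, output):
    captured["hidden"] = output.detach()

net[2].register_forward_hook(tap)     # tap the second convolution layer

def hidden_rep(signal: torch.Tensor) -> torch.Tensor:
    net(signal.reshape(1, 1, -1))     # forward pass fills `captured`
    return captured["hidden"].flatten()

clean = torch.randn(16000)                    # placeholder 1 s waveform
degraded = clean + 0.5 * torch.randn(16000)   # noisy version of the same

# Similarity between the hidden representations of clean and degraded
# inputs serves as a crude intelligibility proxy in this sketch.
score = F.cosine_similarity(hidden_rep(clean), hidden_rep(degraded), dim=0)
print(f"representation similarity: {score.item():.3f}")
```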
Pragt et al. (2022) assessed the performance of ASR apps for the deaf using conventional clinical hearing tests designed for human listeners. They concluded that conventional performance metrics, including the word error rate, are not sufficient to assess the benefit of ASR for the deaf. Amin et al. (2022) used BERT to determine the relative importance of specific words in conversational transcripts and compared this with how deaf and hard-of-hearing listeners rated the importance of those words. Such studies can help determine relevant performance metrics.
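To see why a uniform word error rate can understate the cost of losing a key word, compare standard WER with an importance-weighted miss rate; the weights below are hand-picked for illustration, whereas Amin et al. derive importance from BERT embeddings:

```python
def word_error_rate(ref: list[str], hyp: list[str]) -> float:
    """Standard WER: word-level edit distance over reference length
    (every word counts equally)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[-1][-1] / len(ref)

def weighted_miss_rate(ref: list[str], hyp: list[str],
                       weight: dict[str, float]) -> float:
    """Crude alignment-free variant: errors on important words cost more."""
    missed = [w for w in ref if w not in hyp]
    total = sum(weight.get(w, 1.0) for w in ref)
    return sum(weight.get(w, 1.0) for w in missed) / total

ref = "meet me at gate four at nine".split()
hyp = "meet me at the gate for at nine".split()       # ASR lost "four"
weights = {"four": 5.0, "gate": 3.0}                  # key content words
print(f"WER: {word_error_rate(ref, hyp):.2f}")                            # 0.29
print(f"weighted miss rate: {weighted_miss_rate(ref, hyp, weights):.2f}")  # 0.38
```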
ASR has much potential to support communication between people, as was demonstrated in the second episode of the Computational Audiology Network podcast, in which Dimitri Kanevsky, Jessica Monaghan, and Nicky Chong-White were interviewed using a prototype ASR system specifically trained on Dimitri’s voice. All participants were impressed by how ASR facilitated the group conversation.
The publications below were found using [(Computational Audiology) OR ((Machine Learning) AND audiology)] in Google Scholar. Did we miss a publication? Please send your suggestion to resources@computationalaudiology.com.
These and many more developments are published below.
Amin, A. A., Hassan, S., Alm, C., & Huenerfauth, M. (2022). Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users. https://doi.org/10.13140/RG.2.2.28272.33289
Calado, A., Roselli, P., Errico, V., Magrofuoco, N., Vanderdonckt, J., & Saggio, G. (2022). A Geometric Model-Based Approach to Hand Gesture Recognition. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 1–11. https://doi.org/10.1109/TSMC.2021.3138589
Cieśla, K., Wolak, T., Lorens, A., Mentzel, M., Skarżyński, H., & Amedi, A. (2022). Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Scientific Reports, 12(1), 3206. https://doi.org/10.1038/s41598-022-06855-8
de Bruijn, S. E. (n.d.). Broadening the genomic landscape of sensory disorders.
Djemai, M., & Guerti, M. (2022). A genetic algorithm-based support vector machine model for detection of hearing thresholds. Australian Journal of Electrical and Electronics Engineering, 0(0), 1–8. https://doi.org/10.1080/1448837X.2021.2023080
D’Orazio, D., Salvio, D. D., & Garai, M. (n.d.). Tagging noise sources in offices through machine-learning techniques.
Edvall, N. K., Mehraei, G., Claeson, M., Lazar, A., Bulla, J., Leineweber, C., Uhlén, I., Canlon, B., & Cederroth, C. R. (2022). Alterations in auditory brain stem response distinguish occasional and constant tinnitus. Journal of Clinical Investigation, 132(5), e155094. https://doi.org/10.1172/JCI155094
Gößwein, J. A., Rennies, J., Huber, R., Bruns, T., Hildebrandt, A., & Kollmeier, B. (2022). Evaluation of a semi-supervised self-adjustment fine-tuning procedure for hearing aids. International Journal of Audiology, 0(0), 1–13. https://doi.org/10.1080/14992027.2022.2028022
Kim, H., Park, J., Choung, Y.-H., Jang, J. H., & Ko, J. (2021). Predicting speech discrimination scores from pure-tone thresholds—A machine learning-based approach using data from 12,697 subjects. PLOS ONE, 16(12), e0261433. https://doi.org/10.1371/journal.pone.0261433
Lladó, P., Hyvärinen, P., & Pulkki, V. (2022). Auditory model-based estimation of the effect of head-worn devices on frontal horizontal localisation. Acta Acustica, 6, 1. https://doi.org/10.1051/aacus/2021056
Michaud, S., Moffett, B., Rousiouk, A. T., Duda, V., & Grondin, F. (2022). SmartBelt: A Wearable Microphone Array for Sound Source Localization with Haptic Feedback. arXiv:2202.13974 [cs]. http://arxiv.org/abs/2202.13974
Monaghan, J. J., & Allen, D. (2022). Machine learning to support audiology. ENT & Audiology News, 30(6). https://www.entandaudiologynews.com/features/audiology-features/post/machine-learning-to-support-audiology
Neitzel, R. L., Smith, L., Wang, L., Green, G., Block, J., Carchia, M., Mazur, K., DePalma, G., Azimi, R., & Villanueva, B. (2022). Toward a better understanding of nonoccupational sound exposures and associated health impacts: Methods of the Apple Hearing Study. The Journal of the Acoustical Society of America, 151(3), 1476–1489. https://doi.org/10.1121/10.0009620
Podusenko, A., van Erp, B., Koudahl, M., & de Vries, B. (2022). AIDA: An Active Inference-based Design Agent for Audio Processing Algorithms. Frontiers in Signal Processing, 2, 842477. https://doi.org/10.3389/frsip.2022.842477
Pragt, L., van Hengel, P., Grob, D., & Wasmann, J.-W. (2022). Preliminary Evaluation of Automated Speech Recognition Apps for the Hearing Impaired and Deaf. Frontiers in Digital Health, 4, 806076. https://doi.org/10.3389/fdgth.2022.806076
Rastgoo, R., Kiani, K., Escalera, S., Athitsos, V., & Sabokrou, M. (2022). All You Need In Sign Language Production. arXiv:2201.01609 [cs]. http://arxiv.org/abs/2201.01609
Roßbach, J., Kollmeier, B., & Meyer, B. T. (2022). A model of speech recognition for hearing-impaired listeners based on deep learning. The Journal of the Acoustical Society of America, 151(3), 1417–1427. https://doi.org/10.1121/10.0009411
Schilling, A., Sedley, W., Gerum, R., Metzner, C., Tziridis, K., Maier, A., Schulze, H., Zeng, F.-G., Friston, K. J., & Krauss, P. (2022). Predictive Coding and Stochastic Resonance: Towards a Unified Theory of Auditory (Phantom) Perception. arXiv:2204.03354 [cs, q-bio]. http://arxiv.org/abs/2204.03354
Schilling, A., Tziridis, K., Schulze, H., & Krauss, P. (2021). The stochastic resonance model of auditory perception: A unified explanation of tinnitus development, Zwicker tone illusion, and residual inhibition. In Progress in Brain Research (Vol. 262, pp. 139–157). Elsevier. https://doi.org/10.1016/bs.pbr.2021.01.025
Tu, Z., Ma, N., & Barker, J. (2022a). Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners. arXiv:2204.04287 [cs, eess, q-bio]. http://arxiv.org/abs/2204.04287
Tu, Z., Ma, N., & Barker, J. (2022b). Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction. arXiv:2204.04288 [cs, eess]. http://arxiv.org/abs/2204.04288
Wasmann, J.-W., Pragt, L., Eikelboom, R., & Swanepoel, D. (2021). Digital Approaches to Automated and Machine Learning Assessments of Hearing: Scoping Review. Journal of Medical Internet Research. https://doi.org/10.2196/32581
Zedan, A., Jürgens, T., Williges, B., Hülsmeier, D., & Kollmeier, B. (2022). Modelling speech reception thresholds and their improvements due to spatial noise reduction algorithms in bimodal cochlear implant users. Hearing Research, 108507. https://doi.org/10.1016/j.heares.2022.108507
Last edited April 25.