Quarterly update Q4 2021

Newsroom Computational Audiology, December 29

Below you will find the latest news and developments in computational audiology: 2021 in numbers, events in 2022, awards, our suggestion to previous VCCA presenters, and an overview of publications related to computational audiology from the fourth quarter of 2021. We would also like to thank everybody who contributed this year to the VCCA and to the website.

2021 in numbers

Events in 2022

Awards 2021

  • Manon Revel (Video pitch), Emma Holmes (Video pitch), Liepollo Ntlhakana (VCCA Special Award), Dalia Tsimpida (Winner VCCA Young Scientist Award), and Tim Brochier (Winner VCCA Young Scientist Award) received awards for their achievements at the VCCA2021. Their work is highlighted here: VCCA2021 award winners.
  • Elle O’Brien received a grant from the Sloan Foundation to study the adoption of data science/ML methods in basic science research! The hearing science community inspired her as a case study in how old and new research traditions converge.

Tip for VCCA presenters

If you have any updates on work you presented at the VCCA, e.g. a new publication, please share a link by adding a comment to your abstract page. See, for instance, the new publication by Alessia Paglialonga and her team on the preliminary evaluation of the speech reception threshold, measured using a new language-independent screening test, as a predictor of hearing loss (Zanet et al., 2021).

Acknowledgments

We would like to thank everybody who contributed to the website, shared software or datasets, or presented at the VCCA. A special thanks to:

François Guérit, Tobias Goehring, Josh McDermott, Elle O’Brien, Adam Bosen, Tilak J. Ratnanather, Frederick J. Gallun, Valeriy Shafiro, Brian C.J. Moore, Volker Hohmann, and Raul Sanchez-Lopez.

Recent publications related to computational audiology

Did we miss a publication? Please send your suggestion to resources@computationalaudiology.com

The publications below were found using the query [(Computational Audiology) OR ((Machine Learning) AND audiology)] in Google Scholar.

Jenny & Reuter (2021) discussed which perceptual quality features are important for making spatial awareness more realistic in virtual reality applications. Auditory research may benefit from virtual acoustic rendering of complex acoustic environments (van de Par et al., 2021). Ecologically valid, complex auditory-visual environments that simulate everyday settings presumably provide opportunities to assess the cognitive factors involved in processing speech information by listeners with hearing loss in daily life.

Scatter plots of speech assessment predictions of MOSA-Net, Quality-Net, and STOI-Net (Zezario et al., 2021).

This quarter, two groups created improved speech intelligibility assessment models based on deep learning (Chen & Tsao, 2021; Zezario et al., 2021). Zezario et al. (2021) proposed a novel cross-domain speech assessment metric called MOSA-Net. Chen & Tsao (2021), in contrast, proposed InQSS, a speech intelligibility assessment model that uses both spectrogram and scattering coefficients as input features.
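Neither architecture is reproduced here, but the shared idea of a non-intrusive assessment model (a spectrogram goes in, a single quality or intelligibility score comes out) can be sketched as follows. This is a minimal PyTorch illustration under our own assumptions; the CNN-BLSTM layout, layer sizes, and the name NonIntrusiveAssessmentNet are hypothetical and are not taken from MOSA-Net or InQSS.

```python
import torch
import torch.nn as nn

class NonIntrusiveAssessmentNet(nn.Module):
    """Toy non-intrusive assessment model: magnitude spectrogram in,
    one utterance-level quality/intelligibility score out."""

    def __init__(self, n_freq_bins=257, hidden=128):
        super().__init__()
        # Convolutional front end over (time, frequency)
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Bidirectional LSTM models the temporal context per frame
        self.blstm = nn.LSTM(16 * n_freq_bins, hidden,
                             batch_first=True, bidirectional=True)
        # Frame-level scores are averaged into a single utterance score
        self.frame_score = nn.Linear(2 * hidden, 1)

    def forward(self, spec):                      # spec: (batch, time, freq)
        x = self.conv(spec.unsqueeze(1))          # (batch, 16, time, freq)
        x = x.permute(0, 2, 1, 3).flatten(2)      # (batch, time, 16 * freq)
        x, _ = self.blstm(x)
        frame_scores = self.frame_score(x).squeeze(-1)   # (batch, time)
        return frame_scores.mean(dim=1)           # utterance-level score

# Example: score a batch of two 100-frame spectrograms with 257 bins
model = NonIntrusiveAssessmentNet()
scores = model(torch.rand(2, 100, 257))
print(scores.shape)  # torch.Size([2])
```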

Schematic of the proposed enhancement system. First, the time signal is processed by an STFT to transform the input to the time-frequency domain. The magnitude is fed to the separation network, which predicts a time-frequency mask for the speech and noise components. The masks are multiplied with the magnitude of the input mixture to create estimates of the speech and noise signals. These signals are remixed with a reduced level of the noise component and, in the last step, transformed back to the time domain (Westhausen et al., 2021).

Meanwhile, Westhausen et al. (2021) applied recurrent neural networks to separate speech from background signals and remix the separated sounds at a higher signal-to-noise ratio, in order to reduce listening effort for the end user.
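As an illustration of the mask-and-remix chain described in the caption above, the following minimal Python sketch goes from time signal to enhanced time signal. It assumes a separation model is available as a function mapping a magnitude spectrogram to a speech mask; the trivial stand-in used in the example is not the recurrent network from Westhausen et al. (2021), and the 12 dB noise attenuation is an arbitrary choice.

```python
import numpy as np
from scipy.signal import stft, istft

def separate_and_remix(mixture, fs, predict_speech_mask, noise_gain_db=-12.0):
    """Mask-based separation and remixing: STFT -> mask estimation ->
    speech/noise estimates -> remix with attenuated noise -> inverse STFT."""
    # 1. Transform the time signal to the time-frequency domain
    _, _, Z = stft(mixture, fs=fs, nperseg=512)
    magnitude, phase = np.abs(Z), np.angle(Z)

    # 2. The separation model predicts a speech mask from the magnitude;
    #    the noise mask is taken as its complement
    speech_mask = predict_speech_mask(magnitude)
    speech_est = speech_mask * magnitude
    noise_est = (1.0 - speech_mask) * magnitude

    # 3. Remix with the noise component reduced by noise_gain_db
    noise_gain = 10.0 ** (noise_gain_db / 20.0)
    remixed_mag = speech_est + noise_gain * noise_est

    # 4. Back to the time domain, reusing the phase of the mixture
    _, enhanced = istft(remixed_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return enhanced

# Example with a trivial stand-in for the (trained) separation network
dummy_mask = lambda mag: np.clip(mag / (mag.max() + 1e-8), 0.0, 1.0)
fs = 16000
enhanced = separate_and_remix(np.random.randn(fs), fs, dummy_mask)
```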

Rodrigo et al. (2021) used exploratory data mining techniques (decision tree models) to identify the variables associated with the treatment success of internet-based cognitive behavioral therapy for tinnitus. Instead of a fingerprint, EarNet uses signals from the ear, transient evoked otoacoustic emissions, to identify people (Varugeese et al., 2021). This raises the question of how widely applicable such an identification technique can become, given that otoacoustic emissions are absent when one experiences hearing loss. Perugia et al. (2021) attempted to assess benefit from hearing aids (HAs) based on speech-evoked auditory brainstem responses (ABRs) instead of self-reported questionnaires and speech-in-noise (SIN) tests. They showed that relations between speech-ABR features and behavioral measures were present only for a small subset of subjects. Overall, the most relevant feature for predicting behavioral measures was the severity of hearing loss.
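For readers less familiar with decision tree models of the kind used by Rodrigo et al. (2021), the sketch below shows the general workflow with scikit-learn on synthetic data. The predictor names and the outcome are hypothetical illustrations only, not the variables or results from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical predictors (not the variables from Rodrigo et al., 2021):
# baseline tinnitus severity, hearing-loss degree, anxiety score, age.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# Synthetic outcome: "treatment success" loosely tied to two predictors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the resulting decision rules interpretable
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=[
    "baseline_severity", "hearing_loss", "anxiety", "age"]))
```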

Questions from patients about the benefits of automatic speech recognition (ASR) led to a project to evaluate ASR apps for the deaf (Pragt et al., 2021). One of the lessons learned is that it is not obvious how to evaluate the performance of ASR apps. To learn more about ASR for deaf and hard-of-hearing listeners, see this helpful video. Finally, Nalley (2021) interviewed experts to sketch how computational audiology could advance hearing healthcare.

Bibliography Q4 2021

These and many more developments are published below.

Chen, Y.-W., & Tsao, Y. (2021). InQSS: A speech intelligibility assessment model using a multi-task learning network. 5.

Frithioff, A., Frendø, M., Mikkelsen, P. T., Sørensen, M. S., & Andersen, S. A. W. (2021). Cochlear implantation: Exploring the effects of 3D stereovision in a digital microscope for virtual reality simulation training – A randomized controlled trial. Cochlear Implants International, 0(0), 1–7. https://doi.org/10.1080/14670100.2021.1997026

Jenny, C., & Reuter, C. (2021). Can I trust my ears in VR? Literature review of head-related transfer functions and valuation methods with descriptive attributes in virtual reality. International Journal of Virtual Reality, 21(2), 29–43. https://doi.org/10.20870/IJVR.2021.21.2.4831

Kroll, N., Claridge, R., & Teakle, N. (n.d.). The Rise of Social Media and Digital Content in Auditory Rehabilitation: Quantifying the Reach and Effectiveness of New Distribution Channels. Perspectives of the ASHA Special Interest Groups. https://doi.org/10.1044/2021_PERSP-21-00079

Nalley, C. (2021). The Promise of Computational Audiology. The Hearing Journal, 74(12), 16. https://doi.org/10.1097/01.HJ.0000804824.85381.d0

Perugia, E., BinKhamis, G., Schlittenlacher, J., & Kluk, K. (2021). On prediction of aided behavioural measures using speech auditory brainstem responses and decision trees. PLOS ONE, 16(11), e0260090. https://doi.org/10.1371/journal.pone.0260090

Posnaik, K., Panigrahi, T., Sabat, S. L., & Dash, M. (2021). A Simplified Deep Learning model for Acoustic Feedback Cancellation in Digital Hearing Aid. 2021 International Symposium of Asian Control Association on Intelligent Robotics and Industrial Automation (IRIA), 432–436.

Pragt, L., Hengel, P. van, Grob, D., & Wasmann, J.-W. (2021). Preliminary Evaluation of Automated Speech Recognition Apps for the Hearing Impaired and Deaf. PsyArXiv. https://doi.org/10.31234/osf.io/m7q2b

Rodrigo, H., Beukes, E. W., Andersson, G., & Manchaiah, V. (2021). Exploratory Data Mining Techniques (Decision Tree Models) for Examining the Impact of Internet-Based Cognitive Behavioral Therapy for Tinnitus: Machine Learning Approach. Journal of Medical Internet Research, 23(11), e28999. https://doi.org/10.2196/28999

Theodoroff, S. M., McMillan, G. P., Schmidt, C. J., Dann, S. M., Hauptmann, C., Goodworth, M.-C., Leibowitz, R. Q., Random, C., & Henry, J. A. (2021). Randomised controlled trial of interventions for bothersome tinnitus: DesyncraTM versus cognitive behavioural therapy. International Journal of Audiology, 0(0), 1–10. https://doi.org/10.1080/14992027.2021.2004325

van de Par, S., Ewert, S. D., Hladek, L., Kirsch, C., Schütze, J., Grimm, G., Hendrikse, M. M. E., Kollmeier, B., & Seeber, B. U. (2021). Auditory-visual scenes for hearing research. 22.

Varugeese, A., Shahina, A., Nawas, K., & Khan, A. N. (2021). EarNet: Biometric Embeddings for End to End Person Authentication System Using Transient Evoked Otoacoustic Emission Signals. Neural Processing Letters. https://doi.org/10.1007/s11063-021-10546-2

Westhausen, N. L., Huber, R., Baumgartner, H., Sinha, R., Rennies, J., & Meyer, B. T. (2021). Reduction of Subjective Listening Effort for TV Broadcast Signals With Recurrent Neural Networks. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 3541–3550. https://doi.org/10.1109/TASLP.2021.3126931

Zanet, M., Polo, E. M., Lenatti, M., van Waterschoot, T., Mongelli, M., Barbieri, R., & Paglialonga, A. (2021). Evaluation of a Novel Speech-in-Noise Test for Hearing Screening: Classification Performance and Transducers’ Characteristics. IEEE Journal of Biomedical and Health Informatics, 25(12), 4300–4307. https://doi.org/10.1109/JBHI.2021.3100368

Zezario, R. E., Fuh, C.-S., & Wang, H.-M. (2021). Deep Learning-based Non-Intrusive Multi-Objective Speech Assessment Model with Cross-Domain Features. 13.

Last edited December 31.