Quarterly update Q2 2021

Newsroom Computational Audiology, July 11

Author: Jan-Willem Wasmann. This newsroom update is a Computational Audiology Network initiative.

Below you will find the latest news and developments in computational audiology: the VCCA2021 award winners, Computational Audiology TV, all VCCA2021 abstracts, and an overview of publications related to computational audiology from the second quarter of 2021.


VCCA2021 awards

We would like to congratulate Manon Revel, Emma Holmes, Liepollo Ntlhakana, Dalia Tsimpida, and Tim Brochier for their achievements at the VCCA! Their research is featured below.

Winners: Video pitches

Video by Emma Holmes

Video by Manon Revel


Winners: VCCA Young Scientist Award

Tim Brochier

Dalia Tsimpida


Winner: VCCA Special Award

Liepollo Ntlhakana


Computational Audiology TV

We launched Computational Audiology TV! On this YouTube channel you can find all VCCA2021 two-minute video pitches and other videos related to computational audiology or the VCCA. We are releasing many of the pre-recorded talks via this channel, so please subscribe to stay in the loop.

Other developments

Interested in machine learning & big data for audiology, hearing tech, or auditory neuroscience? The Computational Audiology Network is starting a Slack channel for folks involved in:
– basic science & translational research
– clinical practice
– industry
– start-ups
– public health
– health policy and advocacy
Join our Slack Channel

Recent publications related to computational audiology

Did we miss a publication? Please send your suggestion to resources@computationalaudiology.com

The publications below were found by searching Google Scholar with the query [(Computational Audiology) OR ((Machine Learning) AND audiology)].
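
For readers who want to repeat or extend this search programmatically, here is a minimal sketch. It assumes the third-party scholarly package, which scrapes Google Scholar; the package, its field names, and its rate limits are our assumptions, and this is not necessarily the workflow used to compile the list below.

    # Illustrative sketch only: querying Google Scholar via the third-party
    # "scholarly" package. Field names ("bib", "title", "pub_year") follow
    # the scholarly documentation and are assumptions, not guarantees.
    from itertools import islice
    from scholarly import scholarly

    QUERY = "(Computational Audiology) OR ((Machine Learning) AND audiology)"

    for pub in islice(scholarly.search_pubs(QUERY), 10):   # first 10 hits only
        bib = pub.get("bib", {})
        print(bib.get("pub_year", "n.d."), "-", bib.get("title", "(untitled)"))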

Crowson et al. (2021) used machine learning algorithms to predict PHQ-9 depression scale scores from National Health and Nutrition Examination Survey data. Interestingly, they found that the most influential audiometric predictor was the social dynamics of hearing loss. Nie et al. (2021) trained convolutional neural networks (CNNs) on wideband tympanometry measurements to classify outcomes; such networks could potentially support clinicians in diagnosing otosclerosis. Hülsmeier et al. (2021) created a model to predict the distortion component of hearing impairment, a limiting factor for the benefit of sound amplification. The VCCA2021 abstracts offer another overview of current developments in computational audiology.
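
To give a rough idea of what training a CNN on wideband tympanometry involves, here is a minimal, hypothetical sketch of a small classifier operating on absorbance surfaces (frequency by pressure). The input dimensions, layer sizes, and two-class setup are illustrative assumptions and do not reproduce the architecture of Nie et al. (2021).

    # Hypothetical sketch: a small CNN that maps a wideband tympanometry
    # absorbance surface (frequency x pressure) to two classes, e.g.
    # normal vs. otosclerosis. Shapes and layers are assumptions for
    # illustration, not the architecture used by Nie et al. (2021).
    import torch
    import torch.nn as nn

    class TympanometryCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel: absorbance map
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),   # collapse remaining frequency/pressure grid
                nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, x):
            # x: batch of absorbance maps, shape (batch, 1, n_frequencies, n_pressures)
            return self.classifier(self.features(x))

    model = TympanometryCNN()
    dummy_batch = torch.randn(4, 1, 107, 64)   # 107 frequencies x 64 pressure steps (assumed)
    logits = model(dummy_batch)                # shape (4, 2): per-class scores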

These and many more developments are listed in the bibliography below.

Bibliography Q2 2021

Chiea, R. A., Costa, M. H., & Cordioli, J. A. (2021). An Optimal Envelope-Based Noise Reduction Method for Cochlear Implants: An Upper Bound Performance Investigation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 1729–1739. https://doi.org/10.1109/TASLP.2021.3076363

Crowson, M. G., Franck, K. H., Rosella, L. C., & Chan, T. C. Y. (2021). Predicting Depression From Hearing Loss Using Machine Learning. Ear and Hearing, 42(4), 982–989. https://doi.org/10.1097/AUD.0000000000000993

Falconer, L., Coy, A., & Barker, J. (2021). Modelling the Effects of Hearing Aid Algorithms on Speech and Speaker Intelligibility as Perceived by Listeners with Simulated Sensorineural Hearing Impairment. SoutheastCon 2021, 1–8. https://doi.org/10.1109/SoutheastCon45413.2021.9401882

Fischer, T., Caversaccio, M., & Wimmer, W. (2021). Speech signal enhancement in cocktail party scenarios by deep learning based virtual sensing of head-mounted microphones. Hearing Research, 408, 108294. https://doi.org/10.1016/j.heares.2021.108294

Goodman, S. M., Liu, P., Jain, D., McDonnell, E. J., Froehlich, J. E., & Findlater, L. (2021). Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(2), 63:1-63:23. https://doi.org/10.1145/3463501

Hasannezhad, M., Zhu, W.-P., & Champagne, B. (2021). A Novel Low-Complexity Attention-Driven Composite Model for Speech Enhancement. 2021 IEEE International Symposium on Circuits and Systems (ISCAS), 1–5. https://doi.org/10.1109/ISCAS51556.2021.9401385

Hülsmeier, D., Buhl, M., Wardenga, N., Warzybok, A., Schädler, M. R., & Kollmeier, B. (2021). Inference of the distortion component of hearing impairment from speech recognition by predicting the effect of the attenuation component. International Journal of Audiology, 0(0), 1–15. https://doi.org/10.1080/14992027.2021.1929515

Jijomon, C. M., & Vinod, A. P. (2021). Person-identification using familiar-name auditory evoked potentials from frontal EEG electrodes. Biomedical Signal Processing and Control, 68, 102739. https://doi.org/10.1016/j.bspc.2021.102739

Lewkowicz, D. J., Schmuckler, M., & Agrawal, V. (2021). The multisensory cocktail party problem in adults: Perceptual segregation of talking faces on the basis of audiovisual temporal synchrony. Cognition, 214, 104743. https://doi.org/10.1016/j.cognition.2021.104743

Millán, C. L., Higuera-Trujillo, J. L., Aviñó, A. M. i, Torres, J., & Sentieri, C. (2021). The influence of classroom width on attention and memory: Virtual-reality-based task performance and neurophysiological effects. Building Research & Information, 0(0), 1–14. https://doi.org/10.1080/09613218.2021.1899798

Nie, L., Li, C., Marzani, F., Wang, H., Thibouw, F., & Bozorg Grayeli, A. (2021). Classification of Wideband Tympanometry by Deep Transfer Learning with Data Augmentation for Automatic Diagnosis of Otosclerosis. IEEE Journal of Biomedical and Health Informatics, 1–1. https://doi.org/10.1109/JBHI.2021.3093007

Ras, Z. W., Wieczorkowska, A. A., & Tsumoto, S. (2021). Recommender Systems for Medicine and Music. Springer Nature.

Ren, E., Ornelas, G. C., & Loeliger, H.-A. (2021). Real-Time Interaural Time Delay Estimation via Onset Detection. ICASSP 2021 – 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4555–4559. https://doi.org/10.1109/ICASSP39728.2021.9414632

Rodrigo, H., Beukes, E., Andersson, G., & Manchaiah, V. (2021). Exploratory data mining to examine the impact of internet-based cognitive behavioral therapy for tinnitus: Application of decision tree models (Preprint). https://doi.org/10.2196/preprints.28999

Sánchez Fernández, L. P. (2021). Environmental noise indicators and acoustic indexes based on fuzzy modelling for urban spaces. Ecological Indicators, 126, 107631. https://doi.org/10.1016/j.ecolind.2021.107631

Song, J.-J., Park, J., Koo, J.-W., Lee, S.-Y., Vanneste, S., Ridder, D. D., Hong, S., & Lim, S. (n.d.). The balance between Bayesian inference and default mode determines the generation of tinnitus from decreased auditory input: A volume entropy-based study. Human Brain Mapping, n/a(n/a). https://doi.org/10.1002/hbm.25539

Tian, Y., Ding, W., Zhang, M., Zhou, T., Li, J., & Qiu, W. (2021). Analysis of correlation between window duration for kurtosis computation and accuracy of noise-induced hearing loss prediction. The Journal of the Acoustical Society of America, 149(4), 2367–2376. https://doi.org/10.1121/10.0003954

Tu, Z., Ma, N., & Barker, J. (2021). Optimising Hearing Aid Fittings for Speech in Noise with a Differentiable Hearing Loss Model. arXiv:2106.04639 [cs, eess]. http://arxiv.org/abs/2106.04639

Xu, X., Deng, J., Zhang, Z., Wu, C., & Schuller, B. (2021). Identifying surgical-mask speech using deep neural networks on low-level aggregation. Proceedings of the 36th Annual ACM Symposium on Applied Computing, 580–585. https://doi.org/10.1145/3412841.3441938

Zheng, N., Shi, Y., Kang, Y., & Meng, Q. (2021). A Noise-Robust Signal Processing Strategy for Cochlear Implants Using Neural Networks. ICASSP 2021 – 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8343–8347. https://doi.org/10.1109/ICASSP39728.2021.9413452

Zhu, Y., Li, X., Qiao, Y., Shang, R., Shi, G., Shang, Y., & Guo, H. (2021). Widespread Plasticity of Cognition-Related Brain Networks in Single-Sided Deafness Revealed by Randomized Window-Based Dynamic Functional Connectivity. Medical Image Analysis, 102163. https://doi.org/10.1016/j.media.2021.102163

Last edited July 9.