Quarterly update Q6-Q8

Newsroom Computational Audiology, January 13

In this edition, we cover the Peer Recommender Challenge; real meetings: virtual versus in-person; the Computational Audiology Network (CAN) presence on LinkedIn; upcoming events in 2023; a report on the VCCA2022 conference; tips for researchers; job opportunities; and an overview of the publications related to computational audiology from 2022 (Q6-Q8). We would also like to thank everybody who contributed this year to the VCCA and CAN.

Peer Recommender Challenge!

Would you benefit from an online journal club? We are excited to announce the Peer Recommender Challenge, in which we invite you to share and recommend papers published between April and December 2022 that are relevant to our audience. A list of the papers we have compiled so far can be found below. To participate, post a brief, informative summary of the paper's highlights in the comments section. Our team will review and approve each comment, and with the assistance of the large language model ChatGPT we will compile all recommendations into a cohesive narrative accompanied by relevant graphics, similar to our previous effort for Q5. Join us in fostering a collaborative and well-informed computational audiology community.

For all of you looking for new solutions for hearing difficulties: best wishes for 2023!

Real Meetings: Virtual Versus In-Person

We recently published a blog post on virtual and in-person meetings and their impact on the growth and strength of the Computational Audiology Network. Special thanks to Seba Ausili, Liepollo Ntlhakana (PhD), Bill Whitmer, Soner Türüdü, Robert Eikelboom, Elle O’Brien, Deniz Başkent, Dennis Barbour, David Moore, and Charlotte Garcia for sharing their experiences and advice on how to navigate these types of meetings.

Computational Audiology Network (CAN) presence on LinkedIn

To further expand our online presence, we have created a public company page for CAN on LinkedIn (while keeping a low profile on Twitter). The page will feature updates on our activities and is meant to inform and engage all stakeholders, including researchers, audiologists, patients, and (future) funders. Follow us at https://www.linkedin.com/company/computational-audiology-network/, and please reach out to me if you would like to contribute. If you become a CAN member, you can easily share your research progress and achievements via our company page.

In addition, we have a closed LinkedIn discussion group for more in-depth conversations among peers: https://www.linkedin.com/groups/8931734/. There you can freely share your opinions and insights.

Events in 2023

  • ARO 2023
  • ICRA 2023
  • ESPCI 2023
  • ICASSP 2023
  • VCCA2023
  • View all upcoming events

VCCA2022 Conference Report

The VCCA 2022 conference was held online by Hearing4all (University of Oldenburg and Hannover Medical School) on June 30th and July 1st. The conference featured five keynote talks, four special sessions, and over 500 registered participants, making it another successful year for the conference. Read the full conference report here.

2022 in numbers

  • >500 participants at VCCA2022
  • >15,000 visitors to computationalaudiology.com
  • 5 videos released on Computational Audiology TV
  • Publications in 2022:
    • 6 posts
    • 81 publications found with Google Scholar
    • 2 repositories added to Zenodo
    • >10 tools and software packages added to the resources
    • 2 “quarterly” updates: Q5 and Q6-Q8

Tips for researchers

If you are planning to publish a paper related to computational audiology, consider adding the keyword [Computational Audiology] to make it easier for your peers to find. The same goes for code shared on GitHub or Hugging Face. On GitHub, you can make an existing repository easier to find by adding topics, which act as tags or labels. We recommend adding the topic ‘computational-audiology’, plus more specific topics such as ‘cochlear-model’ (topics are lowercase, with hyphens instead of spaces), to any repository you wish to share with the computational audiology community; see the sketch below.
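Topics can be set from the repository page (the gear icon next to “About”), but as an illustration, here is a minimal sketch of adding them programmatically via GitHub’s REST topics endpoint; the owner, repository name, and token variable are placeholders:

```python
# Minimal sketch: add discovery topics to a GitHub repository via the
# REST API. OWNER, REPO, and the GITHUB_TOKEN variable are placeholders.
import os
import requests

OWNER, REPO = "your-username", "your-repo"  # placeholders
URL = f"https://api.github.com/repos/{OWNER}/{REPO}/topics"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

# PUT replaces the whole topic list, so fetch the current topics first.
current = set(requests.get(URL, headers=HEADERS).json().get("names", []))
# GitHub topics must be lowercase, with hyphens instead of spaces.
current |= {"computational-audiology", "cochlear-model"}

resp = requests.put(URL, headers=HEADERS, json={"names": sorted(current)})
resp.raise_for_status()
print("Topics now:", resp.json()["names"])
```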

For finalized models and datasets, we have a Zenodo community for computational audiology. The goal of this community is to share data, code, and tools that are useful for our field and related fields such as digital hearing health care.
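If you want to browse a Zenodo community programmatically, here is a minimal sketch against Zenodo’s public REST API; the community identifier below is a placeholder (the real id is visible in the community page URL on zenodo.org):

```python
# Minimal sketch: list records in a Zenodo community via the public
# REST API. The community id below is a placeholder.
import requests

resp = requests.get(
    "https://zenodo.org/api/records",
    params={"communities": "computational-audiology", "size": 20},
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(f'{hit["metadata"]["title"]} (DOI: {hit.get("doi", "n/a")})')
```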

Job opportunities

Want to join the Deep Hearing Lab at the University of Cambridge? Or work in the Audio Algorithms team at Apple? Check out these and other job openings in computational audiology in academia and industry.


Acknowledgments

We would like to thank everybody who contributed to the website, shared software or datasets, or presented at the VCCA2022. A special thanks to: Giovanni Di Liberto, Laurel H. Carney, Inga Holube, Richard F. Lyon, Alessia Paglialonga, Marta Lenatti, Piotr Majdak, Clara Hollomey, and Raul Sanchez-Lopez for making software and data freely available on our resources page.


Recent publications related to computational audiology

Did we miss a publication? Please send your suggestion to resources@computationalaudiology.com.

Are there any papers you would like to recommend? Please join our Peer Recommender Challenge! We would like to ask you to recommend papers published between April and December 2022 to your peers. Below you can find the list we have collected so far. Please write a short description in the comments below highlighting why you recommend the paper. We will approve the comments and use ChatGPT to compile them into a running story including graphics, as we did before for the papers published in Q5.

The publications below were found using [(Computational Audiology) OR ((Machine Learning) AND audiology)] in Google Scholar. If you are going to publish a paper related to computational audiology, consider adding [Computational Audiology] as a keyword to make your paper easier to find for your peers and for us.

Alohali, Y. A., Abdelsamad, Y., Mesallam, T., Almuhawas, F., Hagr, A., & Fayed, M. S. (2022). Predicting electrode array impedance after one month from cochlear implantation surgery (arXiv:2205.10021). arXiv. http://arxiv.org/abs/2205.10021

Alonso-Valerdi, L. M. (2022). Analysis of Electrophysiological Activity of the Nervous System: Towards Neural Engineering Applications. In Biometry. CRC Press.

Alonso-Valerdi, L. M., Torres-Torres, A. S., Corona-González, C. E., & Ibarra-Zárate, D. I. (2022). Clustering approach based on psychometrics and auditory event-related potentials to evaluate acoustic therapy effects. Biomedical Signal Processing and Control, 76, 103719. https://doi.org/10.1016/j.bspc.2022.103719

Bastas, G., Kaliakatsos-Papakostas, M., Paraskevopoulos, G., Kaplanoglou, P., Christantonis, K., Tsioustas, C., Mastrogiannopoulos, D., Panga, D., Fotinea, E., Katsamanis, A., Katsouros, V., Diamantaras, K., & Maragos, P. (2022). Towards a DHH Accessible Theater: Real-Time Synchronization of Subtitles and Sign Language Videos with ASR and NLP Solutions. https://doi.org/10.1145/3529190.3534770

Bellandi, V. (2023). A Big Data Infrastructure in Support of Healthy and Independent Living: A Real Case Application. In C. P. Lim, A. Vaidya, Y.-W. Chen, V. Jain, & L. C. Jain (Eds.), Artificial Intelligence and Machine Learning for Healthcare: Vol. 2: Emerging Methodologies and Trends (pp. 95–134). Springer International Publishing. https://doi.org/10.1007/978-3-031-11170-9_5

Borole, Y. D., & Raut, R. (2022). Machine‐Learning Techniques for Deaf People. Machine Learning Algorithms for Signal and Image Processing, 201–217.

Cantu, M. A., & Hohmann, V. (2022). Enhancement of Hearing Aid Processing Via Spatial Spectro-Temporal Post-Filtering with a Prototype Eyeglass-Integrated Array. 2022 International Workshop on Acoustic Signal Enhancement (IWAENC), 1–5. https://doi.org/10.1109/IWAENC53105.2022.9914762

Casolani, C., Harte, J. M., & Epp, B. (2022). Categorization of tinnitus listeners with a focus on cochlear synaptopathy. PLOS ONE, 17(12), e0277023. https://doi.org/10.1371/journal.pone.0277023

Chan, J., Glenn, A., Itani, M., Mancl, L. R., Gallagher, E., Bly, R., Patel, S., & Gollakota, S. (2022). Wireless earbuds for low-cost hearing screening (arXiv:2212.05435). arXiv. http://arxiv.org/abs/2212.05435

Diehl, P. U., Singer, Y., Zilly, H., Schönfeld, U., Meyer-Rachner, P., Berry, M., Sprekeler, H., Sprengel, E., Pudszuhn, A., & Hofmann, V. M. (2022). Restoring speech intelligibility for hearing aid users with deep learning (arXiv:2206.11567). arXiv. http://arxiv.org/abs/2206.11567

Drakopoulos, F., & Verhulst, S. (2022). A Differentiable Optimisation Framework for The Design of Individualised DNN-based Hearing-Aid Strategies. ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 351–355. https://doi.org/10.1109/ICASSP43922.2022.9747683

Auditory Evoked Potential-Based Hearing Loss Level Recognition Using Fully Convolutional Neural Networks. (n.d.). EBSCOhost (record 156790805). Retrieved October 18, 2022.

Fawcett, T. J., Longenecker, R. J., Brunelle, D. L., Berger, J. I., Wallace, M. N., Galazyuk, A. V., Rosen, M. J., Salvi, R. J., & Walton, J. P. (2023). Universal automated classification of the acoustic startle reflex using machine learning. Hearing Research, 428, 108667. https://doi.org/10.1016/j.heares.2022.108667

Guiraud, P., Moore, A. H., Vos, R. R., Naylor, P. A., & Brookes, M. (2022). Machine Learning for Parameter Estimation in the MBSTOI Binaural Intelligibility Metric. 2022 International Workshop on Acoustic Signal Enhancement (IWAENC), 1–5. https://doi.org/10.1109/IWAENC53105.2022.9914725

Haglund, A. (n.d.). Artificial Intelligence for Sign Language Recognition and Translation. 47.

Harris, K. C., & Bao, J. (2022). Optimizing non-invasive functional markers for cochlear deafferentation based on electrocochleography and auditory brainstem responses. The Journal of the Acoustical Society of America, 151(4), 2802–2808. https://doi.org/10.1121/10.0010317

Investigations on the Deep Learning Based Speech Enhancement Algorithms for Hearing-Impaired Population. (n.d.). ProQuest. Retrieved October 18, 2022.

Jeng, F.-C., & Jeng, Y.-S. (2022). Implementation of Machine Learning on Human Frequency-Following Responses: A Tutorial. Seminars in Hearing, 43(3), 251–274. https://doi.org/10.1055/s-0042-1756219

Jeng, F.-C., Lin, T.-H., Hart, B. N., Montgomery-Reagan, K., & McDonald, K. (2022). Non-negative matrix factorization improves the efficiency of recording frequency-following responses in normal-hearing adults and neonates. International Journal of Audiology, 0(0), 1–11. https://doi.org/10.1080/14992027.2022.2071345

Karbasi, M., & Kolossa, D. (2022). ASR-based speech intelligibility prediction: A review. Hearing Research, 108606. https://doi.org/10.1016/j.heares.2022.108606

Kassjański, M., Kulawiak, M., & Przewoźny, T. (2022). Development of an AI-based audiogram classification method for patient referral. 2022 17th Conference on Computer Science and Intelligence Systems (FedCSIS), 163–168. https://doi.org/10.15439/2022F66

Law, B. M. (2022). Reimagining the Hearing Aid in an OTC Marketplace. Leader Live. https://leader.pubs.asha.org/do/10.1044/leader.FTR1.27112022.aud-otcs-future.32/full/

Lenatti, M., Moreno-Sánchez, P. A., Polo, E. M., Mollura, M., Barbieri, R., & Paglialonga, A. (2022). Evaluation of Machine Learning Algorithms and Explainability Techniques to Detect Hearing Loss From a Speech-in-Noise Screening Test. American Journal of Audiology, 31(3S), 961–979. https://doi.org/10.1044/2022_AJA-21-00194

Liu, Z., Li, Y., Yao, L., Monaghan, J. J. M., & McAlpine, D. (2022). Disentangled and Side-aware Unsupervised Domain Adaptation for Cross-dataset Subjective Tinnitus Diagnosis (arXiv:2205.03230). arXiv. http://arxiv.org/abs/2205.03230

López-Caballero, F., Coffman, B., Seebold, D., Teichert, T., & Salisbury, D. F. (n.d.). Intensity and inter-stimulus-interval effects on human middle- and long-latency auditory evoked potentials in an unpredictable auditory context. Psychophysiology, n/a(n/a), e14217. https://doi.org/10.1111/psyp.14217

Mondol, S. I. M. M. R., Kim, H. J., Kim, K. S., & Lee, S. (2022). Machine Learning-Based Hearing Aid Fitting Personalization Using Clinical Fitting Data. Journal of Healthcare Engineering, 2022, e1667672. https://doi.org/10.1155/2022/1667672

Müller, M., Jiang, Z., Moryossef, A., Rios, A., & Ebling, S. (2022). Considerations for meaningful sign language machine translation based on glosses (arXiv:2211.15464). arXiv. http://arxiv.org/abs/2211.15464

Neidhardt, A., Schneiderwind, C., & Klein, F. (2022). Perceptual Matching of Room Acoustics for Auditory Augmented Reality in Small Rooms—Literature Review and Theoretical Framework. Trends in Hearing, 26, 23312165221092920. https://doi.org/10.1177/23312165221092919

Orduña-Bustamante, F., Padilla-Ortiz, A. L., & Mena, C. (2023). Assessing the benefits of virtual speaker lateralization for binaural speech intelligibility over the Internet. Applied Acoustics, 202, 109146. https://doi.org/10.1016/j.apacoust.2022.109146

Paajanen, M. (n.d.). Applying machine learning methods to analyse articulatory dependency of bone conduction transfer functions. 58.

Pai, K. V., & Thilagam, P. S. (2022). Hearing Loss Prediction using Machine Learning Approaches: Contributions, Limitations and Issues. 2022 IEEE 3rd Global Conference for Advancement in Technology (GCAT), 1–7. https://doi.org/10.1109/GCAT55367.2022.9972110

Ramos-de-Miguel, Á., Escobar, J. M., Greiner, D., Benítez, D., Rodríguez, E., Oliver, A., Hernández, M., & Ramos-Macías, Á. (2022). A phenomenological computational model of the evoked action potential fitted to human cochlear implant responses. PLOS Computational Biology, 18(5), e1010134. https://doi.org/10.1371/journal.pcbi.1010134

Rennies, J., Röttges, S., Huber, R., Hauth, C. F., & Brand, T. (2022). A joint framework for blind prediction of binaural speech intelligibility and perceived listening effort. Hearing Research, 108598. https://doi.org/10.1016/j.heares.2022.108598

Riojas, K. E., Bruns, T. L., Granna, J., Webster, R. J., & Labadie, R. F. (2022). Robotic pullback technique of a precurved cochlear-implant electrode array using real-time impedance sensing feedback. International Journal of Computer Assisted Radiology and Surgery. https://doi.org/10.1007/s11548-022-02772-3

Sandström, J., Myburgh, H., Laurent, C., Swanepoel, D. W., & Lundberg, T. (2022). A Machine Learning Approach to Screen for Otitis Media Using Digital Otoscope Images Labelled by an Expert Panel. Diagnostics, 12(6), Article 6. https://doi.org/10.3390/diagnostics12061318

Schmitt, M. (n.d.). Bag-of-Words Representations for Computer Audition. 265.

Schröter, H., Escalante-B., A. N., Rosenkranz, T., & Maier, A. (2022). DeepFilterNet2: Towards Real-Time Speech Enhancement on Embedded Devices for Full-Band Audio (arXiv:2205.05474). arXiv. http://arxiv.org/abs/2205.05474

Schurzig, D., Repp, F., Timm, M. E., Batsoulis, C., Lenarz, T., & Kral, A. (2023). Virtual cochlear implantation for personalized rehabilitation of profound hearing loss. Hearing Research, 429, 108687. https://doi.org/10.1016/j.heares.2022.108687

Sivaraman, A., & Kim, M. (2022). Efficient Personalized Speech Enhancement through Self-Supervised Learning. IEEE Journal of Selected Topics in Signal Processing, 1–15. https://doi.org/10.1109/JSTSP.2022.3181782

Smith, S. (2022). Translational Applications of Machine Learning in Auditory Electrophysiology. Seminars in Hearing, 43(3), 240–250. https://doi.org/10.1055/s-0042-1756166

Souffi, S., Varnet, L., Zaidi, M., Bathellier, B., Huetz, C., & Edeline, J.-M. (2023). Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. The Journal of Physiology, 601(1), 123–149. https://doi.org/10.1113/JP283526

Svec, A., & Morgan, S. D. (2022). Virtual audiology education tools: A survey of faculty, graduate students, and undergraduate students. The Journal of the Acoustical Society of America, 151(5), 3234–3238. https://doi.org/10.1121/10.0010530

Taylor, K., & Sheikh, W. (2022). Automated Hearing Impairment Diagnosis Using Machine Learning. 2022 Intermountain Engineering, Technology and Computing (IETC), 1–6. https://doi.org/10.1109/IETC54973.2022.9796707

Use Brain-Like Audio Features to Improve Speech Recognition Performance. (n.d.). ProQuest. Retrieved October 18, 2022.

Wathour, J., Govaerts, P. J., Derue, L., Vanderbemden, S., Huaux, H., Lacroix, E., & Deggouj, N. (n.d.). Prospective Comparison Between Manual and Computer-Assisted (FOX) Cochlear Implant Fitting in Newly Implanted Patients. Ear and Hearing, 10.1097/AUD.0000000000001314. https://doi.org/10.1097/AUD.0000000000001314

Wijewickrema, S., Bester, C., Gerard, J.-M., Collins, A., & O’Leary, S. (2022). Automatic analysis of cochlear response using electrocochleography signals during cochlear implant surgery. PLOS ONE, 17(7), e0269187. https://doi.org/10.1371/journal.pone.0269187

Zhong, L., Ricketts, T. A., Roberts, R. A., & Picou, E. M. (n.d.). Benefits of Text Supplementation on Sentence Recognition and Subjective Ratings With and Without Facial Cues for Listeners With Normal Hearing. Ear and Hearing, 10.1097/AUD.0000000000001316. https://doi.org/10.1097/AUD.0000000000001316



  • Dear all,
    Let’s uplift each other and expand our knowledge by sharing the valuable papers we’ve come across! Which papers listed in this quarterly update resonated with you the most? Leave a brief (e.g. 2-5 line) description of the paper in the comments below and use the power of our community to build a peer-recommended reading list. Together, let’s continue to grow and learn!
    I will collect the comments and merge them using ChatGPT. You can also suggest publications we missed to our list. Looking forward to the outcome of this novel peer recommender system 😉
    Best regards,
    Jan-Willem Wasmann

  • Lenatti et al. evaluated multiple machine learning algorithms to predict audiogram thresholds (specifically, the pure-tone average) from demographic information and speech-in-noise test results. The accuracy of these algorithms at correctly determining the presence or absence of hearing loss ranged between approximately 80% and 90%. Explainability techniques were used to explore the relative contributions of the various predictors to the success of the algorithms. Age was by far the strongest predictor.
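    A purely illustrative sketch of this kind of pipeline (scikit-learn on synthetic stand-ins for the predictors, not the study’s data or code), with permutation importance as one example of an explainability technique:

    ```python
    # Purely illustrative: binary hearing-loss classification from synthetic
    # stand-ins for age and a speech-in-noise score, with permutation
    # importance as one explainability technique. Not the study's data/code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    age = rng.uniform(18, 90, n)                      # years
    srt = rng.normal(-10, 3, n) + 0.08 * (age - 50)   # speech reception threshold, dB SNR
    X = np.column_stack([age, srt])
    # Synthetic label: hearing loss present/absent, driven mostly by age.
    y = (age + rng.normal(0, 10, n) > 65).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))

    imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
    for name, score in zip(["age", "SRT"], imp.importances_mean):
        print(f"{name}: {score:.3f}")                 # age dominates on this toy data
    ```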

  • I am thinking of a paper to recommend, but first wanted to add a note of caution about combining many paper summaries with ChatGPT.

    ChatGPT and other similar large language models are known to produce authoritative-sounding but incorrect statements. Recently, Meta released a large language model to summarize scientific papers and assist with writing new ones, which had to be removed from the internet in a matter of days. One critical issue is that it generated a high volume of completely false statements that sounded correct to many readers (https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/). Stack Overflow has also experienced issues with confident, but incorrect, responses to users seeking programming help (https://stackoverflow.com/help/gpt-policy).

    It seems very plausible that ChatGPT could ingest our hand-written paper summaries and rewrite them with errors: for example, misstating the type of measurements reported in a paper, the study population, statistical results, etc. People make errors too, of course, but we are capable of producing far fewer because we are limited by our writing speed! We can also intentionally signal when we are not confident, to alert readers that our understanding should be double-checked. And our errors tend to follow certain semi-predictable patterns, whereas far less is known about the sorts of errors made by large language models.

    For these reasons, I would have difficulty trusting anything about science generated by ChatGPT unless I knew it had been vetted and fact-checked by a person with appropriate expertise (which I am sure you will do! But I wanted to log my thoughts anyway).

    Appendix: an example from ChatGPT. I prompted it to summarize Dennis’ writeup of the Lenatti paper in his comment (“Summarize this text: …”) and it generated the following:

    “Lenatti and colleagues used machine learning to predict hearing thresholds from demographic information and speech-in-noise test results with an accuracy of 80-90%. They found that age was the strongest predictor of success.”

    The generated text incorrectly states the modeling target (it was a binary classification of hearing impaired or not, not predicting the thresholds themselves). And the text incorrectly describes how age is used by the models (it does not predict “success”, as success is not a measure in the dataset; it predicts whether a person is in the hearing impaired group or not).

    • Hi Elle, thank you for sharing your thoughts and your warning about systems like ChatGPT. Here is an article in Nature about how scientists are already using LLMs:
      “I think I use GPT-3 almost every day,” says computer scientist Hafsteinn Einarsson at the University of Iceland, Reykjavik. He uses it to generate feedback on the abstracts of his papers. In one example that Einarsson shared at a conference in June, some of the algorithm’s suggestions were useless, advising him to add information that was already included in his text. But others were more helpful, such as “make the research question more explicit at the beginning of the abstract”. It can be hard to see the flaws in your own manuscript, Einarsson says. “Either you have to sleep on it for two weeks, or you can have somebody else look at it. And that ‘somebody else’ can be GPT-3.”
      https://www.nature.com/articles/d41586-022-03479-w

    • We have found that ChatGPT is very good for exploration but less good for exploitation. In other words, if you learn its quirks (which takes a few hours, in our experience), you can fairly rapidly home in on a candidate solution to a particular problem, and that solution can then be verified. Another way to say this is that it seems to be a much better hypothesis rejection tool than a hypothesis formation tool. In short, it has great promise as the next generation of search tool; it is definitely amazing for that use case but less good for others.

      Here is one nugget we have learned: if after a ChatGPT response you simply ask “Are you sure?”, you open up a richer set of responses. It seems that it is coded to deliver a single answer to every prompt and sound equally confident about all of them. But once you ask it explicitly to represent the relative confidence of its answers, it can provide more nuanced information.
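      For concreteness, a minimal sketch of this follow-up pattern using OpenAI’s Python client; the model name and prompt are placeholders, and the comment above refers to the ChatGPT web interface, so the API call itself is an assumption:

      ```python
      # Minimal sketch of the "Are you sure?" follow-up, using OpenAI's Python
      # client. Model name and prompt are placeholders; the original comment
      # refers to the ChatGPT web interface.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      history = [{"role": "user", "content": "Summarize this text: ..."}]

      first = client.chat.completions.create(model="gpt-4o-mini", messages=history)
      answer = first.choices[0].message.content
      print("First answer:", answer)

      # Asking the model to reconsider often surfaces caveats and corrections
      # that the confident first answer omitted.
      history += [
          {"role": "assistant", "content": answer},
          {"role": "user", "content": "Are you sure?"},
      ]
      second = client.chat.completions.create(model="gpt-4o-mini", messages=history)
      print("After 'Are you sure?':", second.choices[0].message.content)
      ```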