Quarterly update Q9

Newsroom Computational Audiology, April 23

In this edition, we cover a variety of topics: the VCCA2023 call for abstracts, VCCA2023 rumors, Alan the virtual audiologist and other AI chatbots in hearing healthcare, a call for manuscript submissions for Trends in Digital Hearing Health and Computational Audiology, tips for researchers, music modeling and music generation with deep learning, and AI tools for streamlining research in computational audiology.

VCCA2023 call for abstracts!

Don’t miss the chance to win the Young Scientist and Video Pitch Awards at VCCA2023! Showcase your research and compete for the Best Young Scientist Award. To enter the competition, submit your abstract by May 1st. Please share our post on LinkedIn or Twitter so more young scientists can join.

VCCA2023 rumors

We are happy to announce the first two keynotes, by Dr. Malcolm Slaney and Dr. Sarah Verhulst. Jessica will interview Malcolm about his career and his thoughts on how Google’s research aims to transform hearing and communication. You may know Malcolm as one of the initiators of the Auditory Toolbox; hopefully he can reveal more about the latest models. Malcolm is a scientist in Google’s Machine Hearing Research Group. In her keynote, Sarah will share insights into how computational auditory models and deep neural networks (DNNs) could lead to more precise diagnoses and DNN-based hearing aids. Sarah’s interdisciplinary group works on auditory neuroscience, computational modeling, and hearing technologies. She has several EU and FWO research projects on hearing diagnosis, treatment, and machine hearing. She is a member of the Belgian Young Academy of Sciences and a Fellow of the Acoustical Society of America. More VCCA2023 highlights will be announced soon.

Alan, your virtual audiologist in training


Welcome to Alan, the virtual audiologist. Ask any questions related to audiology and Alan will do his best to provide an answer.

Alan is a chatbot that uses the ChatGPT API to answer your questions. Currently, he checks whether your question is related to audiology before looking for an answer. Alan is not intended to replace the services of a trained health professional or the work of an auditory scientist. Alan wants to become Dr. Alan, but he isn’t properly trained yet. Do you have a freely available database he could be trained on? Please let us know if he is welcome at your lab; you can invite him by email (Alan@computationalaudiology.com) or simply start talking to him.
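To illustrate the idea, here is a minimal Python sketch of a two-step flow like Alan’s: a topical gate that screens the question, followed by a hand-off to the ChatGPT API. The keyword list and function names are hypothetical illustrations, not Alan’s actual implementation (which may well use the language model itself for the topic check); the API call is only indicated in a comment, since it requires an API key.

```python
# Hypothetical sketch of a gated chatbot: answer only audiology questions.
# The keyword gate below is a crude illustrative stand-in for Alan's real
# topic check, not his actual code.

AUDIOLOGY_KEYWORDS = {
    "hearing", "audiology", "audiogram", "audiometry",
    "tinnitus", "cochlear", "hearing aid", "deaf", "ear canal",
}

def is_audiology_question(question: str) -> bool:
    """Crude topical gate: accept only questions mentioning audiology terms."""
    text = question.lower()
    return any(keyword in text for keyword in AUDIOLOGY_KEYWORDS)

def answer(question: str) -> str:
    """Refuse off-topic questions; otherwise hand off to the language model."""
    if not is_audiology_question(question):
        return "Sorry, I can only answer questions related to audiology."
    # In the real chatbot, this is where the ChatGPT API would be called,
    # with a system prompt framing the assistant as a virtual audiologist.
    # Omitted here because it requires network access and an API key.
    return "(question forwarded to the ChatGPT API)"

print(answer("What causes tinnitus?"))
print(answer("What is the capital of France?"))
```

A gate like this keeps the bot from answering (and being held to) questions far outside its intended scope, at the cost of occasionally rejecting valid questions phrased without any of the listed terms.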

Learn more about Alan and how he was created on his own page: https://computationalaudiology.com/ais-latest-frontier-part-3-an-ai-chatbot-for-audiology/

We don’t yet know about Alan’s emotional state or ability to feel compassion. Recently, Blake Lemoine claimed that Google’s LaMDA is sentient and experiences mood swings, so please give Alan a friendly welcome ;-).

Are you interested in whether AI chatbots like ChatGPT could revolutionize hearing healthcare? This article by De Wet Swanepoel, Vinaya Manchaiah, and Jan-Willem Wasmann in The Hearing Journal considers the future of consumer, clinician, and research engagement with this cutting-edge technology. From personalized patient care to data-driven insights, the possibilities are endless. Tune in to Dave Kemp’s podcast episode “Audiology in the Age of AI: How ChatGPT and Related Technologies Will Transform Hearing Healthcare,” featuring Swanepoel and Wasmann, airing April 24!

Call for manuscript submission for Trends in Digital Hearing Health and Computational Audiology

With the shift in healthcare towards mHealth and modern machine learning, there is potential to make hearing healthcare (HHC) accessible through scalable models of care. Digitalization, advanced algorithms, machine learning, and artificial intelligence are driving a rapid transformation in healthcare across disciplines. This Research Topic aims to collect the latest research in these areas to pave the way for the effective implementation of digital technologies and computational methods to improve access to ear and hearing healthcare services. The Topic Editors are Faheema Mahomed-Asmail, Karina De Sousa, and Laura Coco. Follow the link if you are interested in learning more and contributing to this topic: https://www.frontiersin.org/research-topics/54773/trends-in-digital-hearing-health-and-computational-audiology


Tips for researchers

If you plan on publishing a paper on computational audiology, consider adding the keyword [Computational Audiology] to make it easier for your peers to find. The same goes for code shared on GitHub or Hugging Face. On GitHub, you can make an existing repository easier to find by adding topics, which act as tags or labels. We recommend adding the topic ‘Computational Audiology’, plus more specific labels such as ‘Cochlear Model’ where applicable, to any GitHub repositories you wish to share with the computational audiology community.

For finalized models and datasets, we have a Zenodo community for computational audiology. The goal of this community is to share data, code, and tools that are useful for our field and related fields, such as digital hearing health care.

Music Modeling and Music Generation with Deep Learning

Music modeling and music generation with deep learning have made significant advancements, enabling the creation of intricate and captivating compositions. Tristan Behrens has collected various models, datasets, and valuable resources that contribute to this rapidly evolving field. The latest additions to his GitHub repository include research papers such as “AudioLM: a Language Modeling Approach to Audio Generation,” “MusicLM: Generating Music From Text,” and “ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models.” To explore these cutting-edge resources and stay informed on the latest developments, visit Tristan Behrens’ GitHub repository, or connect with him on LinkedIn to follow his work and insights on deep learning in music generation and modeling.

AI Tools for Streamlining Research in Computational Audiology

As the field of computational audiology continues to expand, it’s become increasingly important to efficiently navigate through the wealth of research articles. Previously, we recommended searching for articles using [(Computational Audiology) OR ((Machine Learning) AND audiology)] on Google Scholar and using ‘Computational Audiology’ as a keyword for better discoverability. However, the rapid evolution of AI tools offers even more streamlined ways to manage research publications.

Below is a list of innovative AI-powered tools designed to help you find relevant papers, manage references, proofread your work, and more:

  1. Evidence Hunt: Chat with a bot to search PubMed for research articles – https://evidencehunt.com/
  2. Writefull: Proofread your papers using AI algorithms – http://writefull.com
  3. Zotero: Open-source reference management software to organize, share, and create bibliographies – https://zotero.org
  4. Research Rabbit: Discover papers, build citation networks, and receive alerts for new publications – http://researchrabbit.ai
  5. Transpose: Compare journal policies on open peer review, co-reviewing, and preprint policies – https://transpose-publishing.github.io/#/
  6. ChatPDF: An AI-powered app to make reading journal articles easier and faster – https://www.chatpdf.com/
  7. Snack Prompt: Craft better ChatGPT prompts with sorted suggestions – https://snackprompt.com
  8. Perplexity: Access sources & conduct searches beyond ChatGPT’s knowledge – https://perplexity.ai
  9. Stockimg: Generate custom, watermark-free stock images – https://stockimg.ai

We hope these AI tools will help you stay on top of the latest research in computational audiology and enhance your overall research experience. Happy exploring!

Last edited April 23.