Celebrating a milestone of the Computational Audiology Network
Written by Jan-Willem Wasmann, using Jenni AI and Grammarly. I couldn't find a way to create a more modest tone ;-). Afterwards, I ran it through TypingMind (which uses…
How might people with hearing loss (patients) or clinicians (audiologists and ENT specialists) in hearing healthcare use ChatGPT or other AI chatbots?
We used ChatGPT, an AI system that can help write text and code or answer questions, but that cannot take responsibility the way a human writer can.
In this edition, we will be covering a variety of topics, including the Peer Recommender Challenge.
December 17. Written and edited by Jan-Willem Wasmann with contributions from Seba Ausili, Liepollo Ntlhakana, Bill Whitmer, Soner Türüdü, Rob Eikelboom, Elle O’Brien, Deniz Başkent, Dennis Barbour, Dave Moore &…
With five keynote talks, four special sessions, and over 500 registered participants, the VCCA keeps growing.
In this episode, Brent Edwards from NAL and Stefan Launer from Sonova take us through their careers and share lessons and perspectives on the development of hearing technology. We discuss how technology development is becoming more holistic, design thinking, standardization, and what is needed to reach new service models and innovation.
Here you can find the latest news and developments in computational audiology.
Automated Speech Recognition (ASR) for the deaf and communication on equal terms regardless of hearing status. Episode 2 with Dimitri Kanevsky, Jessica Monaghan and Nicky Chong-White. Moderator: Jan-Willem Wasmann. You…
Active learning in the auditory domain. A round table with Bert de Vries, Josef Schlittenlacher and Dennis Barbour. Moderator: Jan-Willem Wasmann. Audio version and audiovisual version on YouTube available. Josef Schlittenlacher is…
This post covers the VCCA discussions on hearing devices of the future and on hearing diagnostics and services of the future. Both sessions specifically discussed solutions and ideas for low-income countries.
With great pleasure, we look back at a successful VCCA2021.
Over the years, Hannover Medical School has built a comprehensive data pool for patients with implantable hearing devices, which serves as a basis for answering various research questions and for big data analyses.
Connectivity is transforming the hearing care model significantly.
Patient-centric medical research benefits from the sharing and integration of complex and diverse data from different sources such as care, clinical research, and novel emerging data types.
There is no doubt that the move away from paper-based health records to electronic health records has been important for many reasons.
In the Clarity project, a machine learning approach is applied to the problem of hearing aid processing of speech in noise.
Machine learning has various applications in auditory modelling, and the combined approaches will transform individual testing and processing in hearing devices.
Research with audiological data over the last decades has led to many important discoveries. Today, as the data landscape emerges, the focus turns to maturing those discoveries, along the dimensions of coverage, applicability, bias, and privacy, into solutions that improve the lives of people with hearing problems.
Here, the concept and principles of knowledge discovery in databases are revisited and applied to two recently published studies on auditory profiling.
In our group at Oldenburg University, we developed virtual audiovisual environments representing everyday-life listening situations.
The University of Michigan School of Public Health has partnered with Apple Inc. to use advances in smart device and wearable technology to evaluate the levels of sound at which iPhone users listen to music and other media, as well as how long and how often they listen.
We present an Alexa skill that performs a speech-in-noise listening test with matrix sentences. The skill is evaluated with four subject groups and in three different acoustic conditions.
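For context on how matrix tests are typically run: the signal-to-noise ratio is adapted toward the speech reception threshold (SRT), the SNR at which about half the words are repeated correctly. The sketch below illustrates that general idea only; the step size and scoring rule are assumptions for illustration, not the skill's actual implementation.

```python
# Minimal sketch of an adaptive speech-in-noise track (illustration only;
# the step size and scoring rule are assumptions, not the Alexa skill's logic).

def next_snr(current_snr_db, words_correct, words_total=5, step_db=2.0):
    """Adapt the signal-to-noise ratio toward the ~50%-correct point (the SRT).

    Matrix sentences contain a fixed number of words; if more than half are
    repeated correctly, the next sentence is presented at a lower (harder) SNR,
    otherwise at a higher (easier) SNR.
    """
    if words_correct > words_total / 2:
        return current_snr_db - step_db
    return current_snr_db + step_db

# Example track starting at 0 dB SNR with a few simulated responses.
snr = 0.0
for correct in [5, 4, 1, 2, 3]:
    snr = next_snr(snr, correct)
    print(f"{correct}/5 words correct -> next sentence at {snr:+.1f} dB SNR")
```

Averaging the SNRs of the later sentences in such a track gives a simple SRT estimate.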
In theory, the model should conduct the experiment itself, because it knows best which condition will be the most informative for constraining its free parameters.
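As a minimal sketch of this idea (an assumption-laden illustration, not the specific models discussed here): a one-parameter Bayesian threshold model can score every candidate stimulus level by its expected information gain and let the model pick the next trial.

```python
# Hypothetical illustration of model-driven stimulus selection: a Bayesian
# estimate of a single free parameter (the hearing threshold), where the next
# stimulus level is the one with the highest expected information gain.
import numpy as np

levels = np.arange(-10, 85, 5)        # candidate stimulus levels (dB HL)
thresholds = np.arange(-10, 81, 1.0)  # hypotheses for the listener's threshold
prior = np.full(thresholds.shape, 1.0 / thresholds.size)  # flat prior

def p_heard(level, threshold, slope=1.0, guess=0.02, lapse=0.02):
    """Probability of a 'heard' response under a logistic psychometric function."""
    core = 1.0 / (1.0 + np.exp(-slope * (level - threshold)))
    return guess + (1.0 - guess - lapse) * core

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(level, prior):
    """Expected entropy reduction about the threshold if `level` is tested next."""
    like_yes = p_heard(level, thresholds)
    p_yes = np.sum(like_yes * prior)
    post_yes = like_yes * prior / p_yes
    post_no = (1.0 - like_yes) * prior / (1.0 - p_yes)
    expected_posterior_entropy = (p_yes * entropy(post_yes)
                                  + (1.0 - p_yes) * entropy(post_no))
    return entropy(prior) - expected_posterior_entropy

# The model "conducts the experiment": it proposes the most informative level.
gains = [expected_information_gain(lvl, prior) for lvl in levels]
print(f"Next level to test: {levels[int(np.argmax(gains))]} dB HL")
```

After each response, the prior would be replaced by the corresponding posterior and the selection repeated until the threshold estimate is sufficiently precise.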
Here we explore the potential for using machine learning to detect hearing loss from children's speech.
Harnessing spatial big data to estimate patterns and trends of hearing loss
Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
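As an illustrative sketch only (the feature names, data, and tree settings below are hypothetical, not the actual survey items or study results), such a classifier could be fit with scikit-learn:

```python
# Hypothetical sketch: a decision tree trained on binary survey answers to flag
# somatosensory tinnitus. Features, data, and accuracy are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 500
# Hypothetical binary survey answers (1 = yes, 0 = no), e.g. jaw complaints,
# neck pain, modulation of tinnitus by head movement, bruxism.
X = rng.integers(0, 2, size=(n, 4))
# Hypothetical label: somatosensory tinnitus (1) vs. other tinnitus (0),
# with some label noise to make the task non-trivial.
y = ((X[:, 2] == 1) & (X[:, 0] + X[:, 1] >= 1)).astype(int)
flip = rng.random(n) < 0.1
y = np.where(flip, 1 - y, y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# A shallow tree keeps the resulting decision rules clinically interpretable.
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

The shallow depth is a deliberate choice: it trades a little accuracy for decision rules that clinicians can read directly off the tree.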
The association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa
A new online hearing screening procedure integrated with artificial intelligence for the identification of slight/mild hearing loss in older adults.
According to the recent WHO World Report on Hearing, there are approximately 500 million people worldwide with disabling hearing loss, the vast majority of whom receive no treatment. The consequences of this unmet need are dire: hearing loss is a top-5 contributor to the global burden of disability; the leading modifiable risk factor for dementia; and costs nearly 1 trillion dollars per year.
The Clarity project is running a series of machine learning challenges to revolutionise signal processing in hearing aids.
This forum is used to facilitate Q&A and provide additional resources at the ARO symposium 'Emerging Capabilities for Evaluating Human Hearing'.
Here you can watch the ISA webinar about Computational Audiology, held on 9 December 2020. Computational Audiology from…
Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.
The rise of new digital tools for collecting data on scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computational infrastructure we need to fully enjoy the benefits and mitigate the associated barriers.