
Prevalence statistics of hearing loss in adults: Harnessing spatial big data to estimate patterns and trends
Novel data collection tools lead to richer datasets that, for example, allow for data mining.
Statistical analysis of how fitting parameters relate to speech recognition scores finds meaningful differences between the highest- and lowest-scoring tertiles of recipients.
Using a Naive Bayes classifier, we showed that twelve different activities could be classified above chance.
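As an illustration only, the following is a minimal sketch of above-chance classification of twelve activity classes with a naive Bayes model; the synthetic features, the GaussianNB estimator, and the cross-validation setup are assumptions for demonstration, not the study's actual data or pipeline.

```python
# Minimal sketch (not the authors' code): test whether a naive Bayes
# classifier separates 12 activity classes above chance, using synthetic
# placeholder features instead of the real recordings.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 12, 50, 6

# Synthetic data: each activity gets a slightly shifted feature mean.
X = np.vstack([rng.normal(loc=c * 0.3, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

scores = cross_val_score(GaussianNB(), X, y, cv=5)
chance = 1.0 / n_classes
print(f"mean accuracy: {scores.mean():.2f} (chance level: {chance:.2f})")
```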
Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
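To make the co-occurrence idea concrete, the sketch below counts how often pairs of topics appear together across users; the topic labels and the simple pair-counting are invented placeholders, not the study's actual topics or method.

```python
# Minimal sketch (assumed workflow, not the authors' pipeline): given the
# set of topics each user discusses, count how often topic pairs co-occur.
from itertools import combinations
from collections import Counter

# Placeholder topic sets for three hypothetical users.
user_topics = [
    {"hearing loss", "sleep", "habituation"},
    {"sleep", "anxiety"},
    {"hearing loss", "anxiety", "treatment"},
]

cooccurrence = Counter()
for topics in user_topics:
    for a, b in combinations(sorted(topics), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common(5):
    print(f"{a} & {b}: {count}")
```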
Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
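A minimal sketch of such a decision-tree classifier follows; the binary survey-style features, the labeling rule, and the tree depth are synthetic assumptions for illustration, not the published model or its reported accuracy.

```python
# Minimal sketch (assumed setup, not the published model): fit a shallow
# decision tree on binary survey-style answers and report cross-validated
# accuracy. The feature matrix and labels below are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_questions = 200, 10
X = rng.integers(0, 2, size=(n_patients, n_questions))   # yes/no answers
y = (X[:, 0] & X[:, 3] | X[:, 7]).astype(int)             # placeholder label rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```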
The association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa
Speech perception by hearing aid (HA) users has been evaluated in a database that includes up to 45 hours of testing their aided abilities to recognize syllabic constituents of speech and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.
The U.S. National Hearing Test has now been taken by over 150,000 people, and this extensive database provides reliable estimates of the distribution of hearing loss for people who voluntarily take a digits-in-noise test by telephone.
Towards the development of a diagnostic support tool in audiology, the Common Audiological Functional Parameters (CAFPAs) were shown to be as suitable for classifying audiological findings as combinations of typical audiological measurements, thereby offering the potential to combine different audiological databases.
Diotic and antiphasic digits-in-noise to detect and classify types of hearing loss
This study used machine learning methods to predict bone conduction abnormalities from air conduction pure tone audiometric thresholds.
This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.
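The two preceding summaries describe the same general pattern of supervised prediction from audiometric features; the sketch below illustrates that pattern with a scikit-learn classifier on synthetic thresholds and labels, and is not the models used in either study.

```python
# Minimal sketch (assumptions: features are air conduction thresholds at
# standard audiometric frequencies; the labels, data, and classifier choice
# are placeholders, not those of the studies summarized above).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
freqs_khz = [0.25, 0.5, 1, 2, 4, 8]
n_ears = 500

# Synthetic air conduction thresholds in dB HL.
thresholds = rng.normal(loc=25, scale=15, size=(n_ears, len(freqs_khz)))
# Placeholder binary label: "abnormal" if low-frequency thresholds are elevated.
label = (thresholds[:, :2].mean(axis=1) > 35).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, thresholds, label, cv=5).mean())
```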
How much audiological data is needed for convergence? One year!
A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric and other subject-related data, such as age, gender, and acoustic environment.
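A minimal sketch of such a gain-prediction model is shown below; the gradient-boosting regressor, the feature set, and the synthetic half-gain-style target are assumptions for illustration rather than the actual fitted model or data.

```python
# Minimal sketch (assumed formulation, not the production fitting model):
# regress a gain value on audiometric thresholds plus age and gender,
# all of which are synthetic placeholders here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_users = 400
thresholds = rng.normal(40, 15, size=(n_users, 6))   # dB HL at 6 frequencies
age = rng.integers(18, 90, size=(n_users, 1))
gender = rng.integers(0, 2, size=(n_users, 1))
X = np.hstack([thresholds, age, gender])

# Placeholder target: gain loosely follows a half-gain-style rule plus noise.
gain = 0.5 * thresholds.mean(axis=1) + 0.05 * age.ravel() + rng.normal(0, 3, n_users)

model = GradientBoostingRegressor(random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, gain, cv=5).mean())
```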
The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.
The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computational infrastructure we need in order to fully realize the benefits and mitigate the associated barriers.
Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.