Novel data collection tools yield richer datasets that enable, for example, data mining.

Fig. 1 Map of England by Government Office Regions, showing prevalence rates of self-reported hearing loss in eight waves of the English Longitudinal Study of Ageing (ELSA). This work by Dialechti Tsimpida is licensed under a Creative Commons Attribution 4.0 International License.

Prevalence statistics of hearing loss in adults: Harnessing spatial big data to estimate patterns and trends

Harnessing spatial big data to estimate patterns and trends of hearing loss

Graphic representation of how differences in PCA components are reflected in MAPs.

How variation in cochlear implant performance relates to differences in MAP parameters

Statistical analysis of how fitting parameters relate to speech recognition scores finds meaningful differences between the highest- and lowest-scoring tertiles of recipients.

Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Results of classification using a Naïve Bayes classifier and a 5-fold cross-validation procedure.

Automatic detection of human activities from accelerometer sensors integrated in hearables

Using a Naïve Bayes classifier, we showed that twelve different activities could be classified above chance.
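As a rough illustration of this approach (not the study's actual features, data, or twelve-class task), a Gaussian Naïve Bayes classifier with 5-fold cross-validation can be sketched in pure Python on synthetic two-feature "activity" windows:

```python
import math
import random

random.seed(0)

def make_features(n, mean, sd):
    # Each sample: two summary features of a hypothetical accelerometer window
    # (e.g. mean magnitude and a variance proxy)
    return [(random.gauss(mean[0], sd), random.gauss(mean[1], sd)) for _ in range(n)]

# Two invented activities with different accelerometer statistics
data = [(x, 0) for x in make_features(50, (1.0, 0.2), 0.15)] + \
       [(x, 1) for x in make_features(50, (1.6, 0.9), 0.15)]
random.shuffle(data)

def train_gnb(samples):
    # Estimate per-class prior, feature means, and feature variances
    model = {}
    for label in {y for _, y in samples}:
        feats = [x for x, y in samples if y == label]
        n = len(feats)
        means = [sum(f[i] for f in feats) / n for i in range(2)]
        var = [sum((f[i] - means[i]) ** 2 for f in feats) / n + 1e-6 for i in range(2)]
        model[label] = (n / len(samples), means, var)
    return model

def predict(model, x):
    # Pick the class with the highest Gaussian log-posterior
    def log_post(label):
        prior, means, var = model[label]
        ll = math.log(prior)
        for i in range(2):
            ll += -0.5 * math.log(2 * math.pi * var[i]) - (x[i] - means[i]) ** 2 / (2 * var[i])
        return ll
    return max(model, key=log_post)

# 5-fold cross-validation
k = 5
fold = len(data) // k
accs = []
for f in range(k):
    test = data[f * fold:(f + 1) * fold]
    train = data[:f * fold] + data[(f + 1) * fold:]
    model = train_gnb(train)
    accs.append(sum(predict(model, x) == y for x, y in test) / len(test))

print(round(sum(accs) / k, 2))  # mean accuracy, well above the 0.5 chance level here
```

With two classes, chance level is 0.5; the real study's twelve-class chance level would be correspondingly lower.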

Topics found with Latent Dirichlet Allocation on 100k Reddit posts. The top rectangle of each block shows the topic's name, derived from the algorithm's output: the lemmatised words shown in the middle rectangle. The lower rectangle displays the proportion of messages that mention the topic.

What can we learn about tinnitus from social media posts?

Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.
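A minimal sketch of Latent Dirichlet Allocation via collapsed Gibbs sampling on a toy corpus with invented forum vocabulary (the study used 100k Reddit posts with lemmatisation; everything below is illustrative):

```python
import random
from collections import Counter

random.seed(1)

# Toy corpus: two latent themes with made-up tinnitus-forum vocabulary
docs = [
    "ringing ears loud ringing noise".split(),
    "sleep anxiety stress sleep".split(),
    "noise ears ringing loud".split(),
    "stress sleep anxiety anxiety".split(),
] * 5

K, alpha, beta = 2, 0.1, 0.01          # topics, Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

# Random initial topic assignment per token, plus count tables
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]           # topic counts per document
nkw = [Counter() for _ in range(K)]     # word counts per topic
nk = [0] * K
for di, d in enumerate(docs):
    for wi, w in enumerate(d):
        t = z[di][wi]
        ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1

# Collapsed Gibbs sampling: resample each token's topic given all others
for _ in range(200):
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            weights = [(ndk[di][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                       for k in range(K)]
            t = random.choices(range(K), weights)[0]
            z[di][wi] = t
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1

for k in range(K):
    print(f"topic {k}:", [w for w, _ in nkw[k].most_common(3)])
```

On this separable toy corpus the sampler recovers the two themes; real forum text needs preprocessing (lemmatisation, stop-word removal) and more topics.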

Overview of the decision tree to diagnose somatosensory tinnitus

A data-driven decision tree for diagnosing somatosensory tinnitus

Based on the results of an online survey, we developed a decision tree to classify somatosensory tinnitus patients with an accuracy of over 80%.
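A decision tree of this kind reduces to nested if/else rules over survey answers. The sketch below is purely hypothetical: the questions, order, and outputs are invented for illustration and are not the published tree.

```python
def classify(answers):
    """Hypothetical decision tree over survey answers.

    The criteria below are illustrative only and do not reproduce
    the published somatosensory tinnitus decision tree.
    """
    if answers["tinnitus_modulated_by_jaw_or_neck_movement"]:
        return "somatosensory tinnitus likely"
    if answers["neck_or_jaw_pain"] and answers["tinnitus_varies_with_posture"]:
        return "somatosensory tinnitus likely"
    return "somatosensory tinnitus unlikely"

patient = {
    "tinnitus_modulated_by_jaw_or_neck_movement": False,
    "neck_or_jaw_pain": True,
    "tinnitus_varies_with_posture": True,
}
print(classify(patient))  # somatosensory tinnitus likely
```

A data-driven version would learn such splits (and their order) from the survey responses rather than hand-coding them.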


Examining the association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa

This study examines the association between standard threshold shifts for occupational hearing loss and exposure to noise and platinum mine dust among miners at a large-scale platinum mine in South Africa.


Prediction of speech recognition by hearing-aid users: the syllable-constituent, contextual model of speech perception

Speech perception by hearing-aid (HA) users has been evaluated using a database that includes up to 45 hours of testing of their aided ability to recognize syllabic constituents of speech, and words in meaningful sentences, under both masked (eight-talker babble) and quiet conditions.


Administration by telephone of US National Hearing Test to 150,000+ persons

The U.S. National Hearing Test has now been taken by over 150,000 people and this extensive database provides reliable estimates of the distribution of hearing loss for people who voluntarily take a digits-in-noise test by telephone.
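Digits-in-noise tests typically adapt the signal-to-noise ratio trial by trial and estimate a speech reception threshold (SRT). The simulation below uses an invented logistic psychometric function and a simple 1-up/1-down staircase; it illustrates the general idea, not the actual National Hearing Test protocol.

```python
import random

random.seed(2)

def simulate_listener(snr_db, srt_true=-10.0, slope=0.15):
    """Probability of repeating a digit triplet correctly rises with SNR
    (logistic psychometric function; parameters are illustrative)."""
    p = 1 / (1 + 10 ** (-slope * (snr_db - srt_true)))
    return random.random() < p

def digits_in_noise_test(trials=40, start_snr=0.0, step=2.0):
    snr, track = start_snr, []
    for _ in range(trials):
        correct = simulate_listener(snr)
        track.append(snr)
        snr += -step if correct else step   # 1-up/1-down adaptive rule
    # SRT estimate: mean SNR over the final trials, after convergence
    return sum(track[-20:]) / 20

srt = digits_in_noise_test()
print(f"estimated SRT: {srt:.1f} dB SNR")
```

A 1-up/1-down rule converges on the SNR giving 50% correct, so the estimate hovers near the simulated listener's true SRT of −10 dB.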


Audiological classification performance based on audiological measurements and Common Audiological Functional Parameters (CAFPAs)

Working towards a diagnostic support tool in audiology, this study shows that the Common Audiological Functional Parameters (CAFPAs) are as suitable for classifying audiological findings as combinations of typical audiological measurements, and that they therefore offer the potential to combine different audiological databases.


Predicting abnormal hearing difficulty in noise in ‘normal’ hearers using standard audiological measures

This study used machine learning models trained on otoacoustic emissions and audiometric thresholds to predict self-reported difficulty hearing in noise in normal hearers.


Predicting Hearing Aid Fittings Based on Audiometric and Subject-Related Data: A Machine Learning Approach

A machine learning model is trained on real-world fitting data to predict the user's individual gain from audiometric and further subject-related data, such as age, gender, and acoustic environment.
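As a toy illustration of learning a gain rule from fitting data (a single-predictor least-squares fit on synthetic records, not the study's model or data), one can recover an approximate half-gain relationship between hearing level and prescribed gain:

```python
import random

random.seed(3)

# Synthetic fitting records: hearing level (dB HL) -> prescribed gain (dB),
# roughly following a half-gain rule plus noise (illustrative data only)
records = [(hl, 0.5 * hl + random.gauss(0, 2)) for hl in range(20, 80, 2)]

# Ordinary least squares for gain = a * HL + b (closed form, one predictor)
n = len(records)
mx = sum(hl for hl, _ in records) / n
my = sum(g for _, g in records) / n
a = (sum((hl - mx) * (g - my) for hl, g in records)
     / sum((hl - mx) ** 2 for hl, _ in records))
b = my - a * mx

print(f"learned gain rule: gain ≈ {a:.2f} * HL {b:+.2f} dB")
```

A realistic model would use many predictors (age, gender, acoustic environment) and a nonlinear learner, but the principle of fitting prescribed gain from subject data is the same.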


Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin) project aims to create a fully automatic test-development procedure for digits-in-noise hearing tests in various languages and for different target populations.

The nine stages of the machine learning workflow.

The critical role of computing infrastructure in computational audiology

The rise of new digital tools for collecting data at scales never before seen in our field, coupled with new modeling techniques from deep learning, requires us to think about what computational infrastructure we need in order to fully enjoy the benefits and mitigate the associated barriers.

Source: https://www.stripepartners.com/our_writing_article/the-age-of-the-ear/

Computational Audiology: new ways to address the global burden of hearing loss

Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
