Audiology services provided via connected health care

Illustration of the differences between the clinical test and speech audiometry at home, which can be performed with a smart speaker.

Hearing test using smart speakers: Speech audiometry with Alexa

We present an Alexa skill that performs a speech-in-noise listening test with matrix sentences. The skill is evaluated with four subject groups and in three different acoustic conditions.

dbTrack (Westone) technology

Sound-Level monitoring earphones with smartphone feedback as an intervention to promote healthy listening behaviors in young adults

In a first-of-its-kind study, we aimed to determine the accuracy and reliability of sound-level monitoring earphones and the effect of smartphone feedback as an intervention to encourage safe listening habits among young people.

Figure 1. A: Data streamed from the hearables to a PC running a recording script. B: Data from two different activities, recorded over 5 minutes each. C: Results of Naïve Bayes classification with a 5-fold cross-validation procedure.

Automatic detection of human activities from accelerometer sensors integrated in hearables

Using a Naive Bayes classifier, we show that twelve different activities can be classified above chance.

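As a rough illustration of the approach named above (not the study's actual pipeline), the sketch below trains a Gaussian Naive Bayes classifier on invented two-feature accelerometer summaries for two hypothetical activities and reports 5-fold cross-validated accuracy; the feature definitions, class means, and sample sizes are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Invented per-window feature summaries (e.g. mean and RMS acceleration)
# for two hypothetical activities; the study classified twelve.
walking = rng.normal(loc=[0.0, 1.2], scale=0.3, size=(60, 2))
running = rng.normal(loc=[0.0, 2.5], scale=0.4, size=(60, 2))
X = np.vstack([walking, running])
y = np.array([0] * 60 + [1] * 60)

# 5-fold cross-validated accuracy; chance level is 0.5 for two balanced classes.
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```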
Results for V1 with both cochlear implant (CI) and hearing aid (HA) (top), CI only (middle) and HA only (bottom) in response to a set of 30 sentences extracted from Speech Banana auditory training app

Visualization of speech perception errors through phoneme alignment

We present an automated program for aligning stimulus-response phonemes collected during speech testing, in order to visualize speech perception errors in individuals with hearing loss.

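Stimulus-response phoneme alignment of this kind is typically computed with dynamic programming. The sketch below shows a minimal Needleman-Wunsch-style global alignment with hypothetical scoring parameters; the paper's actual algorithm and scores may differ.

```python
def align_phonemes(stim, resp, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of stimulus vs. response phonemes.

    Returns a list of (stimulus, response) pairs; "-" marks a phoneme the
    listener missed (deletion) or added (insertion).
    """
    n, m = len(stim), len(resp)
    # Fill the dynamic-programming score matrix.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if stim[i - 1] == resp[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back from the bottom-right corner to recover the aligned pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        sub = match if i and j and stim[i - 1] == resp[j - 1] else mismatch
        if i and j and score[i][j] == score[i - 1][j - 1] + sub:
            pairs.append((stim[i - 1], resp[j - 1]))
            i, j = i - 1, j - 1
        elif i and score[i][j] == score[i - 1][j] + gap:
            pairs.append((stim[i - 1], "-"))
            i -= 1
        else:
            pairs.append(("-", resp[j - 1]))
            j -= 1
    return pairs[::-1]

# "cat" heard as "cap": the alignment isolates the substitution t -> p.
print(align_phonemes(["k", "ae", "t"], ["k", "ae", "p"]))
```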
A) Flowchart of the "Meniere Calendar" application, B) Final psychometric function fitted to data from a normal hearing participant (left) and a Meniere's patient (right). C) Screenshots of the application's environment

Systematic monitoring of Meniere’s disease: A smartphone-based approach for the periodical assessment of audiometric measures and fluctuating symptoms

This study developed application software that periodically tracks the course of Meniere's disease through automated, binaural audiometric tests.

Topics found with Latent Dirichlet Allocation on 100k Reddit posts. The top rectangle of each block is the topic's name assessed through the algorithm's output: the lemmatised words shown in the middle rectangle. The proportion of messages that mention the topic is displayed in the lower rectangle.

What can we learn about tinnitus from social media posts?

Exploiting spontaneous messages of Reddit users discussing tinnitus, this work identifies the main topics of interest, their heterogeneity, and how they relate to one another based on co-occurrence in users' discussions, with the aim of enhancing patient-centered support.

Outline of the RCT

Outcomes and experiences of delivering an internet-based intervention for tinnitus during the COVID-19 pandemic

An RCT undertaken in a US population indicated the value of an internet-based CBT intervention both for reducing tinnitus distress and its comorbidities and for managing the anxiety associated with the pandemic.


Preliminary evaluation of the Speech Reception Threshold measured using a new language-independent screening test as a predictor of hearing loss

We developed a new, automated, language-independent speech-in-noise screening test and evaluated its performance in 150 subjects against the WHO criteria for slight/mild and moderate hearing loss, observing an accuracy above 80%, with areas under the ROC curves of 0.83 and 0.89, respectively.

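The area under an ROC curve reported above can be computed rank-wise, as the probability that a randomly chosen impaired subject's screening score outranks a randomly chosen unimpaired subject's. The sketch below uses made-up labels and scores, not the study's data:

```python
def roc_auc(labels, scores):
    """Rank-based AUC: probability that a random positive case outranks
    a random negative one (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up example: 1 = hearing loss per the reference criterion; the score
# is the screening measure, oriented so higher = more likely impaired.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, scores))  # 0.75
```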
Final masked AMLAG results for one participant (127) with a left cochlear implant and no residual hearing. Red diamonds denote unheard tones and blue pluses denote heard tones. The most intense tones at lower frequencies in the left ear were effectively masked.

Dynamically Masked Audiograms with Machine Learning Audiometry

Dynamically masked audiograms achieve accurate true threshold estimates and reduce test time compared to current clinical masking procedures.


Aladdin: Automatic LAnguage-independent Development of the Digits-In-Noise test

The Automatic LAnguage-independent Development of the Digits-In-Noise test (Aladdin)-project aims to create a fully automatic test development procedure for digit-in-noise hearing tests in various languages and for different target populations.


Speech recognition apps for the hearing impaired and deaf

Speech recognition software has become increasingly sophisticated and accurate due to progress in information technology. This project aims to examine the performance of speech recognition apps and to explore which audiological tests are a representative measure of the ability of these apps to convert speech into text.

Source: https://www.stripepartners.com/our_writing_article/the-age-of-the-ear/

Computational Audiology: new ways to address the global burden of hearing loss

Computational audiology, the augmentation of traditional hearing health care by digital methods, has potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss.
