Authors: James D. Miller1, Charles S. Watson1, Marjorie R. Leek2, David J. Wark3, Pamela E. Souza4, Sandra Gordon-Salant5, Jayne B. Ahlstrom6, Judy R. Dubno6, and Gary R. Kidd1
1Communication Disorders Technology, Incorporated, Bloomington, Indiana 47408, USA
2VA Loma Linda Healthcare System, Loma Linda, California 92357, USA
3Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38105, USA
4Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
5Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
6Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
Background: Speech perception by hearing aid (HA) users was evaluated with a database that includes up to 45 hours of testing of their aided abilities to recognize syllabic constituents of speech and words in meaningful sentences, both in quiet and under masking (eight-talker babble). Extensive data from 113 HA users were used to evaluate the association between recognition in quiet and recognition in the presence of masking. These data were also used to test a predictive model based on the assumption that speech recognition is a joint function of the ability to identify the fundamental elements of speech and the ability to use contextual cues.
Method: Data were collected from 113 HA users with the Speech Perception Assessment and Training System (SPATS; Miller et al., 2007) at four US universities and at a center operated by the US Veterans Administration. Listeners were trained to recognize syllabic constituents (onsets, nuclei, and codas) and brief sentences at five levels of masking and in quiet. Following 30 hours of training, they were retested on constituents and sentences. A two-parameter mathematical model was fitted to the data: one parameter represents the ability to recognize syllable constituents, and the other represents the efficiency with which context is used.
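The abstract does not specify the functional form of the two-parameter model. As a rough illustration only, the sketch below assumes a Boothroyd/Nicholson-style relation in which one parameter maps constituent accuracy to word accuracy and the other captures the efficiency of context use; the function names, parameter values, and data points are all hypothetical and are not taken from the study.

```python
# Hypothetical sketch of a two-parameter model relating constituent
# recognition to word-in-sentence recognition. The functional form
# (a Boothroyd/Nicholson-style context factor) is an assumption for
# illustration; the model actually fitted in the study may differ.
import numpy as np
from scipy.optimize import curve_fit

def predicted_sentence_score(p_constituent, a, k):
    """Predict the proportion of words correct in sentences.

    p_constituent : proportion of syllable constituents identified correctly
    a             : exponent mapping constituent accuracy to isolated-word
                    accuracy (phonetic-detail parameter, assumed form)
    k             : context-efficiency parameter (k >= 1); larger k means
                    greater benefit from sentence context (assumed form)
    """
    p_word_isolated = p_constituent ** a           # words built from constituents
    return 1.0 - (1.0 - p_word_isolated) ** k      # boost from sentence context

# Example: fit the two parameters to (constituent, sentence) score pairs
# measured for one listener across masking levels (values are invented).
p_const = np.array([0.55, 0.65, 0.75, 0.85, 0.92])
p_sent  = np.array([0.40, 0.60, 0.78, 0.90, 0.97])
(a_hat, k_hat), _ = curve_fit(predicted_sentence_score, p_const, p_sent,
                              p0=[2.0, 2.0], bounds=([0.5, 1.0], [5.0, 10.0]))
print(f"phonetic-detail exponent a = {a_hat:.2f}, context factor k = {k_hat:.2f}")
```

Under this kind of formulation, a listener's sentence scores across masking levels can be predicted from constituent scores once the two parameters are estimated, which is the sense in which constituent identification plus context use is said to predict word recognition in sentences.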
Results:
- Results based on 113 HA users confirm earlier findings that the same underlying relations between recognition of constituents and sentences are maintained after extensive training.
- The ability to identify syllable constituents, when combined with the ability to utilize context, is highly predictive of the ability to identify words in sentences.
- A model based on the premise that sentence recognition is controlled in large part by a client’s ability to resolve relevant phonetic details predicts the observed results.
- Performance following 30 hours of training shows small but significant improvements in both constituent and sentence identification.
Conclusion: The crucial measure of a hearing aid is how well the aided listener can correctly identify the relevant phonetic details of speech. Non-auditory factors such as the use of context are important but are not directly influenced by hearing-aid design.