Computational audiology, the augmentation of traditional hearing health care by digital methods including artificial intelligence and machine learning, has the potential to dramatically advance audiological precision and efficiency to address the global burden of hearing loss. Hearing loss is arguably the most prevalent health impairment worldwide, yet many people lack adequate diagnosis and treatment. The increasing availability of quantitative multisource data for clinical care makes audiology ideally positioned for all-encompassing computational assistance, which in turn can serve as a template for other fields. On this website, we strive to highlight recent examples that illustrate the potential of a computational approach to audiology. We envision computationalaudiology.com as a central hub for sharing resources that are useful for researchers and clinicians, including:
- sharing of research software, tools and models
- sharing best practices (data policies, software licensing), inspiring peers, and increasing transparency
- facilitating cooperation across centers to increase sample sizes and strengthen the robustness of experimental evaluations
- building a community that fosters effective collaboration and uses similar tools and data sharing pipelines
Also, we wish to raise awareness of the risks associated with big-data processing and emerging AI applications, both assistive and decisive. Within 5 to 10 years, computational audiology could catalyze the democratization of audiology and help millions of people who suffer from the disabling effects of hearing loss, using information devices that billions across the world already carry in their pockets. Now is the time to discuss the medical, legal, and ethical requirements and considerations that need to be addressed in the near future to ensure a fair, affordable, and safe global hearing health care system that can meet a rapidly growing need.
For those who wish to learn more about artificial intelligence (AI) and machine learning (ML), we have compiled some resources that give audiologists a good introduction to these new data analysis tools.
- The national AI course – A free course. ‘Because artificial intelligence (AI) is essential, now and for the future.’
- Artificial Intelligence and Machine Learning in Hearing Care – A free 60-minute course made by Oliver Townend and Jens Nielsen, sponsored by Widex. The purpose of this short course is to provide an overview of artificial intelligence and machine learning applications in hearing care. The course discusses AI types, benefits, and applications, and reviews findings and future directions.
- AI for everyone – A free course by Andrew Ng from Stanford University and Coursera. ‘AI is not only for engineers. If you want your organization to become better at using AI, this is the course to tell everyone–especially your non-technical colleagues–to take.’
- Neural networks and Deep Learning – A free course by Andrew Ng from Stanford University and Coursera. ‘If you want to break into cutting-edge AI, this course will help you do so. Deep learning engineers are highly sought after, and mastering deep learning will give you numerous new career opportunities. Deep learning is also a new “superpower” that will let you build AI systems that just weren’t possible a few years ago.’
- Machine Learning – A free advanced course by Andrew Ng from Stanford University. ‘Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI. In this class, you will learn about the most effective machine learning techniques, and gain practice implementing them and getting them to work for yourself. More importantly, you’ll learn about not only the theoretical underpinnings of learning but also gain the practical know-how needed to quickly and powerfully apply these techniques to new problems. Finally, you’ll learn about some of Silicon Valley’s best practices in innovation as it pertains to machine learning and AI. This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI). The course will also draw from numerous case studies and applications so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.’
- The open-access book ‘Fundamentals of Clinical Data Science’, edited by Pieter Kubben, Michel Dumontier and Andre Dekker, is aimed at clinicians. ‘Topics covered in the first section on data collection include: data sources, data at scale (big data), data stewardship (FAIR data) and related privacy concerns.’
For an introduction and overview of how you can apply deep learning, have a look at the presentation “TensorFlow and deep learning – without a PhD” by Martin Görner. He makes deep learning look easy and provides a very cool example of a recurrent neural network that learned to write like Shakespeare.
Here is the code for the RNN he trains to produce ‘hallucinated’ Shakespeare:
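Görner’s full training code accompanies his talk. Purely as an illustration of the mechanics (this is our NumPy sketch, not Görner’s TensorFlow code, and the weights are untrained), here is the forward pass of a character-level RNN, the kind of model that, once trained on a Shakespeare corpus, can hallucinate new text one character at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = sorted(set("to be or not to be"))     # toy character vocabulary
char_to_ix = {c: i for i, c in enumerate(vocab)}
V, H = len(vocab), 16                          # vocabulary and hidden sizes

# Randomly initialized weights; training would tune these by
# backpropagation through time on a large text corpus.
Wxh = rng.normal(0, 0.01, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.01, (H, H))   # hidden -> hidden (the recurrence)
Why = rng.normal(0, 0.01, (V, H))   # hidden -> output
bh, by = np.zeros(H), np.zeros(V)

def step(h, ch):
    """One RNN time step: consume a character, return the new hidden
    state and a probability distribution over the next character."""
    x = np.zeros(V)
    x[char_to_ix[ch]] = 1.0                    # one-hot encode the input
    h = np.tanh(Wxh @ x + Whh @ h + bh)        # update hidden state
    logits = Why @ h + by
    p = np.exp(logits - logits.max())          # numerically stable softmax
    return h, p / p.sum()

h = np.zeros(H)
for ch in "to be":                             # feed a seed string
    h, p = step(h, ch)

print(len(p))                                  # one probability per character
```

Generating text is then just sampling a character from `p`, feeding it back in, and repeating; all the Shakespearean flavor lives in the trained weight matrices.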
Below you can watch the webinar given by Tobias Goehring on May 13 about how machine learning can be applied to speech processing in hearing devices. The webinar is part of a series organized by the Acoustic Network UK and the Basic Auditory Science Seminar.
Audiology is the science concerned with the diagnosis, treatment and rehabilitation of hearing and balance disorders. Scientifically, many breakthroughs in the field of auditory research emerged in the 20th century, including the clinical audiogram, evoked potentials and rehabilitation with hearing aids and cochlear implants. Auditory research offers a wider perspective and has provided fundamental insights into perception and behavior, which are slowly making the transition into clinical practice. For those AI experts and digital health professionals who wish to learn more about audiology, we have compiled some resources that give non-audiologists a good introduction to the field of hearing.
- Audiology 101: An Introduction to Audiology for Non-audiologists. This book chapter explains the basic concepts of pediatric audiology.
Above you can watch ‘Auditory Transduction’, a video made by Brandon Pletsch. ‘This 7-minute video by Brandon Pletsch takes viewers on a step-by-step voyage through the inside of the ear, to the acoustic accompaniment of classical music. Pletsch, a former medical illustration student at the Medical College of Georgia, first built a physical ear model and mapped which frequency ranges hit which parts of the inner ear. He then created digital renderings of each part of the hearing pathway using several software packages.’
Here are examples of research projects and activities related to computational audiology:
The glossary below draws on definitions adapted to healthcare, derived from the financial sector (Schueffel, 2017), a paper on data mining in audiology (Mellor et al., 2018), a blog (Bajwa, 2018), and a perspective paper on computational audiology (Wasmann et al., 2020).
|Algorithm (Schueffel, 2017)||An algorithm is a set of rules or a procedure to be followed for solving a mathematical problem.|
|Artificial intelligence (AI) (Schueffel, 2017)||Artificial intelligence is the ability of computer systems to perform tasks normally associated with the aptitudes of intelligent beings, such as learning and generalizing or even reasoning and interpreting; these abilities enable systems to accomplish complex tasks such as visual perception, speech recognition, decision-making, or translation between languages.|
|Big data (Schueffel, 2017)||Big data is a term for extremely large or complex data sets that can be mined and analyzed with specific data processing software.|
|Computational audiology (Wasmann et al., 2020)||The approach to diagnosis, treatment and rehabilitation in audiology that augments traditional hearing health care with digital methods, including artificial intelligence and machine learning.|
|Deep Learning||The subset of machine learning composed of algorithms that permit software to train itself to perform tasks, like speech and image recognition, by exposing multilayered neural networks to vast amounts of data. Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction.|
|Machine learning (Schueffel, 2017)||Machine learning is an application of artificial intelligence that automates analytical model building by using algorithms that iteratively learn from data without being explicitly programmed where to look. It is a branch of artificial intelligence in which a computer learns to perform a task by being exposed to representative data; in general, large amounts of data are required to arrive at a solution with sufficient accuracy. Machine learning is to some extent the opposite of computer programming, in which software explicitly defines how to solve a particular problem. Instead, machine learning is ‘data-driven’ and learns the task by applying statistical techniques.|
|Data mining (Berendt & Preibusch, 2014)||Data mining in the general sense is regarded as “knowledge discovery”. Data mining includes descriptive aspects (when it is used as exploratory data analysis) as well as prescriptive aspects (when it is used for decision support, in recommender systems, etc.).|
|Open innovation (Schueffel, 2017)||Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively.|
|Classification (Mellor et al., 2018)||Classification is the task of predicting the label or category of a new observation (from a set of labels or categories), given a training set of data containing observations (or instances) whose labels are already known.|
|Clustering (Mellor et al., 2018)||Clustering is the task of grouping observations (or instances) into groups known as clusters, given a training set of data containing observations. The goal is that instances in the same cluster should be more similar to each other than to instances in other clusters. Unlike with classification, no labels are provided beforehand.|
|Dimension (Mellor et al., 2018)||Dimension is a synonym for an attribute or feature. An example entry, or instance, in the data set will be described by a set of dimensions. Examples of dimensions are height, gender, and age, or a measure of absolute threshold at a single frequency.|
|Domain (Mellor et al., 2018)||Domain is a high-level modality, where the concept is broader in nature. For example, a person’s lifestyle may be described in a given domain, and their hearing status may be described in another. Each domain can be measured by multiple dimensions/features that may be grouped into multiple modalities.|
|Modality (Mellor et al., 2018)||Modality is a set of related dimensions/features that describe a single object or concept. For example, a clinical audiogram is typically specified by thresholds at eight different frequencies. When the dimensions together describe a single concept, such as an audiogram, we term this a modality.|
|Regression (Mellor et al., 2018)||Regression is the task of predicting the continuous response to an input variable, given a set of training data containing observations whose continuous response is already known. This prediction of a continuous response is as opposed to classification where solely a discrete label or category is predicted.|
|Subgroup discovery (Mellor et al., 2018)||Subgroup discovery is the task of finding a subset of instances in a data set for which some relationship or dependency holds. This is as opposed to classification, regression, and clustering that provide some prediction or description of the whole data set.|
|Overfitting (Bajwa, 2018)||The phenomenon in which a particular dataset is fit too closely by a model, resulting in poor generalizability of that model. Mitigated with regularization.|
|Regularization (Bajwa, 2018)||The usual method to address overfitting by introducing additional optimization constraints. Mathematically, a common form of regularization is to penalize the norm of the parameter (weight) vector.|
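The last two glossary entries are easy to make concrete. Below is a small, self-contained sketch (synthetic data, plain NumPy; our illustration, not taken from any of the cited papers): fitting a degree-9 polynomial to ten noisy measurements with no penalty reproduces the noise exactly, while an L2 (ridge) penalty on the weight vector, a common form of regularization, trades a little training error for smaller weights and a smoother fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measurements": a smooth underlying trend plus noise.
x = np.linspace(-1, 1, 10)
y = np.sin(np.pi * x) + rng.normal(0, 0.3, x.size)

X = np.vander(x, 10, increasing=True)        # degree-9 polynomial features

def ridge_fit(X, y, lam):
    """Least-squares fit with L2 penalty lam * ||w||^2 (lam=0: plain fit)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

w_overfit = ridge_fit(X, y, lam=0.0)         # interpolates the noise
w_ridge = ridge_fit(X, y, lam=1e-3)          # penalty keeps weights small

train_err = lambda w: np.mean((X @ w - y) ** 2)

# Regularization trades a little training error for smaller weights,
# which yields smoother fits that generalize better to unseen data.
print(train_err(w_overfit) < train_err(w_ridge))            # True
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_overfit))  # True
```

In practice, the regularization strength (here `lam`) is not chosen by hand but tuned on held-out data, which is how overfitting is detected in the first place.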
Bajwa, A. (2018, August 17). What We Talk About When We Talk About Bias (A guide for everyone). Medium. https://medium.com/@ayesharbajwa/what-we-talk-about-when-we-talk-about-bias-a-guide-for-everyone-3af55b85dcdc
Berendt, B., & Preibusch, S. (2014). Better decision support through exploratory discrimination-aware data mining: Foundations and empirical evidence. Artificial Intelligence and Law, 22(2), 175–209. https://doi.org/10.1007/s10506-013-9152-0
Mellor, J. C., Stone, M. A., & Keane, J. (2018). Application of Data Mining to “Big Data” Acquired in Audiology: Principles and Potential. Trends in Hearing, 22. https://doi.org/10.1177/2331216518776817
Schueffel, P. (2017). The concise Fintech compendium. Fribourg, Switzerland.
Wasmann, J. W., Lanting, C., Huinck, W., Mylanus, E., van der Laak, J., Govaerts, P., Swanepoel, D., Moore, D. R., & Barbour, D. L. (2020). Computational audiology: New approaches to advance hearing health care in the digital age (in review). https://doi.org/10.31234/osf.io/hu8eg