“Ear in the Clouds” – A web app supporting computational models for auditory-nerve and midbrain responses

Laurel H. Carney1,2, Ava E. Giorgianni1, Douglas M. Schwarz2

1 Biomedical Engineering, University of Rochester, Rochester, NY, USA; 2 Del Monte Institute of Neuroscience, University of Rochester, Rochester, NY, USA

Background: Complex sounds are encoded in population responses that support discrimination, identification, and detection. Models that simulate neural responses are widely available; however, they often require access to and familiarity with computer programming environments, such as MATLAB or Python. Additionally, models often focus on single neurons – users interested in population coding must extend the models to visualize responses of tonotopic arrays of neurons. We developed a cloud-based web app to make simulation and visualization of population responses more accessible.

Methods: The MATLAB App Designer was used to implement our graphical user interface (UR_EAR, University of Rochester, Envisioning Auditory Responses). The interface displays responses of auditory-nerve (AN) models (Zilany et al., 2014; Bruce et al., 2018) and midbrain models (Mao et al., 2013; Nelson & Carney, 2004; Carney & McDonough, 2018). Midbrain models include two major types of inferior colliculus (IC) neurons, with band-enhanced and band-suppressed modulation transfer functions. Stimuli include user-uploaded audio files or standard psychophysical stimuli with adjustable parameters. Time-varying rate functions for a selectable range of frequency channels are displayed, alongside average-rate responses computed over an adjustable analysis window.
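The average-rate computation over an adjustable analysis window can be sketched as follows; this is a minimal illustrative example, not code from UR_EAR, and the function name and interface are hypothetical:

```python
import numpy as np

def average_rate(rate, fs, t_start, t_end):
    """Average a time-varying firing rate (spikes/s) over an analysis window.

    rate           : 1-D array of instantaneous rate, sampled at fs (Hz)
    fs             : sampling rate of the rate function (Hz)
    t_start, t_end : analysis window boundaries (s)
    """
    i0 = int(round(t_start * fs))
    i1 = int(round(t_end * fs))
    return float(np.mean(rate[i0:i1]))

# Example: a 100-ms rate function at fs = 10 kHz, constant at 50 spikes/s
fs = 10_000
rate = np.full(1000, 50.0)
avg = average_rate(rate, fs, 0.0, 0.1)  # 50.0
```

In the app, this average is computed per frequency channel, yielding an average-rate profile across the tonotopic array for the selected window.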

Results: The web app is available at https://urhear.urmc.rochester.edu, with links to a User Manual, FAQs, and a contact button for the authors. Open-source code is available at https://osf.io/6bsnt/, including executable versions for Windows 10, Mac, and Linux.

Conclusions: The current web app supports visualization of AN and IC model population responses for several standard psychophysical stimuli, as well as responses to user-provided audio files. Future efforts will focus on efficient visualization of responses to longer stimuli (music, speech) and on apps that estimate psychophysical thresholds based on model responses. (NIH-R01-001641)
