VCCA2021

DRAFT PROGRAM SCHEDULE for Friday, 25 June 2021

8:00 (London), 9:00 (Amsterdam), 17:00 (Sydney), 3:00 (New York)

MAIN SESSION 1

8:10 (London), 9:10 (Amsterdam), 17:10 (Sydney), 3:10 (New York) 50 min

ZoomRoom A

Chair: Tobias Goehring

KEYNOTE 1: Prof Brian CJ Moore Time-efficient hearing tests and their use in the fitting of hearing aids 30 min

FEATURED 1: Dr Maartje Hendrikse Virtual audiovisual environments for hearing aid evaluation (and fitting) 20 min

SWITCH-OVER BREAK >>> Join Parallel session 15 min

PARALLEL SESSIONS 1 & 2

9:15 (London), 10:15 (Amsterdam), 18:15 (Sydney), 4:15 (New York) 90 min

P1: Hearing loss detection, monitoring & prevalence 90 min

ZoomRoom A

Chair P1: Karina de Sousa

P1-1: Monaghan et al
Detecting hearing loss from children’s speech using machine learning

P1-2: Ooster et al
Hearing test using smart speakers: Speech audiometry with Alexa

P1-3: Lenatti et al
Evaluation of multivariate classification algorithms for hearing loss detection through a speech-in-noise test

P1-4: Herrmann et al
Model-based selection of most informative diagnostic tests and test parameters

P1-5: Liepollo et al
Examining the association of standard threshold shifts for occupational hearing loss among miners exposed to noise and platinum mine dust at a large-scale platinum mine in South Africa

P1-6: Tsimpida et al
Prevalence statistics of hearing loss in adults: Harnessing spatial big data to estimate patterns and trends

P2: Listening effort, behaviour & intervention 90 min

ZoomRoom B

Chair P2: Joaquin Valderrama-Valenzuela

P2-1: Plain et al
A classification approach to listening effort: combining features from the pupil and cardiovascular system

P2-2: Beckers et al
Assessing listening effort, using EEG and pupillometry, in response to adverse listening conditions and memory load

P2-3: Hoogestrat & Illiger et al
Automatic detection of human activities from accelerometer sensors integrated in hearables

P2-4: Knoetze et al
Sound-level monitoring earphones with smartphone feedback as an intervention to promote healthy listening behaviors in young adults

P2-5: Migliorini et al
How variation in cochlear implant performance relates to differences in MAP parameters

P2-6: Picinali et al
Designing the BEARS (Both Ears) virtual reality training suite for improving spatial hearing abilities in teenage bilateral cochlear implantees

SWITCH-OVER BREAK >>> Join Main session 15 min

SPECIAL SESSION 1: GLOBAL BURDEN OF HEARING LOSS

11:00 (London), 12:00 (Amsterdam), 20:00 (Sydney), 6:00 (New York) 60 min

ZoomRoom A

Chair: Jan-Willem Wasmann

Discussion on devices: Alan Archer-Boyd

Discussion on services: Saima Rajasingam

KEYNOTE 2: Prof Nicholas Lesica Harnessing the power of AI to combat the global burden of hearing loss: opportunities and challenges 30 min

DISCUSSION 1 (Break-out room):

Hearing devices of the future 30 min
Overcoming barriers of stigma, logistics, costs and efficacy

DISCUSSION 2 (Break-out room):

Hearing services of the future 30 min
Ensuring wide and equitable access to hearing healthcare

LUNCH-BREAK /// Relax & Chat in Main session 60 min

12:00 (London), 13:00 (Amsterdam), 21:00 (Sydney), 7:00 (New York)

13:00 (London), 14:00 (Amsterdam), 22:00 (Sydney), 8:00 (New York) 120 min

ZoomRoom A

Chair: Prof Waldo Nogueira

Introduction by Prof Waldo Nogueira

FEATURED 2: Dr Raul Sanchez-Lopez Hearing deficits and auditory profiling: data-driven approaches towards personalized audiology 20 min

FEATURED 3: Dr Niels Pontoppidan Learning from audiological data collected in the lab and the real world 20 min


EXPERT PANEL TALKS 5×10 min
T1: Dr Rob Eikelboom

T2: Sarah Nee and Prof Michael Marschollek

T3: Dr Filiep Vanpoucke

T4: Dr Eugen Kludt

T5: Prof Rick Neitzel

PANEL DISCUSSION 30 min

COFFEE BREAK >>> Join Main session 30 min

MAIN SESSION 2

15:30 (London), 16:30 (Amsterdam), 0:30 (Sydney), 10:30 (New York) 50 min

ZoomRoom A

Chair: tbd

KEYNOTE 3: Prof Josh McDermott New models of human hearing via deep learning 30 min

FEATURED 4: Dr Josef Schlittenlacher Machine learning for models of auditory perception 20 min

BREAK >>> Join Parallel session 10 min

PARALLEL SESSIONS 3 & 4

16:30 (London), 17:30 (Amsterdam), 1:30 (Sydney), 11:30 (New York) 75 min

ZoomRoom A

Chair P3: tbd

ZoomRoom B

Chair P4: tbd

P3: Deep learning applications and models 75 min

P3-1: Brochier et al
Comparing phonemic information transmission with cochlear implants between human listeners and an end-to-end computational model of speech perception

P3-2: Saddler et al
Hearing-impaired artificial neural networks replicate speech recognition deficits of hearing-impaired humans

P3-3: Huelsmeier et al
Estimating the distortion component of hearing impairment from attenuation-based model predictions using machine learning

P3-4: Rossbach et al
Binaural prediction of speech intelligibility based on a blind model using automatic phoneme recognition

P3-5: Keshavarzi et al
Use of a deep recurrent neural network to reduce transient noise: Effects on subjective speech intelligibility and comfort

P4: Interventions and diagnosis of tinnitus 75 min

P4-1: Beukes et al
Outcomes and experiences of delivering an internet-based intervention for tinnitus during the COVID-19 pandemic

P4-2: Cardon et al
A data-driven decision tree for diagnosing somatosensory tinnitus

P4-3: Revel et al
What can we learn about tinnitus from social media posts?

P4-4: Erinc et al
Behavioral and electrophysiological evaluation of loudness growth in clinically normal hearing tinnitus patients with and without hyperacusis

P4-5: Stylou et al
Systematic monitoring of Ménière's disease: A smartphone-based approach for the periodic assessment of audiometric measures and fluctuating symptoms

DINNER-BREAK /// Relax & chat >>> Join Main session 45 min
17:45 (London), 18:45 (Amsterdam), 2:45 (Sydney), 12:45 (New York)

MAIN SESSION 3

18:30 (London), 19:30 (Amsterdam), 3:30 (Sydney), 13:30 (New York) 50 min

Chair: Alexander J Billig

KEYNOTE 4: Prof Mounya Elhilali Auditory salience 30 min

FEATURED 5: Dr Simone Graetzer Clarity: machine learning challenges for improving hearing aid processing of speech in noise 20 min

SWITCH-OVER BREAK >>> Join Parallel session 10 min

PARALLEL SESSIONS 5 & 6

19:30 (London), 20:30 (Amsterdam), 4:30 (Sydney), 14:30 (New York) 90 min

ZoomRoom A

Chair P5: Alexander J Billig

ZoomRoom B

Chair P6: Gerard Encina-Llamas

P5: Auditory attention and processes 90 min

P5-1: Holmes et al
Using active inference to model selective attention during cocktail party listening

P5-2: Keshavarzi et al
Cortical tracking of a distractor speaker modulates the comprehension of a target speaker

P5-3: Kegler et al
Correlates of linguistic processing in the frequency following response to naturalistic speech

P5-4: Zhang et al
The effect of selective loss of auditory nerve fibers on temporal envelope processing: a simulation study

P5-5: Grant et al
Functional hearing and communication deficits (FHCD) in blast-exposed service members with normal to near-normal hearing thresholds

P5-6: Ratnanather et al
Visualization of speech perception errors through phoneme alignment

P6: Computational auditory modelling 90 min

P6-1: Carney et al
“Ear in the Clouds” – A web app supporting computational models for auditory-nerve and midbrain responses

P6-2: Fan et al
Predicting fusion of dichotic vowels in normal hearing listeners with a physiologically-based model

P6-3: Kipping et al
A computational single-fiber model of electric-acoustic stimulation

P6-4: Mitchell et al
A computational model of fast spectrotemporal chirp sensitivity in the inferior colliculus

P6-5: Yayli et al
Modeling the effects of inhibition and gap junctions on synchrony enhancement in bushy cells of the ventral cochlear nucleus

P6-6: Leong et al
Modeling formant-frequency discrimination based on auditory-nerve and midbrain responses: normal hearing and sensorineural hearing loss

SWITCH-OVER BREAK >>> Join Main session 15 min
21:00 (London), 22:00 (Amsterdam), 6:00 (Sydney), 16:00 (New York)

21:15 (London), 22:15 (Amsterdam), 6:15 (Sydney), 16:15 (New York) 90 min

Bringing AI and Audiology together