Searching for individual differences in audiovisual integration of speech in noise

A new single-parameter model of audiovisual integration (the Multi-Stage Noise model), applied to a large dataset of speech perception across three modalities (auditory, visual, and audiovisual), suggests that differences in unisensory processing account for ~90% of the variance in individual differences in audiovisual speech perception.
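To illustrate the modeling logic, the sketch below fits a hypothetical single-parameter integration model to per-subject sensitivity (d′) scores and asks how much between-subject variance in audiovisual performance the unisensory scores explain. This is not the Multi-Stage Noise model itself; the model form (optimal quadratic-sum combination scaled by one free efficiency parameter), the parameter values, and all data are assumptions fabricated for demonstration.

```python
# Minimal sketch, NOT the authors' implementation: a one-parameter model
# predicting audiovisual (AV) sensitivity from auditory (A) and visual (V)
# sensitivity, with variance explained as the summary statistic.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n_subjects = 50

# Fabricated unisensory sensitivities (d') for each subject.
d_a = rng.normal(1.5, 0.5, n_subjects).clip(0.1)  # auditory-only
d_v = rng.normal(0.8, 0.4, n_subjects).clip(0.1)  # visual-only (lipreading)

# Fabricated observed AV scores: near-optimal combination plus subject noise.
d_av = 0.9 * np.sqrt(d_a**2 + d_v**2) + rng.normal(0, 0.15, n_subjects)


def predict_av(k):
    """One-parameter prediction: optimal (quadratic-sum) combination of the
    unisensory d' values, scaled by a single efficiency parameter k that is
    shared across all subjects."""
    return k * np.sqrt(d_a**2 + d_v**2)


def sse(k):
    """Sum of squared prediction errors across subjects for parameter k."""
    return np.sum((d_av - predict_av(k)) ** 2)


# Fit the single free parameter by least squares.
fit = minimize_scalar(sse, bounds=(0.1, 2.0), method="bounded")
k_hat = fit.x

# Variance in AV performance explained by the unisensory-based prediction.
resid = d_av - predict_av(k_hat)
r_squared = 1 - resid.var() / d_av.var()
print(f"fitted k = {k_hat:.3f}, variance explained = {r_squared:.1%}")
```

With the fabricated data above, most of the between-subject variance in AV scores is recovered from the unisensory scores alone, mirroring the qualitative pattern the post describes.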
