Understanding and improving neural tracking of audiovisual speech in adults with cochlear implants: the principle of cooperation and competition between modalities
Year of award: 2025
Grantholders
Dr Guangting Mai
University College London, United Kingdom
Project summary
Cochlear implants (CIs) are neural prostheses that give hearing-impaired listeners access to sound, but they do not restore speech perception to the level of normal-hearing individuals. CI listeners face day-to-day communicative challenges in complex listening environments and often rely on visual cues (e.g., a talker's lip movements) to aid speech understanding. While visual speech helps compensate for impoverished hearing, the mechanisms by which it cooperates and competes with auditory processing are poorly understood. This fellowship aims to understand these brain processes in CI listeners. Using novel neuroimaging techniques (EEG and fNIRS), I will disentangle the neural substrates of audiovisual cooperation and competition by measuring cross-modal 'neural tracking' (neural activity synchronising with essential auditory and visual speech features) in CI listeners' brains. I will perform a series of cross-sectional and longitudinal neuroimaging experiments to elucidate how different cross-modal tracking signatures support (cooperation) or impede (competition) audiovisual integration, and how they explain variable success at accessing higher-level, modality-independent linguistic representations. Finally, I will use neurofeedback to determine whether regulating neural tracking causally improves CI listeners' auditory and audiovisual speech performance. This project will advance our knowledge of the mechanisms underlying audiovisual speech perception with CIs and provide an essential theoretical basis for future CI rehabilitation, benefiting a wide range of scientific and clinical stakeholders.
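The summary does not specify how neural tracking will be quantified; one common approach in the literature is a lagged correlation between a speech feature (e.g., the acoustic envelope) and a neural signal. The sketch below is a minimal, illustrative implementation of that idea using NumPy; the function name, parameters, and simulated data are hypothetical and not drawn from the project itself.

```python
import numpy as np

def neural_tracking(envelope, eeg, fs, max_lag_ms=300):
    """Correlate a speech envelope with an EEG channel over a range of
    positive lags (EEG lagging the stimulus) and return the peak Pearson
    correlation and the lag (in ms) at which it occurs.

    This is an illustrative sketch, not the project's actual analysis.
    """
    max_lag = int(fs * max_lag_ms / 1000)
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    best_r, best_lag = -np.inf, 0
    for lag in range(max_lag + 1):
        # Align envelope at time t with EEG at time t + lag
        r = np.corrcoef(env[:len(env) - lag or None], sig[lag:])[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, best_lag * 1000 / fs

# Hypothetical demo: EEG simulated as a noisy, 100 ms delayed copy of the envelope
rng = np.random.default_rng(0)
fs = 100                                   # sampling rate (Hz)
env = rng.standard_normal(fs * 10)         # stand-in for a speech envelope
delay = int(0.1 * fs)                      # 100 ms neural delay
eeg = np.concatenate([rng.standard_normal(delay), env[:-delay]])
eeg += 0.5 * rng.standard_normal(len(eeg)) # additive neural noise
r, lag_ms = neural_tracking(env, eeg, fs)
```

In practice, studies of this kind typically use richer models (e.g., temporal response functions estimated by regularised regression over many lags and channels), but the lagged-correlation sketch captures the core idea of synchrony between stimulus features and neural activity.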