University of Cambridge, United Kingdom
How do we understand what we see? Recognising an object depends on dynamic transformations of information from vision to semantics. Our understanding of what we see is also shaped by the environment: when we encounter an object, we are already embedded in a rich, complex setting, and this generates expectations about the things we are likely to see.
My proposed research will test how the environment changes the dynamics of visual and semantic activity in the brain. I will combine lab-based and real-world studies within a multimodal brain-imaging framework spanning fMRI, MEG, EEG and mobile EEG, together with emerging methodologies including augmented reality, computational modelling, multivariate analyses, neural oscillations and brain connectivity.
This research will advance models of object recognition, which currently say little about the dynamic neural mechanisms through which the environment shapes our perception of objects.