I am an independent senior research fellow at the Wellcome Centre for Human Neuroimaging, at University College London. Before joining UCL, I obtained my PhD at the Donders Institute, under the supervision of Dr. Floris de Lange, and completed a postdoctoral fellowship in the lab of Professor Nicholas Turk-Browne, first at Princeton University and then at Yale University. I am interested in how prior knowledge and expectations influence how we perceive the world, and how this is realised by the brain.
In my work, I am interested in what determines how we perceive the world around us. What I find particularly fascinating is the fact that our visual perception is not simply a product of the light that hits our eyes, but is instead strongly influenced by our prior knowledge and expectations.
For instance, our knowledge that light usually comes from above leads us to perceive flat shaded circles as either convex bumps or concave dimples, depending on whether the 'shadow' is at the top or the bottom of the circle. And we have such a strong expectation that faces are convex rather than concave that we perceive the inside of a hollow mask as convex, despite all cues to the contrary. See this video for a striking demonstration.
In other words, perception can be seen as a process of inference, trying to arrive at the most likely explanation for our sensory inputs given our knowledge of the world. For example, we have a strong impression of a white square in the image above right, possibly because our visual system deems a square surface occluding parts of the circular orange and lime slices a likely cause of this image. Note that this is not a conscious, deliberate process, as I may appear to be suggesting here, but rather an automatic property of the way our brain processes information.
Prior knowledge is not always as static as in the examples above; in daily life we often form expectations on the fly, based on associations we have learned. For instance, if we hear the sound of an ice cream van approaching, we have a very clear expectation of what we will see (and possibly taste) once it comes around the corner. When such expectations are valid, they help us process sensory inputs quickly and efficiently. When they are wrong, they can impair and bias our perception, especially when sensory inputs are noisy, such as in a rain storm or heavy fog. Many of us will know how eerily like a monster a bunch of clothes on a chair can look in a dark bedroom.
When our brain relies on (aberrant) prior expectations too much, at the expense of actual sensory inputs, this can lead to hallucinations, as occurs in, for instance, psychosis and schizophrenia. At the other end of the spectrum, diminished use of prior knowledge has been implicated in autism. Understanding the mechanisms by which the brain integrates prior knowledge and sensory inputs will likely shed light on many aspects of brain function and cognition.
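The balance between prior expectations and sensory input described above is often formalised as precision-weighted Bayesian integration: the noisier the sensory input, the more the percept is pulled toward the prior. A toy sketch of this idea (my own illustrative example with made-up numbers, not taken from any specific study):

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian sensory observation.

    The posterior mean is a precision-weighted average: whichever
    source is more reliable (lower variance) gets more weight.
    """
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    return (prior_mean * prior_precision + obs * obs_precision) / (
        prior_precision + obs_precision
    )

# Prior expectation says the stimulus is at 0; the senses report 10.
clear = posterior_mean(0.0, 1.0, 10.0, 0.1)   # reliable input: percept stays near 10
foggy = posterior_mean(0.0, 1.0, 10.0, 10.0)  # noisy input: percept pulled toward 0
```

With a reliable observation the percept tracks the input (`clear` is about 9.1); in 'fog' the same input yields a percept dragged most of the way back to the prior (`foggy` is about 0.9), mirroring how expectations bias perception most when sensory evidence is weak.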
If you'd like to read about these topics more in depth, have a look at this review paper.
In my research, I use advanced neuroimaging methods to study the neural mechanisms underlying perceptual expectations in humans.
In one line of research, I have induced expectations by having people learn that the presentation of a certain tone predicts which image they will see next. Usually, this prediction would come true, but sometimes we presented an unexpected image instead. We found that valid expectations led to less brain activity in the visual cortex, but that this activity contained more information about the presented image. (For details, see the paper.) We even found that expecting a specific image can lead to the activation of a template of that image in visual cortex before any image is presented at all (read the paper). In other words, the brain prepares for expected inputs by pre-activating a template of those inputs. Finally, using a slightly different experimental setup with very noisy visual stimuli, we found that invalid expectations can bias what people see, as well as what is represented in the activity patterns in the visual cortex (see this paper). Together, these findings show that prior expectations have a profound influence on sensory processing.
As mentioned above, expectations induced by prior knowledge can also explain many visual illusions. In another set of studies, I investigated the neural mechanism of this phenomenon in the context of the famous Kanizsa illusion (see below). Using a novel method to reconstruct the image represented in the visual cortex, we revealed a striking effect of this visual illusion: brain activity was increased at the location of the illusory triangle, but decreased at the locations of the ‘Pac-Men’ that induced the illusion. See the paper for details of the reconstruction method and a detailed interpretation of how this effect may be explained by perceptual expectations.
In order to obtain a mechanistic interpretation of these phenomena, we need to know more about the neural circuits involved. The human cortex consists of multiple layers, which serve different functions in information processing. For example, information from the eyes arrives predominantly in the middle layers, while feedback information from higher-order cortical regions mostly affects processing in the deep and superficial layers. In humans the activity in the different layers is almost impossible to tease apart because the layers are so thin. However, using high-resolution fMRI scanning in combination with a novel analysis technique, we were able to show that the increased activity evoked by the illusory triangle was specific to the deep layers of the primary visual cortex. This suggests that the effect is the result of feedback signals from higher-order cortical regions. For details, see the paper.
Finally, one of the main questions I aim to answer is what the source of these top-down expectation effects in visual cortex is. Based on its reciprocal connections with sensory cortices of all modalities (left panel in the figure below), as well as its ability to quickly learn arbitrary relationships between stimuli, the hippocampus is likely to play an important role in this process. In a recent study, we investigated this by exposing people to auditory cues that predicted the appearance of specific abstract shapes. We found that, while visual cortex represented the shapes that were presented to people's eyes, the hippocampus represented the shapes predicted by the auditory cues, regardless of what was actually presented. This dissociation demonstrates that these two neural systems have distinct roles in perceptual processing, and hints at the possibility of the hippocampus being a source of prediction signals. See the paper for a more detailed discussion.
In sum, my work has shown that neural activity in visual cortex is strongly influenced by prior knowledge and expectations, to a much greater extent than previously thought. This is in line with perception being a process of inference, combining information from the eyes with our prior knowledge to guess the most likely cause of our sensory inputs.
In addition to the research lines described above, some other projects I have been involved in include:
For a full list of my publications, please see my Google Scholar profile.
p.kok at ucl.ac.uk
University College London
Wellcome Centre for Human Neuroimaging
12 Queen Square
London WC1N 3BG