We are constantly immersed in a world of massive, fast-changing information, yet we are able to selectively prioritize the part we need to handle at a given moment and perform various kinds of tasks. How do we maintain and update cognitive processes in such a dynamic fashion, with limited capacity and a reasonable amount of effort?

To answer this question, researchers need not only to study cognitive functions in isolation, but also to understand how those functions interact in dynamic contexts.

Task-irrelevant information contributes to priority maps

In attention studies, it is common practice to give participants a cognitive task to perform (e.g., look for a letter T among distractor Ls) while manipulating task-relevant conditions (e.g., the location of the target T) and task-irrelevant conditions (e.g., the color of the letters) in order to investigate how attention is deployed. However, this definition of task relevance depends arbitrarily on how the researcher designs the experiment; if the task is changed to finding the uniquely colored letter, the color of the letters becomes task-relevant. More importantly, these manipulations are often unknown to participants, raising a critical question: are those "designed-to-be" task-irrelevant factors really treated as irrelevant in natural cognition?

Neural representations of task-irrelevant object identities are modulated by task-relevant spatial certainty

When an object is presented on the screen, it can be processed by the brain to some extent even if it is not relevant to the current task. However, in an ongoing fMRI study, pilot data show that this neural representation of task-irrelevant object identities can be modulated by whether the current task provides explicit spatial certainty for deploying attention. Specifically, in early visual cortex, decoding accuracy is higher when there is no spatial certainty than when spatial certainty is high. This pattern seems to indicate that task-irrelevant objects are better processed when the task offers no strategic spatial certainty to exploit.
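Decoding analyses of this kind typically train a classifier to predict object identity from multivoxel activity patterns and report cross-validated accuracy. The sketch below illustrates the logic with synthetic data and a nearest-centroid decoder; all names, numbers, and the classifier choice are illustrative assumptions, not the study's actual pipeline.

```python
# Toy leave-one-run-out decoding of object identity from voxel patterns.
# Synthetic data and a nearest-centroid classifier stand in for the real
# fMRI analysis; everything here is an illustrative assumption.
import random

random.seed(0)
N_RUNS = 6
# Two hypothetical "object identities", each with a fixed mean pattern.
means = {"face": [1.0] * 10 + [0.0] * 10,
         "house": [0.0] * 10 + [1.0] * 10}

def simulate_run():
    """One noisy activity pattern per identity per scanning run."""
    return {obj: [m + random.gauss(0, 0.5) for m in mean]
            for obj, mean in means.items()}

runs = [simulate_run() for _ in range(N_RUNS)]

def centroid(patterns):
    """Voxel-wise mean of a list of patterns."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def nearest(pattern, centroids):
    """Label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda obj: dist(pattern, centroids[obj]))

# Leave-one-run-out cross-validation: train centroids on all but one run,
# then test on the held-out run.
correct = total = 0
for test_i, test_run in enumerate(runs):
    train = [r for i, r in enumerate(runs) if i != test_i]
    cents = {obj: centroid([r[obj] for r in train]) for obj in means}
    for obj, pattern in test_run.items():
        correct += nearest(pattern, cents) == obj
        total += 1

accuracy = correct / total
print(f"decoding accuracy: {accuracy:.2f}")
```

Comparing such cross-validated accuracies between the certainty conditions is what reveals the modulation described above.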

Cognitive processes update across eye movements

Every time we move our eyes, a new image is recorded at the back of the eye (i.e., the retina). However, we seldom perceive this change in the visual world despite these frequent eye movements, or saccades. How we maintain a stable perception of the world is thus a critical yet complex question, fundamental to understanding our visual system and cognitive processes.

Target localization across saccades

Spatial information is initially processed on the retina as retinotopic (eye-centered) location. Our daily cognitive tasks (e.g., finding keys on a table), on the other hand, often require spatiotopic (world-centered) location. This means that we frequently need to translate retinotopic coordinates into spatiotopic coordinates across saccades. The visual system uses several cues to link the retinal information before and after a saccade into a stable percept, and nontarget objects serving as "landmarks" are one of them. My research has shown that the presence of nontargets both facilitates and biases target localization across saccades by providing relational information in space.
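The retinotopic-to-spatiotopic translation can be sketched as a simple coordinate update: the world-centered location is the eye-centered location shifted by the current gaze position. The 2D setup and all names below are illustrative assumptions, not a model from the research itself.

```python
# Minimal sketch of retinotopic-to-spatiotopic translation across a saccade.
# Coordinates are hypothetical 2D screen positions.

def to_spatiotopic(retinotopic_xy, gaze_xy):
    """World-centered location = eye-centered location + current gaze position."""
    rx, ry = retinotopic_xy
    gx, gy = gaze_xy
    return (rx + gx, ry + gy)

# An object sits at spatiotopic (world-centered) position (10, 5).
world = (10, 5)

# Before the saccade, gaze is at (0, 0): retinal position is (10, 5).
gaze_before = (0, 0)
retinal_before = (world[0] - gaze_before[0], world[1] - gaze_before[1])

# After a saccade to (8, 2), the retinal image shifts to (2, 3) ...
gaze_after = (8, 2)
retinal_after = (world[0] - gaze_after[0], world[1] - gaze_after[1])

# ... yet combining each retinal location with gaze recovers the same
# stable world-centered location.
assert to_spatiotopic(retinal_before, gaze_before) == world
assert to_spatiotopic(retinal_after, gaze_after) == world
```

The puzzle, of course, is that the brain must accomplish this remapping without an explicit readout of absolute gaze position, which is where relational cues such as nontarget landmarks come in.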

Covert attention across saccades compared to during sustained fixations

We can direct our visual spatial attention by covertly attending to relevant locations, by moving our eyes, or by doing both simultaneously. Attention shifts have been found to share underlying brain mechanisms with saccades. An interesting question, then, is how maintaining covert attention across saccades (in retinotopic versus spatiotopic coordinates) compares, in its neural representations, with shifting versus holding covert attention during sustained fixations.

Decoding 3D spatial locations across saccades

The signal processed on our retina is two-dimensional. But our world has a third dimension, depth, which must also be reconstructed from retinal inputs. How is our perception of depth maintained across saccades? By recording participants' brain activity while they view a stimulus appearing at one of several 3D screen locations, we can examine the brain's representations of 3D spatial locations and investigate whether and how they change across saccades.