
Publication


Featured research published by Yaoda Xu.


Nature | 2006

Dissociable neural mechanisms supporting visual short-term memory for objects.

Yaoda Xu; Marvin M. Chun

Using visual information to guide behaviour requires storage in a temporary buffer, known as visual short-term memory (VSTM), that sustains attended information across saccades and other visual interruptions. There is growing debate on whether VSTM capacity is limited to a fixed number of objects or whether it is variable. Here we report four experiments using functional magnetic resonance imaging that resolve this controversy by dissociating the representation capacities of the parietal and occipital cortices. Whereas representations in the inferior intra-parietal sulcus (IPS) are fixed to about four objects at different spatial locations regardless of object complexity, those in the superior IPS and the lateral occipital complex are variable, tracking the number of objects held in VSTM, and representing fewer than four objects as their complexity increases. These neural response patterns were observed during both VSTM encoding and maintenance. Thus, multiple systems act together to support VSTM: whereas the inferior IPS maintains spatial attention over a fixed number of objects at different spatial locations, the superior IPS and the lateral occipital complex encode and maintain a variable subset of the attended objects, depending on their complexity. VSTM capacity is therefore determined both by a fixed number of objects and by object complexity.


Trends in Cognitive Sciences | 2009

Selecting and perceiving multiple visual objects

Yaoda Xu; Marvin M. Chun

To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intraparietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition, such as working memory.


Journal of Experimental Psychology: Human Perception and Performance | 2002

Limitations of object-based feature encoding in visual short-term memory.

Yaoda Xu

The present study investigated object-based feature encoding in visual short-term memory for 2 features within the same dimension that occur on different parts of an object. Using the change-detection paradigm, the experiments examined objects with 2 colors and objects with 2 orientations. Participants found it easier to monitor 1 rather than both features of such objects, even when decision noise was properly controlled for; in other words, no object-based benefit was observed for encoding 2 features of an object drawn from the same dimension. When similar stimuli were used but the 2 features of each object came from different dimensions (color and orientation), an object-based benefit was observed. These results impose a major constraint on object-based feature-encoding theories by showing that only features from different dimensions benefit from object-based encoding.
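As context for the change-detection paradigm mentioned above, the sketch below computes Cowan's K, the capacity estimate conventionally used with this paradigm in the VSTM literature; the function name and the example numbers are illustrative and are not taken from this study.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K: estimated number of items held in visual short-term memory,
    # computed from change-detection performance.
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: 4 items on screen, 85% hits, 15% false alarms -> K = 2.8 items.
print(cowans_k(4, 0.85, 0.15))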


Proceedings of the National Academy of Sciences of the United States of America | 2007

Visual grouping in human parietal cortex

Yaoda Xu; Marvin M. Chun

To efficiently extract visual information from complex visual scenes to guide behavior and thought, visual input needs to be organized into discrete units that can be selectively attended and processed. One important selection unit is the visual object. A crucial factor determining object-based selection is the grouping between visual elements. Although human lesion data have pointed to the importance of the parietal cortex in object-based representations, our understanding of these parietal mechanisms in normal human observers remains largely incomplete. Here we show that grouped shapes elicited lower functional MRI (fMRI) responses than ungrouped shapes in the inferior intraparietal sulcus (IPS), even when grouping was task-irrelevant. This relative ease of representing grouped shapes allowed more shape information to be passed on to later stages of visual processing, such as information storage in the superior IPS, and may explain why grouped visual elements are easier to perceive than ungrouped ones after parietal brain lesions. These results are discussed within a neural object-file framework, which argues for distinctive neural mechanisms supporting object individuation and identification in visual perception.


Attention Perception & Psychophysics | 2002

Encoding color and shape from different parts of an object in visual short-term memory

Yaoda Xu

Can we find an object-based encoding benefit in visual short-term memory (VSTM) when the features to be remembered are from different parts of an object? Using object parts defined by either figure-ground separation or negative minima of curvature, results from five visual change-detection experiments showed that the object-based encoding benefit in VSTM is modulated by how features are assigned to parts of an object: features are best retained when the color and shape features to be remembered belong to the same part of an object. Although less well retained in comparison, features from different parts of an object are still better remembered than features from spatially separated objects. Object-based feature binding therefore exists even when the color and shape features to be remembered are from different parts of an object.


Journal of Cognitive Neuroscience | 2009

Distinctive neural mechanisms supporting visual object individuation and identification

Yaoda Xu

Many everyday activities, such as driving on a busy street, require the encoding of distinctive visual objects from crowded scenes. Given the resource limitations of our visual system, one solution to this challenging task is to first select individual objects from a crowded scene (object individuation) and then encode their details (object identification). Using functional magnetic resonance imaging, two distinctive brain mechanisms were recently identified that support these two stages of visual object processing. While the inferior intraparietal sulcus (IPS) selects a fixed number of about four objects via their spatial locations, the superior IPS and the lateral occipital complex (LOC) encode the features of a subset of the selected objects in great detail (object shapes in this case). Thus, the inferior IPS individuates visual objects from a crowded display, and the superior IPS and higher visual areas participate in subsequent object identification. Consistent with the prediction of this theory, this study shows that, even when only object shape identity and not location is task relevant, object individuation in the inferior IPS treats four identical objects similarly to four objects that are all different, whereas object shape identification in the superior IPS and the LOC treats four identical objects as a single unique object. These results provide independent confirmation of the dissociation between visual object individuation and identification in the brain.


The Journal of Neuroscience | 2007

The Role of the Superior Intraparietal Sulcus in Supporting Visual Short-Term Memory for Multifeature Objects

Yaoda Xu

Everyday objects can vary in a number of feature dimensions, such as color and shape. To identify and recognize a particular object, we often need to encode and store multiple features of an object simultaneously. Previous studies have highlighted the role of the superior intraparietal sulcus (IPS) in storing single object features in visual short-term memory (VSTM), such as color, orientation, shape outline, and shape topology. The role of this brain area in storing multiple features of an object together in VSTM, however, remains mostly unknown. In this study, using an event-related functional magnetic resonance imaging design and an independent region-of-interest-based approach, this study investigated how an object's color and shape may be retained together in the superior IPS during VSTM. Results from four experiments indicate that the superior IPS holds neither integrated whole objects nor the total number of objects (both whole and partial) stored in VSTM. Rather, it represents the total amount of feature information retained in VSTM. The ability to accumulate information acquired from different visual feature dimensions suggests that the superior IPS may be a flexible information storage device, consistent with the involvement of the parietal cortex in a variety of other cognitive tasks. These results also bring new understanding to the object benefit reported in behavioral VSTM studies and provide new insights into solving the binding problem in the brain.


The Journal of Neuroscience | 2007

Dissociating Task Performance from fMRI Repetition Attenuation in Ventral Visual Cortex

Yaoda Xu; Nicholas B. Turk-Browne; Marvin M. Chun

Repeated visual stimuli elicit reduced neural responses compared with novel stimuli in various brain regions (repetition attenuation). This effect has become a powerful tool in fMRI research, allowing researchers to investigate the stimulus-specific neuronal representations underlying perception and cognition. Repetition attenuation is also commonly associated with behavioral priming, whereby response accuracy and speed increase with repetition. This raises the possibility that repetition attenuation merely reflects decreased processing time. Here, we report a full dissociation between repetition attenuation and behavioral performance by varying the task performed on identical visual stimuli. In the scene task, observers judged whether two photographs came from the same scene, and in the image task, they judged whether the two photographs were identical pixel for pixel. The two tasks produced opposite patterns of behavioral performance: in the scene task, responses were faster and more accurate when the photographs were very similar, whereas, in the image task, responses were faster and more accurate when the photographs were less similar. However, in the parahippocampal place area (PPA), a scene-selective region of ventral cortex, identical repetition attenuation was observed in both tasks: lower neural responses for the very similar pairs relative to the less similar pairs. Whereas the PPA was impervious to task modulation, responses from two frontal regions mirrored behavioral performance, consistent with their role in decision-making. Thus, although repetition attenuation and performance are often correlated, they can be dissociated, suggesting that attenuation in ventral visual areas reflects stimulus-specific processing independent of task demands.


Nature Neuroscience | 2016

Decoding the content of visual short-term memory under distraction in occipital and parietal areas

Katherine C Bettencourt; Yaoda Xu

Recent studies have provided conflicting accounts regarding where in the human brain visual short-term memory (VSTM) content is stored, with strong univariate fMRI responses being reported in superior intraparietal sulcus (IPS), but robust multivariate decoding being reported in occipital cortex. Given the continuous influx of information in everyday vision, VSTM storage under distraction is often required. We found that neither distractor presence nor predictability during the memory delay affected behavioral performance. Similarly, superior IPS exhibited consistent decoding of VSTM content across all distractor manipulations and had multivariate responses that closely tracked behavioral VSTM performance. However, occipital decoding of VSTM content was substantially modulated by distractor presence and predictability. Furthermore, we found no effect of target–distractor similarity on VSTM behavioral performance, further challenging the role of sensory regions in VSTM storage. Overall, consistent with previous univariate findings, our results indicate that superior IPS, but not occipital cortex, has a central role in VSTM storage.
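To make concrete what multivariate decoding of VSTM content from fMRI responses involves, here is a minimal sketch of a region-of-interest pattern-classification analysis on simulated data; the classifier, cross-validation scheme, and array shapes are assumptions for illustration and are not the pipeline reported in the paper.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Simulated data: one voxel-pattern row per trial from a region of interest,
# with a label giving the remembered stimulus on that trial (two classes here).
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)

# Cross-validated decoding accuracy; chance level is 0.5 for two classes.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print("mean decoding accuracy:", scores.mean())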


The Journal of Neuroscience | 2010

The Neural Fate of Task-Irrelevant Features in Object-Based Processing

Yaoda Xu

The object is one of the most fundamental units of visual attentional selection and information processing. Studies have shown that, during object-based processing, all features of an attended object may be encoded together, even when these features are task-irrelevant. Some recent studies, however, have failed to find this effect. What determines when object-based processing may or may not occur? In three experiments, observers were asked to encode object colors, and the processing of task-irrelevant object shapes was evaluated by measuring functional magnetic resonance imaging responses from a brain area involved in shape representation. Whereas object-based task-irrelevant shape processing was present at low color-encoding load, it was attenuated or even suppressed at high color-encoding load. Moreover, such object-based processing was short-lived and was not sustained over a long delay period. Object-based processing of task-irrelevant features of attended objects thus does exist, as reported previously, but it is transient and its magnitude is determined by the encoding demand of the task-relevant feature.

Collaboration


Dive into Yaoda Xu's collaborations.

Top Co-Authors

Jia Liu (Beijing Normal University)
