Publications


Featured research published by Yuhong V. Jiang.


Cognitive Psychology | 1998

Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention

Marvin M. Chun; Yuhong V. Jiang

Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
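The design described above can be sketched in a few lines of code. The following Python outline is an illustration only, not the authors' stimulus code: the display grid, set size, and trial counts are assumptions, chosen merely to show how repeated ("old") layouts with fixed target locations are interleaved with freshly generated ("new") layouts.

import random

GRID = [(x, y) for x in range(8) for y in range(6)]   # assumed display grid

def make_display(n_items=12):
    # One search display: a target location plus distractor locations.
    locs = random.sample(GRID, n_items)
    return {"target": locs[0], "distractors": locs[1:]}

# "Old" displays are generated once and repeated across blocks, with the
# target always appearing at the same location within each layout.
old_displays = [make_display() for _ in range(12)]

def make_block():
    # "New" displays are regenerated every block; half of each block's
    # trials reuse the old layouts.
    trials = [("old", d) for d in old_displays]
    trials += [("new", make_display()) for _ in range(12)]
    random.shuffle(trials)
    return trials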


Journal of Experimental Psychology: Learning, Memory and Cognition | 2000

Organization of visual short-term memory.

Yuhong V. Jiang; Ingrid R. Olson; Marvin M. Chun

The authors examined the organization of visual short-term memory (VSTM). Using a change-detection task, they reported that VSTM stores relational information between individual items. This relational processing is mediated by the organization of items into spatial configurations. The spatial configuration of visual objects is important for VSTM of spatial locations, colors, and shapes. When color VSTM is compared with location VSTM, spatial configuration plays an integral role because configuration is important for color VSTM, whereas color is not important for location VSTM. The authors also examined the role of attention and found that the formation of configuration is modulated by both top-down and bottom-up attentional factors. In summary, the authors proposed that VSTM stores the relational information of individual visual items on the basis of global spatial configuration.


Psychological Science | 1999

Top-Down Attentional Guidance Based on Implicit Learning of Visual Covariation

Marvin M. Chun; Yuhong V. Jiang

The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2003

Implicit, long-term spatial contextual memory.

Marvin M. Chun; Yuhong V. Jiang

Learning and memory of novel spatial configurations aids behaviors such as visual search through an implicit process called contextual cuing (M. M. Chun & Y. Jiang, 1998). The present study provides rigorous tests of the implicit nature of contextual cuing. Experiment 1 used a recognition test that closely matched the learning task, confirming that memory traces of predictive spatial context were not accessible to conscious retrieval. Experiment 2 gave explicit instructions to encode visual context during learning, but learning was not improved and conscious memory remained undetectable. Experiment 3 illustrates that memory traces for spatial context may persist for at least 1 week, suggesting a long-term component of contextual cuing. These experiments indicate that the learning and memory of spatial context in the contextual cuing task are indeed implicit. The results have implications for understanding the neural substrate of spatial contextual learning, which may depend on an intact medial temporal lobe system that includes the hippocampus (M. M. Chun & E. A. Phelps, 1999).


Quarterly Journal of Experimental Psychology | 2001

Selective attention modulates implicit learning

Yuhong V. Jiang; Marvin M. Chun

The effect of selective attention on implicit learning was tested in four experiments using the “contextual cueing” paradigm (Chun & Jiang, 1998, 1999). Observers performed visual search through items presented in an attended colour (e.g., red) and an ignored colour (e.g., green). When the spatial configuration of items in the attended colour was invariant and was consistently paired with a target location, visual search was facilitated, showing contextual cueing (Experiments 1, 3, and 4). In contrast, repeating and pairing the configuration of the ignored items with the target location resulted in no contextual cueing (Experiments 2 and 4). We conclude that implicit learning is robust only when relevant, predictive information is selectively attended.


Psychonomic Bulletin & Review | 2005

Visual Working Memory for Simple and Complex Visual Stimuli

Hing Yee Eng; Yuhong V. Jiang

Does the magical number four characterize our visual working memory (VWM) capacity for all kinds of objects, or is the capacity of VWM inversely related to the perceptual complexity of those objects? To find out how perceptual complexity affects VWM, we used a change detection task to measure VWM capacity for six types of stimuli of different complexity: colors, letters, polygons, squiggles, cubes, and faces. We found that the estimated capacity decreased for more complex stimuli, suggesting that perceptual complexity was an important factor in determining VWM capacity. However, the considerable correlation between perceptual complexity and VWM capacity declined significantly if subjects were allowed to view the sample memory display longer. We conclude that when encoding limitations are minimized, perceptual complexity affects, but does not determine, VWM capacity.
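Capacity estimates in change detection studies of this kind are typically derived from hit and false-alarm rates; a common convention is Cowan's K. The abstract does not state which formula the authors used, so the sketch below is an assumption for illustration only, with hypothetical numbers.

def cowan_k(set_size, hit_rate, false_alarm_rate):
    # Estimated number of items held in visual working memory.
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: set size 6, 80% hits, 20% false alarms -> K = 3.6 items
print(cowan_k(6, 0.80, 0.20))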


Journal of Experimental Psychology: Learning, Memory and Cognition | 2008

Orienting Attention in Visual Working Memory Reduces Interference From Memory Probes

Tal Makovski; Rachel S. Sussman; Yuhong V. Jiang

Given a changing visual environment, and the limited capacity of visual working memory (VWM), the contents of VWM must be in constant flux. Using a change detection task, the authors show that VWM is subject to obligatory updating in the face of new information. Change detection performance is enhanced when the item that may change is retrospectively cued 1 s after memory encoding and 0.5 s before testing. The retro-cue benefit cannot be explained by memory decay or by a reduction in interference from other items held in VWM. Rather, orienting attention to a single memory item makes VWM more resistant to interference from the test probe. The authors conclude that the content of VWM is volatile unless it receives focused attention, and that the standard change detection task underestimates VWM capacity.


Perception & Psychophysics | 2002

Is visual short-term memory object based? Rejection of the "strong-object" hypothesis

Ingrid R. Olson; Yuhong V. Jiang

Is the capacity of visual short-term memory (VSTM) limited by the number of objects or by the number of features? VSTM for objects with either one feature or two color features was tested. Results show that capacity was limited primarily by the number of colors to be memorized, not by the number of objects. This result held up with variations in color saturation, blocked or mixed conditions, duration of memory image, and absence or presence of verbal load. However, conjoining features into objects improved VSTM capacity when size-orientation and color-orientation conjunctions were tested. Nevertheless, the number of features still mattered. When feature heterogeneity was controlled, VSTM for conjoined objects was worse than VSTM for objects made of single features. Our results support a weak-object hypothesis of VSTM capacity that suggests that VSTM is limited by both the number of objects and the feature composition of those objects.


Journal of Vision | 2005

Setting up the target template in visual search.

Timothy Vickery; Li-Wei King; Yuhong V. Jiang

Top-down knowledge about the target is essential in visual search. It biases visual attention to information that matches the target-defining criteria. Extensive past research has examined visual search when the target is defined by fixed criteria throughout the experiment, with few studies investigating how subjects set up the target template. To address this issue, we conducted five experiments using random polygons and real-world objects, allowing the target criteria to change from trial to trial. On each trial, subjects first see a cue informing them about the target, followed 200-1000 ms later by the search array. We find that when the cue matches the target exactly, search speed increases and the slope of the response time-set size function decreases. Deviations from the exact match in size or orientation slow down search, although they still produce faster search than a neutral cue or a semantic cue. We conclude that the template set-up process uses detailed visual information, rather than schematic or semantic information, to find the target.
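The "slope of the response time-set size function" mentioned above is the increase in mean response time per added display item, usually obtained from a linear fit. The numbers below are hypothetical and serve only to illustrate the computation.

import numpy as np

set_sizes = np.array([4, 8, 12])
mean_rt_ms = np.array([620.0, 700.0, 780.0])   # hypothetical mean response times

# Least-squares line: slope in ms per item, intercept in ms.
slope, intercept = np.polyfit(set_sizes, mean_rt_ms, 1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")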


Social Neuroscience | 2006

Reading minds versus following rules: Dissociating theory of mind and executive control in the brain

Rebecca Saxe; Laura Schulz; Yuhong V. Jiang

The false belief task commonly used in the study of theory of mind (ToM) requires participants to select among competing responses and inhibit prepotent responses, giving rise to three possibilities: (1) the false belief tasks might require only executive function abilities and there may be no domain-specific component; (2) executive control might be necessary for the emergence of ToM in development but play no role in adult mental state inferences; and (3) executive control and domain-specific ToM abilities might both be implicated. We used fMRI in healthy adults to dissociate these possibilities. We found that non-overlapping brain regions were implicated selectively in response selection and belief attribution, that belief attribution tasks recruit brain regions associated with response selection as much as well-matched control tasks, and that regions associated with ToM (e.g., the right temporo-parietal junction) were implicated only in the belief attribution tasks. These results suggest that both domain-general and domain-specific cognitive resources are involved in adult ToM.

Collaboration


Top co-authors and collaborators of Yuhong V. Jiang.

Tal Makovski, Open University of Israel
Nancy Kanwisher, Massachusetts Institute of Technology
Bo Yeong Won, University of Minnesota