Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Marvin M. Chun is active.

Publication


Featured research published by Marvin M. Chun.


Cognitive Psychology | 1998

Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention

Marvin M. Chun; Yuhong V. Jiang

Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.


Nature | 2006

Dissociable neural mechanisms supporting visual short-term memory for objects.

Yaoda Xu; Marvin M. Chun

Using visual information to guide behaviour requires storage in a temporary buffer, known as visual short-term memory (VSTM), that sustains attended information across saccades and other visual interruptions. There is growing debate on whether VSTM capacity is limited to a fixed number of objects or whether it is variable. Here we report four experiments using functional magnetic resonance imaging that resolve this controversy by dissociating the representation capacities of the parietal and occipital cortices. Whereas representations in the inferior intra-parietal sulcus (IPS) are fixed to about four objects at different spatial locations regardless of object complexity, those in the superior IPS and the lateral occipital complex are variable, tracking the number of objects held in VSTM, and representing fewer than four objects as their complexity increases. These neural response patterns were observed during both VSTM encoding and maintenance. Thus, multiple systems act together to support VSTM: whereas the inferior IPS maintains spatial attention over a fixed number of objects at different spatial locations, the superior IPS and the lateral occipital complex encode and maintain a variable subset of the attended objects, depending on their complexity. VSTM capacity is therefore determined both by a fixed number of objects and by object complexity.


Trends in Cognitive Sciences | 2000

Contextual cueing of visual attention

Marvin M. Chun

Visual context information constrains what to expect and where to look, facilitating search for and recognition of objects embedded in complex displays. This article reviews a new paradigm called contextual cueing, which presents well-defined, novel visual contexts and aims to understand how contextual information is learned and how it guides the deployment of visual attention. In addition, the contextual cueing task is well suited to the study of the neural substrate of contextual learning. For example, amnesic patients with hippocampal damage are impaired in their learning of novel contextual information, even though learning in the contextual cueing task does not appear to rely on conscious retrieval of contextual memory traces. We argue that contextual information is important because it embodies invariant properties of the visual environment such as stable spatial layout information as well as object covariation information. Sensitivity to these statistical regularities allows us to interact more effectively with the visual world.


Annual Review of Psychology | 2011

A Taxonomy of External and Internal Attention

Marvin M. Chun; Julie Golomb; Nicholas B. Turk-Browne

Attention is a core property of all perceptual and cognitive operations. Given limited capacity to process competing options, attentional mechanisms select, modulate, and sustain focus on information most relevant for behavior. A significant problem, however, is that attention is so ubiquitous that it is unwieldy to study. We propose a taxonomy based on the types of information that attention operates over--the targets of attention. At the broadest level, the taxonomy distinguishes between external attention and internal attention. External attention refers to the selection and modulation of sensory information. External attention selects locations in space, points in time, or modality-specific input. Such perceptual attention can also select features defined across any of these dimensions, or object representations that integrate over space, time, and modality. Internal attention refers to the selection, modulation, and maintenance of internally generated information, such as task rules, responses, long-term memory, or working memory. Working memory, in particular, lies closest to the intersection between external and internal attention. The taxonomy provides an organizing framework that recasts classic debates, raises new issues, and frames understanding of neural mechanisms.


Nature Neuroscience | 1999

Memory deficits for implicit contextual information in amnesic subjects with hippocampal damage

Marvin M. Chun; Elizabeth A. Phelps

The role of the hippocampus and adjacent medial temporal lobe structures in memory systems has long been debated. Here we show in humans that these neural structures are important for encoding implicit contextual information from the environment. We used a contextual cuing task in which repeated visual context facilitates visual search for embedded target objects. An important feature of our task is that memory traces for contextual information were not accessible to conscious awareness, and hence could be classified as implicit. Amnesic patients with medial temporal system damage showed normal implicit perceptual/skill learning but were impaired on implicit contextual learning. Our results demonstrate that the human medial temporal memory system is important for learning contextual information, which requires the binding of multiple cues.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2000

Organization of visual short-term memory.

Yuhong V. Jiang; Ingrid R. Olson; Marvin M. Chun

The authors examined the organization of visual short-term memory (VSTM). Using a change-detection task, they reported that VSTM stores relational information between individual items. This relational processing is mediated by the organization of items into spatial configurations. The spatial configuration of visual objects is important for VSTM of spatial locations, colors, and shapes. When color VSTM is compared with location VSTM, spatial configuration plays an integral role because configuration is important for color VSTM, whereas color is not important for location VSTM. The authors also examined the role of attention and found that the formation of configuration is modulated by both top-down and bottom-up attentional factors. In summary, the authors proposed that VSTM stores the relational information of individual visual items on the basis of global spatial configuration.


Psychological Science | 1999

Top-Down Attentional Guidance Based on Implicit Learning of Visual Covariation

Marvin M. Chun; Yuhong V. Jiang

The visual environment is extremely rich and complex, producing information overload for the visual system. But the environment also embodies structure in the form of redundancies and regularities that may serve to reduce complexity. How do perceivers internalize this complex informational structure? We present new evidence of visual learning that illustrates how observers learn how objects and events covary in the visual world. This information serves to guide visual processes such as object recognition and search. Our first experiment demonstrates that search and object recognition are facilitated by learned associations (covariation) between novel visual shapes. Our second experiment shows that regularities in dynamic visual environments can also be learned to guide search behavior. In both experiments, learning occurred incidentally and the memory representations were implicit. These experiments show how top-down visual knowledge, acquired through implicit learning, constrains what to expect and guides where to attend and look.


Journal of Experimental Psychology: Learning, Memory and Cognition | 2003

Implicit, long-term spatial contextual memory.

Marvin M. Chun; Yuhong V. Jiang

Learning and memory of novel spatial configurations aid behaviors such as visual search through an implicit process called contextual cuing (M. M. Chun & Y. Jiang, 1998). The present study provides rigorous tests of the implicit nature of contextual cuing. Experiment 1 used a recognition test that closely matched the learning task, confirming that memory traces of predictive spatial context were not accessible to conscious retrieval. Experiment 2 gave explicit instructions to encode visual context during learning, but learning was not improved and conscious memory remained undetectable. Experiment 3 illustrates that memory traces for spatial context may persist for at least 1 week, suggesting a long-term component of contextual cuing. These experiments indicate that the learning and memory of spatial context in the contextual cuing task are indeed implicit. The results have implications for understanding the neural substrate of spatial contextual learning, which may depend on an intact medial temporal lobe system that includes the hippocampus (M. M. Chun & E. A. Phelps, 1999).


Journal of Experimental Psychology: Learning, Memory and Cognition | 1998

Two attentional deficits in serial target search: the visual attentional blink and an amodal task-switch deficit.

Mary C. Potter; Marvin M. Chun; Bradley S. Banks; Margaret Muckenhoupt

When monitoring a rapid serial visual presentation at 100 ms per item for 2 targets among distractors, viewers have difficulty reporting the 2nd target (T2) when it appears 200-500 ms after the onset of the 1st letter target (T1): an attentional blink (AB; M. M. Chun & M. C. Potter, 1995b; J. E. Raymond, K. L. Shapiro, & K. M. Arnell, 1992). Does the same deficit occur with auditory search? The authors compared search for auditory, visual, and cross-modal targets in 2 tasks: (a) identifying 2 target letters among digits (Experiments 1-3 and 5) or digits among letters (Experiment 6), and (b) identifying 1 digit among letters and deciding whether an X occurred among the subsequent letters (Experiment 4). In the experiments using the 1st task, the standard AB was found only when both targets were visual. In the 2nd task, with a change in selective set from T1 to T2, a task-switching deficit was obtained regardless of target modality.


Psychonomic Bulletin & Review | 2005

Attentional rubbernecking: cognitive control and personality in emotion-induced blindness.

Steven B. Most; Marvin M. Chun; David Widders; David H. Zald

Emotional stimuli often attract attention, but at what cost to the processing of other stimuli? Given the potential costs, to what degree can people override emotion-based attentional biases? In Experiment 1, participants searched for a single target within a rapid serial visual presentation of pictures; an irrelevant, emotionally negative or neutral picture preceded the target by either two or eight items. At the shorter lag, negative pictures spontaneously induced greater deficits in target processing than neutral pictures did. Thus, attentional biases to emotional information induced a temporary inability to process stimuli that people actively sought. Experiment 2 revealed that participants could reduce this effect through attentional strategy, but that the extent of this reduction was related to their level of the personality trait harm avoidance. Participants lower in harm avoidance were able to reduce emotion-induced blindness under conditions designed to facilitate the ignoring of the emotional stimuli. Those higher in harm avoidance were unable to do so.

Collaboration


Marvin M. Chun's collaborations.
