
Publications


Featured research published by Melina A. Kunar.


Journal of Experimental Psychology: Human Perception and Performance | 2007

Does contextual cuing guide the deployment of attention?

Melina A. Kunar; Stephen J. Flusberg; Todd S. Horowitz; Jeremy M. Wolfe

Contextual cuing experiments show that when displays are repeated, reaction times to find a target decrease over time even when observers are not aware of the repetition. It has been thought that the context of the display guides attention to the target. The authors tested this hypothesis by comparing the effects of guidance in a standard search task with the effects of contextual cuing. First, in standard search, an improvement in guidance causes search slopes (derived from Reaction Time x Set Size functions) to decrease. In contrast, the authors found that search slopes in contextual cuing did not become more efficient over time (Experiment 1). Second, when guidance was optimal (e.g., in easy feature search), they still found a small but reliable contextual cuing effect (Experiments 2a and 2b), suggesting that other factors, such as response selection, contribute to the effect. Experiment 3 supported this hypothesis by showing that the contextual cuing effect disappeared when the authors added interference to the response selection process. Overall, the data suggest that the relationship between guidance and contextual cuing is weak and that response selection can account for part of the effect.


Perspectives on Psychological Science | 2014

Registered Replication Report: Schooler & Engstler-Schooler (1990)

V. K. Alogna; M. K. Attaya; Philip Aucoin; Štěpán Bahník; S. Birch; Angela R Birt; Brian H. Bornstein; Samantha Bouwmeester; Maria A. Brandimonte; Charity Brown; K. Buswell; Curt A. Carlson; Maria A. Carlson; S. Chu; A. Cislak; M. Colarusso; Melissa F. Colloff; Kimberly S. Dellapaolera; Jean-François Delvenne; A. Di Domenico; Aaron Drummond; Gerald Echterhoff; John E. Edlund; Casey Eggleston; B. Fairfield; G. Franco; Fiona Gabbert; B. W. Gamblin; Maryanne Garry; R. Gentry

Trying to remember something now typically improves your ability to remember it later. However, after watching a video of a simulated bank robbery, participants who verbally described the robber were 25% worse at identifying the robber in a lineup than were participants who instead listed U.S. states and capitals—this has been termed the “verbal overshadowing” effect (Schooler & Engstler-Schooler, 1990). More recent studies suggested that this effect might be substantially smaller than first reported. Given uncertainty about the effect size, the influence of this finding in the memory literature, and its practical importance for police procedures, we conducted two collections of preregistered direct replications (RRR1 and RRR2) that differed only in the order of the description task and a filler task. In RRR1, when the description task immediately followed the robbery, participants who provided a description were 4% less likely to select the robber than were those in the control condition. In RRR2, when the description was delayed by 20 min, they were 16% less likely to select the robber. These findings reveal a robust verbal overshadowing effect that is strongly influenced by the relative timing of the tasks. The discussion considers further implications of these replications for our understanding of verbal overshadowing.


Psychonomic Bulletin & Review | 2004

The affective consequences of visual attention in preview search

Mark J. Fenske; Jane E. Raymond; Melina A. Kunar

Comparisons of emotional evaluations of abstract stimuli just seen in a two-object visual search task show that prior distractors are devalued, as compared with prior targets or novel items, perhaps as a consequence of persistent attentional inhibition (Raymond, Fenske, & Tavassoli, 2003). To further explore such attention-emotion effects, we measured search response time in a preview search task and emotional evaluations of colorful, complex images just seen therein. On preview trials, the distractors appeared 1,000 msec before the remaining items. On no-preview trials, all the items were presented simultaneously. A single distractor was then rated for its emotional tone. Previewed distractors were consistently devalued, as compared with nonpreviewed distractors, despite longer exposure and being associated with an easier task. This effect was observed only in the participants demonstrating improved search efficiency with preview, but not in others, indicating that the attentional mechanisms underlying the preview benefit have persistent affective consequences in visual search.


Perception & Psychophysics | 2006

Contextual cuing by global features

Melina A. Kunar; Stephen J. Flusberg; Jeremy M. Wolfe

In visual search tasks, attention can be guided to a target item—appearing amidst distractors—on the basis of simple features (e.g., finding the red letter among green). Chun and Jiang's (1998) contextual cuing effect shows that reaction times (RTs) are also speeded if the spatial configuration of items in a scene is repeated over time. In the present studies, we ask whether global properties of the scene can speed search (e.g., if the display is mostly red, then the target is at location X). In Experiment 1A, the overall background color of the display predicted the target location, and the predictive color could appear 0, 400, or 800 msec in advance of the search array. Mean RTs were faster in predictive than in nonpredictive conditions. However, there was little improvement in search slopes. The global color cue did not improve search efficiency. Experiments 1B-1F replicated this effect using different predictive properties (e.g., background orientation-texture and stimulus color). The results showed a strong RT effect of predictive background, but (at best) only a weak improvement in search efficiency. A strong improvement in efficiency was found, however, when the informative background was presented 1,500 msec prior to the onset of the search stimuli and when observers were given explicit instructions to use the cue (Experiment 2).


Visual Cognition | 2008

Time to Guide: Evidence for Delayed Attentional Guidance in Contextual Cueing

Melina A. Kunar; Stephen J. Flusberg; Jeremy M. Wolfe

Contextual cueing experiments show that, when displays are repeated, reaction times (RTs) to find a target decrease over time even when the observers are not aware of the repetition. Recent evidence suggests that this benefit in standard contextual cueing tasks is not likely to be due to an improvement in attentional guidance (Kunar, Flusberg, Horowitz, & Wolfe, 2007). Nevertheless, we ask whether guidance can help participants find the target in a repeated display, if they are given sufficient time to encode the display. In Experiment 1 we increased the display complexity so that it took participants longer to find the target. Here we found a larger effect of guidance than in a condition with shorter RTs. Experiment 2 gave participants prior exposure to the display context. The data again showed that with more time participants could implement guidance to help find the target, provided that there was something in the search stimuli locations to guide attention to. The data suggest that, although the benefit in a standard contextual cueing task is unlikely to be a result of guidance, guidance can play a role if it is given time to develop.


Perception & Psychophysics | 2008

The Role of Memory and Restricted Context in Repeated Visual Search

Melina A. Kunar; Stephen J. Flusberg; Jeremy M. Wolfe

Previous studies have shown that the efficiency of visual search does not improve when participants search through the same unchanging display for hundreds of trials (repeated search), even though the participants have a clear memory of the search display. In this article, we ask two important questions. First, why do participants not use memory to help search the repeated display? Second, can context be introduced so that participants are able to guide their attention to the relevant repeated items? Experiments 1–4 show that participants choose not to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient. However, when the visual search task is given context, so that only a subset of the items are ever pertinent, participants can learn to restrict their attention to the relevant stimuli (Experiments 5 and 6).


Psychonomic Bulletin & Review | 2008

Telephone conversation impairs sustained visual attention via a central bottleneck

Melina A. Kunar; Randall Carter; Michael A. Cohen; Todd S. Horowitz

Recent research has shown that holding telephone conversations disrupts one’s driving ability. We asked whether this effect could be attributed to a visual attention impairment. In Experiment 1, participants conversed on a telephone or listened to a narrative while engaged in multiple object tracking (MOT), a task requiring sustained visual attention. We found that MOT was disrupted in the telephone conversation condition, relative to single-task MOT performance, but that listening to a narrative had no effect. In Experiment 2, we asked which component of conversation might be interfering with MOT performance. We replicated the conversation and single-task conditions of Experiment 1 and added two conditions in which participants heard a sequence of words over a telephone. In the shadowing condition, participants simply repeated each word in the sequence. In the generation condition, participants were asked to generate a new word based on each word in the sequence. Word generation interfered with MOT performance, but shadowing did not. The data indicate that telephone conversation disrupts attention at a central stage, the act of generating verbal stimuli, rather than at a peripheral stage, such as listening or speaking.


Perception & Psychophysics | 2003

What is "marked" in visual marking? Evidence for effects of configuration in preview search

Melina A. Kunar; Glyn W. Humphreys; Kelly Smith; Johan Hulleman

Visual search for a conjunction target is facilitated when distractor sets are segmented over time: the preview benefit. Watson and Humphreys (1997) suggested that this benefit involved inhibition of old items (visual marking, VM). We investigated whether the preview benefit is sensitive to the configuration of the old distractors. Old distractors changed their location prior to the occurrence of the new items, while also either changing or maintaining their configuration. Configuration changes disrupted search. The results are consistent with object-based VM, which is sensitive to the configuration of old stimuli.


Psychological Science | 2003

History Matters: The Preview Benefit in Search Is Not Onset Capture

Melina A. Kunar; Glyn W. Humphreys; Kelly Smith

Visual search for a conjunction target is made easier when distractor items are segregated over time into two separate old and new groups (the new group containing the target item). The benefit of presenting half the distractors first is known as the preview effect. Recently, some researchers have argued that the preview effect occurs because new stimuli capture attention. This account was tested in the present study by using a novel “top-up” condition that exploits the fact that when previews appear only briefly before the search display, there is minimal preview benefit. We show that effects of a brief preview can be “topped up” by an earlier exposure of the same items, even when the preview disappears between its first and second presentations. This top-up effect demonstrates that the history of the old stimuli is important for the preview benefit, contrary to the account favoring onset capture. We discuss alternative accounts of how the preview benefit arises.


Journal of Experimental Psychology: Human Perception and Performance | 2003

Visual Change With Moving Displays: More Evidence for Color Feature Map Inhibition During Preview Search

Melina A. Kunar; Glyn W. Humphreys; Kelly Smith

Preview search with moving stimuli was investigated. The stimuli moved in multiple directions, and preview items could change either their color or their shape before onset of the new (search) displays. In Experiments 1 and 2, the authors found that (a) a preview benefit occurred even when more than 5 moving items had to be ignored, and (b) color change, but not shape change, disrupted preview search in moving stimuli. In contrast, shape change, but not color change, disrupted preview search in static stimuli (Experiments 3 and 4). Results suggest that preview search with moving displays is influenced by inhibition of a color map, whereas preview search with static displays is influenced by inhibition of locations of old distractors.

Collaboration


Melina A. Kunar's top co-authors and their affiliations.

Top Co-Authors

Jeremy M. Wolfe, Brigham and Women's Hospital

Stephen J. Flusberg, Brigham and Women's Hospital

Todd S. Horowitz, Brigham and Women's Hospital

Kelly Smith, University of Birmingham

Barbara Hidalgo-Sotelo, Massachusetts Institute of Technology

M. Van Wert, Brigham and Women's Hospital