Publications


Featured research published by Steven Franconeri.


Attention, Perception, & Psychophysics | 2003

Moving and looming stimuli capture attention

Steven Franconeri; Daniel J. Simons

Attention capture is often operationally defined as speeded search performance when an otherwise nonpredictive stimulus happens to be the target of a visual search. That is, if a stimulus captures attention, it should be searched with priority even when it is irrelevant to the task. Given this definition, only the abrupt appearance of a new object (see, e.g., Jonides & Yantis, 1988) and one type of luminance contrast change (Enns, Austen, Di Lollo, Rauschenberger, & Yantis, 2001) have been shown to strongly capture attention. We show that translating and looming stimuli also capture attention. This phenomenon does not occur for all dynamic events: We also show that receding stimuli do not attract attention. Although the sorts of dynamic events that capture attention do not fit neatly into a single category, we speculate that stimuli that signal potentially behaviorally urgent events are more likely to receive attentional priority.


Journal of Vision | 2007

How many objects can you track? Evidence for a resource-limited attentive tracking mechanism

George A. Alvarez; Steven Franconeri

Much of our interaction with the visual world requires us to isolate some currently important objects from other less important objects. This task becomes more difficult when objects move, or when our field of view moves relative to the world, requiring us to track these objects over space and time. Previous experiments have shown that observers can track a maximum of about 4 moving objects. A natural explanation for this capacity limit is that the visual system is architecturally limited to handling a fixed number of objects at once, a so-called ‘magical number 4’ of visual attention. In contrast to this view, Experiment 1 shows that tracking capacity is not fixed. At slow speeds it is possible to track up to 8 objects, and yet there are fast speeds at which only a single object can be tracked. Experiment 2 suggests that the limit on tracking is related to the spatial resolution of attention. These findings suggest that the number of objects that can be tracked is primarily set by a flexibly allocated resource, which has important implications for the mechanisms of object tracking and for the relationship between object tracking and other cognitive processes.
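
To make this speed-number trade-off concrete, here is a minimal Python sketch of the flexible-resource idea. The linear form, the constants, and the function name are illustrative assumptions, not the authors' quantitative model.

TOTAL_RESOURCE = 1.0   # arbitrary units of attentional resource
SPEED_PER_UNIT = 16.0  # hypothetical constant: max speed supported by the full pool

def max_trackable_speed(num_targets: int) -> float:
    """Maximum per-object speed (arbitrary units) when the pool is split among num_targets."""
    return SPEED_PER_UNIT * (TOTAL_RESOURCE / num_targets)

for n in (1, 2, 4, 8):
    print(f"{n} targets: max trackable speed ~ {max_trackable_speed(n):.1f}")

# One target can be followed at high speed; eight targets only at slow speeds.
# On this view the apparent capacity of about 4 is not an architectural limit
# but falls out of how a shared resource is divided among targets.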


Psychological Science | 2005

Do New Objects Capture Attention?

Steven Franconeri; Andrew Hollingworth; Daniel J. Simons

The visual system relies on several heuristics to direct attention to important locations and objects. One of these mechanisms directs attention to sudden changes in the environment. Although a substantial body of research suggests that this capture of attention occurs only for the abrupt appearance of a new perceptual object, more recent evidence shows that some luminance-based transients (e.g., motion and looming) and some types of brightness change also capture attention. These findings show that new objects are not necessary for attention capture. The present study tested whether they are even sufficient. That is, does a new object attract attention because the visual system is sensitive to new objects or because it is sensitive to the transients that new objects create? In two experiments using a visual search task, new objects did not capture attention unless they created a strong local luminance transient.


Perception | 2000

Change Blindness in the Absence of a Visual Disruption

Daniel J. Simons; Steven Franconeri; Rebecca L. Reimer

Findings from studies of visual memory and change detection have revealed a surprising inability to detect large changes to scenes from one view to the next (‘change blindness’). When some form of disruption is introduced between an original and modified display, observers often fail to notice the change. This disruption can take many forms (e.g., an eye movement, a flashed blank screen, a blink, or a cut in a motion picture), with similar results. In all cases, the changes are sufficiently large that, were they to occur instantaneously, they would consistently be detected. Prior research on change blindness was predicated on the assumption that, in the absence of a visual disruption, the signal caused by the change would draw attention, leading to detection. In two experiments, we demonstrate that change blindness can occur even in the absence of a visual disruption. In one experiment, subjects actually detected more changes with a disruption than without one. When changes are sufficiently gradual, the visible change signal does not seem to draw attention, and large changes can go undetected. The findings are discussed in the context of metacognitive beliefs about change detection and the strategic decisions those beliefs entail.


Psychological Science | 2010

Tracking Multiple Objects Is Limited Only by Object Spacing, Not by Speed, Time, or Capacity

Steven Franconeri; Sumeeth Jonathan; Jason M. Scimeca

In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors—the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.
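
The spacing-only argument can be illustrated with a small Python sketch, under the assumption (not the authors' model, and with hypothetical parameter values) that errors can arise only when a distractor passes within a critical distance of a target. Traversing the same paths at four times the speed for a quarter of the time visits the same spatial configurations, so the rate of close passes is unchanged.

import math

CRITICAL_DISTANCE = 2.0  # hypothetical crowding radius, arbitrary units

def circular_paths(n_samples):
    """One target and one counter-rotating distractor on a radius-5 circle."""
    angles = [i * 2 * math.pi / n_samples for i in range(n_samples)]
    target = [(5 * math.cos(a), 5 * math.sin(a)) for a in angles]
    distractor = [(5 * math.cos(-a), 5 * math.sin(-a)) for a in angles]
    return target, distractor

def crowding_rate(target_path, distractor_path):
    """Proportion of sampled moments at which the distractor crowds the target."""
    close = sum(math.dist(t, d) < CRITICAL_DISTANCE
                for t, d in zip(target_path, distractor_path))
    return close / len(target_path)

# Slow condition: the paths are traversed over 10 s at 100 samples per second.
slow = crowding_rate(*circular_paths(1000))
# Fast condition: the same paths traversed in 2.5 s at the same sample rate.
fast = crowding_rate(*circular_paths(250))

print(slow, fast)  # nearly identical: close passes depend on the paths, not the speed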


Journal of Experimental Psychology: Human Perception and Performance | 2007

How many locations can be selected at once?

Steven Franconeri; George A. Alvarez; James T. Enns

The visual system uses several tools to select only the most relevant visual information for further processing, including selection by location. In the present study, the authors explored how many locations can be selected at once. Although past evidence from several visual tasks suggests that the visual system can operate on a fixed number of 4 objects or locations at once, the authors found that this capacity varies widely in response to the precision of selection required by the task. When the authors required precise selection regions, only 2-3 locations could be selected. But when the selection regions could be coarser, up to 6-7 locations could be selected. The authors discuss potential mechanisms underlying the selection of multiple locations and review the evidence for fixed limits in visual attention.


Psychonomic Bulletin & Review | 2008

Evidence against a speed limit in multiple-object tracking

Steven Franconeri; Jeffrey Y. Lin; Zenon W. Pylyshyn; Brian D. Fisher; James T. Enns

Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy in the large-display condition was just as good as in the small-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can track a single moving object, once the effect of object crowding is eliminated.


Emotion | 2011

Look before you regulate: differential perceptual strategies underlying expressive suppression and cognitive reappraisal.

Genna Bebko; Steven Franconeri; Kevin N. Ochsner; Joan Y. Chiao

Successful emotion regulation is important for maintaining psychological well-being. Although it is known that emotion regulation strategies, such as cognitive reappraisal and expressive suppression, may have divergent consequences for emotional responses, the cognitive processes underlying these differences remain unclear. Here we used eye-tracking to investigate the role of attentional deployment in emotion regulation success. We hypothesized that differences in the deployment of attention to emotional areas of complex visual scenes may be a contributing factor to the differential effects of these two strategies on emotional experience. Eye movements, pupil size, and self-reported negative emotional experience were measured while healthy young adult participants viewed negative IAPS images and regulated their emotional responses using either cognitive reappraisal or expressive suppression. Consistent with prior work, reappraisers reported feeling significantly less negative than suppressers when regulating emotion as compared to a baseline condition. Across both groups, participants looked away from emotional areas during emotion regulation, an effect that was more pronounced for suppressers. Critically, irrespective of emotion regulation strategy, participants who looked toward emotional areas of a complex visual scene were more likely to experience emotion regulation success. Taken together, these results demonstrate that attentional deployment varies across emotion regulation strategies and that successful emotion regulation depends on the extent to which people look toward emotional content in complex visual scenes.


Journal of Vision | 2014

Eye movements during emotion recognition in faces.

Mark W. Schurgin; J. Nelson; S. Iida; H. Ohira; Joan Y. Chiao; Steven Franconeri

When distinguishing whether a face displays a certain emotion, some regions of the face may contain more useful information than others. Here we ask whether people differentially attend to distinct regions of a face when judging different emotions. Experiment 1 measured eye movements while participants discriminated between emotional (joy, anger, fear, sadness, shame, and disgust) and neutral facial expressions. Participant eye movements primarily fell in five distinct regions (eyes, upper nose, lower nose, upper lip, nasion). Distinct fixation patterns emerged for each emotion, such as a focus on the lips for joyful faces and a focus on the eyes for sad faces. These patterns were strongest for emotional faces but were still present when viewers sought evidence of emotion within neutral faces, indicating a goal-driven influence on eye-gaze patterns. Experiment 2 verified that these fixation patterns tended to reflect attention to the most diagnostic regions of the face for each emotion. Eye movements appear to follow both stimulus-driven and goal-driven perceptual strategies when decoding emotional information from a face.
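
As a rough illustration of the kind of region-of-interest analysis this abstract describes, the Python sketch below assigns each fixation to a face region and tallies fixation proportions per emotion. The region boundaries, coordinates, and fixation data are made-up placeholders, not the authors' stimuli, apparatus, or results.

from collections import Counter

# Bounding boxes (x_min, y_min, x_max, y_max) in hypothetical image coordinates.
ROIS = {
    "nasion":     (45, 18, 55, 28),
    "eyes":       (20, 30, 80, 45),
    "upper_nose": (40, 46, 60, 55),
    "lower_nose": (40, 56, 60, 64),
    "upper_lip":  (35, 65, 65, 75),
}

def roi_for(x, y):
    """Return the first region whose box contains the fixation, else 'other'."""
    for name, (x0, y0, x1, y1) in ROIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

# Hypothetical fixations: emotion label -> list of (x, y) fixation coordinates.
fixations = {
    "joy":     [(50, 70), (48, 68), (52, 40)],
    "sadness": [(30, 38), (70, 40), (49, 71)],
}

proportions = {}
for emotion, points in fixations.items():
    counts = Counter(roi_for(x, y) for x, y in points)
    total = sum(counts.values())
    proportions[emotion] = {roi: round(c / total, 2) for roi, c in counts.items()}

print(proportions)  # joy fixations cluster on the lips, sadness fixations on the eyes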


Cognition | 2009

Object correspondence across brief occlusion is established on the basis of both spatiotemporal and surface feature cues.

Andrew Hollingworth; Steven Franconeri

The correspondence problem is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established solely on the basis of an object's spatiotemporal properties and that an object's surface feature properties (such as color or shape) are not consulted in correspondence operations. In five experiments, we tested the relative contributions of spatiotemporal and surface feature properties to establishing object correspondence across brief occlusion. Correspondence operations were strongly influenced both by the consistency of an object's spatiotemporal properties across occlusion and by the consistency of an object's surface feature properties across occlusion. These data argue against the claim that spatiotemporal cues dominate the computation of object correspondence. Instead, the visual system consults multiple sources of relevant information to establish continuity across perceptual disruption.

Collaboration


Dive into Steven Franconeri's collaborations.

Top Co-Authors

Yangqing Xu (Northwestern University)

Dian Yu (Northwestern University)