Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Patrick Cavanagh is active.

Publication


Featured research published by Patrick Cavanagh.


Psychological Science | 2004

The Capacity of Visual Short-Term Memory is Set Both by Visual Information Load and by Number of Objects

George A. Alvarez; Patrick Cavanagh

Previous research has suggested that visual short-term memory has a fixed capacity of about four objects. However, we found that capacity varied substantially across the five stimulus classes we examined, ranging from 1.6 for shaded cubes to 4.4 for colors (estimated using a change detection task). We also estimated the information load per item in each class, using visual search rate. The changes we measured in memory capacity across classes were almost exactly mirrored by changes in the opposite direction in visual search rate (r² = .992 between search rate and the reciprocal of memory capacity). The greater the information load of each item in a stimulus class (as indicated by a slower search rate), the fewer items from that class one can hold in memory. Extrapolating this linear relationship reveals that there is also an upper bound on capacity of approximately four or five objects. Thus, both the visual information load and number of objects impose capacity limits on visual short-term memory.
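As a rough illustration of the analysis described above (not the authors' code or data), the sketch below fits a line relating search rate to the reciprocal of memory capacity and reads an upper bound on capacity off the intercept. The five data pairs are hypothetical placeholders; only the 1.6 and 4.4 capacity endpoints echo values quoted in the abstract.

```python
# Minimal sketch of the reported relationship: search rate is linearly related to
# the reciprocal of memory capacity, and extrapolating to zero information load
# gives an upper bound on capacity. Data values below are hypothetical placeholders.
import numpy as np

search_rate = np.array([ 5.0, 15.0, 25.0, 40.0, 50.0])   # ms/item, hypothetical
capacity    = np.array([ 4.4,  3.4,  2.7,  2.0,  1.6])   # items; endpoints echo the abstract

recip = 1.0 / capacity                                    # reciprocal of capacity

# Linear fit: 1/capacity = slope * search_rate + intercept
slope, intercept = np.polyfit(search_rate, recip, deg=1)

# Coefficient of determination for the fit
pred = slope * search_rate + intercept
r_squared = 1.0 - np.sum((recip - pred) ** 2) / np.sum((recip - recip.mean()) ** 2)

# Extrapolate to zero information load: capacity ceiling = 1 / intercept
ceiling = 1.0 / intercept
print(f"r^2 = {r_squared:.3f}, extrapolated capacity ceiling = {ceiling:.1f} objects")
```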


Spatial Vision | 1989

Motion: the long and short of it

Patrick Cavanagh; George Mather

Several authors have proposed that motion is analyzed by two separate processes: short-range and long-range. We claim that the differences between short-range and long-range motion phenomena are a direct consequence of the stimuli used in the two paradigms and are not evidence for the existence of two qualitatively different motion processes. We propose that a single style of motion analysis, similar to the well-known Reichardt and Marr-Ullman motion detectors, underlies all motion phenomena. Although there are different detectors of this type specialized for different visual attributes (namely, first-order and second-order stimuli), they all share the same mode of operation. We review studies of second-order motion stimuli to show that they share the basic phenomena observed for first-order stimuli. The similarity across stimulus types suggests not parallel streams of motion extraction, one short-range and passive and the other long-range and intelligent, but rather a common initial stage of motion extraction followed by a general inference process.


Cognitive Psychology | 2001

The Spatial Resolution of Visual Attention.

James Intriligator; Patrick Cavanagh

Two tasks were used to evaluate the grain of visual attention, the minimum spacing at which attention can select individual items. First, observers performed a tracking task at many viewing distances. When the display subtended less than 1 degree in size, tracking was no longer possible even though observers could resolve the items and their motions: the items were visible but could not be individuated one from the other. The limiting size for selection was roughly the same whether tracking one or three targets, suggesting that the resolution limit acts independently of the capacity limit of attention. Second, the closest spacing that still allowed individuation of single items in dense, static displays was examined. This critical spacing was about 50% coarser in the radial direction than in the tangential direction and was coarser in the upper than in the lower visual field. The results suggest that no more than about 60 items can be arrayed in the central 30 degrees of the visual field while still allowing attentional access to each individually. Our data show that selection has a coarse grain, much coarser than visual resolution. These measures of the resolution of attention are based solely on the selection of location and are not confounded with preattentive feature interactions that may contribute to measures from flanker and crowding tasks. The results suggest that the parietal area is the most likely locus of this selection mechanism and that it acts by pointing to the spatial coordinates (or cortical coordinates) of items of interest rather than by holding a representation of the items themselves.


Attention, Perception, & Psychophysics | 2004

Attention and the subjective expansion of time

Peter U. Tse; James Intriligator; Josée Rivest; Patrick Cavanagh

During brief, dangerous events, such as car accidents and robberies, many people report that events seem to pass in slow motion, as if time had slowed down. We have measured a similar, although less dramatic, effect in response to unexpected, nonthreatening events. We attribute the subjective expansion of time to the engagement of attention and its influence on the amount of perceptual information processed. We term the effect time's subjective expansion (TSE) and examine here the objective temporal dynamics of these distortions. When a series of stimuli is shown in succession, the low-probability oddball stimulus in the series tends to last subjectively longer than the high-probability stimulus, even when the two have the same objective duration. In particular, (1) there is a latency of at least 120 msec between stimulus onset and the onset of TSE, which may be preceded by subjective temporal contraction; (2) there is a peak in TSE, at which subjective time is particularly distorted, at a latency of 225 msec after stimulus onset; and (3) the temporal dynamics of TSE are approximately the same in the visual and the auditory domains. Two control experiments (in which the methods of magnitude estimation and stimulus reproduction were used) replicated the temporal dynamics of TSE revealed by the method of constant stimuli, although the initial peak was not apparent with these methods. In addition, a third control experiment (in which the method of single stimuli was used) showed that TSE in the visual domain can occur because of semantic novelty, rather than image novelty per se. Overall, the results support the view that attentional orienting underlies distortions in perceived duration.


Attention, Perception, & Psychophysics | 1994

Familiarity and pop-out in visual search

Qinqin Wang; Patrick Cavanagh; Marc Green

In this paper, we report that when the low-level features of targets and distractors are held constant, visual search performance can be strongly influenced by familiarity. In the first condition, one unfamiliar character was the target amid the other unfamiliar character as distractors, and vice versa. The response time increased steeply as a function of the number of distractors (82 msec/item). When the same stimuli were rotated by 90° (the second condition), however, they became familiar patterns and gave rise to much shallower search functions (31 msec/item). In the third condition, when the search was for a familiar target among unfamiliar distractors, the slope was about 46 msec/item. In the last condition, when the search was for an unfamiliar target among familiar distractors, parallel search functions were found with a slope of about 1.5 msec/item. These results show that familiarity speeds visual search and that it does so principally when the distractors, not the targets, are familiar.
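To make the reported slopes concrete, here is a small sketch (not from the paper) that plugs them into the standard linear search model, response time = baseline + slope × set size. The slopes are those quoted above; the 500 ms baseline and the set sizes are hypothetical.

```python
# Illustration of what the reported search slopes imply for response times,
# using the standard linear model RT = baseline + slope * set_size.
# Slopes are taken from the abstract; baseline and set sizes are hypothetical.
slopes_ms_per_item = {
    "unfamiliar target among unfamiliar distractors": 82.0,
    "same stimuli rotated 90 deg (familiar)":          31.0,
    "familiar target among unfamiliar distractors":    46.0,
    "unfamiliar target among familiar distractors":     1.5,
}

baseline_ms = 500.0           # hypothetical baseline response time
set_sizes = [4, 8, 16]        # hypothetical display sizes

for condition, slope in slopes_ms_per_item.items():
    rts = [baseline_ms + slope * n for n in set_sizes]
    summary = ", ".join(f"{n} items -> {rt:.0f} ms" for n, rt in zip(set_sizes, rts))
    print(f"{condition}: {summary}")
```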


Psychological Science | 2005

Independent Resources for Attentional Tracking in the Left and Right Visual Hemifields

George A. Alvarez; Patrick Cavanagh

The ability to divide attention enables people to keep track of up to four independently moving objects. We now show that this tracking capacity is independently constrained in the left and right visual fields as if separate tracking systems were engaged, one in each field. Specifically, twice as many targets can be successfully tracked when they are divided between the left and right hemifields as when they are all presented within the same hemifield. This finding places broad constraints on the anatomy and mechanisms of attentive tracking, ruling out a single attentional focus, even one that moves quickly from target to target.


Neuron | 2001

Attention Response Functions: Characterizing Brain Areas Using fMRI Activation during Parametric Variations of Attentional Load

Jody C. Culham; Patrick Cavanagh; Nancy Kanwisher

We derived attention response functions for different cortical areas by plotting neural activity (measured by fMRI) as a function of attentional load in a visual tracking task. In many parietal and frontal cortical areas, activation increased with load over the entire range of loads tested, suggesting that these areas are directly involved in attentional processes. However, in other areas (FEF and parietal area 7), strong activation was observed even at the lowest attentional load (compared to a passive baseline using identical stimuli), but little or no additional activation was seen with increasing load. These latter areas appear to play a different role, perhaps supporting task-relevant functions that do not vary with load, such as the suppression of eye movements.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1987

Equiluminance: spatial and temporal factors and the contribution of blue-sensitive cones

Patrick Cavanagh; Donald I. A. MacLeod; Stuart Anstis

Equiluminance ratios for red/green, red/blue and green/blue sine-wave gratings were determined by using a minimum-motion heterochromatic matching technique that permitted reliable settings at temporal frequencies as low as 0.5 Hz. The red/green equiluminance ratio was influenced by temporal but not spatial frequency, the green/blue ratio was influenced by spatial but not temporal frequency, and the red/blue ratio was influenced by both. After bleaching of the blue-sensitive cones, there was no change in equiluminance ratios, indicating no contribution of the blue-sensitive cones to the luminance channel even at low temporal and spatial frequencies. The inhomogeneity of yellow pigmentation within the macular region was identified as the source of the spatial-frequency effect on the blue/green ratio.


Nature Neuroscience | 2011

Predictive remapping of attention across eye movements

Martin Rolfs; Donatas Jonikaitis; Heiner Deubel; Patrick Cavanagh

Many cells in retinotopic brain areas increase their activity when saccades (rapid eye movements) are about to bring stimuli into their receptive fields. Although previous work has attempted to look at the functional correlates of such predictive remapping, no study has explicitly tested for better attentional performance at the future retinal locations of attended targets. We found that, briefly before the eyes start moving, attention drawn to the targets of upcoming saccades also shifted to those retinal locations that the targets would cover once the eyes had moved, facilitating future movements. This suggests that presaccadic visual attention shifts serve to both improve presaccadic perceptual processing at the target locations and speed subsequent eye movements to their new postsaccadic locations. Predictive remapping of attention provides a sparse, efficient mechanism for keeping track of relevant parts of the scene when frequent rapid eye movements provoke retinal smear and temporal masking.


Nature Neuroscience | 2000

Motion distorts visual space: shifting the perceived position of remote stationary objects

David Whitney; Patrick Cavanagh

To perceive the relative positions of objects in the visual field, the visual system must assign locations to each stimulus. This assignment is determined by the object's retinal position, the direction of gaze, eye movements, and the motion of the object itself. Here we show that perceived location is also influenced by motion signals that originate in distant regions of the visual field. When a pair of stationary lines is flashed, straddling but not overlapping a rotating radial grating, the lines appear displaced in a direction consistent with that of the grating's motion, even when the lines are a substantial distance from the grating. The results indicate that motion's influence on position is not restricted to the moving object itself, and that even the positions of stationary objects are coded by mechanisms that receive input from motion-sensitive neurons.

Collaboration


Dive into Patrick Cavanagh's collaborations.

Top Co-Authors

Stuart Anstis, University of California
Martin Rolfs, Humboldt University of Berlin
Thérèse Collins, Paris Descartes University
Matteo Lisi, Paris Descartes University
David Whitney, University of California
Rufin VanRullen, Centre national de la recherche scientifique
Lorella Battelli, Istituto Italiano di Tecnologia