
Publication


Featured research published by Alex O. Holcombe.


Nature | 2000

Tracking an object through feature space.

Erik Blaser; Zenon W. Pylyshyn; Alex O. Holcombe

Visual attention allows an observer to select certain visual information for specialized processing. Selection is readily apparent in ‘tracking’ tasks where even with the eyes fixed, observers can track a target as it moves among identical distractor items. In such a case, a target is distinguished by its spatial trajectory. Here we show that one can keep track of a stationary item solely on the basis of its changing appearance—specified by its trajectory along colour, orientation, and spatial frequency dimensions—even when a distractor shares the same spatial location. This ability to track through feature space bears directly on competing theories of attention, that is, on whether attention can select locations in space, features such as colour or shape, or particular visual objects composed of constellations of visual features. Our results affirm, consistent with a growing body of psychophysical and neurophysiological evidence, that attention can indeed select specific visual objects. Furthermore, feature-space tracking extends the definition of visual object to include not only items with well defined spatio-temporal trajectories, but also those with well defined featuro-temporal trajectories.


Memory & Cognition | 2004

Repetition priming in visual search: Episodic retrieval, not feature priming

Liqiang Huang; Alex O. Holcombe; Harold Pashler

Previous research has shown that when the targets of successive visual searches have features in common, response times are shorter. However, the nature of the representation underlying this priming and how priming is affected by the task remain uncertain. In four experiments, subjects searched for an odd-sized target and reported its orientation. The color of the items was irrelevant to the task. When target size was repeated from the previous trial, repetition of target color speeded the response. However, when target size was different from that in the previous trial, repetition of target color slowed responses, rather than speeding them. Our results suggest that these priming phenomena reflect the same automatic mechanism as the priming of pop-out reported by Maljkovic and Nakayama (1994). However, the crossover interaction between repetition of one feature and another rules out Maljkovic and Nakayama's (1994) theory of independent potentiation of distinct feature representations. Instead, we suggest that the priming pattern results from contact with an episodic memory representation of the previous trial.


Cognitive Psychology | 1998

On the Lawfulness of Grouping by Proximity

Michael Kubovy; Alex O. Holcombe; Johan Wagemans

The visual system groups close things together. Previous studies of grouping by proximity have failed to measure grouping strength or to assess the effect of configuration. We do both. We reanalyze data from an experiment by Kubovy and Wagemans (1995) in which they briefly presented multi-stable dot patterns that can be perceptually organized into alternative collections of parallel strips of dots, and in which they parametrically varied the distances between dots and the angles between alternative organizations. Our analysis shows that relative strength of grouping into strips of dots of a particular orientation approximates a decreasing exponential function of the relative distance between dots in that orientation. The configural or wholistic properties that were varied--such as angular separations of the alternative organizations and the symmetry properties of the dot pattern--do not matter. Additionally, this grouping function is robust under transformations of scale in space (Experiment 1) and time (Experiment 2). Grouping of units which are themselves the result of grouping (i.e., pairs of dots; Experiment 3) also follows our nonconfigural rule.


Nature Neuroscience | 2001

Early binding of feature pairs for visual perception

Alex O. Holcombe; Patrick Cavanagh

If features such as color and orientation are processed separately by the brain at early stages, how does the brain subsequently match the correct color and orientation? We found that spatially superposed pairings of orientation with either color or luminance could be reported even for extremely high rates of presentation, which suggests that these features are coded in combination explicitly by early stages, thus eliminating the need for any subsequent binding of information. In contrast, reporting the pairing of spatially separated features required rates an order of magnitude slower, suggesting that perceiving these pairs requires binding at a slow, attentional stage.


The Journal of Neuroscience | 2005

Time and the Brain: How Subjective Time Relates to Neural Time

David M. Eagleman; Peter U. Tse; Dean V. Buonomano; Peter Janssen; Anna C. Nobre; Alex O. Holcombe

Most of the actions our brains perform on a daily basis, such as perceiving, speaking, and driving a car, require timing on the scale of tens to hundreds of milliseconds. New discoveries in psychophysics, electrophysiology, imaging, and computational modeling are contributing to an emerging picture of how the brain processes, learns, and perceives time.


Trends in Cognitive Sciences | 2009

Seeing slow and seeing fast: two limits on perception

Alex O. Holcombe

Video cameras have a single temporal limit set by the frame rate. The human visual system has multiple temporal limits set by its various constituent mechanisms. These limits seem to form two groups. A fast group comprises specialized mechanisms for extracting perceptual qualities such as motion direction, depth and edges. The second group, with coarse temporal resolution, includes judgments of the pairing of color and motion, the joint identification of arbitrary spatially separated features, the recognition of words and high-level motion. These temporally coarse percepts might all be mediated by high-level processes. Working at very different timescales, the two groups of mechanisms collaborate to create our unified visual experience.


Trends in Cognitive Sciences | 2002

Causality and the perception of time

David M. Eagleman; Alex O. Holcombe

Does our perception of when an event occurs depend on whether we caused it? A recent study suggests that when we perceive our actions to cause an event, it seems to occur earlier than if we did not cause it.


Journal of Vision | 2008

Mobile computation: spatiotemporal integration of the properties of objects in motion.

Patrick Cavanagh; Alex O. Holcombe; Wei-Lun Chou

We demonstrate that, as an object moves, color and motion signals from successive, widely spaced locations are integrated, but letter and digit shapes are not. The features that integrate as an object moves match those that integrate when the eyes move but the object is stationary (spatiotopic integration). We suggest that this integration is mediated by large receptive fields gated by attention and that it occurs for surface features (motion and color) that can be summed without precise alignment but not shape features (letters or digits) that require such alignment. Rapidly alternating pairs of colors and motions were presented at several locations around a circle centered at fixation. The same two stimuli alternated at each location with the phase of the alternation reversing from one location to the next. When observers attended to only one location, the stimuli alternated in both retinal coordinates and in the attended stream: feature identification was poor. When the observers' attention shifted around the circle in synchrony with the alternation, the stimuli still alternated at each location in retinal coordinates, but now attention always selected the same color and motion, with the stimulus appearing as a single unchanging object stepping across the locations. The maximum presentation rate at which the color and motion could be reported was twice that for stationary attention, suggesting (as control experiments confirmed) object-based integration of these features. In contrast, the identification of a letter or digit alternating with a mask showed no advantage for moving attention despite the fact that moving attention accessed (within the limits of precision for attentional selection) only the target and never the mask. The masking apparently leaves partial information that cannot be integrated across locations, and we speculate that for spatially defined patterns like letters, integration across large shifts in location may be limited by problems in aligning successive samples. Our results also suggest that as attention moves, the selection of any given location (dwell time) can be as short as 50 ms, far shorter than the typical dwell time for stationary attention. Moving attention can therefore sample a brief instant of a rapidly changing stream if it passes quickly through, giving access to events that are otherwise not seen.


Vision Research | 2008

Tracking the changing features of multiple objects: Progressively poorer perceptual precision and progressively greater perceptual lag

Christina J. Howard; Alex O. Holcombe

To measure the limits on attentive tracking of continuously changing features, we used a task in which objects changed smoothly and unpredictably in orientation, spatial period, or position. Observers reported the last state of one of the objects. We observed a gradual decline in performance as the number of tracked objects increased, implicating a graded processing resource. Additionally, responses were more similar to previous states of the tracked object than to its final state, especially in the case of spatial frequency. Indeed, for spatial frequency this perceptual lag reached 250 ms when tracking four objects. The pattern of the perceptual lags, the graded effect of set size, and the double-report performance suggest the presence of both serial and parallel processing elements.


Perspectives on Psychological Science | 2014

An Introduction to Registered Replication Reports at Perspectives on Psychological Science

Daniel J. Simons; Alex O. Holcombe; Barbara A. Spellman

Much of science, including psychology, consists of eliciting, measuring, and documenting effects. In psychology, these effects are used to test and support theories about how the mind works or why people behave the way they do. Yet, for several reasons, the first report of an effect in the published literature rarely provides enough evidence to draw firm conclusions about the actual size of the effect. First, the sample sizes typically used in psychology studies are not large enough to measure an effect with precision (Marszalek, Barber, Kohlhart, & Holmes, 2012). Second, scientific publishing favors statistically significant, novel results over inconclusive or negative ones (Fanelli, 2012). If discrepant results rarely enter the literature, then published studies will, on average, overestimate the true size of the effects they report. And third, just by chance, some published studies will be false positives (Ioannidis, 2005). Uncertainty about the true size of important effects hampers psychological theorizing. Moreover, single studies provide little evidence for the robustness of an effect across the population to which it is assumed to generalize. This issue of Perspectives on Psychological Science includes the first example of a new type of journal article, one designed to provide a more definitive measure of the size and reliability of important effects: the Registered Replication Report (RRR; see Simons & Holcombe, 2014). RRRs compile a set of studies from a variety of laboratories that all followed an identical, vetted protocol designed to reproduce the original method and finding as closely as possible. By combining the resources of multiple labs, RRRs provide the ingredients for a meta-analysis that can authoritatively establish the size and reliability of an effect.

Collaboration


Dive into Alex O. Holcombe's collaborations.

Top Co-Authors

David M. Eagleman
Baylor College of Medicine

Klaus M. Stiefel
University of Western Sydney