Publication


Featured research published by Erik Blaser.


Nature | 2000

Tracking an object through feature space.

Erik Blaser; Zenon W. Pylyshyn; Alex O. Holcombe

Visual attention allows an observer to select certain visual information for specialized processing. Selection is readily apparent in ‘tracking’ tasks where even with the eyes fixed, observers can track a target as it moves among identical distractor items. In such a case, a target is distinguished by its spatial trajectory. Here we show that one can keep track of a stationary item solely on the basis of its changing appearance—specified by its trajectory along colour, orientation, and spatial frequency dimensions—even when a distractor shares the same spatial location. This ability to track through feature space bears directly on competing theories of attention, that is, on whether attention can select locations in space, features such as colour or shape, or particular visual objects composed of constellations of visual features. Our results affirm, consistent with a growing body of psychophysical and neurophysiological evidence, that attention can indeed select specific visual objects. Furthermore, feature-space tracking extends the definition of visual object to include not only items with well defined spatio-temporal trajectories, but also those with well defined featuro-temporal trajectories.


Vision Research | 1995

The accuracy and precision of saccades to small and large targets.

Eileen Kowler; Erik Blaser

Subjects made saccades to point and spatially-extended targets located at a randomly-selected eccentricity (3.8-4.2 deg) under conditions designed to promote best possible accuracy based only on the visual information present in a single trial. Saccadic errors to point targets were small. The average difference between mean saccade size and target eccentricity was about 1% of eccentricity. Precision was excellent (SD = 5-6% of eccentricity), rivaling the precision of relative perceptual localization. This level of performance was maintained for targets up to 3 deg in diameter. Corrective saccades were infrequent and limited almost exclusively to the point targets. We conclude that the saccadic system has access to a precise representation of a central reference position within spatially-extended targets and that, when explicitly required to do so, the saccadic system is capable of demonstrating remarkably accurate and precise performance.
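
To make these percentage figures concrete: at a 4 deg target eccentricity, a 1% mean error is about 0.04 deg and a 5-6% SD is about 0.2-0.24 deg. The following is a minimal Python sketch of how such accuracy and precision figures are computed; the saccade amplitudes are made-up illustrative values, not data from the paper.

import statistics

# Hypothetical saccade amplitudes (deg) for one target; illustrative
# values only, not data from the paper.
target_eccentricity = 4.0  # deg, within the paper's 3.8-4.2 deg range
saccade_sizes = [3.95, 4.02, 4.10, 3.98, 4.05, 3.92, 4.08, 4.01]

mean_size = statistics.mean(saccade_sizes)
sd_size = statistics.stdev(saccade_sizes)

# Accuracy: difference between mean saccade size and target
# eccentricity, as a percentage of eccentricity (paper: ~1%).
accuracy_pct = 100 * abs(mean_size - target_eccentricity) / target_eccentricity

# Precision: standard deviation as a percentage of eccentricity
# (paper: 5-6% of eccentricity).
precision_pct = 100 * sd_size / target_eccentricity

print(f"accuracy error: {accuracy_pct:.1f}% of eccentricity")
print(f"precision (SD): {precision_pct:.1f}% of eccentricity")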


Journal of Vision | 2010

Retinal blur and the perception of egocentric distance

Dhanraj Vishwanath; Erik Blaser

A central function of vision is determining the layout and size of objects in the visual field, both of which require knowledge of egocentric distance (the distance of an object from the observer). A wide range of visual cues can reliably signal relative depth relations among objects, but retinal signals directly specifying distance to an object are limited. A potential source of distance information is the pattern of blurring on the retina, since nearer fixation generally produces larger gradients of blur on the extra-foveal retina. While prior studies implicated blur as only a qualitative cue for relative depth ordering, we find that retinal blur gradients can act as a quantitative cue to distance. Surfaces depicted with blur gradients were judged as significantly closer than those without, with the size of the effect modulated by the degree of blur as well as by the availability of other extra-retinal cues to distance. Blur gradients produced substantial changes in perceived distance regardless of the relative depth relations of the surfaces indicated by other cues, suggesting that blur operates as a robust cue to distance, consistent with the empirical relationship between blur and fixation distance.
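
The optical relationship behind this abstract's premise (nearer fixation produces larger blur for the same depth offset) can be illustrated with a standard thin-lens approximation, in which angular blur is roughly pupil aperture times dioptric defocus. This is a textbook sketch with assumed parameters, not a formula or values taken from the paper.

import math

def blur_diameter_deg(object_dist_m, fixation_dist_m, pupil_mm=4.0):
    # Thin-lens approximation: angular blur-circle diameter (radians)
    # ~ pupil aperture (m) * defocus (diopters). Illustrative only.
    defocus_diopters = abs(1.0 / object_dist_m - 1.0 / fixation_dist_m)
    blur_rad = (pupil_mm / 1000.0) * defocus_diopters
    return math.degrees(blur_rad)

# A point 0.5 m beyond fixation blurs far more under near fixation
# than under far fixation, which is what makes blur gradients
# informative about egocentric distance.
print(blur_diameter_deg(1.0, 0.5))  # near fixation: ~0.23 deg
print(blur_diameter_deg(4.5, 4.0))  # far fixation:  ~0.006 deg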


Vision Research | 2004

Object-based cross-feature attentional modulation from color to motion.

Wonyeong Sohn; Thomas V. Papathomas; Erik Blaser; Zoltán Vidnyánszky

Object-based theories of visual attention predict that attempting to direct attention to a particular attribute of a visual object will result in an automatic selection of the whole object, including all of its features. It has been assumed, but not critically tested, that the spreading of attention from one feature to another in this manner, i.e. cross-feature attentional (CFA) effects, takes place at object-level stages of processing as opposed to early, local stages. In the present study we disambiguated these options for color-to-motion CFA by contrasting attention's effect on bivectorial transparent versus bivectorial locally paired motion displays. We found that the association between features at the global, but not at the local, stage of motion processing leads to cross-feature attentional effects. These findings provide strong psychophysical evidence that such effects are indeed object-based.


European Journal of Neuroscience | 2005

Binding of motion and colour is early and automatic

Erik Blaser; Thomas V. Papathomas; Zoltán Vidnyánszky

At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour‐contingent motion after‐effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite‐motion dot display, a stimulus that, importantly, is known to trigger direction‐specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.


Trends in Cognitive Sciences | 2002

Motion integration during motion aftereffects

Zoltán Vidnyánszky; Erik Blaser; Thomas V. Papathomas

The perceived global motion of a stimulus depends on how its different local motion-direction vectors are distributed in space and time. When they are explicitly co-localized, as in the case of locally paired motion, competitive motion integration mechanisms produce a unitary global motion direction determined by their vector average. During motion aftereffects induced by simultaneous adaptation to multiple motion directions, just as in the case of locally paired motion, different directional signals originate simultaneously from exactly the same position in space. Therefore, the perceived global motion direction during motion aftereffects results from local vector averaging of the co-localized motion-direction signals induced by adaptation.
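
The vector-averaging computation described above can be sketched in a few lines of Python; the direction signals and weights below are made-up illustrative values, not a model from the article.

import math

# Two hypothetical co-localized motion-direction signals, as in a
# locally paired display; directions and weights are illustrative.
directions_deg = [0.0, 90.0]  # rightward and upward
weights = [1.0, 1.0]          # equal signal strengths

# Vector average: sum the weighted unit vectors, then take the
# direction of the resultant.
x = sum(w * math.cos(math.radians(d)) for w, d in zip(weights, directions_deg))
y = sum(w * math.sin(math.radians(d)) for w, d in zip(weights, directions_deg))
average_direction = math.degrees(math.atan2(y, x))

print(f"vector-average direction: {average_direction:.1f} deg")  # 45.0 here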


Vision Research | 2000

Color-specific depth mechanisms revealed by a color-contingent depth aftereffect

Fulvio Domini; Erik Blaser; Carol M. Cicerone

Models of stereoscopic depth perception for both natural and random-dot images have focused mainly on the matching of achromatic features of binocular images. Recently, a growing body of research has investigated whether chromatic features can also contribute to the construction of stereoscopic depth. Here we present experiments yielding color-contingent depth aftereffects comparable in magnitude to those measured after adaptation to achromatic stimuli as evidence of neural mechanisms tuned to both color and depth. Furthermore, we report that the locus of the combined processing of color and depth is likely to lie beyond the site of binocular matching.


Vision Research | 2002

The conjunction of feature and depth information

Erik Blaser; Fulvio Domini

By inducing feature-contingent depth aftereffects, we show that the human visual system combines feature information with depth information. These contingent aftereffects were revealed through the use of a novel selective adaptation paradigm whose stimuli required the combination of feature and depth information in order to segment two interleaved, transparent surfaces. We argue that this combined processing exemplifies the remarkable resourcefulness of a visual system that has adapted to exploit conjunctions of cues that can aid in the segmentation of visual surfaces.


Developmental Science | 2011

Toddlers with Autism Spectrum Disorder are more successful at visual search than typically developing toddlers

Zsuzsa Kaldy; Catherine Kraper; Alice S. Carter; Erik Blaser


Journal of Autism and Developmental Disorders | 2016

The Mechanisms Underlying the ASD Advantage in Visual Search.

Zsuzsa Kaldy; Ivy Giserman; Alice S. Carter; Erik Blaser

Collaboration


Dive into Erik Blaser's collaborations.

Top Co-Authors

Zsuzsa Kaldy, University of Massachusetts Boston
Zoltán Vidnyánszky, Hungarian Academy of Sciences
Mahalakshmi Ramamurthy, University of Massachusetts Boston
Sylvia Guillory, University of Massachusetts Boston
Alice S. Carter, University of Massachusetts Boston
Hayley Smith, University of Massachusetts Boston
Luke Eglington, University of Massachusetts Boston