Publications


Featured research published by Alan A. Stocker.


Nature Neuroscience | 2006

Noise characteristics and prior expectations in human visual speed perception

Alan A. Stocker; Eero P. Simoncelli

Human visual speed perception is qualitatively consistent with a Bayesian observer that optimally combines noisy measurements with a prior preference for lower speeds. Quantitative validation of this model, however, is difficult because the precise noise characteristics and prior expectations are unknown. Here, we present an augmented observer model that accounts for the variability of subjective responses in a speed discrimination task. This allowed us to infer the shape of the prior probability as well as the internal noise characteristics directly from psychophysical data. For all subjects, we found that the fitted model provides an accurate description of the data across a wide range of stimulus parameters. The inferred prior distribution shows significantly heavier tails than a Gaussian, and the amplitude of the internal noise is approximately proportional to stimulus speed and depends inversely on stimulus contrast. The framework is general and should prove applicable to other experiments and perceptual modalities.
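
The core computation can be illustrated with a small numerical sketch: a posterior over speed is obtained by multiplying a contrast-dependent Gaussian likelihood with a slow-speed prior, and the estimate is the posterior mean. The prior shape, the noise-contrast relation, and all parameter values below are illustrative assumptions, not the quantities the paper infers from the psychophysical data.

```python
import numpy as np

# Minimal sketch of a Bayesian speed observer (illustrative only; the paper infers
# the prior shape and noise characteristics directly from psychophysical data).
speeds = np.linspace(0.01, 20.0, 2000)        # candidate speeds (deg/s)
prior = 1.0 / (speeds + 0.3) ** 2             # hypothetical heavy-tailed slow-speed prior
prior /= np.trapz(prior, speeds)

def estimate(true_speed, contrast):
    """Posterior-mean speed estimate for a single (noise-free) measurement."""
    sigma = 1.0 / contrast                    # assumption: noise grows as contrast drops
    likelihood = np.exp(-0.5 * ((speeds - true_speed) / sigma) ** 2)
    posterior = likelihood * prior
    posterior /= np.trapz(posterior, speeds)
    return np.trapz(speeds * posterior, speeds)

print(estimate(8.0, contrast=0.8))   # high contrast (narrow likelihood): small slow-speed bias
print(estimate(8.0, contrast=0.1))   # low contrast (broad likelihood): strong bias toward slow speeds
```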


Neural Computation | 2009

Is the homunculus aware of sensory adaptation?

Peggy Seriès; Alan A. Stocker; Eero P. Simoncelli

Neural activity and perception are both affected by sensory history. The work presented here explores the relationship between the physiological effects of adaptation and their perceptual consequences. Perception is modeled as arising from an encoder-decoder cascade, in which the encoder is defined by the probabilistic response of a population of neurons, and the decoder transforms this population activity into a perceptual estimate. Adaptation is assumed to produce changes in the encoder, and we examine the conditions under which the decoder behavior is consistent with observed perceptual effects in terms of both bias and discriminability. We show that for all decoders, discriminability is bounded from below by the inverse Fisher information. Estimation bias, on the other hand, can arise for a variety of different reasons and can range from zero to substantial. We specifically examine biases that arise when the decoder is fixed, unaware of the changes in the encoding population (as opposed to aware of the adaptation and changing accordingly). We simulate the effects of adaptation on two well-studied sensory attributes, motion direction and contrast, assuming a gain change description of encoder adaptation. Although we cannot uniquely constrain the source of decoder bias, we find for both motion and contrast that an unaware decoder that maximizes the likelihood of the percept given by the preadaptation encoder leads to predictions that are consistent with behavioral data. This model implies that adaptation-induced biases arise as a result of temporary suboptimality of the decoder.
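
The notion of an "unaware" decoder can be made concrete with a toy simulation: a population with Gaussian tuning and Poisson noise is adapted by a local gain reduction, but maximum-likelihood decoding still uses the pre-adaptation tuning curves. The tuning widths, gain profile, and adapter/stimulus values below are hypothetical choices for illustration, not the parameters used in the paper.

```python
import numpy as np

# Toy simulation of an "unaware" maximum-likelihood decoder.
rng = np.random.default_rng(0)
prefs = np.linspace(-90, 90, 64)                                  # preferred directions (deg)

def tuning(stim, gain):
    """Mean firing rates of the population for a given stimulus and gain profile."""
    return gain * 20 * np.exp(-0.5 * ((prefs - stim) / 20) ** 2) + 1.0

adapter, stim = 0.0, 15.0
gain_post = 1 - 0.5 * np.exp(-0.5 * ((prefs - adapter) / 20) ** 2)      # gain loss near the adapter
rates = rng.poisson(tuning(stim, gain_post), size=(2000, prefs.size))   # adapted responses

# Unaware decoder: Poisson log-likelihood evaluated with PRE-adaptation tuning (gain = 1).
cands = np.linspace(-90, 90, 361)
f_pre = np.array([tuning(c, 1.0) for c in cands])                 # (candidates, neurons)
loglik = rates @ np.log(f_pre).T - f_pre.sum(axis=1)              # constant terms dropped
estimates = cands[loglik.argmax(axis=1)]
print("mean estimate:", estimates.mean())   # tends to be repelled from the adapter (true stim = 15)
```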


Nature Neuroscience | 2015

A Bayesian observer model constrained by efficient coding can explain 'anti-Bayesian' percepts

Xue-Xin Wei; Alan A. Stocker

Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be too simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer's prior beliefs. Second, it predicts that stimulus uncertainty differentially affects perceptual bias depending on whether the uncertainty is induced by internal or external noise. We found that both model predictions match reported perceptual biases in perceived visual orientation and spatial frequency, and were able to explain data that have not been explained before. The model is general and should prove applicable to other perceptual variables and tasks.
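
A minimal sketch of the efficient-coding constraint: the stimulus is mapped through the cumulative of the prior (so internal resolution follows the prior), homogeneous noise is added in that internal space, and a Bayesian posterior-mean estimate is read out in stimulus space. The prior, noise level, and loss function below are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

# Sketch of an efficient-coding Bayesian observer (illustrative prior and noise).
rng = np.random.default_rng(1)
s = np.linspace(-2.0, 2.0, 4001)                     # stimulus grid
prior = np.exp(-np.abs(s) / 0.4)                     # hypothetical prior peaked at 0
prior /= np.trapz(prior, s)
F = np.cumsum(prior) * (s[1] - s[0])                 # efficient encoding: CDF maps s -> [0, 1]

def average_estimate(s_true, sigma=0.08, n=2000):
    """Average posterior-mean estimate when noise is added in the internal space."""
    m = np.interp(s_true, s, F) + sigma * rng.normal(size=n)   # noisy internal measurements
    ests = []
    for mi in m:
        post = np.exp(-0.5 * ((F - mi) / sigma) ** 2) * prior  # posterior over s
        post /= np.trapz(post, s)
        ests.append(np.trapz(s * post, s))
    return np.mean(ests)

print(average_estimate(0.5))   # tends to exceed 0.5: bias *away* from the prior peak at 0
```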


Current Opinion in Neurobiology | 2010

Ambiguity and invariance: two fundamental challenges for visual processing

Nicole C. Rust; Alan A. Stocker

The visual system is tasked with extracting stimulus content (e.g. the identity of an object) from the spatiotemporal light pattern falling on the retina. However, visual information can be ambiguous with regard to content (e.g. an object when viewed from far away), requiring the system to also consider contextual information. Additionally, visual information originating from the same content can differ (e.g. the same object viewed from different angles), requiring the system to extract content invariant to these differences. In this review, we explore these challenges from experimental and theoretical perspectives, and motivate the need to incorporate solutions for both ambiguity and invariance into hierarchical models of visual processing.


Journal of Vision | 2011

Optimal inference explains the perceptual coherence of visual motion stimuli

James H. Hedges; Alan A. Stocker; Eero P. Simoncelli

The local spatiotemporal pattern of light on the retina is often consistent with a single translational velocity, but it may also be interpreted as a superposition of spatial patterns translating with different velocities. Human perception reflects such interpretations, as can be demonstrated using stimuli constructed from a superposition of two drifting gratings. Depending on a variety of parameters, these stimuli may be perceived as a coherently moving plaid pattern or as two transparent gratings moving in different directions. Here, we propose a quantitative model that explains how and why such interpretations are selected. An observer's percept corresponds to the most probable interpretation of noisy measurements of local image motion, based on separate prior beliefs about the speed and singularity of visual motion. This model accounts for human perceptual interpretations across a broad range of angles and speeds. With optimized parameters, its components are consistent with previous results in motion perception.
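
As a heavily simplified illustration of interpretation selection, the toy comparison below weighs a single coherent pattern velocity (the intersection-of-constraints solution) against two separate component velocities under an exponential slow-speed prior only; it ignores measurement noise and the paper's prior on the singularity of motion, and all parameters are hypothetical.

```python
import numpy as np

# Toy comparison of the "coherent plaid" vs. "transparent gratings" interpretations
# under a slow-speed prior only (not the paper's full observer model).
def log_slow_prior(speed, scale=2.0):
    return -speed / scale                      # hypothetical exponential preference for slow speeds

def prefers_coherent(component_speed, half_angle_deg):
    theta = np.radians(half_angle_deg)
    coherent_speed = component_speed / np.cos(theta)          # intersection-of-constraints speed
    log_p_coherent = log_slow_prior(coherent_speed)           # one pattern velocity
    log_p_transparent = 2 * log_slow_prior(component_speed)   # two independent grating velocities
    return log_p_coherent > log_p_transparent

print(prefers_coherent(1.0, 30))   # small angle between gratings: coherence favored
print(prefers_coherent(1.0, 80))   # large angle: transparency favored
```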


International Symposium on Circuits and Systems | 2002

An improved 2D optical flow sensor for motion segmentation

Alan A. Stocker

A functional focal-plane implementation of a 2D optical flow system is presented that detects and preserves motion discontinuities. The system is composed of two different network layers of analog computational units arranged in retinotopic order. The units in the first layer (the optical flow network) estimate the local optical flow field in two visual dimensions, where the strength of their nearest-neighbor connections determines the amount of motion integration. Whereas in an earlier implementation (A.A. Stocker and R.J. Douglas, 1999) the connection strength was held constant across the entire image, it is now dynamically and locally controlled by the second network layer (the motion discontinuities network), which is recurrently connected to the optical flow network. The connection strengths in the optical flow network are modulated such that visual motion integration is facilitated only within image areas that are likely to represent common motion sources. Results of an experimental aVLSI chip illustrate the potential of the approach and its functionality under real-world conditions.
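
A rough software analogue of the two-layer idea (not the analog circuitry) is sketched below: per-pixel flow estimates are relaxed toward their neighbors, and a discontinuity variable cuts the nearest-neighbor coupling wherever adjacent estimates disagree by more than a threshold. The 1-D formulation, threshold, and coupling strength are illustrative assumptions.

```python
import numpy as np

# Software analogue of coupling gated by a motion-discontinuity layer (1-D, illustrative).
def piecewise_smooth_flow(flow_local, n_iter=200, lam=1.0, thresh=0.5):
    """flow_local: noisy per-pixel 1-D flow estimates (e.g. from local gradients)."""
    v = flow_local.copy()
    for _ in range(n_iter):
        diff = np.abs(np.diff(v))
        coupling = lam * (diff < thresh)          # discontinuity layer: cut links at large jumps
        left = np.r_[v[0], v[:-1]]                # neighbor values (edges replicated)
        right = np.r_[v[1:], v[-1]]
        w_l = np.r_[0.0, coupling]
        w_r = np.r_[coupling, 0.0]
        # relax each unit toward its local data term and its (gated) neighbors
        v = (flow_local + w_l * left + w_r * right) / (1.0 + w_l + w_r)
    return v

noisy = np.r_[np.full(50, 1.0), np.full(50, -1.0)] + 0.2 * np.random.default_rng(2).normal(size=100)
print(piecewise_smooth_flow(noisy)[[25, 75]])     # smooth within regions, edge at index 50 preserved
```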


Journal of Vision | 2014

A new two-alternative forced choice method for the unbiased characterization of perceptual bias and discriminability

Matjaž Jogan; Alan A. Stocker

Perception is often biased by secondary stimulus attributes (e.g., stimulus noise, attention, or spatial context). A correct quantitative characterization of perceptual bias is essential for testing hypotheses about the underlying perceptual mechanisms and computations. We demonstrate that the standard two-alternative forced choice (2AFC) method can lead to incorrect estimates of perceptual bias. We present a new 2AFC method that solves this problem by asking subjects to judge the relative perceptual distances between the test and each of two reference stimuli. Naïve subjects can easily perform this task. We successfully validated the new method with a visual motion-discrimination experiment. We demonstrate that the method permits an efficient and accurate characterization of perceptual bias and simultaneously provides measures of discriminability for both the reference and test stimulus, all from a single stimulus condition. This makes it an attractive choice for the characterization of perceptual bias and discriminability in a wide variety of psychophysical experiments.
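
The trial structure can be illustrated with a toy observer simulation, assuming Gaussian perceptual noise and a simple nearest-reference decision rule; this is a sketch of the kind of judgment the method collects, not the estimation procedure developed in the paper.

```python
import numpy as np

# Toy simulation of the relative-distance 2AFC judgment (hypothetical observer model).
rng = np.random.default_rng(3)

def trial(test_percept_mean, ref1, ref2, sigma_test=1.0, sigma_ref=0.5):
    """Return 1 if reference 1 is judged perceptually closer to the test, else 2."""
    t = test_percept_mean + sigma_test * rng.normal()   # noisy percept of the (biased) test
    r1 = ref1 + sigma_ref * rng.normal()
    r2 = ref2 + sigma_ref * rng.normal()
    return 1 if abs(t - r1) < abs(t - r2) else 2

# A test whose percept is biased to 10.8 (true value 10), judged against one reference pair:
choices = [trial(10.8, 10.0, 12.0) for _ in range(2000)]
print("P(choose reference at 10):", choices.count(1) / len(choices))
```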


IEEE Transactions on Circuits and Systems I: Regular Papers | 2004

Analog VLSI focal-plane array with dynamic connections for the estimation of piecewise-smooth optical flow

Alan A. Stocker

An analog very large-scale integrated (aVLSI) sensor is presented that is capable of estimating optical flow while detecting and preserving motion discontinuities. The sensor's architecture is composed of two recurrently connected networks. The units in the first network (the optical-flow network) collectively estimate two-dimensional optical flow, where the strength of their nearest-neighbor coupling determines the degree of motion integration. While the coupling strengths in our previous implementations were globally set and adjusted by the operator, they are now dynamically and locally controlled by a second on-chip network (the motion-discontinuity network). The coupling strengths are set such that visual motion integration is inhibited across image locations that are likely to represent motion boundaries. Results of a prototype sensor illustrate the potential of the approach and its functionality under real-world conditions.


International Symposium on Circuits and Systems | 2004

Analog integrated 2-D optical flow sensor with programmable pixels

Alan A. Stocker; Rodney J. Douglas

We present a framework for real-time visual motion perception consisting of an analog VLSI optical flow sensor with reconfigurable pixels, connected in feedback with a controlling processor. The 2-D sensor array is composed of motion processing pixels that can be individually recruited to form dynamic ensembles that collectively compute visual motion. The flexible framework lends itself to the emulation of multi-layer recurrent network architectures for high-level processing of visual motion. In particular, attentional modulation can easily be incorporated in the visual motion processing. We show a simple example of visual tracking that demonstrates the potential of the framework.


International Symposium on Circuits and Systems | 2003

Compact integrated transconductance amplifier circuit for temporal differentiation

Alan A. Stocker

A compact integrated CMOS circuit for temporal differentiation is presented. It consists of a high-gain inverting amplifier, an active non-linear transconductance and a capacitor, and requires only four transistors in its minimal configuration. The circuit provides two rectified current outputs that are proportional to the temporal derivative of the input voltage signal. Besides its compact design, and unlike known integrated differentiator circuits, its output does not depend on the DC value of the input signal. Measured chip results show that the circuit operates over a large input frequency range for which it provides near-ideal temporal differentiation. The circuit is particularly suited for focal-plane implementations of gradient-based visual motion systems.
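
The intended input-output behavior can be summarized with an idealized numerical model (not the transistor-level circuit): the two outputs are rectified currents proportional to the positive and negative parts of the input's temporal derivative, independent of the input's DC level. The gain constant and input signal below are arbitrary assumptions.

```python
import numpy as np

# Idealized behavior of the temporal differentiator (numerical model, illustrative values).
t = np.linspace(0, 1e-2, 10000)                       # 10 ms
v_in = 0.1 * np.sin(2 * np.pi * 500 * t) + 1.2        # 500 Hz input riding on an arbitrary DC level
k = 1e-9                                              # assumed transduction gain (A*s/V)
dvdt = np.gradient(v_in, t)
i_pos = k * np.clip(dvdt, 0, None)                    # output 1: rectified rising-edge current
i_neg = k * np.clip(-dvdt, 0, None)                   # output 2: rectified falling-edge current
print(i_pos.max(), i_neg.max())                       # equal peaks; the DC offset has no effect
```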

Collaboration


Dive into Alan A. Stocker's collaborations.

Top Co-Authors

Eero P. Simoncelli (Howard Hughes Medical Institute)
Xue-Xin Wei (University of Pennsylvania)
Adam M. Gifford (University of Pennsylvania)
Yale E. Cohen (University of Pennsylvania)
Daniel D. Lee (University of Pennsylvania)
Long Luu (The Catholic University of America)
Zhuo Wang (University of Pennsylvania)
Matjaž Jogan (University of Pennsylvania)
Pedro A. Ortega (University of Pennsylvania)