
Publications


Featured research published by Johannes Burge.


Journal of Vision | 2008

The statistical determinants of adaptation rate in human reaching

Johannes Burge; Marc O. Ernst; Martin S. Banks

Rapid reaching to a target is generally accurate but also contains random and systematic error. Random errors result from noise in visual measurement, motor planning, and reach execution. Systematic error results from systematic changes in the mapping between the visual estimate of target location and the motor command necessary to reach the target (e.g., new spectacles, muscular fatigue). Humans maintain accurate reaching by recalibrating the visuomotor system, but no widely accepted computational model of the process exists. Given certain boundary conditions, a statistically optimal solution is a Kalman filter. We compared human to Kalman filter behavior to determine how humans take into account the statistical properties of errors and the reliability with which those errors can be measured. For most conditions, human and Kalman filter behavior was similar: Increasing measurement uncertainty caused similar decreases in recalibration rate; directionally asymmetric uncertainty caused different rates in different directions; more variation in systematic error increased recalibration rate. However, behavior differed in one respect: Inserting random error by perturbing feedback position caused slower adaptation in Kalman filters but had no effect in humans. This difference may be due to how biological systems remain responsive to changes in environmental statistics. We discuss the implications of this work.
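
The Kalman-filter benchmark described in this abstract can be illustrated with a minimal one-dimensional sketch: the systematic error is tracked as a slowly drifting state, and the Kalman gain plays the role of the recalibration rate. The noise values and variable names below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def kalman_recalibration(feedback, q=0.05, r=0.5):
    """Track a drifting visuomotor mapping error with a 1-D Kalman filter.

    q: variance of the random walk in the systematic error (process noise).
    r: variance of the visual measurement of reach error (measurement noise).
    Both values are illustrative, not parameters from the paper.
    """
    x_hat, p = 0.0, 1.0            # initial estimate and its variance
    estimates = []
    for z in feedback:             # z = measured reach error on each trial
        p += q                     # predict: uncertainty grows with drift
        k = p / (p + r)            # Kalman gain = recalibration rate
        x_hat += k * (z - x_hat)   # correct toward the measured error
        p *= 1.0 - k
        estimates.append(x_hat)
    return np.array(estimates)

# Larger r (noisier feedback) lowers the gain k (slower recalibration);
# larger q (more variable systematic error) raises it, as in the paper.
rng = np.random.default_rng(0)
true_shift = 2.0                                  # e.g., new spectacles
errors = true_shift + rng.normal(0.0, 0.7, 50)    # noisy error feedback
print(kalman_recalibration(errors)[-1])           # converges near 2.0
```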


Journal of Vision | 2005

The combination of vision and touch depends on spatial proximity

Sergei Gepshtein; Johannes Burge; Marc O. Ernst; Martin S. Banks

The nervous system often combines visual and haptic information about object properties such that the combined estimate is more precise than with vision or haptics alone. We examined how the system determines when to combine the signals. Presumably, signals should not be combined when they come from different objects. The likelihood that signals come from different objects is highly correlated with the spatial separation between the signals, so we asked how the spatial separation between visual and haptic signals affects their combination. To do this, we first created conditions for each observer in which the effect of combination (the increase in discrimination precision with two modalities relative to performance with one modality) should be maximal. Then under these conditions, we presented visual and haptic stimuli separated by different spatial distances and compared human performance with predictions of a model that combined signals optimally. We found that discrimination precision was essentially optimal when the signals came from the same location, and that discrimination precision was poorer when the signals came from different locations. Thus, the mechanism of visual-haptic combination is specialized for signals that coincide in space.
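
The optimal-combination model referenced here is the standard minimum-variance (reliability-weighted) rule, in which each cue is weighted by its inverse variance. A minimal sketch, with illustrative numbers:

```python
def combine(s_v, var_v, s_h, var_h):
    """Minimum-variance (reliability-weighted) combination of two cues."""
    w_v = var_h / (var_v + var_h)                # weight ~ inverse variance
    s = w_v * s_v + (1.0 - w_v) * s_h            # combined estimate
    var = (var_v * var_h) / (var_v + var_h)      # never worse than either cue
    return s, var

# With equal reliabilities the combined variance is halved: the maximal
# benefit the experimental conditions were designed to produce.
print(combine(10.0, 1.0, 12.0, 1.0))             # -> (11.0, 0.5)
```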


Proceedings of the National Academy of Sciences of the United States of America | 2011

Optimal defocus estimation in individual natural images

Johannes Burge; Wilson S. Geisler

Defocus blur is nearly always present in natural images: Objects at only one distance can be perfectly focused. Images of objects at other distances are blurred by an amount depending on pupil diameter and lens properties. Despite the fact that defocus is of great behavioral, perceptual, and biological importance, it is unknown how biological systems estimate defocus. Given a set of natural scenes and the properties of the vision system, we show from first principles how to optimally estimate defocus at each location in any individual image. We show for the human visual system that high-precision, unbiased estimates are obtainable under natural viewing conditions for patches with detectable contrast. The high quality of the estimates is surprising given the heterogeneity of natural images. Additionally, we quantify the degree to which the sign ambiguity often attributed to defocus is resolved by monochromatic aberrations (other than defocus) and chromatic aberrations; chromatic aberrations fully resolve the sign ambiguity. Finally, we show that simple spatial and spatio-chromatic receptive fields extract the information optimally. The approach can be tailored to any environment–vision system pairing: natural or man-made, animal or machine. Thus, it provides a principled general framework for analyzing the psychophysics and neurophysiology of defocus estimation in species across the animal kingdom and for developing optimal image-based defocus and depth estimation algorithms for computational vision systems.
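
The paper derives optimal defocus estimators from natural-scene statistics; as background, the magnitude of defocus blur itself follows the standard thin-lens relation, where defocus in diopters is the difference of reciprocal distances and the blur-circle angle scales with pupil diameter. A sketch of that relation, with illustrative viewing parameters rather than values from the paper:

```python
import math

def defocus_blur_arcmin(focus_dist_m, obj_dist_m, pupil_diam_mm):
    """Angular blur-circle diameter for a defocused point (thin-lens
    approximation): defocus in diopters is the difference of reciprocal
    distances; blur angle (rad) ~ pupil diameter (m) * defocus (D).
    """
    defocus_d = abs(1.0 / focus_dist_m - 1.0 / obj_dist_m)   # diopters
    blur_rad = (pupil_diam_mm / 1000.0) * defocus_d          # radians
    return math.degrees(blur_rad) * 60.0                     # arcmin

# Eye focused at 0.5 m, object at 1.0 m, 4 mm pupil (illustrative values):
print(f"{defocus_blur_arcmin(0.5, 1.0, 4.0):.1f} arcmin of blur")  # ~13.8
```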


The Journal of Neuroscience | 2010

Natural-Scene Statistics Predict How the Figure–Ground Cue of Convexity Affects Human Depth Perception

Johannes Burge; Charless C. Fowlkes; Martin S. Banks

The shape of the contour separating two regions strongly influences judgments of which region is “figure” and which is “ground.” Convexity and other figure–ground cues are generally assumed to indicate only which region is nearer, but nothing about how much the regions are separated in depth. To determine the depth information conveyed by convexity, we examined natural scenes and found that depth steps across surfaces with convex silhouettes are likely to be larger than steps across surfaces with concave silhouettes. In a psychophysical experiment, we found that humans exploit this correlation. For a given binocular disparity, observers perceived more depth when the near surface's silhouette was convex rather than concave. We estimated the depth distributions observers used in making those judgments: they were similar to the natural-scene distributions. Our findings show that convexity should be reclassified as a metric depth cue. They also suggest that the dichotomy between metric and nonmetric depth cues is false and that the depth information provided by many cues should be evaluated with respect to natural-scene statistics. Finally, the findings provide an explanation for why figure–ground cues modulate the responses of disparity-sensitive cells in visual cortex.
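
The logic of the experiment can be caricatured with a Bayesian toy model: perceived depth is a posterior combining a disparity likelihood with a prior over depth steps conditioned on silhouette convexity. The Gaussian form and the numbers below are stand-in assumptions, not the natural-scene distributions measured in the paper:

```python
def perceived_depth(depth_from_disparity, var_disparity, prior_mean, var_prior):
    """Posterior-mean depth: a Gaussian disparity likelihood combined with a
    Gaussian prior over depth steps conditioned on silhouette convexity.
    Gaussians are toy stand-ins for the measured natural-scene distributions.
    """
    w = var_prior / (var_prior + var_disparity)  # weight on the disparity cue
    return w * depth_from_disparity + (1.0 - w) * prior_mean

# Hypothetical priors: convex silhouettes predict larger depth steps, so the
# same disparity yields more perceived depth behind a convex silhouette.
print(perceived_depth(10.0, 4.0, 14.0, 9.0))   # convex prior  -> ~11.2
print(perceived_depth(10.0, 4.0,  6.0, 9.0))   # concave prior -> ~8.8
```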


Journal of Vision | 2014

Optimal disparity estimation in natural stereo images

Johannes Burge; Wilson S. Geisler

A great challenge of systems neuroscience is to understand the computations that underlie perceptual constancies, the ability to represent behaviorally relevant stimulus properties as constant even when irrelevant stimulus properties vary. As signals proceed through the visual system, neural states become more selective for properties of the environment and more invariant to irrelevant features of the retinal images. Here, we describe a method for determining the computations that perform these transformations optimally, and apply it to the specific computational task of estimating a powerful depth cue: binocular disparity. We simultaneously determine the optimal receptive field population for encoding natural stereo images of locally planar surfaces and the optimal nonlinear units for decoding the population responses into estimates of disparity. The optimal processing predicts well-established properties of neurons in cortex. Estimation performance parallels important aspects of human performance. Thus, by analyzing the photoreceptor responses to natural images, we provide a normative account of the neurophysiology and psychophysics of absolute disparity processing. Critically, the optimal processing rules are not arbitrarily chosen to match the properties of neurophysiological processing, nor are they fit to match behavioral performance. Rather, they are dictated by the task-relevant statistical properties of complex natural stimuli. Our approach reveals how selective, invariant tuning, especially for properties not trivially available in the retinal images, could be implemented in neural systems to maximize performance in particular tasks.
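
The paper learns optimal receptive fields and nonlinear decoders directly from natural stereo images; for intuition only, the sketch below shows the classic local cross-correlation baseline for disparity estimation, which is not the paper's method:

```python
import numpy as np

def estimate_disparity(left, right, max_disp=8):
    """Estimate horizontal disparity between two 1-D image rows as the shift
    maximizing the normalized cross-correlation. A classic baseline shown for
    intuition only; it is not the paper's learned encoder-decoder approach.
    """
    best_d, best_r = 0, -np.inf
    for d in range(-max_disp, max_disp + 1):
        r = np.corrcoef(left, np.roll(right, d))[0, 1]
        if r > best_r:
            best_d, best_r = d, r
    return best_d

rng = np.random.default_rng(1)
row = rng.normal(size=128)
print(estimate_disparity(row, np.roll(row, -5)))   # recovers the 5-pixel shift
```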


Journal of Vision | 2005

Ordinal configural cues combine with metric disparity in depth perception

Johannes Burge; Mary A. Peterson; Stephen E. Palmer

Prior research on the combination of depth cues generally assumes that different cues must be in the same units for meaningful combination to occur. We investigated whether the geometrically ordinal cues of familiarity and convexity influence depth perception when unambiguous metric information is provided by binocular disparity. We used bipartite, random dot stereograms with a central luminance edge shaped like a face in profile. Disparity specified that the edge and dots on one side were closer than the dots on the other side. Configural cues suggested that the familiar, face-shaped region was closer than the unfamiliar side. Configural cues caused an increase in perceived depth for a given disparity signal when they were consistent with disparity and a decrease in perceived depth when they were inconsistent. Thus, geometrically ordinal configural cues can quantitatively influence a metric depth cue. Implications for the combination of configural and depth cues are discussed.


The Journal of Neuroscience | 2010

Visual–Haptic Adaptation Is Determined by Relative Reliability

Johannes Burge; Ahna R. Girshick; Martin S. Banks

Accurate calibration of sensory estimators is critical for maintaining accurate estimates of the environment. Classically, it was assumed that sensory calibration occurs by one sense changing to become consistent with vision; this is visual dominance. Recently, it has been proposed that changes in estimators occur according to their relative reliabilities; this is the reliability-based model. We show that if cue combination occurs according to relative reliability, then reliability-based calibration assures minimum-variance sensory estimates over time. Recent studies are qualitatively consistent with the reliability-based model, but none have shown that the predictions are quantitatively accurate. We conducted an experiment in which the model could be assessed quantitatively. Subjects indicated whether visual, haptic, and visual–haptic planar surfaces appeared slanted positively or negatively from frontoparallel. In preadaptation, we determined the visual and haptic slants of perceived frontoparallel, and measured visual and haptic reliabilities. We varied visual reliability by adjusting the size of the viewable stimulus. Haptic reliability was fixed. During adaptation, subjects were exposed to visual–haptic surfaces with a discrepancy between the visual and haptic slants. After adaptation, we remeasured the visual and haptic slants of perceived frontoparallel. When vision was more reliable, haptics adapted to match vision. When vision was less reliable, vision adapted to match haptics. Most importantly, the ratio of visual and haptic adaptation was quantitatively predicted by relative reliability. The amount of adaptation of one sensory estimator relative to another depends strongly on the relative reliabilities of the two estimators.
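
The core quantitative prediction of the reliability-based model is that each estimator shifts toward the other in proportion to the other estimator's relative reliability, so the less reliable sense adapts more. A minimal sketch, with illustrative variances:

```python
def adaptation_split(discrepancy, var_v, var_h):
    """Reliability-based calibration: each estimator shifts toward the other
    in proportion to the OTHER estimator's relative reliability, so the less
    reliable sense adapts more.
    """
    r_v, r_h = 1.0 / var_v, 1.0 / var_h        # reliabilities = 1/variance
    shift_v = discrepancy * r_h / (r_v + r_h)  # how far vision adapts
    shift_h = discrepancy * r_v / (r_v + r_h)  # how far haptics adapts
    return shift_v, shift_h

# Illustrative variances: with reliable vision (small var_v), haptics does
# most of the adapting, mirroring the visual-dominance special case.
print(adaptation_split(6.0, var_v=1.0, var_h=4.0))   # -> (1.2, 4.8)
```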


Journal of Neurophysiology | 2013

Binocular integration and disparity selectivity in mouse primary visual cortex

Benjamin Scholl; Johannes Burge; Nicholas J. Priebe

Signals from the two eyes are first integrated in primary visual cortex (V1). In many mammals, this binocular integration is an important first step in the development of stereopsis, the perception of depth from disparity. Neurons in the binocular zone of mouse V1 receive inputs from both eyes, but it is unclear how that binocular information is integrated and whether this integration has a function similar to that found in other mammals. Using extracellular recordings, we demonstrate that mouse V1 neurons are tuned for binocular disparities, or spatial differences, between the inputs from each eye, thus extracting signals potentially useful for estimating depth. The disparities encoded by mouse V1 are significantly larger than those encoded by cat and primate. Interestingly, these larger disparities correspond to distances that are likely to be ecologically relevant in natural viewing, given the stereo-geometry of the mouse visual system. Across mammalian species, it appears that binocular integration is a common cortical computation used to extract information relevant for estimating depth. As such, it is a prime example of how the integration of multiple sensory signals is used to generate accurate estimates of properties in our environment.
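
The stereo-geometry point can be illustrated with the small-angle disparity relation, in which disparity scales with interocular separation and with the difference of reciprocal distances. The interocular and viewing distances below are rough illustrative assumptions, not measurements from the paper:

```python
import math

def disparity_deg(interocular_m, fixation_m, target_m):
    """Small-angle stereo geometry: binocular disparity (deg) of a target,
    disparity ~ I * (1/z_target - 1/z_fixation), with I the interocular
    separation in meters.
    """
    d_rad = interocular_m * (1.0 / target_m - 1.0 / fixation_m)
    return math.degrees(d_rad)

# Rough illustrative values: a ~1 cm mouse interocular separation makes
# nearby targets yield several degrees of disparity, far larger than
# typical primate viewing produces.
print(f"mouse:   {disparity_deg(0.010, 0.20, 0.05):.1f} deg")   # ~8.6
print(f"primate: {disparity_deg(0.065, 0.50, 0.45):.2f} deg")   # ~0.83
```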


Journal of Vision | 2011

The vertical horopter is not adaptable, but it may be adaptive

Emily A. Cooper; Johannes Burge; Martin S. Banks

Depth estimates from disparity are most precise when the visual input stimulates corresponding retinal points or points close to them. Corresponding points have uncrossed disparities in the upper visual field and crossed disparities in the lower visual field. Due to these disparities, the vertical part of the horopter (the positions in space that stimulate corresponding points) is pitched top-back. Many have suggested that this pitch is advantageous for discriminating depth in the natural environment, particularly relative to the ground. We asked whether the vertical horopter is adaptive (suited for perception of the ground) and adaptable (changeable by experience). Experiment 1 measured the disparities between corresponding points in 28 observers. We confirmed that the horopter is pitched. However, it is also typically convex, making it ill-suited for depth perception relative to the ground. Experiment 2 tracked locations of corresponding points while observers wore lenses for 7 days that distorted binocular disparities. We observed no change in the horopter, suggesting that it is not adaptable. We also showed that the horopter is not adaptive for long viewing distances because at such distances uncrossed disparities between corresponding points cannot be stimulated. The vertical horopter seems to be adaptive for perceiving convex, slanted surfaces at short distances.


Nature Communications | 2015

Optimal speed estimation in natural image movies predicts human performance

Johannes Burge; Wilson S. Geisler

Accurate perception of motion depends critically on accurate estimation of retinal motion speed. Here we first analyse natural image movies to determine the optimal space-time receptive fields (RFs) for encoding local motion speed in a particular direction, given the constraints of the early visual system. Next, from the RF responses to natural stimuli, we determine the neural computations that are optimal for combining and decoding the responses into estimates of speed. The computations show how selective, invariant speed-tuned units might be constructed by the nervous system. Then, in a psychophysical experiment using matched stimuli, we show that human performance is nearly optimal. Indeed, a single efficiency parameter accurately predicts the detailed shapes of a large set of human psychometric functions. We conclude that many properties of speed-selective neurons and human speed discrimination performance are predicted by the optimal computations, and that natural stimulus variation affects optimal and human observers almost identically.
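
The single-efficiency-parameter account can be sketched as a 2AFC psychometric function in which human sensitivity equals ideal-observer sensitivity scaled by the square root of an efficiency parameter. The decision rule and numbers below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np
from scipy.stats import norm

def psychometric(speed_cmp, speed_std, sigma_ideal, efficiency):
    """2AFC probability of judging the comparison faster than the standard,
    with human sensitivity tied to the ideal observer by one efficiency
    parameter: sigma_human = sigma_ideal / sqrt(efficiency).
    """
    sigma_human = sigma_ideal / np.sqrt(efficiency)
    return norm.cdf((speed_cmp - speed_std) / (np.sqrt(2.0) * sigma_human))

# Illustrative values: efficiency < 1 flattens the human curve relative to
# the ideal observer's without changing its shape.
speeds = np.linspace(3.0, 5.0, 5)            # comparison speeds (deg/s)
print(psychometric(speeds, 4.0, 0.2, 0.4))
```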

Collaboration


Dive into Johannes Burge's collaborations.

Top Co-Authors

Wilson S. Geisler, University of Texas at Austin
Arvind V Iyer, University of Pennsylvania
Stephen Sebastian, University of Texas at Austin
Seha Kim, University of Pennsylvania
Benjamin Chin, University of Pennsylvania
Jacob L. Yates, University of Texas at Austin
Kathryn Bonnen, University of Texas at Austin