Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Brian C. McCann is active.

Publications


Featured research published by Brian C. McCann.


Journal of Vision | 2016

Estimating 3D tilt from local image cues in natural scenes

Johannes Burge; Brian C. McCann; Wilson S. Geisler

Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations.
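The assumption-free "Bayes optimal" estimate described above can be approximated nonparametrically as a conditional mean over binned cue values. A minimal one-cue sketch of the idea (the study uses three cues and circular tilt; the function and variable names here are illustrative, not the authors' code):

```python
import numpy as np

def conditional_mean_estimate(cue, truth, n_bins=16):
    """Approximate the MMSE (posterior-mean) estimator directly from data:
    for each quantile bin of cue values, average the co-registered
    ground-truth values. No parametric form is assumed for the joint
    distribution of cue and ground truth."""
    edges = np.quantile(cue, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(cue, edges[1:-1]), 0, n_bins - 1)
    return np.array([truth[idx == b].mean() for b in range(n_bins)])
```

With enough samples per bin, the binned means converge to the Bayes-optimal estimate of the ground-truth variable given the cue value.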


Journal of Vision | 2011

Decoding Natural Signals from the Peripheral Retina

Brian C. McCann; Mary Hayhoe; Wilson S. Geisler

Ganglion cells in the peripheral retina have lower density and larger receptive fields than in the fovea. Consequently, the visual signals relayed from the periphery have substantially lower resolution than those relayed by the fovea. The information contained in peripheral ganglion cell responses can be quantified by how well they predict the foveal ganglion cell responses to the same stimulus. We constructed a model of human ganglion cell outputs by combining existing measurements of the optical transfer function with the receptive field properties and sampling densities of midget (P) ganglion cells. We then simulated a spatial population of P-cell responses to image patches sampled from a large collection of luminance-calibrated natural images. Finally, we characterized the population response to each image patch, at each eccentricity, with two parameters of the spatial power spectrum of the responses: the average response contrast (standard deviation of the response patch) and the falloff in power with spatial frequency. The primary finding is that the optimal estimate of response contrast in the fovea is dependent on both the response contrast and the steepness of the falloff observed in the periphery. Humans could exploit this information when decoding peripheral signals to estimate contrasts, estimate blur levels, or select the most informative locations for saccadic eye movements.
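The two summary parameters described above can be computed from a square model-response patch roughly as follows. This is an illustrative sketch under our own radial-averaging assumptions, not the authors' implementation:

```python
import numpy as np

def patch_statistics(patch):
    """Summarize a square response patch with two parameters: response
    contrast (standard deviation of the patch) and the falloff of power
    with spatial frequency (log-log slope of the radially averaged
    power spectrum)."""
    contrast = patch.std()
    n = patch.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(spectrum) ** 2
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), power.ravel())
    radial = sums / np.maximum(counts, 1)       # radially averaged spectrum
    freqs = np.arange(1, n // 2)                # skip the DC bin
    slope = np.polyfit(np.log(freqs), np.log(radial[1:n // 2]), 1)[0]
    return contrast, slope
```

A steeper (more negative) slope indicates that the response patch carries proportionally less energy at high spatial frequencies, as expected for peripheral responses with larger receptive fields.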


Journal of Vision | 2018

Contributions of monocular and binocular cues to distance discrimination in natural scenes.

Brian C. McCann; Mary Hayhoe; Wilson S. Geisler

Little is known about distance discrimination in real scenes, especially at long distances. This is not surprising given the logistical difficulties of making such measurements. To circumvent these difficulties, we collected 81 stereo images of outdoor scenes, together with precisely registered range images that provided the ground-truth distance at each pixel location. We then presented the stereo images in the correct viewing geometry and measured the ability of human subjects to discriminate the distance between locations in the scene, as a function of absolute distance (3 m to 30 m) and the angular spacing between the locations being compared (2°, 5°, and 10°). Measurements were made for binocular and monocular viewing. Thresholds for binocular viewing were quite small at all distances (Weber fractions less than 1% at 2° spacing and less than 4% at 10° spacing). Thresholds for monocular viewing were higher than those for binocular viewing out to distances of 15–20 m, beyond which they were the same. Using standard cue-combination analysis, we also estimated what the thresholds would be based on binocular-stereo cues alone. With two exceptions, we show that the entire pattern of results is consistent with what one would expect from classical studies of binocular disparity thresholds and separation/size discrimination thresholds measured with simple laboratory stimuli. The first exception is some deviation from the expected pattern at close distances (especially for monocular viewing). The second exception is that thresholds in natural scenes are lower, presumably because of the rich figural cues contained in natural images.
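The "standard cue-combination analysis" mentioned above treats the monocular and binocular-stereo cues as independent Gaussian estimators, so their reliabilities (inverse variances) add. The stereo-alone threshold implied by measured binocular and monocular thresholds can then be recovered; a minimal sketch (the function name is ours):

```python
import math

def implied_stereo_threshold(t_binocular, t_monocular):
    """Under independent Gaussian cues, reliabilities add:
        1 / t_binocular**2 = 1 / t_monocular**2 + 1 / t_stereo**2
    so the stereo-alone threshold follows by subtraction."""
    inv_var = 1.0 / t_binocular**2 - 1.0 / t_monocular**2
    if inv_var <= 0:
        raise ValueError("binocular threshold must be below monocular threshold")
    return 1.0 / math.sqrt(inv_var)
```

For example, a binocular threshold of 1 (in arbitrary units) with a monocular threshold of 2 implies a stereo-alone threshold of about 1.15.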


IEEE Virtual Reality Conference | 2017

An immersive approach to visualizing perceptual disturbances

Grace M. Rodriguez; Marvis Cruz; Andrew Solis; Patricia Ordóñez; Brian C. McCann

Through their experience with the ICERT REU program at the Texas Advanced Computing Center (TACC), two undergraduate students from the University of Puerto Rico and the University of Florida have initiated a collaboration between their home institutions and TACC exploring the possibility of using immersion to simulate perceptual disturbances. Perceptual disturbances are subjective in nature and difficult to communicate verbally; caretakers and those closest to sufferers often have difficulty understanding the nature of their experiences. Immersion provides an exciting opportunity to directly communicate percepts to clinicians and loved ones. Here, we present a prototype environment meant to simulate some of the perceptual disturbances associated with seizures and epilepsy. Following further validation of our approach, we hope to promote awareness and empathy for these often jarring phenomena.


Journal of Vision | 2015

Naturalistic depth perception

Brian C. McCann; Mary Hayhoe; Wilson S. Geisler

Making inferences about the 3-dimensional spatial structure of natural scenes is a critical visual function. While spatial discrimination both in depth and on the image plane has been well characterized for simple stimuli, little is known about our ability to discriminate depth in natural scenes, particularly at far distances. To begin filling in this gap we: (i) developed a database of 80 stereoscopic images paired with the corresponding measured distance information, (ii) used these scenes as psychophysical stimuli and measured near-far discrimination acuity in 4 observers as a function of distance and the visual angle separating the targets, (iii) made additional measurements under patched-eye (monocular) viewing conditions to evaluate the importance of binocular vision in depth discrimination as a function of viewing geometries. We find that binocular thresholds are roughly a constant Weber fraction of the distance for distances ranging from 4 to 28 meters. Further, measured thresholds were around 1% for small separations, and increased to 4% for stimuli separated by 10 deg. Thus, the ability to discriminate depth in natural scenes is very good out to considerable distances. To investigate the basis of this discrimination ability, monocular thresholds were measured. We found that monocular thresholds were elevated for distances less than 15 meters, but were comparable to binocular thresholds for greater distances. Accurate depth perception depends on combining (fusing) multiple sources of sensory information. Thus binocular thresholds probably involve fusing separate monocular and disparity-derived estimates. Under the assumption of Gaussian distributed independent estimates, Bayes rule provides a simple reliability-weighted summation model of cue combination. Using disparity threshold measurements by Blakemore (1970), and the current monocular thresholds, parameter-free predictions were generated for the current binocular thresholds. These predictions largely matched the data, suggesting that the disparity and monocular cues are separable and combined optimally in natural scenes. Meeting abstract presented at VSS 2015.
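The reliability-weighted summation model in the abstract gives a parameter-free forward prediction for binocular thresholds from the component thresholds. A sketch under the stated Gaussian-independence assumption (in the study, the disparity thresholds come from Blakemore, 1970; the function name here is ours):

```python
import math

def predicted_binocular_threshold(t_monocular, t_disparity):
    """Bayes-optimal combination of independent Gaussian estimates:
    reliabilities (inverse variances) add, so the combined threshold is
    always below either component threshold."""
    return 1.0 / math.sqrt(1.0 / t_monocular**2 + 1.0 / t_disparity**2)
```

For example, component thresholds of 3 and 4 (arbitrary units) predict a combined threshold of exactly 2.4, below both components, which is the signature of optimal fusion the study tests against measured binocular thresholds.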


The Journal of Neuroscience | 2008

Learning Stochastic Reward Distributions in a Speeded Pointing Task

Anna Seydell; Brian C. McCann; Julia Trommershäuser; David C. Knill


Journal of Vision | 2010

Learning to behave optimally in a probabilistic environment

Anna Seydell; Brian C. McCann; Julia Trommershäuser; David C. Knill


IEEE Virtual Reality Conference | 2017

Immerj: A novel system for democratizing immersive storytelling

Sarang S. Bhadsavie; Xie Hunt Shannon Yap; Justin Segler; Rahul Jaisimha; Nishant Raman; Yengzhou Feng; Sierra J. Biggs; Micah Peoples; Robert Brenner; Brian C. McCann


Journal of Vision | 2014

3D surface tilt estimation in natural scenes from image cue gradients

Johannes Burge; Brian C. McCann; Wilson S. Geisler


Journal of Vision | 2013

Naturalistic Depth Perception: Spatial Vision Out The Window

Brian C. McCann; Mary Hayhoe; Wilson S. Geisler

Collaboration


Brian C. McCann's collaborations.

Top Co-Authors

Wilson S. Geisler
University of Texas at Austin

Mary Hayhoe
University of Texas at Austin

Johannes Burge
University of Pennsylvania

Andrew Solis
University of Texas at Austin

Justin Segler
University of Texas at Austin

Micah Peoples
University of Texas at Austin