Publications


Featured research published by Ahna R. Girshick.


Journal of Vision | 2008

Vergence-accommodation conflicts hinder visual performance and cause visual fatigue.

David Hoffman; Ahna R. Girshick; Kurt Akeley; Martin S. Banks

Three-dimensional (3D) displays have become important for many applications including vision research, operation of remote devices, medical imaging, surgical training, scientific visualization, virtual prototyping, and more. In many of these applications, it is important for the graphic image to create a faithful impression of the 3D structure of the portrayed object or scene. Unfortunately, 3D displays often yield distortions in perceived 3D structure compared with the percepts of the real scenes the displays depict. A likely cause of such distortions is the fact that computer displays present images on one surface. Thus, focus cues (accommodation and blur in the retinal image) specify the depth of the display rather than the depths in the depicted scene. Additionally, the uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and fatigue for the viewer. We have developed a novel 3D display that presents focus cues that are correct or nearly correct for the depicted scene. We used this display to evaluate the influence of focus cues on perceptual distortions, fusion failures, and fatigue. We show that when focus cues are correct or nearly correct, (1) the time required to identify a stereoscopic stimulus is reduced, (2) stereoacuity in a time-limited task is increased, (3) distortions in perceived depth are reduced, and (4) viewer fatigue and discomfort are reduced. We discuss the implications of this work for vision research and the design and use of displays.


International Conference on Computer Graphics and Interactive Techniques | 2004

A stereo display prototype with multiple focal distances

Kurt Akeley; Simon J. Watt; Ahna R. Girshick; Martin S. Banks

Typical stereo displays provide incorrect focus cues because the light comes from a single surface. We describe a prototype stereo display comprising two independent fixed-viewpoint volumetric displays. Like autostereoscopic volumetric displays, fixed-viewpoint volumetric displays generate near-correct focus cues without tracking eye position, because light comes from sources at the correct focal distances. (In our prototype, from three image planes at different physical distances.) Unlike autostereoscopic volumetric displays, however, fixed-viewpoint volumetric displays retain the qualities of modern projective graphics: view-dependent lighting effects such as occlusion, specularity, and reflection are correctly depicted; modern graphics processor and 2-D display technology can be utilized; and realistic fields of view and depths of field can be implemented. While not a practical solution for general-purpose viewing, our prototype display is a proof of concept and a platform for ongoing vision research. The design, implementation, and verification of this stereo display are described, including a novel technique of filtering along visual lines using 1-D texture mapping.
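
The intensity-assignment rule described above can be sketched in a few lines. This is a hedged illustration of the general multi-plane depth-filtering idea, not the prototype's actual implementation; the plane distances and test point below are hypothetical.

import numpy as np

def plane_weights(point_m, planes_m):
    """Split a point's intensity across fixed image planes.

    Weights follow a linear blend in diopters: all weight goes to the
    two planes bracketing the point, in proportion to dioptric
    proximity, so the summed image approximates the correct focal
    stimulus along that visual line.
    """
    d = 1.0 / point_m                                     # point distance (diopters)
    D = np.sort(1.0 / np.asarray(planes_m, dtype=float))  # plane distances, ascending diopters
    w = np.zeros_like(D)
    if d <= D[0]:                # farther than the farthest plane
        w[0] = 1.0
    elif d >= D[-1]:             # nearer than the nearest plane
        w[-1] = 1.0
    else:                        # blend between the two bracketing planes
        i = np.searchsorted(D, d) - 1
        t = (d - D[i]) / (D[i + 1] - D[i])
        w[i], w[i + 1] = 1.0 - t, t
    return D, w

# Hypothetical planes at 0.3, 0.5, and 1.0 m; a point at 0.4 m (2.5 D)
# lands between the 2.0 D and 3.33 D planes, weighted 0.625 / 0.375.
D, w = plane_weights(0.4, [0.3, 0.5, 1.0])
print(np.round(D, 2), np.round(w, 3))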


Nature Neuroscience | 2005

Why pictures look right when viewed from the wrong place

Dhanraj Vishwanath; Ahna R. Girshick; Martin S. Banks

A picture viewed from its center of projection generates the same retinal image as the original scene, so the viewer perceives the scene correctly. When a picture is viewed from other locations, the retinal image specifies a different scene, but we normally do not notice the changes. We investigated the mechanism underlying this perceptual invariance by studying the perceived shapes of pictured objects viewed from various locations. We also manipulated information about the orientation of the picture surface. When binocular information for surface orientation was available, perceived shape was nearly invariant across a wide range of viewing angles. By varying the projection angle and the position of a stimulus in the picture, we found that invariance is achieved through an estimate of local surface orientation, not from geometric information in the picture. We present a model that explains invariance and other phenomena (such as perceived distortions in wide-angle pictures).


Journal of Vision | 2009

Probabilistic combination of slant information: weighted averaging and robustness as optimal percepts.

Ahna R. Girshick; Martin S. Banks

Depth perception involves combining multiple, possibly conflicting, sensory measurements to estimate the 3D structure of the viewed scene. Previous work has shown that the perceptual system combines measurements using a statistically optimal weighted average. However, the system should only combine measurements when they come from the same source. We asked whether the brain avoids combining measurements when they differ from one another: that is, whether the system is robust to outliers. To do this, we investigated how two slant cues, binocular disparity and texture gradients, influence perceived slant as a function of the size of the conflict between the cues. When the conflict was small, we observed weighted averaging. When the conflict was large, we observed robust behavior: perceived slant was dictated solely by one cue, the other being rejected. Interestingly, the rejected cue was either disparity or texture, and was not necessarily the more variable cue. We modeled the data in a probabilistic framework, and showed that weighted averaging and robustness are predicted if the underlying likelihoods have heavier tails than Gaussians. We also asked whether observers had conscious access to the single-cue estimates when they exhibited robustness and found they did not; that is, the cues were completely fused despite the robust percepts.
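
The contrast between weighted averaging and robustness falls out of a toy posterior computation. The sketch below is an illustrative stand-in for the paper's model, not its fitted version: the noise levels are invented, and a Student-t serves as the heavier-than-Gaussian likelihood.

import numpy as np
from scipy import stats

def map_slant(s_disp, s_tex, sigma_d, sigma_t, heavy_tails=False, nu=3.0):
    """MAP slant (deg) from disparity and texture cues, found on a grid.

    Gaussian likelihoods always yield the reliability-weighted average;
    heavy-tailed (Student-t) likelihoods reproduce robustness: with a
    large conflict the posterior peak sits near a single cue.
    """
    s = np.linspace(-90.0, 90.0, 18001)   # candidate slants (deg), flat prior
    if heavy_tails:
        logpost = (stats.t.logpdf(s, nu, loc=s_disp, scale=sigma_d)
                   + stats.t.logpdf(s, nu, loc=s_tex, scale=sigma_t))
    else:
        logpost = (stats.norm.logpdf(s, s_disp, sigma_d)
                   + stats.norm.logpdf(s, s_tex, sigma_t))
    return s[np.argmax(logpost)]

# Small conflict: both models average, with weights proportional to 1/sigma^2.
print(map_slant(10, 14, sigma_d=2.0, sigma_t=4.0, heavy_tails=True))   # ~10.8
# Large conflict: heavy tails go robust, Gaussians keep averaging.
print(map_slant(10, 50, sigma_d=2.0, sigma_t=4.0, heavy_tails=True))   # ~10.3, texture rejected
print(map_slant(10, 50, sigma_d=2.0, sigma_t=4.0, heavy_tails=False))  # 18.0, still averaged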


The Journal of Neuroscience | 2010

Visual–Haptic Adaptation Is Determined by Relative Reliability

Johannes Burge; Ahna R. Girshick; Martin S. Banks

Accurate calibration of sensory estimators is critical for maintaining accurate estimates of the environment. Classically, it was assumed that sensory calibration occurs by one sense changing to become consistent with vision; this is visual dominance. Recently, it has been proposed that changes in estimators occur according to their relative reliabilities; this is the reliability-based model. We show that if cue combination occurs according to relative reliability, then reliability-based calibration assures minimum-variance sensory estimates over time. Recent studies are qualitatively consistent with the reliability-based model, but none have shown that the predictions are quantitatively accurate. We conducted an experiment in which the model could be assessed quantitatively. Subjects indicated whether visual, haptic, and visual–haptic planar surfaces appeared slanted positively or negatively from frontoparallel. In preadaptation, we determined the visual and haptic slants of perceived frontoparallel, and measured visual and haptic reliabilities. We varied visual reliability by adjusting the size of the viewable stimulus. Haptic reliability was fixed. During adaptation, subjects were exposed to visual–haptic surfaces with a discrepancy between the visual and haptic slants. After adaptation, we remeasured the visual and haptic slants of perceived frontoparallel. When vision was more reliable, haptics adapted to match vision. When vision was less reliable, vision adapted to match haptics. Most importantly, the ratio of visual and haptic adaptation was quantitatively predicted by relative reliability. The amount of adaptation of one sensory estimator relative to another depends strongly on the relative reliabilities of the two estimators.
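
A back-of-the-envelope version of the reliability-based prediction, under two loud assumptions: reliability is taken as 1/sigma^2, and the total adaptation is assumed to sum to the full conflict (the paper tests the ratio, not the asymptote).

def predicted_adaptation(conflict_deg, sigma_v, sigma_h):
    """Split of adaptation between vision and haptics.

    Each estimator shifts in proportion to the *other* estimator's
    relative reliability, so the less reliable sense does most of
    the adapting.
    """
    r_v, r_h = 1.0 / sigma_v**2, 1.0 / sigma_h**2
    shift_v = conflict_deg * r_h / (r_v + r_h)   # vision moves toward haptics
    shift_h = conflict_deg * r_v / (r_v + r_h)   # haptics moves toward vision
    return shift_v, shift_h

# Reliable vision: haptics does most of the adapting.
print(predicted_adaptation(6.0, sigma_v=1.0, sigma_h=3.0))  # (0.6, 5.4)
# Unreliable vision: the pattern reverses.
print(predicted_adaptation(6.0, sigma_v=3.0, sigma_h=1.0))  # (5.4, 0.6)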


Human Vision and Electronic Imaging Conference | 2005

Achieving near-correct focus cues in a 3D display using multiple image planes

Simon J. Watt; Kurt Akeley; Ahna R. Girshick; Martin S. Banks

Focus cues specify inappropriate 3-D scene parameters in conventional displays because the light comes from a single surface, independent of the depth relations in the portrayed scene. This can lead to distortions in perceived depth, as well as discomfort and fatigue due to the differing demands on accommodation and vergence. Here we examine the efficacy of a stereo-display prototype designed to minimize these problems by using multiple image planes to present near-correct focus cues. Each eye’s view is the sum of several images presented at different focal distances. Image intensities are assigned based on the dioptric distance of each image plane from the portrayed object, determined along visual lines. The stimulus to accommodation is more consistent with the portrayed depth than with conventional displays, but it still differs from the stimulus in equivalent real scenes. Compared to a normal, fixed-distance display, observers showed improved stereoscopic performance in different psychophysical tasks including speed of fusing stereoscopic images, precision of depth discrimination, and accuracy of perceived depth estimates. The multiple image-planes approach provides a practical solution for some shortcomings of conventional displays.


Human Vision and Electronic Imaging Conference | 2005

Where should you sit to watch a movie?

Martin S. Banks; Heather Rose; Dhanraj Vishwanath; Ahna R. Girshick

A picture viewed from its center of projection (CoP) generates the same retinal image as the original scene. When a picture is viewed from other locations, the retinal image specifies a different layout and shapes, but we normally do not notice the changes. The mechanism underlying this is unknown. We studied the perceived shapes of pictured ovoids and planes while varying viewing angle and the angle by which the pictures were projected. We also varied the viewer's information about the orientation of the picture surface. Viewers compensated nearly veridically for oblique viewing when binocular information for surface orientation was available. In so doing, they used an estimate of local surface orientation, not prior information about object shape or geometric information in the picture. We present a model that explains invariance at incorrect viewing positions and other phenomena, such as perceived distortions with wide fields of view.


Journal of Vision | 2010

Variance predicts visual-haptic adaptation in shape perception

Ahna R. Girshick; Martin S. Banks; Marc O. Ernst; R. Cooper; Robert A. Jacobs

When people are exposed repeatedly to a conflict in visually and haptically specified shapes, they adapt and the apparent conflict is eventually eliminated. The inter-modal adaptation literature suggests that the conflict is resolved by adapting the haptic shape estimator. Another possibility is that both estimators adapt by amounts that depend on their relative variances. Thus, the visual estimator could adapt if its variance were high enough. Is relative reliability the better predictor of visual-haptic adaptation? We examined this by manipulating the variance of the visual signal during inter-modal adaptation and then measuring changes in the within-modal (vision-alone and haptics-alone) shape percepts. The stimulus was a 3D object with a rectangular front surface. It was specified visually by random-dot stereograms and haptically by PHANToM force-feedback devices. In pre- and post-tests, observers judged whether its front surface was taller or shorter than it was wide. For each modality, we found the aspect ratio that was perceptually a square. During adaptation, a conflict was created between the visually and haptically specified shapes by independently altering the visual and haptic aspect ratios of the front surface. The variance of the visual estimator (determined by dot number) was either low or high. We assessed the amount of visual and haptic adaptation by comparing pre- and post-test shape estimates. When the visual estimator's variance was low, essentially all of the adaptation occurred in the haptic estimator. When the visual estimator's variance was high, we observed both visual and haptic adaptation. These results suggest that the relative reliability of visual and haptic estimators determines the relative amounts of visual and haptic adaptation.


Journal of Vision | 2010

Prior expectations in line orientation perception

Ahna R. Girshick; Eero P. Simoncelli; Michael S. Landy

The Bayesian framework provides a good description of line orientation perception: (1) observers use non-uniform priors, and (2) the estimated priors reflect natural image statistics, peaking at horizontal and vertical.
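
As a toy illustration of these conclusions, the sketch below computes a Bayesian orientation estimate with a cardinal-peaked prior. The prior shape (1 + a*cos 4θ) and every parameter are illustrative assumptions, not the study's fitted values.

import numpy as np

def bayes_orientation(theta_obs, sigma_deg=8.0, prior_amp=0.6):
    """MAP orientation estimate (deg) with a cardinal-peaked prior.

    The prior stands in for natural-scene orientation statistics:
    mass concentrated at 0 and 90 deg (horizontal and vertical).
    """
    theta = np.linspace(0.0, 180.0, 3601)[:-1]          # orientation grid (deg)
    prior = 1.0 + prior_amp * np.cos(np.deg2rad(4.0 * theta))
    # wrapped distance on the 180-deg orientation circle
    d = np.abs(theta - theta_obs)
    d = np.minimum(d, 180.0 - d)
    likelihood = np.exp(-0.5 * (d / sigma_deg) ** 2)
    return theta[np.argmax(prior * likelihood)]

# An oblique observation is pulled a few degrees toward the nearest
# cardinal, mimicking the bias implied by a non-uniform prior.
print(bayes_orientation(20.0))   # ~17.6, shifted toward horizontal (0 deg)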


Nature Neuroscience | 2011

Cardinal rules: visual orientation perception reflects knowledge of environmental statistics

Ahna R. Girshick; Michael S. Landy; Eero P. Simoncelli

Collaboration


Dive into Ahna R. Girshick's collaborations.

Top Co-Authors

Johannes Burge

University of Pennsylvania

David Hoffman

University of California

Eero P. Simoncelli

Howard Hughes Medical Institute

Gennady Erlikhman

University of Pennsylvania

Heather Rose

University of California
