
Publication


Featured research published by Emily A. Cooper.


ACM Transactions on Graphics | 2010

Using blur to affect perceived distance and size

Robert T. Held; Emily A. Cooper; James F. O'Brien; Martin S. Banks

We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image, together with relative depth cues, indicates the apparent scale of the image's contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene's contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model's predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently.
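A minimal sketch of the geometry such a model builds on (notation ours, not the paper's): for an optical system with aperture diameter $A$ focused at distance $z_0$, a point at distance $z_1$ produces a blur circle of angular diameter approximately

$$\beta \approx A \left| \frac{1}{z_0} - \frac{1}{z_1} \right|,$$

i.e., blur grows with the dioptric difference between the focused and object distances. Relative depth cues fix the ratios of distances in a scene, so the absolute blur magnitudes then constrain the absolute distance and scale.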


Current Biology | 2012

Blur and Disparity Are Complementary Cues to Depth

Robert T. Held; Emily A. Cooper; Martin S. Banks

Estimating depth from binocular disparity is extremely precise, and the cue does not depend on statistical regularities in the environment. Thus, disparity is commonly regarded as the best visual cue for determining 3D layout. But depth from disparity is only precise near where one is looking; it is quite imprecise elsewhere. Away from fixation, vision resorts to using other depth cues, e.g., linear perspective, familiar size, aerial perspective. But those cues depend on statistical regularities in the environment and are therefore not always reliable. Depth from defocus blur relies on fewer assumptions and has the same geometric constraints as disparity but different physiological constraints. Blur could in principle fill in the parts of visual space where disparity is imprecise. We tested this possibility with a depth-discrimination experiment. Disparity was more precise near fixation and blur was indeed more precise away from fixation. When both cues were available, observers relied on the more informative one. Blur appears to play an important, previously unrecognized role in depth perception. Our findings lead to a new hypothesis about the evolution of slit-shaped pupils and have implications for the design and implementation of stereo 3D displays.
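The complementarity can be made concrete with a standard small-angle approximation (notation ours): for interocular separation $I$ and pupil aperture $A$, with fixation and focus at distance $z_0$, a point at distance $z_1$ yields

$$\delta \approx I \left( \frac{1}{z_0} - \frac{1}{z_1} \right), \qquad \beta \approx A \left| \frac{1}{z_0} - \frac{1}{z_1} \right|,$$

so disparity $\delta$ and blur $\beta$ scale with the same dioptric depth interval and differ mainly in their baselines ($I \approx 6$ cm versus $A \approx 2$-$8$ mm) and in where on the retina each can be measured precisely. This is the sense in which the two cues share geometric constraints but differ physiologically.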


Human Factors in Computing Systems | 2016

Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays

Robert Konrad; Emily A. Cooper; Gordon Wetzstein

Emerging virtual reality (VR) displays must overcome the prevalent issue of visual discomfort to provide high-quality and immersive user experiences. In particular, the mismatch between vergence and accommodation cues inherent to most stereoscopic displays has been a long-standing challenge. In this paper, we evaluate several adaptive display modes afforded by focus-tunable optics or actuated displays that promise to mitigate visual discomfort caused by the vergence-accommodation conflict, and improve performance in VR environments. We also explore monovision as an unconventional mode that allows each eye of an observer to accommodate to a different distance. While this technique is common practice in ophthalmology, we are the first to report its effectiveness for VR applications with a custom-built setup. We demonstrate that monovision and other focus-tunable display modes can provide better user experiences and improve user performance in terms of reaction times and accuracy, particularly for nearby simulated distances in VR.
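To make the vergence-accommodation mismatch concrete, here is a minimal sketch in Python; the distances and the 1.5 D monovision offset are illustrative assumptions, not values from the paper.

```python
# Sketch: vergence-accommodation conflict in a fixed-focus HMD, and how
# monovision splits the eyes' focal demands. Numbers are illustrative.

def diopters(distance_m: float) -> float:
    """Convert a viewing distance in meters to diopters (1/m)."""
    return 1.0 / distance_m

display_focal_plane_m = 1.5   # assumed fixed virtual-image distance
simulated_object_m = 0.4      # nearby virtual object driving vergence

vergence_demand = diopters(simulated_object_m)          # 2.5 D
accommodation_demand = diopters(display_focal_plane_m)  # ~0.67 D
conflict = vergence_demand - accommodation_demand
print(f"vergence-accommodation conflict: {conflict:.2f} D")  # 1.83 D

# Monovision: focus each eye at a different distance so the two eyes
# together cover more of the scene's dioptric range (offset is assumed).
near_eye_offset_d = 1.5
left_eye_d = accommodation_demand                        # far-biased eye
right_eye_d = accommodation_demand + near_eye_offset_d   # near-biased eye
print(f"monovision focal demands: {left_eye_d:.2f} D / {right_eye_d:.2f} D")
```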


Journal of Vision | 2012

The perceptual basis of common photographic practice

Emily A. Cooper; Elise A. Piazza; Martin S. Banks

Photographers, cinematographers, and computer-graphics engineers use certain techniques to create striking pictorial effects. By using lenses of different focal lengths, they can make a scene look compressed or expanded in depth, make a familiar object look natural or distorted, or make a person look smarter, more attractive, or more neurotic. We asked why pictures taken with a certain focal length look natural, while those taken with other focal lengths look distorted. We found that people's preferred viewing distance when looking at pictures leads them to view long-focal-length pictures from too near and short-focal-length pictures from too far. Perceptual distortions occur because people do not take their incorrect viewing distances into account. By following the rule of thumb of using a 50-mm lens, photographers greatly increase the odds of a viewer looking at a photograph from the correct distance, where the percept will be undistorted. Our theory leads to new guidelines for creating pictorial effects that are more effective than conventional guidelines.
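The argument rests on simple projective geometry: a picture looks undistorted when viewed from its center of projection, whose distance is the lens focal length scaled by the print magnification. A small Python sketch; the print size and full-frame sensor format are our assumptions, not the paper's stimuli.

```python
# Sketch: geometrically correct viewing distance for a photograph, i.e.,
# the distance at which the picture subtends the same angles at the eye
# as the original scene did at the camera.

def correct_viewing_distance_cm(focal_length_mm: float,
                                print_width_cm: float,
                                sensor_width_mm: float = 36.0) -> float:
    """Distance placing the eye at the picture's center of projection."""
    magnification = (print_width_cm * 10.0) / sensor_width_mm
    return focal_length_mm * magnification / 10.0

for f_mm in (24, 50, 135):  # wide, "normal", and telephoto focal lengths
    d_cm = correct_viewing_distance_cm(f_mm, print_width_cm=30.0)
    print(f"{f_mm:>3} mm lens, 30 cm print -> correct distance {d_cm:.0f} cm")
# 24 -> 20 cm, 50 -> 42 cm, 135 -> 112 cm: the 50-mm distance falls near
# typical preferred viewing distances, consistent with the rule of thumb.
```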


Proceedings of the National Academy of Sciences of the United States of America | 2017

Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

Nitish Padmanaban; Robert Konrad; Tal Stramer; Emily A. Cooper; Gordon Wetzstein

Significance: Wearable displays are becoming increasingly important, but the accessibility, visual comfort, and quality of current-generation devices are limited. We study optocomputational display modes and show their potential to improve experiences for users across ages and with common refractive errors. With the presented studies and technologies, we lay the foundations of next-generation computational near-eye displays that can be used by everyone.

From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current-generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
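A minimal sketch of the per-frame, gaze-contingent focus update such a system performs; the names, sign conventions, and prescription handling are our assumptions, not the authors' implementation.

```python
# Sketch: drive a focus-tunable lens from gaze depth, folding in the
# user's spherical prescription (simplified; real calibration is more
# involved and per-user).

def lens_power_d(fixation_distance_m: float,
                 user_sphere_d: float = 0.0) -> float:
    """Tunable-lens power (diopters) placing the virtual image at the
    fixated depth, offset by the user's spherical refractive error."""
    return 1.0 / fixation_distance_m + user_sphere_d

# Hypothetical per-frame loop (gaze_tracker, scene, lens are assumed
# objects, not a real API):
#   depth_m = scene.depth_at(gaze_tracker.gaze_point())
#   lens.set_power(lens_power_d(depth_m, user_sphere_d=-2.0))
print(lens_power_d(0.5, user_sphere_d=-2.0))  # 0.0 D net demand at 0.5 m
```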


Journal of Vision | 2011

The vertical horopter is not adaptable, but it may be adaptive

Emily A. Cooper; Johannes Burge; Martin S. Banks

Depth estimates from disparity are most precise when the visual input stimulates corresponding retinal points or points close to them. Corresponding points have uncrossed disparities in the upper visual field and crossed disparities in the lower visual field. Due to these disparities, the vertical part of the horopter (the positions in space that stimulate corresponding points) is pitched top-back. Many have suggested that this pitch is advantageous for discriminating depth in the natural environment, particularly relative to the ground. We asked whether the vertical horopter is adaptive (suited for perception of the ground) and adaptable (changeable by experience). Experiment 1 measured the disparities between corresponding points in 28 observers. We confirmed that the horopter is pitched. However, it is also typically convex, making it ill-suited for depth perception relative to the ground. Experiment 2 tracked the locations of corresponding points while observers wore lenses that distorted binocular disparities for 7 days. We observed no change in the horopter, suggesting that it is not adaptable. We also showed that the horopter is not adaptive for long viewing distances because at such distances uncrossed disparities between corresponding points cannot be stimulated. The vertical horopter seems to be adaptive for perceiving convex, slanted surfaces at short distances.


Science Advances | 2015

Stereopsis is adaptive for the natural environment

William W. Sprague; Emily A. Cooper; Ivana Tosic; Martin S. Banks

The visual system allocates its limited computational resources for stereoscopic vision in a manner that exploits regularities in the binocular input from the natural environment. Humans and many animals have forward-facing eyes providing different views of the environment. Precise depth estimates can be derived from the resulting binocular disparities, but determining which parts of the two retinal images correspond to one another is computationally challenging. To aid the computation, the visual system focuses the search on a small range of disparities. We asked whether the disparities encountered in the natural environment match that range. We did this by simultaneously measuring binocular eye position and three-dimensional scene geometry during natural tasks. The natural distribution of disparities is indeed matched to the smaller range of correspondence search. Furthermore, the distribution explains the perception of some ambiguous stereograms. Finally, disparity preferences of macaque cortical neurons are consistent with the natural distribution.
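A standard small-angle approximation (notation ours) connects these measurements: with interocular separation $a$, fixation at distance $z_f$, and a point near the midline at distance $z$, the horizontal disparity in radians is

$$\delta \approx a \left( \frac{1}{z} - \frac{1}{z_f} \right),$$

so jointly recording where the eyes fixate and the 3D geometry around fixation directly yields the distribution of disparities the correspondence search must cover.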


Journal of Vision | 2016

Sensitivity and bias in the discrimination of two-dimensional and three-dimensional motion direction

Emily A. Cooper; Marcus van Ginkel; Bas Rokers

Sensory systems are faced with an essentially infinite number of possible environmental events but have limited processing resources. Posed with this challenge, it makes sense to allocate these resources to prioritize the discrimination of events with the most behavioral relevance. Here, we asked if such relevance is reflected in the processing and perception of motion. We compared human performance on a rapid motion-direction discrimination task under both monocular and binocular viewing. In particular, we determined sensitivity and bias for a binocular motion-in-depth (three-dimensional; 3D) stimulus and for its constituent monocular (two-dimensional; 2D) signals over a broad range of speeds. Consistent with prior work, we found that binocular 3D sensitivity was lower than monocular sensitivity for all speeds. Although overall sensitivity was worse for 3D discrimination, we found that the transformation from 2D to 3D motion processing also incorporated a pattern of potentially advantageous biases. One such bias is reflected by a criterion shift that occurs at the level of 3D motion processing and results in an increased hit rate for motion toward the head. We also observed an increase in sensitivity for 3D motion trajectories presented on crossed rather than uncrossed disparity pedestals, privileging motion trajectories closer to the observer. We used these measurements to determine the range of real-world trajectories for which rapid 3D motion discrimination is most useful. These results suggest that the neural mechanisms that underlie motion perception privilege behaviorally relevant motion and provide insights into the nature of human motion sensitivity in the real world.
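The "sensitivity and bias" here are the standard signal-detection quantities; a minimal sketch with illustrative hit and false-alarm rates (not the paper's data):

```python
# Sketch: sensitivity (d') and criterion (c) from hit/false-alarm rates.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard normal CDF ("z-score")

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity: separation of signal and noise distributions."""
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate: float, fa_rate: float) -> float:
    """Bias: negative values indicate a liberal criterion, e.g., the
    reported shift favoring 'toward the head' responses."""
    return -0.5 * (z(hit_rate) + z(fa_rate))

print(d_prime(0.80, 0.30))    # ~1.37
print(criterion(0.80, 0.30))  # ~-0.16
```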


Ecological Psychology | 2014

Camera Focal Length and the Perception of Pictures

Martin S. Banks; Emily A. Cooper; Elise A. Piazza

Photographers, cinematographers, and computer-graphics engineers use certain techniques to create striking pictorial effects. By using lenses of different focal lengths, they can make a scene look compressed or expanded in depth, make a familiar object look natural or distorted, or make a person look smarter, more attractive, or more neurotic. Photographers have a rule of thumb that a 50 mm lens produces natural-looking pictures. We asked why pictures taken with a 50 mm lens look natural, while those taken with other focal lengths look distorted. We found that people's preferred viewing distance when looking at pictures leads them to view long-focal-length pictures from too near and short-focal-length pictures from too far. Perceptual distortions occur because people do not take their incorrect viewing distances into account. By following the rule of thumb of using a 50 mm lens, photographers greatly increase the odds of a viewer looking at a photograph from the correct distance, where the percept will be undistorted. Our theory leads to new guidelines for creating pictorial effects that are more effective than conventional guidelines.


ACM Transactions on Graphics | 2017

Accommodation-invariant computational near-eye displays

Robert Konrad; Nitish Padmanaban; Keenan Molner; Emily A. Cooper; Gordon Wetzstein

Although emerging virtual and augmented reality (VR/AR) systems can produce highly immersive experiences, they can also cause visual discomfort, eyestrain, and nausea. One of the sources of these symptoms is a mismatch between vergence and focus cues. In current VR/AR near-eye displays, a stereoscopic image pair drives the vergence state of the human visual system to arbitrary distances, but the accommodation, or focus, state of the eyes is optically driven towards a fixed distance. In this work, we introduce a new display technology, dubbed accommodation-invariant (AI) near-eye displays, to improve the consistency of depth cues in near-eye displays. Rather than producing correct focus cues, AI displays are optically engineered to produce visual stimuli that are invariant to the accommodation state of the eye. The accommodation system can then be driven by stereoscopic cues, and the mismatch between vergence and accommodation state of the eyes is significantly reduced. We validate the principle of operation of AI displays using a prototype display that allows for the accommodation state of users to be measured while they view visual stimuli using multiple different display modes.
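One way to read the invariance principle (notation ours, in the spirit of focal-sweep extended-depth-of-field optics): if the display's focal distance is swept as $d(t)$ over a frame of duration $T$, the retinal image of a point is blurred by the time-averaged point spread function

$$\overline{\mathrm{PSF}}(x;\, d_e) = \frac{1}{T} \int_0^T \mathrm{PSF}\big(x;\, d(t),\, d_e\big)\, dt,$$

which, when the sweep spans the relevant dioptric range, depends only weakly on the eye's accommodation state $d_e$. Retinal sharpness then no longer rewards any particular focus distance, leaving stereoscopic vergence cues to drive accommodation.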

Collaboration


Dive into Emily A. Cooper's collaborations.

Top Co-Authors

Robert T. Held

University of California


Bas Rokers

University of Wisconsin-Madison
