Publications


Featured research published by Kurt Akeley.


International Conference on Computer Graphics and Interactive Techniques | 2004

A stereo display prototype with multiple focal distances

Kurt Akeley; Simon J. Watt; Ahna R. Girshick; Martin S. Banks

Typical stereo displays provide incorrect focus cues because the light comes from a single surface. We describe a prototype stereo display comprising two independent fixed-viewpoint volumetric displays. Like autostereoscopic volumetric displays, fixed-viewpoint volumetric displays generate near-correct focus cues without tracking eye position, because light comes from sources at the correct focal distances. (In our prototype, from three image planes at different physical distances.) Unlike autostereoscopic volumetric displays, however, fixed-viewpoint volumetric displays retain the qualities of modern projective graphics: view-dependent lighting effects such as occlusion, specularity, and reflection are correctly depicted; modern graphics processor and 2-D display technology can be utilized; and realistic fields of view and depths of field can be implemented. While not a practical solution for general-purpose viewing, our prototype display is a proof of concept and a platform for ongoing vision research. The design, implementation, and verification of this stereo display are described, including a novel technique of filtering along visual lines using 1-D texture mapping.
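
As a reading aid, here is a minimal Python sketch of the depth-weighted blending such a multi-plane display relies on, assuming the linear-in-diopters rule that the Optics Express abstract below credits to this work; the plane distances and function name are illustrative, not the prototype's.

```python
# Minimal sketch (not the paper's code): distribute a virtual point's
# intensity across the two image planes that bracket its focal
# distance. Assumes linear interpolation in diopters; plane distances
# below are hypothetical placeholders.

def blend_weights(target_m, plane_distances_m):
    """Per-plane intensity weights for a point at target_m meters.

    Weights are ordered far-to-near and sum to 1, so total displayed
    luminance is preserved.
    """
    planes_d = sorted(1.0 / d for d in plane_distances_m)  # diopters, far -> near
    target_d = 1.0 / target_m
    weights = [0.0] * len(planes_d)

    if target_d <= planes_d[0]:        # farther than the farthest plane
        weights[0] = 1.0
    elif target_d >= planes_d[-1]:     # nearer than the nearest plane
        weights[-1] = 1.0
    else:
        for i in range(len(planes_d) - 1):
            lo, hi = planes_d[i], planes_d[i + 1]
            if lo <= target_d <= hi:   # found the bracketing pair
                t = (target_d - lo) / (hi - lo)
                weights[i], weights[i + 1] = 1.0 - t, t
                break
    return weights

# A point at 0.5 m splits its intensity between the 0.8 m and 0.45 m
# planes (hypothetical distances), linearly in dioptric space.
print(blend_weights(0.5, [0.31, 0.45, 0.80]))  # ~[0.23, 0.77, 0.0]
```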


Optics Express | 2011

Creating effective focus cues in multi-plane 3D displays

Sowmya Ravikumar; Kurt Akeley; Martin S. Banks

Focus cues are incorrect in conventional stereoscopic displays. This causes a dissociation of vergence and accommodation, which leads to visual fatigue and perceptual distortions. Multi-plane displays can minimize these problems by creating nearly correct focus cues. But to create the appearance of continuous depth in a multi-plane display, one needs to use depth-weighted blending: i.e., distribute light intensity between adjacent planes. Akeley et al. [ACM Trans. Graph. 23, 804 (2004)] and Liu and Hua [Opt. Express 18, 11562 (2009)] described rather different rules for depth-weighted blending. We examined the effectiveness of those and other rules using a model of a typical human eye and biologically plausible metrics for image quality. We find that the linear blending rule proposed by Akeley and colleagues [ACM Trans. Graph. 23, 804 (2004)] is the best solution for natural stimuli.
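
For reference, the linear rule named here (and implemented in the sketch above) can be stated compactly; the notation below is ours, not the paper's.

```latex
% Linear depth-weighted blending between the two bracketing image
% planes, with all distances in diopters: far plane z_f, near plane
% z_n, and a point at z with z_f <= z <= z_n.
w_n = \frac{z - z_f}{z_n - z_f}, \qquad
w_f = 1 - w_n, \qquad
w_n + w_f = 1 .
```

A point of intensity I is then drawn with intensity w_n I on the near plane and w_f I on the far plane, so luminance is conserved while the apparent depth varies continuously between the planes.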


International Conference on Computer Graphics and Interactive Techniques | 2015

Improving light field camera sample design with irregularity and aberration

Li-Yi Wei; Chia-Kai Liang; Graham Butler Myhre; Colvin Pitts; Kurt Akeley

Conventional camera designs usually shun sample irregularities and lens aberrations. We demonstrate that such irregularities and aberrations, when properly applied, can improve the quality and usability of light field cameras. Examples include spherical aberrations for the main lens, and misaligned sampling patterns for the microlens and photosensor elements. These observations are a natural consequence of a key difference between conventional and light field cameras: optimizing for a single captured 2D image versus a range of reprojected 2D images from a captured 4D light field. We propose designs in main-lens aberrations and microlens/photosensor sample patterns, and evaluate them through simulated measurements and captured results with our hardware prototype.
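
To make "misaligned sampling patterns" concrete, here is a minimal sketch that jitters a regular microlens grid; the uniform-jitter model, its magnitude, and the function name are illustrative assumptions, not the paper's actual sample designs.

```python
# Illustrative sketch: perturbing a regular microlens grid with small
# random offsets ("jitter"). The jitter model and magnitude are
# assumptions for illustration only.
import random

def jittered_grid(nx, ny, pitch, jitter_frac=0.25, seed=0):
    """Return (x, y) microlens centers: a regular nx-by-ny grid with
    each center displaced by up to jitter_frac * pitch in x and y."""
    rng = random.Random(seed)
    centers = []
    for j in range(ny):
        for i in range(nx):
            dx = rng.uniform(-jitter_frac, jitter_frac) * pitch
            dy = rng.uniform(-jitter_frac, jitter_frac) * pitch
            centers.append((i * pitch + dx, j * pitch + dy))
    return centers

# jitter_frac=0 recovers the aligned grid for comparison.
regular = jittered_grid(8, 8, pitch=0.02, jitter_frac=0.0)
irregular = jittered_grid(8, 8, pitch=0.02, jitter_frac=0.25)
```

A standard sampling-theory motivation for such irregularity is that it trades structured aliasing for broadband noise, which can be less objectionable in the reprojected 2D images.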


International Conference on Computer Graphics and Interactive Techniques | 2002

When will ray-tracing replace rasterization?

Kurt Akeley; David B. Kirk; Larry D. Seiler; Philipp Slusallek; Brad Grantham

Ray-tracing produces images of stunning quality but is difficult to make interactive. Rasterization is fast, but making realistic images with it requires splicing together many different algorithms. Both GPU and CPU hardware grow faster each year. Increased GPU performance facilitates new techniques for interactive realism, including high polygon counts, multipass rendering, and texture-intensive techniques such as bump mapping and shadows. On the other hand, increased CPU performance and dedicated ray-tracing hardware push the potential frame rate of ray-tracing ever higher.


International Conference on Computer Graphics and Interactive Techniques | 2016

A vision for computer vision: emerging technologies

Jon Peddie; Kurt Akeley; Paul E. Debevec; Erik Fonseka; Michael J. Mangan; Michael Raphael

Computer vision is a rapidly evolving discipline. It includes methods for acquiring, processing, and understanding still images and video in order to model, replicate, and sometimes exceed human vision, and to perform useful tasks. Computer vision will be commonly used for a broad range of services in upcoming devices, implemented in everything from movies and smartphones to cameras and drones. Demand for CV is driving the evolution of image sensors, mobile processors, operating systems, application software, and device form factors to meet the needs of upcoming applications and services that benefit from computer vision. The resulting impetus means rapid advancements in:

• visual computing performance
• object recognition effectiveness
• speed and responsiveness
• power efficiency
• video image quality improvement
• real-time 3D reconstruction
• pre-scanning for movie animation
• image stabilization
• immersive experiences
• and more...

Composed of innovation leaders in computer vision, this panel will cover recent developments, as well as how CV will be enabled and used in 2016 and beyond.


International Conference on Computer Graphics and Interactive Techniques | 2012

Computational plenoptic imaging

Gordon Wetzstein; Ivo Ihrke; Douglas Lanman; Wolfgang Heidrich; Ramesh Raskar; Kurt Akeley

A new generation of computational cameras is emerging, spawned by the introduction of the Lytro light-field camera to the consumer market and recent accomplishments in the speed at which light can be captured. By exploiting the co-design of camera optics and computational processing, these cameras capture unprecedented details of the plenoptic function: a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. Although digital light sensors have evolved greatly in recent years, the visual information captured by conventional cameras has remained almost unchanged since the invention of the daguerreotype. All standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons. In the process, all visual information is irreversibly lost, except for a two-dimensional, spatially varying subset: the common photograph.

This course reviews the plenoptic function and discusses approaches for optically encoding high-dimensional visual information that is then recovered computationally in post-processing. It begins with an overview of the plenoptic dimensions and shows how much of this visual information is irreversibly lost in conventional image acquisition. It then discusses the state of the art in joint optical modulation and computational reconstruction for acquisition of high-dynamic-range imagery and spectral information. It unveils the secrets behind imaging techniques that have recently been featured in the news and outlines other aspects of light that are of interest for various applications, before concluding with questions, answers, and a short discussion.
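
As a concrete reading of "integrate over the dimensions of the plenoptic function," here is a toy numerical sketch: a synthetic 4D light field is collapsed to a conventional photograph by summing over its angular dimensions, then computationally refocused by shift-and-add, a standard light field technique included for illustration rather than taken from the course.

```python
# Toy illustration (not course material): a conventional sensor
# collapses the 4D light field L(u, v, x, y) to a 2D photograph by
# summing over the angular (u, v) dimensions.
import numpy as np

rng = np.random.default_rng(0)
U = V = 5           # angular samples (sub-aperture views)
X = Y = 64          # spatial samples (pixels)
L = rng.random((U, V, X, Y))   # stand-in 4D light field

# Conventional photograph: integrate over the aperture. Everything
# angular is irreversibly lost in this sum.
photo = L.sum(axis=(0, 1))

def refocus(L, alpha):
    """Shift-and-add refocus: shift each sub-aperture view in
    proportion to its angular offset before summing, synthesizing a
    photograph focused at a different depth."""
    U, V, X, Y = L.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(L[u, v], shift=(du, dv), axis=(0, 1))
    return out

refocused = refocus(L, alpha=1.0)
```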


Archive | 2012

Selective Transmission of Image Data Based on Device Attributes

Kurt Akeley; Yi-Ren Ng; Kenneth Wayne Waters; Kayvon Fatahalian; Timothy James Knight; Yuriy Romanenko; Chia-Kai Liang; Colvin Pitts; Thomas Hanley; Mugur Marculescu


Archive | 2011

Storage and Transmission of Pictures Including Multiple Frames

Kurt Akeley; Yi-Ren Ng; Kenneth Wayne Waters; Kayvon Fatahalian; Timothy James Knight; Yuriy Romanenko


Archive | 2013

Optimization of optical systems for improved light field capture and manipulation

Timothy James Knight; Colvin Pitts; Kurt Akeley; Yuriy Romanenko; Carl Warren Craddock


Archive | 2013

Light-field processing and analysis, camera control, and user interfaces and interaction on light-field capture devices

Timothy James Knight; Colvin Pitts; Yi-Ren Ng; Alex Fishman; Yuriy Romanenko; Jeff Kalt; Kurt Akeley

Collaboration


Dive into Kurt Akeley's collaborations.

Top Co-Authors

Paul E. Debevec

University of Southern California

Ramesh Raskar

Massachusetts Institute of Technology
