Publication


Featured research published by Gordon Wetzstein.


International Conference on Computer Graphics and Interactive Techniques | 2012

Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting

Gordon Wetzstein; Douglas Lanman; Matthew Hirsch; Ramesh Raskar

We introduce tensor displays: a family of compressive light field displays comprising all architectures employing a stack of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We show that the light field emitted by an N-layer, M-frame tensor display can be represented by an Nth-order, rank-M tensor. Using this representation we introduce a unified optimization framework, based on nonnegative tensor factorization (NTF), encompassing all tensor display architectures. This framework is the first to allow joint multilayer, multiframe light field decompositions, significantly reducing artifacts observed with prior multilayer-only and multiframe-only decompositions; it is also the first optimization method for designs combining multiple layers with directional backlighting. We verify the benefits and limitations of tensor displays by constructing a prototype using modified LCD panels and a custom integral imaging backlight. Our efficient, GPU-based NTF implementation enables interactive applications. Through simulations and experiments we show that tensor displays reveal practical architectures with greater depths of field, wider fields of view, and thinner form factors, compared to prior automultiscopic displays.
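For two layers and M time-multiplexed frames, the decomposition reduces to a rank-M nonnegative matrix factorization of the (reordered) light field. Below is a minimal NumPy sketch of that special case using Lee-Seung multiplicative updates; the toy data, sizes, and solver are illustrative stand-ins for the paper's GPU-based NTF implementation.

```python
import numpy as np

def nmf_multiplicative(L, rank, iters=200, eps=1e-9):
    """Nonnegative factorization L ~= F @ G.T via Lee-Seung multiplicative
    updates. For a two-layer display, each of the `rank` columns of F and G
    holds the front/rear layer pattern shown in one time-multiplexed frame."""
    rng = np.random.default_rng(0)
    F = rng.random((L.shape[0], rank))   # front-layer patterns
    G = rng.random((L.shape[1], rank))   # rear-layer patterns
    for _ in range(iters):
        F *= (L @ G) / (F @ (G.T @ G) + eps)
        G *= (L.T @ F) / (G @ (F.T @ F) + eps)
    return F, G

# Toy target: a nonnegative matrix standing in for a reordered 4D light field.
L = np.abs(np.random.default_rng(1).random((64, 64)))
F, G = nmf_multiplicative(L, rank=4)
print("relative error:", np.linalg.norm(L - F @ G.T) / np.linalg.norm(L))
```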


International Conference on Computer Graphics and Interactive Techniques | 2013

Compressive light field photography using overcomplete dictionaries and optimized projections

Kshitij Marwah; Gordon Wetzstein; Yosuke Bando; Ramesh Raskar

Light field photography has gained significant research interest over the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to acquire a high-resolution light field. We propose a compressive light field camera architecture that allows higher-resolution light fields to be recovered from a single image than was previously possible. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
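As a rough illustration of the reconstruction step, the sketch below recovers sparse dictionary coefficients from a single coded projection with ISTA, a standard l1 solver; the random dictionary, sensing mask, and sizes are stand-ins for the paper's learned light field atoms and optimized optical projections.

```python
import numpy as np

def ista(y, A, lam=0.05, iters=300):
    """Solve min 0.5*||A @ alpha - y||^2 + lam*||alpha||_1 by iterative
    shrinkage-thresholding; A = Phi @ D combines the coded optical
    projection Phi with an overcomplete dictionary D."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2             # 1 / Lipschitz constant
    alpha = np.zeros(A.shape[1])
    for _ in range(iters):
        alpha -= step * (A.T @ (A @ alpha - y))        # gradient step on data term
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * lam, 0.0)  # soft threshold
    return alpha

rng = np.random.default_rng(2)
n, m, k = 256, 96, 8                      # flattened light field size, measurements, sparsity
D = rng.standard_normal((n, 2 * n))       # overcomplete dictionary (illustrative)
Phi = rng.standard_normal((m, n))         # coded 2D projection (illustrative)
alpha_true = np.zeros(2 * n)
alpha_true[rng.choice(2 * n, k, replace=False)] = 1.0
y = Phi @ (D @ alpha_true)                # single coded sensor image
lf = D @ ista(y, Phi @ D)                 # reconstructed (flattened) light field
```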


International Conference on Computer Graphics and Interactive Techniques | 2011

Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays

Gordon Wetzstein; Douglas Lanman; Wolfgang Heidrich; Ramesh Raskar

We develop tomographic techniques for image synthesis on displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field or high-contrast 2D image when illuminated by a uniform backlight. Since arbitrary oblique views may be inconsistent with any single attenuator, iterative tomographic reconstruction minimizes the difference between the emitted and target light fields, subject to physical constraints on attenuation. As multi-layer generalizations of conventional parallax barriers, such displays are shown, both by theory and experiment, to exceed the performance of existing dual-layer architectures. For 3D display, spatial resolution, depth of field, and brightness are increased, compared to parallax barriers. For a plane at a fixed depth, our optimization also allows optimal construction of high dynamic range displays, confirming existing heuristics and providing the first extension to multiple, disjoint layers. We conclude by demonstrating the benefits and limitations of attenuation-based light field displays using an inexpensive fabrication method: separating multiple printed transparencies with acrylic sheets.
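A minimal SART-style solver for the resulting constrained least-squares problem is sketched below, working in the log domain where stacked attenuators combine additively; the sparse ray-layer matrix P and all sizes are illustrative.

```python
import numpy as np

def sart(P, t, iters=50, lam=0.5):
    """SART iteration for P @ a ~= t with a >= 0, where a stacks per-pixel
    absorbances (negative log attenuation) of all layers and t is the target
    absorbance per ray; the clamp enforces that layers only attenuate."""
    row = P.sum(axis=1); row[row == 0] = 1.0   # per-ray normalization
    col = P.sum(axis=0); col[col == 0] = 1.0   # per-pixel normalization
    a = np.zeros(P.shape[1])
    for _ in range(iters):
        a = np.maximum(a + lam * (P.T @ ((t - P @ a) / row)) / col, 0.0)
    return a

# Toy system: each ray (row of P) crosses a few layer pixels.
rng = np.random.default_rng(3)
P = (rng.random((400, 120)) < 0.05).astype(float)
a_true = rng.random(120)
a_hat = sart(P, P @ a_true)
```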


Eurographics | 2008

The Visual Computing of Projector‐Camera Systems

Oliver Bimber; Daisuke Iwai; Gordon Wetzstein; Anselm Grundhöfer

This article focuses on real‐time image correction techniques that enable projector‐camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, coloured and textured surfaces. It reviews hardware‐accelerated methods like pixel‐precise geometric warping, radiometric compensation, multi‐focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super‐resolution, high‐dynamic range and high‐speed projection are discussed. These techniques open a variety of new applications for projection displays. Some of them will also be presented in this report.
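In its simplest per-pixel form, the radiometric compensation the report reviews models the camera response as C = albedo * P + ambient and inverts that relation for the projector image; a minimal sketch, assuming all images are float arrays in [0, 1]:

```python
import numpy as np

def compensate(target, albedo, ambient):
    """Per-pixel radiometric compensation: solve target = albedo * P + ambient
    for the projector image P. Values the projector cannot reach (dark or
    saturated surface patches) are clipped to its output range."""
    P = (target - ambient) / np.maximum(albedo, 1e-3)  # guard against near-black albedo
    return np.clip(P, 0.0, 1.0)
```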


International Conference on Computer Graphics and Interactive Techniques | 2011

Polarization fields: dynamic light field display using multi-layer LCDs

Douglas Lanman; Gordon Wetzstein; Matthew Hirsch; Wolfgang Heidrich; Ramesh Raskar

We introduce polarization field displays as an optically-efficient design for dynamic light field display using multi-layered LCDs. Such displays consist of a stacked set of liquid crystal panels with a single pair of crossed linear polarizers. Each layer is modeled as a spatially-controllable polarization rotator, as opposed to a conventional spatial light modulator that directly attenuates light. Color display is achieved using field sequential color illumination with monochromatic LCDs, mitigating severe attenuation and moiré occurring with layered color filter arrays. We demonstrate such displays can be controlled, at interactive refresh rates, by adopting the SART algorithm to tomographically solve for the optimal spatially-varying polarization state rotations applied by each layer. We validate our design by constructing a prototype using modified off-the-shelf panels. We demonstrate interactive display using a GPU-based SART implementation supporting both polarization-based and attenuation-based architectures. Experiments characterize the accuracy of our image formation model, verifying polarization field displays achieve increased brightness, higher resolution, and extended depth of field, as compared to existing automultiscopic display methods for dual-layer and multi-layer LCDs.
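The layered image formation can be summarized with Malus's law: crossed polarizers convert the rotation accumulated along a ray into intensity, so taking arcsin of the square root of the target intensity turns the problem into a per-ray sum over layers, which is what makes a SART-type solver applicable. A toy forward model (sizes and sampling are illustrative):

```python
import numpy as np

def emitted_intensity(rotations):
    """Forward model for a polarization field display: each layer rotates the
    polarization state by theta_l along a ray; between crossed linear
    polarizers, Malus's law gives I = sin^2(sum_l theta_l)."""
    return np.sin(rotations.sum(axis=0)) ** 2

rng = np.random.default_rng(4)
layers = rng.uniform(0.0, np.pi / 2, size=(3, 128))  # rotation per layer per ray
I = emitted_intensity(layers)
target_sum = np.arcsin(np.sqrt(I))                   # linearized target for a SART solve
```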


Pacific Conference on Computer Graphics and Applications | 2007

Radiometric Compensation through Inverse Light Transport

Gordon Wetzstein; Oliver Bimber

Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport, in combination with an efficient GPU implementation, makes real-time compensation of captured local and global light modulations possible.
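A minimal sketch of the idea, assuming the light transport matrix T (camera pixels by projector pixels) has already been captured: compensation then reduces to applying a precomputed pseudoinverse of T to the desired camera image, which cancels the global effects, such as interreflections, that T encodes. The random T here merely stands in for a measured matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
T = np.abs(rng.standard_normal((500, 300)))   # stand-in light transport matrix
T_pinv = np.linalg.pinv(T)                    # precomputed once, offline
c_target = rng.random(500)                    # desired camera-observed image
p = np.clip(T_pinv @ c_target, 0.0, 1.0)      # projector pattern, applied per frame in real time
```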


International Conference on Computer Graphics and Interactive Techniques | 2015

The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues

Fu-Chung Huang; Kevin Chen; Gordon Wetzstein

Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances in high-resolution microdisplays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.
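For a two-panel display, the rank-1 factorization amounts to approximating the light field matrix by the outer product of a front-panel and a rear-panel image. A toy alternating-least-squares version (not the authors' solver; data and sizes are illustrative):

```python
import numpy as np

def rank1_factorization(L, iters=100, eps=1e-9):
    """Factor a nonnegative light field matrix L ~= np.outer(f, g), where f
    and g are the front- and rear-panel images of a two-layer near-eye
    display; rank 1 suffices because the eye box barely exceeds the pupil."""
    f = np.full(L.shape[0], np.sqrt(L.mean() + eps))
    g = np.full(L.shape[1], np.sqrt(L.mean() + eps))
    for _ in range(iters):
        f = np.maximum((L @ g) / (g @ g + eps), 0.0)   # least-squares update, front panel
        g = np.maximum((L.T @ f) / (f @ f + eps), 0.0) # least-squares update, rear panel
    return f, g

L = np.abs(np.random.default_rng(6).random((32, 32)))
f, g = rank1_factorization(L)
```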


Computers & Graphics | 2013

Special Section on Advanced Displays: A survey on computational displays: Pushing the boundaries of optics, computation, and perception

Belen Masia; Gordon Wetzstein; Piotr Didyk; Diego Gutierrez

Display technology has made great progress over the last few years. From higher contrast to better temporal resolution or more accurate color reproduction, modern displays are capable of showing images which are much closer to reality. In addition to this trend, we have recently seen the resurrection of stereo technology, which in turn fostered further interest in automultiscopic displays. These advances share the common objective of improving the viewing experience by means of a better reconstruction of the plenoptic function along any of its dimensions. In addition, one usual strategy is to leverage known aspects of the human visual system (HVS) to provide apparent enhancements, beyond the physical limits of the display. In this survey, we analyze these advances, categorize them along the dimensions of the plenoptic function, and present the relevant aspects of human perception on which they rely.


International Conference on Computer Graphics and Interactive Techniques | 2005

Enabling view-dependent stereoscopic projection in real environments

Oliver Bimber; Gordon Wetzstein; Andreas Emmerling; Christian Nitschke; Anselm Grundhöfer

We show how view-dependent image-based and geometric warping, radiometric compensation, and multi-focal projection enable a view-dependent stereoscopic visualization on ordinary (geometrically complex, colored and textured) surfaces within everyday environments. Special display configurations for immersive or semi-immersive AR/VR applications that require permanent and artificial projection canvases might become unnecessary. We demonstrate several ad-hoc visualization examples in a real architectural and museum application context.


Eurographics | 2011

Computational Plenoptic Imaging

Gordon Wetzstein; Ivo Ihrke; Douglas Lanman; Wolfgang Heidrich

The plenoptic function is a ray‐based model for light that includes the colour spectrum as well as spatial, temporal and directional variation. Although digital light sensors have greatly evolved in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two‐dimensional, spatially varying subset: the common photograph. In this state‐of‐the‐art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.

Collaboration


Dive into Gordon Wetzstein's collaborations.

Top Co-Authors

Ramesh Raskar
Massachusetts Institute of Technology

Wolfgang Heidrich
King Abdullah University of Science and Technology

Matthew Hirsch
Massachusetts Institute of Technology

Oliver Bimber
Johannes Kepler University of Linz