Publication


Featured research published by Oliver Cossairt.


Applied Optics | 2007

Occlusion-capable multiview volumetric three-dimensional display

Oliver Cossairt; Joshua Napoli; Samuel L. Hill; Rick K. Dorval; Gregg E. Favalora

Volumetric 3D displays are frequently purported to lack the ability to reconstruct scenes with viewer-position-dependent effects such as occlusion. To counter these claims, a swept-screen 198-view horizontal-parallax-only 3D display is reported here that is capable of viewer-position-dependent effects. A digital projector illuminates a rotating vertical diffuser with a series of multiperspective 768 x 768 pixel renderings of a 3D scene. Evidence of near-far object occlusion is reported. The aggregate virtual screen surface for a stationary observer is described, as are guidelines to construct a full-parallax system and the theoretical ability of the present system to project imagery outside of the volume swept by the screen.


International Conference on Computational Photography | 2010

Spectral Focal Sweep: Extended depth of field from chromatic aberrations

Oliver Cossairt; Shree K. Nayar

In recent years, many new camera designs have been proposed which preserve image detail over a larger depth range than conventional cameras. These methods rely on either mechanical motion or a custom optical element placed in the pupil plane of a camera lens to create the desired point spread function (PSF). This work introduces a new Spectral Focal Sweep (SFS) camera which can be used to extend depth of field (DOF) when some information about the reflectance spectra of objects being imaged is known. Our core idea is to exploit the principle that for a lens without chromatic correction, the focal length varies with wavelength. We use a SFS camera to capture an image that effectively “sweeps” the focal plane continuously through a scene without the need for either mechanical motion or custom optical elements. We demonstrate that this approach simplifies lens design constraints, enabling an inexpensive implementation to be constructed with off-the-shelf components. We verify the effectiveness of our implementation and show several example images illustrating a significant increase in DOF over conventional cameras.
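
The wavelength-dependent focus can be made concrete with a small simulation. The sketch below is a toy model, not the paper's calibrated lens: it assumes a pillbox defocus blur, a made-up chromatic focal-shift coefficient, and a flat reflectance spectrum, and simply sums the per-wavelength PSFs, which is the "sweep" described above.

    import numpy as np

    def defocus_psf(radius_px, size=33):
        # Geometric (pillbox) defocus blur with the given radius in pixels.
        y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
        psf = (x**2 + y**2 <= max(radius_px, 0.5)**2).astype(float)
        return psf / psf.sum()

    def spectral_focal_sweep_psf(wavelengths_nm, spectrum, blur_px_per_nm=0.03):
        # Each wavelength focuses at a slightly different depth, so its blur
        # radius differs; summing over the reflectance spectrum mimics sweeping
        # the focal plane through the scene.
        ref = wavelengths_nm.mean()
        psf = np.zeros((33, 33))
        for lam, weight in zip(wavelengths_nm, spectrum):
            psf += weight * defocus_psf(abs(lam - ref) * blur_px_per_nm)
        return psf / psf.sum()

    # Broadband ("white-ish") reflectance spectrum: every wavelength contributes
    # a different blur radius, so the summed PSF spans a range of focus settings.
    lams = np.linspace(400.0, 700.0, 31)
    psf = spectral_focal_sweep_psf(lams, np.ones_like(lams) / lams.size)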


International Conference on Computational Photography | 2011

Gigapixel Computational Imaging

Oliver Cossairt; Daniel Miau; Shree K. Nayar

Today, consumer cameras produce photographs with tens of millions of pixels. The recent trend in image sensor resolution seems to suggest that we will soon have cameras with billions of pixels. However, the resolution of any camera is fundamentally limited by geometric aberrations. We derive a scaling law that shows that, by using computations to correct for aberrations, we can create cameras with unprecedented resolution that have low lens complexity and compact form factor. In this paper, we present an architecture for gigapixel imaging that is compact and utilizes a simple optical design. The architecture consists of a ball lens shared by several small planar sensors, and a post-capture image processing stage. Several variants of this architecture are shown for capturing a contiguous hemispherical field of view as well as a complete spherical field of view. We demonstrate the effectiveness of our architecture by showing example images captured with two proof-of-concept gigapixel cameras.
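
As a rough illustration of the post-capture stage, the following sketch applies frequency-domain Wiener deconvolution to a tile blurred by a known, approximately shift-invariant PSF. The real pipeline works from calibrated PSFs for each small sensor; the kernel, noise level, and single-tile assumption here are placeholders.

    import numpy as np

    def wiener_deblur(image, psf, noise_to_signal=1e-3):
        # Invert a known blur in the Fourier domain, rolling off wherever the
        # PSF's spectrum (MTF) is weak so noise is not amplified without bound.
        # `psf` is assumed to be image-sized with its peak at the array centre.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(image)
        W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        return np.real(np.fft.ifft2(W * G))

    # Usage sketch: sharp_tile = wiener_deblur(blurred_tile, calibrated_psf)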


International Conference on Computer Graphics and Interactive Techniques | 2010

Diffusion coded photography for extended depth of field

Oliver Cossairt; Changyin Zhou; Shree K. Nayar

In recent years, several cameras have been introduced which extend depth of field (DOF) by producing a depth-invariant point spread function (PSF). These cameras extend DOF by deblurring a captured image with a single spatially-invariant PSF. For these cameras, the quality of recovered images depends both on the magnitude of the PSF spectrum (MTF) of the camera, and the similarity between PSFs at different depths. While researchers have compared the MTFs of different extended DOF cameras, relatively little attention has been paid to evaluating their depth invariances. In this paper, we compare the depth invariance of several cameras, and introduce a new camera that improves in this regard over existing designs, while still maintaining a good MTF. Our technique utilizes a novel optical element placed in the pupil plane of an imaging system. Whereas previous approaches use optical elements characterized by their amplitude or phase profile, our approach utilizes one whose behavior is characterized by its scattering properties. Such an element is commonly referred to as an optical diffuser, and thus we refer to our new approach as diffusion coding. We show that diffusion coding can be analyzed in a simple and intuitive way by modeling the effect of a diffuser as a kernel in light field space. We provide detailed analysis of diffusion coded cameras and show results from an implementation using a custom designed diffuser.
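
The light-field view of the diffuser can be illustrated in flatland (one spatial and one angular dimension). The sketch below is a deliberately simplified stand-in for the paper's 4D, radially symmetric analysis: the defocus PSF is the projection of a sheared light field over the aperture, and the diffuser is a kernel acting on the spatial coordinate, so its effect on the sensor image reduces to convolving that PSF with one fixed, depth-independent kernel.

    import numpy as np

    def defocus_psf(shear, x):
        # Projection over the aperture of a point source whose light field is
        # sheared by `shear` (proportional to its distance from the focal
        # plane): the familiar pillbox defocus blur.
        psf = (np.abs(x) <= abs(shear) / 2).astype(float)
        return psf / psf.sum()

    def diffusion_coded_psf(shear, x, diffuser_width):
        # The diffuser kernel acts only on the spatial coordinate, so the coded
        # PSF is the defocus PSF convolved with the same kernel at every depth.
        k = (np.abs(x) <= diffuser_width / 2).astype(float)
        return np.convolve(defocus_psf(shear, x), k / k.sum(), mode="same")

    x = np.linspace(-20, 20, 401)
    near, far = defocus_psf(2.0, x), defocus_psf(6.0, x)
    near_c, far_c = diffusion_coded_psf(2.0, x, 8.0), diffusion_coded_psf(6.0, x, 8.0)
    print(np.abs(near - far).sum(), np.abs(near_c - far_c).sum())  # coded PSFs differ less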


International Conference on Computer Graphics and Interactive Techniques | 2008

Light field transfer: global illumination between real and synthetic objects

Oliver Cossairt; Shree K. Nayar; Ravi Ramamoorthi

We present a novel image-based method for compositing real and synthetic objects in the same scene with a high degree of visual realism. Ours is the first technique to allow global illumination and near-field lighting effects between both real and synthetic objects at interactive rates, without needing a geometric and material model of the real scene. We achieve this by using a light field interface between real and synthetic components---thus, indirect illumination can be simulated using only two 4D light fields, one captured from and one projected onto the real scene. Multiple bounces of interreflections are obtained simply by iterating this approach. The interactivity of our technique enables its use with time-varying scenes, including dynamic objects. This is in sharp contrast to the alternative approach of using 6D or 8D light transport functions of real objects, which are very expensive in terms of acquisition and storage and hence not suitable for real-time applications. In our method, 4D radiance fields are simultaneously captured and projected by using a lens array, video camera, and digital projector. The method supports full global illumination with restricted object placement, and accommodates moderately specular materials. We implement a complete system and show several example scene compositions that demonstrate global illumination effects between dynamic real and synthetic objects. Our implementation requires a single point light source and dark background.
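
The interreflection loop can be summarized schematically. In the sketch below, capture_light_field, render_synthetic, and project_light_field are hypothetical placeholders for the lens-array camera, the renderer, and the projector described above; only the control flow (capture, render, project, repeat per bounce) follows the abstract.

    import numpy as np

    def capture_light_field():
        # Placeholder for the lens-array camera: a 4D array indexed by (u, v, x, y).
        return np.zeros((16, 16, 64, 64))

    def render_synthetic(illumination):
        # Placeholder for a renderer that lights the synthetic objects with the
        # captured radiance and returns what they emit toward the real scene.
        return 0.5 * illumination

    def project_light_field(radiance):
        # Placeholder for the lens-array projector that relights the real scene.
        pass

    def composite_frame(n_bounces=2):
        incoming = capture_light_field()            # radiance leaving the real scene
        for _ in range(n_bounces):                  # each pass adds one interreflection bounce
            outgoing = render_synthetic(incoming)   # synthetic objects lit by the real scene
            project_light_field(outgoing)           # real scene lit by the synthetic objects
            incoming = capture_light_field()        # capture the real scene's response
        return incoming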


IEEE Transactions on Image Processing | 2013

When Does Computational Imaging Improve Performance?

Oliver Cossairt; Mohit Gupta; Shree K. Nayar

A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies noise. Although it is well understood that optical coding can increase performance at low light levels, little is known about the quantitative performance advantage of computational imaging in general settings. In this paper, we derive the performance bounds for various computational imaging techniques. We then discuss the implications of these bounds for several real-world scenarios (e.g., illumination conditions, scene properties, and sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination that is brighter than typical daylight. These results can be readily used by practitioners to design the most suitable imaging systems given the application at hand.
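
The flavor of the argument can be reproduced with a back-of-the-envelope SNR model. The sketch below uses a simplified Hadamard-style multiplexing scheme with Poisson shot noise plus Gaussian read noise; the numbers and the scheme are illustrative rather than the paper's bounds, but they show the same trend: a large advantage when read noise dominates, and essentially none (or a small penalty) in bright light.

    import numpy as np

    def snr_direct(signal, read_noise):
        # One source measured directly: shot noise plus read noise.
        return signal / np.sqrt(signal + read_noise**2)

    def snr_multiplexed(signal, read_noise, n=64):
        # Hadamard-style multiplexing: each of n measurements sums ~n/2 sources,
        # and decoding spreads the per-measurement noise by a factor 2/sqrt(n).
        per_measurement_noise = np.sqrt(signal * n / 2 + read_noise**2)
        return signal / (2 * per_measurement_noise / np.sqrt(n))

    for photons in (10, 10_000):   # dim vs bright illumination
        print(photons,
              snr_direct(photons, read_noise=20),
              snr_multiplexed(photons, read_noise=20, n=64))
    # Dim: multiplexing helps (~3x here); bright: it is slightly worse (~1/sqrt(2)).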


Electronic Imaging | 2005

Spatial 3-D Infrastructure: Display-Independent Software Framework, High-Speed Rendering Electronics, and Several New Displays

Won Chun; Joshua Napoli; Oliver Cossairt; Rick K. Dorval; Deirdre M. Hall; Thomas J. Purtell II; James F. Schooler; Yigal Banker; Gregg E. Favalora

We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors’ first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality’s high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality’s multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.


Computer Vision and Pattern Recognition | 2010

Depth from Diffusion

Changyin Zhou; Oliver Cossairt; Shree K. Nayar

An optical diffuser is an element that scatters light and is commonly used to soften or shape illumination. In this paper, we propose a novel depth estimation method that places a diffuser in the scene prior to image capture. We call this approach depth-from-diffusion (DFDiff). We show that DFDiff is analogous to conventional depth-from-defocus (DFD), where the scatter angle of the diffuser determines the effective aperture of the system. The main benefit of DFDiff is that while DFD requires very large apertures to improve depth sensitivity, DFDiff only requires an increase in the diffusion angle – a much less expensive proposition. We perform a detailed analysis of the image formation properties of a DFDiff system, and show a variety of examples demonstrating greater precision in depth estimation when using DFDiff.
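
A toy geometric comparison makes the effective-aperture analogy concrete. The formulas below are simplified geometric-optics approximations rather than the paper's derivation: the scattered spot a point leaves on the diffuser grows with the scatter angle, just as defocus blur grows with lens aperture in DFD, so a few degrees of diffusion stand in for a very large aperture.

    import math

    def dfdiff_blur_mm(depth_behind_diffuser_mm, scatter_half_angle_deg):
        # Diameter of the scattered spot on the diffuser plane for a point
        # source this far behind it (camera focused on the diffuser).
        return 2 * depth_behind_diffuser_mm * math.tan(math.radians(scatter_half_angle_deg))

    def dfd_blur_mm(aperture_mm, depth_offset_mm, focus_dist_mm):
        # Geometric defocus blur, measured in the focus plane, for a thin lens
        # focused at focus_dist_mm and a point depth_offset_mm behind that plane.
        return aperture_mm * depth_offset_mm / (focus_dist_mm + depth_offset_mm)

    print(dfdiff_blur_mm(10, 5))       # ~1.75 mm of blur from a 5 degree diffuser
    print(dfd_blur_mm(175, 10, 1000))  # matching it at 1 m needs a ~175 mm aperture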


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2011

Scaling law for computational imaging using spherical optics

Oliver Cossairt; Daniel Miau; Shree K. Nayar

The resolution of a camera system determines the fidelity of visual features in captured images. Higher resolution implies greater fidelity and, thus, greater accuracy when performing automated vision tasks, such as object detection, recognition, and tracking. However, the resolution of any camera is fundamentally limited by geometric aberrations. In the past, it has generally been accepted that the resolution of lenses with geometric aberrations cannot be increased beyond a certain threshold. We derive an analytic scaling law showing that, for lenses with spherical aberrations, resolution can be increased beyond the aberration limit by applying a postcapture deblurring step. We then show that resolution can be further increased when image priors are introduced. Based on our analysis, we advocate for computational camera designs consisting of a spherical lens shared by several small planar sensors. We show example images captured with a proof-of-concept gigapixel camera, demonstrating that high resolution can be achieved with a compact form factor and low complexity. We conclude with an analysis on the trade-off between performance and complexity for computational imaging systems with spherical lenses.
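
To illustrate how an image prior enters the deblurring step, the sketch below adds a simple L2 penalty on image gradients to the inverse filter; the prior, regularization weight, and noise model are illustrative choices, not the ones analyzed in the paper.

    import numpy as np

    def deblur_with_gradient_prior(image, psf, lam=1e-2):
        # Closed-form minimizer of ||h*x - g||^2 + lam*||grad x||^2 in the
        # Fourier domain; `psf` is image-sized with its peak at the centre.
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(image)
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        D2 = (2 * np.pi) ** 2 * (fx**2 + fy**2)   # gradient operator's squared response
        X = np.conj(H) * G / (np.abs(H) ** 2 + lam * D2)
        return np.real(np.fft.ifft2(X))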


Optics Express | 2015

High spatio-temporal resolution video with compressed sensing

Roman Koller; Lukas Schmid; Nathan Matsuda; Thomas Niederberger; Leonidas Spinoulas; Oliver Cossairt; Guido M. Schuster; Aggelos K. Katsaggelos

We present a prototype compressive video camera that encodes scene movement using a translated binary photomask in the optical path. The encoded recording can then be used to reconstruct multiple output frames from each captured image, effectively synthesizing high speed video. The use of a printed binary mask allows reconstruction at higher spatial resolutions than has been previously demonstrated. In addition, we improve upon previous work by investigating tradeoffs in mask design and reconstruction algorithm selection. We identify a mask design that consistently provides the best performance across multiple reconstruction strategies in simulation, and verify it with our prototype hardware. Finally, we compare reconstruction algorithms and identify the best choice in terms of balancing reconstruction quality and speed.
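
The coded-exposure forward model implied above can be sketched in a few lines. The mask size, shift step, and number of sub-frames below are illustrative, and the reconstruction itself (solving for the sub-frames with a sparsity prior) is only indicated in a comment.

    import numpy as np

    def capture_coded_frame(video, mask, shift_per_subframe=1):
        # video: (T, H, W) high-speed sub-frames; mask: (H, W) binary pattern.
        # The mask translates horizontally during the exposure, modulating each
        # sub-frame before everything integrates into one captured image.
        T, H, W = video.shape
        coded = np.zeros((H, W))
        for t in range(T):
            shifted = np.roll(mask, t * shift_per_subframe, axis=1)
            coded += shifted * video[t]
        return coded

    rng = np.random.default_rng(0)
    video = rng.random((8, 64, 64))                  # 8 sub-frames to be recovered
    mask = (rng.random((64, 64)) > 0.5).astype(float)
    measurement = capture_coded_frame(video, mask)   # single coded exposure
    # A reconstruction algorithm then estimates the 8 sub-frames from
    # `measurement`, `mask`, and the known shifts.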

Collaboration


Dive into Oliver Cossairt's collaborations.

Top Co-Authors

Kuan He, Northwestern University
Marc Walton, Northwestern University
Xiang Huang, Argonne National Laboratory
Fengqiang Li, Northwestern University
Gregg E. Favalora, Charles Stark Draper Laboratory