Publications


Featured research published by Bennett Wilburn.


International Conference on Computer Graphics and Interactive Techniques | 2005

High performance imaging using large camera arrays

Bennett Wilburn; Neel Joshi; Vaibhav Vaish; Eino-Ville Talvala; Emilio R. Antúnez; Adam Barth; Andrew Adams; Mark Horowitz; Marc Levoy

The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single center of projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.
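To make the shift-and-add idea behind synthetic-aperture refocusing concrete, here is a minimal sketch, assuming the light field is stored as one image per camera together with each camera's position on the camera plane; the function and variable names (`refocus`, `cam_xy`, `mu`) are illustrative, not from the authors' system:

```python
# Minimal shift-and-add synthetic-aperture refocusing sketch (illustrative
# names and interface, not the authors' released code).
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def refocus(images, cam_xy, mu):
    """Average the views after shifting each by mu * camera position.

    images : list of (H, W) grayscale arrays, one per camera
    cam_xy : (N, 2) camera positions on the camera plane
    mu     : scalar selecting the depth of the synthetic focal plane
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (cx, cy) in zip(images, cam_xy):
        # Objects at the chosen focal depth align after this shift;
        # everything off that plane lands in different places and blurs.
        acc += subpixel_shift(img.astype(np.float64), (mu * cy, mu * cx))
    return acc / len(images)
```

Sweeping `mu` moves the synthetic focal plane through the scene: objects at the focal depth stay sharp while everything else blurs away, which is what lets the large synthetic aperture look through partial occluders.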


Computer Vision and Pattern Recognition | 2004

Using plane + parallax for calibrating dense camera arrays

Vaibhav Vaish; Bennett Wilburn; Neel Joshi; Marc Levoy

A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration.
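The core plane + parallax relation can be summarized as follows (hedged, simplified notation rather than the paper's exact symbols):

```latex
% After warping every view by the homography of a reference plane, the
% residual (planar) parallax of a scene point P in camera i is proportional
% to that camera's position x_i on the camera plane:
\Delta p_i \;=\; \mu(P)\, x_i .
% Measuring \Delta p_i for a single point therefore fixes the x_i up to a
% shared affine transform, and reprojecting onto any plane parallel to the
% camera plane needs only the shifts \mu x_i for a varying scalar \mu.
```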


Computer Vision and Pattern Recognition | 2005

Synthetic Aperture Focusing using a Shear-Warp Factorization of the Viewing Transform

Vaibhav Vaish; Gaurav Garg; Eino-Ville Talvala; Emilio R. Antúnez; Bennett Wilburn; Mark Horowitz; Marc Levoy

Synthetic aperture focusing consists of warping and adding together the images in a 4D light field so that objects lying on a specified surface are aligned and thus in focus, while objects lying off this surface are misaligned and hence blurred. This provides the ability to see through partial occluders such as foliage and crowds, making it a potentially powerful tool for surveillance. If the cameras lie on a plane, it has been previously shown that after an initial homography, one can move the focus through a family of planes that are parallel to the camera plane by merely shifting and adding the images. In this paper, we analyze the warps required for tilted focal planes and arbitrary camera configurations. We characterize the warps using a new rank-1 constraint that lets us focus on any plane, without having to perform a metric calibration of the cameras. We also show that there are camera configurations and families of tilted focal planes for which the warps can be factorized into an initial homography followed by shifts. This shear-warp factorization permits these tilted focal planes to be synthesized as efficiently as frontoparallel planes. Being able to vary the focus by simply shifting and adding images is relatively simple to implement in hardware and facilitates a real-time implementation. We demonstrate this using an array of 30 video-resolution cameras; initial homographies and shifts are performed on per-camera FPGAs, and additions and a final warp are performed on 3 PCs.
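One way to write the factorization, as an illustrative reading of the abstract rather than the paper's own notation:

```latex
% The warp for camera i and focal plane \pi splits into a fixed per-camera
% homography followed by a 2D translation,
W_i^{\pi} \;=\; T\!\big(\Delta_i(\pi)\big)\, H_i ,
\qquad
\Delta_i(\pi) \;=\; \mu(\pi)\, v_i ,
% so across cameras i and planes \pi_j the shift matrix [\Delta_i(\pi_j)]
% is an outer product v\,\mu^{\top} of rank 1: moving the focus re-uses the
% homographies and changes only the cheap per-camera shifts.
```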


IEEE Journal of Solid-State Circuits | 1998

Low-power SRAM design using half-swing pulse-mode techniques

Ken Mai; Toshihiko Mori; Bharadwaj Amrutur; Ron Ho; Bennett Wilburn; Mark Horowitz; Isao Fukushi; T. Izawa; Shin Mitarai

This paper describes a half-swing pulse-mode gate family that uses reduced input signal swing without sacrificing performance. These gates are well suited for decreasing the power in SRAM decoders and write circuits by reducing the signal swing on high-capacitance predecode lines, write bus lines, and bit lines. Charge recycling between positive and negative half-swing pulses further reduces the power dissipation. These techniques are demonstrated in a 2-K × 16-b SRAM fabricated in a 0.25-µm dual-Vt CMOS technology that dissipates 0.9 mW operating at 1 V, 100 MHz, and room temperature. On-chip voltage samplers were used to probe internal nodes.
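A back-of-the-envelope view of why reduced swing helps, using the standard dynamic-power relation (illustrative, not numbers from the paper):

```latex
% Dynamic power of a line with capacitance C switched at frequency f from
% supply V_dd with signal swing V_swing:
P \;=\; C \, V_{dd} \, V_{swing} \, f .
% Halving the swing on the high-capacitance predecode, write-bus, and bit
% lines halves their dissipation; recycling charge between the positive and
% negative half-swing pulses saves more, since charge dumped by one line
% partially supplies the other.
```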


Symposium on VLSI Circuits | 1998

Applications of on-chip samplers for test and measurement of integrated circuits

Ron Ho; Bharadwaj Amrutur; Ken Mai; Bennett Wilburn; Toshihiko Mori; Mark Horowitz

Displaying the real-time behavior of critical signals on VLSI chips is difficult and can require expensive test equipment. We present a simple sampling technique to display the analog waveforms of high bandwidth on-chip signals on a laboratory oscilloscope. It is based on the subsampling of periodic signals. This circuit was used to verify the operation of a recent low-power SRAM design.
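The subsampling trick is easy to demonstrate numerically. The sketch below, with illustrative values, samples a periodic signal once per period plus a tiny offset, so successive samples walk slowly through the waveform:

```python
# Equivalent-time (subsampled) capture of a periodic signal; all values are
# illustrative, not taken from the paper.
import numpy as np

f_sig = 100e6            # 100 MHz periodic on-chip signal
T_sig = 1.0 / f_sig
delta = 1e-12            # 1 ps extra delay per sample
T_samp = T_sig + delta   # sample once per signal period, plus the offset

n = np.arange(2000)
samples = np.sin(2 * np.pi * f_sig * n * T_samp)  # stand-in for the probed node

# sin(2*pi*f*n*(T_sig + delta)) == sin(2*pi*f*n*delta): each sample advances
# only 1 ps along the waveform, so the 100 MHz signal appears on the scope
# stretched in time by a factor of T_sig / delta = 10,000.
effective_time = n * delta
```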


Computer Vision and Pattern Recognition | 2008

Stereo reconstruction with mixed pixels using adaptive over-segmentation

Y. Taguchi; Bennett Wilburn; Charles Lawrence Zitnick

We present an over-segmentation based, dense stereo algorithm that jointly estimates segmentation and depth. For mixed pixels on segment boundaries, the algorithm computes foreground opacity (alpha), as well as color and depth for the foreground and background. We model the scene as a collection of fronto-parallel planar segments in a reference view, and use a generative model for image formation that handles mixed pixels at segment boundaries. Our method iteratively updates the segmentation based on color, depth and shape constraints using MAP estimation. Given a segmentation, the depth estimates are updated using belief propagation. We show that our method is competitive with the state-of-the-art based on the new Middlebury stereo evaluation, and that it overcomes limitations of traditional segmentation based methods while properly handling mixed pixels. Z-keying results show the advantages of combining opacity and depth estimation.
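The generative model for boundary pixels can be summarized by the standard matting equation (hedged notation; the paper's full model also carries a depth for each side):

```latex
% A mixed pixel p on a segment boundary is an opacity-weighted blend of the
% foreground and background segments' colors,
I(p) \;=\; \alpha(p)\, F(p) \;+\; \big(1 - \alpha(p)\big)\, B(p) ,
% with a depth attached to each of F and B, so p need not be assigned
% wholly to a single fronto-parallel segment.
```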


Computer Vision and Pattern Recognition | 2011

High-quality shape from multi-view stereo and shading under general illumination

Chenglei Wu; Bennett Wilburn; Yasuyuki Matsushita; Christian Theobalt

Multi-view stereo methods reconstruct 3D geometry from images well for sufficiently textured scenes, but often fail to recover high-frequency surface detail, particularly for smoothly shaded surfaces. On the other hand, shape-from-shading methods can recover fine detail from shading variations. Unfortunately, it is non-trivial to apply shape-from-shading alone to multi-view data, and most shading-based estimation methods only succeed under very restricted or controlled illumination. We present a new algorithm that combines multi-view stereo and shading-based refinement for high-quality reconstruction of 3D geometry models from images taken under constant but otherwise arbitrary illumination. We have tested our algorithm on several scenes captured under general and unknown lighting conditions, and we show that our final reconstructions rival laser range scans.
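A hedged sketch of the shading term that drives such a refinement, assuming Lambertian reflectance and a low-order spherical-harmonic lighting model (a common choice for constant but otherwise arbitrary distant illumination; the symbols are illustrative):

```latex
% For a Lambertian point with albedo \rho and normal n under distant,
% constant illumination, shading is well approximated by a low-order
% spherical-harmonic expansion
s(n) \;=\; \rho \sum_{k} l_k \, H_k(n) ,
% and the coarse multi-view stereo surface is refined by minimizing the
% photometric residual \sum (I - s(n))^2 over the geometry, with the
% lighting coefficients l_k estimated from the data instead of controlled.
```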


International Conference on Computer Vision | 2007

Penrose Pixels: Super-Resolution in the Detector Layout Domain

Moshe Ben-Ezra; Zhouchen Lin; Bennett Wilburn

We present a novel approach to reconstruction-based super-resolution that explicitly models the detector's pixel layout. Pixels in our model can vary in shape and size, and there may be gaps between adjacent pixels. Furthermore, their layout can be periodic as well as aperiodic, such as a Penrose tiling or a biological retina. We also present a new variant of the well-known error back-projection super-resolution algorithm that makes use of the exact detector model in its back-projection operator for better accuracy. Our method can be applied equally well to either periodic or aperiodic pixel tiling. Through analysis and extensive testing using synthetic and real images, we show that our approach outperforms existing reconstruction-based algorithms for regular pixel arrays. We obtain significantly better results using aperiodic pixel layouts. As an interesting example, we apply our method to a retina-like pixel structure modeled by a centroidal Voronoi tessellation. We demonstrate that, in principle, this structure is better for super-resolution than the regular pixel array used in today's sensors.
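A minimal sketch of error back-projection with an explicit detector model, assuming the detector layout has been precomputed as a sparse footprint matrix mapping the high-resolution grid to (possibly irregular, gapped, or aperiodic) pixels; `A`, `back_projection_sr`, and the shared-per-frame simplification are assumptions, not the authors' code:

```python
# Iterative back-projection with an explicit detector footprint model.
import numpy as np

def back_projection_sr(A, frames, hr_shape, n_iters=50, step=1.0):
    """A        : scipy.sparse matrix, (n_detector_pixels, H*W); row j holds
                  the footprint of detector pixel j over the high-res grid
       frames   : list of 1-D observations, one per registered low-res frame
                  (a single shared A is used here for brevity; in practice
                  each frame has its own registration-dependent footprint)
       hr_shape : (H, W) of the high-resolution estimate
    """
    x = np.zeros(int(np.prod(hr_shape)))
    At = A.T.tocsr()
    coverage = np.asarray(At @ np.ones(A.shape[0]))  # per-sample normalizer
    coverage[coverage == 0] = 1.0
    for _ in range(n_iters):
        for y in frames:
            residual = y - A @ x                     # simulate detector, compare
            x += step * (At @ residual) / coverage   # spread the error back
    return x.reshape(hr_shape)
```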


Computer Vision and Pattern Recognition | 2008

An LED-only BRDF measurement device

Moshe Ben-Ezra; Jiaping Wang; Bennett Wilburn; Xiaoyang Li; Le Ma

Light emitting diodes (LEDs) can be used as light detectors and as light emitters. In this paper, we present a novel BRDF measurement device consisting exclusively of LEDs. Our design can acquire BRDFs over a full hemisphere, or even a full sphere (for the bidirectional transmittance distribution function, BTDF), and can also measure a (partial) multi-spectral BRDF. Because we use no cameras, projectors, or even mirrors, our design does not suffer from occlusion problems. It is fast, significantly simpler, and more compact than existing BRDF measurement designs.
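For reference, each emitter/detector LED pair samples one entry of the BRDF in its standard definition (not notation from the paper):

```latex
% Ratio of differential outgoing radiance to incident irradiance:
f_r(\omega_i, \omega_o) \;=\;
\frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i} ,
% so a hemisphere (or full sphere) of LEDs, each able to either emit or
% detect, sweeps both directions with no moving parts and no occluding optics.
```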


Computer Vision and Pattern Recognition | 2012

Edge-preserving photometric stereo via depth fusion

Qing Zhang; Mao Ye; Ruigang Yang; Yasuyuki Matsushita; Bennett Wilburn; Huimin Yu

We present a sensor fusion scheme that combines active stereo with photometric stereo. Aiming at capturing full-frame depth for dynamic scenes at a minimum of three lighting conditions, we formulate an iterative optimization scheme that (1) adaptively adjusts the contribution from photometric stereo so that discontinuity can be preserved; (2) detects shadow areas by checking the visibility of the estimated point with respect to the light source, instead of using image-based heuristics; and (3) behaves well for ill-conditioned pixels that are under shadow, which are inevitable in almost any scene. Furthermore, we decompose our non-linear cost function into subproblems that can be optimized efficiently using linear techniques. Experiments show significantly improved results over the previous state-of-the-art in sensor fusion.
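As one illustration of such a linear subproblem, the sketch below refines a depth map toward photometric-stereo gradients while staying close to the active-stereo depth. This is a standard screened-Poisson-style stand-in, not the paper's exact cost, and all names and weights are illustrative:

```python
# Gradient-descent depth/normal fusion sketch (illustrative stand-in).
import numpy as np

def fuse_depth(z0, p, q, w=1.0, iters=500, lr=0.05):
    """Minimize |z - z0|^2 + w(|dz/dx - p|^2 + |dz/dy - q|^2).

    z0   : (H, W) depth from active stereo
    p, q : (H, W) surface gradients implied by the photometric-stereo normals
    (The paper additionally adapts the photometric weight per pixel so that
    depth discontinuities are preserved.)
    """
    z = z0.astype(np.float64).copy()
    for _ in range(iters):
        # forward differences (edges padded so shapes match)
        zx = np.diff(z, axis=1, append=z[:, -1:])
        zy = np.diff(z, axis=0, append=z[-1:, :])
        rx, ry = zx - p, zy - q
        # adjoint of forward differencing is a negated backward difference,
        # so -div(rx, ry) appears in the gradient of the smoothness term
        div = np.diff(rx, axis=1, prepend=rx[:, :1]) + \
              np.diff(ry, axis=0, prepend=ry[:1, :])
        grad = 2.0 * (z - z0) - 2.0 * w * div
        z -= lr * grad
    return z
```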
