Vaibhav Vaish
Stanford University
Publications
Featured research published by Vaibhav Vaish.
international conference on computer graphics and interactive techniques | 2005
Bennett Wilburn; Neel Joshi; Vaibhav Vaish; Eino-Ville Talvala; Emilio R. Antúnez; Adam Barth; Andrew Adams; Mark Horowitz; Marc Levoy
The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single center of projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture, and using multiple cameras to approximate a video camera with a large synthetic aperture. This permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms in order to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.
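As a rough illustration of one of the "high performance along one axis" applications (frame rate), the sketch below shows how frames from several cameras with evenly staggered trigger offsets could be interleaved into a single faster stream. It is a minimal, assumption-laden NumPy sketch, not the authors' pipeline; the function name and data layout are hypothetical.

```python
# Minimal sketch (not the authors' code): approximating a high-frame-rate video
# camera by interleaving frames from N cameras whose triggers are evenly staggered.
# Assumes every camera records at base_fps and camera i fires i/(N*base_fps) late.
import numpy as np

def interleave_staggered_streams(streams, base_fps):
    """streams: list of N arrays, each (num_frames, H, W), one per camera.
    Returns (frames, timestamps) merged into a single stream at N * base_fps."""
    n = len(streams)
    tagged = []
    for cam, frames in enumerate(streams):
        offset = cam / (n * base_fps)              # per-camera trigger delay
        for k, frame in enumerate(frames):
            tagged.append((k / base_fps + offset, frame))
    tagged.sort(key=lambda t: t[0])                # merge by capture time
    timestamps = np.array([t for t, _ in tagged])
    frames = np.stack([f for _, f in tagged])
    return frames, timestamps
```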
computer vision and pattern recognition | 2004
Vaibhav Vaish; Bennett Wilburn; Neel Joshi; Marc Levoy
A light field consists of images of a scene taken from different viewpoints. Light fields are used in computer graphics for image-based rendering and synthetic aperture photography, and in vision for recovering shape. In this paper, we describe a simple procedure to calibrate camera arrays used to capture light fields using a plane + parallax framework. Specifically, for the case when the cameras lie on a plane, we show (i) how to estimate camera positions up to an affine ambiguity, and (ii) how to reproject light field images onto a family of planes using only knowledge of planar parallax for one point in the scene. While planar parallax does not completely describe the geometry of the light field, it is adequate for the first two applications (image-based rendering and synthetic aperture photography), which, it turns out, do not depend on having a metric calibration of the light field. Experiments on acquired light fields indicate that our method yields better results than full metric calibration.
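The sketch below illustrates the plane + parallax idea under stated assumptions: once every image has been warped so that a chosen reference plane is aligned, the residual parallax of a single off-plane point is proportional to each camera's offset within the camera plane, so relative camera positions can be read off up to a common affine ambiguity. The function and data layout are hypothetical, not the paper's code.

```python
# Sketch under assumptions (not the paper's code): with all images warped so a
# reference plane is aligned, the residual parallax of one off-plane point in each
# camera is proportional to that camera's offset within the camera plane, so the
# parallax vectors themselves serve as camera positions up to an affine ambiguity.
import numpy as np

def relative_camera_positions(parallax, ref_cam=0):
    """parallax: (N, 2) array; row i is where the tracked off-plane point appears
    in camera i's plane-aligned image, minus its location in the reference image.
    Returns (N, 2) camera positions, valid only up to an unknown affine transform."""
    parallax = np.asarray(parallax, dtype=float)
    return parallax - parallax[ref_cam]            # positions proportional to parallax
```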
computer vision and pattern recognition | 2005
Vaibhav Vaish; Gaurav Garg; Eino-Ville Talvala; Emilio R. Antúnez; Bennett Wilburn; Mark Horowitz; Marc Levoy
Synthetic aperture focusing consists of warping and adding together the images in a 4D light field so that objects lying on a specified surface are aligned and thus in focus, while objects lying off this surface are misaligned and hence blurred. This provides the ability to see through partial occluders such as foliage and crowds, making it a potentially powerful tool for surveillance. If the cameras lie on a plane, it has been previously shown that after an initial homography, one can move the focus through a family of planes that are parallel to the camera plane by merely shifting and adding the images. In this paper, we analyze the warps required for tilted focal planes and arbitrary camera configurations. We characterize the warps using a new rank-1 constraint that lets us focus on any plane, without having to perform a metric calibration of the cameras. We also show that there are camera configurations and families of tilted focal planes for which the warps can be factorized into an initial homography followed by shifts. This shear-warp factorization permits these tilted focal planes to be synthesized as efficiently as frontoparallel planes. Being able to vary the focus by merely shifting and adding images is easy to implement in hardware and facilitates a real-time implementation. We demonstrate this using an array of 30 video-resolution cameras; initial homographies and shifts are performed on per-camera FPGAs, and additions and a final warp are performed on 3 PCs.
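A minimal sketch of the frontoparallel shift-and-add step described above, assuming the initial per-camera homographies have already been applied; the integer-pixel shifting and the function signature are simplifications, not the authors' implementation.

```python
# Minimal shift-and-add sketch (an illustration, not the paper's pipeline):
# after the initial per-camera homography aligns the reference plane, planes
# parallel to the camera plane are brought into focus by shifting each image by a
# common multiple of its camera's in-plane offset and averaging.
import numpy as np

def refocus_frontoparallel(images, offsets, alpha):
    """images:  list of (H, W) float arrays, already warped by the homographies.
    offsets: (N, 2) in-plane camera offsets (e.g. from plane + parallax calibration).
    alpha:   scalar selecting which frontoparallel plane ends up in focus."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(img, shift, axis=(0, 1))    # integer-pixel shift for brevity
    return acc / len(images)                       # in-focus points align; the rest blur
```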
international conference on computer graphics and interactive techniques | 2004
Marc Levoy; Billy Chen; Vaibhav Vaish; Mark Horowitz; Ian E. McDowall; Mark T. Bolas
Confocal microscopy is a family of imaging techniques that employ focused patterned illumination and synchronized imaging to create cross-sectional views of 3D biological specimens. In this paper, we adapt confocal imaging to large-scale scenes by replacing the optical apertures used in microscopy with arrays of real or virtual video projectors and cameras. Our prototype implementation uses a video projector, a camera, and an array of mirrors. Using this implementation, we explore confocal imaging of partially occluded environments, such as foliage, and weakly scattering environments, such as murky water. We demonstrate the ability to selectively image any plane in a partially occluded environment, and to see further through murky water than is otherwise possible. By thresholding the confocal images, we extract mattes that can be used to selectively illuminate any plane in the scene.
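The matte-extraction step mentioned at the end can be sketched as a simple threshold on the confocal image; the threshold value and the function below are illustrative assumptions, not the authors' code.

```python
# Sketch of the thresholding step only (threshold chosen ad hoc, not the authors'
# values): pixels whose confocal response is strong are treated as lying on the
# selected plane, giving a binary matte that can mask a floodlit image of the scene.
import numpy as np

def confocal_matte(confocal_img, floodlit_img, threshold=0.5):
    """confocal_img, floodlit_img: (H, W) float arrays scaled to [0, 1]."""
    matte = (confocal_img > threshold).astype(float)   # 1 where the plane is in focus
    return matte, matte * floodlit_img                 # matte and selectively lit image
```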
workshop on applications of computer vision | 2000
Nalini K. Ratha; Ruud M. Bolle; Vinayaka Pandit; Vaibhav Vaish
Fingerprint matching is challenging as the matcher has to minimize two competing error rates: the False Accept Rate and the False Reject Rate. We propose a novel, efficient, accurate and distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a labeled and weighted graph of minutiae is constructed for both the query fingerprint and the reference fingerprint. In the first phase, we obtain a minimum set of matched node pairs by matching their neighborhood structures. In the second phase, we include more pairs in the match by comparing distances with respect to matched pairs obtained in the first phase. An optional third phase, extending the neighborhood around each feature, is entered if we cannot arrive at a decision based on the analysis in the first two phases. The proposed algorithm has been tested with excellent results on a large private livescan database obtained with optical scanners.
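A highly simplified sketch of the two-phase matching idea, under assumed data structures (minutiae as bare 2D points) and ad hoc tolerances; it omits the graph labels, weights, and the optional third phase, and is not the authors' algorithm.

```python
# Highly simplified sketch (data structures and tolerances are assumptions):
# phase 1 seeds the match by comparing each point's sorted distances to its k
# nearest neighbours; phase 2 adds pairs whose distances to the seed pairs agree.
import numpy as np

def neighborhood_signature(points, i, k=5):
    """Sorted distances from point i to its k nearest neighbours (assumes > k points)."""
    d = np.linalg.norm(points - points[i], axis=1)
    return np.sort(d)[1:k + 1]

def match_minutiae(query, ref, seed_tol=5.0, dist_tol=8.0, k=5):
    query, ref = np.asarray(query, float), np.asarray(ref, float)
    # Phase 1: seed pairs whose local neighbourhood structures are similar.
    seeds = []
    for i in range(len(query)):
        sig_q = neighborhood_signature(query, i, k)
        for j in range(len(ref)):
            if np.all(np.abs(sig_q - neighborhood_signature(ref, j, k)) < seed_tol):
                seeds.append((i, j))
                break
    # Phase 2: add pairs whose distances to every seed pair are consistent.
    matched = list(seeds)
    for i in range(len(query)):
        for j in range(len(ref)):
            if any(i == a or j == b for a, b in matched):
                continue
            consistent = all(
                abs(np.linalg.norm(query[i] - query[a]) -
                    np.linalg.norm(ref[j] - ref[b])) < dist_tol
                for a, b in seeds)
            if seeds and consistent:
                matched.append((i, j))
                break
    return matched
```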
computer vision and pattern recognition | 2006
Vaibhav Vaish; Marc Levoy; Richard Szeliski; Charles Lawrence Zitnick; Sing Bing Kang
computer vision and pattern recognition | 2004
Bennett Wilburn; Neel Joshi; Vaibhav Vaish; Marc Levoy; Mark Horowitz
Archive | 2003
Paul A. Beardsley; Ramesh Raskar; Vaibhav Vaish
Archive | 2007
Marc Levoy; Vaibhav Vaish