
Publication


Featured research published by Paolo Favaro.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution

Tom E. Bishop; Paolo Favaro

Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

A geometric approach to shape from defocus

Paolo Favaro; Stefano Soatto

We introduce a novel approach to shape from defocus, i.e., the problem of inferring the three-dimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common approach to bypass this task relies on approximating the scene locally by a plane parallel to the image (the so-called equifocal assumption). We show that this approximation is indeed not necessary, as one can estimate 3D geometry while avoiding deblurring without strong assumptions on the scene. Solving the problem of shape from defocus requires modeling how light interacts with the optics before reaching the imaging surface. This interaction is described by the so-called point spread function (PSF). When the form of the PSF is known, we propose an optimal method to infer 3D geometry from defocused images that involves computing orthogonal operators which are regularized via functional singular value decomposition. When the form of the PSF is unknown, we propose a simple and efficient method that first learns a set of projection operators from blurred images and then uses these operators to estimate the 3D geometry of the scene from novel blurred images. Our experiments on both real and synthetic images show that the performance of the algorithm is relatively insensitive to the form of the PSF. Our general approach is to minimize the Euclidean norm of the difference between the estimated images and the observed images. The method is geometric in that we reduce the minimization to performing projections onto linear subspaces, by using inner product structures on both infinite- and finite-dimensional Hilbert spaces. Both proposed algorithms involve only simple matrix-vector multiplications which can be implemented in real time.
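The known-PSF case can be illustrated with a one-dimensional toy: for each depth hypothesis, stack the blur operators of the two images and score how far the stacked observations are from the operator's range. Everything below is an assumed sketch, not the paper's implementation: a Gaussian PSF whose width stands in for depth, and a plain pseudo-inverse in place of the regularized functional singular value decomposition.

```python
import numpy as np

def gauss_blur_matrix(n, sigma):
    # Dense matrix applying a 1-D Gaussian blur (a stand-in PSF model;
    # the blur width plays the role of depth in this toy).
    x = np.arange(n)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

def depth_score(y_stack, sigma_pair):
    # Residual energy after projecting the stacked observations onto the
    # range of the stacked blur operator H: near zero when the depth
    # hypothesis (the pair of blur widths) matches the data. No deblurring
    # is performed; a pseudo-inverse stands in for the paper's regularized
    # functional singular value decomposition.
    n = y_stack.size // 2
    H = np.vstack([gauss_blur_matrix(n, s) for s in sigma_pair])
    residual = y_stack - H @ (np.linalg.pinv(H) @ y_stack)
    return float(np.sum(residual**2))

rng = np.random.default_rng(0)
radiance = rng.random(32)                         # unknown sharp signal
true_pair = (0.8, 2.0)                            # blur widths at the true depth
y = np.concatenate([gauss_blur_matrix(32, s) @ radiance for s in true_pair])
candidates = [(0.8, 2.0), (1.5, 1.5), (2.0, 0.8)]
best = min(candidates, key=lambda p: depth_score(y, p))  # true pair wins
```

The point of the sketch is that depth is selected purely by subspace membership, without ever estimating the sharp radiance.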


Computer Vision and Pattern Recognition | 2011

A closed form solution to robust subspace estimation and clustering

Paolo Favaro; René Vidal; Avinash Ravichandran

We consider the problem of fitting one or more subspaces to a collection of data points drawn from the subspaces and corrupted by noise/outliers. We pose this problem as a rank minimization problem, where the goal is to decompose the corrupted data matrix as the sum of a clean, self-expressive, low-rank dictionary plus a matrix of noise/outliers. Our key contribution is to show that, for noisy data, this non-convex problem can be solved very efficiently and in closed form from the SVD of the noisy data matrix. Remarkably, this holds for both a single subspace and multiple subspaces. An important difference with respect to existing methods is that our framework results in a polynomial thresholding of the singular values with minimal shrinkage. Indeed, in the case of a single subspace, our framework reduces to classical PCA, which requires no shrinkage. In the case of multiple subspaces, our framework provides an affinity matrix that can be used to cluster the data according to the subspaces. In the case of data corrupted by outliers, a closed-form solution appears elusive. We thus use an augmented Lagrangian optimization framework, which requires combining our proposed polynomial thresholding operator with the more traditional shrinkage-thresholding operator.
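The closed-form flavor of the noisy-data case can be sketched in a few lines: denoise the data matrix by thresholding its singular values from a single SVD. This is an illustrative stand-in, not the paper's method; hard thresholding replaces the polynomial thresholding operator, and the cutoff `tau` is an assumed, noise-dependent constant.

```python
import numpy as np

def lowrank_denoise(X, tau):
    # Recover a low-rank dictionary from a noisy data matrix by zeroing
    # all singular values below `tau` (hard thresholding; the paper uses
    # a polynomial thresholding with minimal shrinkage instead).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.where(s > tau, s, 0.0)) @ Vt

rng = np.random.default_rng(0)
# 200 points drawn from a 5-dimensional subspace of R^50, plus noise.
clean = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 200))
noisy = clean + 0.01 * rng.standard_normal((50, 200))
A = lowrank_denoise(noisy, tau=1.0)   # closed form: one SVD, no iterations
```

With a small noise level the subspace's singular values sit far above the noise floor, so a single cutoff separates them cleanly.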


International Conference on Computational Photography | 2009

Light field superresolution

Tom E. Bishop; Sara Zanetti; Paolo Favaro

Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.


International Conference on Computer Vision | 2001

Real-time feature tracking and outlier rejection with changes in illumination

Hailin Jin; Paolo Favaro; Stefano Soatto

We develop an efficient algorithm to track point features supported by image patches undergoing affine deformations and changes in illumination. The algorithm is based on a combined model of geometry and photometry, which is used both to track features and to detect outliers in a hypothesis testing framework. The algorithm runs in real time on a personal computer and is available to the public.


Asian Conference on Computer Vision | 2010

Full-resolution depth map estimation from an aliased plenoptic light field

Tom E. Bishop; Paolo Favaro

In this paper we show how to obtain full-resolution depth maps from a single image obtained from a plenoptic camera. Previous work showed that the estimation of a low-resolution depth map with a plenoptic camera differs substantially from that of a camera array and, in particular, requires appropriate depth-varying antialiasing filtering. In this paper we show a quite striking result: One can instead recover a depth map at the full resolution of the input data. We propose a novel algorithm which exploits a photoconsistency constraint specific to light fields captured with plenoptic cameras. Key to our approach is handling missing data in the photoconsistency constraint and the introduction of novel boundary conditions that impose texture consistency in the reconstructed full-resolution images. These ideas are combined with an efficient regularization scheme to give depth maps at a higher resolution than in any previous method. We provide results on both synthetic and real data.


The Visual Computer | 2003

A semi-direct approach to structure from motion

Hailin Jin; Paolo Favaro; Stefano Soatto

The problem of structure from motion is often decomposed into two steps: feature correspondence and three-dimensional reconstruction. This separation often causes gross errors when establishing correspondence fails. Therefore, we advocate the necessity to integrate visual information not only in time (i.e. across different views), but also in space, by matching regions – rather than points – using explicit photometric deformation models. We present an algorithm that integrates image-feature tracking and three-dimensional motion estimation into a closed loop, while detecting and rejecting outlier regions that do not fit the model. Due to occlusions and the causal nature of our algorithm, a drift in the estimates accumulates over time. We describe a method to perform global registration of local estimates of motion and structure by matching the appearance of feature regions stored over long time periods. We use image intensities to construct a score function that takes into account changes in brightness and contrast. Our algorithm is recursive and suitable for real-time implementation.


Computer Vision and Pattern Recognition | 2005

Visual tracking in the presence of motion blur

Hailin Jin; Paolo Favaro; Roberto Cipolla

We consider the problem of visual tracking of regions of interest in a sequence of motion blurred images. Traditional methods couple tracking with deblurring in order to correctly account for the effects of motion blur. Such coupling is usually appropriate, but computationally wasteful when visual tracking is the lone objective. Instead of deblurring images, we propose to match regions by blurring them. The matching score for two image regions is governed by a cost function that only involves the region deformation parameters and two motion blur vectors. We present an efficient algorithm to minimize the proposed cost function and demonstrate it on sequences of real blurred images.
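The blur-instead-of-deblur idea can be sketched with a toy horizontal box-blur model (an assumption for illustration; the paper parameterizes blur with motion vectors and couples it with region deformation parameters). Since blurs commute, two observations of the same region agree once each is blurred with the other's kernel.

```python
import numpy as np

def motion_blur_h(img, length):
    # Circular horizontal box blur: a crude stand-in for a linear
    # motion-blur kernel (wrap-around boundaries keep blurs commutative).
    out = np.zeros(img.shape, dtype=float)
    for k in range(length):
        out += np.roll(img, k - length // 2, axis=1)
    return out / length

def blur_match_cost(patch_a, blur_a, patch_b, blur_b):
    # Instead of deblurring, blur each patch with the OTHER patch's blur:
    # if both show the same region, the doubly blurred results coincide.
    # The box-kernel model and these names are illustrative assumptions.
    return float(np.sum((motion_blur_h(patch_a, blur_b)
                         - motion_blur_h(patch_b, blur_a)) ** 2))

rng = np.random.default_rng(1)
scene = rng.random((32, 32))
obs_a = motion_blur_h(scene, 3)                  # same region, short blur
obs_b = motion_blur_h(scene, 7)                  # same region, longer blur
other = motion_blur_h(rng.random((32, 32)), 7)   # a different region
same_cost = blur_match_cost(obs_a, 3, obs_b, 7)  # near zero
diff_cost = blur_match_cost(obs_a, 3, other, 7)  # clearly positive
```

Matching this way avoids the ill-posed deconvolution step entirely, which is the computational saving the abstract describes.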


Computer Vision and Pattern Recognition | 2004

A variational approach to scene reconstruction and image segmentation from motion-blur cues

Paolo Favaro; Stefano Soatto

In this paper we are interested in the joint reconstruction of geometry and photometry of scenes with multiple moving objects from a collection of motion-blurred images. We make simplifying assumptions on the photometry of the scene (we neglect complex illumination effects) and infer the motion field of the scene, its depth map, and its radiance. In particular, we choose to partition the image into regions where motion is well approximated by a simple planar translation. We model motion-blurred images as the solution of an anisotropic diffusion equation, whose initial conditions depend on the radiance and whose diffusion tensor encodes the depth map of the scene and the motion field. We propose an algorithm to infer the unknowns of the model by minimizing the discrepancy between the measured images and the ones synthesized via diffusion. Since the problem is ill-posed, we also introduce additional Tikhonov regularization terms.
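The forward model can be sketched as diffusion with a constant tensor (an illustrative simplification; in the paper the diffusion tensor varies with the depth map and motion field, and the full method inverts this model rather than just running it forward):

```python
import numpy as np

def diffuse_motion_blur(img, v, steps=50, dt=0.1):
    # Toy forward model: motion blur as diffusion along the motion
    # direction v, i.e. u_t = div(v v^T grad u). Central differences
    # with periodic boundaries, integrated by explicit Euler steps.
    u = img.astype(float).copy()
    vy, vx = v
    for _ in range(steps):
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
        d = vy * uy + vx * ux        # directional derivative v . grad u
        div = (np.roll(vy * d, -1, axis=0) - np.roll(vy * d, 1, axis=0)) / 2.0 \
            + (np.roll(vx * d, -1, axis=1) - np.roll(vx * d, 1, axis=1)) / 2.0
        u += dt * div
    return u

rng = np.random.default_rng(0)
img = rng.random((32, 32))
blurred = diffuse_motion_blur(img, (0.0, 1.0))   # horizontal motion blur
```

Diffusing only along v smears intensity in the motion direction while preserving the total brightness, which is exactly the behavior a motion-blur model needs.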


European Conference on Computer Vision | 2000

3-D Motion and Structure from 2-D Motion Causally Integrated over Time: Implementation

Alessandro Chiuso; Paolo Favaro; Hailin Jin; Stefano Soatto

The causal estimation of three-dimensional motion from a sequence of two-dimensional images can be posed as a nonlinear filtering problem. We describe the implementation of an algorithm whose uniform observability, minimal realization and stability have been proven analytically in [5]. We discuss a scheme for handling occlusions, drift in the scale factor and tuning of the filter. We also present an extension to partially calibrated camera models and prove its observability. We report the performance of our implementation on a few long sequences of real images. More importantly, however, we have made our real-time implementation - which runs on a personal computer - available to the public for first-hand testing.

Collaboration


Dive into Paolo Favaro's collaborations.

Top Co-Authors

Stefano Soatto

University of California

René Vidal

Johns Hopkins University
