Rafael Felipe V. Saracchini
State University of Campinas
Publications
Featured research published by Rafael Felipe V. Saracchini.
Computer Vision and Image Understanding | 2012
Rafael Felipe V. Saracchini; Jorge Stolfi; Helena Cristina da Gama Leitão; Gary A. Atkinson; Melvyn L. Smith
Highlights:
- A critical survey and classification of gradient integration methods.
- A new multi-scale integrator to recover the depth map from a gradient map.
- Use of an input weight map to identify missing data.
- A weight-sensitive Poisson formulation of the integration problem.
- Empirical comparison with state-of-the-art methods.

We describe a robust method for the recovery of the depth map (or height map) from a gradient map (or normal map) of a scene, such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps, and also for sharp discontinuities in the scene's depth, e.g. along object silhouette edges. By using a multi-scale approach, our integration algorithm achieves linear time and memory costs. A key feature of our method is the allowance for a given weight map that flags unreliable or missing gradient samples. We also describe several integration methods from the literature that are commonly used for this task. Based on theoretical analysis and tests with various synthetic and measured gradient maps, we argue that our algorithm is as accurate as the best existing methods, handling incomplete data and discontinuities, and is more efficient in time and memory usage, especially for large gradient maps.
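The weight-sensitive formulation above can be sketched as a least-squares system: each trusted gradient sample contributes one equation relating two neighboring heights, scaled by its weight, and samples with zero weight are simply dropped. A minimal dense-solver sketch, assuming discrete forward differences (the paper's multi-scale solver is far more efficient; all names here are illustrative):

```python
import numpy as np

def integrate_weighted(p, q, wp, wq):
    """Recover a height map z (H x W, up to a constant) from discrete gradients
    p[i,j] = z[i,j+1]-z[i,j] (shape H x W-1) and q[i,j] = z[i+1,j]-z[i,j]
    (shape H-1 x W), with weights wp, wq (0 flags a missing/unreliable sample)."""
    H, W = q.shape[0] + 1, p.shape[1] + 1
    idx = lambda i, j: i * W + j
    rows, rhs = [], []
    for i in range(H):                 # one equation per trusted horizontal sample
        for j in range(W - 1):
            if wp[i, j] > 0:
                r = np.zeros(H * W); s = np.sqrt(wp[i, j])
                r[idx(i, j + 1)], r[idx(i, j)] = s, -s
                rows.append(r); rhs.append(s * p[i, j])
    for i in range(H - 1):             # one equation per trusted vertical sample
        for j in range(W):
            if wq[i, j] > 0:
                r = np.zeros(H * W); s = np.sqrt(wq[i, j])
                r[idx(i + 1, j)], r[idx(i, j)] = s, -s
                rows.append(r); rhs.append(s * q[i, j])
    # least-squares solve; the unresolved constant offset is the usual gauge freedom
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z.reshape(H, W)
```

With consistent data and a connected weight mask, the reconstruction is exact up to the additive constant.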
Computers in Industry | 2013
Rafael Felipe V. Saracchini; Jorge Stolfi; Helena Cristina da Gama Leitão; Gary A. Atkinson; Melvyn L. Smith
We show that using example-based photometric stereo, it is possible to achieve realistic reconstructions of the human face. The method can handle non-Lambertian reflectance and attached shadows after a simple calibration step. We use spherical harmonics to model and de-noise the illumination functions from images of a reference object with known shape, and a fast grid technique to invert those functions and recover the surface normal for each point of the target object. The depth coordinate is obtained by weighted multi-scale integration of these normals, using an integration weight mask obtained automatically from the images themselves. We have applied these techniques to improve the PhotoFace system of Hansen et al. (2010).
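The spherical-harmonics step can be illustrated as a least-squares fit: intensities observed at the known normals of the reference object are projected onto a degree-2 real spherical-harmonic basis (9 terms), and evaluating the fit yields a de-noised illumination function. A hedged sketch with illustrative function names (the basis constants follow the standard real-SH convention):

```python
import numpy as np

def sh_basis(n):
    """Real spherical-harmonic basis, degree <= 2 (9 terms), at unit normals n (k x 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),                      # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,        # l = 1
        1.092548 * x * y, 1.092548 * y * z,              # l = 2
        0.315392 * (3.0 * z ** 2 - 1.0),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=1)

def fit_illumination(normals, intensities):
    """Least-squares SH coefficients; evaluating the fit de-noises the illumination."""
    B = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs, B @ coeffs
```

Because the basis is low-degree, the fit discards high-frequency measurement noise while keeping the smooth illumination component.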
Brazilian Symposium on Computer Graphics and Image Processing | 2008
Helena Cristina da Gama Leitão; Rafael Felipe V. Saracchini; Jorge Stolfi
We describe a procedure to solve the basic problem of variable-lighting photometric stereo, namely recovering the normal directions and intrinsic albedos at all visible points of an opaque object by analyzing three or more photos of the same object taken under different illuminations. We follow the gauge-based approach, where the lighting conditions and light-scattering properties of the surface are given indirectly by photographing a gauge object with known shape and albedo under the same lighting conditions. Unlike previous solutions, our method yields reliable results even when some of the images contain cast shadows, penumbras, highlights, or inter-object lighting, at a cost: the cost of the inner loop grows quadratically (rather than exponentially) with the number m of input images. Usable approximations can be obtained in m log m time.
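For intuition, the simplest special case that the gauge-based approach generalizes, a Lambertian surface under distant point sources, can be solved in closed form per pixel: with m light directions the intensities satisfy I = albedo * L @ n, a linear system in albedo * n. A simplified sketch of that classic case (not the paper's gauge-based method):

```python
import numpy as np

def lambertian_stereo(L, I):
    """Classic Lambertian photometric stereo at one pixel.
    L: m x 3 unit light directions; I: m observed intensities.
    Returns (albedo, unit normal) from the least-squares fit of I = albedo * L @ n."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * n
    albedo = np.linalg.norm(g)
    return albedo, g / albedo
```

The gauge-based method replaces this analytic reflectance model with a lookup into photographs of the gauge object, which is what lets it cope with shadows, highlights, and non-Lambertian materials.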
British Machine Vision Conference | 2010
Rafael Felipe V. Saracchini; Jorge Stolfi; Helena Cristina da Gama Leitão; Gary A. Atkinson; Melvyn L. Smith
We describe a robust method to recover the depth coordinate from a normal or slope map of a scene, obtained e.g. through photometric stereo or interferometry. The key feature of our method is the fast solution of the Poisson-like integration equations by a multi-scale iterative technique. The method accepts a weight map that can be used to exclude regions where the slope information is missing or untrusted, and to allow the integration of height maps with linear discontinuities (such as along object silhouettes) which are not recorded in the slope maps. Except for pathological cases, the memory and time costs of our method are typically proportional to the number of pixels N. Tests show that our method is as accurate as the best weighted slope integrators, but substantially more efficient in time and space.
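A single-level Gauss-Seidel sweep over the Poisson-like normal equations looks as follows; the paper's multi-scale acceleration, which makes the cost near-linear, and its weight map are omitted here for brevity (uniform weights assumed, names illustrative):

```python
import numpy as np

def gauss_seidel_integrate(p, q, z, iters=3000):
    """Gauss-Seidel relaxation for the normal equations of
    min sum (z[i,j+1]-z[i,j]-p[i,j])^2 + (z[i+1,j]-z[i,j]-q[i,j])^2.
    p: H x W-1 horizontal slopes; q: H-1 x W vertical slopes; z: initial guess."""
    H, W = z.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                acc, cnt = 0.0, 0
                # each existing neighbor votes for z[i,j] via its slope sample
                if j + 1 < W: acc += z[i, j + 1] - p[i, j];     cnt += 1
                if j >= 1:    acc += z[i, j - 1] + p[i, j - 1]; cnt += 1
                if i + 1 < H: acc += z[i + 1, j] - q[i, j];     cnt += 1
                if i >= 1:    acc += z[i - 1, j] + q[i - 1, j]; cnt += 1
                z[i, j] = acc / cnt
    return z
```

Plain Gauss-Seidel converges slowly on large grids; solving coarse versions of the grid first and upsampling the result as the initial guess is what the multi-scale technique adds.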
Pacific-Rim Symposium on Image and Video Technology | 2011
Rafael Felipe V. Saracchini; Jorge Stolfi; Helena Cristina da Gama Leitão; Gary A. Atkinson; Melvyn L. Smith
We describe a fast and robust gradient integration method that computes scene depths (or heights) from surface gradient (or surface normal) data such as would be obtained by photometric stereo or interferometry. Our method allows for uncertain or missing samples, which are often present in experimentally measured gradient maps; for sharp discontinuities in the scene's depth, e.g. along object silhouette edges; and for irregularly spaced sampling points. To accommodate these features of the problem, we use an original and flexible representation of slope data, the weight-delta mesh. Like other state-of-the-art solutions, our algorithm reduces the problem to a system of linear equations that is solved by Gauss-Seidel iteration with multi-scale acceleration. Its novel key step is a mesh decimation procedure that preserves the connectivity of the initial mesh. Tests with various synthetic and measured gradient data show that our algorithm is as accurate and efficient as the best available integrators for uniformly sampled data. Moreover, our algorithm remains accurate and efficient even for large sets of weakly-connected instances of the problem, which cannot be efficiently handled by any existing algorithm.
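On irregularly spaced samples, the same Gauss-Seidel relaxation runs over an edge list rather than a pixel grid. A minimal sketch of that idea (a simplified stand-in for the paper's weight-delta mesh, without the decimation step; node 0 is pinned to fix the free constant):

```python
import numpy as np

def mesh_integrate(n, edges, iters=1000):
    """Weighted least-squares heights on an irregular mesh by Gauss-Seidel.
    edges: list of (u, v, delta, w) asking z[v] - z[u] ~= delta with weight w."""
    z = np.zeros(n)
    nbrs = [[] for _ in range(n)]
    for u, v, d, w in edges:
        nbrs[u].append((v, -d, w))   # z[u] wants to equal z[v] - d
        nbrs[v].append((u, +d, w))   # z[v] wants to equal z[u] + d
    for _ in range(iters):
        for i in range(1, n):        # node 0 stays pinned at 0
            num = sum(w * (z[j] + off) for j, off, w in nbrs[i])
            den = sum(w for _, _, w in nbrs[i])
            z[i] = num / den         # weighted average of neighbor votes
    return z
```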
IEEE Transactions on Image Processing | 2011
Rafael Felipe V. Saracchini; Jorge Stolfi; Helena Cristina da Gama Leitão
In this paper, we describe a data structure and an algorithm to accelerate the table lookup step in example-based multi-image photometric stereo. In that step, one must find a pixel of a reference object, of known shape and color, whose appearance under different illumination fields is similar to that of a given scene pixel. This search reduces to finding the closest match to a given m-vector in a table with a thousand or more m-vectors. Our method is faster than previously known solutions for this problem but, unlike some of them, is exact, i.e., always yields the best matching entry in the table, and does not assume point-like sources. Our solution exploits the fact that the table is in fact a fairly flat 2-D manifold in m-dimensional space, so that the search can be efficiently solved with a uniform 2-D grid structure.
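The core idea, indexing the nearly flat 2-D manifold of m-vectors with a 2-D grid, can be sketched as below. This sketch projects onto the two principal components and can miss the true nearest neighbor in corner cases, whereas the paper's structure is exact; all names are illustrative:

```python
import numpy as np
from collections import defaultdict

class BucketGrid:
    """Nearest-match lookup for a table of m-vectors lying near a flat 2-D manifold."""
    def __init__(self, table, cell=0.1):
        self.table, self.cell = table, cell
        self.mean = table.mean(axis=0)
        # the two leading principal directions span the (nearly) flat manifold
        _, _, Vt = np.linalg.svd(table - self.mean)
        self.P = Vt[:2].T
        self.buckets = defaultdict(list)
        for i, uv in enumerate((table - self.mean) @ self.P):
            self.buckets[self._cell(uv)].append(i)

    def _cell(self, uv):
        return (int(np.floor(uv[0] / self.cell)), int(np.floor(uv[1] / self.cell)))

    def query(self, v):
        """Expand square rings of 2-D cells until candidates appear, then
        rank candidates by their full m-dimensional distance."""
        cu, cv = self._cell((v - self.mean) @ self.P)
        for r in range(64):
            cand = [i for du in range(-r, r + 1) for dv in range(-r, r + 1)
                    for i in self.buckets.get((cu + du, cv + dv), [])]
            if cand:
                return min(cand, key=lambda i: np.linalg.norm(self.table[i] - v))
        return None
```

Because each query touches only a few cells, the cost is close to constant per pixel instead of linear in the table size.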
Brazilian Symposium on Computer Graphics and Image Processing | 2007
Helena Cristina da Gama Leitão; Rafael Felipe V. Saracchini; Jorge Stolfi
In this paper, we show how to speed up the table lookup step in gauge-based multi-image photometric stereo. In that step, one must find a pixel of a gauge object, of known shape and color, whose appearance under m different illumination fields is similar to that of a given scene pixel. This search reduces to finding the closest match to a given m-vector in a table with a thousand or more m-vectors. Our speed-up method exploits the fact that the table is in fact a fairly flat two-dimensional manifold in m-dimensional space, so that the search can be efficiently solved with a two-dimensional bucket grid structure.
International Conference on Computer Vision Theory and Applications | 2016
Rafael Felipe V. Saracchini; Carlos A. Catalina; Rodrigo Minetto; Jorge Stolfi
In this paper we describe VOPT, a robust algorithm for visual odometry. It tracks features of the environment with known positions in space, which can be acquired through monocular or RGBD SLAM mapping algorithms. The main idea of VOPT is to jointly optimize the matching of feature projections on successive frames, the camera's extrinsic matrix, the photometric correction parameters, and the weight of each feature, by a multi-scale iterative procedure. VOPT uses GPU acceleration to achieve real-time performance, and includes robust procedures for automatic initialization and recovery, without user intervention. Our tests show that VOPT outperforms the PTAMM algorithm in challenging, publicly available videos.
International Joint Conference on Computer Vision, Imaging and Computer Graphics | 2016
Rafael Felipe V. Saracchini; Carlos A. Catalina; Rodrigo Minetto; Jorge Stolfi
In this paper we describe VOPT (Visual Odometry by Patch Tracking), a robust algorithm for visual odometry that is able to operate with sparse or dense maps computed by simultaneous localization and mapping (SLAM) algorithms. By using an iterative multi-scale procedure, VOPT estimates the individual motion, photometric correction, and tracking reliability of a set of planar patches. To overcome the high computational cost of the patch adjustment, we use a GPU-based least-squares solver, achieving real-time performance. The algorithm can also be used as a building block for other procedures, such as automatic initialization and recovery of the 3D scene. Our tests show that VOPT outperforms the well-known PTAMM algorithm and the state-of-the-art ORB-SLAM algorithm in challenging videos using the same input maps.
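One ingredient of the approach, per-patch photometric correction with a reliability weight, can be sketched as a small least-squares fit; the gain/bias model and the weighting rule below are illustrative assumptions, not VOPT's exact formulation:

```python
import numpy as np

def photometric_fit(ref, cur):
    """Fit gain a and bias b so that a*cur + b ~= ref for one patch's pixels,
    and derive a reliability weight that penalizes a large residual."""
    A = np.stack([cur, np.ones_like(cur)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, ref, rcond=None)
    resid = np.sqrt(np.mean((a * cur + b - ref) ** 2))  # RMS photometric error
    weight = 1.0 / (1.0 + resid)                        # high residual -> low trust
    return a, b, weight
```

In a tracker, such weights let well-matched patches dominate the pose update while poorly matched (occluded or distorted) patches are gradually ignored.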
International Conference on Information Visualization Theory and Applications | 2015
Helena Cristina da Gama Leitão; Rafael Felipe V. Saracchini; Jorge Stolfi
This article describes a three-channel encoding of nucleotide sequences, together with formulas for filtering and downsampling such encoded sequences for multi-scale signal analysis. With suitable interpolation, the encoded sequences can be visualized as curves in three-dimensional space. The filtering uses Gaussian-like smoothing kernels, chosen so that all levels of the multi-scale pyramid (except the original curve) are practically free from aliasing artifacts and have the same degree of smoothing. With these precautions, the overall shape of the space curve is robust under small changes in the DNA sequence, such as single-point mutations, insertions, deletions, and shifts.
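The encoding-and-pyramid idea can be sketched as follows; the tetrahedron-vertex code and the truncated Gaussian kernel below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

# hypothetical 3-channel code: the four bases at vertices of a regular tetrahedron,
# so all pairs of bases are equidistant in the 3-D signal space
CODE = {'A': (1, 1, 1), 'C': (1, -1, -1), 'G': (-1, 1, -1), 'T': (-1, -1, 1)}

def encode(seq):
    """Map a nucleotide string to an N x 3 real-valued signal."""
    return np.array([CODE[b] for b in seq], dtype=float)

def smooth_downsample(x, sigma=1.0):
    """One pyramid level: Gaussian-like smoothing along the sequence axis,
    then decimation by 2 (the smoothing suppresses aliasing)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()                                   # unit-gain kernel
    y = np.stack([np.convolve(x[:, c], k, mode='same') for c in range(3)], axis=1)
    return y[::2]
```

Repeating `smooth_downsample` yields the multi-scale pyramid; linearly interpolating any level gives the 3-D space curve used for visualization.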