
Publications


Featured research published by Samuel Dambreville.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

A Framework for Image Segmentation Using Shape Models and Kernel Space Shape Priors

Samuel Dambreville; Yogesh Rathi; Allen R. Tannenbaum

Segmentation involves separating an object from the background in a given image. The use of image information alone often leads to poor segmentation results due to the presence of noise, clutter or occlusion. The introduction of shape priors in the geometric active contour (GAC) framework has proved to be an effective way to ameliorate some of these problems. In this work, we propose a novel segmentation method combining image information with prior shape knowledge, using level-sets. Following the work of Leventon et al., we propose to revisit the use of PCA to introduce prior knowledge about shapes in a more robust manner. We utilize kernel PCA (KPCA) and show that this method outperforms linear PCA by allowing only those shapes that are close enough to the training data. In our segmentation framework, shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description makes it possible to take full advantage of the KPCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, occlusions, or smearing.


Electronic Imaging | 2006

Statistical shape analysis using kernel PCA

Yogesh Rathi; Samuel Dambreville; Allen R. Tannenbaum

Mercer kernels are used for a wide range of image and signal processing tasks, such as de-noising, clustering, and discriminant analysis. These algorithms construct their solutions in terms of expansions in a high-dimensional feature space F. However, many applications, such as kernel PCA (principal component analysis), can be used more effectively if a pre-image of the projection in the feature space is available. In this paper, we propose a novel method to reconstruct a unique approximate pre-image of a feature vector and apply it to statistical shape analysis. We provide experimental results that demonstrate the advantages of kernel PCA over linear PCA for shape learning, which include, among others, the ability to learn and distinguish multiple geometries of shapes and robustness to occlusions.
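The pre-image idea described above can be sketched with off-the-shelf tools. This is a minimal illustration rather than the authors' method: scikit-learn's KernelPCA learns an approximate pre-image map when fit_inverse_transform=True, and the toy square shapes, RBF kernel, and gamma value are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Toy "shapes": flattened 8x8 binary masks of axis-aligned squares
def square_mask(size, offset):
    m = np.zeros((8, 8))
    m[offset:offset + size, offset:offset + size] = 1.0
    return m.ravel()

train = np.array([square_mask(s, o) for s in (3, 4, 5) for o in (0, 1, 2)])

# Kernel PCA with an RBF kernel; fit_inverse_transform=True learns an
# approximate map from feature space back to input space (the pre-image).
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True)
codes = kpca.fit_transform(train)

# Project a noisy shape into feature space and reconstruct its pre-image:
rng = np.random.default_rng(0)
noisy = square_mask(4, 1) + 0.3 * rng.standard_normal(64)
pre_image = kpca.inverse_transform(kpca.transform(noisy[None, :]))
```

The reconstructed pre-image is pulled toward the span of the training shapes, which is the property exploited for shape learning.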


Computer Vision and Pattern Recognition | 2006

Shape-Based Approach to Robust Image Segmentation using Kernel PCA

Samuel Dambreville; Yogesh Rathi; Allen R. Tannenbaum

Segmentation involves separating an object from the background. In this work, we propose a novel segmentation method combining image information with prior shape knowledge, within the level-set framework. Following the work of Leventon et al., we revisit the use of principal component analysis (PCA) to introduce prior knowledge about shapes in a more robust manner. To this end, we utilize Kernel PCA and show that this method of learning shapes outperforms linear PCA, by allowing only shapes that are close enough to the training data. In the proposed segmentation algorithm, shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description makes it possible to take full advantage of the Kernel PCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, clutter, partial occlusions, or smearing.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Point Set Registration via Particle Filtering and Stochastic Dynamics

Romeil Sandhu; Samuel Dambreville; Allen R. Tannenbaum

In this paper, we propose a particle filtering approach for the problem of registering two point sets that differ by a rigid body transformation. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in pose parameters obtained by running a few iterations of a certain local optimizer. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer approaches for registration. Thus, the novelty of our method is threefold: First, we employ a particle filtering scheme to drive the point set registration process. Second, we present a local optimizer that is motivated by the correlation measure. Third, we increase the robustness of the registration performance by introducing a dynamic model of uncertainty for the transformation parameters. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity (with respect to particle size) as well as maintains the temporal coherency of the state (no loss of information). Also unlike some alternative approaches for point set registration, we make no geometric assumptions on the two data sets. Experimental results are provided that demonstrate the robustness of the algorithm to initialization, noise, missing structures, and/or differing point densities in each set, on several challenging 2D and 3D registration scenarios.
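The predict/weight/resample loop described above can be sketched for a toy 2D rigid registration. This is illustrative only: the exponential likelihood, noise scale, and particle count are assumptions, and the paper's local-optimizer refinement step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model point set and a scene that is the model under a rigid motion
model = rng.uniform(-1, 1, size=(40, 2))
theta_true, t_true = 0.4, np.array([0.5, -0.2])
Rt = np.array([[np.cos(theta_true), -np.sin(theta_true)],
               [np.sin(theta_true),  np.cos(theta_true)]])
scene = model @ Rt.T + t_true

def fitness(theta, t):
    """Likelihood-style weight: small nearest-neighbour residual = high weight."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    moved = model @ R.T + t
    d = np.linalg.norm(moved[:, None, :] - scene[None, :, :], axis=2).min(axis=1)
    return np.exp(-20.0 * d.mean())

# Particles over the pose (theta, tx, ty)
particles = np.column_stack([rng.uniform(-1, 1, 200),
                             rng.uniform(-1, 1, (200, 2))])
for _ in range(30):
    particles += rng.normal(0, 0.05, particles.shape)  # stochastic dynamics
    w = np.array([fitness(p[0], p[1:]) for p in particles])
    w /= w.sum()
    idx = rng.choice(200, size=200, p=w)               # resample
    particles = particles[idx]

est = particles.mean(axis=0)   # pose estimate (theta, tx, ty)
```

The injected noise plays the role of the stochastic dynamics in the paper, widening the band of convergence beyond what a purely local search would cover.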


American Control Conference | 2006

Tracking deformable objects with unscented Kalman filtering and geometric active contours

Samuel Dambreville; Yogesh Rathi; Allen R. Tannenbaum

Geometric active contours represented as the zero level sets of the graph of a surface have been used very successfully to segment static images. However, tracking involves estimating the global motion of the object and its local deformations as functions of time. Some attempts have been made to use geometric active contours for tracking, but most of these minimize the energy at each frame and do not utilize the temporal coherency of the motion or the deformation. Recently, particle filters for geometric active contours were used for tracking deforming objects. However, that method is computationally very expensive since it requires a large number of particles to approximate the state density. In the present work, we propose to use the unscented Kalman filter together with geometric active contours to track deformable objects in a computationally efficient manner.
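The unscented transform at the heart of the unscented Kalman filter can be sketched briefly. The following is a generic sigma-point construction (the scaling parameter kappa is a standard knob, not tied to the paper's contour state parameterization):

```python
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """2n+1 sigma points capturing mean x and covariance P deterministically."""
    n = len(x)
    L = np.linalg.cholesky((n + kappa) * P)
    # Rows: x itself, then x plus/minus the columns of sqrt((n+kappa) P)
    return np.vstack([x, x + L.T, x - L.T])

def unscented_mean_cov(pts, kappa=0.0):
    """Weighted mean and covariance of (possibly transformed) sigma points."""
    n = (len(pts) - 1) // 2
    w = np.full(len(pts), 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    mean = w @ pts
    diff = pts - mean
    return mean, (w[:, None] * diff).T @ diff

# Passing the points through the identity map recovers x and P exactly
x = np.array([0.0, 1.0])
P = np.array([[0.5, 0.1], [0.1, 0.4]])
m, C = unscented_mean_cov(sigma_points(x, P, kappa=1.0), kappa=1.0)
```

Because only 2n+1 deterministic points are propagated per step, this is far cheaper than approximating the state density with a large particle set.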


European Conference on Computer Vision | 2008

Robust 3D Pose Estimation and Efficient 2D Region-Based Segmentation from a 3D Shape Prior

Samuel Dambreville; Romeil Sandhu; Anthony J. Yezzi; Allen R. Tannenbaum

In this work, we present an approach to jointly segment a rigid object in a 2D image and estimate its 3D pose, using the knowledge of a 3D model. We naturally couple the two processes together into a unique energy functional that is minimized through a variational approach. Our methodology differs from the standard monocular 3D pose estimation algorithms since it does not rely on local image features. Instead, we use global image statistics to drive the pose estimation process. This confers a satisfying level of robustness to noise and initialization for our algorithm, and bypasses the need to establish correspondences between image and object features. Moreover, our methodology possesses the typical qualities of region-based active contour techniques with shape priors, such as robustness to occlusions or missing information, without the need to evolve an infinite dimensional curve. Another novelty of the proposed contribution is to use a unique 3D model surface of the object, instead of learning a large collection of 2D shapes to accommodate the diverse aspects that a 3D object can take when imaged by a camera. Experimental results on both synthetic and real images are provided, which highlight the robust performance of the technique on challenging tracking and segmentation applications.
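The "global image statistics" driving the pose estimation can be illustrated with a Chan-Vese-style region energy: a pose hypothesis that projects the 3D model onto the right image region separates foreground and background intensities better than a wrong one. A toy sketch (the synthetic image and the masks standing in for projected model silhouettes are assumptions):

```python
import numpy as np

def region_energy(image, mask):
    """Global-statistics fit: within-region intensity variance of the
    partition induced by the (projected) object region `mask`."""
    inside, outside = image[mask], image[~mask]
    return (((inside - inside.mean()) ** 2).sum()
            + ((outside - outside.mean()) ** 2).sum())

# Synthetic image: bright 20x20 square on a dark background, mild noise
rng = np.random.default_rng(2)
img = rng.normal(0.1, 0.02, (64, 64))
img[20:40, 20:40] += 0.8

true_mask = np.zeros((64, 64), dtype=bool)     # correct pose hypothesis
true_mask[20:40, 20:40] = True
shifted_mask = np.roll(true_mask, 8, axis=1)   # a wrong pose hypothesis
```

Minimizing such an energy over the pose parameters needs no point correspondences, which is what gives the approach its robustness to noise and initialization.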


Computer Vision and Pattern Recognition | 2008

Particle filtering for registration of 2D and 3D point sets with stochastic dynamics

Romeil Sandhu; Samuel Dambreville; Allen R. Tannenbaum

In this paper, we propose a particle filtering approach for the problem of registering two point sets that differ by a rigid body transformation. Typically, registration algorithms compute the transformation parameters by maximizing a metric given an estimate of the correspondence between points across the two sets of interest. This can be viewed as a posterior estimation problem, in which the corresponding distribution can naturally be estimated using a particle filter. In this work, we treat motion as a local variation in pose parameters obtained from running a few iterations of the standard Iterative Closest Point (ICP) algorithm. Employing this idea, we introduce stochastic motion dynamics to widen the narrow band of convergence often found in local optimizer functions used to tackle the registration task. Thus, the novelty of our method is twofold: First, we employ a particle filtering scheme to drive the point set registration process. Second, we increase the robustness of the registration performance by introducing a dynamic model of uncertainty for the transformation parameters. In contrast with other techniques, our approach requires no annealing schedule, which results in a reduction in computational complexity as well as maintains the temporal coherency of the state (no loss of information). Also, unlike most alternative approaches for point set registration, we make no geometric assumptions on the two data sets. Experimental results are provided that demonstrate the robustness of the algorithm to initialization, noise, missing structures or differing point densities in each set, on challenging 2D and 3D registration tasks.
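The ICP local optimizer mentioned above can be sketched in a few lines: match each model point to its nearest scene point, then solve for the best-fit rigid transform in closed form (the Kabsch/Procrustes solution). A minimal 2D illustration with toy data; the paper's stochastic dynamics are not included.

```python
import numpy as np

def icp_step(model, scene):
    """One Iterative Closest Point step: nearest-neighbour correspondences,
    then the optimal rigid transform via the Kabsch/Procrustes solution."""
    d = np.linalg.norm(model[:, None, :] - scene[None, :, :], axis=2)
    matched = scene[d.argmin(axis=1)]        # closest scene point per model point
    mu_m, mu_s = model.mean(0), matched.mean(0)
    H = (model - mu_m).T @ (matched - mu_s)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return model @ R.T + t, R, t

# Demo: the scene is the model under a small rigid motion
rng = np.random.default_rng(3)
model = rng.uniform(-1, 1, (30, 2))
th = 0.2
Rt = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
scene = model @ Rt.T + np.array([0.1, -0.05])
cur = model
for _ in range(15):                          # a few local-optimizer iterations
    cur, R_, t_ = icp_step(cur, scene)
```

Each step only decreases the matching cost locally, which is exactly the narrow band of convergence the particle filter's stochastic dynamics are meant to widen.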


Computer Vision and Pattern Recognition | 2009

Non-rigid 2D-3D pose estimation and 2D image segmentation

Romeil Sandhu; Samuel Dambreville; Anthony J. Yezzi; Allen R. Tannenbaum

In this work, we present a non-rigid approach to jointly solve the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks which couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, in non-ideal conditions, this assumption may be violated when only the general class to which a given shape belongs is known (e.g., cars, boats, or planes). Thus, the key contribution in this work is to solve the 2D-3D pose estimation and 2D image segmentation for a general class of objects or deformations for which one may not be able to associate a skeleton model. Moreover, the resulting scheme can be viewed as an extension of a previously presented framework, in which we include the knowledge of multiple 3D models rather than assuming exact knowledge of a single 3D shape prior. We provide experimental results that highlight the algorithm's robustness to noise, clutter, occlusion, and shape recovery on several challenging pose estimation and segmentation scenarios.


SIAM Journal on Imaging Sciences | 2010

A Geometric Approach to Joint 2D Region-Based Segmentation and 3D Pose Estimation Using a 3D Shape Prior

Samuel Dambreville; Romeil Sandhu; Anthony J. Yezzi; Allen R. Tannenbaum

In this work, we present an approach to jointly segment a rigid object in a two-dimensional (2D) image and estimate its three-dimensional (3D) pose, using the knowledge of a 3D model. We naturally couple the two processes together into a shape optimization problem and minimize a unique energy functional through a variational approach. Our methodology differs from the standard monocular 3D pose estimation algorithms since it does not rely on local image features. Instead, we use global image statistics to drive the pose estimation process. This confers a satisfying level of robustness to noise and initialization for our algorithm and bypasses the need to establish correspondences between image and object features. Moreover, our methodology possesses the typical qualities of region-based active contour techniques with shape priors, such as robustness to occlusions or missing information, without the need to evolve an infinite dimensional curve. Another novelty of the proposed contribution is to use a unique 3D model surface of the object, instead of learning a large collection of 2D shapes to accommodate the diverse aspects that a 3D object can take when imaged by a camera. Experimental results on both synthetic and real images are provided, which highlight the robust performance of the technique in challenging tracking and segmentation applications.


International Conference on Image Analysis and Recognition | 2006

A shape-based approach to robust image segmentation

Samuel Dambreville; Yogesh Rathi; Allen R. Tannenbaum

We propose a novel segmentation approach for introducing shape priors in the geometric active contour framework. Following the work of Leventon et al., we propose to revisit the use of linear principal component analysis (PCA) to introduce prior knowledge about shapes in a more robust manner. Our contribution in this paper is twofold. First, we demonstrate that building a space of familiar shapes by applying PCA on binary images (instead of signed distance functions) enables one to constrain the contour evolution in a way that is more faithful to the elements of a training set. Second, we present a novel region-based segmentation framework, able to separate regions of different intensities in an image. Shape knowledge and image information are encoded into two energy functionals entirely described in terms of shapes. This consistent description allows for the simultaneous encoding of multiple types of shapes and leads to promising segmentation results. In particular, our shape-driven segmentation technique offers a convincing level of robustness with respect to noise, clutter, partial occlusions, and blurring.
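Applying linear PCA directly to binary images can be sketched as follows: learn an eigen-shape basis from flattened masks, then project a candidate shape onto it, constraining the contour toward familiar shapes. A toy illustration (the nested-square training shapes and the component count are assumptions):

```python
import numpy as np

# Training set: flattened 16x16 binary masks of centered squares
def square(size):
    m = np.zeros((16, 16))
    o = (16 - size) // 2
    m[o:o + size, o:o + size] = 1.0
    return m.ravel()

train = np.array([square(s) for s in range(4, 13)])

# Linear PCA on the binary images themselves (not signed distance functions)
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:3]                      # top-3 eigen-shapes

def project_to_shape_space(shape):
    """Constrain an arbitrary mask to the learned subspace of familiar shapes."""
    coeff = basis @ (shape.ravel() - mean)
    return mean + coeff @ basis

# A noisy candidate shape gets pulled toward the eigen-shape subspace
noisy = square(8) + 0.2 * np.random.default_rng(4).standard_normal(256)
constrained = project_to_shape_space(noisy)
```

Working on binary masks keeps the reconstruction in the same representation as the training elements, which is the fidelity argument made above.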

Collaboration


Dive into Samuel Dambreville's collaborations.

Top Co-Authors

Yogesh Rathi (Brigham and Women's Hospital)
Romeil Sandhu (Georgia Institute of Technology)
Anthony J. Yezzi (Georgia Institute of Technology)
Shawn Lankton (Georgia Institute of Technology)
Marc Niethammer (University of North Carolina at Chapel Hill)
James G. Malcolm (Georgia Institute of Technology)
Ajit P. Yoganathan (Georgia Institute of Technology)
Allen R. Tannenbaum (Georgia Institute of Technology)