Etienne Grossmann
Intel
Publications
Featured research published by Etienne Grossmann.
Image and Vision Computing | 2000
Etienne Grossmann; José Santos-Victor
We consider reconstruction algorithms using points tracked over a sequence of (at least three) images, to estimate the positions of the cameras (motion parameters), the 3D coordinates (structure parameters), and the calibration matrix of the cameras (calibration parameters). Many algorithms have been reported in the literature, and there is a need to know how well they may perform. We show how the choice of assumptions on the camera intrinsic parameters (either fixed, or with a probabilistic prior) influences the precision of the estimator. We associate a Maximum Likelihood estimator to each type of assumption, and derive analytically their covariance matrices, independently of any specific implementation. We verify that the obtained covariance matrices are realistic, and compare the relative performance of each type of estimator.
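The analytical covariance derivation described above can be illustrated with a minimal sketch (our toy construction, not the paper's derivation): for a Gaussian observation model y = f(theta) + noise with covariance Sigma, the Maximum Likelihood estimator's asymptotic covariance is the inverse Fisher information, (J^T Sigma^{-1} J)^{-1}, where J is the Jacobian of f. This is implementation-independent in the same sense: it only needs the model, not the optimizer.

```python
import numpy as np

def ml_covariance(J, Sigma):
    """Asymptotic covariance of an ML estimate from the model Jacobian J
    and the observation-noise covariance Sigma (inverse Fisher information)."""
    info = J.T @ np.linalg.inv(Sigma) @ J   # Fisher information matrix
    return np.linalg.inv(info)

# Toy example: two parameters observed through three linear measurements
# with i.i.d. noise of variance 0.01.
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Sigma = 0.01 * np.eye(3)
C = ml_covariance(J, Sigma)
print(C)
```

The third (redundant) measurement shrinks the covariance relative to using the first two alone, which is the kind of precision comparison the abstract refers to.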
Computer Vision and Image Understanding | 2005
Etienne Grossmann; José Santos-Victor
We present a method to reconstruct from one or more images a scene that is rich in planes, alignments, symmetries, orthogonalities, and other forms of geometrical regularity. Given image points of interest and some geometric information, the method recovers least-squares estimates of the 3D points, camera position(s), orientation(s), and possibly calibration(s). Our contributions lie (i) in a novel way of exploiting some types of symmetry and of geometric regularity, (ii) in treating indifferently one or more images, (iii) in a geometric test that indicates whether the input data uniquely defines a reconstruction, and (iv) in a parameterization method for collections of 3D points subject to geometric constraints. Moreover, the reconstruction algorithm lends itself to sensitivity analysis. The method is benchmarked on synthetic data and its effectiveness is shown on real-world data.
Computer Vision and Image Understanding | 2010
Etienne Grossmann; José António Gaspar; Francesco Orabona
We consider the problem of estimating the relative orientation of a number of individual photocells - or pixels - that hold fixed relative positions. The photocells measure the intensity of light traveling on a pencil of lines. We assume that the light-field thus sampled is changing, e.g. as the result of motion of the sensors, and use the obtained measurements to estimate the orientations of the photocells. Our approach is based on correlation and information-theoretic dissimilarity measures. Experiments with real-world data show that the dissimilarity measures are strongly related to the angular separation between the photocells, and the relation can be modeled quantitatively. In particular, we show that this model allows us to estimate the angular separation from the dissimilarity. Although the resulting estimators are not very accurate, they maintain their performance throughout different visual environments, suggesting that the model encodes a very general property of our visual world. Finally, leveraging this method to estimate angles from signal pairs, we show how distance geometry techniques allow us to recover the complete sensor geometry.
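The final distance-geometry step can be sketched as follows (a hypothetical illustration, not the paper's pipeline): given pairwise angular separations between photocell directions, convert them to chord distances between unit vectors and apply classical multidimensional scaling to recover the layout up to a global rotation.

```python
import numpy as np

def embed_from_angles(angles):
    """Recover 3D positions (up to rotation/translation) from an (n, n)
    matrix of pairwise angular separations in radians, via classical MDS."""
    D2 = (2.0 * np.sin(angles / 2.0)) ** 2       # squared chord distances
    n = D2.shape[0]
    Jc = np.eye(n) - np.ones((n, n)) / n         # centering matrix
    G = -0.5 * Jc @ D2 @ Jc                      # Gram matrix of positions
    w, V = np.linalg.eigh(G)                     # eigenvalues, ascending
    idx = np.argsort(w)[::-1][:3]                # keep top-3 -> 3D embedding
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy check: three mutually orthogonal directions (pairwise angle pi/2).
true_dirs = np.eye(3)
ang = np.arccos(np.clip(true_dirs @ true_dirs.T, -1.0, 1.0))
X = embed_from_angles(ang)
print(np.linalg.norm(X[0] - X[1]))               # should match the chord length
```

The recovered points reproduce the pairwise chord distances exactly; in the paper's setting the input angles are noisy estimates, so the embedding acts as a global consistency step.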
British Machine Vision Conference | 1998
Etienne Grossmann; José Santos-Victor
We consider reconstruction algorithms using points tracked over a sequence of (at least three) images, to estimate the positions of the cameras (motion parameters), the 3D coordinates (structure parameters), and the calibration matrix of the cameras (calibration parameters). Many algorithms have been reported in the literature, and there is a need to know how well they may perform. We show how the choice of assumptions on the camera intrinsic parameters (either fixed, or with a probabilistic prior) influences the precision of the estimator. We associate a Maximum Likelihood estimator to each type of assumption, and derive analytically their covariance matrices, independently of any specific implementation. We verify that the obtained covariance matrices are realistic, and compare the relative performance of each type of estimator.
Innovations in Intelligent Machines (1) | 2007
José António Gaspar; Niall Winters; Etienne Grossmann; José Santos Victor
Vision is an extraordinarily powerful sense. The ability to perceive the environment allows for movement to be regulated by the world. Humans do this effortlessly, yet we still lack an understanding of how perception works. In the case of visual perception, many researchers, from psychologists to engineers, are working on this complex problem. Our approach is to build artificial visual systems to examine how a robot can use images, which convey only 2D information, in a robust manner to drive its actions in 3D space. The perceptual capabilities we developed allowed our robot to undertake everyday navigation tasks, such as “go to the fourth office in the second corridor”. A critical component of any perceptual system, human or artificial, is the sensing modality used to obtain information about the environment. In the biological world, for example, one striking observation is the diversity of “ocular” geometries. The majority of insects and arthropods benefit from a wide field of view and their eyes have a space-variant resolution. To some extent, the perceptual capabilities of these animals can be explained by their specially adapted eye geometries. Similarly, in this work, we explore the advantages of having large fields of view by using an omnidirectional camera with a 360° azimuthal field of view. Once images have been acquired by the omnidirectional camera, a question arises as to what to do with them. Should they form an internal representation of the world? Over time, can they provide intrinsic information about the world such that no representation is required? These fundamental questions have long been addressed by the computer vision community and go to the heart of our current understanding of visual perception. Before going on to detail our approach, a brief overview of this understanding will be provided.
British Machine Vision Conference | 2000
Etienne Grossmann; José Santos-Victor
We consider the problem of representing sets of 3D points in the context of 3D reconstruction from point matches. We present a new representation for sets of 3D points, which is general, compact and expressive: any set of points can be represented; geometric relations that are often present in man-made scenes, such as coplanarity, alignment and orthogonality, are explicitly expressed. In essence, we propose to define each 3D point by three independent linear constraints that it verifies, and exploit the fact that coplanar points verify a common constraint. We show how to use the dual representation in Maximum Likelihood estimation, and that it substantially improves the precision of 3D reconstruction.
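The core idea of defining each point by three linear constraints can be sketched in a few lines (an illustrative toy, with our own names, not the paper's formulation): a 3D point is the intersection of three planes n · x = d, and coplanar points simply share one plane row, so coplanarity is built into the representation rather than enforced afterwards.

```python
import numpy as np

# A plane is a pair (normal, offset) encoding the constraint n . x = d.
floor = (np.array([0.0, 0.0, 1.0]), 0.0)      # shared plane: z = 0

def point_from_planes(p1, p2, p3):
    """Intersect three planes; unique when the normals are independent."""
    N = np.vstack([p1[0], p2[0], p3[0]])
    d = np.array([p1[1], p2[1], p3[1]])
    return np.linalg.solve(N, d)

# Two points that share the floor constraint and are therefore coplanar
# by construction, whatever their other two constraints are.
a = point_from_planes(floor, (np.array([1.0, 0.0, 0.0]), 1.0),
                             (np.array([0.0, 1.0, 0.0]), 2.0))
b = point_from_planes(floor, (np.array([1.0, 0.0, 0.0]), 3.0),
                             (np.array([0.0, 1.0, 0.0]), 4.0))
print(a, b)
```

In an estimation setting the plane parameters, rather than the point coordinates, become the unknowns, which is what makes shared constraints improve precision.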
Robotics and Autonomous Systems | 1997
Etienne Grossmann; José Santos-Victor
Over the years, computer vision researchers have developed a number of algorithms to solve a large number of problems. However, most of the existing algorithms are not characterized in terms of their performance, accuracy, cost, etc. Consequently, it is hardly ever possible to compare and choose between these various algorithms to tackle a specific problem. One of the contributions of this paper is the introduction of a framework for evaluating the performance of optical flow estimators, which is based on classical estimation theory criteria, and on considerations about the computational cost. This framework is general, and may be applied to other estimation problems. The optic flow is widely used in many vision systems. It is a vector velocity field defined on sequences of images. The affine optic flow is formed by the optical flow together with its first-order derivatives with respect to image coordinates. As a second contribution, we present two new estimators for the affine flow. We justify theoretically their design with hypotheses concerning the input images, which we show to be empirically valid. Finally, we use the performance analysis framework in order to compare the affine flow estimators with a more classical “differential” method.
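A classical "differential" affine-flow estimator of the kind used as a baseline above can be sketched as follows (a hedged illustration of the standard technique, not the paper's new estimators): under brightness constancy, Ix·u + Iy·v + It = 0, and with an affine flow u = a1 + a2·x + a3·y, v = a4 + a5·x + a6·y, each pixel contributes one linear equation in the six parameters, solved by least squares over a patch.

```python
import numpy as np

def affine_flow(x, y, Ix, Iy, It):
    """Least-squares affine flow from per-pixel spatial gradients (Ix, Iy),
    temporal derivative It, and pixel coordinates (x, y)."""
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=1)
    a, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return a

# Synthetic check: derivatives generated from a known affine flow.
rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)
Ix, Iy = rng.normal(size=50), rng.normal(size=50)
true = np.array([0.5, 0.1, -0.2, 1.0, 0.0, 0.3])
u = true[0] + true[1] * x + true[2] * y
v = true[3] + true[4] * x + true[5] * y
It = -(Ix * u + Iy * v)           # brightness constancy holds exactly
print(affine_flow(x, y, Ix, Iy, It))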
British Machine Vision Conference | 2002
Etienne Grossmann; José Santos-Victor
We address the 3D reconstruction of scenes in which some planarity, collinearity, symmetry and other geometric properties are known a priori. Our main contribution is a reconstruction method that has advantages of both constraint-based and model-based methods. As in the former, the reconstructed object need not be an assemblage of predefined shapes. As in the latter, the reconstruction is a maximum likelihood estimate and its precision can be estimated. Moreover, we improve on other constraint-based methods by using symmetry and other forms of regularity in the scene, and by working indifferently with one or more images. A second contribution is a method for parameterising a configuration of 3D points subject to geometric constraints. Using this parameterisation, the maximum likelihood reconstruction is obtained by solving an unconstrained optimisation problem. Another contribution lies in validating experimentally the assumption under which the maximum likelihood estimator was defined, namely, that the errors in hand-identified 2D points behave approximately like identically distributed independent Gaussian random variables. With this assumption validated, benchmarking is performed on synthetic data and the precision obtained on real-world data is shown. These experiments show that the maximum likelihood estimator is well-behaved and give insight on the precision obtained in real-world situations.
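The idea of turning a constrained reconstruction into an unconstrained optimisation can be illustrated with the simplest case (our toy example, not the paper's parameterisation scheme): a point constrained to a plane can be written as p = o + s·u + t·v with (s, t) free, so the constraint holds identically and any optimiser can run over (s, t) without side conditions.

```python
import numpy as np

# A plane through o, spanned by basis vectors u and v (assumed independent).
o = np.array([0.0, 0.0, 1.0])
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
normal = np.cross(u, v)

def plane_point(s, t):
    """Parameterise points on the plane; the coplanarity constraint is
    satisfied by construction for every (s, t)."""
    return o + s * u + t * v

p = plane_point(2.5, -1.0)
print(np.dot(normal, p - o))      # 0.0: constraint holds identically
```

The paper's contribution is a systematic version of this for whole configurations of points under mixed constraints (coplanarity, alignment, symmetry), but the payoff is the same: standard unconstrained solvers apply.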
Iberian Conference on Pattern Recognition and Image Analysis | 2013
Ricardo Galego; Ricardo Ferreira; Alexandre Bernardino; Etienne Grossmann; José António Gaspar
This paper departs from traditional calibration in the sense that the pixels forming the camera have a completely unknown topology. Previous works have shown that the statistical properties of natural scenes, and a uniform motion of a camera both in translation and rotation, allow determining the topology of a central camera [10,6]. Here we show that there is a quasi-linear relationship between time-correlation and angular inter-pixel distance, considering small angles and a simple scenario encompassing one bright light on a dark background. The topology reconstruction algorithm is therefore based on correlating time series (pixel streams) acquired by the pixels of the moving camera. Correlations are converted to inter-pixel distances using a fixed linear transformation. Distances are finally embedded on a plane using a manifold learning methodology, namely Isomap. Experiments on real datasets have been conducted and have shown that the theoretical derivations are accurate for the considered scenario.
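The correlation-to-distance idea can be demonstrated on a toy version of the considered scenario (our synthetic construction, not the paper's dataset): pixels of an unknown-topology sensor observe a moving bright spot, and pixels that are closer together produce more strongly correlated time series.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, T = 8, 4000
positions = np.arange(n_pix, dtype=float)    # true 1D layout (unknown to us)
spot = rng.uniform(0, n_pix, size=T)         # bright light moving over time

# Each pixel's response falls off smoothly with distance to the spot,
# mimicking one bright light on a dark background.
streams = np.exp(-0.5 * (positions[:, None] - spot[None, :]) ** 2)

C = np.corrcoef(streams)                     # time-correlation matrix
r_near, r_far = C[0, 1], C[0, 5]
print(r_near, r_far)                         # nearby pair correlates more
```

In the paper, this monotone (quasi-linear for small angles) relation is inverted to get inter-pixel distances, which are then embedded on a plane with Isomap to recover the camera's topology.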
International Conference on Computer Vision | 2007
Etienne Grossmann; Francesco Orabona; José António Gaspar
We consider the problem of estimating the relative orientation of a number of individual photocells - or pixels - that hold fixed relative positions. The photocells measure the intensity of light traveling on a pencil of lines. We assume that the light-field thus sampled is changing, e.g. as the result of motion of the sensors, and use the obtained measurements to estimate the orientations of the photocells. We explore an information-theoretic and geometric approach: based on real-world data, we build a non-parametric functional relation linking the information distance between the data streams of two photocells, and the angular separation between the photocells. Then, given data streams produced by arrays of pixels in similar conditions, we use the functional relation to estimate the angles between pixels. Finally, we embed the estimated angles in the unit 3D sphere to obtain the estimated layout of the array.