Pierre Moreels
California Institute of Technology
Publications
Featured research published by Pierre Moreels.
International Journal of Computer Vision | 2007
Pierre Moreels; Pietro Perona
We explore the performance of a number of popular feature detectors and descriptors in matching 3D object features across viewpoints and lighting conditions. To this end we design a method, based on intersecting epipolar constraints, for providing ground truth correspondence automatically. These correspondences are based purely on geometric information, and do not rely on the choice of a specific feature appearance descriptor. We test detector-descriptor combinations on a database of 100 objects viewed from 144 calibrated viewpoints under three different lighting conditions. We find that the combination of Hessian-affine feature finder and SIFT features is most robust to viewpoint change. Harris-affine combined with SIFT and Hessian-affine combined with shape context descriptors were best respectively for lighting change and change in camera focal length. We also find that no detector-descriptor combination performs well with viewpoint changes of more than 25–30°.
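The intersecting-epipolar-constraints idea above can be sketched in a few lines: a candidate correspondence is accepted as ground truth only when it agrees with the epipolar geometry of two independent calibrated views. The function names, the fundamental-matrix inputs, and the pixel tolerance below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    # Distance (in pixels) from homogeneous point x2 to the
    # epipolar line F @ x1 induced by homogeneous point x1.
    l = F @ x1
    return abs(l @ x2) / np.hypot(l[0], l[1])

def consistent(F_a, F_b, x_ref, x_a, x_b, tol=1.5):
    # Accept a correspondence only if the reference point agrees
    # with the epipolar constraints of BOTH auxiliary views,
    # so appearance descriptors play no role in the ground truth.
    return (epipolar_distance(F_a, x_ref, x_a) < tol and
            epipolar_distance(F_b, x_ref, x_b) < tol)
```

Requiring agreement with two epipolar lines rather than one is what lets the constraints "intersect" and pin down the correct match geometrically.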
International Conference on Computer Vision | 2005
Pierre Moreels; Pietro Perona
We explore the performance of a number of popular feature detectors and descriptors in matching 3D object features across viewpoints and lighting conditions. To this end we design a method, based on intersecting epipolar constraints, for providing ground truth correspondence automatically. We collect a database of 100 objects viewed from 144 calibrated viewpoints under three different lighting conditions. We find that the combination of Hessian-affine feature finder and SIFT features is most robust to viewpoint change. Harris-affine combined with SIFT and Hessian-affine combined with shape context descriptors were best respectively for lighting changes and scale changes. We also find that no detector-descriptor combination performs well with viewpoint changes of more than 25–30°.
European Conference on Computer Vision | 2004
Pierre Moreels; Michael Maire; Pietro Perona
We present a probabilistic framework for recognizing objects in images of cluttered scenes. Hundreds of objects may be considered and searched in parallel. Each object is learned from a single training image and modeled by the visual appearance of a set of features, and their position with respect to a common reference frame. The recognition process computes identity and position of objects in the scene by finding the best interpretation of the scene in terms of learned objects. Features detected in an input image are either paired with database features, or marked as clutter. Each hypothesis is scored using a generative model of the image which is defined using the learned objects and a model for clutter. While the space of possible hypotheses is enormous, one may find the best hypothesis efficiently; we explore some heuristics to do so. Our algorithm compares favorably with state-of-the-art recognition systems.
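The scoring step described above can be illustrated with a minimal sketch: each detected feature is explained either by a learned object feature (via a Gaussian position model) or by a flat clutter model, and the hypothesis score is the total log-likelihood. The specific distributions, parameter values, and function names here are illustrative assumptions, not the paper's actual generative model:

```python
import math

def hypothesis_log_score(residuals, sigma=2.0, clutter_ll=-8.0):
    # residuals: one entry per detected feature; a float is the
    # position residual (pixels) of a pairing with a database
    # feature, None means the feature is explained as clutter.
    score = 0.0
    for r in residuals:
        if r is None:
            score += clutter_ll  # flat clutter likelihood (assumed)
        else:
            # isotropic 2-D Gaussian position model (illustrative)
            score += -math.log(2 * math.pi * sigma ** 2) - r * r / (2 * sigma ** 2)
    return score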
IEEE International Conference on Automatic Face & Gesture Recognition | 2008
Alex Holub; Pierre Moreels; Pietro Perona
How do we identify images of the same person in photo albums? How can we find images of a particular celebrity using Web image search engines? These types of tasks require solving numerous challenging issues in computer vision including: detecting whether an image contains a face, maintaining robustness to lighting, pose, occlusion, scale, and image quality, and using appropriate distance metrics to identify and compare different faces. In this paper we present a complete system which yields good performance on challenging tasks involving face recognition including image retrieval, unsupervised clustering of faces, and increasing precision of 'Google Image' searches. All tasks use highly variable real data obtained from raw image searches on the web.
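The unsupervised face-clustering task mentioned above can be sketched with a greedy single-link scheme over a precomputed face-distance matrix: any two faces closer than a threshold are merged into one identity cluster. This is a generic sketch under assumed inputs, not the paper's clustering algorithm or distance metric:

```python
def cluster_by_threshold(dist, thresh):
    # Greedy single-link clustering on an n x n distance matrix.
    # Faces i and j end up in the same cluster whenever some chain
    # of pairwise distances below `thresh` connects them.
    n = len(dist)
    labels = list(range(n))  # start with one cluster per face
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < thresh:
                a, b = labels[i], labels[j]
                labels = [a if lab == b else lab for lab in labels]
    return labels
```

The threshold trades precision for recall: a tight threshold splits one person into several clusters, a loose one merges different people.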
European Conference on Computer Vision | 2008
Pierre Moreels; Pietro Perona
A probabilistic system for recognition of individual objects is presented. The objects to recognize are composed of constellations of features, and features from the same object share the common reference frame of the image in which they are detected. Feature appearance and pose are modeled by probabilistic distributions, the parameters of which are shared across features in order to allow training from few examples. In order to avoid an expensive combinatorial search, our recognition system is organized as a cascade of well-established, simple and inexpensive detectors. The candidate hypotheses output by our algorithm are evaluated by a generative probabilistic model that takes into account each stage of the matching process. We apply our ideas to the problem of individual object recognition and test our method on several datasets. We compare with Lowe's algorithm [7] and demonstrate significantly better performance.
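A typical inexpensive first stage in such a cascade is an appearance-based nearest-neighbor filter like Lowe's ratio test, which discards ambiguous feature matches before any geometric reasoning. The sketch below shows that stage only; the function name, descriptor inputs, and ratio value are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ratio_test_matches(desc_query, desc_db, ratio=0.8):
    # Cheap cascade stage: keep a query feature only when its nearest
    # database descriptor is clearly better than the second nearest,
    # pruning ambiguous candidates before the expensive pose stages.
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_db - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

Only the surviving pairs are passed to later, costlier stages (pose consistency, then full generative scoring), which is what keeps the overall search tractable.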
Computer Vision and Pattern Recognition | 2009
Alex Holub; Pierre Moreels; Atiq Islam; Andrei Peter Makhanov; Rui Yang
This paper describes a system for automatically extracting meta-information on people from videos on the Web. The system contains multiple modules which automatically track people, including both faces and bodies, and clusters the people into distinct groups. We present new technology and significantly modify existing algorithms for body-detection, shot-detection and grouping, tracking, and track-clustering within our system. The system was designed to work effectively on Web content, and thus exhibits robust tracking and clustering behavior over a broad spectrum of professional and semi-professional video content. In order to quantify and evaluate our system we created a large ground-truth data-set of people within video. Finally, we provide actual video examples of our algorithm and find that the results are quite strong over a broad range of content.
Optical Pattern Recognition XV | 2004
Jay C. Hanan; Tien-Hsin Chao; Pierre Moreels
Feature detectors have been considered for the role of supplying additional information to a neural network tracker. The feature detector focuses on areas of the image with significant information. Basically, if a picture says a thousand words, the feature detectors are looking for the key phrases (keypoints). These keypoints are rotationally invariant and may be matched across frames. Application of these advanced feature detectors to the neural network tracking system at JPL has promising potential. As part of an ongoing program, an advanced feature detector was tested for augmentation of a neural network based tracker. The advanced feature detector extended tracking periods in test sequences including aircraft tracking, rover tracking, and simulated Martian landing. Future directions of research are also discussed.
Proceedings of SPIE, the International Society for Optical Engineering | 2007
Ambrus Csaszar; Jay C. Hanan; Pierre Moreels; Christopher Assad
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for computing relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in order. However, since each landmark is tracked independently, if transferred to appropriate parallel hardware, adding targets would not significantly diminish system speed.
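The dead-reckoning setup described above (wheel encoders for distance, a single-axis gyroscope for heading) reduces to a simple pose integration per time step. The function name, tick calibration constant, and update order below are illustrative assumptions about such a system, not the paper's actual navigation code:

```python
import math

def dead_reckon(pose, ticks, gyro_rate, dt, ticks_per_meter=1000.0):
    # pose = (x, y, heading in radians); one integration step of
    # encoder-counted distance along the gyro-updated heading.
    x, y, th = pose
    d = ticks / ticks_per_meter      # distance from wheel encoders
    th_new = th + gyro_rate * dt     # heading from single-axis gyro
    x += d * math.cos(th_new)
    y += d * math.sin(th_new)
    return (x, y, th_new)
```

Because encoder and gyro errors accumulate over time, such a system drifts, which is exactly why the test-bed complements it with visual landmark tracking for servoing to targets.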
Neural Information Processing Systems | 2004
Pierre Moreels; Pietro Perona
Archive | 2009
Alex Holub; Atiq Islam; Andrei Peter Makhanov; Pierre Moreels; Rui Yang