Publications


Featured research published by Jordi Pagès.


Pattern Recognition | 2004

Pattern codification strategies in structured light systems

Joaquim Salvi; Jordi Pagès; Joan Batlle

Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
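
The reconstruction step described above reduces, for a calibrated camera-projector pair, to intersecting the camera's viewing ray of a decoded pixel with the corresponding projector light plane. The following is a minimal sketch of that ray-plane triangulation, assuming a pinhole camera model; the intrinsic matrix and plane parameters are illustrative values, not taken from the paper.

    import numpy as np

    def triangulate_point(pixel, K_cam, plane):
        """Intersect the back-projection ray of a decoded pixel with the
        calibrated projector light plane n . X = d, both expressed in the
        camera frame. Returns the 3D point in camera coordinates."""
        u, v = pixel
        ray = np.linalg.inv(K_cam) @ np.array([u, v, 1.0])  # viewing ray direction
        n, d = plane
        depth = d / (n @ ray)   # scale factor that puts the ray on the plane
        return depth * ray

    # Illustrative values: a 640x480 camera and an arbitrary stripe plane.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])
    plane = (np.array([0.1, 0.0, 1.0]), 0.8)
    print(triangulate_point((350, 260), K, plane))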


Image and Vision Computing | 2005

Optimised De Bruijn patterns for one-shot shape acquisition

Jordi Pagès; Joaquim Salvi; Christophe Collewet; Josep Forest

Coded structured light is an optical technique based on active stereovision which allows shape acquisition. By projecting a suitable set of light patterns onto the surface of an object and capturing images with a camera, a large number of correspondences can be found and 3D points can be reconstructed by means of triangulation. One-shot techniques are based on projecting a single pattern so that moving objects can be measured. A major group of techniques in this field defines coloured multi-slit or stripe patterns in order to obtain dense reconstructions. The former type of pattern is suitable for locating intensity peaks in the image, while the latter is aimed at locating edges. In this paper, we present a new way to design coloured stripe patterns so that both intensity peaks and edges can be located without loss of accuracy while reducing the number of hue levels included in the pattern. The results obtained with the new pattern are quantitatively and qualitatively compared to similar techniques. These results also contribute to a comparison between the peak-based and edge-based reconstruction strategies.
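
The stripe colours in such patterns are typically ordered by a De Bruijn sequence, so that every window of n consecutive stripes is unique and can serve as a codeword. Below is a small sketch of the standard FKM construction of a De Bruijn sequence over k symbols, mapped to an illustrative three-hue palette; it shows the coding principle only, not the optimisation proposed in the paper.

    def de_bruijn(k, n):
        """FKM construction of a De Bruijn sequence B(k, n): every length-n
        window over an alphabet of k symbols appears exactly once (cyclically)."""
        a = [0] * (k * n)
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    # Map symbols to stripe hues (illustrative palette with 3 hue levels).
    hues = ["red", "green", "blue"]
    stripes = [hues[s] for s in de_bruijn(3, 3)]   # 27 stripes, every window of 3 is unique
    print(stripes[:9])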


international conference on robotics and automation | 2003

Overview of coded light projection techniques for automatic 3D profiling

Jordi Pagès; Joaquim Salvi; Rafael Garcia; Carles Matabosch

Obtaining automatic 3D profiles of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve 3D information of an unknown surface. An overview of the existing techniques as well as a new classification of patterns for structured light sensors is presented. This kind of system belongs to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between the camera(s) and the projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.


IEEE Transactions on Robotics | 2006

Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light

Jordi Pagès; Christophe Collewet; François Chaumette; Joaquim Salvi

This paper considers the problem of positioning an eye-in-hand system so that it becomes parallel to a planar object. Our approach to this problem is based on linking to the camera a structured light emitter designed to produce a suitable set of visual features. The aim of using structured light is not only to simplify the image processing and allow low-textured objects to be considered, but also to produce a control scheme with nice properties like decoupling, convergence, and an adequate camera trajectory. This paper focuses on an image-based approach that achieves decoupling in the whole workspace and for which global convergence is ensured under perfect conditions. The behavior of the image-based approach is shown to be partially equivalent to a 3-D visual servoing scheme, but with better robustness with respect to image noise. The robustness of the approach against calibration errors is demonstrated both analytically and experimentally.
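
The control scheme discussed here builds on the classic image-based visual servoing law, in which the camera velocity is computed from the feature error through the pseudo-inverse of the interaction matrix. A minimal, generic sketch follows; the interaction matrix and feature values are random placeholders and do not reproduce the decoupled feature set designed in the paper.

    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Classic IBVS law v = -lambda * L^+ (s - s*): the camera velocity
        screw that drives the visual features s towards their desired value."""
        error = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
        return -gain * np.linalg.pinv(L) @ error

    # Illustrative numbers: 4 features controlling a 6-DOF camera velocity.
    L = np.random.default_rng(0).normal(size=(4, 6))
    v = ibvs_velocity([0.2, -0.1, 0.05, 0.3], [0.0, 0.0, 0.0, 0.0], L)
    print(v)   # translational and rotational velocity commands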


international conference on robotics and automation | 2006

An approach to visual servoing based on coded light

Jordi Pagès; Christophe Collewet; François Chaumette; Joaquim Salvi

Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object's appearance. Consequently, the positioning task cannot be performed in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object appearance.


intelligent robots and systems | 2005

Robust decoupled visual servoing based on structured light

Jordi Pagès; Christophe Collewet; François Chaumette; Joaquim Salvi

This paper focuses on the problem of realizing a plane-to-plane virtual link between a camera attached to the end-effector of a robot and a planar object. In order to make the system independent of the object surface appearance, a structured light emitter is linked to the camera so that 4 laser pointers are projected onto the object. In a previous paper we showed that such a system has good performance and nice characteristics like partial decoupling near the desired state and robustness against misalignment of the emitter and the camera (J. Pagès et al., 2004). However, no analytical results concerning the global asymptotic stability of the system were obtained due to the high complexity of the visual features utilized. In this work we present a better set of visual features which improves the properties of the features in (J. Pagès et al., 2004) and for which it is possible to prove global asymptotic stability.


computer vision and pattern recognition | 2006

A Camera-Projector System for Robot Positioning by Visual Servoing

Jordi Pagès; Christophe Collewet; François Chaumette; Joaquim Salvi

Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object's appearance. Consequently, the positioning task cannot be performed in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object appearance.


international conference on image processing | 2003

Implementation of a robust coded structured light technique for dynamic 3D measurements

Jordi Pagès; Joaquim Salvi; Carles Matabosch

This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on projecting patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information of the surface can be calculated. The implemented technique projects a unique pattern so that it can be used to measure moving surfaces. The structure of the pattern is a grid where the colors of the slits are selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which allows them to be reconstructed very precisely), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
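
The grid coding idea can be summarised as follows: horizontal and vertical slits take their colours from disjoint sets, so a crossing point inherits one codeword per axis and can be matched to the projected pattern without any alignment assumption. The sketch below illustrates this with placeholder colour sequences standing in for the De Bruijn sequences used in the paper; the colour sets and window length are illustrative.

    # Disjoint colour sets for the two slit directions (illustrative choice).
    H_COLOURS = ["red", "green", "blue"]        # horizontal slits
    V_COLOURS = ["cyan", "magenta", "yellow"]   # vertical slits
    WINDOW = 3                                  # codeword = colours of 3 neighbouring slits

    def codeword(slit_colours, index, window=WINDOW):
        """Codeword of slit `index`: the colours of the window centred on it."""
        half = window // 2
        return tuple(slit_colours[(index + k) % len(slit_colours)]
                     for k in range(-half, half + 1))

    # Placeholder colour sequences (the paper draws them from De Bruijn sequences
    # so that every window is unique along each axis).
    h_slits = [H_COLOURS[i % 3] for i in range(27)]
    v_slits = [V_COLOURS[i % 3] for i in range(27)]

    # A grid crossing (i, j) is identified by a pair of codewords, one per axis.
    i, j = 10, 4
    print(codeword(h_slits, i), codeword(v_slits, j))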


intelligent robots and systems | 2004

Plane-to-plane positioning from image-based visual servoing and structured light

Jordi Pagès; Christophe Collewet; François Chaumette; Joaquim Salvi

In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has long been treated in visual servoing. Our approach is based on linking several laser pointers to the camera, with their configuration designed to produce a suitable set of visual features. The aim of using structured light is not only to ease the image processing and allow low-textured objects to be treated, but also to produce a control scheme with nice properties like decoupling, stability, good conditioning and a good camera trajectory.


international conference on robotics and automation | 2013

Visual servoing for the REEM humanoid robot's upper body

Don Joven Agravante; Jordi Pagès; François Chaumette

In this paper, a framework for visual servo control of a humanoid robot's upper body is presented. The framework is then implemented and tested on the REEM humanoid robot. The implementation is composed of two controllers: a head gaze controller and a hand position controller. The main application is precise manipulation tasks using the hand. For this, the hand controller takes top priority. The head controller is designed to keep both the hand and the object in the field of view of the eyes. For robustness, a secondary task of joint limit avoidance is implemented using the redundancy framework and a recently proposed large projection operator. For safety, joint velocity scaling is implemented. The implementation on REEM is done using the ROS and ViSP middleware. The results presented show simulations in Gazebo and experiments on the real robot. Furthermore, the results with the real robot show how visual servoing is able to overcome some deficiencies in REEM's kinematic calibration.
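
The joint limit avoidance mentioned above follows the standard redundancy framework: the secondary task is projected into the null space of the primary task Jacobian so that it does not disturb the visual task, and the commanded joint velocities are scaled for safety. The sketch below illustrates that classical scheme with a simple gradient pushing joints towards mid-range; it uses the ordinary null-space projector, not the large projection operator applied on REEM, and all dimensions and gains are illustrative.

    import numpy as np

    def redundant_control(J, e, q, q_min, q_max, gain=1.0, k_avoid=0.1, v_max=1.0):
        """Primary task solved with the pseudo-inverse of its Jacobian J; a
        joint-limit-avoidance gradient is projected into the null space of J,
        and the resulting joint velocities are scaled to a safe magnitude."""
        n = J.shape[1]
        J_pinv = np.linalg.pinv(J)
        P = np.eye(n) - J_pinv @ J                    # null-space projector
        q_mid = 0.5 * (q_min + q_max)
        g = -k_avoid * (q - q_mid) / (q_max - q_min)  # push joints towards mid-range
        dq = -gain * J_pinv @ e + P @ g
        scale = min(1.0, v_max / max(np.max(np.abs(dq)), 1e-9))  # velocity scaling
        return scale * dq

    # Illustrative 7-joint arm with a 6-dimensional task error.
    rng = np.random.default_rng(1)
    dq = redundant_control(J=rng.normal(size=(6, 7)), e=rng.normal(size=6),
                           q=np.zeros(7), q_min=-np.ones(7), q_max=np.ones(7))
    print(dq)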
