Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carlos Sagüés is active.

Publications


Featured research published by Carlos Sagüés.


International Conference on Robotics and Automation | 2007

SURF features for efficient robot localization with omnidirectional images

Ana C. Murillo; José Jesús Guerrero; Carlos Sagüés

Many robotic applications work with visual reference maps, which usually consist of sets of more or less organized images. In these applications, there is a trade-off between the density of the stored reference data and the ability to later identify the robot's location when it is not exactly at the same position as one of the reference views. Here we propose the use of a recently developed feature, SURF, to improve the performance of appearance-based localization methods that perform image retrieval in large data sets. This feature is integrated with a vision-based algorithm that allows both topological and metric localization using omnidirectional images in a hierarchical approach, with pyramidal kernels for the topological localization and three-view geometric constraints for the metric one. Experiments with several omnidirectional image sets are shown, including comparisons with other typically used features (radial lines and SIFT). The results demonstrate the advantages of this approach and show SURF to be the best compromise between efficiency and accuracy.
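
As a rough illustration of the retrieval stage, the sketch below ranks a set of reference images by how many SURF matches they share with a query view. It assumes an opencv-contrib-python build that includes the non-free xfeatures2d module; the image paths, the Hessian threshold, and the 0.7 ratio-test value are placeholder choices, not the paper's settings.

```python
import cv2

def describe(image_path, detector):
    """Extract local feature descriptors from one (omnidirectional) image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = detector.detectAndCompute(img, None)
    return descriptors

def localize(query_path, reference_paths):
    """Return the reference view sharing the most good matches with the query."""
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    query = describe(query_path, detector)
    best, best_score = None, -1
    for ref_path in reference_paths:
        matches = matcher.knnMatch(query, describe(ref_path, detector), k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        if len(good) > best_score:
            best, best_score = ref_path, len(good)
    return best, best_score
```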


Robotics and Autonomous Systems | 2008

Visual door detection integrating appearance and shape cues

Ana C. Murillo; Jana Kosecka; José Jesús Guerrero; Carlos Sagüés

An important component of human-robot interaction is the capability to associate semantic concepts with encountered locations and objects. This functionality is essential for visually guided navigation as well as location and object recognition. In this paper we focus on the problem of door detection using visual information only. Doors are frequently encountered in structured man-made environments and function as transitions between different places. We adopt a probabilistic approach to door detection, defining the likelihood of various features for generated door hypotheses. Unlike previous approaches, the proposed model captures both the shape and the appearance of the door. This model is learned from a few training examples, exploiting additional assumptions about the structure of indoor environments. After the learning stage, we describe a hypothesis generation process and several approaches to evaluating the likelihood of the generated hypotheses. The approach is tested on numerous examples of indoor environments and shows good performance, provided that the door extent in the images is sufficiently large and well supported by low-level feature measurements.
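
A toy version of the hypothesis-scoring idea, assuming independent cues: one shape cue (door width) and one appearance cue (mean colour) are combined as a product of Gaussian likelihoods. The statistics below stand in for values learned from training examples and are invented for illustration; this is not the paper's exact model.

```python
import numpy as np

def gaussian_likelihood(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def door_likelihood(width_m, mean_rgb, prior):
    """Combine independent shape and appearance cues into one score."""
    p_shape = gaussian_likelihood(width_m, prior["width_mean"], prior["width_std"])
    colour_dist = np.linalg.norm(np.asarray(mean_rgb, float) - prior["rgb_mean"])
    p_app = gaussian_likelihood(colour_dist, 0.0, prior["rgb_std"])
    return p_shape * p_app

# Invented "training" statistics: typical door width ~0.9 m, wooden colour.
prior = {"width_mean": 0.9, "width_std": 0.15,
         "rgb_mean": np.array([120.0, 90.0, 60.0]), "rgb_std": 40.0}
print(door_likelihood(0.85, (115, 95, 70), prior))   # plausible door: high score
```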


Robotics and Autonomous Systems | 2007

From omnidirectional images to hierarchical localization

Ana C. Murillo; Carlos Sagüés; José Jesús Guerrero; Toon Goedemé; Tinne Tuytelaars; L. Van Gool

We propose a new vision-based method for global robot localization using an omnidirectional camera. Topological and metric localization information are combined in an efficient, hierarchical process, with each step being more complex and accurate than the previous one but evaluating fewer images. This allows us to work with large reference image sets in a reasonable amount of time. Simultaneously, thanks to the use of 1D three-view geometry, accurate metric localization can be achieved based on just a small number of nearby reference images. Owing to the wide-baseline features used, the method deals well with illumination changes and occlusions while keeping the computational load small. The simplicity of the radial line features used speeds up the process without greatly affecting the accuracy. We show experiments with two omnidirectional image data sets to evaluate the performance of the method, and compare the results of the proposed radial lines with those of state-of-the-art wide-baseline matching techniques.
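
The coarse-to-fine structure can be sketched as two filtering stages: a cheap global descriptor prunes the reference set, and a more expensive local-feature check runs only on the survivors. Colour histograms and ORB below are freely available stand-ins for the paper's pyramidal kernels and radial lines; the image paths and the number of retained candidates are hypothetical.

```python
import cv2

def global_descriptor(img):
    """Cheap whole-image signature: a normalized 32-bin intensity histogram."""
    hist = cv2.calcHist([img], [0], None, [32], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def hierarchical_localize(query_path, reference_paths, keep=5):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    refs = [(p, cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in reference_paths]
    # Stage 1 (topological): rank every reference by histogram similarity.
    q_hist = global_descriptor(query)
    refs.sort(key=lambda pr: cv2.compareHist(
        q_hist, global_descriptor(pr[1]), cv2.HISTCMP_BHATTACHARYYA))
    # Stage 2 (metric): costly local-feature matching, only on the survivors.
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_desc = orb.detectAndCompute(query, None)
    best = max(refs[:keep], key=lambda pr: len(
        matcher.match(q_desc, orb.detectAndCompute(pr[1], None)[1])))
    return best[0]
```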


IEEE Transactions on Robotics | 2012

Distributed Consensus on Robot Networks for Dynamically Merging Feature-Based Maps

Rosario Aragues; Jorge Cortés; Carlos Sagüés

In this paper, we study the feature-based map merging problem in robot networks. While in operation, each robot observes the environment and builds and maintains a local map; simultaneously, each robot communicates and computes the global map of the environment. Communication between robots is range-limited. We propose a dynamic strategy, based on consensus algorithms, that is fully distributed and does not rely on any particular communication topology. Under mild connectivity conditions on the communication graph, our merging algorithm asymptotically converges to the global map. We present a formal analysis of its convergence rate and provide accurate characterizations of the errors as a function of the time step. The proposed approach has been experimentally validated using real visual information.
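
At the core of such merging schemes is an average-consensus iteration, in which each robot repeatedly averages its estimate with those of its neighbours. The toy sketch below runs this primitive on an assumed four-robot chain using Metropolis weights, which converge on any connected undirected graph; the topology and landmark estimates are made up, and the paper's actual update operates on full feature-based maps rather than a single landmark.

```python
import numpy as np

# Assumed 4-robot chain topology and toy 2D landmark estimates per robot.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.array([[1.0, 0.0], [1.2, 0.1], [0.9, -0.1], [1.1, 0.2]])

for _ in range(200):
    x_next = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            # Metropolis weights: convergent on any connected undirected graph.
            w = 1.0 / (1 + max(len(neighbours[i]), len(neighbours[j])))
            x_next[i] += w * (x[j] - x[i])
    x = x_next

print(x)   # every row converges to the average of the initial estimates
```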


Advances in Computing and Communications | 2012

Distributed algebraic connectivity estimation for adaptive event-triggered consensus

Rosario Aragues; Guodong Shi; Dimos V. Dimarogonas; Carlos Sagüés; Karl Henrik Johansson

In several multi-agent control problems, the convergence properties and speed of the system depend on the algebraic connectivity of the graph. We discuss a particular event-triggered consensus scenario and show that an estimate of the algebraic connectivity can be used to adapt the behavior of the average consensus algorithm. We present a novel distributed algorithm for estimating the algebraic connectivity that relies on the distributed computation of the powers of matrices. We provide proofs of convergence and convergence rate, as well as upper and lower bounds on the estimated algebraic connectivity at each iteration.
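
To make the quantity concrete: the algebraic connectivity is the second-smallest eigenvalue lambda_2 of the graph Laplacian. The sketch below estimates it by power iteration on powers of I - alpha*L with the all-ones eigenvector deflated, the same matrix-power primitive the abstract mentions, although this version is centralized for clarity and the example graph is assumed.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian of an assumed 4-node graph
n = L.shape[0]
alpha = 1.0 / n                       # keeps the eigenvalues of W within [0, 1]

# Power iteration on W = I - alpha*L with the all-ones eigenvector
# projected out converges to the eigenvector associated with lambda_2.
W = np.eye(n) - alpha * L
v = np.random.default_rng(0).standard_normal(n)
for _ in range(500):
    v -= v.mean()                     # deflate the ones direction (lambda = 0)
    v = W @ v
    v /= np.linalg.norm(v)
lambda2 = float(v @ L @ v)            # Rayleigh quotient recovers lambda_2
print(lambda2, np.sort(np.linalg.eigvalsh(L))[1])   # the two should agree
```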


Robotics and Autonomous Systems | 2008

Switching visual control based on epipoles for mobile robots

Gonzalo López-Nicolás; Carlos Sagüés; José Jesús Guerrero; Danica Kragic; Patric Jensfelt

In this paper, we present a visual control approach consisting of a switching control scheme based on epipolar geometry. The method follows a classical teach-by-showing approach, where a reference image is used to control the robot to the desired pose (position and orientation). With the proposed scheme, a mobile robot follows a smooth trajectory towards the target, and the epipolar geometry model is used throughout the whole motion. The control scheme accounts for the motion constraints of the mobile platform in a framework, based on epipolar geometry, that does not rely on artificial markers or specific models of the environment. The proposed method is designed to cope with the degenerate estimation of the epipolar geometry at short baselines. Experimental evaluation has been performed in realistic indoor and outdoor settings.
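
A stripped-down flavour of the idea: estimate the fundamental matrix between the current and target views, read the epipoles off its null spaces, and switch between an alignment phase and an approach phase. The gains, the pixel threshold, and the switching rule below are simplified placeholders, not the paper's control law.

```python
import cv2
import numpy as np

def epipoles(pts_cur, pts_tgt):
    """Epipoles are the null vectors of F and F^T (homogeneous 2D points)."""
    F, _ = cv2.findFundamentalMat(pts_cur, pts_tgt, cv2.FM_RANSAC)
    e_cur = np.linalg.svd(F)[2][-1]      # right null vector: epipole, current view
    e_tgt = np.linalg.svd(F.T)[2][-1]    # epipole in the target view
    return e_cur / e_cur[2], e_tgt / e_tgt[2]

def control(e_cur, e_tgt, k_w=0.5, k_v=0.2):
    """Toy two-phase law: rotate until the epipole is centred, then advance."""
    if abs(e_cur[0]) > 5.0:              # pixels; phase 1: align the heading
        return 0.0, -k_w * e_cur[0]      # (linear, angular) velocity
    return k_v, -k_w * e_tgt[0]          # phase 2: approach, keep correcting
```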


Robotics and Autonomous Systems | 2005

Visual correction for mobile robot homing

Carlos Sagüés; José Jesús Guerrero

We present a method to send a mobile robot to locations specified by images previously taken from those positions, a task sometimes referred to as homing. Classically this has been carried out using the fundamental matrix, but the fundamental matrix is ill-conditioned with planar scenes, which are quite common in man-made environments. Robot homing also often compares short-baseline images with high disparity due to rotation, a situation where the fundamental matrix likewise gives poor results. We use a monocular vision system and compute motion through a homography obtained from automatically matched lines. In this work we compare the use of the homography and the fundamental matrix, and we propose correcting the motion directly from the parameters of the 2D homography, which requires only one calibration parameter. The method is shown to be robust, sufficiently accurate, and simple.
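
As an illustration of recovering a correction from a homography, the sketch below estimates one from point matches and extracts a yaw angle from one of OpenCV's decomposition solutions. Note the simplifications: the paper matches lines rather than points and derives the correction directly from the 2D homography parameters with a single calibration parameter, whereas cv2.decomposeHomographyMat needs a full intrinsic matrix K.

```python
import cv2
import numpy as np

def heading_correction(pts_cur, pts_goal, K):
    """Estimate a yaw correction from the current-to-goal homography."""
    H, _ = cv2.findHomography(pts_cur, pts_goal, cv2.RANSAC)
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # The decomposition returns up to four (R, t, n) solutions; a real
    # system would prune them with visibility constraints. Take the first.
    R = Rs[0]
    return np.arctan2(R[1, 0], R[0, 0])   # yaw to undo while homing
```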


Sensors | 2013

Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors

José Manuel Palacios; Carlos Sagüés; Eduardo Montijano; Sergio Llorente

In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and six dynamic gestures are identified. The main advantage of our approach is that the users' hands may be at any position in the image, without the need to wear specific clothing or additional devices. Moreover, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method, which, additionally, can run in real time.
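
A heavily simplified stand-in for the segmentation step: threshold the depth map to isolate the nearest blob and count spread fingers from convexity defects. The depth range and the defect-depth threshold below are guesses, and the paper's actual pipeline also fuses colour and semantic cues.

```python
import cv2
import numpy as np

def count_fingers(depth_mm, near=400, far=800):
    """Segment the nearest blob by depth, then count spread fingers."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)          # assume: largest = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep convexity defect lies between two spread fingers.
    deep = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20)
    return deep + 1 if deep else 0
```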


IEEE Transactions on Robotics | 2008

Localization and Matching Using the Planar Trifocal Tensor With Bearing-Only Data

José Jesús Guerrero; Ana C. Murillo; Carlos Sagüés

This paper addresses the robot and landmark localization problem from bearing-only data in three views, simultaneously with the robust association of these data. The localization algorithm is based on the 1D trifocal tensor, which linearly relates the observed data to the robot localization parameters. The aim of this work is to bring this useful geometric construction from computer vision closer to robotic applications. One contribution is the evaluation of two linear approaches to estimating the 1D tensor: the commonly used approach, which needs seven bearing-only correspondences, and another that uses only five correspondences plus two calibration constraints. The results in this paper show that including these constraints provides a simpler and faster solution and a better estimation of robot and landmark locations in the presence of noise. Moreover, a new method that makes use of scene planes and requires only four correspondences is presented. This proposal improves on the performance of the two previously mentioned methods in typical man-made scenarios with dominant planes, while giving similar results in other cases. The three methods are evaluated with simulation tests as well as with experiments that perform automatic real-data matching in conventional and omnidirectional images. The results show sufficient accuracy and stability for use in robotic tasks such as navigation, global localization, or initialization of simultaneous localization and mapping (SLAM) algorithms.
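
The seven-correspondence linear method can be sketched directly: each landmark seen in three views contributes one homogeneous trilinear constraint, sum over i, j, k of T_ijk * u_i * v_j * w_k = 0, on the eight entries of the 2x2x2 tensor, so seven correspondences determine T up to scale via a null-space computation. The bearing inputs below would come from real landmark observations; this minimal sketch omits the calibration-constrained and plane-based variants.

```python
import numpy as np

def bearing_to_1d(theta):
    """A landmark bearing maps to a homogeneous point on the 1D image line."""
    return np.array([np.sin(theta), np.cos(theta)])

def trifocal_1d(bearings1, bearings2, bearings3):
    """Linear estimate of the 2x2x2 tensor from >= 7 triples of bearings."""
    rows = []
    for t1, t2, t3 in zip(bearings1, bearings2, bearings3):
        u, v, w = bearing_to_1d(t1), bearing_to_1d(t2), bearing_to_1d(t3)
        # kron flattens the trilinear constraint into one row of the system.
        rows.append(np.kron(np.kron(u, v), w))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(2, 2, 2)        # null vector of the system, up to scale
```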


Robotics and Autonomous Systems | 2010

Omnidirectional visual control of mobile robots based on the 1D trifocal tensor

Hector M. Becerra; Gonzalo López-Nicolás; Carlos Sagüés

The precise positioning of robotic systems is of great interest, particularly for mobile robots. In this context, the use of omnidirectional vision provides many advantages thanks to its wide field of view. This paper presents an image-based visual control scheme to drive a mobile robot to a desired location, specified by a previously acquired target image. It exploits the properties of omnidirectional images to preserve bearing information by using a 1D trifocal tensor. The main contribution of the paper is that the elements of the tensor are introduced directly into the control law, and neither a priori knowledge of the scene nor any auxiliary image is required. Our approach can be applied with any visual sensor that approximately obeys a central projection model, shows good robustness to image noise, and avoids the short-baseline problem by exploiting the information of three views. A sliding mode control law in a square system ensures stability and robustness of the closed loop. The good performance of the control system is demonstrated via simulations and real-world experiments with a hypercatadioptric imaging system.
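
The closed-loop behaviour can be mimicked with a toy unicycle simulation under a sliding-mode-style law. Here the error signals are simply distance and heading errors, standing in for the tensor-derived errors of the paper, and the gains, time step, and start pose are arbitrary.

```python
import numpy as np

x, y, th = 2.0, 1.0, 0.5             # start pose; the goal is the origin
dt, k_v, k_w = 0.01, 0.5, 1.5        # placeholder time step and gains
for _ in range(3000):
    e1 = np.hypot(x, y)              # distance error (stands in for a tensor error)
    e2 = th - np.arctan2(-y, -x)     # heading error towards the goal
    e2 = np.arctan2(np.sin(e2), np.cos(e2))   # wrap to (-pi, pi]
    v = k_v * np.tanh(e1)            # smooth saturation instead of a hard sign()
    w = -k_w * np.sign(e2) * min(abs(e2), 1.0)
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += w * dt
print(round(float(np.hypot(x, y)), 3))   # distance to the goal shrinks towards 0
```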

Collaboration


Dive into Carlos Sagüés's collaborations.

Top Co-Authors

D. Paesa

University of Zaragoza


Youcef Mezouar

Centre national de la recherche scientifique
