Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adrián Canedo-Rodriguez is active.

Publication


Featured research published by Adrián Canedo-Rodriguez.


Robotics and Autonomous Systems | 2012

Feature analysis for human recognition and discrimination: Application to a person-following behaviour in a mobile robot

Víctor Alvarez-Santos; Xosé M. Pardo; Roberto Iglesias; Adrián Canedo-Rodriguez; Carlos V. Regueiro

One of the most important abilities that personal robots need when interacting with humans is the ability to discriminate amongst them. In this paper, we carry out an in-depth study of the possibilities of a colour camera placed on top of a robot to discriminate between humans, and thus obtain a reliable person-following behaviour on the robot. In particular, we review and analyse the most popular colour and texture features used in object and texture recognition to identify and model the target (the person being followed). Nevertheless, real-time constraints make it necessary to select a small subset of these features to keep the computational burden low. This subset was selected after carrying out a redundancy analysis and considering how the features perform when discriminating amongst similar human torsos. Finally, we also describe several scoring functions able to dynamically adjust the relevance of each feature according to the particular conditions of the environment where the robot moves, together with the characteristics of the clothes worn by the people in the scene. The results of this in-depth study have been implemented in a novel and adaptive system (described in this paper), which is able to discriminate between humans to obtain reliable person-following behaviours in a mobile robot. The performance of our proposal is clearly shown through a set of experimental results obtained with a real robot working in real and difficult scenarios.
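
A minimal sketch of the general idea behind this kind of appearance modelling (not the authors' implementation; the feature set, histogram parameters, and similarity measure here are illustrative assumptions): the followed person's torso is summarised as a colour histogram, and each detected torso is scored against that model.

```python
# Hypothetical sketch: colour-histogram torso model and candidate scoring.
import numpy as np

def colour_histogram(patch, bins=8):
    """Normalised joint histogram over the three colour channels of a patch
    (H x W x 3 array of uint8 values)."""
    hist, _ = np.histogramdd(
        patch.reshape(-1, 3), bins=(bins, bins, bins), range=((0, 256),) * 3
    )
    return hist.ravel() / hist.sum()

def bhattacharyya_similarity(h1, h2):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

# Toy usage: the model is learned from the followed person's torso, and each
# detected torso in the image is scored against it.
target_patch = np.random.randint(0, 256, (64, 48, 3), dtype=np.uint8)
candidate    = np.random.randint(0, 256, (64, 48, 3), dtype=np.uint8)
model = colour_histogram(target_patch)
score = bhattacharyya_similarity(model, colour_histogram(candidate))
print(f"candidate similarity to target model: {score:.3f}")
```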


Information Fusion | 2016

Particle filter robot localisation through robust fusion of laser, WiFi, compass, and a network of external cameras

Adrián Canedo-Rodriguez; Víctor Alvarez-Santos; Carlos V. Regueiro; Roberto Iglesias; Senén Barro; Jesús María Rodríguez Presedo

Highlights: particle filter robot localisation fusing a 2D laser, WiFi, a compass and external cameras; works with any sensor combination, even if the sensors are unsynchronised or have different data rates; experiments in controlled situations and real operation at social events; analysis and discussion of the performance of each sensor and of all sensor combinations; best results obtained from the fusion of all sensors (statistically significant).

In this paper, we propose a multi-sensor fusion algorithm based on particle filters for mobile robot localisation in crowded environments. Our system is able to fuse the information provided by sensors placed on board the robot and by sensors external to the robot (off-board). We also propose a methodology for fast system deployment, map construction, and sensor calibration with a limited number of training samples. We validated our proposal experimentally with a laser range-finder, a WiFi card, a magnetic compass, and an external multi-camera network. We carried out experiments that validate our deployment and calibration methodology. Moreover, we performed localisation experiments in controlled situations and during real robot operation at social events. We obtained the best results from the fusion of all the sensors available: the precision and stability were sufficient for mobile robot localisation. No single sensor is reliable in every situation, but our algorithm works with any subset of sensors: if a sensor is not available, performance simply degrades gracefully.
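
An illustrative sketch of the structure such a fusion scheme can take (assumed, not the paper's code; the state layout, noise values, and compass likelihood are placeholders): the particle filter multiplies the likelihoods of whichever sensors delivered a measurement in the current cycle, so a missing or silent sensor only degrades the estimate rather than breaking the update.

```python
# Hypothetical particle filter update that fuses an arbitrary subset of sensors.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, odometry, noise=0.05):
    """Propagate (x, y, theta) particles with the odometry increment plus noise."""
    return particles + odometry + rng.normal(0.0, noise, particles.shape)

def update(particles, weights, measurements, likelihoods):
    """measurements: sensor name -> observation (absent sensors simply missing);
    likelihoods: sensor name -> function(particles, z) -> per-particle likelihood."""
    for name, z in measurements.items():
        weights *= likelihoods[name](particles, z)
    weights += 1e-300                      # avoid total weight collapse
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def compass_likelihood(particles, z, sigma=0.2):
    """Example sensor model: Gaussian on the wrapped heading error (radians)."""
    err = np.angle(np.exp(1j * (particles[:, 2] - z)))
    return np.exp(-0.5 * (err / sigma) ** 2)

# Toy usage: only the compass reported this cycle; the other sensors are absent.
particles = rng.normal(0.0, 1.0, (500, 3))
weights = np.full(500, 1.0 / 500)
weights = update(particles, weights, {"compass": 0.3}, {"compass": compass_likelihood})
particles, weights = resample(particles, weights)
```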


Sensors | 2012

Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments

Adrián Canedo-Rodriguez; Roberto Iglesias; Carlos V. Regueiro; Víctor Alvarez-Santos; Xosé M. Pardo

To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when there are no maps available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed, and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal.


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2013

Robust Multi-sensor System for Mobile Robot Localization

Adrián Canedo-Rodriguez; Víctor Alvarez-Santos; D. Santos-Saavedra; Cristina Gamallo; M. Fernández-Delgado; Roberto Iglesias; Carlos V. Regueiro

In this paper, we propose a localization system that can combine data supplied by different sensors, even if they are not synchronized or do not provide data at all times. In particular, we used the following sensors: a 2D laser range finder, a Wi-Fi positioning system (designed by us), and a magnetic compass. Real-world experiments have shown that our algorithm is accurate, robust, and fast, and that it can take advantage of the strengths of each sensor while minimising their weaknesses.
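
The abstract mentions a Wi-Fi positioning system designed by the authors; the sketch below is a generic RSSI-fingerprinting stand-in (hypothetical, not their design): the position is estimated as the weighted average of the k calibration fingerprints whose signal-strength vectors best match the live reading.

```python
# Generic k-nearest-neighbour RSSI fingerprinting sketch (illustrative only).
import numpy as np

def wifi_position(reading, fingerprints, positions, k=3):
    """reading: (n_aps,) RSSI vector; fingerprints: (n_samples, n_aps) training
    RSSI vectors; positions: (n_samples, 2) metric coordinates of each sample."""
    d = np.linalg.norm(fingerprints - reading, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)          # closer fingerprints weigh more
    return (positions[nearest] * w[:, None]).sum(axis=0) / w.sum()

# Toy usage with 4 access points and 3 calibration points.
fp  = np.array([[-40, -60, -70, -80], [-55, -45, -65, -75], [-70, -65, -50, -60]], float)
pos = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])
print(wifi_position(np.array([-52, -48, -66, -74], float), fp, pos, k=2))
```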


Robotics and Autonomous Systems | 2013

Self-organized multi-camera network for ubiquitous robot deployment in unknown environments

Adrián Canedo-Rodriguez; Carlos V. Regueiro; Roberto Iglesias; Víctor Alvarez-Santos; Xosé M. Pardo

In this paper, we present a multi-agent system based on a network of intelligent cameras for the easy and fast deployment of mobile robots in unknown environments. The cameras are able to detect events which require the presence of the robots, calculate routes of cameras through which the robots can navigate, and support this navigation. A route is a list of cameras connected by neighbourhood relationships: two cameras may be neighbours if their Fields of View (FOVs) overlap, or if there exists a passable path between them when their FOVs do not overlap. In our system, all coordination processes are fully distributed, based only on local interactions and self-organization. Our system is robust and redundant, and scales well with the size of the environment and the number of cameras and robots. Finally, it is flexible with respect to the environment, the number of agents used, and their placement. In the experimental section, we show the performance of this system in different real-world settings.
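
Since a route is described as a list of cameras connected by neighbourhood relationships, one simple way to picture its computation is a shortest-hop search over the neighbourhood graph. The sketch below assumes a plain adjacency map and breadth-first search; the paper's distributed, self-organised mechanism is not reproduced here.

```python
# Illustrative route computation over an assumed camera neighbourhood graph.
from collections import deque

def camera_route(neighbours, start, goal):
    """neighbours: dict mapping camera id -> iterable of neighbouring camera ids."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cam = frontier.popleft()
        if cam == goal:                     # rebuild the route back to the start
            route = []
            while cam is not None:
                route.append(cam)
                cam = parent[cam]
            return route[::-1]
        for nxt in neighbours.get(cam, ()):
            if nxt not in parent:
                parent[nxt] = cam
                frontier.append(nxt)
    return None                             # no route: the graph is disconnected

# Toy network: camera A sees B, B sees C and D, D covers the event location E.
print(camera_route({"A": ["B"], "B": ["C", "D"], "D": ["E"]}, "A", "E"))
```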


Robotics and Autonomous Systems | 2015

Route learning and reproduction in a tour-guide robot

Víctor Alvarez-Santos; Adrián Canedo-Rodriguez; Roberto Iglesias; Xosé M. Pardo; Carlos V. Regueiro; M. Fernández-Delgado

Traditionally, route information is introduced into tour-guide robots by experts in robotics. In the tour-guide robot that we are developing, we allow the robot to learn new routes while following an instructor. In this paper we describe the route recording process that takes place while following a human, as well as how those routes are later reproduced. A key element of both route recording and reproduction is a robust multi-sensorial localization algorithm that we have designed, which is able to combine various sources of information to obtain an estimate of the robot's pose. In this work we detail how the algorithm works, and how we use it to record routes. Moreover, we describe how our robot reproduces routes, including path planning between route points and dynamic obstacle avoidance for safe navigation. Finally, we show through several trajectories how the robot was able to learn and reproduce different routes.

Highlights: we present our tour-guide robot, which is able to learn routes from humans; we detail the route recording and reproduction processes of our robot; we introduce a novel multi-sensorial algorithm for robot localization; we describe several demonstrations that we have carried out with our robot.
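
A simplified sketch of the recording/reproduction idea (not the robot's actual controller; the spacing and arrival thresholds are assumptions): while following the instructor, the robot stores a sequence of localisation estimates as waypoints, and during reproduction it steers towards each stored waypoint in turn.

```python
# Hypothetical route recording and reproduction as a list of pose waypoints.
import math

class Route:
    def __init__(self, min_spacing=0.5):
        self.waypoints = []            # list of (x, y) poses in metres
        self.min_spacing = min_spacing

    def record(self, pose):
        """Store the current localisation estimate if it is far enough from the last one."""
        if not self.waypoints or math.dist(pose, self.waypoints[-1]) >= self.min_spacing:
            self.waypoints.append(pose)

    def next_waypoint(self, pose, reached=0.3):
        """During reproduction: return the first stored waypoint not yet reached."""
        while self.waypoints and math.dist(pose, self.waypoints[0]) < reached:
            self.waypoints.pop(0)
        return self.waypoints[0] if self.waypoints else None

route = Route()
for p in [(0, 0), (0.2, 0.1), (1.0, 0.0), (2.0, 0.5)]:   # poses while following
    route.record(p)
print(route.waypoints)                  # [(0, 0), (1.0, 0.0), (2.0, 0.5)]
print(route.next_waypoint((0.1, 0.0)))  # (1.0, 0.0): the next goal to steer to
```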


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2011

Online feature weighting for human discrimination in a person following robot

Víctor Alvarez-Santos; Xosé M. Pardo; Roberto Iglesias; Adrián Canedo-Rodriguez; Carlos V. Regueiro

A robust and adaptive person-following behaviour is an important ability that most service robots must have in order to face the challenging illumination conditions and crowded spaces of unstructured environments. In this paper, we propose a system which combines a laser-based tracker with the support of a camera, acting as a discriminator between the target and the other people present in the scene, who might otherwise cause the laser tracker to fail. The discrimination is done using an online weighting of the feature space, based on the discriminability of each feature analysed.
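
A rough sketch of online feature weighting (a generic formulation, not necessarily the paper's scoring functions): each feature's weight grows with how well it currently separates the target's similarity scores from those of the other people in the scene.

```python
# Illustrative online feature weighting driven by per-feature discriminability.
import numpy as np

def discriminability(target_scores, distractor_scores):
    """One value per feature: separation between target and distractor
    similarity scores (larger means the feature is more discriminative now)."""
    mu_t, mu_d = target_scores.mean(axis=0), distractor_scores.mean(axis=0)
    sigma = target_scores.std(axis=0) + distractor_scores.std(axis=0) + 1e-6
    return np.maximum(mu_t - mu_d, 0.0) / sigma

def update_weights(weights, target_scores, distractor_scores, rate=0.2):
    """Blend the current weights towards the normalised discriminability."""
    d = discriminability(target_scores, distractor_scores)
    d = d / d.sum() if d.sum() > 0 else np.full_like(weights, 1.0 / len(weights))
    return (1 - rate) * weights + rate * d

# Toy usage: 3 features; feature 0 separates target from distractors best.
w = np.full(3, 1 / 3)
t = np.array([[0.9, 0.6, 0.5], [0.8, 0.7, 0.5]])   # similarity of target detections
o = np.array([[0.3, 0.5, 0.5], [0.2, 0.6, 0.4]])   # similarity of other people
print(update_weights(w, t, o))
```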


Iberian Conference on Pattern Recognition and Image Analysis | 2015

Scene Recognition Invariant to Symmetrical Reflections and Illumination Conditions in Robotics

D. Santos-Saavedra; Xosé M. Pardo; Roberto Iglesias; Adrián Canedo-Rodriguez; Víctor Alvarez-Santos

Scene understanding is still an important challenge in robotics. In this paper we analyse the impact of several global and local image representations on the task of scene recognition. The performance of the different alternatives was compared using two image benchmarks: (a) the public database KTH_IDOL and (b) a set of images taken at the Centro Singular de Investigacion en Tecnoloxias da Informacion (CITIUS), at the University of Santiago de Compostela. The results are promising not only regarding the accuracy achieved, but mostly because we have found a combination of a holistic representation and local information that allows a correct classification of images that is robust to specular reflections, illumination conditions, changes of viewpoint, etc.
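
A hedged illustration of the mirror-invariance idea only (the descriptors below are not the ones evaluated in the paper; a crude grid of mean intensities stands in for a holistic representation): the query is matched both as-is and horizontally flipped, and the better of the two distances is used, so a mirrored view of a known scene still matches.

```python
# Crude global descriptor plus nearest-neighbour matching, made invariant to
# horizontal mirroring by also comparing against the flipped query.
import numpy as np

def grid_descriptor(image, grid=4):
    """image: 2D grayscale array. Returns a grid*grid vector of cell means."""
    h, w = image.shape
    cells = image[: h - h % grid, : w - w % grid].reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)).ravel()

def classify(query, train_desc, train_labels):
    """Nearest-neighbour label; the query is matched both as-is and mirrored."""
    d_plain = np.linalg.norm(train_desc - grid_descriptor(query), axis=1)
    d_flip  = np.linalg.norm(train_desc - grid_descriptor(query[:, ::-1]), axis=1)
    return train_labels[int(np.argmin(np.minimum(d_plain, d_flip)))]

# Toy usage: 5 random "scenes" with labels; the 4th image should recover "hall".
rng = np.random.default_rng(1)
train = rng.uniform(0, 255, (5, 60, 80))
labels = np.array(["lab", "lab", "corridor", "hall", "corridor"])
descs = np.stack([grid_descriptor(img) for img in train])
print(classify(train[3], descs, labels))
```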


Sensors | 2015

Mobile Robot Positioning with 433-MHz Wireless Motes with Varying Transmission Powers and a Particle Filter

Adrián Canedo-Rodriguez; José M. Rodríguez; Víctor Alvarez-Santos; Roberto Iglesias; Carlos V. Regueiro

In wireless positioning systems, the transmitters power is usually fixed. In this paper, we explore the use of varying transmission powers to increase the performance of a wireless localization system. To this extent, we have designed a robot positioning system based on wireless motes. Our motes use an inexpensive, low-power sub-1-GHz system-on-chip (CC1110) working in the 433-MHz ISM band. Our localization algorithm is based on a particle filter and infers the robot position by: (1) comparing the power received with the expected one; and (2) integrating the robot displacement. We demonstrate that the use of transmitters that vary their transmission power over time improves the performance of the wireless positioning system significantly, with respect to a system that uses fixed power transmitters. This opens the door for applications where the robot can localize itself actively by requesting the transmitters to change their power in real time.
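
A sketch of the "compare received power with expected power" step under an assumed log-distance path-loss model (the paper's actual sensor model and calibration may differ): each particle is weighted by how well the RSSI it would predict, for the transmission power currently announced by the mote, matches the RSSI actually received.

```python
# Hypothetical RSSI particle weighting with a log-distance path-loss model.
import numpy as np

def expected_rssi(particle_xy, mote_xy, tx_power_dbm, n=2.5, loss_at_1m=40.0):
    """Predicted received power (dBm) at the particle's position."""
    d = max(np.linalg.norm(particle_xy - mote_xy), 0.1)   # avoid log of ~0
    return tx_power_dbm - loss_at_1m - 10.0 * n * np.log10(d)

def rssi_weight(particles_xy, mote_xy, tx_power_dbm, measured_dbm, sigma=4.0):
    """Per-particle likelihood of the measured RSSI."""
    pred = np.array([expected_rssi(p, mote_xy, tx_power_dbm) for p in particles_xy])
    return np.exp(-0.5 * ((pred - measured_dbm) / sigma) ** 2)

# Varying the transmission power over time simply means tx_power_dbm changes
# between updates, while the same model is reused for each announced power.
particles = np.random.uniform(0, 10, (200, 2))
w = rssi_weight(particles, np.array([5.0, 5.0]), tx_power_dbm=0.0, measured_dbm=-58.0)
print(particles[np.argmax(w)])          # particle best explaining the reading
```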


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2015

Scene Recognition for Robot Localization in Difficult Environments

D. Santos-Saavedra; Adrián Canedo-Rodriguez; Xosé M. Pardo; Roberto Iglesias; Carlos V. Regueiro

Scene understanding is still an important challenge in robotics. In this paper we analyze the utility of scene recognition for determining the localization of a robot. We assume that multi-sensor localization systems can be very useful in crowded environments, where there will be many people around the robot but few changes to the furniture. In our localization system we categorize the sensors into two groups: accurate sensor modalities, able to determine the pose of the robot precisely but sensitive to noise or the presence of people; and robust sensor modalities, able to provide rough information about the pose of the robot in almost any condition. The performance of our localization strategy was analyzed through two experiments carried out at the Centro Singular de Investigacion en Tecnoloxias da Informacion (CITIUS), at the University of Santiago de Compostela.

Collaboration


Dive into Adrián Canedo-Rodriguez's collaborations.

Top Co-Authors

Roberto Iglesias (University of Santiago de Compostela)
Víctor Alvarez-Santos (University of Santiago de Compostela)
Xosé M. Pardo (University of Santiago de Compostela)
D. Santos-Saavedra (University of Santiago de Compostela)
M. Fernández-Delgado (University of Santiago de Compostela)
Cristina Gamallo (University of Santiago de Compostela)
Jesús María Rodríguez Presedo (University of Santiago de Compostela)
Senén Barro (University of Santiago de Compostela)