Carlos V. Regueiro
University of A Coruña
Publications
Featured research published by Carlos V. Regueiro.
IEEE Engineering in Medicine and Biology Magazine | 1998
Senén Barro; M. Fernández-Delgado; J.A. Vila-Sobrino; Carlos V. Regueiro; E. Sanchez
In this article the authors describe the application of a new artificial neural network model aimed at the morphological classification of heartbeats detected on a multichannel ECG signal. They emphasize the special characteristics of the algorithm as an adaptive classifier with the capacity to dynamically self-organize its response to the characteristics of the ECG input signal. They also present evaluation results based on traces from the MIT-BIH arrhythmia database.
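The abstract does not give implementation details of the network; as a rough, hypothetical illustration of an adaptive classifier that self-organizes its response to the input signal, here is a minimal nearest-prototype sketch in Python (the cosine similarity, vigilance threshold and learning rate are assumptions, not the authors' model):

```python
import numpy as np

class AdaptivePrototypeClassifier:
    """Minimal self-organizing nearest-prototype classifier (illustrative only).

    Each prototype adapts towards the beats assigned to it; a new prototype is
    created whenever an incoming beat is too dissimilar from all existing ones.
    """

    def __init__(self, vigilance=0.8, learning_rate=0.1):
        self.vigilance = vigilance        # similarity threshold for accepting a match
        self.learning_rate = learning_rate
        self.prototypes = []              # list of feature vectors, one per class

    def _similarity(self, x, p):
        # Cosine similarity between a beat's feature vector and a prototype
        return float(np.dot(x, p) / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-12))

    def classify(self, x):
        """Return the index of the matching prototype, creating one if needed."""
        x = np.asarray(x, dtype=float)
        if self.prototypes:
            sims = [self._similarity(x, p) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.vigilance:
                # Adapt the winning prototype towards the new beat
                self.prototypes[best] += self.learning_rate * (x - self.prototypes[best])
                return best
        # No sufficiently similar prototype: self-organize a new morphological class
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1
```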
Fuzzy Sets and Systems | 2003
Manuel Mucientes; Roberto Iglesias; Carlos V. Regueiro; Alberto Bugarín; Senén Barro
This paper describes a velocity controller implemented on a Nomad 200 mobile robot. The controller has been developed for wall-following behaviour, and its design is modularized into two blocks: angular and linear velocity control. The former was kept deliberately simple so that the design effort could be concentrated on the linear velocity control block, in order to highlight the usefulness of this task. The latter has been implemented using an explicit model for knowledge representation and reasoning called fuzzy temporal rules (FTRs). This model makes it possible to incorporate time explicitly as a variable, so that the evolution of variables over a temporal reference can be described. Using this mechanism we obtain linear velocity values that are adapted to each circumstance, achieving a higher average velocity as well as smoother and more robust behaviour.
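The abstract does not reproduce the rule base; as a hedged sketch of what a fuzzy temporal rule for linear velocity control might look like, the following Python fragment evaluates the hypothetical rule "IF the distance to the wall has been LARGE during the last N samples THEN the linear velocity is HIGH", aggregating memberships over a temporal window (the membership function, window length and velocity values are assumptions):

```python
from collections import deque

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

class FuzzyTemporalVelocityRule:
    """Illustrative fuzzy temporal rule:
    IF distance to wall has been LARGE during the last N samples
    THEN linear velocity is HIGH.
    """

    def __init__(self, window=5, v_low=0.1, v_high=0.6):
        self.history = deque(maxlen=window)   # temporal reference: last N distances
        self.v_low, self.v_high = v_low, v_high

    def update(self, distance_to_wall):
        self.history.append(distance_to_wall)
        # Degree to which the distance has been "large" over the whole window
        degree = min(trapezoid(d, 0.6, 1.0, 3.0, 4.0) for d in self.history)
        # Defuzzify: interpolate between a cautious and a fast linear velocity
        return self.v_low + degree * (self.v_high - self.v_low)
```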
Systems, Man and Cybernetics | 2001
Manuel Mucientes; Roberto Iglesias; Carlos V. Regueiro; Alberto Bugarín; Purificación Cariñena; Senén Barro
The paper describes a fuzzy control system for the avoidance of moving objects by a robot. The objects move with no restrictions, varying their velocity and making turns. Due to the complex nature of this movement, it is necessary to carry out temporal reasoning in order to estimate the trend of the moving object. A new paradigm of fuzzy temporal reasoning, which we call fuzzy temporal rules (FTRs), is used for this control task. The control system comprises over 117 rules, which reflects the complexity of the problem tackled. The controller has been subjected to an exhaustive validation process, and examples of the results obtained are shown.
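The paper itself performs this estimation with fuzzy temporal rules; purely as an illustrative sketch of temporal reasoning about an object's trend, the fragment below fits a line to the object's distance over a short window and returns its slope (window length and sampling period are assumptions):

```python
import numpy as np
from collections import deque

class ObjectTrendEstimator:
    """Estimate whether a detected object is approaching or receding by
    fitting a line to its distance over a short temporal window."""

    def __init__(self, window=8, dt=0.1):
        self.distances = deque(maxlen=window)
        self.dt = dt  # sampling period in seconds

    def update(self, distance):
        self.distances.append(distance)
        if len(self.distances) < 2:
            return 0.0
        t = np.arange(len(self.distances)) * self.dt
        # Slope of the least-squares fit: negative => object approaching
        return np.polyfit(t, np.array(self.distances), 1)[0]

# The crisp slope could then be fuzzified ("approaching fast",
# "approaching slowly", "moving away") and fed into the avoidance rules.
```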
IEEE Transactions on Fuzzy Systems | 2004
Purificación Cariñena; Carlos V. Regueiro; Abraham Otero; Alberto Bugarín; Senén Barro
Detection of landmarks is essential in mobile robotics for navigation tasks such as building topological maps or robot localization. Doors are one of the most common landmarks since they reveal the topological structure of indoor environments. In this paper, the novel paradigm of fuzzy temporal rules is used to detect doors from the information of ultrasound sensors. This paradigm can be used both to model the knowledge needed for detection and to take into account the temporal variation of several sensor signals. In experiments with a Nomad 200 mobile robot in a real environment, 91% of doors were correctly detected, which shows the reliability and robustness of the system.
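The detection logic of the paper is expressed with fuzzy temporal rules; as a loose, non-fuzzy sketch of the underlying idea of exploiting the temporal variation of a lateral ultrasound signal, the fragment below flags a doorway when the range rises well above the wall distance for a door-sized run of samples (all thresholds are assumptions):

```python
from collections import deque

class DoorwayDetector:
    """Illustrative door detector: flag a doorway when the lateral ultrasound
    range rises sharply above the wall distance and stays high for a
    door-sized interval while the robot follows the wall."""

    def __init__(self, window=10, jump=0.5, min_high_samples=3):
        self.readings = deque(maxlen=window)
        self.jump = jump                       # metres beyond the wall distance
        self.min_high_samples = min_high_samples

    def update(self, lateral_range):
        self.readings.append(lateral_range)
        if len(self.readings) < self.readings.maxlen:
            return False
        baseline = min(self.readings)          # estimate of the wall distance
        deep = [r for r in self.readings if r > baseline + self.jump]
        # A door opening shows up as a run of "deep" readings in the window
        return len(deep) >= self.min_high_samples
```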
Robotics and Autonomous Systems | 2012
Víctor Alvarez-Santos; Xosé M. Pardo; Roberto Iglesias; Adrián Canedo-Rodriguez; Carlos V. Regueiro
One of the most important abilities that personal robots need when interacting with humans is the ability to discriminate amongst them. In this paper, we carry out an in-depth study of the possibilities of a colour camera placed on top of a robot to discriminate between humans, and thus obtain a reliable person-following behaviour on the robot. In particular, we have reviewed and analysed the possibility of using the most popular colour and texture features from object and texture recognition to identify and model the target (the person being followed). Nevertheless, real-time restrictions make it necessary to select a reduced subset of these features in order to limit the computational burden. This subset was selected after carrying out a redundancy analysis and considering how these features perform when discriminating amongst similar human torsos. Finally, we also describe several scoring functions able to dynamically adjust the relevance of each feature according to the particular conditions of the environment where the robot moves, together with the characteristics of the clothes worn by the people in the scene. The results of this in-depth study have been implemented in a novel and adaptive system (described in this paper), which is able to discriminate between humans to obtain reliable person-following behaviour on a mobile robot. The performance of our proposal is clearly shown through a set of experimental results obtained with a real robot working in real and difficult scenarios.
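As a hedged sketch of how feature relevance could be adjusted dynamically, the following Python fragment weights each colour or texture histogram by how well it separates the target torso from nearby distractor torsos, and scores candidates with the weighted similarity (histogram intersection and this weighting scheme are assumptions, not necessarily the scoring functions of the paper):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity between two normalised histograms (1.0 = identical)."""
    return float(np.minimum(h1, h2).sum())

def feature_weights(target_feats, distractor_feats):
    """Give more weight to features that separate the target from nearby
    distractor torsos; target_feats maps a feature name to a normalised
    histogram, distractor_feats maps it to a list of distractor histograms."""
    weights = {}
    for name, target_h in target_feats.items():
        sims = [histogram_intersection(target_h, d) for d in distractor_feats[name]]
        # Discriminability: how dissimilar the closest distractor is
        weights[name] = 1.0 - max(sims) if sims else 1.0
    total = sum(weights.values()) or 1.0
    return {name: w / total for name, w in weights.items()}

def score_candidate(candidate_feats, target_feats, weights):
    """Weighted similarity of a detected torso to the target model."""
    return sum(weights[n] * histogram_intersection(candidate_feats[n], target_feats[n])
               for n in weights)
```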
Robotics and Autonomous Systems | 2010
Cristina Gamallo; Carlos V. Regueiro; Pablo Quintía; Manuel Mucientes
Mobile robots operating in real and populated environments usually execute tasks that require accurate knowledge of their position. Monte Carlo Localization (MCL) algorithms have been successfully applied with laser range finders. However, vision-based approaches present several problems with occlusions, real-time operation, and environment modifications. In this article, an omnivision-based MCL algorithm that solves these drawbacks is presented. The algorithm works with a variable number of particles through the use of the Kullback-Leibler divergence (KLD). The measurement model is based on an omnidirectional camera with a fish-eye lens. This model uses a feature-based map of the environment, and the feature extraction process makes it robust to occlusions and changes in the environment. Moreover, the algorithm is scalable and works in real time. Results on tracking, global localization and the kidnapped robot problem show the excellent performance of the localization system in a real environment. In addition, experiments under severe and continuous occlusions reflect the ability of the algorithm to localize the robot in crowded environments.
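The variable number of particles follows the KLD-sampling idea mentioned in the abstract; a minimal sketch of the standard KLD bound on the sample size (parameter values are assumptions) is:

```python
import math

def kld_sample_size(k, epsilon=0.05, z_quantile=2.326):
    """Number of particles needed so that, with probability 1 - delta, the
    KL divergence between the sample-based and true distributions stays below
    epsilon, given k histogram bins with support (KLD-sampling bound).
    z_quantile is the upper 1 - delta quantile of the standard normal
    (2.326 corresponds to delta = 0.01)."""
    if k < 2:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon) *
                         (1.0 - a + math.sqrt(a) * z_quantile) ** 3))

# Example: a spread-out belief occupying 50 bins needs kld_sample_size(50)
# particles, while a sharply peaked belief occupying few bins needs far fewer.
```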
Information Fusion | 2016
Adrián Canedo-Rodriguez; Víctor Alvarez-Santos; Carlos V. Regueiro; Roberto Iglesias; Senén Barro; Jesús María Rodríguez Presedo
Highlights: particle filter robot localisation fusing 2D laser, WiFi, compass and external cameras; works with any sensor combination (even if unsynchronized or with different data rates); experiments in controlled situations and real operation in social events; analysis and discussion of the performance of each sensor and of all sensor combinations; best results obtained from the fusion of all the sensors (statistically significant). In this paper, we propose a multi-sensor fusion algorithm based on particle filters for mobile robot localisation in crowded environments. Our system is able to fuse the information provided by sensors placed on board the robot and by sensors external to it (off-board). We also propose a methodology for fast system deployment, map construction, and sensor calibration with a limited number of training samples. We validated our proposal experimentally with a laser range-finder, a WiFi card, a magnetic compass, and an external multi-camera network. We carried out experiments that validate our deployment and calibration methodology. Moreover, we performed localisation experiments in controlled situations and during real robot operation in social events. We obtained the best results from the fusion of all the sensors available: the precision and stability were sufficient for mobile robot localisation. No single sensor is reliable in every situation, but our algorithm works with any subset of sensors: if a sensor is not available, performance degrades gracefully.
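The filter details are not given in the abstract; as a minimal sketch of fusing several sensors in a particle filter while tolerating missing readings, the fragment below multiplies per-sensor likelihoods into each particle's weight and simply skips sensors that provided no measurement this cycle (the data structures and likelihood interfaces are assumptions):

```python
def update_particle_weights(particles, observations, likelihood_models):
    """Multiply each particle's weight by the likelihood of every available
    sensor reading; sensors with no reading this cycle are simply skipped,
    so the filter degrades gracefully when a sensor drops out.

    particles: list of dicts with 'state' and 'weight'
    observations: dict sensor_name -> measurement (or None if unavailable)
    likelihood_models: dict sensor_name -> f(measurement, state) -> probability
    """
    for p in particles:
        w = p['weight']
        for sensor, z in observations.items():
            if z is None:
                continue                      # sensor unavailable this cycle
            w *= likelihood_models[sensor](z, p['state'])
        p['weight'] = w
    total = sum(p['weight'] for p in particles) or 1.0
    for p in particles:
        p['weight'] /= total                  # normalise before resampling
    return particles
```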
Industrial and Engineering Applications of Artificial Intelligence and Expert Systems | 1998
Roberto Iglesias; Carlos V. Regueiro; José Correa; Senén Barro
In this work, we describe a control approach in which, by way of supervised reinforcement learning, learning potential is combined with prior knowledge of the task in question, obtaining rapid convergence to the desired behaviour as well as increased stability of the learning process. We have tested our approach on the design of a basic behaviour pattern in mobile robotics, wall following. We have carried out several experiments, obtaining good results that confirm the utility and advantages of our approach.
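The abstract does not detail how supervision and reinforcement learning are combined; one common way to blend prior task knowledge with learning is to let a hand-coded supervisor suggest actions during exploration, as in this hedged tabular Q-learning sketch (the advice probability and update rule are assumptions, not necessarily the authors' scheme):

```python
import random
from collections import defaultdict

class SupervisedQLearner:
    """Tabular Q-learning in which a supervisor (hand-coded prior knowledge of
    the task) suggests actions with a given probability, biasing exploration
    towards sensible behaviour and speeding up convergence."""

    def __init__(self, actions, alpha=0.2, gamma=0.95, epsilon=0.1, advice_prob=0.5):
        self.q = defaultdict(float)           # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.advice_prob = advice_prob

    def select_action(self, state, supervisor=None):
        if supervisor is not None and random.random() < self.advice_prob:
            return supervisor(state)          # follow the supervisor's advice
        if random.random() < self.epsilon:
            return random.choice(self.actions) # random exploration
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```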
IEEE Transactions on Education | 2009
Xoán C. Pardo; María J. Martín; José Sanjurjo; Carlos V. Regueiro
This paper describes a practical experience of adapting the teaching of a course in Computer Technology (CT) to the new demands of the European Higher Education Area (EHEA). CT is a core course taught in the first year of the degree program Technical Engineering in Management Computing in the Faculty of Computer Science at the University of A Coruña (UDC), Spain. The contents of this course are mainly devoted to the design of digital systems. The main purpose of the adaptation has been to focus more on students, clearly defining the abilities they will develop during the course and suggesting activities that facilitate the development of those abilities. The aim of this work is to describe how this adaptation was performed, the materials and activities prepared, the difficulties encountered, the goals achieved, and the response of students and teachers to these changes.
Sensors | 2012
Adrián Canedo-Rodriguez; Roberto Iglesias; Carlos V. Regueiro; Víctor Alvarez-Santos; Xosé M. Pardo
To bring cutting-edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras support the movement of the robots, enabling them to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and adaptable to the environment. We carried out several real-world experiments, which show the good performance of our proposal.
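The communication mechanisms of the multi-agent system are not specified in the abstract; purely as a hypothetical sketch of a camera agent notifying nearby robots of a situation that requires their services, the fragment below broadcasts a small JSON event over UDP (the message format and port are assumptions):

```python
import json
import socket

def broadcast_event(event_type, position, port=9000):
    """Illustrative camera-agent notification: broadcast a JSON event (e.g. a
    person requiring attention at a given position in the camera's field of
    view) so that nearby robot agents can decide whether to respond."""
    message = json.dumps({"event": event_type, "position": position}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(message, ("255.255.255.255", port))
    sock.close()

# A robot agent would listen on the same UDP port, decode the JSON payload,
# and use the camera's observation to trigger or steer its navigation.
```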