Eugenio Aguirre
University of Granada
Publications
Featured research published by Eugenio Aguirre.
Image and Vision Computing | 2007
Rafael Muñoz-Salinas; Eugenio Aguirre; Miguel García-Silvente
People detection and tracking are important capabilities for applications that aim to achieve natural human-machine interaction. Although the topic has been extensively explored using a single camera, the availability and low price of new commercial stereo cameras make them an attractive sensor for developing more sophisticated applications that take advantage of depth information. This work presents a system able to visually detect and track multiple people using a stereo camera placed at an under-head position. This camera position is especially appropriate for human-machine applications that require interacting with people or analyzing human facial gestures. The system models the background as a height map that is used to easily extract foreground objects, among which people are found using a face detector. Once a person has been spotted, the system is capable of tracking them while still looking for more people. Our system tracks people by combining color and position information (using the Kalman filter). Tracking based exclusively on position information is unreliable when people establish close interactions, so we also include color information about people's clothes in order to increase the tracking robustness. The system has been extensively tested and the results show that the use of color greatly reduces the errors of the tracking system. In addition, the people detection technique employed, based on combining plan-view map information with a face detector, avoided false detections in the tests performed. Finally, the low computing time required for the detection and tracking process makes the system suitable for real-time applications.
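As an illustration of the tracking idea described in this abstract, the sketch below combines a constant-velocity Kalman filter on image-plane position with a clothes-colour histogram gate. It is a minimal sketch under assumed noise parameters and a Bhattacharyya-style similarity threshold, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): constant-velocity Kalman filter on the
# tracked person's image position, gated by a clothes-colour histogram similarity.
import numpy as np

class PersonTrack:
    def __init__(self, xy, colour_hist, dt=1.0):
        # State: [x, y, vx, vy]; measurements are the (x, y) image position.
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.1      # process noise (assumed)
        self.R = np.eye(2) * 4.0      # measurement noise (assumed)
        self.hist = colour_hist       # normalised reference clothes-colour histogram

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, xy, colour_hist, min_similarity=0.6):
        # Bhattacharyya coefficient between normalised histograms.
        sim = np.sum(np.sqrt(self.hist * colour_hist))
        if sim < min_similarity:
            return False                      # likely a different person: skip update
        z = np.asarray(xy, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        self.hist = 0.9 * self.hist + 0.1 * colour_hist   # slow appearance update
        return True
```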
International Journal of Approximate Reasoning | 2000
Eugenio Aguirre; Antonio González
The complexity of behavior generation for artificial systems can be overcome by decomposing global tasks into simpler, well-specified behaviors which are easier to design and can be tuned independently of each other. Robot behaviors can be implemented as sets of fuzzy rules that mimic expert knowledge in specific tasks. These behaviors are included in the lowest level of a hybrid deliberative-reactive architecture aimed at an efficient integration of planning and reactive control. In this work we briefly present the architecture and focus attention on the design, coordination and fusion of the elementary behaviors. The design is based on regulatory control using fuzzy logic, and the coordination is defined by fuzzy metarules which define the context of applicability of each behavior. Regarding action fusion, two combination methods for fusing the preferences of each behavior are used in the experiments. In order to validate the system, several measures are also proposed, and the performance of the architecture and of the combination/arbitration algorithms has been demonstrated both in simulation and in the real world. The robot achieves every control objective and the trajectory is smooth in spite of the interaction between several behaviors, unexpected obstacles and the presence of noisy data. When the experimental results of both methods are taken into account, the choice of combination method appears to be of prime importance for achieving the best trade-off among the preferences of every behavior.
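The behavior-fusion idea can be caricatured as below: each behaviour proposes a command with a preference weight, and the fused command is the weighted average. The avoid_obstacle and go_to_goal behaviours, the triangular membership and all thresholds are assumptions for illustration; the paper's actual rules, metarules and combination methods are not reproduced.

```python
# Toy sketch of fuzzy behaviour fusion (not the paper's controller).
def trimf(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def avoid_obstacle(front_dist_m):
    # The closer the obstacle, the stronger the preference for turning away.
    activation = trimf(front_dist_m, 0.0, 0.0, 1.5)   # metres (assumed range)
    return {"turn": 0.8, "speed": 0.1, "weight": activation}

def go_to_goal(goal_bearing):
    # Steer towards the goal bearing; always mildly active.
    return {"turn": max(-1.0, min(1.0, goal_bearing)), "speed": 0.5, "weight": 0.4}

def fuse(commands):
    """Weighted-average fusion of the behaviours' preferred commands."""
    total = sum(c["weight"] for c in commands) or 1e-9
    return {k: sum(c[k] * c["weight"] for c in commands) / total
            for k in ("turn", "speed")}

# Usage: fuse([avoid_obstacle(0.6), go_to_goal(-0.3)])
```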
Autonomous Robots | 2006
Rafael Muñoz-Salinas; Eugenio Aguirre; Miguel García-Silvente
Doors are common objects in indoor environments and their detection can be used in robotic tasks such as map building, navigation and positioning. This work presents a new approach to door detection in indoor environments using computer vision. Doors are found in gray-level images by detecting the borders of their architraves. A variation of the Hough transform is used to extract the segments in the image after applying the Canny edge detector. Features such as length, direction, or distance between segments are used by a fuzzy system to analyze whether the relationships between them reveal the existence of doors. The system has been designed, using expert knowledge, to detect the rectangular doors typical of many indoor environments. In addition, a tuning mechanism based on a genetic algorithm is proposed to improve the performance of the system according to the particularities of the environment in which it is going to be employed. A large database of images containing doors of our building, seen from different angles and distances, has been created to test the performance of the system before and after the tuning process. The system has shown the ability to detect rectangular doors under heavy perspective deformations and it is fast enough to be used for real-time applications on a mobile robot.
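The segment-based pipeline can be sketched with OpenCV's Canny detector and probabilistic Hough transform plus a toy fuzzy score over segment length and separation. All thresholds and memberships below are assumptions, and the paper's fuzzy system and genetic tuning are not reproduced.

```python
# Rough sketch of the edge/segment stage with a toy fuzzy "door-like" score.
import cv2
import numpy as np

def ramp(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / float(b - a)))

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def detect_door_candidates(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return []
    # Keep roughly vertical segments as possible architrave sides: (x position, length).
    verticals = [((x1 + x2) / 2.0, abs(y2 - y1))
                 for x1, y1, x2, y2 in lines[:, 0]
                 if abs(x2 - x1) < 0.2 * abs(y2 - y1)]
    candidates = []
    for i, (xa, la) in enumerate(verticals):
        for xb, lb in verticals[i + 1:]:
            # Toy fuzzy rule: both sides "long" AND their separation "door-like" (pixels).
            score = min(ramp(la, 60, 120), ramp(lb, 60, 120),
                        trapmf(abs(xa - xb), 40, 80, 200, 300))
            if score > 0.5:
                candidates.append((min(xa, xb), max(xa, xb), score))
    return candidates
```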
Applied Intelligence | 2003
Eugenio Aguirre; Antonio González
The understanding of the role of perception in intelligent robotic systems has evolved greatly, from the classic approaches to the reactive behavior-based approaches. Classic approaches tried to model the environment with a high level of accuracy, while in reactive systems perception is usually tied to the actions the robot needs to undertake, so that such complex models are generally unnecessary. Regarding hybrid approaches, it is likewise important to understand the role assigned to perception in order to ensure the success of the system. In this work a new perceptual model based on fuzzy logic is proposed for use in a hybrid deliberative-reactive architecture. This perceptual model deals with the uncertainty and vagueness underlying the ultrasound sensor data, it is useful for carrying out data fusion from different sensors, and it allows us to establish various levels of interpretation of the sensor data. Furthermore, using this perceptual model an approximate world model can be built so that the robot can plan its motions for navigating in an office-like environment. Navigation is then accomplished using the hybrid deliberative-reactive architecture, taking into account the perceptual model to represent the robot's beliefs about the world. Experiments in simulation and in a real office-like environment are shown to validate the perceptual model integrated into the navigation architecture.
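A minimal sketch of this kind of fuzzy sensor interpretation follows, with assumed NEAR/MEDIUM/FAR breakpoints and a simple max-based fusion of two sonars covering the same direction; it illustrates the flavour of the approach, not the paper's model.

```python
# Toy fuzzy interpretation and fusion of ultrasonic readings (breakpoints assumed).
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def interpret(distance_m):
    """Fuzzy interpretation of one sonar reading."""
    return {
        "NEAR":   trapmf(distance_m, -1.0, 0.0, 0.3, 0.8),
        "MEDIUM": trapmf(distance_m, 0.3, 0.8, 1.5, 2.5),
        "FAR":    trapmf(distance_m, 1.5, 2.5, 6.0, 7.0),
    }

def fuse(reading_a, reading_b):
    """Fuse two sensors covering the same direction (believe either report)."""
    ia, ib = interpret(reading_a), interpret(reading_b)
    return {label: max(ia[label], ib[label]) for label in ia}

# Example: fuse(0.5, 0.9) -> NEAR/MEDIUM/FAR possibilities for that direction.
```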
IEEE Transactions on Fuzzy Systems | 2008
Rafael Muñoz-Salinas; Eugenio Aguirre; Oscar Cordón; Miguel García-Silvente
One of the main advantages of fuzzy systems is their ability to provide comprehensible models of real-world systems, thanks to the use of a fuzzy rule structure easily interpretable by human beings. This is especially useful for the design of fuzzy logic controllers, where the knowledge base can be extracted from expert knowledge. Moreover, the availability of a readable structure allows the human expert to customize the fuzzy controller to different environments by manually tuning its components. Nevertheless, this tuning task is usually time-consuming when done manually, especially when several measures are considered to evaluate the controller performance, and thus interest in the design of automatic tuning procedures for fuzzy systems has increased over the last few years. In this paper, we tackle the tuning of the fuzzy membership functions of a fuzzy visual system for autonomous robots. This fuzzy visual system is based on a hierarchical structure of three different fuzzy classifiers, whose combined action allows the robot to detect the presence of doors in the images captured by its camera. Although the global knowledge represented in the fuzzy system knowledge base makes it perform properly in the door detection task, its adaptation to the specific conditions of the environment where the robot is operating can significantly improve the classification accuracy. However, the tuning procedure is complex, as two different performance indexes are involved in the optimization process (true positive and false positive detections), thus becoming a multiobjective problem. Hence, in order to carry out the fuzzy system tuning automatically, different single- and multiobjective evolutionary algorithms are considered to optimize the two criteria, and their problem-solving behavior is compared.
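The two-objective tuning problem (maximise true positives, minimise false positives) can be caricatured with a random-perturbation search that keeps a Pareto archive, as sketched below. The labelled image set and the classify function are assumptions, and the single- and multiobjective evolutionary algorithms actually compared in the paper are not reproduced.

```python
# Toy Pareto-archive tuning of real-valued membership-function parameters.
import random

def evaluate(params, labelled_images, classify):
    """Return (true_positives, false_positives) of classify(image, params)."""
    tp = fp = 0
    for image, has_door in labelled_images:
        detected = classify(image, params)
        if detected and has_door:
            tp += 1
        elif detected and not has_door:
            fp += 1
    return tp, fp

def dominates(a, b):
    """Pareto dominance for (tp, fp): higher tp is better, lower fp is better."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def tune(initial_params, labelled_images, classify, iters=200, sigma=0.05):
    archive = [(initial_params, evaluate(initial_params, labelled_images, classify))]
    for _ in range(iters):
        parent, _ = random.choice(archive)
        child = [p + random.gauss(0.0, sigma) for p in parent]
        score = evaluate(child, labelled_images, classify)
        if any(dominates(s, score) for _, s in archive):
            continue                                   # child is dominated: discard
        archive = [(p, s) for p, s in archive if not dominates(score, s)]
        archive.append((child, score))
    return archive   # approximate Pareto front of tuned parameter vectors
```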
Mexican International Conference on Artificial Intelligence | 2005
Rafael Muñoz-Salinas; Eugenio Aguirre; Miguel García-Silvente; Antonio González
In this work we present an agent for people detection and tracking through stereo vision. The agent makes use of active vision to perform people tracking with a robotic head on which the vision system is installed. Initially, a map of the surrounding environment is created, including its motionless characteristics. This map is later used to detect objects in motion and to search for people among them by using a face detector. Once a person has been spotted, the agent is capable of tracking them through the robotic head, which allows the stereo system to rotate. In order to achieve robust tracking we have used the Kalman filter. The agent focuses on the person at all times by framing their head and arms in the image. This task could be used by other agents that need to analyze gestures and expressions of potential application users in order to facilitate human-robot interaction.
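The "keep the person framed" part of such an agent can be illustrated with a simple proportional pan/tilt re-centring command, as in the hypothetical sketch below; the gains and the robotic-head interface are assumptions, not the paper's implementation.

```python
# Hypothetical proportional re-centring of the tracked head with a pan/tilt unit.
def centre_target(head_xy, image_size, pan_tilt_head, k_pan=0.002, k_tilt=0.002):
    """Drive the robotic head so the tracked person's head approaches image centre."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    error_x = head_xy[0] - cx          # +: target is right of centre
    error_y = head_xy[1] - cy          # +: target is below centre
    # pan_tilt_head.move is an assumed interface taking relative angles in radians.
    pan_tilt_head.move(pan=-k_pan * error_x, tilt=-k_tilt * error_y)
```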
International Journal of Approximate Reasoning | 2012
Rui Paúl; Eugenio Aguirre; Miguel García-Silvente; Rafael Muñoz-Salinas
This paper describes a system capable of detecting and tracking several people using a new approach based on colour, stereo vision and fuzzy logic. Initially, in the people detection phase, two fuzzy systems are used to filter out false positives of a face detector. Then, in the tracking phase, a new fuzzy logic based particle filter (FLPF) is proposed to fuse stereo and colour information, assigning different confidence levels to each of these information sources. Information regarding depth and occlusion is used to create these confidence levels. This way, the system is able to keep track of people in the reference camera image even when either stereo or colour information is confusing or unreliable. To carry out the tracking, the new FLPF is used, so that several particles are generated while several fuzzy systems compute the possibility that the generated particles correspond to the new positions of the people. Our technique outperforms two well-known tracking approaches, one based on the method of Nummiaro et al. [1] and the other based on the Kalman/mean-shift tracker of Comaniciu and Ramesh [2]. All of these approaches were tested using several colour-with-distance sequences simulating real-life scenarios. The results show that our system is able to keep track of people in most of the situations where the other trackers fail, as well as to determine the size of their projections in the camera image. In addition, the method is fast enough for real-time applications.
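A simplified particle-filter step that blends colour and stereo evidence with per-frame confidence levels is sketched below. The scoring and confidence inputs are assumed interfaces standing in for the paper's fuzzy systems over depth and occlusion, which are only caricatured here.

```python
# Simplified confidence-weighted particle filter step over (x, y) image positions.
import numpy as np

def resample(particles, weights, rng):
    """Multinomial resampling."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

def flpf_step(particles, colour_score, depth_score, colour_conf, depth_conf,
              motion_std=5.0, rng=None):
    """One predict/weight/resample step.

    colour_score(p) and depth_score(p) return values in [0, 1] for a particle p;
    colour_conf and depth_conf in [0, 1] say how much each source is trusted this
    frame (assumed interfaces).
    """
    rng = rng or np.random.default_rng()
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: confidence-weighted blend of the two information sources.
    w = np.array([colour_conf * colour_score(p) + depth_conf * depth_score(p)
                  for p in particles]) + 1e-12
    w /= w.sum()
    estimate = (particles * w[:, None]).sum(axis=0)   # weighted-mean position estimate
    return resample(particles, w, rng), estimate
```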
Robotica | 2005
Rafael Muñoz-Salinas; Eugenio Aguirre; Miguel García-Silvente; Moisés Gómez
A multi-agent system based on behaviours for controlling the navigation of a mobile robot in office-like environments is presented. The set of agents is structured into a three-layer hybrid architecture. In the Deliberative layer, a high-level abstract plan is created using a topological map of the environment; it is composed of the sequence of rooms and corridors to traverse and doors to cross in order to reach a desired room. The Execution and Monitoring layer translates the plan into a sequence of available skills in order to achieve the desired goal and monitors the execution of the plan. In the Control layer there is a set of agents that implement fuzzy and visual behaviours that run concurrently to guide the robot. The fuzzy behaviours manage the vagueness and uncertainty of the range sensor information, allowing the robot to navigate safely in the environment. The visual behaviour locates a required door to cross and fixates on it, indicating the appropriate direction to reach it. Artificial landmarks are placed beside the doors to indicate their position. The system has been implemented on a Nomad 200 mobile robot and has been validated in numerous experiments in a real office-like environment.
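A hypothetical illustration of the Execution and Monitoring layer's job, translating a topological plan into a sequence of skills, is given below; the plan format and skill names are assumptions for the example, not the paper's interfaces.

```python
# Toy translation of a deliberative-layer topological plan into low-level skills.
def plan_to_skills(plan):
    """plan: list of ('room' | 'corridor' | 'door', label) steps to the goal room."""
    skills = []
    for kind, label in plan:
        if kind == "corridor":
            skills.append(("FOLLOW_CORRIDOR", label))     # fuzzy range-based behaviour
        elif kind == "door":
            skills.append(("FIND_DOOR_VISUALLY", label))  # visual behaviour on landmarks
            skills.append(("CROSS_DOOR", label))
        elif kind == "room":
            skills.append(("TRAVERSE_ROOM", label))
    return skills

# plan_to_skills([("room", "lab"), ("door", "D12"), ("corridor", "C1"),
#                 ("door", "D3"), ("room", "office-3")])
```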
International Journal of Intelligent Systems | 2002
Eugenio Aguirre; Antonio González
In behavior-based robots, planning is necessary to elaborate abstract plans that resolve complex navigational tasks. Maps of the environment are usually used to plan the robot motion and to resolve the navigational tasks. Two types of maps have mainly been used: metric and topological maps. Both types present advantages and weaknesses, so several integration approaches have been proposed in the literature. However, in many approaches the integration is conducted to build a global representation model, and the planning and navigational techniques are not adapted to profit from both kinds of information. We propose the integration of topological and metric models into a hybrid deliberative-reactive architecture through a path-planning algorithm based on A* and a hierarchical map with two levels of abstraction. The hierarchical map contains the information required to take advantage of both kinds of modeling. On the one hand, the topological model is based on a fuzzy perceptual model that allows the robot to classify the environment into distinguished places; on the other hand, the metric map is built using regions of possibility with the shape of fuzzy segments, which are later used to build fuzzy grid-based maps. The approach allows the robot to decide on the use of the most appropriate model to navigate the world depending on minimum-cost and safety criteria. Experiments in simulation and in a real office-like environment are shown to validate the proposed approach integrated into the navigational architecture.
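A minimal A* sketch over a topological graph with metric edge costs, in the spirit of the hierarchical planning described above, follows; the graph, cost and heuristic functions are assumed inputs.

```python
# Minimal A* over a node graph with metric edge costs and an admissible heuristic.
import heapq

def a_star(graph, cost, heuristic, start, goal):
    """graph: node -> iterable of neighbours; cost(u, v) and heuristic(n) in metres."""
    open_set = [(heuristic(start), 0.0, start, [start])]
    closed = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g                      # sequence of places and total cost
        if node in closed:
            continue
        closed.add(node)
        for nxt in graph[node]:
            if nxt not in closed:
                g2 = g + cost(node, nxt)
                heapq.heappush(open_set, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")
```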
Robot | 2014
Eugenio Aguirre; Miguel García-Silvente; Javier Plata
People detection and tracking is an essential skill for social and interactive robots. Computer vision has been widely used to solve this task, but images are affected by noise and illumination changes. The laser range finder is robust against illumination changes, so it can bring useful information to the detection and tracking task; in fact, multisensor approaches are showing the best results. In this work, we present a new method to detect and track people using a laser range finder. Leg patterns are learnt from 2D laser data using supervised learning. Unlike other leg-detection approaches, people can be still or moving in the surroundings of the robot. The leg detection method is used as the observation model in a particle filter to track the motion of a person. Experiments in a real indoor environment have been carried out to validate the proposal.
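A sketch of a leg-detection front end in this style is given below: the scan is split into segments at range discontinuities, simple width and chord-deviation features are computed per segment, and a previously trained classifier (assumed scikit-learn-like predict interface) labels leg segments. The features, thresholds and classifier are assumptions, not the paper's implementation.

```python
# Toy leg-detection front end for a 2D laser scan.
import numpy as np

def segment_scan(ranges, angles, jump=0.15):
    """Split a scan into segments wherever consecutive ranges jump by more than `jump` m."""
    points = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
    segments, current = [], [points[0]]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(np.array(current))
            current = []
        current.append(points[i])
    segments.append(np.array(current))
    return segments

def leg_features(segment):
    """Segment width, mean deviation from the chord, and point count (leg-like cues)."""
    chord = segment[-1] - segment[0]
    width = np.linalg.norm(chord)
    d = segment - segment[0]
    dev = np.abs(d[:, 0] * chord[1] - d[:, 1] * chord[0]) / (width + 1e-9)
    return np.array([width, dev.mean(), len(segment)])

def detect_legs(ranges, angles, classifier):
    """Return centroids of segments the trained classifier labels as legs (label 1)."""
    legs = []
    for seg in segment_scan(np.asarray(ranges), np.asarray(angles)):
        if len(seg) >= 3 and classifier.predict([leg_features(seg)])[0] == 1:
            legs.append(seg.mean(axis=0))
    return legs
```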