
Publications


Featured research published by Pablo Bustos.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

RoboComp: a tool-based robotics framework

Luis J. Manso; Pilar Bachiller; Pablo Bustos; Pedro Núñez; Ramón Cintas; Luis Vicente Calderita

This paper presents RoboComp, an open-source component-oriented robotics framework. Ease of use and low development effort have proven to be two of the key issues to take into account when building frameworks. Given the crucial role of development tools in these matters, this paper describes in depth the tools that make RoboComp more than just a middleware. To convey the developer experience, examples are given throughout the text. RoboComp is also compared to the most relevant open-source projects with similar goals, specifying its weaknesses and strengths.
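To give a flavour of the component-oriented model the paper describes, the following is a minimal sketch of components with periodic compute loops talking through proxies. The class and method names are hypothetical, not the real RoboComp API; actual RoboComp components are code-generated from interface definitions and communicate through middleware.

```python
# Illustrative sketch of a component-oriented design similar in spirit to
# RoboComp: each component runs a periodic compute() loop and talks to
# others through proxies. Names are hypothetical, not the real RoboComp API.
import threading
import time


class Component:
    """Base class: a worker with a periodic compute loop."""

    def __init__(self, period_ms=100):
        self.period = period_ms / 1000.0
        self._running = False

    def start(self):
        self._running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def stop(self):
        self._running = False

    def _loop(self):
        while self._running:
            self.compute()
            time.sleep(self.period)

    def compute(self):
        raise NotImplementedError


class CameraComponent(Component):
    """Pretend hardware driver exposing the latest frame."""

    def __init__(self):
        super().__init__(period_ms=33)            # ~30 Hz
        self.frame = None

    def compute(self):
        self.frame = "frame@%.3f" % time.time()   # stand-in for an image

    def get_frame(self):                          # the 'interface' others call
        return self.frame


class TrackerComponent(Component):
    """Consumer component holding a proxy to the camera's interface."""

    def __init__(self, camera_proxy):
        super().__init__(period_ms=100)
        self.camera = camera_proxy

    def compute(self):
        frame = self.camera.get_frame()
        if frame:
            print("tracking on", frame)


if __name__ == "__main__":
    cam = CameraComponent()
    tracker = TrackerComponent(camera_proxy=cam)  # in-process 'proxy'
    cam.start()
    tracker.start()
    time.sleep(0.5)
    tracker.stop()
    cam.stop()
```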


Sensors | 2014

Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

Felipe Cid; J. Moreno; Pablo Bustos; Pedro Núñez

This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided, along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of imitation and goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the RoboComp robotics framework, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time operation of the whole system, including recognition and imitation of human facial expressions.
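As an aside on how a FACS-driven interface can decouple expression synthesis from the head's mechanics, here is a minimal sketch that blends Action Unit (AU) intensities into clamped joint targets. The AU-to-joint table, joint names and gains are invented for illustration; they are not Muecas's actual mapping.

```python
# Hypothetical sketch of a FACS-style control layer: expression requests
# arrive as Action Unit (AU) intensities in [0, 1] and are converted to
# joint angle targets. The AU-to-joint weights below are invented for
# illustration; they are not the real Muecas calibration.

# Per-AU contribution to each joint, in degrees at full intensity.
AU_TO_JOINTS = {
    1:  {"eyebrow_left": 15.0, "eyebrow_right": 15.0},    # inner brow raiser
    4:  {"eyebrow_left": -12.0, "eyebrow_right": -12.0},  # brow lowerer
    12: {"mouth_left": 10.0, "mouth_right": 10.0},        # lip corner puller
    26: {"jaw": 20.0},                                    # jaw drop
}

JOINT_LIMITS = {"eyebrow_left": (-20, 20), "eyebrow_right": (-20, 20),
                "mouth_left": (-15, 15), "mouth_right": (-15, 15),
                "jaw": (0, 30)}


def aus_to_joint_targets(aus):
    """Blend active AUs into clamped joint angle targets (degrees)."""
    targets = {joint: 0.0 for joint in JOINT_LIMITS}
    for au, intensity in aus.items():
        for joint, gain in AU_TO_JOINTS.get(au, {}).items():
            targets[joint] += intensity * gain
    # Respect the mechanical limits of each joint.
    for joint, (lo, hi) in JOINT_LIMITS.items():
        targets[joint] = max(lo, min(hi, targets[joint]))
    return targets


# A rough 'happiness' request: lip corner pull plus a slight jaw drop.
print(aus_to_joint_targets({12: 1.0, 26: 0.3}))
```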


Sensors | 2013

Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

Luis Vicente Calderita; Juan Pedro Bandera; Pablo Bustos; Andreas Skiadopoulos

Motion capture systems have recently experienced a strong evolution. New, cheap depth sensors and open-source frameworks, such as OpenNI, allow human motion to be perceived on-line without invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost.
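The following is a toy reconstruction of the kind of kinematic constraint the paper describes: child joints are re-projected so that bone lengths stay fixed at learned values, and positions are exponentially smoothed. The bone topology, lengths and smoothing factor are example values, not the paper's actual filter.

```python
# Toy illustration of kinematics-constrained filtering of noisy skeleton
# data: fix each bone to a learned length and low-pass filter positions.
import numpy as np

# Skeleton as (parent, child) bone pairs over joint indices.
BONES = [(0, 1), (1, 2), (2, 3)]             # e.g. torso->shoulder->elbow->hand
BONE_LENGTHS = np.array([0.25, 0.30, 0.28])  # metres, learned from the user


def enforce_bone_lengths(joints):
    """Re-project each child joint onto a sphere of fixed radius
    around its parent, preserving the observed direction."""
    joints = joints.copy()
    for (p, c), length in zip(BONES, BONE_LENGTHS):
        direction = joints[c] - joints[p]
        norm = np.linalg.norm(direction)
        if norm > 1e-9:
            joints[c] = joints[p] + direction / norm * length
    return joints


def smooth(prev, current, alpha=0.6):
    """Exponential smoothing to suppress per-frame sensor noise."""
    return alpha * current + (1 - alpha) * prev


# Fake noisy tracker output: 4 joints in 3-D.
rng = np.random.default_rng(0)
pose = np.array([[0, 0, 0], [0, 0.25, 0], [0, 0.55, 0], [0, 0.8, 0]], float)
filtered = pose
for _ in range(10):
    noisy = pose + rng.normal(scale=0.02, size=pose.shape)
    filtered = smooth(filtered, enforce_bone_lengths(noisy))
print(np.round(filtered, 3))
```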


Intelligent Robots and Systems | 2013

A real time and robust facial expression recognition and imitation approach for affective human-robot interaction using Gabor filtering

Felipe Cid; José Augusto Prado; Pablo Bustos; Pedro Núñez

Facial expressions are a rich source of communicative information about human behavior and emotion. This paper presents a real-time system for the recognition and imitation of facial expressions in the context of affective Human Robot Interaction. The proposed method achieves fast and robust facial feature extraction by consecutively applying filters to the gradient image. An efficient Gabor filter is used, along with a set of morphological and convolutional filters, to reduce the noise and the light dependence of the image acquired by the robot. Then, a set of invariant edge-based features is extracted and used as input to a Dynamic Bayesian Network classifier in order to estimate the human emotion. The output of this classifier updates a geometric robotic head model, which is used as a bridge between human expressiveness and the robotic head. Experimental results demonstrate the accuracy and robustness of the proposed approach compared to similar systems.
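For readers unfamiliar with Gabor filtering, the sketch below builds a small orientation bank with OpenCV and binarizes the combined response, which is the general operation underlying a feature-extraction stage of this kind. The kernel parameters are arbitrary examples, not the values tuned in the paper.

```python
# Minimal Gabor filter bank with OpenCV, sketching the general operation
# behind Gabor-based edge feature extraction. Parameters are examples only.
import cv2
import numpy as np

image = np.zeros((128, 128), np.uint8)
cv2.rectangle(image, (40, 40), (90, 90), 255, 2)  # synthetic test edges

responses = []
for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(image, cv2.CV_32F, kernel))

# Combine orientations and binarize to get edge-like features;
# morphological opening removes small noise blobs.
combined = np.max(responses, axis=0)
combined = cv2.normalize(combined, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, features = cv2.threshold(combined, 128, 255, cv2.THRESH_BINARY)
features = cv2.morphologyEx(features, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
print("feature pixels:", int(np.count_nonzero(features)))
```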


Archive | 2013

Ursus: A Robotic Assistant for Training of Children with Motor Impairments

C. Suárez Mejías; C. Echevarría; Pedro Núñez; Luis J. Manso; Pablo Bustos; S. Leal; C. Parra

In this paper we present our results and work in progress on the use of a social robot as an assistant for training and rehabilitation of paediatric patients with motor disorders due to hemiplegic cerebral palsy and obstetric brachial plexus palsy. The abilities a robot must deploy in non-contact rehabilitation procedures include active perception, sensor fusion, navigation, human movement capture, voice synthesis and plan execution, among others. We propose an ambitious approach to non-contact rehabilitation therapies for paediatric patients with motor impairments, as well as an evaluation methodology to determine the effect of using social robots as therapy conductors. An experimental study was performed with six paediatric patients and its results are explained. Finally, new challenges to be addressed in future work are presented.


International Conference on Pattern Recognition | 2000

Motion estimation using the differential epipolar equation

Luis Baumela; Lourdes de Agapito; Pablo Bustos; Ian D. Reid

We consider the motion estimation problem in the case of very closely spaced views. We revisit the differential epipolar equation and provide an interpretation of it. On the basis of this interpretation we introduce a cost function for estimating the parameters of the differential epipolar equation, which enables us to compute the camera extrinsics and some of the intrinsics. In the synthetic tests performed we compare this continuous method with traditional discrete motion estimation and, contrary to previous findings by Vieville et al. (1996), we did not perceive any computational advantage for the continuous method.
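For context, a commonly used form of the differential epipolar constraint (standard notation from the continuous-motion literature; the paper's exact parametrization may differ) relates an image point and its velocity to the camera's linear and angular velocities:

```latex
% x: homogeneous image point; \dot{x}: its image velocity;
% v, \omega: camera linear and angular velocity; \widehat{\cdot}: the
% skew-symmetric (cross-product) matrix of a 3-vector.
\dot{\mathbf{x}}^{\top}\,\widehat{v}\,\mathbf{x}
  \;+\; \mathbf{x}^{\top}\,\widehat{\omega}\,\widehat{v}\,\mathbf{x} \;=\; 0
```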


Archive | 2008

Attentional Selection for Action in Mobile Robots

Pilar Bachiller; Pablo Bustos; Luis J. Manso

During the last few years attention has become an important issue in machine vision. Studies of attentional mechanisms in biological vision have inspired many computational models (Tsotsos et al., 1995; Itti & Koch, 2000; Frintrop et al., 2005; Torralba et al., 2006; Navalpakkam & Itti, 2006). Most of them follow the assumption of limited capacity associated with the role of attention in psychological proposals (Broadbent, 1958; LaBerge, 1995). These theories hypothesize that the visual system has limited processing capacity and that attention acts as a filter selecting the information that should be processed. This assumption has been criticized by many authors, who affirm that the processing capacity of the human perceptual system is enormous (Neumann et al., 1986; Allport, 1987). From this point of view, a stage selecting the information to be processed is not needed. Instead, these authors explain the role of attention from the perspective of selection for action (Allport, 1987). According to this conception, the function of attention is to avoid behavioural disorganization by selecting the appropriate information to drive task execution.

Such a notion of attention is very interesting in robotics, where the aim is to build autonomous robots that interact with complex environments while maintaining multiple behavioural objectives. Attentional selection for action can guide robot behaviours by focusing on relevant visual targets while avoiding distracting elements. Moreover, it can be conceived as a coordination mechanism, since stimuli selection allows serializing the actions of potentially many active behaviours.

To exploit these ideas, a visual attention system based on the selection-for-action theory has been developed. The system is a central component of a control architecture from which complex behaviours emerge according to different attention-action links. It has been designed and tested on a mobile robot endowed with a stereo vision head. Figure 1 shows the proposed control model. The sensory-motor abilities of the robot are divided into two groups that lead to two subsystems: the visual attention system, which includes the mechanisms that give rise to the selection of visual information, and the set of high-level behaviours that use visual information to accomplish their goals. Both subsystems are connected to the motor control system, which is in charge of effectively executing the motor responses generated by the other two subsystems. Each high-level behaviour modulates the visual system in a specific way in order to get the necessary visual information; the incoming flow of information, in turn, affects the high-level behaviours.
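To make the modulation idea concrete, here is a schematic sketch in which each active behaviour weights a set of feature maps and the attention system fixates the strongest combined response. It is entirely illustrative, not the chapter's implementation.

```python
# Schematic sketch of top-down modulation in a selection-for-action style:
# each active behaviour contributes weights over feature maps, and the
# winning location of the combined map becomes the next fixation target.
import numpy as np

H, W = 8, 8
feature_maps = {
    "red":    np.random.rand(H, W),   # stand-ins for real feature channels
    "motion": np.random.rand(H, W),
    "depth":  np.random.rand(H, W),
}

# Each behaviour modulates attention with its own feature weights.
behaviour_weights = {
    "follow_person": {"motion": 1.0, "depth": 0.5, "red": 0.0},
    "find_landmark": {"red": 1.0, "motion": 0.0, "depth": 0.2},
}


def select_fixation(active_behaviours):
    """Combine behaviour-weighted feature maps; return the argmax cell."""
    saliency = np.zeros((H, W))
    for behaviour in active_behaviours:
        for feature, weight in behaviour_weights[behaviour].items():
            saliency += weight * feature_maps[feature]
    return np.unravel_index(np.argmax(saliency), saliency.shape)


print("fixation:", select_fixation(["follow_person"]))
```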


IEEE International Conference on Autonomous Robot Systems and Competitions | 2015

Testing a Fully Autonomous Robotic Salesman in Real Scenarios

Adrián Romero-Garcés; Luis Vicente Calderita; Jesus Martínez-Gómez; Juan Pedro Bandera; Rebeca Marfil; Luis J. Manso; Antonio Bandera; Pablo Bustos

Over the past decades, the number of robots deployed in museums, trade shows and exhibitions has grown steadily, and this application domain has become a key research topic in the robotics community. New robots are therefore designed to interact with people in these domains using natural and intuitive channels. Visual perception and speech processing have to be considered for these robots, as they should be able to detect people in their environment, recognize their degree of accessibility and engage them in social conversations. They also need to navigate safely around dynamic, uncontrolled environments. They must be equipped with planning and learning components that allow them to adapt to different scenarios. Finally, they must attract people's attention and be kind and safe to interact with. In this paper, we describe our experience with Gualzru, a salesman robot endowed with the cognitive architecture RoboCog. This architecture synchronizes all the previous processes in a social robot, using a common inner representation as the core of the system. The robot has been tested in crowded, public, daily-life environments, where it interacted with people who had never seen it before and had no prior idea of its functionality. The experimental results presented in this paper demonstrate the capabilities and limitations of the robot in these real scenarios, and define future improvement actions.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

Improving a robotics framework with real-time and high-performance features

Jesús Martínez; Adrián Romero-Garcés; Luis J. Manso; Pablo Bustos

Middleware plays a key role in modern, object-oriented robotics frameworks, which aim at developing reusable, scalable and maintainable systems across different platforms and programming languages. However, complex robotics software falls into the category of distributed real-time systems, with stringent requirements in terms of throughput, latency and jitter. This paper introduces and analyzes a methodology to improve an existing robotics framework with real-time and high-performance features using a recently adopted standard: the Data Distribution Service (DDS).
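To illustrate two of the quality-of-service (QoS) policies that make DDS attractive for real-time data flows, the runnable toy below mimics KEEP_LAST history and deadline monitoring with a plain in-process topic. It is not a DDS binding; real implementations (e.g., RTI Connext, Fast DDS, Cyclone DDS) expose these policies natively.

```python
# Runnable toy illustrating two standard DDS QoS ideas, history depth and
# deadline monitoring, with a plain in-process pub/sub. Not a DDS binding;
# it only mimics the policy concepts.
import time
from collections import deque


class Topic:
    def __init__(self, history_depth=1, deadline_s=0.04):
        self.samples = deque(maxlen=history_depth)  # KEEP_LAST(depth)
        self.deadline = deadline_s
        self.last_write = None

    def write(self, sample):
        self.samples.append(sample)
        self.last_write = time.monotonic()

    def take(self):
        """Reader side: drain currently buffered samples."""
        out = list(self.samples)
        self.samples.clear()
        return out

    def deadline_missed(self):
        """True if the writer has gone quiet past its declared period."""
        return (self.last_write is not None and
                time.monotonic() - self.last_write > self.deadline)


scan = Topic(history_depth=1, deadline_s=0.04)   # only the newest scan matters
scan.write({"ranges": [1.2, 1.3, 1.1]})
scan.write({"ranges": [1.1, 1.2, 1.0]})          # older sample is dropped
print(scan.take())                                # -> just the newest
time.sleep(0.05)
print("deadline missed:", scan.deadline_missed())
```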


International Journal of Social Robotics | 2017

Evaluating the Child–Robot Interaction of the NAOTherapist Platform in Pediatric Rehabilitation

José Carlos Pulido; José Carlos González; Cristina Suárez-Mejías; Antonio Bandera; Pablo Bustos; Fernando Fernández

NAOTherapist is a cognitive robotic architecture whose main goal is to conduct non-contact upper-limb rehabilitation sessions autonomously with a social robot for patients with physical impairments. In order to achieve fluent interaction and active engagement with the patients, the system should be able to adapt itself to the perceived environment. In this paper, we describe the interaction mechanisms that are necessary to supervise the patients and help them carry out the prescribed exercises correctly. We also provide an evaluation focused on the child-robot interaction of the robotic platform with a large number of schoolchildren, and report the experience of a first contact with three pediatric rehabilitation patients. The results, obtained through questionnaires, video analysis and system logs, have proven consistent with the hypotheses proposed in this work.

Collaboration


Explore Pablo Bustos's collaborations.

Top Co-Authors

Luis J. Manso (University of Extremadura)

Pedro Núñez (University of Extremadura)

Pilar Bachiller (University of Extremadura)

D. Guinea (Spanish National Research Council)