Publications


Featured research published by Juan Pedro Bandera.


Pattern Recognition Letters | 2006

Mean shift based clustering of Hough domain for fast line segment detection

Antonio Bandera; J.M. Pérez-Lorenzo; Juan Pedro Bandera; F. Sandoval

This paper proposes a new algorithm for extracting line segments from edge images. The method performs two consecutive stages. In the first stage, the algorithm follows a random window randomized Hough transform (RWRHT) based approach, which provides a mechanism for finding more favorable line segments from a global point of view. In our case, the RWRHT-based approach is used to update an accurate Hough parameter space. In the second stage, the items of this parameter space are clustered, without supervision, into a set of classes using a variable-bandwidth mean shift algorithm. The cluster modes provided by this algorithm constitute a set of base lines. Thus, the clustering process allows the use of accurate Hough parameters while still detecting a single line when the pixels along it are not exactly collinear. Edge pixels lying on the lines grouped to generate each base line are projected onto that base line, and a fast, purely local grouping algorithm merges the points along each base line into line segments. We have performed several experiments to compare the performance of our method with that of other methods. Experimental results show that the proposed method performs very well in terms of both line segment detection ability and execution time.
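The clustering stage lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of mean shift over a set of (rho, theta) Hough votes: each vote is shifted to its local density mode, and the distinct modes become the base lines. A fixed bandwidth stands in for the paper's variable-bandwidth scheme; all names and values are hypothetical.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=5.0, tol=1e-3, max_iter=100):
    """Shift every (rho, theta) vote to its local density mode (flat kernel)."""
    modes = points.astype(float).copy()
    for i in range(len(modes)):
        x = modes[i]
        for _ in range(max_iter):
            # Flat kernel: average all votes within the bandwidth of x.
            d = np.linalg.norm(points - x, axis=1)
            neighbours = points[d < bandwidth]
            if len(neighbours) == 0:          # should not happen; be safe
                break
            x_new = neighbours.mean(axis=0)
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        modes[i] = x
    # Merge modes that converged to (nearly) the same point: the base lines.
    base_lines = []
    for m in modes:
        if all(np.linalg.norm(m - b) >= bandwidth / 2 for b in base_lines):
            base_lines.append(m)
    return np.array(base_lines)

# Hypothetical votes scattered around two true lines; expect two base lines.
rng = np.random.default_rng(0)
votes = np.vstack([rng.normal((100.0, 30.0), 1.5, (40, 2)),
                   rng.normal((220.0, 75.0), 1.5, (40, 2))])
print(mean_shift_modes(votes, bandwidth=8.0))
```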


International Journal of Humanoid Robotics | 2012

A Survey of Vision-Based Architectures for Robot Learning by Imitation

Juan Pedro Bandera; J.A. Rodríguez; L. Molina-Tanco; Antonio Bandera

Learning by imitation is a natural and intuitive way to teach social robots new behaviors. While these learning systems can use different sensory inputs, vision is often their main or even their only source of input data. However, while many vision-based robot learning by imitation (RLbI) architectures have been proposed in the last decade, they may be difficult to compare due to the absence of a common, structured description. The first contribution of this survey is the definition of a set of standard components that can be used to describe any RLbI architecture. Once these components have been defined, the second contribution of the survey is an analysis of how different vision-based architectures implement and connect them. This bottom-up, structural analysis makes it possible to compare different solutions, highlighting their main advantages and drawbacks, from a more flexible perspective than the comparison of monolithic systems.


Pattern Recognition Letters | 2009

Fast gesture recognition based on a two-level representation

Juan Pedro Bandera; Rebeca Marfil; Antonio Bandera; J.A. Rodríguez; L. Molina-Tanco; F. Sandoval

Towards developing an interface for human-robot interaction, this paper proposes a two-level approach to recognise gestures composed of trajectories followed by different body parts. At the first level, individual trajectories are described by a set of key-points. These points are chosen as the corners of the curvature function associated with the trajectory, which is estimated using an adaptive, non-iterative scheme. This adaptive representation removes noise while preserving curvature detail at different scales. At the second level, gestures are characterised through global properties of the trajectories that compose them. Gesture recognition is performed using a confidence value that integrates both levels. Experimental results show that the proposed method performs well in terms of computational cost, memory consumption and gesture recognition ability.
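As an illustration of the first level, the sketch below extracts key-points as local maxima of a discrete curvature function computed along a 2D trajectory. This is an assumption-laden toy using a plain gradient-based curvature estimate, not the paper's adaptive, non-iterative scheme; the trajectory and threshold are hypothetical.

```python
import numpy as np

def curvature(traj):
    """Discrete curvature of a 2D trajectory of shape (N, 2)."""
    dx, dy = np.gradient(traj[:, 0]), np.gradient(traj[:, 1])
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Small epsilon guards against division by zero on degenerate samples.
    return np.abs(dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)

def key_points(traj, thresh=0.05):
    """Indices of local curvature maxima above a threshold (the 'corners')."""
    k = curvature(traj)
    return [i for i in range(1, len(k) - 1)
            if k[i] > thresh and k[i] >= k[i - 1] and k[i] >= k[i + 1]]

# Hypothetical L-shaped trajectory: key-points expected near the corner.
t = np.vstack([np.column_stack([np.linspace(0, 1, 50), np.zeros(50)]),
               np.column_stack([np.ones(50), np.linspace(0, 1, 50)])])
print(key_points(t))
```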


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2005

Real-time human motion analysis for human-robot interaction

L. Molina-Tanco; Juan Pedro Bandera; Rebeca Marfil; F. Sandoval

This paper introduces a novel real-time human motion analysis system based on hierarchical tracking and inverse kinematics. This work constitutes a first step towards our goal of implementing a mechanism of human-machine interaction that allows a robot to provide feedback to a teacher in an imitation learning framework. In particular, we have developed a computer-vision-based upper-body motion analysis system that works without special devices or markers. Since such a system is unstable and can only acquire partial information because of self-occlusions and depth ambiguity, we have employed a model-based pose estimation method based on inverse kinematics. The resulting system can estimate upper-body human postures from limited perceptual cues, such as the centroid coordinates and disparity of the head and hands.
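To make the inverse-kinematics idea concrete, here is a toy analytic solver for a two-link planar arm: given only a hand position (the kind of limited cue the vision front-end provides), it recovers the unobserved shoulder and elbow angles. It is a sketch under simplifying assumptions, not the paper's upper-body method; the link lengths are hypothetical.

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Shoulder and elbow angles (radians) placing the hand at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp: target may be unreachable
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Hypothetical hand centroid (metres) from the tracker.
print(two_link_ik(0.4, 0.2))
```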


Sensors | 2013

Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications

Luis Vicente Calderita; Juan Pedro Bandera; Pablo Bustos; Andreas Skiadopoulos

Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue by using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost.
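A minimal sketch, under assumptions, of the kind of kinematic filtering described: each child joint is re-projected onto a sphere of fixed, learned bone length around its parent, which rules out impossible poses and damps depth noise. The skeleton, bone lengths and sample frame below are hypothetical, not the paper's model.

```python
import numpy as np

BONES = [("shoulder", "elbow"), ("elbow", "hand")]        # parent -> child
LENGTHS = {("shoulder", "elbow"): 0.30, ("elbow", "hand"): 0.26}  # metres

def enforce_bone_lengths(joints):
    """Snap each child joint onto a sphere of fixed radius around its parent."""
    fixed = dict(joints)
    for parent, child in BONES:
        v = fixed[child] - fixed[parent]
        n = np.linalg.norm(v)
        if n > 1e-9:
            fixed[child] = fixed[parent] + v / n * LENGTHS[(parent, child)]
    return fixed

# Hypothetical noisy frame from the OpenNI-style tracker.
frame = {"shoulder": np.array([0.0, 1.4, 2.0]),
         "elbow":    np.array([0.1, 1.1, 2.1]),
         "hand":     np.array([0.4, 1.0, 1.9])}
print(enforce_bone_lengths(frame))
```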


Pattern Recognition Letters | 2013

Part-based object detection into a hierarchy of image segmentations combining color and topology

Esther Antúnez; Rebeca Marfil; Juan Pedro Bandera; Antonio Bandera

Object detection is one of the key components of computer vision systems. Current research on this topic has shifted from holistic approaches to representations of individual object parts linked by structural information. Along this line of research, this paper presents a novel part-based approach for automatic object detection in 2D images. The approach encodes the visual structure of the object to be detected and of the image by a 2D combinatorial map and a combinatorial pyramid, respectively. Within this framework, we propose to perform the search for the object as an error-tolerant submap isomorphism conducted at the different layers of the pyramid. The approach has been applied to the detection of visual landmarks for mobile robot self-localization. Experimental results show the good performance and robustness of the approach in the presence of partial occlusions, uneven illumination and 3D rotations.
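The matching idea can be illustrated, with heavy simplification, as error-tolerant subgraph search: object parts carry colour labels, edges encode adjacency (the topology), and an assignment into the image graph is accepted if it commits at most a bounded number of mismatches. The paper operates on combinatorial maps and pyramids; this brute-force node-graph version and its toy data are only a sketch of the matching principle.

```python
from itertools import permutations

def match(obj_nodes, obj_edges, img_nodes, img_edges, max_errors=1):
    """Assignment object->image node with at most max_errors mismatches."""
    img_ids = list(img_nodes)
    for perm in permutations(img_ids, len(obj_nodes)):
        assign = dict(zip(obj_nodes, perm))
        # Colour (attribute) mismatches plus missing adjacency (topology).
        errors = sum(obj_nodes[o] != img_nodes[assign[o]] for o in obj_nodes)
        errors += sum((assign[a], assign[b]) not in img_edges and
                      (assign[b], assign[a]) not in img_edges
                      for a, b in obj_edges)
        if errors <= max_errors:
            return assign
    return None

# Hypothetical landmark: two red parts chained to a blue part.
obj = {"p1": "red", "p2": "red", "p3": "blue"}
obj_e = [("p1", "p2"), ("p2", "p3")]
img = {"a": "red", "b": "red", "c": "blue", "d": "green"}
img_e = {("a", "b"), ("b", "c"), ("c", "d")}
print(match(obj, obj_e, img, img_e))
```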


Sensors | 2014

Audio-visual perception system for a humanoid robotic head

Raquel Viciana-Abad; Rebeca Marfil; José Manuel Pérez-Lorenzo; Juan Pedro Bandera; Adrián Romero-Garcés; Pedro Reche-Lopez

One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may run into difficulties when constrained to the sensors with which a robot can be equipped. Moreover, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most of the tests conducted have been in controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared with unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework.
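The fusion step can be sketched as follows, assuming conditionally independent sensors: audio and visual likelihoods over a discretised azimuth are multiplied with a prior, and the posterior mode gives the estimated speaker direction. The Gaussian sensor models and their parameters below are hypothetical stand-ins, not the paper's models.

```python
import numpy as np

azimuth = np.linspace(-90, 90, 181)       # degrees, 1-degree bins

def gaussian_likelihood(measured, sigma):
    """Normalised Gaussian likelihood over the azimuth grid."""
    l = np.exp(-0.5 * ((azimuth - measured) / sigma) ** 2)
    return l / l.sum()

prior = np.full_like(azimuth, 1.0 / len(azimuth))           # uniform prior
audio = gaussian_likelihood(measured=22.0, sigma=15.0)      # coarse, omnidirectional
vision = gaussian_likelihood(measured=18.0, sigma=4.0)      # precise, narrow field of view

# Bayes fusion under conditional independence: posterior ~ prior * L_a * L_v.
posterior = prior * audio * vision
posterior /= posterior.sum()
print("fused speaker azimuth:", azimuth[np.argmax(posterior)])
```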


IEEE International Conference on Autonomous Robot Systems and Competitions | 2015

Testing a Fully Autonomous Robotic Salesman in Real Scenarios

Adrián Romero-Garcés; Luis Vicente Calderita; Jesus Martínez-Gómez; Juan Pedro Bandera; Rebeca Marfil; Luis J. Manso; Antonio Bandera; Pablo Bustos

Over the past decades, the number of robots deployed in museums, trade shows and exhibitions has grown steadily, and this application domain has become a key research topic in the robotics community. New robots are therefore designed to interact with people in these domains using natural and intuitive channels. Visual perception and speech processing have to be considered for these robots, as they should be able to detect people in their environment, recognize their degree of accessibility and engage them in social conversations. They also need to navigate safely around dynamic, uncontrolled environments. They must be equipped with planning and learning components that allow them to adapt to different scenarios. Finally, they must attract people's attention and be kind and safe to interact with. In this paper, we describe our experience with Gualzru, a salesman robot endowed with the cognitive architecture RoboCog. This architecture synchronizes all the previous processes in a social robot, using a common inner representation as the core of the system. The robot has been tested in crowded, public, daily-life environments, where it interacted with people who had never seen it before and had no idea of its functionality. The experimental results presented in this paper demonstrate the capabilities and limitations of the robot in these real scenarios, and define future improvement actions.


Archive | 2007

Robot Learning by Active Imitation

Juan Pedro Bandera; Rebeca Marfil; L. Molina-Tanco; Antonio Bandera; F. Sandoval

A key area of robotics research is concerned with developing social robots for assisting humans in everyday tasks. Many of the motion skills required by a robot to perform such tasks can be pre-programmed. However, it is normally agreed that a truly useful robotic companion should be equipped with some learning capabilities, in order to adapt to unknown environments or, what is more difficult, learn to perform new tasks. Many learning algorithms have been proposed for robotics applications, but they are often task-specific, and only work if the learning task is predefined in a carefully designed representation and a set of pre-collected training samples is available. Besides, the distributions of training and test samples have to be identical, and the world model must be totally or partially given (Tan et al., 2005). In a human world, these conditions are commonly impossible to achieve. Therefore, these learning algorithms involve a process of optimization in a large search space in order to find the behaviour that best fits the observed samples, as well as some prior knowledge. If the task becomes more complicated or multiple tasks are involved, the search process is often incapable of providing real-time responses.

Learning by observation and imitation constitute two important mechanisms for learning behaviours socially in humans and other animal species, e.g. dolphins, chimpanzees and other apes (Dautenhahn & Nehaniv, 2002). Inspired by nature, and in order to speed up the learning process in complex motor systems, Stefan Schaal appealed for a pragmatic view of imitation as a tool to improve the learning process (Schaal, 1999). Current work has demonstrated that learning by observation and imitation is a powerful tool for acquiring new abilities: it encourages social interaction and cultural transfer, and it permits robots to quickly learn new skills and tasks from natural human instructions and few demonstrations (Alissandrakis et al., 2002, Breazeal et al., 2005, Demiris & Hayes, 2002, Sauser & Billard, 2005).

In robotics, the ability to imitate relies upon the robot having many perceptual, cognitive and motor capabilities. The impressive advance of research and development in robotics over the past few years has led to the development of robots of this type, e.g. Sarcos (Ijspeert et al., 2002) or Kenta (Inaba et al., 2003). However, even if a robot has the necessary skills to imitate human movement, most published work focuses on specific components of an imitation system (Lopes & Santos-Victor, 2005). The development of a complete imitation architecture is difficult. Some of the main challenges are: how to identify which features of an action are important; how to reproduce such an action; and how to evaluate the performance of the imitation process (Breazeal & Scassellati, 2002).


IEEE-RAS International Conference on Humanoid Robots | 2006

Robot learning of upper-body human motion by active imitation

Juan Pedro Bandera; Rebeca Marfil; L. Molina-Tanco; J.A. Rodríguez; Antonio Bandera; F. Sandoval

This paper presents a general architecture that allows a humanoid robot to imitate the upper-body movements of a human demonstrator. The architecture integrates a mechanism to memorize novel behaviours executed by a human demonstrator with a module to recognize and generate the robot's own interpretation of already observed behaviours. Our imitator includes three biologically plausible components: i) an attention mechanism that autonomously extracts relevant information from the visual input; ii) a supra-modal representation of the motion of observed body parts that maps between the visual and motor domains; and iii) an active imitation module that involves the motor systems in the behaviour recognition process. Experimental results with a real humanoid robot demonstrate the ability of the proposed architecture to acquire novel behaviours and to recognize and reproduce previously memorized ones.

Collaboration


Dive into Juan Pedro Bandera's collaboration.

Top Co-Authors

Pablo Bustos (University of Extremadura)

Luis J. Manso (University of Extremadura)