Publication


Featured research published by Luis J. Manso.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

RoboComp: A Tool-Based Robotics Framework

Luis J. Manso; Pilar Bachiller; Pablo Bustos; Pedro Núñez; Ramón Cintas; Luis Vicente Calderita

This paper presents RoboComp, an open-source component-oriented robotics framework. Ease of use and low development effort have proven to be two of the key issues to take into account when building frameworks. Because development tools play a crucial role in both, this paper describes in depth the tools that make RoboComp more than just a middleware. To give an overview of the developer experience, examples are provided throughout the text. The framework is also compared to the most relevant open-source projects with similar goals, specifying its weaknesses and strengths.


Archive | 2013

Ursus: A Robotic Assistant for Training of Children with Motor Impairments

C. Suárez Mejías; C. Echevarría; Pedro Núñez; Luis J. Manso; Pablo Bustos; S. Leal; C. Parra

In this paper we present our results and work in progress on the use of a social robot as an assistant for the training and rehabilitation of paediatric patients with motor disorders due to Hemiplegic Cerebral Palsy and Obstetric Brachial Plexus. The abilities a robot needs for rehabilitation procedures without therapeutic contact include active perception, sensor fusion, navigation, human movement capture, voice synthesis and plan execution, among others. We propose an ambitious approach to non-contact rehabilitation therapies with paediatric patients who present motor impairments, as well as an evaluation methodology to determine the effect of using social robots as therapy conductors. An experimental study was performed with six paediatric patients and its results are explained. Finally, challenges to be addressed in future work are outlined.


Archive | 2008

Attentional Selection for Action in Mobile Robots

Pilar Bachiller; Pablo Bustos; Luis J. Manso

During the last few years, attention has become an important issue in machine vision. Studies of attentional mechanisms in biological vision have inspired many computational models (Tsotsos et al., 1995; Itti & Koch, 2000; Frintrop et al., 2005; Torralba et al., 2006; Navalpakkam & Itti, 2006). Most of them follow the limited-capacity assumption associated with the role of attention in psychological proposals (Broadbent, 1958; LaBerge, 1995). These theories hypothesize that the visual system has limited processing capacity and that attention acts as a filter, selecting the information that should be processed. This assumption has been criticized by many authors who affirm that the processing capacity of the human perceptual system is enormous (Neumann et al., 1986; Allport, 1987). From this point of view, a stage selecting the information to be processed is not needed. Instead, they explain the role of attention from the perspective of selection for action (Allport, 1987). According to this conception, the function of attention is to avoid behavioural disorganization by selecting the appropriate information to drive task execution.

Such a notion of attention is very interesting in robotics, where the aim is to build autonomous robots that interact with complex environments while maintaining multiple behavioural objectives. Attentional selection for action can guide robot behaviours by focusing on relevant visual targets while avoiding distracting elements. Moreover, it can be conceived as a coordination mechanism, since stimulus selection allows serializing the actions of, potentially, multiple active behaviours.

To exploit these ideas, a visual attention system based on the selection-for-action theory has been developed. The system is a central component of a control architecture from which complex behaviours emerge according to different attention-action links. It has been designed and tested on a mobile robot endowed with a stereo vision head. Figure 1 shows the proposed control model. The sensory-motor abilities of the robot are divided into two groups that lead to two subsystems: the visual attention system, which includes the mechanisms that give rise to the selection of visual information, and the set of high-level behaviours that use visual information to accomplish their goals. Both subsystems are connected to the motor control system, which is in charge of effectively executing the motor responses generated by the other two subsystems. Each high-level behaviour modulates the visual system in a specific way in order to get the necessary visual information. The incoming flow of information affects high-level…
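
The selection-for-action idea lends itself to a compact illustration. The sketch below is hypothetical (the names, channels and weights are made up, and it is not the chapter's implementation): each high-level behaviour modulates the attention system with top-down weights over feature channels, and the attended target is the stimulus that maximises the task-weighted salience.

    # Hypothetical sketch of attentional selection for action: the active
    # behaviour modulates feature weights; the winning stimulus drives action.
    from dataclasses import dataclass

    @dataclass
    class Stimulus:
        name: str
        features: dict  # feature channel -> bottom-up salience in [0, 1]

    def select_target(stimuli, modulation):
        # Task-weighted salience: bottom-up salience times the top-down
        # weight the current behaviour assigns to each channel.
        def task_salience(s):
            return sum(modulation.get(f, 0.0) * v for f, v in s.features.items())
        return max(stimuli, key=task_salience)

    stimuli = [Stimulus("doorway", {"vertical_edges": 0.9, "motion": 0.1}),
               Stimulus("person", {"vertical_edges": 0.2, "motion": 0.8})]

    # A door-crossing behaviour emphasises vertical edges; a person-following
    # behaviour would instead weight motion and skin colour.
    door_crossing = {"vertical_edges": 1.0, "motion": 0.1}
    print(select_target(stimuli, door_crossing).name)  # -> doorway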


IEEE International Conference on Autonomous Robot Systems and Competitions | 2015

Testing a Fully Autonomous Robotic Salesman in Real Scenarios

Adrián Romero-Garcés; Luis Vicente Calderita; Jesus Martínez-Gómez; Juan Pedro Bandera; Rebeca Marfil; Luis J. Manso; Antonio Bandera; Pablo Bustos

Over the past decades, the number of robots deployed in museums, trade shows and exhibitions has grown steadily, and this application domain has become a key research topic in the robotics community. New robots are therefore designed to interact with people in these domains using natural and intuitive channels. Visual perception and speech processing have to be considered for these robots, as they should be able to detect people in their environment, recognize their degree of accessibility and engage them in social conversations. They also need to navigate safely around dynamic, uncontrolled environments, and they must be equipped with planning and learning components that allow them to adapt to different scenarios. Finally, they must attract people's attention and be kind and safe to interact with. In this paper, we describe our experience with Gualzru, a salesman robot endowed with the cognitive architecture RoboCog. This architecture synchronizes all the previous processes in a social robot, using a common inner representation as the core of the system. The robot has been tested in crowded, public, daily-life environments, where it interacted with people who had never seen it before and had no idea of its functionality. The experimental results presented in this paper demonstrate the capabilities of the robot and its limitations in these real scenarios, and define future improvement actions.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

Improving a robotics framework with real-time and high-performance features

Jesús Martínez; Adrián Romero-Garcés; Luis J. Manso; Pablo Bustos

Middleware plays a key role in modern, object-oriented robotics frameworks, which aim at developing reusable, scalable and maintainable systems using different platforms and programming languages. However, complex robotics software falls into the category of distributed real-time systems, with stringent requirements in terms of throughput, latency and jitter. This paper introduces and analyzes a methodology to improve an existing robotics framework with real-time and high-performance features using a recently adopted standard: the Data Distribution Service (DDS).
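
As a rough illustration of the publish/subscribe pattern that DDS provides (this is not the DDS API nor the framework integration described in the paper, just a minimal sketch with made-up names), samples are delivered per topic and stale data can be dropped against a deadline-style QoS budget rather than queued:

    # Minimal topic-based publish/subscribe sketch in the spirit of DDS.
    # Not a real DDS binding; QoS is reduced here to a single deadline check.
    import time
    from collections import defaultdict

    class Bus:
        def __init__(self):
            self.subscribers = defaultdict(list)  # topic -> callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, sample, deadline_ms=None):
            sent = time.monotonic()
            for callback in self.subscribers[topic]:
                age_ms = (time.monotonic() - sent) * 1000.0
                # Drop samples older than the deadline instead of queueing
                # them, favouring low latency and jitter over completeness.
                if deadline_ms is None or age_ms <= deadline_ms:
                    callback(sample)

    bus = Bus()
    bus.subscribe("laser_scan", lambda scan: print(len(scan), "ranges"))
    bus.publish("laser_scan", [1.2] * 360, deadline_ms=10)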


Robot | 2016

A Navigation Agent for Mobile Manipulators

Mario Haut; Luis J. Manso; Daniel Gallego; Mercedes E. Paoletti; Pablo Bustos; Antonio Bandera; Adrián Romero-Garcés

Robot navigation and manipulation in partially known indoor environments is usually organized as two complementary activities: local displacement control and global path planning. Both activities have to be connected across different space and time scales in order to obtain a smooth and responsive system that follows the path and adapts to the unforeseen situations imposed by the real world. There is no clear consensus on how to do this, and some important problems are still open. In this paper we present the first steps towards a new navigation agent that controls both the robot's base and its arm. We address several of these problems in the design of this agent, including robust localization integrating several information sources, incremental learning of free navigation and manipulation space, visual servoing of the hand in camera space to reduce backlash and calibration errors, and an internal path representation as an elastic band that is projected onto the real world through sensor measurements. A set of experiments with the robot Ursus in real and simulated scenarios is presented, showing encouraging results.
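
The elastic-band path representation mentioned above is a classic technique (in the spirit of Quinlan and Khatib's elastic bands). A minimal, hypothetical relaxation sketch, with illustrative gains and a simple point-obstacle model rather than the paper's sensor projection, could look like this:

    # Hypothetical elastic-band relaxation: interior waypoints contract
    # towards their neighbours and are repelled by nearby obstacles.
    import numpy as np

    def relax_band(path, obstacles, k_int=0.5, k_ext=1.0, influence=1.0, iters=50):
        # path: (N, 2) waypoints; obstacles: (M, 2) obstacle points.
        band = path.astype(float).copy()
        for _ in range(iters):
            for i in range(1, len(band) - 1):
                internal = k_int * (band[i - 1] + band[i + 1] - 2.0 * band[i])
                external = np.zeros(2)
                for obstacle in obstacles:
                    delta = band[i] - obstacle
                    dist = np.linalg.norm(delta)
                    if 1e-9 < dist < influence:
                        external += k_ext * (influence - dist) * delta / dist
                band[i] += 0.2 * (internal + external)  # small step for stability
        return band

    path = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
    obstacles = np.array([[1.5, 0.2]])
    print(relax_band(path, obstacles))  # waypoints bend away from the obstacle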


International Journal of Advanced Robotic Systems | 2015

A Perception-aware Architecture for Autonomous Robots

Luis J. Manso; Pablo Bustos; Pilar Bachiller; Pedro Núñez

Service robots are required to operate in indoor environments to help humans in their daily lives. To achieve the tasks that they might be assigned, the robots must be able to autonomously model an...


Cognitive Processing | 2018

Integrating planning perception and action for informed object search

Luis J. Manso; Marco Antonio Gutiérrez; Pablo Bustos; Pilar Bachiller

This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because doing so is a step towards fulfilling its mission. Upon failure to guess the right container, the robot continues making guesses until the object is found. Guesses are based on the semantic distance between the object to find and the description of the types of objects found in each container. The paper provides quantitative results comparing the efficiency of the proposed method against two baseline approaches.
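
A minimal sketch of the semantic-distance idea (the embedding table and container names below are toy stand-ins, not the authors' trained models): score each container by how semantically close its observed contents are to the target object, and guess in decreasing order of score.

    # Hypothetical sketch: rank containers by the semantic closeness between
    # the target object and each container's previously observed contents.
    import numpy as np

    embedding = {  # toy vectors; a real system would use trained embeddings
        "mug":      np.array([0.9, 0.1, 0.0]),
        "plate":    np.array([0.8, 0.2, 0.1]),
        "book":     np.array([0.1, 0.9, 0.2]),
        "notebook": np.array([0.2, 0.8, 0.3]),
    }

    def similarity(a, b):
        # Cosine similarity between the two object-type embeddings.
        va, vb = embedding[a], embedding[b]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

    def rank_containers(target, containers):
        # containers: name -> list of object types seen inside. Score each
        # container by the mean similarity of its contents to the target.
        scores = {name: np.mean([similarity(target, o) for o in contents])
                  for name, contents in containers.items() if contents}
        return sorted(scores, key=scores.get, reverse=True)

    containers = {"kitchen_shelf": ["mug", "plate"], "desk": ["book", "notebook"]}
    print(rank_containers("notebook", containers))  # -> ['desk', 'kitchen_shelf']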


Sensors | 2017

A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots

Marco Antonio Gutiérrez; Luis J. Manso; Harit Pandya; Pedro Núñez

Object detection and classification have countless applications in human–robot interaction systems. They are necessary skills for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data, due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects while working with relatively low-resolution sensor data. A passive learning sensor architecture has been designed to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in improving the performance of the sensor under low resolution and high light variation using a combination of image labeling and word semantics. Tests performed on each stage of the architecture compare this solution with current labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.
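
To make the vision-plus-semantics combination concrete, here is a hypothetical re-scoring sketch (the weights, labels and context function are made up for illustration and are not the paper's architecture): low-confidence detector labels are blended with a semantic plausibility score derived from the scene context.

    # Hypothetical fusion of visual confidence and semantic plausibility.
    def refine_labels(candidates, context_score, alpha=0.6):
        # candidates: list of (label, visual_confidence) from a detector.
        # context_score(label) in [0, 1] rates how plausible the label is
        # given the objects already observed in the scene.
        rescored = [(label, alpha * conf + (1.0 - alpha) * context_score(label))
                    for label, conf in candidates]
        return max(rescored, key=lambda pair: pair[1])

    # A blurry detection torn between "cup" and "trophy" in a kitchen scene:
    kitchen_context = {"cup": 0.9, "trophy": 0.1}
    best = refine_labels([("cup", 0.40), ("trophy", 0.45)],
                         lambda label: kitchen_context.get(label, 0.0))
    print(best[0])  # -> cup: scene semantics override the raw confidence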


Robot | 2017

CLARC: A Cognitive Robot for Helping Geriatric Doctors in Real Scenarios

Dimitri Voilmy; Cristina Suárez; Adrián Romero-Garcés; Cristian Reuther; José Carlos Pulido; Rebeca Marfil; Luis J. Manso; Karine Lan Hing Ting; Ana Iglesias; José Carlos González; Javier García; Ángel García-Olaya; Raquel Fuentetaja; Fernando Fernández; Alvaro Dueñas; Luis Vicente Calderita; Pablo Bustos; T. Barile; Juan Pedro Bandera; Antonio Bandera

Comprehensive Geriatric Assessment (CGA) is an integrated clinical process for evaluating the frailty of elderly persons in order to create therapy plans that improve their quality of life. To robotize these tests, we are designing and developing CLARC, a mobile robot able to help the physician capture and manage data during CGA procedures, mainly by autonomously conducting a set of predefined evaluation tests. Built around a shared internal representation of the outer world, the architecture is composed of software modules able to plan and generate a stream of actions, to execute actions emanating from the representation, or to update it by adding or removing items at different abstraction levels. Percepts, actions and intentions coming from all software modules are grounded within this unique representation. This allows the robot to react to unexpected events and to modify its course of action according to the dynamics of a scenario built around the interaction with the patient. The paper describes the architecture of the system as well as preliminary user studies and an evaluation to gather new user requirements.
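
The shared inner representation can be pictured as a small blackboard-style store that modules read, write and watch. The sketch below is purely illustrative (hypothetical names and API, not the CLARC/RoboCog implementation):

    # Illustrative blackboard-style world model: modules update symbols and
    # subscribers react to changes, grounding percepts, actions and intentions.
    class WorldModel:
        def __init__(self):
            self.items = {}     # symbol id -> attribute dict
            self.watchers = []  # callbacks fired on every update

        def watch(self, callback):
            self.watchers.append(callback)

        def update(self, symbol, **attributes):
            self.items.setdefault(symbol, {}).update(attributes)
            for callback in self.watchers:
                callback(symbol, self.items[symbol])

        def remove(self, symbol):
            self.items.pop(symbol, None)

    model = WorldModel()
    # e.g. a planner replans when perception grounds a new percept:
    model.watch(lambda symbol, attrs: print("changed:", symbol, attrs))
    model.update("patient_1", pose=(1.0, 2.0), current_test="get_up_and_go")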

Collaboration


Luis J. Manso's top co-authors include:

Pablo Bustos (University of Extremadura)
Pedro Núñez (University of Extremadura)
Pilar Bachiller (University of Extremadura)