Luis Vicente Calderita
University of Extremadura
Publications
Featured research published by Luis Vicente Calderita.
Simulation, Modeling, and Programming for Autonomous Robots | 2010
Luis J. Manso; Pilar Bachiller; Pablo Bustos; Pedro Núñez; Ramón Cintas; Luis Vicente Calderita
This paper presents RoboComp, an open-source component-oriented robotics framework. Ease of use and low development effort have proven to be two of the key issues to take into account when building frameworks. Given the crucial role that development tools play in these issues, this paper describes in depth the tools that make RoboComp more than just a middleware. To give a sense of the developer experience, examples are provided throughout the text. RoboComp is also compared to the most relevant open-source projects with similar goals, specifying its weaknesses and strengths.
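The component-oriented approach described above can be pictured with a minimal sketch. The class names and the compute() entry point below are hypothetical illustrations of the general pattern, not RoboComp's actual API or tooling.

```python
# Illustrative sketch of a component-oriented design in the spirit described
# above. Names (Component, CameraComponent) are hypothetical and are NOT
# RoboComp's actual API.

from abc import ABC, abstractmethod


class Component(ABC):
    """A self-contained unit that exposes an interface to other components."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def compute(self) -> None:
        """Periodic work; called by the framework's main loop."""


class CameraComponent(Component):
    def __init__(self):
        super().__init__("camera")
        self.last_frame = None

    def compute(self) -> None:
        # A real component would grab a frame from a sensor driver and
        # publish it through the middleware to subscribed components.
        self.last_frame = b"raw image bytes"


if __name__ == "__main__":
    cam = CameraComponent()
    cam.compute()
    print(cam.name, "produced", len(cam.last_frame or b""), "bytes")
```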
Sensors | 2013
Luis Vicente Calderita; Juan Pedro Bandera; Pablo Bustos; Andreas Skiadopoulos
Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematics constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performers body. The system is composed by a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost.
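As a rough illustration of the idea of a kinematics-based filter (fixed limb lengths plus temporal smoothing of noisy joints), the sketch below assumes 3-D joint positions coming from a tracker; it is not the model-based pose generator used in the paper, and the skeleton, bone lengths and smoothing factor are made up.

```python
# Minimal sketch of a kinematics-based filter for noisy skeleton joints.
# Only an illustration of the general idea, not the paper's actual filter.

import numpy as np

# Hypothetical skeleton: (parent, child) joint indices and limb length in metres.
BONES = [(0, 1, 0.25), (1, 2, 0.30)]   # e.g. shoulder -> elbow -> wrist


def enforce_bone_lengths(joints: np.ndarray) -> np.ndarray:
    """Project each child joint onto a sphere of fixed radius around its parent."""
    joints = joints.copy()
    for parent, child, length in BONES:
        v = joints[child] - joints[parent]
        norm = np.linalg.norm(v)
        if norm > 1e-6:
            joints[child] = joints[parent] + v / norm * length
    return joints


def smooth(prev: np.ndarray, current: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Simple exponential smoothing to attenuate sensor noise."""
    return alpha * current + (1.0 - alpha) * prev


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    filtered = np.zeros((3, 3))
    for _ in range(10):
        raw = rng.normal(scale=0.02, size=(3, 3)) + np.array(
            [[0.0, 0.0, 0.0], [0.26, 0.0, 0.0], [0.55, 0.0, 0.0]]
        )
        filtered = enforce_bone_lengths(smooth(filtered, raw))
    print(filtered.round(3))
```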
IEEE International Conference on Autonomous Robot Systems and Competitions | 2015
Adrián Romero-Garcés; Luis Vicente Calderita; Jesus Martínez-Gómez; Juan Pedro Bandera; Rebeca Marfil; Luis J. Manso; Antonio Bandera; Pablo Bustos
Over the past decades, the number of robots deployed in museums, trade shows and exhibitions has grown steadily, and this application domain has become a key research topic in the robotics community. New robots are therefore designed to interact with people in these settings through natural and intuitive channels. Visual perception and speech processing have to be considered for these robots, as they should be able to detect people in their environment, recognize their degree of accessibility and engage them in social conversation. They also need to navigate safely around dynamic, uncontrolled environments. They must be equipped with planning and learning components that allow them to adapt to different scenarios. Finally, they must attract people's attention and be friendly and safe to interact with. In this paper, we describe our experience with Gualzru, a salesman robot endowed with the cognitive architecture RoboCog. This architecture synchronizes all of the above processes in a social robot, using a common inner representation as the core of the system. The robot has been tested in crowded, public, daily-life environments, where it interacted with people who had never seen it before and had no idea of its functionality. The experimental results presented in this paper demonstrate the capabilities of the robot and its limitations in these real scenarios, and define future improvement actions.
Robot | 2017
Dimitri Voilmy; Cristina Suárez; Adrián Romero-Garcés; Cristian Reuther; José Carlos Pulido; Rebeca Marfil; Luis J. Manso; Karine Lan Hing Ting; Ana Iglesias; José Carlos González; Javier García; Ángel García-Olaya; Raquel Fuentetaja; Fernando Fernández; Alvaro Dueñas; Luis Vicente Calderita; Pablo Bustos; T. Barile; Juan Pedro Bandera; Antonio Bandera
Comprehensive Geriatric Assessment (CGA) is an integrated clinical process for evaluating the frailty of elderly persons in order to create therapy plans that improve their quality of life. To robotize these tests, we are designing and developing CLARC, a mobile robot able to help physicians capture and manage data during CGA procedures, mainly by autonomously conducting a set of predefined evaluation tests. Built around a shared internal representation of the outer world, the architecture is composed of software modules able to plan and generate a stream of actions, to execute actions derived from the representation, or to update it by adding or removing items at different abstraction levels. Percepts, actions and intentions coming from all software modules are grounded within this single representation. This allows the robot to react to unexpected events and to modify its course of action according to the dynamics of a scenario built around the interaction with the patient. The paper describes the architecture of the system, as well as preliminary user studies and an evaluation conducted to gather new user requirements.
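The "shared internal representation" pattern the abstract describes can be pictured as a blackboard that perception and planning modules read from and write to. The names below (SharedRepresentation, perception_module, planner_module) are hypothetical and do not reflect CLARC's actual implementation.

```python
# Rough sketch of modules grounding percepts and actions in one shared
# structure. Hypothetical names; not CLARC's real code.

from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class SharedRepresentation:
    """A blackboard-like store that all modules read from and write to."""
    items: Dict[str, Any] = field(default_factory=dict)

    def update(self, key: str, value: Any) -> None:
        self.items[key] = value

    def remove(self, key: str) -> None:
        self.items.pop(key, None)


def perception_module(world: SharedRepresentation) -> None:
    # A perception module adds detected items (e.g. the patient's pose).
    world.update("patient.pose", "sitting")


def planner_module(world: SharedRepresentation) -> List[str]:
    # A planner reacts to whatever is currently in the representation.
    if world.items.get("patient.pose") == "sitting":
        return ["ask_patient_to_stand", "start_get_up_and_go_test"]
    return ["wait"]


if __name__ == "__main__":
    world = SharedRepresentation()
    perception_module(world)
    print(planner_module(world))
```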
Robot | 2016
Pablo Bustos; Luis J. Manso; Juan Pedro Bandera; Adrián Romero-Garcés; Luis Vicente Calderita; Rebeca Marfil; Antonio Bandera
Enabling autonomous mobile manipulators to collaborate with people is a challenging research field with a wide range of applications. Collaboration means working with a partner to reach a common goal, and it involves performing both individual and joint actions with that partner. Human-robot collaboration requires at least two conditions to be efficient: a) a common plan, usually under-defined, for all involved partners; and b) for each partner, the capability to infer the intentions of the others in order to coordinate the common behavior. This is a hard problem for robotics, since people can change their minds about their envisaged goal or interrupt a task without giving legible reasons. Collaborative robots should also select their actions taking into account human-aware factors such as safety, reliability and comfort. Current robotic cognitive systems are usually limited in this respect, as they lack the rich dynamic representations and the flexible human-aware planning capabilities needed to succeed in these collaboration tasks. In this paper, we address this problem by proposing and discussing a deep hybrid representation, DSR, which is geometrically organized into several layers of abstraction (deep) and merges symbolic and geometric information (hybrid). This representation is part of a new agent-based robotics cognitive architecture called CORTEX. The agents that form part of CORTEX are in charge of high-level functionalities, both reactive and deliberative, and share this representation among themselves. They keep it synchronized with the real world through sensor readings, and coherent with the internal domain knowledge by validating each update.
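A deep hybrid representation of the kind the abstract outlines can be approximated as a graph whose nodes carry symbolic attributes and whose edges carry either symbolic relations or geometric transforms. The sketch below is only an illustration of that idea, not the CORTEX/DSR data structures themselves.

```python
# Illustrative sketch of a "deep hybrid" graph: symbolic nodes plus edges
# that may be symbolic relations or geometric transforms. Not the real DSR.

import numpy as np


class HybridGraph:
    def __init__(self):
        self.nodes = {}   # name -> symbolic attributes
        self.edges = []   # (src, dst, label, payload)

    def add_node(self, name, **symbolic_attrs):
        self.nodes[name] = symbolic_attrs

    def add_edge(self, src, dst, label, payload=None):
        # payload may be None (pure symbolic edge) or a 4x4 transform (geometric edge).
        self.edges.append((src, dst, label, payload))


if __name__ == "__main__":
    g = HybridGraph()
    g.add_node("robot", type="agent")
    g.add_node("cup", type="object", graspable=True)
    # Symbolic edge: the robot intends to grasp the cup.
    g.add_edge("robot", "cup", "intends_to_grasp")
    # Geometric edge: pose of the cup in the robot frame (homogeneous transform).
    T = np.eye(4)
    T[:3, 3] = [0.6, 0.1, 0.8]
    g.add_edge("robot", "cup", "has_pose", T)
    print(len(g.nodes), "nodes,", len(g.edges), "edges")
```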
International Workshop on Brain-Inspired Computing | 2015
Luis J. Manso; Pablo Bustos; Juan Pedro Bandera; Adrián Romero-Garcés; Luis Vicente Calderita; Rebeca Marfil; Antonio Bandera
Collaboration is an essential feature of human social interaction. Briefly, when two or more people agree on a common goal and on a joint intention to reach that goal, they have to coordinate their actions and engage in joint actions, planning their courses of action according to the actions of the other partners. The same holds for teams in which the partners are people and robots, resulting in a collection of technical questions that are difficult to answer. Human-robot collaboration requires the robot to coordinate its behavior with the behaviors of the humans at different levels, e.g., the semantic level, the level of content and behavior selection in the interaction, and low-level aspects such as the temporal dynamics of the interaction. This forces the robot to internalize information about the motions, actions and intentions of the other partners, and about the state of the environment. Furthermore, collaborative robots should select their actions taking into account additional human-aware factors such as safety, reliability and comfort. Current cognitive systems are usually limited in this respect, as they lack the rich dynamic representations and the flexible human-aware planning capabilities needed to succeed in tomorrow's human-robot collaboration tasks. In this paper, we provide a tool for addressing this problem based on the notion of deep hybrid representations and the facilities that this common state representation offers for the tight coupling of planners at different layers of abstraction. Deep hybrid representations encode not only the robot and environment state, but also a robot-centric perspective of the partners taking part in the joint activity.
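The coupling of planners at different abstraction layers over a common state representation might look, in very reduced form, like the sketch below: a symbolic planner proposes an action and a lower, human-aware layer approves or vetoes it. All names, keys and thresholds are hypothetical and not taken from the paper.

```python
# Minimal sketch of layered planning over a shared state. Hypothetical only.

shared_state = {
    "goal": "hand_over_cup",
    "human.distance_m": 0.9,   # would be written by a perception agent
    "cup.grasped": True,
}


def symbolic_planner(state):
    # Chooses the next high-level action towards the joint goal.
    if state["goal"] == "hand_over_cup" and state["cup.grasped"]:
        return "extend_arm_towards_human"
    return "idle"


def human_aware_layer_ok(action, state):
    # Lower-level check of comfort/safety before execution.
    if action == "extend_arm_towards_human":
        return 0.4 < state["human.distance_m"] < 1.2
    return True


if __name__ == "__main__":
    action = symbolic_planner(shared_state)
    print(action, "approved" if human_aware_layer_ok(action, shared_state) else "vetoed")
```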
International Conference on Pervasive Computing | 2013
Luis Vicente Calderita; Pablo Bustos; C. Suárez Mejías; F. Fernández; Antonio Bandera
NEUROTECHNIX | 2013
Luis Vicente Calderita; Pablo Bustos; Cristina Suárez-Mejías; Begoña Ferrer-González; Antonio Bandera
Archive | 2015
Adrián Romero-Garcés; Luis Vicente Calderita; Jesus Martínez-Gómez; Juan Pedro Bandera-Rubio; Rebeca Marfil; Luis J. Manso; Pablo Bustos; Antonio Jesus Bandera-Rubio
Revista Iberoamericana de Automática e Informática Industrial | 2015
Luis Vicente Calderita; Pablo Bustos; C. Suárez Mejías; Fernando Fernández; R. Viciana; Antonio Bandera