Jose Manuel Peula
University of Málaga
Publications
Featured research published by Jose Manuel Peula.
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2013
Cristina Urdiales; E.J. Perez; Gloria Peinado; Manuel Fdez-Carmona; Jose Manuel Peula; Roberta Annicchiarico; F. Sandoval; Carlo Caltagirone
Assisted wheelchair navigation is of key importance for persons with severe disabilities. The problem has been solved in different ways, usually based on the shared control paradigm. This paradigm consists of giving the user more or less control on an as-needed basis. Naturally, these approaches require personalization: each wheelchair user has different skills and needs, and it is hard to know a priori from diagnosis how much assistance must be provided. Furthermore, since there is no such thing as an average user, it is sometimes difficult to quantify the benefits of these systems. This paper proposes a new method to extract a prototype user profile from real traces of more than 70 volunteers presenting different physical and cognitive skills. These traces are clustered to determine the average behavior that can be expected from a wheelchair user in order to cope with significant situations. Processed traces provide a prototype user model for comparison purposes, plus a simple method to obtain, without supervision, a skill-based navigation profile for any user while he/she is driving. This profile is useful for benchmarking, but also to determine the situations in which a given user might require more assistance after evaluating how well he/she compares to the benchmark. Profile-based shared control has been successfully tested by 18 volunteers affected by left or right brain stroke at Fondazione Santa Lucia, in Rome, Italy.
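The clustering step described above can be illustrated with a toy k-means over per-trace driving features. This is a minimal sketch under stated assumptions: the feature choice (e.g. speed, smoothness, directness per trace), the number of clusters, and the function names are illustrative, not the paper's exact setup.

```python
import numpy as np

def kmeans(traces, k=3, iters=50, seed=0):
    """Tiny k-means over per-trace driving feature vectors.
    Returns cluster centers (prototype profiles) and per-trace labels.
    Feature choice and k are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    X = np.asarray(traces, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each trace to its nearest prototype
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels
```

The resulting cluster centers act as prototype user profiles; a new driver's trace can then be compared against the nearest prototype for benchmarking.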
International Conference on Robotics and Automation | 2010
Cristina Urdiales; Manuel Fernández-Carmona; Jose Manuel Peula; R. Annicchiarico; F. Sandoval; Carlo Caltagirone
This work presents a new approach to shared control to assist wheelchair driving. Rather than swapping control from human to robot either by request or on a need basis, the system estimates how much help is needed in a reactive fashion and continuously produces an emergent motor command in combination with human input. To provide time stability and integration, instant commands are modulated by a factor depending on human efficiency in a shifting time window. Thus, the better the person drives, the more control he/she is awarded. The approach has been tested at Fondazione Santa Lucia (FSL) in Rome with volunteers presenting different disabilities. All volunteers managed to finish a mildly complicated trajectory with door crossing and major turns, and the proposed system increased efficiency in all cases.
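The efficiency-modulated blend described above can be sketched as a convex combination of the two commands. This is a minimal sketch, not the paper's implementation: the function name, the (v, w) command representation, and the use of a windowed mean as the efficiency factor are assumptions.

```python
import numpy as np

def blend_commands(u_human, u_robot, human_eff_window):
    """Blend human and robot motion commands (e.g. linear and angular
    velocity) by the human's recent driving efficiency, averaged over
    a shifting time window of values in [0, 1]."""
    eta = float(np.mean(human_eff_window))
    u_h = np.asarray(u_human, dtype=float)
    u_r = np.asarray(u_robot, dtype=float)
    # The better the person drives (eta -> 1), the more control they keep.
    return eta * u_h + (1.0 - eta) * u_r
```

With a perfect efficiency window the emergent command equals the human's input; with zero efficiency the robot takes over entirely.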
Robotics and Autonomous Systems | 2009
Jose Manuel Peula; Cristina Urdiales; Ignacio Herrero; Isabel Sanchez-Tato; F. Sandoval
A traditional problem in robotics is the adaptation of developed algorithms to different platforms and sensors, as each of them has its specifics and associated errors. Hierarchical control architectures deal with the problem through division of the system into layers, where deliberative processing is performed at a high level and low-level layers are in charge of reactive behaviors and adaptation to platform and sensor hardware. Specifically, approaches based on the Emergent Behavior Theory rely on building high-level behaviors by combining simpler ones that provide intuitive reactive responses to sensory input. This combination is controlled by higher layers in order to obtain more complex behaviors. Unfortunately, low-level behaviors might be difficult to develop, especially when dealing with legged robots and sensors such as video cameras, where resulting motion is heavily influenced by the robot kinematics and dynamics and sensory input is affected by external conditions, transformations, distortions, noise and motion itself (e.g. the camera bouncing problem). In this paper, we propose a new learning-based method to solve most of these problems. It basically consists of creating a reactive behavior by manually driving a robot under supervision for a time. During that time, its visual input is reactively associated with commands sent to the robot through a Case Based Reasoning (CBR) behavior builder. Thus, the robot learns what the person would do in its situation to achieve a certain goal. This approach has two advantages. First, humans are particularly good at adapting and taking into account the specifics of a given mobile robot after some use. Thus, kinematics and dynamics are absorbed into the casebase along with how the person thinks they should be dealt with by that particular robot.
Similarly, commands are associated with the input sensor as is, so systematic errors in sensors and motors are also implicitly learnt in the casebase (camera bouncing, distortions, noise, ...). Also, different reactive strategies to reach a simple goal can be programmed into the robot by showing, rather than by coding. This is particularly useful because some reactive behaviors are ill-fitted to equations. Naturally, CBR allows online adaptation to potential changes after supervised training, so the system is able to learn by itself when working autonomously too. The proposed system has been successfully tested in a 4-legged Aibo robot in a controlled environment. To prove that it is adequate to create low-level layers for hybrid architectures, two different CBR reactive behaviors have been tested and combined into an emergent one. A deliberative layer could be used to extend the system to more complex environments.
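The core of the CBR behavior builder described above can be sketched as a case base that pairs sensory feature vectors with demonstrated motor commands and retrieves by nearest neighbour. This is a toy sketch under stated assumptions: the class name, the 1-nearest-neighbour retrieval, and the feature representation are illustrative, not the paper's exact CBR cycle.

```python
import numpy as np

class CBRBehavior:
    """Toy case-based reactive behavior: each case pairs a sensory
    feature vector with the motor command a human teacher issued in
    that situation. Retrieval is 1-nearest-neighbour."""
    def __init__(self):
        self.features = []   # observed sensory vectors
        self.commands = []   # commands demonstrated by the teacher

    def learn(self, feature, command):
        # store a new case from the supervised driving phase
        self.features.append(np.asarray(feature, dtype=float))
        self.commands.append(command)

    def act(self, feature):
        # retrieve the command of the most similar stored case
        f = np.asarray(feature, dtype=float)
        dists = [np.linalg.norm(f - g) for g in self.features]
        return self.commands[int(np.argmin(dists))]
```

Because cases are stored as observed, platform-specific kinematics and systematic sensor errors are implicitly captured along with the teacher's responses.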
International Conference on Robotics and Automation | 2011
Gloria Peinado; Cristina Urdiales; Jose Manuel Peula; M. Fdez-Carmona; Roberta Annicchiarico; F. Sandoval; Carlo Caltagirone
This work presents a new approach to proactive collaborative wheelchair control. The system is based on estimating how much help the person needs in each situation and providing just the correct amount. This is achieved by combining robot and human control commands in a reactive fashion after weighting them by their respective local efficiency. Thus, the better the person drives, the more control he/she is awarded. In order to predict how much help users may need in advance, rather than waiting for them to decrease in efficiency, their skills to deal with each situation are estimated with respect to a baseline driver profile, so that assistance can be increased when needed. Situations are characterized at the reactive level to keep a reduced set. This profile has been extracted via clustering from real traces of more than 70 inpatients presenting different physical and cognitive skills. The approach has been successfully tested at Fondazione Santa Lucia (FSL) in Rome.
IEEE International Conference on Rehabilitation Robotics | 2009
Cristina Urdiales; Jose Manuel Peula; Manuel Fernández-Carmona; R. Annicchiarico; F. Sandoval; Carlo Caltagirone
This work presents a new approach to shared control for driving a robotic wheelchair for persons with disabilities. The proposal is based on weighting the robot and human commands by their respective efficiencies to obtain an emergent command in a reactive way. It was tested with a robotized Meyra wheelchair at Fondazione Santa Lucia (FSL) in Rome with volunteers presenting different disabilities, and we observed that the system seemed to be less helpful to persons with better cognitive skills. This seemed to be due to disagreement between the users and the machine when they realized that they were being helped. In order to improve that, we added a Case Based Reasoning module that learns how the user drives and replaces the robot navigation algorithm. New tests with the adaptive system showed an increase in efficiency in all cases.
IEEE International Conference on Rehabilitation Robotics | 2009
Manuel Fernández-Carmona; Blanca Fernandez-Espejo; Jose Manuel Peula; Cristina Urdiales; F. Sandoval
This paper addresses a new collaborative control method for robotic wheelchairs. The original method was specifically designed for disabled people who are unable to drive a robotic wheelchair on their own. Its main novelty was that the wheelchair provided just the amount of help needed at each moment, to avoid loss of residual abilities. This wheelchair was tested by volunteer in-patients in Casa Agevole at Fondazione Santa Lucia (FSL) in Rome. However, we found that in-patients with severe cognitive impairment were not able to complete complex trajectories despite the wheelchair's help. Thus, this work presents an improvement of these control techniques that is more suitable for severe patients. We present a modified efficiency-based collaborative control scheme based on modulation of assistance using biometric sensors, as well as preliminary results of this technique.
Ambient Intelligence | 2009
L. Duran; Manuel Fernández-Carmona; Cristina Urdiales; Jose Manuel Peula; F. Sandoval
In today's aging society, power wheelchairs provide assistance for non-pedestrian mobility, but inside narrow indoor spaces holonomic ones are required. While they adapt well to complex environments, it is harder to control them via a conventional joystick. Thus, extra buttons and/or knobs are included to decide what to do. To make control more intuitive, we propose to use a Wiimote for holonomic wheelchair control. Experiments in a narrow environment have been successful and prove that the Wiimote requires less interaction to achieve the same results as a conventional joystick. This has been reported to reduce mental workload and, hence, allow more relaxed interaction with the wheelchair.
International Conference on Case-Based Reasoning | 2013
Cristina Urdiales; Jose Manuel Peula; Manuel Fernández-Carmona; F. Sandoval
Mobility assistance is of key importance for people with disabilities to remain autonomous in their preferred environments. In severe cases, assistance can be provided by robotized wheelchairs that can perform complex maneuvers and/or correct the user's commands. User acceptance is of key importance, as some users do not like their commands to be modified. This work presents a solution to improve acceptance. It consists of making the robot learn how the user drives, so corrections will not be so noticeable to the user. Case Based Reasoning (CBR) is used to acquire a user's driving model at the reactive level. Experiments with volunteers at Fondazione Santa Lucia (FSL) have proven that, indeed, this customized approach to assistance increases user acceptance.
International Work-Conference on Artificial and Natural Neural Networks | 2015
Manuel Fernández-Carmona; Jose Manuel Peula; Cristina Urdiales; F. Sandoval
This paper presents a novel shared control algorithm for robotized wheelchairs. The proposed algorithm is a new method to extend autonomous navigation techniques into the shared control domain. It reactively combines the user's and robot's commands into a continuous function that approximates a classic Navigation Function (NF) by weighting input commands with NF constraints. Our approach overcomes the main drawbacks of NFs (computational complexity and limitations on environment modeling), so it can be used in dynamic unstructured environments. It also benefits from NF properties: convergence to destination, smooth paths and safe navigation. Due to the user's contribution to control, our function is not strictly an NF, so we call it a pseudo-navigation function (PNF) instead.
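The idea of blending a user's command with a navigation-function-like descent direction can be illustrated with a toy potential field. This is a minimal sketch under stated assumptions: the attractive/repulsive potential shape, the gains, and the fixed blending weight alpha are illustrative and are not the paper's actual PNF construction.

```python
import numpy as np

def pnf_command(pos, goal, obstacles, u_user, alpha=0.5, k_rep=1.0, d0=2.0):
    """Blend the user's command with gradient descent on a simple
    attractive/repulsive potential. alpha weights the user's share
    of control; all parameter values are illustrative assumptions."""
    pos, goal, u_user = (np.asarray(v, dtype=float) for v in (pos, goal, u_user))
    grad = pos - goal                        # attractive term: pulls toward goal
    for obs in obstacles:                    # repulsive term near each obstacle
        diff = pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            grad -= k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    u_robot = -grad                          # descend the potential
    n = np.linalg.norm(u_robot)
    if n > 0:
        u_robot /= n                         # unit robot command
    return alpha * u_user + (1.0 - alpha) * u_robot
```

With no obstacles the robot's share simply points at the goal, so a user command aligned with the goal passes through unchanged.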
Intelligent Robots and Systems | 2013
Jose Manuel Peula; Cristina Urdiales; Ignacio Herrero; F. Sandoval
Multi-robot systems (MRS) are a very active and important research topic nowadays. One of the main problems of these systems is the large number of variables to take into account. Due to this, robot behaviors are sometimes learnt instead of calculated via analytical expressions. A typical learning mechanism, especially for biomimetic robots, is learning from demonstration (LfD). This paper proposes an LfD approach for implicit coordinated navigation using a combination of Case-Based Reasoning (CBR) behaviors. During a training stage, CBR is used to learn simple behaviors that associate positions of other robots and/or objects with motion commands for each robot. Thus, human operators only need to concentrate on achieving their robot's goal as efficiently as possible in the operating conditions. Then, in the running stage, each robot will achieve a different coordinated navigation strategy depending on the triggered behaviors. This system has been successfully tested with three Aibo-ERS7 robots in a RoboCup-like environment.