Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis Payá is active.

Publication


Featured research published by Luis Payá.


Sensors | 2010

Map Building and Monte Carlo Localization Using Global Appearance of Omnidirectional Images

Luis Payá; Lorenzo Fernández; Arturo Gil; Oscar Reinoso

In this paper we deal with the problem of map building and localization of a mobile robot in an environment, using the information provided by an omnidirectional vision sensor mounted on the robot. Our main objective is to study the feasibility of techniques based on the global appearance of a set of omnidirectional images captured by this vision sensor to solve this problem. First, we study how to describe the visual information globally so that it correctly represents locations and the geometrical relationships between them. Then, we integrate this information using an approach based on a spring-mass-damper model to create a topological map of the environment. Once the map is built, we propose the use of a Monte Carlo localization approach to estimate the most probable pose of the vision system and its trajectory within the map. We perform a comparison in terms of computational cost and localization error. The experimental results we present have been obtained with real indoor omnidirectional images.
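A minimal sketch of one predict-update-resample cycle of the Monte Carlo localization described above, in Python with NumPy; the global-appearance descriptor vectors, the Gaussian likelihood on descriptor distance and all names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def mcl_step(particles, weights, odometry, query_descriptor,
             map_poses, map_descriptors, motion_noise=0.05):
    """One predict-update-resample cycle of Monte Carlo localization.

    particles:        (N, 3) array of [x, y, theta] pose hypotheses
    odometry:         [dx, dy, dtheta] motion since the last step
                      (simplified: applied directly in the global frame)
    query_descriptor: global-appearance vector of the current image
    map_poses / map_descriptors: reference poses and their descriptors
    """
    n = len(particles)
    # Predict: propagate every particle with noisy odometry.
    particles = particles + odometry + np.random.normal(0.0, motion_noise,
                                                        particles.shape)
    # Update: weight each particle by how similar the current descriptor
    # is to the descriptor stored for the nearest map pose.
    for i, p in enumerate(particles):
        nearest = np.argmin(np.linalg.norm(map_poses[:, :2] - p[:2], axis=1))
        d = np.linalg.norm(map_descriptors[nearest] - query_descriptor)
        weights[i] = np.exp(-0.5 * d * d)        # Gaussian-like likelihood
    weights = weights / weights.sum()
    # Resample (systematic) to concentrate particles on probable poses.
    u = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), u)
    return particles[idx], np.full(n, 1.0 / n)
```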


Sensors | 2014

Performance of global-appearance descriptors in map building and localization using omnidirectional vision

Luis Payá; Francisco Amorós; Lorenzo Fernández; Oscar Reinoso

Map building and localization are two crucial abilities that autonomous robots must develop. Vision sensors have become a widespread option to solve these problems. When using this kind of sensor, the robot must extract the necessary information from the scenes to build a representation of the environment where it has to move and to estimate its position and orientation robustly. Techniques based on the global appearance of the scenes constitute one possible approach to extracting this information. They consist of representing each scene with a single descriptor that gathers global information from the scene. These techniques present some advantages compared to classical descriptors based on the extraction of local features. However, the parameters must be configured well to reach a compromise between computational cost and accuracy. In this paper we make an exhaustive comparison among several global-appearance descriptors to solve the mapping and localization problem. With this aim, we make use of several image sets captured in indoor environments under realistic working conditions. The datasets have been collected using an omnidirectional vision sensor mounted on the robot.
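As a concrete illustration of this family of descriptors, here is a sketch of a Fourier-signature-style descriptor for panoramic images, one of the classical global-appearance techniques; the function name and the choice of keeping the first k magnitude components per row are assumptions for illustration:

```python
import numpy as np

def fourier_signature(panorama, k=16):
    """Global-appearance descriptor of a panoramic image: the first k
    magnitude components of the 1-D DFT of each image row.

    For a panoramic image, a rotation of the robot is a column shift,
    which changes only the phase of the DFT, so keeping the magnitudes
    gives a rotation-invariant descriptor.
    """
    rows = np.asarray(panorama, dtype=float)
    spectrum = np.fft.fft(rows, axis=1)          # DFT along each row
    return np.abs(spectrum[:, :k]).ravel()       # keep low frequencies

# Usage: compare two scenes by descriptor distance.
# d = np.linalg.norm(fourier_signature(img_a) - fourier_signature(img_b))
```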


Engineering Applications of Artificial Intelligence | 2010

A hybrid solution to the multi-robot integrated exploration problem

Miguel Juliá; Oscar Reinoso; Arturo Gil; Mónica Ballesta; Luis Payá

In this paper we present a hybrid reactive/deliberative approach to the multi-robot integrated exploration problem. In contrast to other works, the design of both the reactive and the deliberative processes is oriented exclusively to exploration, and both are given the same level of importance. The approach is based on the concepts of the expected safe zone and the gateway cell. The reactive exploration of the robot's expected safe zone by means of basic behaviours avoids local minima. Simultaneously, a planner builds a decision tree in order to decide between exploring the current expected safe zone or moving to another zone by travelling to a gateway cell. Furthermore, the model takes into account the degree of localization of the robots, returning to previously explored areas when it is necessary to recover certainty in the robots' positions. Several simulations demonstrate the validity of the approach.
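A toy sketch of the deliberative choice the abstract describes, between exploring the current expected safe zone and travelling to a gateway cell; the additive score, the `gateway_gain` parameter and the straight-line distances are simplifying assumptions, not the paper's planner:

```python
import math

def choose_next_target(robot_pose, safe_zone_frontiers, gateway_cells,
                       gateway_gain=5.0):
    """Toy deliberative choice between exploring the current expected
    safe zone and travelling to a gateway cell leading to another zone.

    Each candidate is scored as expected information gain minus travel
    cost (straight-line distance stands in for a planned path).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_score = None, -math.inf
    for cell in safe_zone_frontiers:               # stay in this zone
        score = 1.0 - dist(robot_pose, cell)
        if score > best_score:
            best, best_score = ("explore", cell), score
    for cell in gateway_cells:                     # change to another zone
        score = gateway_gain - dist(robot_pose, cell)
        if score > best_score:
            best, best_score = ("gateway", cell), score
    return best
```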


Sensors | 2010

Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors

Arturo Gil; Oscar Reinoso; Mónica Ballesta; Miguel Juliá; Luis Payá

In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot equipped with a particular sensor moves through the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
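A minimal data-structure sketch of such a Rao-Blackwellized particle filter, where each particle carries sampled robot paths plus an independent per-landmark EKF conditioned on those paths; the class layout and the descriptor-based data-association helper are illustrative assumptions:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Landmark:
    """Per-landmark EKF state: 3-D position and its uncertainty."""
    mean: np.ndarray                 # (3,) estimated position
    cov: np.ndarray                  # (3, 3) covariance
    descriptor: np.ndarray           # visual appearance, for association

@dataclass
class Particle:
    """One hypothesis of the filter: sampled paths for every robot in
    the team, plus an independent EKF for every landmark."""
    paths: list = field(default_factory=list)      # one pose list per robot
    landmarks: list = field(default_factory=list)  # Landmark instances
    weight: float = 1.0

def associate(particle, descriptor, max_dist=0.7):
    """Match an observed descriptor to a map landmark, or return None
    when the observation should initialize a new landmark."""
    best, best_d = None, max_dist
    for lm in particle.landmarks:
        d = np.linalg.norm(lm.descriptor - descriptor)
        if d < best_d:
            best, best_d = lm, d
    return best
```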


iberian conference on pattern recognition and image analysis | 2005

Monte Carlo localization using SIFT features

Arturo Gil; Oscar Reinoso; Asunción Vicente; Cèsar Fernández; Luis Payá

The ability to determine its situation in a given environment is crucial for an autonomous agent. While navigating through a space, a mobile robot must be capable of finding its location in a map of the environment (i.e., its pose); otherwise, the robot will not be able to complete its task. This problem becomes especially challenging if the robot does not possess any external measure of its global position. Typically, dead-reckoning systems fail to estimate the robot's pose when working for long periods of time. In this paper we present a localization method based on the Monte Carlo algorithm. During the last decade this method has been extensively tested in the field of mobile robotics, proving to be both robust and efficient. In addition, our approach takes advantage of a vision sensor. In particular, we have chosen to use SIFT features as visual landmarks, finding them suitable for the global localization of a mobile robot. We have successfully tested our approach on a B21r mobile robot, managing to globally localize the robot in a few iterations. The technique is suitable for office-like environments and behaves correctly in the presence of people and moving objects.
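A sketch of how SIFT features could weight localization hypotheses, using OpenCV's SIFT implementation and Lowe's ratio test; treating the raw count of good matches as a likelihood is a simplification, not necessarily the paper's measurement model:

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def observation_likelihood(current_img, reference_img, ratio=0.8):
    """Score how well the current view (grayscale image) matches the
    reference view stored for a map location: count the SIFT matches
    that pass Lowe's ratio test. More matches -> higher weight for
    that pose hypothesis."""
    _, des_cur = sift.detectAndCompute(current_img, None)
    _, des_ref = sift.detectAndCompute(reference_img, None)
    if des_cur is None or des_ref is None:
        return 0.0
    pairs = matcher.knnMatch(des_cur, des_ref, k=2)
    good = [p for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return float(len(good))
```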


Computer Applications in Engineering Education | 2015

Development and deployment of a new robotics toolbox for education

Arturo Gil; Oscar Reinoso; José María Marín; Luis Payá; Javier Ruiz

This paper presents a new toolbox focused on the teaching of robotic manipulators. The library works under Matlab and has been designed to strengthen the theoretical concepts explained during the theory lectures. The educational approach focuses on teaching the main concepts through mathematical modeling and simulation. To this end, the toolbox supports a set of practical sessions that allow the students to test most of the concepts of an introductory course on robotic manipulators. In addition, the library provides features that typically required proprietary software, such as the visualization of a realistic 3D representation of commercial robotic arms and the programming of those arms in an industrial language. The practical sessions cover the concepts of direct and inverse kinematics, direct and inverse dynamics, path planning and robot programming. As a transversal exercise during the sessions, each student is asked to choose and integrate a new robotic arm into the library, proposing a particular solution to the direct and inverse kinematic problems, as well as including other important parameters. The library has been deployed during the last year in bachelor and master studies and has been well received. Finally, the library has been assessed in terms of usefulness, design and usability by means of a student survey. The surveys were also designed to establish a relation between the students' perception of the system, the time spent on the tool and their learning achievements.
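Since the toolbox covers direct kinematics, here is a short self-contained sketch of forward kinematics from standard Denavit-Hartenberg parameters, written in Python rather than the toolbox's Matlab; the two-link example arm is hypothetical:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Homogeneous transform of one joint from standard
    Denavit-Hartenberg parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Direct kinematics: chain the per-joint transforms to get the
    end-effector pose in the base frame."""
    T = np.eye(4)
    for (theta0, d, a, alpha), q in zip(dh_table, joint_angles):
        T = T @ dh_matrix(theta0 + q, d, a, alpha)
    return T

# Example: a planar 2-link arm with unit link lengths.
dh = [(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)]
print(forward_kinematics(dh, [np.pi / 4, np.pi / 4])[:3, 3])
# -> approximately [0.707, 1.707, 0.0]
```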


Sensors | 2015

Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

Yerai Berenguer; Luis Payá; Mónica Ballesta; Oscar Reinoso

This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system. Every omnidirectional image acquired by the robot is described with a single global-appearance descriptor based on the Radon transform. Two different scenarios are considered in this work. In the first, we assume the existence of a previously built map composed of omnidirectional images captured from known positions. The goal in this case is to estimate the position in the map nearest to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second, we assume a model of the environment composed of omnidirectional images, but with no information about where the images were acquired. The goal in this case is to build a local map and to estimate the position of the robot within this map. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the position of objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and robustness of both methods.
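A sketch of Radon-transform-based description and nearest-image localization in the first scenario, using `skimage.transform.radon`; the number of projection angles, the normalization and the plain nearest-neighbour search are assumptions for illustration:

```python
import numpy as np
from skimage.transform import radon

def radon_descriptor(image, n_angles=60):
    """Global-appearance descriptor built from the Radon transform:
    project the image along a set of directions and keep the whole
    sinogram, flattened into a single normalized vector."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image.astype(float), theta=theta, circle=False)
    v = sinogram.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def nearest_map_image(query_img, map_descriptors):
    """Index of the map image whose descriptor is closest to the
    query's descriptor: the estimated position of the robot."""
    q = radon_descriptor(query_img)
    dists = [np.linalg.norm(q - m) for m in map_descriptors]
    return int(np.argmin(dists))
```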


Journal of Sensors | 2016

Using Omnidirectional Vision to Create a Model of the Environment: A Comparative Evaluation of Global-Appearance Descriptors

Luis Payá; Oscar Reinoso; Yerai Berenguer; David Úbeda

Nowadays, the design of fully autonomous mobile robots is a key discipline. Building a robust model of the unknown environment is an important ability the robot must develop. Using this model, the robot must be able to estimate its current position and to navigate to target points. Omnidirectional vision sensors are commonly used to solve these tasks. When using this source of information, the robot must extract relevant information from the scenes both to build the model and to estimate its position. The possible frameworks include the classical approach of extracting and describing local features, and working with the global appearance of the scenes, which has emerged as a conceptually simple and robust solution. While feature-based techniques have been extensively studied in the literature, appearance-based techniques require a full comparative evaluation to reveal the performance of the existing methods and to tune their parameters correctly. This work carries out a comparative evaluation of four global-appearance techniques in map-building tasks, using omnidirectional visual information as the only source of data from the environment.
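A small evaluation harness of the kind such a comparison needs: it takes any image-to-vector descriptor function and measures how often nearest-neighbour retrieval localizes a test image within a tolerance; the accuracy metric and the tolerance are illustrative choices, not the paper's protocol:

```python
import numpy as np

def evaluate_descriptor(describe, map_imgs, map_positions,
                        test_imgs, test_positions, tol=0.5):
    """Fraction of test images localized within `tol` metres of their
    ground-truth position by nearest-neighbour search over the map.
    `describe` is any function mapping an image to a vector, so the
    same harness compares several global-appearance techniques."""
    map_desc = np.stack([describe(img) for img in map_imgs])
    hits = 0
    for img, true_pos in zip(test_imgs, test_positions):
        q = describe(img)
        i = np.argmin(np.linalg.norm(map_desc - q, axis=1))
        if np.linalg.norm(map_positions[i] - true_pos) <= tol:
            hits += 1
    return hits / len(test_imgs)

# Usage: evaluate_descriptor(fourier_signature, ...) vs.
#        evaluate_descriptor(radon_descriptor, ...), etc.
```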


Industrial Robot-an International Journal | 2008

Mechanisms for collaborative teleoperation with a team of cooperative robots

Oscar Reinoso; Arturo Gil; Luis Payá; Miguel Juliá

Purpose – This paper aims to present a teleoperation system that allows one to control a group of mobile robots in a collaborative manner. In order to show the capabilities of the collaborative teleoperation system, it presents a task where the operator collaborates with a robot team to explore a remote environment in a coordinated manner. The system implements human-robot interaction by means of natural language interfaces, allowing one to teleoperate multiple mobile robots in an unknown, unstructured environment. With the supervision of the operator, the robot team builds a map of the environment with a vision-based simultaneous localization and mapping (SLAM) technique. The approach is well suited for search and rescue tasks and other applications where the operator may guide the exploration of the robots to certain areas in the map.

Design/methodology/approach – In contrast to a master-slave scheme of teleoperation, an exploration mechanism is proposed that allows one to integrate the comma...


emerging technologies and factory automation | 2008

Analysis of Map Alignment techniques in visual SLAM systems

Mónica Ballesta; Oscar Reinoso; Arturo Gil; Miguel Juliá; Luis Payá

In a multi-robot system in which each robot constructs its own local map, it is necessary to fuse these maps into a global one. This task is normally performed in two steps: aligning the maps and then merging the data. This paper focuses on the first step, map alignment, which consists of obtaining the transformation between the local maps built independently, so that these local maps share a common reference frame. A collection of algorithms for solving the map alignment problem is analyzed under different conditions of noise in the data and of intersection between local maps. This study is performed in a visual SLAM context, in which the robots construct landmark-based maps. The landmarks consist of 3D points captured from the environment and characterized by a visual descriptor.
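For the alignment step itself, a common closed-form building block is the least-squares rigid transform between corresponding landmark sets (the Kabsch/Procrustes solution); this sketch assumes correspondences have already been established, e.g. via the visual descriptors, which is the hard part the analyzed algorithms address:

```python
import numpy as np

def align_maps(points_a, points_b):
    """Least-squares rigid alignment (rotation R and translation t)
    between two (N, 3) sets of corresponding landmarks, so that
    R @ p_a + t ~ p_b for each correspondence."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - ca).T @ (points_b - cb)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation only
    t = cb - R @ ca
    return R, t
```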

Collaboration


Dive into Luis Payá's collaborations.

Top Co-Authors

Oscar Reinoso – Universidad Miguel Hernández de Elche
Arturo Gil – Universidad Miguel Hernández de Elche
Luis M. Jiménez – Universidad Miguel Hernández de Elche
Adrián Peidró – Universidad Miguel Hernández de Elche
José María Marín – Universidad Miguel Hernández de Elche
Lorenzo Fernández – Universidad Miguel Hernández de Elche
Miguel Juliá – Universidad Miguel Hernández de Elche
Mónica Ballesta – Universidad Miguel Hernández de Elche
Yerai Berenguer – Universidad Miguel Hernández de Elche
David Valiente – Universidad Miguel Hernández de Elche