Mónica Ballesta
Universidad Miguel Hernández de Elche
Publications
Featured research published by Mónica Ballesta.
Machine Vision and Applications | 2010
Arturo Gil; Oscar Martinez Mozos; Mónica Ballesta; Oscar Reinoso
In this paper we compare the behavior of different interest point detectors and descriptors under the conditions required for their use as landmarks in vision-based simultaneous localization and mapping (SLAM). We evaluate the repeatability of the detectors, as well as the invariance and distinctiveness of the descriptors, under different perceptual conditions using sequences of images representing planar objects as well as 3D scenes. We believe that this information will be useful when selecting an appropriate landmark detector and descriptor for visual SLAM.
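The repeatability measure evaluated in work like this can be sketched as follows: project the detections of the first image into the second using the known ground-truth homography and count how many land near a detection there. This is a minimal illustration under assumed conventions, not the paper's exact protocol; the function name and the 2-pixel tolerance are illustrative choices.

```python
import numpy as np

def repeatability(kps1, kps2, H, tol=2.0):
    """Fraction of keypoints from image 1 that, when mapped into image 2
    by the known homography H, have a detection within `tol` pixels."""
    pts = np.hstack([kps1, np.ones((len(kps1), 1))])   # homogeneous coords
    proj = (H @ pts.T).T
    proj = proj[:, :2] / proj[:, 2:3]                  # back to Euclidean
    # distance from each projected point to the nearest detection in image 2
    d = np.linalg.norm(proj[:, None, :] - kps2[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tol))

# toy check: identity homography, identical detections -> repeatability 1.0
kps = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
print(repeatability(kps, kps, np.eye(3)))  # → 1.0
```

A detector with high repeatability keeps this score close to 1 as viewpoint and scale change between the two images.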
Robotics and Autonomous Systems | 2010
Arturo Gil; Oscar Reinoso; Mónica Ballesta; Miguel Juliá
This paper describes an approach to solve the Simultaneous Localization and Mapping (SLAM) problem with a team of cooperative autonomous vehicles. We consider that each robot is equipped with a stereo camera and is able to observe visual landmarks in the environment. The SLAM approach presented here is feature-based: the map is represented by a set of 3D landmarks, each defined by a global position in space and a visual descriptor. The robots move independently along different trajectories and make relative measurements to landmarks in the environment in order to jointly build a common map using a Rao-Blackwellized particle filter. We show results obtained in a simulated environment that validate the SLAM approach. The process of observing a visual landmark is simulated in the following way: first, the relative measurement obtained by the robot is corrupted with Gaussian noise, using a noise model for a standard stereo camera. Second, the visual description of the landmark is altered by noise, simulating the changes in the descriptor that may occur when the robot observes the same landmark at different scales and viewpoints. In addition, the odometry noise of the robots takes values measured on real robots. We propose an approach to manage data associations in the context of visual features. Different experiments have been performed, with variations in the path followed by the robots and in the parameters of the particle filter. Finally, the results obtained in simulation demonstrate that the approach is suitable for small robot teams.
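The two-step observation simulation described above (Gaussian noise on the relative measurement, plus noise on the descriptor) can be sketched in a few lines. The function name, descriptor length and noise magnitudes here are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_observation(rel_pos, descriptor, pos_sigma=0.05, desc_sigma=0.1):
    """Corrupt a relative 3D landmark measurement and its visual descriptor
    with Gaussian noise, mimicking the simulation described above.
    pos_sigma / desc_sigma are illustrative, not the paper's noise model."""
    noisy_pos = rel_pos + rng.normal(0.0, pos_sigma, size=3)
    noisy_desc = descriptor + rng.normal(0.0, desc_sigma, size=descriptor.shape)
    return noisy_pos, noisy_desc

true_pos = np.array([1.0, 0.5, 2.0])   # landmark in the camera frame (m)
desc = rng.random(64)                  # stand-in for a visual descriptor
obs_pos, obs_desc = simulate_observation(true_pos, desc)
print(obs_pos.shape, obs_desc.shape)   # → (3,) (64,)
```

A more faithful stereo model would make the position noise grow with depth, since stereo triangulation error increases with distance.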
Current Topics in Artificial Intelligence | 2007
Oscar Martinez Mozos; Arturo Gil; Mónica Ballesta; Oscar Reinoso
In this paper we present several interest point detectors and analyze their suitability when used as landmark extractors for vision-based simultaneous localization and mapping (vSLAM). For this purpose, we evaluate the detectors according to their repeatability under changes in viewpoint and scale, which are the desired requirements for visual landmarks. Several experiments were carried out using sequences of images captured with high precision. The sequences represent planar objects as well as 3D scenes.
Engineering Applications of Artificial Intelligence | 2010
Miguel Juliá; Oscar Reinoso; Arturo Gil; Mónica Ballesta; Luis Payá
In this paper we present a hybrid reactive/deliberative approach to the multi-robot integrated exploration problem. In contrast to other works, the design of the reactive and deliberative processes is exclusively oriented to exploration, with both given the same level of importance. The approach is based on the concepts of expected safe zone and gateway cell. The reactive exploration of the robot's expected safe zone by means of basic behaviours avoids the presence of local minima. Simultaneously, a planner builds a decision tree in order to decide between exploring the current expected safe zone or moving to another zone by travelling to a gateway cell. Furthermore, the model takes into account the degree of localization of the robots in order to return to previously explored areas when it is necessary to recover certainty in the robots' positions. Several simulations demonstrate the validity of the approach.
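The core deliberative choice (keep exploring the current expected safe zone, or travel to a gateway cell leading elsewhere) can be caricatured as a utility comparison. This is a toy sketch, not the paper's planner: the function name, the utility form (expected new area minus travel cost) and all numbers are assumptions.

```python
def choose_action(zone_unexplored_area, gateways):
    """Pick between exploring the current expected safe zone and moving to a
    gateway cell, using a toy utility = expected new area - travel cost.
    `gateways` is a list of (expected_area_beyond, travel_cost) tuples."""
    best = ("explore_current_zone", zone_unexplored_area)  # no travel cost
    for i, (area, cost) in enumerate(gateways):
        utility = area - cost
        if utility > best[1]:
            best = (f"go_to_gateway_{i}", utility)
    return best[0]

# current zone nearly exhausted; gateway 1 promises a large unexplored area
print(choose_action(0.5, [(3.0, 2.0), (10.0, 4.0)]))  # → go_to_gateway_1
```

The paper's planner evaluates such alternatives over a decision tree rather than a flat list, but the trade-off it encodes is of this kind.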
Sensors | 2010
Arturo Gil; Oscar Reinoso; Mónica Ballesta; Miguel Juliá; Luis Payá
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot equipped with a particular sensor moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows it to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
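The importance-weighting step at the heart of a Rao-Blackwellized particle filter can be illustrated minimally: each particle carries a hypothesis, and its weight is scaled by the likelihood of the actual observation under that hypothesis. This sketch assumes an isotropic Gaussian measurement model with an illustrative sigma; the paper's filter also maintains per-particle landmark estimates, which are omitted here.

```python
import numpy as np

def update_weights(weights, expected, observed, sigma=0.1):
    """Reweight particles by the Gaussian likelihood of the observed landmark
    position given each particle's expected observation (a minimal sketch of
    the RBPF importance-weight step; sigma is illustrative)."""
    err = np.linalg.norm(expected - observed[None, :], axis=1)
    w = weights * np.exp(-0.5 * (err / sigma) ** 2)
    return w / w.sum()

# three particles; the second one predicts the measurement best
expected = np.array([[1.2, 0.0, 2.1], [1.0, 0.5, 2.0], [0.5, 0.9, 1.5]])
observed = np.array([1.0, 0.5, 2.0])
w = update_weights(np.ones(3) / 3, expected, observed)
print(int(np.argmax(w)))  # → 1
```

After reweighting, particles are typically resampled in proportion to these weights, concentrating the filter on trajectory hypotheses consistent with the measurements.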
Sensors | 2015
Yerai Berenguer; Luis Payá; Mónica Ballesta; Oscar Reinoso
This work presents some methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on board. Every omnidirectional image acquired by the robot is described with a single global-appearance descriptor based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a previously built map composed of omnidirectional images captured from known positions. The purpose in this case is to estimate the position of the map nearest to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about where the images were acquired. The purpose in this case is to build a local map and estimate the position of the robot within this map. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the position of objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and robustness of both methods.
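The first scenario (finding the nearest map position to the robot's current view) reduces to nearest-neighbour search over global-appearance descriptors. In this sketch the descriptor is a crude stand-in: concatenated row and column sums, i.e. image projections at only two angles, whereas the paper uses the full Radon transform over a dense set of angles. All names and sizes are illustrative.

```python
import numpy as np

def appearance_descriptor(img):
    """Toy global-appearance descriptor: concatenated row and column sums
    (a crude stand-in for the Radon transform used in the paper, which
    projects the image over many angles, not just two)."""
    d = np.concatenate([img.sum(axis=0), img.sum(axis=1)])
    return d / np.linalg.norm(d)

def localize(query, map_images):
    """Index of the map image whose descriptor is nearest (Euclidean
    distance) to the query image's descriptor."""
    q = appearance_descriptor(query)
    dists = [np.linalg.norm(q - appearance_descriptor(m)) for m in map_images]
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
map_imgs = [rng.random((32, 32)) for _ in range(5)]
query = map_imgs[3] + rng.normal(0, 0.01, (32, 32))  # slightly perturbed view
print(localize(query, map_imgs))  # → 3
```

A key property of such global descriptors is compactness: one vector per image, so comparing the current view against the whole map is a single distance computation per stored image.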
Emerging Technologies and Factory Automation | 2008
Mónica Ballesta; Oscar Reinoso; Arturo Gil; Miguel Juliá; Luis Payá
In a multi-robot system in which each robot constructs its own local map, it is necessary to fuse these maps into a global one. This task is normally performed in two steps: aligning the maps and then merging the data. This paper focuses on the first step, map alignment, which consists of obtaining the transformation between local maps built independently, so that these local maps share a common reference frame. A collection of algorithms for solving the map alignment problem is analyzed under different conditions of noise in the data and intersection between local maps. The study is performed in a visual SLAM context, in which the robots construct landmark-based maps. The landmarks consist of 3D points captured from the environment and characterized by a visual descriptor.
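Given matched landmarks between two local maps, one standard way to compute the alignment (the paper compares several algorithms; this is the classical SVD / Horn least-squares solution, shown here as an assumed example rather than the paper's specific method) is:

```python
import numpy as np

def align_maps(A, B):
    """Least-squares rigid transform (R, t) with B ≈ R @ A + t, from matched
    landmark positions (rows of A and B): the classical SVD solution."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# toy check: rotate a landmark map by 90° about z and shift it
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], float)
B = A @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = align_maps(A, B)
print(np.allclose(R, R_true), np.allclose(t, [2.0, -1.0, 0.5]))  # → True True
```

In practice the matches come from comparing visual descriptors and are contaminated by outliers, which is why robust wrappers (e.g. RANSAC around this solver) are common in the setting the paper studies.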
Revista Iberoamericana de Automática e Informática Industrial | 2010
Mónica Ballesta; Arturo Gil; Oscar Reinoso; David Úbeda
The aim of this paper is to find a visual feature extractor that can be used in the process of SLAM (Simultaneous Localization and Mapping). This feature extractor is the combination of a detector, which extracts significant points from the environment, and a local descriptor, which characterizes those points. This paper presents a comparison of a set of interest point detectors and local descriptors used as visual landmarks in a SLAM context. The comparative analysis is divided into two different steps: detection and description. We evaluate the repeatability of the detectors and the invariance of the descriptors to changes in viewpoint, scale and illumination. The experiments have been carried out with sequences of indoor (a building with offices) and outdoor images with different imaging condition changes (position and illumination). In this way, the typical environments of robot navigation tasks are represented. We consider that the results obtained in this work can be useful when selecting a suitable landmark for visual SLAM, in indoor and outdoor environments.
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2007
Luis Payá; Oscar Reinoso; Arturo Gil; Jose Manuel Pedrero; Mónica Ballesta
The appearance-based approach to visual robot navigation, applied to the following of pre-recorded routes, offers several advantages, such as its applicability to non-structured environments and the relatively simple extraction of control laws. Classical approaches consist of two separate phases, learning and navigation, so a robot cannot start navigating until learning has finished, and if some locations have to be added after learning has finished, the database must be created from scratch. This work presents how an incremental PCA model can be used to overcome these limitations. With this approach, the follower robot can start while the database is still being built. Comparing the current view with those in the database, it can localize itself and navigate using a fuzzy controller. This approach can be applied to collaborative tasks where a team of robots must follow a guide robot, or to surveillance tasks where a route in a building has to be repeated continuously.
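The localization part of such an appearance-based scheme can be sketched as nearest-neighbour search in a PCA eigenspace of route views. For simplicity this sketch computes the eigenspace in batch with an SVD; the paper's contribution is precisely that this eigenspace can instead be updated incrementally as new views arrive, so the follower need not wait for the full route. Image sizes and the number of components are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# database of views along a route, one flattened image per row
views = rng.random((20, 256))

# batch PCA via SVD (an incremental PCA would update mean and basis
# view by view instead of recomputing them from the whole database)
mean = views.mean(axis=0)
_, _, Vt = np.linalg.svd(views - mean, full_matrices=False)
basis = Vt[:8]                         # keep 8 principal components

def localize(view):
    """Index of the nearest database view in the low-dimensional eigenspace."""
    q = basis @ (view - mean)
    db = (views - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(db - q, axis=1)))

query = views[7] + rng.normal(0, 0.01, 256)  # current camera view, with noise
print(localize(query))  # → 7
```

The recovered index tells the follower where along the route it is, which a fuzzy controller can then turn into steering commands.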
Advanced Concepts for Intelligent Vision Systems | 2017
David Valiente; Oscar Reinoso; Arturo Gil; Luis Payá; Mónica Ballesta
This article presents a visual localization technique based solely on the use of omnidirectional images, within the framework of mobile robotics. The proposal makes use of the epipolar constraint, adapted to the omnidirectional reference, in order to deal with matching point detection, which ultimately determines a motion transformation for localizing the robot. The principal contributions lie in the propagation of the current uncertainty to the matching. Besides, a Bayesian regression technique is also implemented in order to reinforce robustness. As a result, we provide a reliable adaptive matching, which proves its stability and consistency against non-linear and dynamic effects affecting the image frame and, consequently, the final application. In particular, the search for matching points is greatly reduced, speeding up the search and avoiding false correspondences. The final outcome is reflected in real-data experiments, which confirm the benefit of these contributions and also test the suitability of the localization when it is embedded in a vSLAM application.
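The epipolar constraint that prunes the matching search can be checked numerically: for a true correspondence expressed as bearing vectors (the natural representation for omnidirectional cameras), b2ᵀ E b1 ≈ 0 with E = [t]ₓ R. The motion and point below are illustrative values, not data from the paper.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# assumed relative motion between two camera poses (illustrative values)
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                        # essential matrix

# one 3D point seen from both poses, as unit bearing vectors
P1 = np.array([3.0, 1.0, 2.0])         # point in the first camera frame
b1 = P1 / np.linalg.norm(P1)
P2 = R @ P1 + t                        # same point in the second camera frame
b2 = P2 / np.linalg.norm(P2)

# a true correspondence satisfies the epipolar constraint b2' E b1 ≈ 0
print(abs(b2 @ E @ b1) < 1e-9)         # → True
```

Candidate matches with a large residual |b2ᵀ E b1| can be discarded outright, which is how the constraint shrinks the search region and filters false correspondences.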