Publication


Featured research published by Pilar Bachiller.


Simulation, Modeling, and Programming for Autonomous Robots | 2010

RoboComp: a tool-based robotics framework

Luis J. Manso; Pilar Bachiller; Pablo Bustos; Pedro Núñez; Ramón Cintas; Luis Vicente Calderita

This paper presents RoboComp, an open-source component-oriented robotics framework. Ease of use and low development effort have proven to be two of the key issues to take into account when building frameworks. Given the crucial role that development tools play in both, this paper describes in depth the tools that make RoboComp more than just a middleware. To provide an overview of the developer experience, examples are given throughout the text. The framework is also compared to the most relevant open-source projects with similar goals, specifying its weaknesses and strengths.
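The abstract names the component-oriented approach without showing it. As a rough illustration only, the sketch below shows the general pattern such frameworks build on: components exposing narrow interfaces and depending on injected proxies rather than on each other. Every class and method name here is invented for the example; RoboComp's actual interfaces are generated by its tools and its communication runs over a middleware.

```python
# Minimal, hypothetical sketch of the component-oriented style that
# frameworks such as RoboComp are built around: each component exposes a
# typed interface and talks to others only through proxies, never directly.
# The class and method names here are illustrative, not RoboComp's real API.

from dataclasses import dataclass


@dataclass
class LaserScan:
    angles: list[float]      # beam angles in radians
    distances: list[float]   # measured ranges in metres


class LaserComponent:
    """Wraps a sensor and publishes its data through a narrow interface."""

    def get_scan(self) -> LaserScan:
        # A real component would read the device driver here.
        return LaserScan(angles=[0.0], distances=[1.5])


class NavigationComponent:
    """Consumes the laser interface; it never touches the device itself."""

    def __init__(self, laser_proxy: LaserComponent):
        self.laser = laser_proxy  # injected proxy keeps components decoupled

    def obstacle_ahead(self, threshold: float = 0.5) -> bool:
        scan = self.laser.get_scan()
        return min(scan.distances) < threshold


if __name__ == "__main__":
    nav = NavigationComponent(laser_proxy=LaserComponent())
    print("obstacle ahead:", nav.obstacle_ahead())
```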


Archive | 2008

Attentional Selection for Action in Mobile Robots

Pilar Bachiller; Pablo Bustos; Luis J. Manso

During the last few years, attention has become an important issue in machine vision. Studies of attentional mechanisms in biological vision have inspired many computational models (Tsotsos et al., 1995; Itti & Koch, 2000; Frintrop et al., 2005; Torralba et al., 2006; Navalpakkam & Itti, 2006). Most of them follow the assumption of limited capacity associated with the role of attention in psychological proposals (Broadbent, 1958; LaBerge, 1995). These theories hypothesize that the visual system has limited processing capacity and that attention acts as a filter, selecting the information that should be processed. This assumption has been criticized by many authors who affirm that the processing capacity of the human perceptual system is enormous (Neumann et al., 1986; Allport, 1987). From this point of view, a stage selecting the information to be processed is not needed. Instead, they explain the role of attention from the perspective of selection for action (Allport, 1987). According to this conception, the function of attention is to avoid behavioural disorganization by selecting the appropriate information to drive task execution.

Such a notion of attention is very interesting in robotics, where the aim is to build autonomous robots that interact with complex environments while maintaining multiple behavioural objectives. Attentional selection for action can guide robot behaviours by focusing on relevant visual targets while avoiding distracting elements. Moreover, it can be conceived as a coordination mechanism, since stimulus selection allows serializing the actions of, potentially, multiple active behaviours.

To exploit these ideas, a visual attention system based on the selection-for-action theory has been developed. The system is a central component of a control architecture from which complex behaviours emerge according to different attention-action links. It has been designed and tested on a mobile robot endowed with a stereo vision head. Figure 1 shows the proposed control model. The sensory-motor abilities of the robot are divided into two groups that lead to two subsystems: the visual attention system, which includes the mechanisms that give rise to the selection of visual information, and the set of high-level behaviours that use visual information to accomplish their goals. Both subsystems are connected to the motor control system, which is in charge of effectively executing the motor responses generated by the other two subsystems. Each high-level behaviour modulates the visual system in a specific way in order to get the necessary visual information. The incoming flow of information affects high-level...
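As a reading aid, the following is a schematic, hypothetical rendering of the control model just described: a high-level behaviour modulates the attention system top-down, the attention system selects among bottom-up stimuli, and the selected target drives the behaviour's motor request. The weighting scheme and all names are assumptions for illustration, not the chapter's actual implementation.

```python
# Schematic sketch of the attention-action link described above. The
# multiplicative modulation rule and every identifier are assumed.

class AttentionSystem:
    def __init__(self):
        self.modulation = {}  # stimulus label -> weight, set by behaviours

    def modulate(self, label: str, weight: float) -> None:
        self.modulation[label] = weight

    def select_target(self, stimuli: dict[str, float]) -> str:
        # Bottom-up saliency of each stimulus, scaled by task modulation.
        return max(stimuli, key=lambda s: stimuli[s] * self.modulation.get(s, 1.0))


class GoToDoorBehaviour:
    def configure_attention(self, attention: AttentionSystem) -> None:
        # This behaviour needs door-like stimuli, so it boosts them.
        attention.modulate("door", 3.0)

    def act(self, selected: str) -> str:
        return f"steer towards {selected}"  # motor request to execute


if __name__ == "__main__":
    attention = AttentionSystem()
    behaviour = GoToDoorBehaviour()
    behaviour.configure_attention(attention)   # top-down modulation
    stimuli = {"door": 0.4, "red box": 0.9}    # bottom-up saliencies
    target = attention.select_target(stimuli)  # attention-action link
    print(behaviour.act(target))               # -> "steer towards door"
```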


Portuguese Conference on Artificial Intelligence | 2005

Overt visual attention inside JDE control architecture

José María Cañas; M.M. de la Casa; Pablo Bustos; Pilar Bachiller

In this paper, a visual overt attention mechanism is presented. It builds and keeps updated a scene representation of all the relevant objects around a robot, especially when they are far away from each other and do not lie in the same camera image. The algorithm chooses the next fixation point for a monocular camera mounted on a pan-tilt unit. Our approach is based on two related dynamics: liveliness and saliency. The liveliness of each relevant object diminishes in time but increases with new observations of that object. The position of each valid object is a possible fixation point for the camera. The saliency of each fixation point increases in time but is reset after the camera visits that location. Real experiments with a Pioneer robot endowed with a FireWire camera on a pan-tilt unit are presented.
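The two dynamics lend themselves to a small numerical sketch. The linear update rules and constants below are assumed (the abstract does not give the paper's equations); only the qualitative behaviour, liveliness decaying between observations and saliency growing until a fixation resets it, follows the text.

```python
# Toy model of the liveliness/saliency dynamics; all constants are assumed.

DT = 0.1                  # control period in seconds (assumed)
LIVELINESS_DECAY = 0.5    # liveliness lost per second without observations
SALIENCY_GROWTH = 1.0     # saliency gained per second while unattended


class TrackedObject:
    def __init__(self, name: str):
        self.name = name
        self.liveliness = 1.0  # confidence that the object is still there
        self.saliency = 0.0    # urgency of re-fixating its position

    def step(self, observed: bool, fixated: bool) -> None:
        # Liveliness diminishes in time but increases with new observations.
        self.liveliness = 1.0 if observed else max(
            0.0, self.liveliness - LIVELINESS_DECAY * DT)
        # Saliency increases in time but is reset when the camera visits
        # the object's location.
        self.saliency = 0.0 if fixated else self.saliency + SALIENCY_GROWTH * DT


def next_fixation(objects: list[TrackedObject]) -> TrackedObject:
    # Only "alive" objects are valid fixation points; pick the most salient.
    alive = [o for o in objects if o.liveliness > 0.0]
    return max(alive, key=lambda o: o.saliency)


if __name__ == "__main__":
    scene = [TrackedObject("ball"), TrackedObject("marker")]
    for _ in range(10):                         # one second of idle time
        for obj in scene:
            obj.step(observed=False, fixated=False)
    scene[0].step(observed=True, fixated=True)  # camera visits the ball
    print(next_fixation(scene).name)            # -> "marker"
```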


International Journal of Advanced Robotic Systems | 2015

A Perception-aware Architecture for Autonomous Robots

Luis J. Manso; Pablo Bustos; Pilar Bachiller; Pedro Núñez

Service robots are required to operate in indoor environments to help humans in their daily lives. To achieve the tasks that they might be assigned, the robots must be able to autonomously model an...


Cognitive Processing | 2018

Integrating planning perception and action for informed object search

Luis J. Manso; Marco Antonio Gutiérrez; Pablo Bustos; Pilar Bachiller

This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because doing so is a step towards fulfilling the robot's mission. Upon failure to guess the right container, the robot continues making guesses until the object is found. Guesses are based on the semantic distance between the object to find and the description of the types of objects found in each container. The paper provides quantitative results comparing the efficiency of the proposed method against two baseline approaches.
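The deliberative guessing step can be sketched as a ranking problem. In the toy code below, the semantic distance is a stand-in based on hand-written category sets, and the container contents are given directly; the paper's actual semantic measure and learned content descriptors are not reproduced here.

```python
# Hedged sketch: rank containers by semantic distance between the target
# object and each container's observed contents. Distance function and
# category sets are illustrative assumptions.

CATEGORIES = {
    "mug": {"kitchenware"}, "plate": {"kitchenware"},
    "book": {"stationery"}, "pen": {"stationery"},
}


def semantic_distance(a: str, b: str) -> float:
    shared = CATEGORIES.get(a, set()) & CATEGORIES.get(b, set())
    return 0.0 if shared else 1.0  # 0 = same category, 1 = unrelated


def rank_containers(target: str, containers: dict[str, list[str]]) -> list[str]:
    """Order containers by average semantic distance to the target object."""
    def score(name: str) -> float:
        contents = containers[name]
        if not contents:
            return float("inf")  # never guess an empty container first
        return sum(semantic_distance(target, o) for o in contents) / len(contents)
    return sorted(containers, key=score)


if __name__ == "__main__":
    seen = {"kitchen table": ["mug", "plate"], "office shelf": ["book", "pen"]}
    # The robot inspects containers in this order until the object is found.
    print(rank_containers("pen", seen))  # -> ['office shelf', 'kitchen table']
```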


Archive | 2011

Attentional Behaviors for Environment Modeling by a Mobile Robot

Pilar Bachiller; Pablo Bustos; Luis J. Manso

Building robots capable of interacting in an effective and autonomous way with their environments requires providing them with the ability to model the world. That is to say, the robot must interpret the environment not as a set of points, but as an organization of more complex structures with human-like meaning. Among the variety of sensory inputs that could be used to equip a robot, vision is one of the most informative. Through vision, the robot can analyze the appearance of objects. The use of stereo vision also makes it possible to extract spatial information about the environment, allowing the robot to determine the structure of the different elements composing it. However, vision suffers from some limitations when considered in isolation. On one hand, cameras have a limited field of view that can only be compensated for through camera movements. On the other hand, the world is formed by non-convex structures that can only be interpreted by actively exploring the environment. Hence, the robot must move its head and body to give meaning to the perceived elements composing its environment.

The combination of stereo vision and active exploration provides a means to model the world. While the robot explores the environment, perceived regions can be clustered, forming more complex structures like walls and objects on the floor. Nevertheless, even in simple scenarios with few rooms and obstacles, the robot must be endowed with different abilities to successfully solve the task. For instance, during exploration, the robot must be able to decide where to look while selecting where to go, avoiding obstacles and determining what it is looking at. From the point of view of perception, different visual behaviors take part in this process, such as those that direct gaze towards what the robot can recognize and model, or those dedicated to keeping the robot within safety limits. From the action perspective, the robot has to move in different ways depending on internal states (i.e., the status of the modeling process) and external situations (i.e., obstacles on the way to a target position). Perception and action should influence each other in such a way that deciding where to look depends on what the robot is doing, but also in a way that what is being perceived determines what the robot can or cannot do.

Our solution to all these questions relies heavily on visual attention. Specifically, the foundation of our proposal is that attention can organize the perceptual and action processes by acting as an intermediary between the two. The attentional connection allows, on one hand, driving the perceptual process according to the behavioral requirements and, on the other hand, modulating actions on the basis of the perceptual results of the attentional control. Thus, attention solves the where-to-look problem and, additionally, attention prevents...
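The two-way coupling argued for above can be made concrete with a small, entirely illustrative sketch: the active task decides where the robot looks, and what it perceives there decides which actions remain available. All names and rules below are invented for the example and are not the chapter's implementation.

```python
# Illustrative sketch of the perception-action coupling; everything assumed.

# Assumed gaze policy per task: where to look depends on what the robot is doing.
GAZE_POLICY = {
    "explore": "unvisited region",
    "go_to_target": "target position",
    "avoid_obstacle": "nearest obstacle",
}


def allowed_actions(percept: str) -> set[str]:
    # What is perceived determines what the robot can or cannot do.
    if percept == "obstacle close":
        return {"stop", "turn"}  # forward motion is ruled out
    return {"move_forward", "turn", "stop"}


def control_step(task: str, perceive) -> str:
    gaze_target = GAZE_POLICY[task]     # attention driven by the behaviour
    percept = perceive(gaze_target)     # perception from the attended region
    actions = allowed_actions(percept)  # behaviour constrained by perception
    return "move_forward" if "move_forward" in actions else "turn"


if __name__ == "__main__":
    # Stub perception functions standing in for the stereo vision system.
    print(control_step("explore", lambda where: "free space"))         # move_forward
    print(control_step("avoid_obstacle", lambda w: "obstacle close"))  # turn
```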


Computational Intelligence | 1999

Optimal Hidden Structure for Feedforward Neural Networks

Pilar Bachiller; Rosa María Pérez Utrero; Pablo Martínez Cobo; Pedro Luis Aguilar Mateos; P. Díaz

The selection of an adequate hidden structure for a feedforward neural network is a very important issue in its design. When the hidden structure of the network is too large and complex for the model being developed, the network may tend to memorize input and output sets rather than learn the relationships between them. In addition, training time increases significantly when the network is unnecessarily large. We propose two methods to optimize the size of feedforward neural networks using orthogonal transformations. Both approaches avoid the retraining process of the reduced-size network, which is necessary in any pruning technique.
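The abstract states the goal, shrinking the hidden layer without a retraining pass, but not the procedure. The NumPy sketch below shows one standard way orthogonal transformations can achieve this: QR factorization with column pivoting ranks hidden units by their independent contribution, and least squares recovers the output weights of the pruned network. The toy network, rank threshold, and use of SciPy are assumptions for illustration, not the paper's exact method.

```python
# Hedged sketch: prune redundant hidden units via an orthogonal
# factorization of the hidden-output matrix, then recover output weights
# by least squares instead of retraining.

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)

# Toy "trained" network: 2 inputs, 8 hidden units (deliberately redundant).
X = rng.normal(size=(200, 2))
W1 = rng.normal(size=(2, 4))
W1 = np.hstack([W1, W1])                # duplicated columns -> redundant units
w2 = rng.normal(size=(8, 1))

H = np.tanh(X @ W1)                     # hidden outputs over the training set
y = H @ w2                              # network outputs to be preserved

# QR with column pivoting: the pivot order ranks hidden units by how much
# independent (orthogonal) contribution each adds to H.
_, R, pivots = qr(H, pivoting=True)
k = int(np.sum(np.abs(np.diag(R)) > 1e-8 * abs(R[0, 0])))  # effective rank
keep = sorted(pivots[:k])               # indices of the units worth keeping

# New output weights by least squares on the kept units: no retraining pass.
w2_small, *_ = np.linalg.lstsq(H[:, keep], y, rcond=None)

err = np.max(np.abs(H[:, keep] @ w2_small - y))
print(f"kept {len(keep)} of 8 hidden units, max output error {err:.2e}")
```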


The Third International Workshop on Multi-Agent Robotic Systems | 2016

An Experiment in Distributed Visual Attention

Pilar Bachiller; Pablo Bustos; José María Cañas; R. Royo


ECMR | 2011

An Incremental Hybrid Approach to Indoor Modeling.

Marco Antonio Gutiérrez; Pilar Bachiller; Luis J. Manso; Pablo Bustos; Pedro Núñez


Archive | 2014

PechaKuchaPolitec: trabajos de clase en 6:40 para todos (class assignments in 6:40 for everyone)

Pilar Bachiller; Alberto Gómez; Pedro Núñez Trujillo; Carmen Ortiz-Caraballo; Encarna Sosa Sánchez

Collaboration


Dive into Pilar Bachiller's collaborations.

Top Co-Authors

Pablo Bustos (University of Extremadura)
Luis J. Manso (University of Extremadura)
Pedro Núñez (University of Extremadura)
Alberto Gómez (University of Extremadura)