Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alejandro Bordallo is active.

Publication


Featured research published by Alejandro Bordallo.


Intelligent Robots and Systems | 2015

Counterfactual reasoning about intent for interactive navigation in dynamic environments

Alejandro Bordallo; Fabio Previtali; Nantas Nardelli; Subramanian Ramamoorthy

Many modern robotics applications require robots to function autonomously in dynamic environments alongside other decision-making agents, such as people or other robots. This calls for fast and scalable interactive motion planning, with models that take the other agents' intended actions into account in one's own planning. We present a real-time motion planning framework that brings together several key components, including intention inference by reasoning counterfactually about the potential motion of the other agents as they work towards different goals. By using a lightweight motion model, we achieve efficient iterative planning for fluid motion when avoiding pedestrians, in parallel with goal inference for longer-range movement prediction. This inference framework is coupled with a novel distributed visual tracking method that provides reliable and robust models for the current belief state of the monitored environment. This combined approach represents a computationally efficient alternative to previously studied policy-learning methods, which often require significant offline training or calibration and do not yet scale to densely populated environments. We validate this framework with experiments involving multi-robot and human-robot navigation, and further validate the tracker component separately on much larger-scale unconstrained pedestrian data sets.
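The counterfactual goal-inference idea can be sketched as a Bayesian update: each candidate goal is scored by how well the agent's observed velocity matches the motion it would counterfactually exhibit if it were heading toward that goal. The toy sketch below is not the paper's implementation; the `beta` rationality parameter and the alignment-based likelihood are illustrative assumptions.

```python
import numpy as np

def goal_posterior(position, velocity, goals, prior, beta=2.0):
    """Score each candidate goal by how well the observed velocity matches
    the counterfactual motion the agent would exhibit heading to that goal."""
    v = velocity / (np.linalg.norm(velocity) + 1e-9)
    likelihoods = []
    for g in goals:
        d = g - position
        d = d / (np.linalg.norm(d) + 1e-9)
        # alignment in [-1, 1], turned into a softmax-style likelihood
        likelihoods.append(np.exp(beta * float(v @ d)))
    post = prior * np.array(likelihoods)
    return post / post.sum()

# Agent moving right; goal 0 lies to its right, goal 1 above it.
position = np.array([0.0, 0.0])
velocity = np.array([1.0, 0.0])
goals = [np.array([5.0, 0.0]), np.array([0.0, 5.0])]
posterior = goal_posterior(position, velocity, goals, prior=np.ones(2) / 2)
```

Under this update, the agent moving right is inferred to be much more likely to be heading to the goal on its right than to the goal above it.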


International Conference on Robotics and Automation | 2017

Physical symbol grounding and instance learning through demonstration and eye tracking

Svetlin Penkov; Alejandro Bordallo; Subramanian Ramamoorthy

It is natural for humans to work with abstract plans which are often an intuitive and concise way to represent a task. However, high level task descriptions contain symbols and concepts which need to be grounded within the environment if the plan is to be executed by an autonomous robot. The problem of learning the mapping between abstract plan symbols and their physical instances in the environment is known as the problem of physical symbol grounding. In this paper, we propose a framework for Grounding and Learning Instances through Demonstration and Eye tracking (GLIDE). We associate traces of task demonstration to a sequence of fixations which we call fixation programs and exploit their properties in order to perform physical symbol grounding. We formulate the problem as a probabilistic generative model and present an algorithm for computationally feasible inference over the proposed model. A key aspect of our work is that we estimate fixation locations within the environment which enables the appearance of symbol instances to be learnt. Instance learning is a crucial ability when the robot does not have any knowledge about the model or the appearance of the symbols referred to in the plan instructions. We have conducted human experiments and demonstrate that GLIDE successfully grounds plan symbols and learns the appearance of their instances, thus enabling robots to autonomously execute tasks in initially unknown environments.
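As a rough illustration of the grounding step, a much-simplified stand-in for GLIDE's generative model might assign each fixation to the nearest candidate instance and ground each symbol by majority vote. The function and data below are hypothetical, not the paper's probabilistic inference.

```python
import numpy as np

def ground_symbols(fixations_per_symbol, instances):
    """Ground each plan symbol to the candidate instance that its
    demonstration-phase fixations land nearest to most often."""
    grounding = {}
    for symbol, fixations in fixations_per_symbol.items():
        votes = np.zeros(len(instances))
        for f in fixations:
            nearest = int(np.argmin(np.linalg.norm(instances - f, axis=1)))
            votes[nearest] += 1
        grounding[symbol] = int(np.argmax(votes))
    return grounding

instances = np.array([[0.0, 0.0], [5.0, 5.0]])   # candidate object positions
fixations = {"cup":   [np.array([0.2, -0.1]), np.array([0.1, 0.3])],
             "plate": [np.array([4.8, 5.1])]}
grounding = ground_symbols(fixations, instances)
```

Here "cup" is grounded to the instance at the origin and "plate" to the distant one, since that is where each symbol's fixations cluster.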


Robotics: Science and Systems | 2016

Task Variant Allocation in Distributed Robotics

José Cano; David White; Alejandro Bordallo; Ciaran McCreesh; Patrick Prosser; Jeremy Singer; Vijay Nagarajan

This paper tackles the problem of allocating tasks to a distributed heterogeneous robotic system, where tasks---named *task variants* in the paper---can vary in terms of the trade-off between resource requirements and quality of service provided. Three different methods (constraint programming, greedy, and metaheuristic) are proposed to solve the problem and are evaluated both in simulation and in a real scenario, demonstrating the effectiveness of the constraint programming method.
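A minimal sketch of the greedy approach (illustrative only; the paper's heuristic and constraint model are richer) picks, for each task, the highest-quality variant that still fits on some processor. The data shapes here are assumptions.

```python
def greedy_allocate(tasks, capacity):
    """Greedily pick one variant per task: try variants in decreasing
    quality and place each on the first processor with enough remaining
    CPU budget. tasks: list of variant lists, each a (quality, cost) pair."""
    remaining = list(capacity)
    allocation = {}
    total_quality = 0.0
    for t, variants in enumerate(tasks):
        for quality, cost in sorted(variants, reverse=True):
            proc = next((i for i, cap in enumerate(remaining) if cap >= cost), None)
            if proc is not None:
                remaining[proc] -= cost
                allocation[t] = (quality, cost, proc)
                total_quality += quality
                break
    return allocation, total_quality

tasks = [[(0.9, 3), (0.5, 1)],   # task 0: high-quality variant needs 3 CPU units
         [(0.8, 2), (0.4, 1)]]   # task 1
allocation, total_quality = greedy_allocate(tasks, capacity=[3, 2])
```

On this toy instance the greedy choice happens to be optimal; in general a greedy pass can commit capacity early and miss better global assignments, which is why the exact methods matter.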


Intelligent Robots and Systems | 2016

Automatic configuration of ROS applications for near-optimal performance

José Cano; Alejandro Bordallo; Vijay Nagarajan; Subramanian Ramamoorthy; Sethu Vijayakumar

The performance of a ROS application is a function of the individual performance of its constituent nodes. Since ROS nodes are typically configurable (parameterised), the specific parameter values adopted will determine the level of performance generated. In addition, ROS applications may be distributed across multiple computation devices, thus providing different options for node allocation. We address two configuration problems that the typical ROS user is confronted with: i) Determining parameter values and node allocations for maximising performance; ii) Determining node allocations for minimising hardware resources that can guarantee the desired performance. We formalise these problems with a mathematical model, a constrained form of a multiple-choice multiple knapsack problem. We propose a greedy algorithm for optimising each problem, using linear regression for predicting the performance of an individual ROS node over a continuum set of parameter combinations. We evaluate the algorithms through simulation and we validate them in a real ROS scenario, showing that the expected performance levels only deviate from the real measurements by an average of 2.5%.
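The regression-plus-greedy idea can be illustrated in miniature: fit a per-node linear model from a few profiled (parameter, performance) samples, then sweep a grid of candidate parameter values and keep the predicted best. The single-parameter model and sample data below are hypothetical.

```python
import numpy as np

def fit_linear(params, perf):
    """Least-squares fit of performance ~ slope * param + intercept."""
    A = np.vstack([params, np.ones_like(params)]).T
    coef, *_ = np.linalg.lstsq(A, perf, rcond=None)
    return coef  # (slope, intercept)

def best_param(coef, candidates):
    """Predict performance over a continuum of candidate values
    and return the predicted-best parameter setting."""
    preds = coef[0] * candidates + coef[1]
    return float(candidates[int(np.argmax(preds))]), float(preds.max())

samples = np.array([1.0, 2.0, 4.0])    # parameter values profiled offline
perf    = np.array([0.2, 0.45, 0.9])   # measured node performance at each
coef = fit_linear(samples, perf)
grid = np.linspace(1.0, 4.0, 31)
p_star, predicted = best_param(coef, grid)
```

With a positive fitted slope, the predicted-best setting is the top of the swept range; a real configuration step would combine such per-node predictions with the knapsack-style allocation constraints.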


Autonomous Robots | 2018

Solving the task variant allocation problem in distributed robotics

José Cano; David White; Alejandro Bordallo; Ciaran McCreesh; Anna Lito Michala; Jeremy Singer; Vijay Nagarajan

We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system’s quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16, 31 and 56% respectively.
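For contrast with the greedy heuristic, exhaustive search over variant and processor choices can play the role of an exact solver on toy instances. This brute-force enumeration is only a stand-in for constraint programming and is feasible solely at this scale; the data are hypothetical.

```python
import itertools

def exact_allocate(tasks, capacity):
    """Enumerate every (variant, processor) assignment and keep the
    feasible one with the highest total quality."""
    best_assignment, best_quality = None, -1.0
    n_proc = len(capacity)
    for vs in itertools.product(*(range(len(v)) for v in tasks)):
        for ps in itertools.product(range(n_proc), repeat=len(tasks)):
            load = [0.0] * n_proc
            quality = 0.0
            for variants, vi, pi in zip(tasks, vs, ps):
                q, c = variants[vi]
                load[pi] += c
                quality += q
            if all(l <= cap for l, cap in zip(load, capacity)) and quality > best_quality:
                best_assignment, best_quality = (vs, ps), quality
    return best_assignment, best_quality

tasks = [[(0.9, 3), (0.5, 1)],   # (quality, cost) variants for task 0
         [(0.8, 2), (0.4, 1)]]   # task 1
assignment, best_quality = exact_allocate(tasks, capacity=[3, 2])
```

Real instances add multiple resource dimensions and multiple objectives, which is why a constraint-programming formulation replaces this enumeration in practice.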


International Conference on Machine Learning and Applications | 2016

Predicting Future Agent Motions for Dynamic Environments

Fabio Previtali; Alejandro Bordallo; Luca Iocchi; Subramanian Ramamoorthy

Understanding the activities of people in a monitored environment is a topic of active research, motivated by applications requiring context-awareness. Inferring future agent motion is useful not only for improving tracking accuracy, but also for planning in an interactive motion task. Despite rapid advances in the area of activity forecasting, many state-of-the-art methods are still cumbersome for use in realistic robots. This is due to the requirement of good semantic scene and map labelling, as well as assumptions made regarding possible goals and types of motion. Many emerging applications require robots with modest sensory and computational ability to robustly perform such activity forecasting in high-density and dynamic environments. We address this by combining a novel multi-camera tracking method, efficient multi-resolution representations of state and a standard Inverse Reinforcement Learning (IRL) technique, to demonstrate performance that is better than the state-of-the-art in the literature. In this framework, the IRL method uses agent trajectories from a distributed tracker and estimates a reward function within a Markov Decision Process (MDP) model. This reward function can then be used to estimate the agents' motion in future novel task instances. We present empirical experiments using data gathered in our own lab and external corpora (VIRAT), based on which we find that our algorithm is not only efficiently implementable on a resource-constrained platform but is also competitive in terms of accuracy with state-of-the-art alternatives (e.g., up to 20% better than the results reported in [1]).
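The IRL step can be caricatured on a toy tabular MDP: derive a reward proxy from demonstration state-visitation counts, then run value iteration to predict where agents are likely to head next. This is a deliberately crude stand-in (proper IRL, e.g. maximum-entropy IRL, fits the reward rather than reading it off visitation counts), with hypothetical demonstration data.

```python
import numpy as np

def visitation_reward(trajectories, n_states):
    """Crude reward proxy: normalised state-visitation counts of demos."""
    counts = np.zeros(n_states)
    for traj in trajectories:
        for s in traj:
            counts[s] += 1
    return counts / counts.max()

def value_iteration(reward, transitions, gamma=0.9, iters=100):
    """transitions[s] -> successor states reachable by deterministic actions."""
    V = np.zeros(len(reward))
    for _ in range(iters):
        V = np.array([reward[s] + gamma * max(V[s2] for s2 in transitions[s])
                      for s in range(len(reward))])
    return V

# 4-state chain 0-1-2-3; the demonstrations concentrate on state 3.
demos = [[0, 1, 2, 3, 3], [1, 2, 3, 3, 3]]
R = visitation_reward(demos, 4)
T = {0: [0, 1], 1: [0, 2], 2: [1, 3], 3: [2, 3]}
V = value_iteration(R, T)
```

The resulting values rise toward the heavily visited state, so a planner querying them would predict motion toward where the demonstrated agents tended to go.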


Robotics: Science and Systems | 2016

Inverse eye tracking for intention inference and symbol grounding in human-robot collaboration

Svetlin Penkov; Alejandro Bordallo; Subramanian Ramamoorthy


arXiv: Robotics | 2018

Efficient Computation of Collision Probabilities for Safe Motion Planning.

Andrew Blake; Alejandro Bordallo; Majd Hawasly; Svetlin Penkov; Subramanian Ramamoorthy; Alexandre Silva


Archive | 2016

Robotics: Science and Systems Workshop on Planning for Human-Robot Interaction, 2016.

Svetlin Penkov; Alejandro Bordallo; Ram Ramamoorthy


International Conference on Robotics and Automation | 2015

IRL-based prediction of goals for dynamic environments

Fabio Previtali; Alejandro Bordallo; Subramanian Ramamoorthy

Collaboration


Dive into Alejandro Bordallo's collaborations.

Top Co-Authors

Fabio Previtali
Sapienza University of Rome

José Cano
Polytechnic University of Valencia

David White
University College London

Majd Hawasly
University of Edinburgh