Enric Galceran
ETH Zurich
Publication
Featured research published by Enric Galceran.
Autonomous Robots | 2017
Enric Galceran; Alexander G. Cunningham; Ryan M. Eustice; Edwin Olson
This paper reports on an integrated inference and decision-making approach for autonomous driving that models vehicle behavior for both our vehicle and nearby vehicles as a discrete set of closed-loop policies. Each policy captures a distinct high-level behavior and intention, such as driving along a lane or turning at an intersection. We first employ Bayesian changepoint detection on the observed history of nearby cars to estimate the distribution over potential policies that each nearby car might be executing. We then sample policy assignments from these distributions to obtain high-likelihood actions for each participating vehicle, and perform closed-loop forward simulation to predict the outcome for each sampled policy assignment. After evaluating these predicted outcomes, we execute the policy with the maximum expected reward value. We validate behavioral prediction and decision-making using simulated and real-world experiments.
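As a rough illustration of the selection loop described in the abstract, the sketch below samples policy assignments for nearby cars, forward-simulates each candidate ego policy, and picks the one with the highest expected reward. The policy set, the changepoint stand-in, and the reward terms are invented placeholders, not the authors' implementation.

```python
# Minimal sketch of a multi-policy selection loop. All policies,
# rewards, and the changepoint stand-in are illustrative assumptions.
import random
from dataclasses import dataclass

POLICIES = ["follow_lane", "turn_left", "turn_right", "stop"]

@dataclass
class Car:
    x: float
    v: float

def changepoint_policy_distribution(history):
    """Stand-in for Bayesian changepoint detection: returns a
    distribution over the policies a nearby car might be executing."""
    moving = history[-1].v > 0.5
    return {"follow_lane": 0.7 if moving else 0.2,
            "turn_left": 0.1, "turn_right": 0.1,
            "stop": 0.1 if moving else 0.6}

def forward_simulate(ego_policy, nearby_policies, horizon=20):
    """Closed-loop rollout placeholder: returns a scalar reward that
    rewards progress and crudely penalises interactions."""
    reward = 0.0
    for _ in range(horizon):
        reward += 1.0 if ego_policy == "follow_lane" else 0.2
        if ego_policy != "stop" and "stop" in nearby_policies.values():
            reward -= 0.5
    return reward

def select_policy(nearby_histories, num_samples=32):
    """Sample policy assignments for nearby cars, forward-simulate each
    ego policy against them, and pick the highest expected reward."""
    expected = {p: 0.0 for p in POLICIES}
    for _ in range(num_samples):
        assignment = {}
        for car_id, hist in nearby_histories.items():
            dist = changepoint_policy_distribution(hist)
            assignment[car_id] = random.choices(
                list(dist), weights=list(dist.values()))[0]
        for p in POLICIES:
            expected[p] += forward_simulate(p, assignment) / num_samples
    return max(expected, key=expected.get)

if __name__ == "__main__":
    histories = {"car_1": [Car(0.0, 1.0), Car(1.0, 1.2)],
                 "car_2": [Car(5.0, 0.0), Car(5.0, 0.0)]}
    print(select_policy(histories))
```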
intelligent robots and systems | 2016
Mark Pfeiffer; Ulrich Schwesinger; Hannes Sommer; Enric Galceran; Roland Siegwart
This paper reports on a data-driven motion planning approach for interaction-aware, socially compliant robot navigation among human agents. Autonomous mobile robots navigating in workspaces shared with humans require motion planning techniques that provide seamless integration and smooth navigation in such environments. Smooth integration in mixed scenarios calls for two abilities of the robot: predicting the actions of others and acting predictably itself. The former requires trainable models of agent behavior that accurately forecast future actions while taking into account how agents react to the robot's decisions. A human-like navigation style in turn allows other agents, who are most likely unaware of the underlying planning technique, to predict the robot's motion, resulting in smoother joint navigation. The approach presented in this paper is based on a feature-based maximum entropy model and is able to guide a robot in an unstructured, real-world environment. The model is trained to predict the joint behavior of heterogeneous groups of agents from onboard data of a mobile platform. We evaluate the benefit of interaction-aware motion planning in a realistic public setting with a total distance traveled of over 4 km. Interestingly, the motion models learned from human-human interaction did not transfer to robot-human interaction, due to the high attention and interest of pedestrians in testing the robot's basic braking functionality.
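The abstract names a feature-based maximum entropy model; the sketch below illustrates that modelling idea on a toy trajectory-selection problem, scoring candidates with a probability proportional to exp(-w · f(traj)). The features and weights are assumptions made up for the example, not the learned model from the paper.

```python
# Illustrative feature-based maximum-entropy trajectory cost.
# Features and weights are invented for the example.
import numpy as np

def features(traj, pedestrians):
    """Hand-picked features of a candidate trajectory (N x 2 positions):
    path length, negated mean clearance to pedestrians, heading change."""
    steps = np.diff(traj, axis=0)
    length = np.linalg.norm(steps, axis=1).sum()
    dists = np.linalg.norm(traj[:, None, :] - pedestrians[None, :, :], axis=2)
    clearance = dists.min(axis=1).mean()
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    smoothness = np.abs(np.diff(headings)).sum()
    return np.array([length, -clearance, smoothness])

def maxent_distribution(trajs, pedestrians, weights):
    """P(traj) proportional to exp(-w . f(traj)): maximum-entropy
    distribution over the candidate trajectories."""
    costs = np.array([weights @ features(t, pedestrians) for t in trajs])
    p = np.exp(-(costs - costs.min()))
    return p / p.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pedestrians = rng.uniform(0, 5, size=(3, 2))
    # Three straight-line candidates toward the same goal with lateral offsets.
    goal = np.array([5.0, 0.0])
    candidates = [np.linspace([0, off], goal + [0, off], 20)
                  for off in (-1.0, 0.0, 1.0)]
    weights = np.array([0.3, 1.0, 0.5])   # assumed; would be learned via IRL
    probs = maxent_distribution(candidates, pedestrians, weights)
    print("best candidate:", int(np.argmax(probs)))
```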
international symposium on experimental robotics | 2016
Timo Hinzmann; Thomas Stastny; Gianpaolo Conte; Patrick Doherty; Piotr Rudol; Mariusz Wzorek; Enric Galceran; Roland Siegwart; Igor Gilitschenski
This paper demonstrates how a heterogeneous fleet of unmanned aerial vehicles (UAVs) can support human operators in search and rescue (SaR) scenarios. We describe a fully autonomous delegation framework that interprets the top-level commands of the rescue team and converts them into actions for the UAVs. In particular, the UAVs are requested to autonomously scan a search area and to provide the operator with a consistent georeferenced 3D reconstruction of the environment to increase environmental awareness and support critical decision-making. The mission is executed based on the individual platform and sensor capabilities of rotary- and fixed-wing UAVs (RW-UAVs and FW-UAVs, respectively): with the aid of an optical camera, the FW-UAV can generate a sparse point-cloud of a large area in a short amount of time, while a LiDAR mounted on the autonomous helicopter is used to refine the visual point-cloud by generating denser point-clouds of specific areas of interest. In this context, we evaluate the performance of point-cloud registration methods for aligning two maps obtained by different sensors. In our validation, we compare classical point-cloud alignment methods to a novel probabilistic data association approach that explicitly takes the individual point-cloud densities into consideration.
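For context on the registration comparison, here is a minimal point-to-point ICP sketch of the kind of classical alignment baseline the paper evaluates; the probabilistic data-association method itself is not reproduced. The cloud sizes, synthetic transform, and brute-force nearest-neighbour matching are illustrative choices.

```python
# Minimal point-to-point ICP: illustrative classical baseline only.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution for matched point pairs)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix reflection case
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Iteratively match each source point to its nearest target point
    and re-estimate the rigid transform."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lidar_cloud = rng.uniform(-1, 1, size=(200, 3))   # stand-in dense map
    angle = np.deg2rad(5.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    visual_cloud = lidar_cloud @ R_true.T + np.array([0.1, -0.05, 0.02])
    R_est, t_est = icp(visual_cloud, lidar_cloud)
    print("translation estimate:", np.round(t_est, 3))
```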
Autonomous Robots | 2016
Stephen M. Chaves; Ayoung Kim; Enric Galceran; Ryan M. Eustice
This paper reports on an active SLAM framework for performing large-scale inspections with an underwater robot. We propose a path planning algorithm integrated with visual SLAM that plans loop-closure paths in order to decrease navigation uncertainty. While loop-closing revisit actions bound the robot's uncertainty, they also lead to redundant area coverage and increased path length. Our proposed opportunistic framework leverages sampling-based techniques and information filtering to plan revisit paths that are coverage-efficient. We employ Gaussian process regression to model the prediction of camera registrations and use a two-step optimization procedure to select revisit actions. Through hybrid simulation experiments and real-world field trials with an underwater inspection robot, we show that the proposed method offers many benefits over existing solutions and performs well at bounding navigation uncertainty in long-term autonomous operations.
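As a hedged illustration of the prediction step, the sketch below runs a small hand-rolled Gaussian process regression that maps a single invented input (distance between a candidate revisit pose and a mapped pose) to a camera-registration success score. The training data, kernel, and input choice are assumptions for the example; the paper's actual features may differ.

```python
# Hand-rolled GP regression sketch; data and input feature are invented.
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel on scalar inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    v = np.linalg.solve(K, Ks.T)
    cov = Kss - Ks @ v
    return mean, np.maximum(np.diag(cov), 0.0)

if __name__ == "__main__":
    # Distance between a revisit pose and a mapped pose (m), and whether
    # a camera registration succeeded there (1) or not (0).
    dist = np.array([0.2, 0.5, 1.0, 1.5, 2.5, 3.0])
    success = np.array([1.0, 1.0, 0.8, 0.4, 0.1, 0.0])
    query = np.linspace(0.0, 3.5, 8)
    mean, var = gp_predict(dist, success, query)
    for d, m, s in zip(query, mean, np.sqrt(var)):
        print(f"d={d:.2f} m  p(reg)~{m:.2f} +/- {s:.2f}")
```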
Archive | 2019
Alexander G. Cunningham; Enric Galceran; Dhanvin Mehta; Gonzalo Ferrer; Ryan M. Eustice; Edwin Olson
This chapter presents multi-policy decision-making (MPDM): a novel approach to navigating in dynamic multi-agent environments. Rather than explicitly planning the robot's trajectory, the planning process selects one of a set of closed-loop behaviors whose utility can be predicted through forward simulation that captures the complex interactions between the actions of the agents involved. These policies capture different high-level behaviors and intentions, such as driving along a lane, turning at an intersection, or following pedestrians. We present two scenarios where MPDM has been applied successfully: an autonomous driving setting, in which vehicle behavior is modeled for both our vehicle and nearby vehicles, and a social navigation setting, in which multiple agents or pedestrians form a dynamic environment for an autonomous robot. We present extensive validation of MPDM in both scenarios, using simulated and real-world experiments.
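The sketch below gives a toy version of the forward-simulation utility evaluation in the social-navigation setting: each candidate closed-loop robot behavior is rolled out against simple constant-velocity pedestrian models and scored. The policy names, pedestrian dynamics, and scoring terms are illustrative assumptions, not the MPDM implementation.

```python
# Toy forward-simulation scoring of closed-loop robot behaviors.
# Policies, dynamics, and utility terms are illustrative assumptions.
import numpy as np

DT, HORIZON = 0.2, 25

def robot_velocity(policy, pos, goal):
    """Closed-loop behaviors expressed as simple velocity laws."""
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal) + 1e-9
    if policy == "go_to_goal":
        return 1.0 * to_goal / dist
    if policy == "follow_slow":
        return 0.4 * to_goal / dist
    return np.zeros(2)                     # "stop"

def rollout(policy, robot, goal, peds, ped_vels):
    """Forward-simulate robot plus constant-velocity pedestrians and
    return a utility: progress toward the goal minus proximity penalties."""
    robot, peds = robot.copy(), peds.copy()
    utility = 0.0
    for _ in range(HORIZON):
        robot = robot + DT * robot_velocity(policy, robot, goal)
        peds = peds + DT * ped_vels
        closest = np.linalg.norm(peds - robot, axis=1).min()
        utility -= max(0.0, 1.0 - closest)   # discomfort near people
    utility -= np.linalg.norm(goal - robot)  # remaining distance to goal
    return utility

if __name__ == "__main__":
    robot = np.array([0.0, 0.0])
    goal = np.array([6.0, 0.0])
    peds = np.array([[3.0, 0.5], [4.0, -2.0]])
    ped_vels = np.array([[0.0, -0.3], [0.0, 0.0]])
    scores = {p: rollout(p, robot, goal, peds, ped_vels)
              for p in ("go_to_goal", "follow_slow", "stop")}
    print(max(scores, key=scores.get), scores)
```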
Workshop on Sensing and Control for Autonomous Vehicles: Applications to Land, Water and Air Vehicles | 2017
Stephen M. Chaves; Enric Galceran; Paul Ozog; Jeffrey M. Walls; Ryan M. Eustice
This chapter reviews the concept of pose-graph simultaneous localization and mapping (SLAM) for underwater navigation. We show that pose-graph SLAM is a generalized framework that can be applied to many diverse underwater navigation problems in marine robotics. We highlight three specific examples as applied in the areas of autonomous ship hull inspection and multi-vehicle cooperative navigation.
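To make the pose-graph idea concrete, the sketch below solves a tiny translation-only 2D pose graph (odometry edges plus one loop closure) as a linear least-squares problem. Real underwater systems estimate full SE(2)/SE(3) poses with iterative solvers; this is a simplified illustration, not the reviewed systems' formulation.

```python
# Tiny translation-only pose graph solved by linear least squares.
import numpy as np

def solve_pose_graph(n_poses, edges, prior=(0, np.zeros(2))):
    """edges: list of (i, j, measured_xy) meaning x_j - x_i ~= measured_xy.
    Returns the least-squares estimate of all 2D poses."""
    rows = 2 * (len(edges) + 1)
    A = np.zeros((rows, 2 * n_poses))
    b = np.zeros(rows)
    # Anchor the prior pose to remove the gauge freedom.
    i0, p0 = prior
    A[0:2, 2 * i0:2 * i0 + 2] = np.eye(2)
    b[0:2] = p0
    for k, (i, j, meas) in enumerate(edges, start=1):
        A[2 * k:2 * k + 2, 2 * j:2 * j + 2] = np.eye(2)
        A[2 * k:2 * k + 2, 2 * i:2 * i + 2] = -np.eye(2)
        b[2 * k:2 * k + 2] = meas
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_poses, 2)

if __name__ == "__main__":
    # Square trajectory from slightly drifting odometry, plus one
    # loop-closure edge from pose 3 back to pose 0.
    odometry = [(0, 1, [1.0, 0.0]), (1, 2, [0.0, 1.0]),
                (2, 3, [-1.05, 0.0])]
    loop_closure = [(3, 0, [0.0, -1.0])]
    poses = solve_pose_graph(4, odometry + loop_closure)
    print(np.round(poses, 3))
```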
intelligent robots and systems | 2016
Helen Oleynikova; Michael Burri; Zachary Taylor; Juan I. Nieto; Roland Siegwart; Enric Galceran
Journal of Field Robotics | 2017
Gregory Hitz; Enric Galceran; Marie-Ève Garneau; François Pomerleau; Roland Siegwart
international symposium on experimental robotics | 2016
Marija Popovic; Gregory Hitz; Juan I. Nieto; Roland Siegwart; Enric Galceran
Archive | 2016
Edwin Olson; Enric Galceran; Alexander G. Cunningham; Ryan M. Eustice; James R. McBride