Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Rowan McAllister is active.

Publication


Featured research published by Rowan McAllister.


Journal of Field Robotics | 2014

Learned Stochastic Mobility Prediction for Planning with Control Uncertainty on Unstructured Terrain

Thierry Peynot; Sin-Ting Lui; Rowan McAllister; Robert Fitch; Salah Sukkarieh

Motion planning for planetary rovers must consider control uncertainty in order to maintain the safety of the platform during navigation. Modelling such control uncertainty is difficult due to the complex interaction between the platform and its environment. In this paper, we propose a motion planning approach whereby the outcome of control actions is learned from experience and represented statistically using a Gaussian process regression model. This mobility prediction model is trained using sample executions of motion primitives on representative terrain, and predicts the future outcome of control actions on similar terrain. Using Gaussian process regression allows us to exploit its inherent measure of prediction uncertainty in planning. We integrate mobility prediction into a Markov decision process framework and use dynamic programming to construct a control policy for navigation to a goal region in a terrain map built using an on-board depth sensor. We consider both rigid terrain, consisting of uneven ground, small rocks, and non-traversable rocks, and also deformable terrain. We introduce two methods for training the mobility prediction model from either proprioceptive or exteroceptive observations, and report results from nearly 300 experimental trials using a planetary rover platform in a Mars-analogue environment. Our results validate the approach and demonstrate the value of planning under uncertainty for safe and reliable navigation.
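The core idea, predicting the outcome of a control action together with an uncertainty estimate, can be illustrated with a minimal Gaussian process regression sketch in plain NumPy. The training data below (lateral drift of a hypothetical "drive forward" primitive versus terrain roughness) is illustrative, not the paper's; the point is that prediction variance grows away from the training data, which is exactly the quantity a planner can penalise.

```python
import numpy as np

def rbf_kernel(A, B, ell=1.0, sf=1.0):
    # Squared-exponential kernel between two sets of inputs.
    d = A[:, None, :] - B[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=2) / ell**2)

def gp_predict(X, y, Xs, noise=0.01):
    # Standard GP posterior mean and variance at test inputs Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xs, X)
    Kss = rbf_kernel(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ y
    var = np.diag(Kss - Ks @ Kinv @ Ks.T)
    return mean, var

# Hypothetical sampled outcomes (lateral drift in metres) of one motion
# primitive, executed on terrain of increasing roughness.
X = np.array([[0.0], [0.3], [0.6], [0.9]])
y = np.array([0.00, 0.05, 0.18, 0.40])

# Query near the training data and far from it: the second prediction
# carries much higher variance, flagging it as unreliable to the planner.
mean, var = gp_predict(X, y, np.array([[0.3], [2.0]]))
```

In the paper's framework these predictive distributions feed a Markov decision process solved by dynamic programming; this sketch covers only the mobility-prediction step.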


Distributed Autonomous Robotic Systems | 2013

Hierarchical planning for self-reconfiguring robots using module kinematics

Robert Fitch; Rowan McAllister

Reconfiguration allows a self-reconfiguring modular robot to adapt to its environment. The reconfiguration planning problem is one of the key algorithmic challenges in realizing self-reconfiguration. Many existing successful approaches rely on grouping modules together to act as meta-modules. However, we are interested in reconfiguration planning that does not impose fixed meta-module relationships but instead forms cooperative relationships between modules dynamically. This approach avoids the need to hand-code meta-module motions and potentially allows reconfiguration with fewer modules. In this paper we present a general two-level reconfiguration framework. The top level plans in module-connector space using distributed dynamic programming. The lower level accepts as input a transition function for the kinematic model of the chosen module type. As an example, we implement such a transition function for the 3R, SuperBot-style module. Although not explored in this paper, this general approach extends naturally to consider power use, clock time, or other quantities of interest.
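The top-level planner can be sketched as dynamic programming over abstract configuration states, with the module-specific transition function supplied as input. This is a simplified, sequential sketch (the paper's version is distributed), and the lattice world standing in for module-connector space is hypothetical.

```python
def value_iteration(states, transitions, goal, step_cost=1.0):
    """Cost-to-go over abstract configuration states; `transitions` plays
    the role of the kinematic transition function the lower level supplies
    for a particular module type."""
    V = {s: 0.0 if s == goal else float('inf') for s in states}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s == goal:
                continue
            best = min((step_cost + V[t] for t in transitions(s)),
                       default=float('inf'))
            if best < V[s]:
                V[s] = best
                changed = True
    return V

# Toy stand-in for module-connector space: a single module on a 3x3
# lattice, each kinematic move shifting it to a 4-connected neighbour.
states = [(x, y) for x in range(3) for y in range(3)]

def transitions(s):
    x, y = s
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cand if c in states]

V = value_iteration(states, transitions, goal=(2, 2))
```

Replacing `step_cost` with power use or clock time gives the extension the abstract mentions without changing the planner.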


International Joint Conference on Artificial Intelligence | 2017

Concrete Problems for Autonomous Vehicle Safety: Advantages of Bayesian Deep Learning

Rowan McAllister; Yarin Gal; Alex Kendall; Mark van der Wilk; Amar Shah; Roberto Cipolla; Adrian Weller

Adrian Weller acknowledges support by the Alan Turing Institute under the EPSRC grant EP/N510129/1, and by the Leverhulme Trust via the CFI.


Science & Engineering Faculty | 2013

Resilient Navigation through Probabilistic Modality Reconfiguration

Thierry Peynot; Robert Fitch; Rowan McAllister; Alen Alempijevic

This paper proposes an approach to achieve resilient navigation for indoor mobile robots. Resilient navigation seeks to mitigate the impact of control, localisation, or map errors on the safety of the platform while preserving the robot's ability to achieve its goal. We show that resilience to unpredictable errors can be achieved by combining the benefits of independent and complementary algorithmic approaches to navigation, or modalities, each tuned to a particular type of environment or situation. In this paper, the modalities comprise a path planning method and a reactive motion strategy. While the robot navigates, a Hidden Markov Model continually estimates the most appropriate modality based on two types of information: context (information known a priori) and monitoring (evaluating unpredictable aspects of the current situation). The robot then uses the recommended modality, switching between one and another dynamically. Experimental validation with a SegwayRMP-based platform in an office environment shows that our approach enables failure mitigation while maintaining the safety of the platform. The robot is shown to reach its goal in the presence of: 1) unpredicted control errors, 2) unexpected map errors, and 3) a large injected localisation fault.
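The modality-estimation step can be sketched as forward filtering in a two-state HMM. The transition and emission probabilities below are illustrative only, not the paper's values, and the observation labels are hypothetical.

```python
import numpy as np

# Two modalities: 0 = path planner, 1 = reactive strategy.
# Transition matrix: modalities tend to persist (illustrative values).
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
# Emission matrix P(observation | modality). Observations:
# 0 = "clear corridor", 1 = "cluttered / localisation doubt".
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def hmm_filter(obs, prior=np.array([0.5, 0.5])):
    """Forward algorithm: running belief over the appropriate modality."""
    b = prior.copy()
    for o in obs:
        b = E[:, o] * (T.T @ b)   # predict, then weight by the evidence
        b /= b.sum()              # normalise to a probability
    return b

# After repeated "cluttered" monitoring observations, the belief shifts
# toward the reactive modality, triggering a dynamic switch.
belief = hmm_filter([0, 1, 1, 1])
```

In the paper the observation stream combines both context and monitoring information; this sketch folds them into a single discrete observation for brevity.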


Neural Information Processing Systems | 2017

Data-Efficient Reinforcement Learning in Continuous State-Action Gaussian-POMDPs

Rowan McAllister; Carl Edward Rasmussen

We present a data-efficient reinforcement learning method for continuous state-action systems under significant observation noise. Data-efficient solutions under small noise exist, such as PILCO, which learns the cartpole swing-up task in 30 s. PILCO evaluates policies by planning state trajectories using a dynamics model. However, PILCO applies policies to the observed state, therefore planning in observation space. We extend PILCO with filtering to instead plan in belief space, consistent with partially observable Markov decision process (POMDP) planning. This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm. We test our method on the cartpole swing-up task, which involves nonlinear dynamics and requires nonlinear control.
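Why acting on a filtered belief beats acting on raw noisy observations can be illustrated with a 1-D Kalman filter. This is only an illustration of the belief-space idea; PILCO itself uses Gaussian process dynamics and moment matching, and all numbers below are hypothetical.

```python
import numpy as np

def kalman_step(mu, var, obs, a=1.0, q=0.01, r=0.25):
    """One predict-update cycle of a 1-D Kalman filter; a policy would
    then act on the belief mean `mu` rather than on `obs` directly."""
    mu_p, var_p = a * mu, a**2 * var + q      # predict
    k = var_p / (var_p + r)                   # Kalman gain
    return mu_p + k * (obs - mu_p), (1 - k) * var_p

rng = np.random.default_rng(0)
true_state, mu, var = 1.0, 0.0, 1.0
errs_raw, errs_belief = [], []
for _ in range(50):
    obs = true_state + rng.normal(0, 0.5)     # significant observation noise
    mu, var = kalman_step(mu, var, obs)
    errs_raw.append(abs(obs - true_state))
    errs_belief.append(abs(mu - true_state))
# The filtered belief tracks the state more tightly than raw observations,
# so a controller consuming the belief sees a far less noisy input.
```

Planning in belief space means the policy is also *optimised* against this filtered input, rather than having the filter bolted on after training.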


Archive | 2017

Bayesian Learning for Data-Efficient Control

Rowan McAllister

Applications that learn control of unfamiliar dynamical systems with increasing autonomy are ubiquitous. From robotics, to finance, to industrial processing, autonomous learning helps obviate a heavy reliance on experts for system identification and controller design. Real-world systems are often nonlinear, stochastic, and expensive to operate (e.g. slow, energy intensive, prone to wear and tear). Ideally, therefore, nonlinear systems should be identified with minimal system interaction. This thesis considers data-efficient autonomous learning of control of nonlinear, stochastic systems. Data-efficient learning critically requires probabilistic modelling of dynamics. Traditional control approaches use deterministic models, which easily overfit data, especially small datasets. We use probabilistic Bayesian modelling to learn systems from scratch, similar to the PILCO algorithm, which achieved unprecedented data efficiency in learning control of several benchmarks. We extend PILCO in three principal ways. First, we learn control under significant observation noise by simulating a filtered control process using a tractably analytic framework of Gaussian distributions. In addition, we develop the 'latent variable belief Markov decision process' for when filters must predict under real-time constraints. Second, we improve PILCO's data efficiency by directing exploration with predictive loss uncertainty and Bayesian optimisation, including a novel approximation to the Gittins index. Third, we take a step towards data-efficient learning of high-dimensional control using Bayesian neural networks (BNNs). Experimentally, we show that although filtering mitigates the adverse effects of observation noise, much greater performance is achieved when controllers are optimised with evaluations faithful to reality: simulating closed-loop filtered control whenever closed-loop filtered control will be executed. Thus, controllers optimised with respect to how they are used outperform filters applied post hoc to systems optimised with unfiltered simulations. We show that directed exploration improves data efficiency. Lastly, we show that BNN dynamics models are almost as data efficient as Gaussian process models, and that BNNs scale to high-dimensional state inputs, making data-efficient learning of high-dimensional control possible.


arXiv: Learning | 2018

Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models

Kurtland Chua; Roberto Calandra; Rowan McAllister; Sergey Levine


IAS (2) | 2012

Resilient Navigation through Probabilistic Modality Reconfiguration

Thierry Peynot; Robert Fitch; Rowan McAllister; Alen Alempijevic


arXiv: Learning | 2018

Deep Imitative Models for Flexible Inference, Planning, and Control

Nicholas Rhinehart; Rowan McAllister; Sergey Levine


arXiv: Machine Learning | 2016

Data-Efficient Reinforcement Learning in Continuous-State POMDPs

Rowan McAllister; Carl Edward Rasmussen

Collaboration


Dive into Rowan McAllister's collaboration.

Top Co-Authors

Thierry Peynot

Queensland University of Technology

Sergey Levine

University of California

Kurtland Chua

University of California

Roberto Calandra

Technische Universität Darmstadt

Alex Kendall

University of Cambridge