
Publications


Featured research published by Christopher Burbridge.


Robotics and Autonomous Systems | 2007

Instantaneous robot self-localization and motion estimation with omnidirectional vision

Libor Spacek; Christopher Burbridge

This paper presents two related methods for autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance and thus need not carry special markings or be placed at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (in 1/20th of a second) and thus supporting dynamic models of robot motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new non-iterative optic flow method has been developed for this purpose. Adding ω to all observed (robot-relative) bearings θ gives true bearings towards objects (relative to a fixed coordinate frame). The rotation ω and the r, θ coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, that reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from an autonomously moving Scitos G5 robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
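As a rough illustration of the translation-estimation step described in the abstract, the sketch below (a simplification under my own assumptions, not the authors' code) turns the range and bearing of one matched fixed point at two frames, together with the compass rotation ω, into a per-point translation estimate; the paper combines many such estimates with the robust clustering algorithm Clumat, for which a median is used here as a stand-in.

```python
import numpy as np

def translation_from_point(r1, theta1, r2, theta2, omega):
    """Estimate the robot translation from one fixed world point seen at two frames.

    r1, theta1 : range and robot-relative bearing of the point at frame 1
    r2, theta2 : range and robot-relative bearing of the point at frame 2
    omega      : robot rotation between the frames (from the visual compass)

    Assumption (not the paper's notation): the robot heading at frame 1 defines the
    fixed coordinate frame, so frame-2 bearings are corrected by adding omega before
    converting to Cartesian coordinates.
    """
    p1 = r1 * np.array([np.cos(theta1), np.sin(theta1)])
    p2 = r2 * np.array([np.cos(theta2 + omega), np.sin(theta2 + omega)])
    # The world point is static, so pose1 + p1 == pose2 + p2, hence the
    # robot translation pose2 - pose1 equals p1 - p2.
    return p1 - p2

def combine(estimates):
    """Stand-in for the Clumat clustering step: robustly combine per-point estimates."""
    return np.median(np.asarray(estimates), axis=0)
```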


IEEE Robotics & Automation Magazine | 2017

The STRANDS Project: Long-Term Autonomy in Everyday Environments

Nick Hawes; Christopher Burbridge; Ferdian Jovan; Lars Kunze; Bruno Lacerda; Lenka Mudrová; Jay Young; Jeremy L. Wyatt; Denise Hebesberger; Tobias Körtner; Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt; Lucas Beyer; Alexander Hermans; Bastian Leibe; Aitor Aldoma; Thomas Faulhammer; Michael Zillich; Markus Vincze; Eris Chinellato; Muhannad Al-Omari; Paul Duckworth; Yiannis Gatsoulis; David C. Hogg; Anthony G. Cohn; Christian Dondrup; Jaime Pulido Fentanes; Tomas Krajnik

Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.


International Conference on Robotics and Automation | 2017

Autonomous Learning of Object Models on a Mobile Robot

Thomas Faulhammer; Rares Ambrus; Christopher Burbridge; Michael Zillich; John Folkesson; Nick Hawes; Patric Jensfelt; Markus Vincze

In this article, we present and evaluate a system that allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. While other systems have demonstrated one of these elements, to our knowledge we present the first system capable of doing all of them, without human interaction, in normal indoor scenes. Our system detects objects to learn by modeling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
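A minimal sketch of the static/dynamic separation described above, under my own simplifying assumptions (point-cloud differencing with a nearest-neighbour distance test; this is not the system's actual segmentation): points of the current scan that lie far from every point of the static environment model are treated as candidate dynamic elements for view planning and modeling.

```python
import numpy as np
from scipy.spatial import cKDTree

def dynamic_points(scan, static_map, threshold=0.05):
    """Return the points of `scan` not explained by the static environment model.

    scan, static_map : (N, 3) arrays of 3D points in a common frame
    threshold        : distance in metres beyond which a point counts as dynamic
                       (an illustrative value, not taken from the paper)
    """
    dist, _ = cKDTree(static_map).query(scan, k=1)  # nearest static point per scan point
    return scan[dist > threshold]                   # candidate dynamic elements
```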


Intelligent Robots and Systems | 2014

Combining Top-down Spatial Reasoning and Bottom-up Object Class Recognition for Scene Understanding

Lars Kunze; Christopher Burbridge; Marina Alberti; Akshaya Thippur; John Folkesson; Patric Jensfelt; Nick Hawes

Many robot perception systems are built to consider only intrinsic object features to recognise the class of an object. By integrating both top-down spatial relational reasoning and bottom-up object class recognition ...

Long-term autonomous learning of human environments entails modelling and generalizing over distinct variations in: object instances in different scenes, and different scenes with respect to space and time. It is crucial for the robot to recognize the structure and context in spatial arrangements and exploit these to learn models which capture the essence of these distinct variations. Table-tops possess a typical structure repeatedly seen in human environments and are identified by characteristics of being personal spaces of diverse functionalities that change dynamically due to human interactions. In this paper, we present a 3D dataset of 20 office table-tops manually observed and scanned 3 times a day as regularly as possible over 19 days (461 scenes) and subsequently manually annotated with 18 different object classes, including multiple instances. We analyse the dataset to discover spatial structures and patterns in their variations. The dataset can, for example, be used to study the spatial relations between objects and long-term environment models for applications such as activity recognition, context and functionality estimation and anomaly detection.


Robotics and Autonomous Systems | 2014

Manipulation planning using learned symbolic state abstractions

Richard Dearden; Christopher Burbridge

We present an approach for planning robotic manipulation tasks that uses a learned mapping between geometric states and logical predicates. Manipulation planning involves both task-level and geometric reasoning, and therefore needs such a mapping to convert between the two. Consider a robot tasked with putting several cups on a tray: it needs to find positions for all the objects, and may need to nest one cup inside another to get them all on the tray. This requires translating back and forth between symbolic states that the planner uses, such as stacked(cup1, cup2), and geometric states representing the positions and poses of the objects. We learn the mapping from labelled examples, and importantly learn a representation that can be used in both the forward (from geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, but also to translate a desired symbolic state from a plan into a geometric state that the robot can achieve through manipulation. We also show how such a mapping can be used for efficient manipulation planning: the planner first plans symbolically, then applies the mapping to generate geometric positions that are then sent to a path planner.
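To make the forward and reverse use of such a mapping concrete, here is a toy hand-written stand-in (the predicate model, thresholds, and function names are illustrative assumptions; the paper learns this mapping from labelled examples): stacked(a, b) is modelled as a constraint on the relative pose of the two objects, which can be checked in the forward direction and sampled from in the reverse direction.

```python
import numpy as np

# Toy model of stacked(a, b): object a sits above object b with a small
# horizontal offset and a positive height gap. Thresholds are assumptions.
XY_TOL, Z_MIN, Z_MAX = 0.03, 0.02, 0.10  # metres

def stacked(pose_a, pose_b):
    """Forward direction: geometric state -> truth value of the predicate."""
    dx, dy, dz = np.asarray(pose_a) - np.asarray(pose_b)
    return abs(dx) < XY_TOL and abs(dy) < XY_TOL and Z_MIN < dz < Z_MAX

def sample_stacked(pose_b, rng=np.random.default_rng()):
    """Reverse direction: sample a pose for object a that makes stacked(a, b) true."""
    dx, dy = rng.uniform(-0.9 * XY_TOL, 0.9 * XY_TOL, size=2)
    dz = rng.uniform(Z_MIN, Z_MAX)
    return np.asarray(pose_b) + np.array([dx, dy, dz])
```

A learned version plays the same two roles: classifying observed scenes into symbolic states, and proposing object poses that satisfy the symbolic goals produced by the task planner.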


Robotics and Biomimetics | 2011

Online unsupervised cumulative learning for life-long robot operation

Yiannis Gatsoulis; Christopher Burbridge; T. M. McGinnity

The effective life-long operation of service robots and assistive companions depends on the robust ability of the system to learn cumulatively and in an unsupervised manner. A cumulative learning robot should have particular characteristics, such as being able to detect new perceptions, to learn online and without supervision, and to expand its knowledge when required. Bag-of-Words is a generic and compact representation of visual perceptions that has commonly and successfully been used in object recognition problems. However, in its original form it is unable to operate online and expand its vocabulary when required. This paper describes a novel method for cumulative unsupervised learning of objects by visual inspection, using an online Bag-of-Words representation that expands when required. We present a set of experiments with a real-world robot, which cumulatively learns a series of objects. The results show that the system is able to learn cumulatively and correctly recall the objects it was trained on.
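A minimal sketch of an expand-when-required vocabulary in the spirit described above (my own simplification, not the paper's method; the distance threshold and running-mean update are assumptions): a descriptor that is close to an existing visual word updates that word online, while one that is far from every word creates a new word.

```python
import numpy as np

class ExpandingBoW:
    """Toy online Bag-of-Words vocabulary that expands when required."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # novelty distance (illustrative value)
        self.words = []             # visual-word centres
        self.counts = []            # descriptors absorbed per word

    def add(self, descriptor):
        """Assign a descriptor to a word, creating a new word if it is novel; return the word index."""
        descriptor = np.asarray(descriptor, dtype=float)
        if self.words:
            dists = [np.linalg.norm(descriptor - w) for w in self.words]
            i = int(np.argmin(dists))
            if dists[i] < self.threshold:
                # Familiar perception: update the word centre with a running mean.
                self.counts[i] += 1
                self.words[i] += (descriptor - self.words[i]) / self.counts[i]
                return i
        # Novel perception: expand the vocabulary.
        self.words.append(descriptor.copy())
        self.counts.append(1)
        return len(self.words) - 1
```

Histograms of word indices over the current vocabulary can then serve as the object representation for recognition and recall.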


Intelligent Robots and Systems | 2012

Learning operators for manipulation planning

Christopher Burbridge; Zeyn A. Saigol; Florian Schmidt; Christoph Borst; Richard Dearden

We describe a method for learning planning operators for manipulation tasks from hand-written programs, providing a high-level command interface to a robot manipulator that allows tasks to be specified simply as goals. This is made challenging by the fact that a manipulator is a hybrid system: any model of it consists of discrete variables such as "holding cup" and continuous variables such as the poses of objects and the position of the robot. The approach relies on three novel techniques. First, action learning from annotated code uses simulation to find PDDL action models corresponding to code fragments. Second, to provide the geometric information needed, we use supervised learning to produce a mapping from geometric to symbolic state; the mapping can also be used in reverse to produce a geometric state that makes a set of predicates true, allowing desired object positions to be generated during planning. Finally, during execution of the plan we use a planner based on a partially observable Markov decision process to repair the initial plan when unforeseen geometric constraints prevent actions from being executed.
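For intuition, a learned operator of the kind described above could look like the following PDDL-style structure, written here as a plain Python value (the action, predicates, and example objects are hypothetical illustrations, not operators reported in the paper):

```python
# Hypothetical learned operator for a pick action, in a PDDL-like form.
pick_up = {
    "name": "pick-up",
    "parameters": ["?obj", "?surface"],
    "preconditions": ["(on ?obj ?surface)", "(clear ?obj)", "(hand-empty)"],
    "effects": ["(holding ?obj)", "(not (on ?obj ?surface))", "(not (hand-empty))"],
}

def substitute(literal, binding):
    """Replace ?variables in a literal with the objects they are bound to."""
    for var, obj in binding.items():
        literal = literal.replace(var, obj)
    return literal

def applicable(operator, state, binding):
    """Check whether a ground instance of the operator can fire in a symbolic state."""
    return all(substitute(p, binding) in state for p in operator["preconditions"])

# Example:
# applicable(pick_up,
#            {"(on cup1 tray)", "(clear cup1)", "(hand-empty)"},
#            {"?obj": "cup1", "?surface": "tray"})   # -> True
```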


Conference Towards Autonomous Robotic Systems | 2012

Learning the Geometric Meaning of Symbolic Abstractions for Manipulation Planning

Christopher Burbridge; Richard Dearden

We present an approach for learning a mapping between geometric states and logical predicates. This mapping is a necessary part of any robotic system that requires task-level reasoning and path planning. Consider a robot tasked with putting a number of cups on a tray. To achieve the goal the robot needs to find positions for all the objects, and if necessary may need to stack one cup inside another to get them all on the tray. This requires translating back and forth between symbolic states that the planner uses such as “stacked(cup1,cup2)” and geometric states representing the positions and poses of the objects. The mapping we learn in this paper achieves this translation. We learn it from labelled examples, and significantly, learn a representation that can be used in both the forward (from geometric to symbolic) and reverse directions. This enables us to build symbolic representations of scenes the robot observes, and also to translate a desired symbolic state from a plan into a geometric state that the robot can actually achieve through manipulation. We also show how the approach can be used to generate significantly different geometric solutions to support backtracking. We evaluate the work both in simulation and on a robot arm.


Robotics and Biomimetics | 2012

Biologically inspired intrinsically motivated learning for service robots based on novelty detection and habituation

Yiannis Gatsoulis; Christopher Burbridge; T. Martin McGinnity

The effective operation of service robots relies on developmental programs that allow the robot to expand its knowledge about its dynamic operating environment. Motivation theories from neuroscience and neuropsychology study the underlying mechanisms that drive the engagement of biological creatures in certain activities, such as learning. This research uses a physical Willow Garage PR2 robot equipped with a cumulative learning mechanism driven by the intrinsic motivation of novelty detection, based on computational models of biological habituation. It cumulatively learns the 360° appearance of novel real-world objects by picking them up. This paper discusses the theoretical motivation and background on intrinsic motivation as novelty detection, and presents the results and conclusions of the experimental study.
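A rough sketch of habituation-driven novelty detection in this spirit (the exponential habituation model, distance test, and thresholds are my own assumptions rather than the paper's computational model): the response to a repeatedly seen stimulus decays, and only stimuli whose response stays high are flagged as novel and worth learning.

```python
import numpy as np

class HabituationNovelty:
    """Toy habituation model: repeated exposure to similar stimuli lowers the response."""

    def __init__(self, decay=0.5, match_dist=0.4, novelty_threshold=0.6):
        self.decay = decay                    # habituation rate (assumed)
        self.match_dist = match_dist          # distance for "same stimulus" (assumed)
        self.novelty_threshold = novelty_threshold
        self.memory = []                      # stored stimulus prototypes
        self.response = []                    # habituated response per prototype

    def observe(self, stimulus):
        """Return (is_novel, response) for a stimulus such as an object feature vector."""
        stimulus = np.asarray(stimulus, dtype=float)
        if self.memory:
            dists = [np.linalg.norm(stimulus - m) for m in self.memory]
            i = int(np.argmin(dists))
            if dists[i] < self.match_dist:
                # Familiar stimulus: habituate, i.e. decay the response towards zero.
                self.response[i] *= self.decay
                return self.response[i] > self.novelty_threshold, self.response[i]
        # Unfamiliar stimulus: remember it with a maximal, fully novel response.
        self.memory.append(stimulus)
        self.response.append(1.0)
        return True, 1.0
```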


Archive | 2011

A Study of Enhanced Robot Autonomy in Telepresence

Lorenzo Riano; Christopher Burbridge; T. M. McGinnity

Collaboration


Dive into Christopher Burbridge's collaborations.

Top Co-Authors

Nick Hawes, University of Birmingham

John Folkesson, Royal Institute of Technology

Patric Jensfelt, Royal Institute of Technology

Lars Kunze, University of Birmingham