Publication


Featured research published by Rohan Paul.


The International Journal of Robotics Research | 2009

The New College Vision and Laser Data Set

Mike Smith; Ian Alan Baldwin; Winston Churchill; Rohan Paul; Paul Newman

In this paper we present a large dataset intended for use in mobile robotics research. Gathered from a robot driving several kilometers through a park and campus, it contains a five-degree-of-freedom dead-reckoned trajectory, laser range/reflectance data, and 20 Hz stereoscopic and omnidirectional imagery. All data are carefully timestamped, and all logs are in human-readable form with the images in standard formats. We provide a set of tools to access the data, together with detailed tagging and segmentations to facilitate its use.
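
The dataset's actual log formats are documented with the release; purely as an illustrative sketch, the snippet below parses a hypothetical comma-separated pose log of the kind the abstract describes (a timestamp plus a five-degree-of-freedom dead-reckoned pose). Both the field order and the choice of pose components are assumptions, not the dataset's real schema.

```python
import csv
from dataclasses import dataclass

# Hypothetical record type for one line of a timestamped, dead-reckoned
# pose log. The real New College logs have their own documented layout;
# the field order and the choice of the five pose components here are
# assumptions made purely for illustration.
@dataclass
class PoseRecord:
    timestamp: float  # seconds
    x: float          # metres
    y: float
    z: float
    roll: float       # radians
    pitch: float      # radians

def load_trajectory(path: str) -> list[PoseRecord]:
    """Parse a hypothetical CSV log: timestamp,x,y,z,roll,pitch per line."""
    with open(path) as f:
        return [PoseRecord(*map(float, row[:6])) for row in csv.reader(f)]
```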


International Conference on Robotics and Automation | 2010

FAB-MAP 3D: Topological mapping with spatial and visual appearance

Rohan Paul; Paul Newman

This paper describes a probabilistic framework for appearance-based navigation and mapping using spatial and visual appearance data. Like much recent work on appearance-based navigation, we adopt a bag-of-words approach in which positive or negative observations of visual words in a scene are used to discriminate between already-visited and new places. In this paper we add an important extra dimension to the approach. We explicitly model the spatial distribution of visual words as a random graph in which nodes are visual words and edges are distributions over distances. Care is taken to ensure that the spatial model is able to capture the multi-modal distributions of inter-word spacing and account for sensor errors both in word detection and in distances. Crucially, these inter-word distances are viewpoint invariant and collectively constitute strong place signatures, and hence the impact of using both spatial and visual appearance is marked. We provide results illustrating a tremendous increase in precision-recall area compared to a state-of-the-art visual-appearance-only system.
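
As a rough illustration of the idea (not the authors' implementation), the sketch below scores a candidate place by combining a bag-of-words appearance term with a spatial term over measured inter-word distances. Each distance edge is simplified here to a single Gaussian, whereas the paper models multi-modal distributions and detector error; all container layouts are assumptions.

```python
import numpy as np

# Illustrative place-matching score in the spirit of FAB-MAP 3D: a place
# is scored by how well the observed visual words AND the inter-word
# distances match its stored model.
def place_log_likelihood(obs_words: set, obs_dists: dict, place: dict) -> float:
    ll = 0.0
    # Appearance term: Bernoulli likelihood of each word being (un)observed.
    for w, p_w in place["word_probs"].items():
        ll += np.log(p_w if w in obs_words else 1.0 - p_w)
    # Spatial term: likelihood of each measured inter-word distance under
    # the place's per-edge distance distribution, stored as (mean, std).
    for (u, v), d in obs_dists.items():
        if (u, v) in place["edges"]:
            mu, sigma = place["edges"][(u, v)]
            ll += -0.5 * ((d - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return ll
```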


Intelligent Robots and Systems | 2013

Dealing with shadows: Capturing intrinsic scene appearance for image-based outdoor localisation

Peter Corke; Rohan Paul; Winston Churchill; Paul Newman

In outdoor environments shadows are common. These typically strong visual features cause considerable change in the appearance of a place, and therefore confound vision-based localisation approaches. In this paper we describe how to convert a colour image of the scene to a greyscale invariant image in which pixel values are a function of the underlying material properties, not the lighting. We summarise the theory of shadow-invariant images and discuss the modelling and calibration issues which are important for non-ideal off-the-shelf colour cameras. We evaluate the technique with a commonly used robotic camera and an autonomous car operating in an outdoor environment, and show that it can outperform the use of ordinary greyscale images for the task of visual localisation.
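
One common concrete form of such an invariant, used across this line of work, projects per-pixel log-chromaticities along a camera-dependent direction. The sketch below follows that one-parameter form; the constant `ALPHA` must be calibrated per camera from its spectral responses (the value here is a placeholder assumption), and real cameras additionally need the non-ideality corrections the paper discusses.

```python
import numpy as np

# Minimal sketch of a one-dimensional shadow-invariant image in the
# log-chromaticity form. ALPHA is camera-specific and must be calibrated;
# the value below is a placeholder, not a real calibration.
ALPHA = 0.48

def invariant_image(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array in (0, 1], shape (H, W, 3). Returns (H, W)."""
    eps = 1e-6  # guard against log(0)
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    # Pixel values now depend on material reflectance rather than on the
    # illuminant, so shadow boundaries largely vanish.
    return 0.5 + np.log(g) - ALPHA * np.log(b) - (1.0 - ALPHA) * np.log(r)
```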


International Conference on Robotics and Automation | 2013

Knowing when we don't know: Introspective classification for mission-critical decision making

Hugo Grimmett; Rohan Paul; Rudolph Triebel; Ingmar Posner

Classification precision and recall have been widely adopted by roboticists as canonical metrics to quantify the performance of learning algorithms. This paper advocates that for robotics applications, which often involve mission-critical decision making, good performance according to these standard metrics is desirable but insufficient to appropriately characterise system performance. We introduce and motivate the importance of a classifier's introspective capacity: the ability to mitigate potentially overconfident classifications by an appropriate assessment of how qualified the system is to make a judgement on the current test datum. We provide an intuition as to how this introspective capacity can be achieved and systematically investigate it in a selection of classification frameworks commonly used in robotics: support vector machines, LogitBoost classifiers and Gaussian Process classifiers (GPCs). Our experiments demonstrate that for common robotics tasks a framework such as a GPC exhibits a superior introspective capacity while maintaining classification performance commensurate with more popular, alternative approaches.
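
The toy comparison below, on synthetic one-dimensional data with scikit-learn stand-ins (not the paper's experimental setup), illustrates the property at stake: far from the training set, a Gaussian Process classifier's predictive probability relaxes towards 0.5, whereas a Platt-scaled linear SVM only grows more confident with distance from the boundary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import SVC

# Synthetic 1-D data: two well-separated classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (30, 1)), rng.normal(2, 0.5, (30, 1))])
y = np.array([0] * 30 + [1] * 30)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, y)
svm = SVC(kernel="linear", probability=True).fit(X, y)

# Far from all training data the GPC's latent function reverts to its
# prior, pulling the predicted probability back towards 0.5, whereas the
# Platt-scaled linear SVM only grows more confident with distance.
x_far = np.array([[25.0]])
print("GPC p(class 1):", gpc.predict_proba(x_far)[0, 1])  # close to 0.5
print("SVM p(class 1):", svm.predict_proba(x_far)[0, 1])  # close to 1.0
```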


Intelligent Robots and Systems | 2012

Semantic categorization of outdoor scenes with uncertainty estimates using multi-class Gaussian process classification

Rohan Paul; Rudolph Triebel; Daniela Rus; Paul Newman

This paper presents a novel semantic categorization method for 3D point cloud data using supervised, multi-class Gaussian Process (GP) classification. In contrast to other approaches, and particularly Support Vector Machines, which are probably the most widely used method for this task to date, GPs have the major advantage of providing informative uncertainty estimates about the resulting class labels. As we show in experiments, these uncertainty estimates can either be used to improve the classification by neglecting uncertain class labels or - more importantly - they can serve as an indication of the under-representation of certain classes in the training data. This means that GP classifiers are much better suited to a lifelong learning framework, in which not all classes are represented initially, but new training data instead arrives during the operation of the robot.
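
A minimal sketch of the first use of the uncertainty estimates, rejecting low-confidence labels; the `(N, C)` probability matrix is assumed to come from a multi-class probabilistic classifier such as the paper's GP, one row per 3D segment.

```python
import numpy as np

# Reject class labels whose predicted probability is too low, so that
# uncertain segments can be withheld (or flagged as needing new training
# data) instead of being mislabelled confidently.
def label_with_rejection(proba: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    labels = proba.argmax(axis=1)
    confidence = proba.max(axis=1)
    labels[confidence < threshold] = -1  # -1 = "uncertain, withhold label"
    return labels
```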


ISRR | 2016

Driven Learning for Driving: How Introspection Improves Semantic Mapping

Rudolph Triebel; Hugo Grimmett; Rohan Paul; Ingmar Posner

This paper explores the suitability of commonly employed classification methods to action-selection tasks in robotics, and argues that a classifier’s introspective capacity is a vital but as yet largely under-appreciated attribute. As illustration we propose an active learning framework for semantic mapping in mobile robotics and demonstrate it in the context of autonomous driving. In this framework, data are selected for label disambiguation by a human supervisor using uncertainty sampling. Intuitively, an introspective classification framework—i.e. one which moderates its predictions by an estimate of how well it is placed to make a call in a particular situation—is particularly well suited to this task. To achieve an efficient implementation we extend the notion of introspection to a particular sparse Gaussian Process Classifier, the Informative Vector Machine (IVM). Furthermore, we leverage the information-theoretic nature of the IVM to formulate a principled mechanism for forgetting stale data, thereby bounding memory use and resulting in a truly life-long learning system. Our evaluation on a publicly available dataset shows that an introspective active learner asks more informative questions compared to a more traditional non-introspective approach like a Support Vector Machine (SVM) and in so doing, outperforms the SVM in terms of learning rate while retaining efficiency for practical use.
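
The round of uncertainty sampling below conveys the active-learning scheme in miniature. Any probabilistic classifier with a `predict_proba` method stands in for the paper's IVM, `y_pool_oracle` plays the human supervisor, and the names and batch size `k` are illustrative assumptions; the paper's forgetting mechanism is omitted.

```python
import numpy as np

# One uncertainty-sampling round: query the human about the k pool items
# the classifier is least well placed to label, then retrain.
def active_learning_round(clf, X_pool, y_pool_oracle, X_train, y_train, k=5):
    proba = clf.predict_proba(X_pool)
    # Predictive entropy: highest for the most uncertain data.
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    query = np.argsort(entropy)[-k:]
    X_train = np.vstack([X_train, X_pool[query]])
    y_train = np.concatenate([y_train, y_pool_oracle[query]])
    X_pool = np.delete(X_pool, query, axis=0)
    y_pool_oracle = np.delete(y_pool_oracle, query, axis=0)
    return clf.fit(X_train, y_train), X_train, y_train, X_pool, y_pool_oracle
```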


The International Journal of Robotics Research | 2016

Introspective classification for robot perception

Hugo Grimmett; Rudolph Triebel; Rohan Paul; Ingmar Posner

In robotics, the use of a classification framework which produces scores with inappropriate confidences will ultimately lead to the robot making dangerous decisions. In order to select a framework which will make the best decisions, we should pay careful attention to the ways in which it generates scores. Precision and recall have been widely adopted as canonical metrics to quantify the performance of learning algorithms, but for robotics applications involving mission-critical decision making, good performance in relation to these metrics is insufficient. We introduce and motivate the importance of a classifier's introspective capacity: the ability to associate an appropriate assessment of confidence with any test case. We propose that a key ingredient for introspection is a framework's potential to increase its uncertainty with the distance between a test datum and its training data. We compare the introspective capacities of a number of commonly used classification frameworks in both classification and detection tasks, and show that better introspection leads to improved decision making in the context of tasks such as autonomous driving or semantic map generation.
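
The proposed key ingredient, uncertainty that grows with the distance between a test datum and the training data, is exactly what a Gaussian Process exhibits. The toy regression sketch below (synthetic data, scikit-learn, not the paper's code) makes the property concrete:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Training data live in [-1, 1]; watch the predictive standard deviation
# grow as the test point moves away from them.
X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = np.sin(3 * X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

for x in [0.0, 2.0, 5.0]:
    _, std = gp.predict(np.array([[x]]), return_std=True)
    print(f"x = {x:4.1f}  predictive std = {std[0]:.3f}")
```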


International Conference on Robotics and Automation | 2014

Visual precis generation using coresets

Rohan Paul; Dan Feldman; Daniela Rus; Paul Newman

Given an image stream, our online algorithm selects the semantically important images that summarize the visual experience of a mobile robot. Our approach consists of data pre-clustering using coresets followed by a graph-based incremental clustering procedure using a topic-based image representation. A coreset for an image stream is a set of representative images that semantically compresses the data corpus, in the sense that every frame has a similar representative image in the coreset. We prove that our algorithm efficiently computes the smallest possible coreset under a natural, well-defined similarity metric and up to a provably small approximation factor. The output visual summary is computed via a hierarchical tree of coresets for different parts of the image stream. This allows multi-resolution summarization (or a video summary of specified duration) in the batch setting and a memory-efficient incremental summary for the streaming case.
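
The paper's coresets carry formal guarantees; as a simple stand-in that conveys the "every frame has a similar representative" property, here is a greedy farthest-point selection over per-image descriptors. This is an illustrative k-center-style heuristic, not the authors' algorithm, and the descriptor layout is an assumption.

```python
import numpy as np

# Greedy farthest-point selection: repeatedly add the frame that is
# currently worst covered, so every frame ends up with a nearby
# representative under the chosen metric.
def greedy_coreset(features: np.ndarray, k: int) -> list[int]:
    """features: (N, D) per-image descriptors, e.g. topic vectors."""
    chosen = [0]
    dists = np.linalg.norm(features - features[0], axis=1)
    for _ in range(k - 1):
        idx = int(dists.argmax())  # the frame currently worst covered
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(features - features[idx], axis=1))
    return chosen
```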


International Conference on Robotics and Automation | 2011

Self help: Seeking out perplexing images for ever improving navigation

Rohan Paul; Paul Newman

This paper is a demonstration of how a robot can, through introspection and then targeted data retrieval, improve its own performance. It is a step in the direction of lifelong learning and adaptation and is motivated by the desire to build robots that have plastic competencies which are not baked in. They should react to and benefit from use. We consider a particular instantiation of this problem in the context of place recognition. Based on a topic-based probabilistic model of images, we use a measure of perplexity to evaluate how well a working set of background images explains the robot's online view of the world. Offline, the robot then searches an external resource to seek out additional background images that bolster its ability to localise in its environment when used next. In this way the robot adapts and improves performance through use.
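
The perplexity measure has a standard form, the exponentiated negative mean log-likelihood of the image's visual words under the current model; a minimal sketch follows (array shapes are assumptions):

```python
import numpy as np

# High perplexity flags images the working background set explains
# poorly, which is what triggers the offline retrieval step.
def perplexity(word_counts: np.ndarray, word_probs: np.ndarray) -> float:
    """word_counts, word_probs: (V,) arrays over the visual vocabulary."""
    n = word_counts.sum()
    log_lik = (word_counts * np.log(word_probs + 1e-12)).sum()
    return float(np.exp(-log_lik / n))
```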


The International Journal of Robotics Research | 2013

Self-help: Seeking out perplexing images for ever improving topological mapping

Rohan Paul; Paul Newman

In this work, we present a novel approach that allows a robot to improve its own navigation performance through introspection and then targeted data retrieval. It is a step in the direction of life-long learning and adaptation and is motivated by the desire to build robots that have plastic competencies which are not baked in. They should react to and benefit from use. We consider a particular instantiation of this problem in the context of place recognition. Based on a topic-based probabilistic representation for images, we use a measure of perplexity to evaluate how well a working set of background images explain the robot’s online view of the world. Offline, the robot then searches an external resource to seek out additional background images that bolster its ability to localize in its environment when used next. In this way the robot adapts and improves performance through use. We demonstrate this approach using data collected from a mobile robot operating in outdoor workspaces.

Collaboration


Dive into Rohan Paul's collaborations.

Top Co-Authors

Daniela Rus

Massachusetts Institute of Technology


Peter Corke

Queensland University of Technology


Dan Feldman

Massachusetts Institute of Technology
