Lionel Ott
University of Sydney
Publications
Featured research published by Lionel Ott.
international conference on image processing | 2016
Alex Bewley; ZongYuan Ge; Lionel Ott; Fabio Ramos; Ben Upcroft
This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online, real-time applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9%. Despite only using a rudimentary combination of familiar techniques such as the Kalman filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz, which is over 20x faster than other state-of-the-art trackers.
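The core association step the abstract describes — matching predicted track boxes to new detections with the Hungarian algorithm — can be sketched as follows. This is not the authors' implementation; it is a minimal illustration assuming boxes in `[x1, y1, x2, y2]` form, IoU as the affinity, and SciPy's assignment solver standing in for the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to detections by maximising total IoU,
    discarding matches whose overlap falls below the threshold."""
    cost = np.array([[-iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_threshold]
```

In a full tracker, `tracks` would hold Kalman-filter predictions of each track's box for the current frame; unmatched detections spawn new tracks and unmatched tracks are eventually deleted.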
robotics science and systems | 2015
Fabio Ramos; Lionel Ott
The vast amount of data robots can capture today motivates the development of fast and scalable statistical tools to model the environment the robot operates in. We devise a new technique for environment representation through continuous occupancy mapping that improves on the popular occupancy grid maps in two fundamental aspects: 1) it does not assume an a priori discretisation of the world into grid cells and therefore can provide maps at an arbitrary resolution; 2) it captures statistical relationships between measurements naturally, thus being more robust to outliers and possessing better generalisation performance. The technique, named Hilbert maps, is based on the computation of fast kernel approximations that project the data into a Hilbert space where a logistic regression classifier is learnt. We show that this approach allows for efficient stochastic gradient optimisation where each measurement is only processed once during learning in an online manner. We present results with three types of approximations: random Fourier, Nyström, and a novel sparse projection. We also show how to extend the approach to accept probability distributions as inputs, i.e., when there is uncertainty over the position of laser scans due to sensor or localisation errors. Experiments demonstrate the benefits of the approach in popular benchmark datasets with several thousand laser scans.
international conference on robotics and automation | 2012
Lionel Ott; Fabio Ramos
We present an approach to automatically learn the visual appearance of an environment in terms of object classes. The procedure is totally unsupervised, incremental, and can be executed in real time. The traversability property of an unseen object is also learnt without human supervision by the interaction between the robot and the environment. An incremental version of affinity propagation, a state-of-the-art clustering procedure, is used to cluster image patches into groups of similar visual appearance. For each of these clusters, we obtain the probability of representing an obstacle through the interaction of the robot with the environment. This information then allows the robot to navigate safely through the environment based solely on visual information. Experimental results show that our method extracts meaningful clusters from the images and learns the appearance of objects efficiently. We show that the approach generalises well to both indoor and outdoor environments and that the amount of learning reduces as the robot explores the environment. This is a fundamental property for autonomous adaptation and long-term autonomy.
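The clustering machinery here is affinity propagation, which picks exemplars from the data by exchanging "responsibility" and "availability" messages and never needs the cluster count in advance. The following is a plain batch version operating on a similarity matrix, not the incremental variant the paper develops; the damping factor and iteration count are conventional defaults.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Batch affinity propagation on similarity matrix S; the diagonal of S
    holds the 'preference' controlling how readily points become exemplars."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # responsibilities: R(i,k) = S(i,k) - max_{k' != k} (A(i,k') + S(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: pooled positive responsibilities towards exemplar k
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)   # each point's chosen exemplar
```

Points sharing an exemplar form one cluster; in the paper's setting each data point would be an image-patch descriptor rather than a raw 2-D coordinate.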
international conference on robotics and automation | 2014
Jefferson R. Souza; Roman Marchant; Lionel Ott; Denis F. Wolf; Fabio Ramos
A key challenge for long-term autonomy is to enable a robot to automatically model properties of the environment while actively searching for better decisions to accomplish its task. This amounts to the problem of exploration-exploitation in the context of active perception. This paper addresses active perception and presents a technique to incrementally model the roughness of the terrain a robot navigates on while actively searching for waypoints that reduce the overall vibration experienced during travel. The approach employs Gaussian processes in conjunction with Bayesian optimisation for decision making. The algorithms are executed in real-time on the robot while it explores the environment. We present experiments with an outdoor vehicle navigating over several types of terrains demonstrating the properties and effectiveness of the approach.
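The exploration-exploitation loop the abstract describes — a Gaussian process modelling the objective, with Bayesian optimisation selecting the next waypoint — can be sketched in one dimension. This is an illustrative toy, not the paper's system: the kernel lengthscale, the lower-confidence-bound acquisition, and the candidate grid are all assumptions.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean and variance at the query points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    var = np.diag(rbf(x_query, x_query) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, candidates, n_iter=15, kappa=2.0):
    """Minimise f by repeatedly evaluating the lower-confidence-bound
    acquisition: trade off low predicted cost against high uncertainty."""
    x = np.array([candidates[0], candidates[-1]])   # seed observations
    y = np.array([f(x[0]), f(x[1])])
    for _ in range(n_iter):
        mu, var = gp_posterior(x, y, candidates)
        lcb = mu - kappa * np.sqrt(var)
        nxt = candidates[np.argmin(lcb)]
        x = np.append(x, nxt)
        y = np.append(y, f(nxt))
    return x[np.argmin(y)]
```

In the paper's setting, `f` would be the measured vibration at a candidate waypoint rather than a synthetic function, and the GP would run over 2-D terrain coordinates.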
international conference on robotics and automation | 2016
Alex Bewley; Lionel Ott; Fabio Ramos; Ben Upcroft
This paper presents a self-supervised approach for learning to associate object detections in a video sequence, as often required in tracking-by-detection systems. In this paper we focus on learning an affinity model to estimate the data association cost, which can adapt to different situations by exploiting the sequential nature of video data. We also propose a framework for gathering additional training samples at test time with high variation in visual appearance, naturally inherent in large temporal windows. Reinforcing the model with these difficult samples greatly improves the affinity model compared to standard similarity measures such as cosine similarity. We experimentally demonstrate the efficacy of the resulting affinity model on several multiple object tracking (MOT) benchmark sequences. Using the affinity model alone places this approach in the top 25 state-of-the-art trackers with an average rank of 21.3 across 11 test sequences and an overall multiple object tracking accuracy (MOTA) of 17%. This is notable, as our simple approach uses only the appearance of the detected regions, in contrast to other techniques with global optimisation or complex motion models.
The International Journal of Robotics Research | 2016
Fabio Ramos; Lionel Ott
The vast amount of data robots can capture today motivates the development of fast and scalable statistical tools to model the space the robot operates in. We devise a new technique for environment representation through continuous occupancy mapping that improves on the popular occupancy grid maps in two fundamental aspects: (1) it does not assume an a priori discretization of the world into grid cells and therefore can provide maps at an arbitrary resolution; (2) it captures spatial relationships between measurements naturally, thus being more robust to outliers and possessing better generalization performance. The technique, named Hilbert maps, is based on the computation of fast kernel approximations that project the data into a Hilbert space where a logistic regression classifier is learnt. We show that this approach allows for efficient stochastic gradient optimization where each measurement is only processed once during learning in an online manner. We present results with three types of approximations: random Fourier; Nyström; and a novel sparse projection. We also extend the approach to accept probability distributions as inputs, for example, when there is uncertainty over the position of laser scans owing to sensor or localization errors. In this extended version, experiments were conducted in two dimensions and three dimensions, using popular benchmark datasets. Furthermore, an analysis of the adaptive capabilities of the technique to handle large changes in the data, such as trajectory update before and after loop closure during simultaneous localization and mapping, is also included.
international conference on robotics and automation | 2017
Gilad Francis; Lionel Ott; Fabio Ramos
Safe path planning is a crucial component in autonomous robotics. The many approaches to find a collision-free path can be broadly divided into trajectory optimizers and sampling-based methods. When planning using occupancy maps, the sampling-based approach is the prevalent method. The main drawback of such techniques is that the reasoning about the expected cost of a plan is limited to the search heuristic used by each method. We introduce a novel planning method based on trajectory optimization to plan safe and efficient paths in continuous occupancy maps. We extend the expressiveness of the state-of-the-art functional gradient optimization methods by devising a stochastic gradient update rule to optimize a path represented as a Gaussian process. This approach avoids the need to commit to a specific resolution of the path representation, whether spatial or parametric. We utilize a continuous occupancy map representation in order to define our optimization objective, which enables fast computation of occupancy gradients. We show that this approach is essential in order to ensure convergence to the optimal path, and present results and comparisons to other planning methods in both simulation and with real laser data. The experiments demonstrate the benefits of using this technique when planning for safe and efficient paths in continuous occupancy maps.
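The essence of gradient-based trajectory optimization against an occupancy cost can be shown with a discretised toy. This is deliberately simpler than the paper's method (fixed waypoints rather than a Gaussian-process path, a single synthetic Gaussian obstacle rather than a learnt continuous occupancy map), but the update — occupancy gradient plus a smoothness term, endpoints pinned — follows the same pattern.

```python
import numpy as np

def obstacle_cost(p):
    """Toy continuous 'occupancy': a Gaussian bump at the origin."""
    return np.exp(-np.sum(p ** 2, axis=-1) / 0.5)

def obstacle_grad(p):
    """Analytic gradient of the toy occupancy cost."""
    return obstacle_cost(p)[..., None] * (-2.0 * p / 0.5)

def optimise_path(start, goal, n=20, iters=200, lr=0.05, w_smooth=1.0):
    """Gradient descent on occupancy + smoothness, endpoints held fixed."""
    path = np.linspace(start, goal, n)          # straight-line initialisation
    for _ in range(iters):
        # smoothness gradient: discrete Laplacian pulls towards neighbours
        lap = np.zeros_like(path)
        lap[1:-1] = 2 * path[1:-1] - path[:-2] - path[2:]
        grad = obstacle_grad(path) + w_smooth * lap
        grad[0] = grad[-1] = 0.0                # endpoints stay fixed
        path -= lr * grad
    return path
```

The straight line between the endpoints passes near the obstacle; descent bends the interior waypoints away from it while the Laplacian term keeps the path smooth.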
intelligent robots and systems | 2016
Charika De Alvis; Lionel Ott; Fabio Ramos
Robots typically possess sensors of different modalities, such as colour cameras, inertial measurement units, and 3D laser scanners. Often, solving a particular problem becomes easier when more than one modality is used. However, while there are undeniable benefits to combining sensors of different modalities, the process tends to be complicated. Segmenting scenes observed by the robot into a discrete set of classes is a central requirement for autonomy, as understanding the scene is the first step to reason about future situations. Scene segmentation is commonly performed using either image data or 3D point cloud data. In computer vision many successful methods for scene segmentation are based on conditional random fields (CRF), where the maximum a posteriori (MAP) solution to the segmentation can be obtained by inference. In this paper we devise a new CRF inference method for scene segmentation that incorporates global constraints, enforcing that sets of nodes are assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose MAP solution is found using a gradient-based optimisation approach. The proposed method is evaluated on images and 3D point cloud data gathered in urban environments, where image data provides the appearance features needed by the CRF, while the 3D point cloud data provides global spatial constraints over sets of nodes. Comparisons with belief propagation, conventional quadratic programming relaxation, and higher order potential CRF show the benefits of the proposed method.
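The relaxation the abstract describes — replacing discrete labels with per-node probability vectors, ascending a quadratic objective, and tying constrained node sets to a common label — can be sketched as projected gradient ascent. This is an illustrative simplification, not the paper's formulation: the pairwise term is a plain agreement bonus, and the group constraint is enforced by averaging the relaxed labels within each set.

```python
import numpy as np

def project_simplex(Q):
    """Project each row onto the probability simplex (sort-based method)."""
    n, k = Q.shape
    s = -np.sort(-Q, axis=1)
    css = np.cumsum(s, axis=1) - 1.0
    idx = np.arange(1, k + 1)
    rho = (s - css / idx > 0).sum(axis=1)
    theta = css[np.arange(n), rho - 1] / rho
    return np.maximum(Q - theta[:, None], 0.0)

def relaxed_map(unary, edges, groups, w_pair=1.0, iters=100, lr=0.2):
    """Approximate MAP labels: gradient ascent on a relaxed CRF objective
    (negated unary costs + pairwise agreement), with each group of nodes
    forced towards a shared label by averaging their relaxed assignments."""
    n, k = unary.shape
    Q = np.full((n, k), 1.0 / k)
    for _ in range(iters):
        grad = -unary.copy()
        for i, j in edges:                 # pairwise agreement term
            grad[i] += w_pair * Q[j]
            grad[j] += w_pair * Q[i]
        Q = project_simplex(Q + lr * grad)
        for g in groups:                   # global constraint over node sets
            Q[g] = Q[g].mean(axis=0)
    return Q.argmax(axis=1)
```

In the paper's setting, the groups would come from 3D point cloud segments spanning multiple image regions, supplying the spatial constraints the image-only CRF lacks.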
intelligent robots and systems | 2013
Lionel Ott; Fabio Ramos
Current robotic systems carry many diverse sensors such as laser scanners, cameras and inertial measurement units just to name a few. Typically such data is fused by engineering a feature that weights the different sensors against each other in perception tasks. However, in a long-term autonomy setting the sensor readings may change drastically over time which makes a manual feature design impractical. A method that can automatically combine features of different data sources would be highly desirable for adaptation to different environments. In this paper, we propose a novel clustering method, coined Layered Affinity Propagation, for automatic clustering of observations that only requires the definition of features on individual data sources. How to combine these features to obtain a good clustering solution is left to the algorithm, removing the need to create and tune a complicated feature encompassing all sources. We evaluate the proposed method on data containing two very common sensor modalities, images and range information. In a first experiment we show the capability of the method to perform scene segmentation on Kinect data. A second experiment shows how this novel method handles the task of clustering segmented colour and depth data obtained from a Velodyne and camera in an urban environment.
international conference on robotics and automation | 2017
Charika De Alvis; Lionel Ott; Fabio Ramos
Scene understanding is a crucial requirement for robot navigation. Conditional random fields (CRF) are commonly used to solve the scene labelling problem, since they represent contextual information efficiently and provide efficient inference methods. However, when a robot navigates through an unknown environment, it is often necessary to adjust the parameters of the CRF online to maintain the same level of accuracy under changes not predicted during the training phase. Online parameter learning can be challenging, since ground truth information is not available for newly encountered scenes. To address this issue, this paper proposes a stochastic gradient descent (SGD) method to learn the parameters of a constrained CRF (cCRF) in an online fashion. By leveraging the information from laser scans and image data, the complexity of the labelling problem can be significantly reduced. The parameters are estimated by optimising a novel loss function that takes highly confident labels as a reference while eliminating the need for manual labelling. These labels are obtained purely from camera and laser sensor information, in a self-supervised manner. Sensor data is pre-processed using methods such as convolutional nets, discriminant analysis, and Euclidean distance-based clustering to extract reference labels. We show that this online parameter learning is robust to changes in the data distribution when the learning rate is selected appropriately. Experimental results are presented on the KITTI dataset, demonstrating the benefits of online CRF training.