Chi Hay Tong
University of Oxford
Publications
Featured research published by Chi Hay Tong.
International Conference on Robotics and Automation | 2017
Martin Engelcke; Dushyant Rao; Dominic Zeng Wang; Chi Hay Tong; Ingmar Posner
This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.
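The feature-centric voting idea can be illustrated with a minimal sketch (not the authors' implementation): each occupied voxel casts kernel-weighted votes into its neighbourhood, so the cost scales with the number of occupied cells rather than the grid volume. All names and shapes below are illustrative assumptions.

```python
import numpy as np

def vote_conv3d(coords, feats, kernel, grid_shape):
    """Sparse 3D convolution by feature-centric voting.

    coords: (N, 3) integer voxel indices of occupied cells
    feats:  (N, C_in) feature vectors at those cells
    kernel: (C_out, C_in, K, K, K) filter (K odd)
    Each occupied cell votes its feature, weighted by the flipped
    kernel, into the K*K*K neighbourhood around it.
    """
    C_out, _, K, _, _ = kernel.shape
    r = K // 2
    out = np.zeros((C_out,) + tuple(grid_shape))
    for (z, y, x), f in zip(coords, feats):
        for dz in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if (0 <= zz < grid_shape[0] and 0 <= yy < grid_shape[1]
                            and 0 <= xx < grid_shape[2]):
                        w = kernel[:, :, r - dz, r - dy, r - dx]  # flipped kernel
                        out[:, zz, yy, xx] += w @ f
    return out
```

Because empty voxels never cast votes, the inner loops run only over the sparse set of laser returns, which is the source of the speed-up described above.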
The International Journal of Robotics Research | 2013
Chi Hay Tong; Paul Timothy Furgale; Timothy D. Barfoot
In this paper, we present Gaussian Process Gauss–Newton (GPGN), an algorithm for non-parametric, continuous-time, nonlinear, batch state estimation. This work adapts the methods of Gaussian process (GP) regression to address the problem of batch simultaneous localization and mapping (SLAM) by using the Gauss–Newton optimization method. In particular, we formulate the estimation problem with a continuous-time state model, along with the more conventional discrete-time measurements. Two derivations are presented in this paper, reflecting both the weight-space and function-space approaches from the GP regression literature. Validation is conducted through simulations and a hardware experiment, which utilizes the well-understood problem of two-dimensional SLAM as an illustrative example. The performance is compared with the traditional discrete-time batch Gauss–Newton approach, and we also show that GPGN can be employed to estimate motion with only range/bearing measurements of landmarks (i.e. no odometry), even when there are not enough measurements to constrain the pose at a given timestep.
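The core GPGN iteration can be sketched in a few lines: a GP prior with kernel matrix K regularizes the trajectory while Gauss-Newton handles the nonlinear measurement model. This is a simplified single-block illustration under assumed interfaces (h, H_jac), not the paper's full weight-space/function-space derivation.

```python
import numpy as np

def gpgn(y, h, H_jac, K, R, x0, iters=10):
    """Function-space Gaussian Process Gauss-Newton sketch.

    Minimizes  x^T K^-1 x + (y - h(x))^T R^-1 (y - h(x))
    by iterating the linearized normal equations, where K is the GP
    prior covariance over the stacked trajectory and R the measurement
    noise covariance.
    """
    x = x0.copy()
    Kinv = np.linalg.inv(K)
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        H = H_jac(x)                       # measurement Jacobian
        A = Kinv + H.T @ Rinv @ H          # Gauss-Newton approx. Hessian
        b = H.T @ Rinv @ (y - h(x)) - Kinv @ x
        x = x + np.linalg.solve(A, b)      # Gauss-Newton update
    return x
```

With a linear measurement model the iteration converges in one step to the familiar GP regression posterior mean, which is why the paper can validate against discrete-time batch Gauss-Newton.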
The International Journal of Robotics Research | 2013
Chi Hay Tong; David Gingras; Kevin Larose; Timothy D. Barfoot; Erick Dupuis
This paper describes a collection of 272 three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada, which offer emulated planetary terrain at manageable scales for algorithmic development. This dataset is subdivided into four individual subsets, each gathered using panning laser rangefinders on different mobile rover platforms. This data should be of interest to field robotics researchers developing rover navigation algorithms suitable for use in three-dimensional, unstructured, natural terrain. All of the data are presented in human-readable text files, and are accompanied by Matlab parsing scripts to facilitate use thereof. This paper provides an overview of the available data.
Journal of Field Robotics | 2012
Chi Hay Tong; Timothy D. Barfoot; Erick Dupuis
In this paper, we present a robust framework suitable for conducting three-dimensional simultaneous localization and mapping (3D SLAM) in a planetary work site environment. Operation in a planetary environment imposes sensing restrictions, as well as challenges due to the rugged terrain. Utilizing a laser rangefinder mounted on a rover platform, we have demonstrated an approach that is able to create globally consistent maps of natural, unstructured 3D terrain. The framework presented in this paper utilizes a sparse-feature-based approach and conducts data association using a combination of feature constellations and dense data. Because of feature scarcity, odometry measurements are also incorporated to provide additional information in feature-poor regions. To maintain global consistency, these measurements are resolved using a batch alignment algorithm, which is reinforced with heterogeneous outlier rejection to improve its robustness to outliers in either measurement type (i.e., laser or odometry). Finally, a map is created from the alignment estimates and the dense data. Extensive validation of the framework is provided using data gathered at two different planetary analogue facilities, which consist of 50 and 102 3D scans, respectively. At these sites, root-mean-squared mapping errors of 4.3 and 8.9 cm were achieved. Relative metrics are utilized for localization accuracy and map quality, which facilitate detailed analysis of the performance, including failure modes and possible future improvements.
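One common way to realize the kind of outlier rejection described above is iteratively reweighted least squares with a robust kernel; the Cauchy kernel below is an illustrative assumption, not the authors' exact heterogeneous scheme.

```python
import numpy as np

def robust_fit(A, b, scale=1.0, iters=20):
    """IRLS with a Cauchy robust kernel: solve A x ~ b while
    down-weighting gross outliers in the residuals each iteration."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialization
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / (1.0 + (r / scale) ** 2)     # Cauchy weights: near 0 for outliers
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x
```

Applied per measurement type, such weighting lets a batch alignment tolerate spurious laser matches or odometry slip without either corrupting the estimate.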
Robotics: Science and Systems | 2014
Timothy D. Barfoot; Chi Hay Tong; Simo Särkkä
In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any linear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g., ‘constant-velocity’). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently. Though the prior is continuous, we consider measurements to occur at discrete times. When the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times). When the measurement model is nonlinear, we iterate over the whole trajectory (as is common in vision and robotics) to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
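The exactly-sparse structure can be made concrete with a small sketch of the 'constant-velocity' (white-noise-on-acceleration) prior: assembling the inverse kernel from per-interval transition matrices yields a block-tridiagonal precision. The 1-D state layout and the strong prior on the first state are assumptions for illustration.

```python
import numpy as np

def wnoa_precision(times, qc):
    """Block-tridiagonal inverse kernel for a white-noise-on-acceleration
    GP prior; per-timestep state is [position, velocity] (1-D motion)."""
    n, dim = len(times), 2
    P = np.zeros((n * dim, n * dim))
    P[:dim, :dim] += np.eye(dim) * 1e6   # strong prior on the first state (assumed)
    for k in range(n - 1):
        dt = times[k + 1] - times[k]
        F = np.array([[1.0, dt], [0.0, 1.0]])       # transition matrix
        Q = qc * np.array([[dt**3 / 3, dt**2 / 2],  # integrated process noise
                           [dt**2 / 2, dt]])
        E = np.hstack([-F, np.eye(dim)])            # error Jacobian [ -F  I ]
        i = k * dim
        P[i:i + 2 * dim, i:i + 2 * dim] += E.T @ np.linalg.inv(Q) @ E
    return P
```

Only neighbouring states are coupled, so solving with this precision matrix costs O(n) rather than the O(n^3) of a dense kernel, which is what makes the GP view competitive with classical smoothing.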
Autonomous Robots | 2015
Sean Anderson; Timothy D. Barfoot; Chi Hay Tong; Simo Särkkä
In this paper, we revisit batch state estimation through the lens of Gaussian process (GP) regression. We consider continuous-discrete estimation problems wherein a trajectory is viewed as a one-dimensional GP, with time as the independent variable. Our continuous-time prior can be defined by any nonlinear, time-varying stochastic differential equation driven by white noise; this allows the possibility of smoothing our trajectory estimates using a variety of vehicle dynamics models (e.g. ‘constant-velocity’). We show that this class of prior results in an inverse kernel matrix (i.e., covariance matrix between all pairs of measurement times) that is exactly sparse (block-tridiagonal) and that this can be exploited to carry out GP regression (and interpolation) very efficiently. When the prior is based on a linear, time-varying stochastic differential equation and the measurement model is also linear, this GP approach is equivalent to classical, discrete-time smoothing (at the measurement times); when a nonlinearity is present, we iterate over the whole trajectory to maximize accuracy. We test the approach experimentally on a simultaneous trajectory estimation and mapping problem using a mobile robot dataset.
Journal of Field Robotics | 2014
Chi Hay Tong; Sean Anderson; Hang Dong; Timothy D. Barfoot
In this paper, we present two methods for obtaining visual odometry (VO) estimates using a scanning laser rangefinder. Although common VO implementations utilize stereo camera imagery, passive cameras are dependent on ambient light. In contrast, actively illuminated sensors such as laser rangefinders work in a variety of lighting conditions, including full darkness. We leverage previous successes by applying sparse appearance-based methods to laser intensity images, and we address the issue of motion distortion by considering the timestamps of the interest points detected in each image. To account for the unique timestamps, we introduce two estimator formulations. In the first method, we extend the conventional discrete-time batch estimation formulation by introducing a novel frame-to-frame linear interpolation scheme, and in the second method, we consider the estimation problem by starting with a continuous-time process model. This is facilitated by Gaussian process Gauss-Newton (GPGN), an algorithm for nonparametric, continuous-time, nonlinear, batch state estimation. Both laser-based VO methods are compared and validated using datasets obtained by two experimental configurations. These datasets consist of 1.1 km of field data gathered by a high-frame-rate scanning lidar and a 365 m traverse using a sweeping planar laser rangefinder. Statistical analysis shows a 5.3% average translation error as a percentage of distance traveled for linear interpolation and 4.4% for GPGN in the high-frame-rate scenario.
Canadian Conference on Computer and Robot Vision | 2012
Chi Hay Tong; Paul Timothy Furgale; Timothy D. Barfoot
In this paper, we present Gaussian Process Gauss-Newton (GPGN), an algorithm for non-parametric, continuous-time, nonlinear, batch state estimation. This work adapts the methods of Gaussian Process regression to the problem of batch state estimation by using the Gauss-Newton method. In particular, we formulate the estimation problem with a continuous-time state model, along with the more conventional discrete-time measurements. Our derivation utilizes a basis function approach, but through algebraic manipulations, returns to a non-parametric form by replacing the basis functions with covariance functions (i.e., the kernel trick). The algorithm is validated through hardware-based experiments utilizing the well-understood problem of 2D rover localization using a known map as an illustrative example, and is compared to the traditional discrete-time batch Gauss-Newton approach.
International Conference on Robotics and Automation | 2011
Chi Hay Tong; Timothy D. Barfoot
In this paper, we present an infrastructure-based ground-truth localization system suitable for deployment in large worksite environments. In particular, the system is low-cost, simple-to-deploy, and is able to provide full six-degree-of-freedom relative localization for three-dimensional laser scanners with centimetre-level accuracy in translation, and half-degree accuracy in orientation. This system utilizes common laser scanner hardware, and exploits the fact that retroreflective material is easily identified based on the return intensity. This enables the use of simple rectangular signs placed around the scene as landmarks. An uncertainty model is presented that accounts for the shape of the landmarks, and a batch alignment algorithm is formulated that efficiently considers the structure of the problem. Lastly, characterization of the accuracy of the system is provided through small-scale testing in an indoor lab, and examples for a large-scale setup.
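The landmark-extraction step can be sketched simply: keep only high-intensity returns (the retroreflective signs), then cluster them and report centroids. The threshold, greedy clustering, and parameter values below are illustrative assumptions, not the paper's calibrated pipeline.

```python
import numpy as np

def find_retroreflective_landmarks(points, intensities,
                                   intensity_thresh=0.8,
                                   cluster_radius=0.5, min_pts=5):
    """points: (N, 3) laser returns; intensities: (N,) normalized returns.
    Returns a list of landmark centroids from bright, clustered returns."""
    bright = points[intensities > intensity_thresh]   # retroreflective returns
    landmarks = []
    remaining = list(range(len(bright)))
    while remaining:
        seed = remaining.pop(0)
        cluster = [seed]
        i = 0
        while i < len(remaining):                     # greedy radius clustering
            if np.linalg.norm(bright[remaining[i]] - bright[seed]) < cluster_radius:
                cluster.append(remaining.pop(i))
            else:
                i += 1
        if len(cluster) >= min_pts:                   # reject stray bright returns
            landmarks.append(bright[cluster].mean(axis=0))
    return landmarks
```

A centroid alone ignores the sign's extent; the paper's uncertainty model accounts for the landmark shape, which a practical implementation would add on top of this extraction step.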
The International Journal of Robotics Research | 2015
Paul Timothy Furgale; Chi Hay Tong; Timothy D. Barfoot; Gabe Sibley
Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable; however, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units, rolling-shutter cameras, or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full Maximum-a-Posteriori estimation problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian and show how the coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the simultaneous localization and mapping problem as our motivation throughout the paper, although the approach is not specific to this application. Results from two experiments are provided to validate the approach: (i) self-calibration involving a camera and a high-rate inertial measurement unit, and (ii) perspective localization with a rolling-shutter camera.
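How a small set of basis-function coefficients can stand in for per-measurement pose variables is easy to see with a uniform cubic B-spline sketch (the specific basis is an assumption here; the paper's framework admits any temporal basis). The trajectory at any query time is a fixed blend of four local control coefficients.

```python
import numpy as np

# Uniform cubic B-spline blending matrix (rows weight 1, u, u^2, u^3)
M = (1.0 / 6.0) * np.array([[1, 4, 1, 0],
                            [-3, 0, 3, 0],
                            [3, -6, 3, 0],
                            [-1, 3, -3, 1]])

def spline_eval(coeffs, t, dt=1.0):
    """Evaluate a trajectory parameterized by control coefficients.

    coeffs: (n, d) control values on a uniform knot grid with spacing dt.
    Only the 4 local coefficients influence the value at time t, so the
    state size is decoupled from the number of measurement times.
    """
    i = int(t / dt)          # knot segment index
    u = t / dt - i           # normalized position within the segment
    U = np.array([1.0, u, u**2, u**3])
    return U @ M @ coeffs[i:i + 4]
```

In the estimator, the measurement Jacobians act on these few coefficients instead of a dense stack of poses, which is what keeps the continuous-time maximum-a-posteriori problem tractable for high-rate sensors.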