Stefan B. Williams
University of Sydney
Publication
Featured research published by Stefan B. Williams.
Intelligent Robots and Systems | 2002
Frédéric Bourgault; Alexei Makarenko; Stefan B. Williams; Ben Grocholsky; Hugh F. Durrant-Whyte
Exploration involving mapping and concurrent localisation in an unknown environment is a pervasive task in mobile robotics. In general, the accuracy of the mapping process depends directly on the accuracy of the localisation process. This paper addresses the problem of maximizing the accuracy of the map building process during exploration by adaptively selecting control actions that maximize localisation accuracy. The map building and exploration task is modeled using an Occupancy Grid (OG) with concurrent localisation performed using a feature-based Simultaneous Localisation And Mapping (SLAM) algorithm. Adaptive sensing aims at maximizing the map information by simultaneously maximizing the expected Shannon information gain (mutual information) on the OG map and minimizing the uncertainty of the vehicle pose and map features in the SLAM process. The resulting map building system is demonstrated in an indoor environment using data from a laser scanner mounted on a mobile platform.
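The action-selection criterion above can be pictured as a one-step utility over candidate control actions. Below is a minimal sketch of that idea, not the authors' implementation: the visibility and pose-covariance predictors, the post-observation confidence p_after, and the weight w are all illustrative assumptions.

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of Bernoulli occupancy probabilities."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(grid, visible, p_after=0.9):
    """Expected entropy reduction over the cells indexed by 'visible'.

    p_after is an assumed occupancy confidence after the observation."""
    before = cell_entropy(grid[visible])
    after = cell_entropy(np.full_like(before, p_after))
    return before.sum() - after.sum()

def select_action(grid, actions, predict_visible, predict_pose_cov, w=1.0):
    """Pick the action maximizing OG information gain minus pose uncertainty.

    predict_visible(a) returns the grid cells the sensor would see after action a;
    predict_pose_cov(a) returns the predicted SLAM vehicle-pose covariance."""
    best, best_score = None, -np.inf
    for a in actions:
        score = expected_info_gain(grid, predict_visible(a)) - w * np.trace(predict_pose_cov(a))
        if score > best_score:
            best, best_score = a, score
    return best
```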
Computer Vision and Pattern Recognition | 2013
Donald G. Dansereau; Oscar Pizarro; Stefan B. Williams
Plenoptic cameras are gaining attention for their unique light gathering and post-capture processing capabilities. We describe a decoding, calibration and rectification procedure for lenselet-based plenoptic cameras appropriate for a range of computer vision applications. We derive a novel physically based 4D intrinsic matrix relating each recorded pixel to its corresponding ray in 3D space. We further propose a radial distortion model and a practical objective function based on ray reprojection. Our 15-parameter camera model is of much lower dimensionality than camera array models, and more closely represents the physics of lenselet-based cameras. Results include calibration of a commercially available camera using three calibration grid sizes over five datasets. Typical RMS ray reprojection errors are 0.0628, 0.105 and 0.363 mm for 3.61, 7.22 and 35.1 mm calibration grids, respectively. Rectification examples include calibration targets and real-world imagery.
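As a rough illustration of the decoding model described above (not the paper's code; the intrinsic matrix H, the two-plane spacing, and the target point are placeholders), the 4D intrinsic matrix maps a recorded sample's pixel and lenselet indices to a ray in two-plane parameterization, and calibration minimizes a point-to-ray distance of the following kind:

```python
import numpy as np

def pixel_to_ray(H, i, j, k, l):
    """Map a recorded sample (pixel indices i, j; lenselet indices k, l)
    to a ray (s, t, u, v) via a 5x5 homogeneous 4D intrinsic matrix H."""
    n = H @ np.array([i, j, k, l, 1.0])
    return n[:4]

def ray_point_distance(ray, P, plane_sep=1.0):
    """Perpendicular distance from 3D point P to a two-plane-parameterized ray,
    the kind of residual a ray-reprojection objective is built from."""
    s, t, u, v = ray
    origin = np.array([s, t, 0.0])
    direction = np.array([u - s, v - t, plane_sep])
    direction /= np.linalg.norm(direction)
    diff = np.asarray(P, dtype=float) - origin
    return np.linalg.norm(diff - (diff @ direction) * direction)
```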
Intelligent Robots and Systems | 2005
Alex Brooks; Tobias Kaupp; Alexei Makarenko; Stefan B. Williams; Anders Orebäck
This paper gives an overview of component-based software engineering (CBSE), motivates its application to the field of mobile robotics, and proposes a particular component model. CBSE is an approach to system-building that aims to shift the emphasis from programming to composing systems from a mixture of off-the-shelf and custom-built software components. This paper argues that robotics is particularly well-suited for, and in need of, component-based ideas, and that now is the right time for their introduction. The paper introduces Orca, an open-source component-based software engineering framework for mobile robotics with an associated repository of free, reusable components for building mobile robotic systems.
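As a toy sketch of the component idea (illustrative only: Orca components are defined against middleware interface definitions rather than Python classes, and the names below are invented), a driver and a localizer developed independently can be composed because both commit only to a declared interface:

```python
from typing import Protocol

class RangeScanner2d(Protocol):
    """Declared interface: any component providing 2D range scans."""
    def get_scan(self) -> list[float]: ...

class SickLaserComponent:
    """Off-the-shelf driver component exposing the RangeScanner2d interface."""
    def get_scan(self) -> list[float]:
        return [1.0] * 181  # placeholder data standing in for hardware I/O

class LocalizerComponent:
    """Custom component that depends only on the interface, not on the driver."""
    def __init__(self, scanner: RangeScanner2d):
        self.scanner = scanner

    def update(self) -> None:
        scan = self.scanner.get_scan()
        # ... match the scan against a map and update the pose estimate ...

localizer = LocalizerComponent(SickLaserComponent())
localizer.update()
```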
IEEE Transactions on Robotics | 2008
Ian Mahon; Stefan B. Williams; Oscar Pizarro; Matthew Johnson-Roberson
This paper presents a simultaneous localization and mapping algorithm suitable for large-scale visual navigation. The estimation process is based on the viewpoint augmented navigation (VAN) framework using an extended information filter. Cholesky factorization modifications are used to maintain a factor of the VAN information matrix, enabling efficient recovery of state estimates and covariances. The algorithm is demonstrated using data acquired by an autonomous underwater vehicle performing a visual survey of sponge beds. Loop-closure observations produced by a stereo vision system are used to correct the estimated vehicle trajectory produced by dead reckoning sensors.
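The numerical point is that, with a Cholesky factor of the information matrix available, state estimates can be recovered with two triangular solves rather than a matrix inversion. A minimal sketch of that recovery step, in assumed notation and using SciPy (the real system maintains and modifies the factor incrementally rather than refactoring from scratch):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def recover_state(Y, b):
    """Solve Y x = b for the state estimate x, where Y is the (symmetric,
    positive-definite) information matrix and b the information vector."""
    R = cholesky(Y, lower=False)               # upper-triangular factor, Y = R.T @ R
    z = solve_triangular(R.T, b, lower=True)   # forward substitution
    x = solve_triangular(R, z, lower=False)    # back substitution
    return x, R
```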
International Conference on Robotics and Automation | 2000
Stefan B. Williams; Paul Newman; Gamini Dissanayake; Hugh F. Durrant-Whyte
We present results of the application of a simultaneous localisation and map building (SLAM) algorithm to estimate the motion of a submersible vehicle. Scans obtained from an on-board sonar are processed to extract stable point features in the environment. These point features are then used to build up a map of the environment while simultaneously providing estimates of the vehicle location. Results are shown from deployment in a swimming pool at the University of Sydney as well as from field trials in a natural environment along Sydney's coast. This work represents the first instance of a deployable underwater implementation of the SLAM algorithm.
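Feature-based SLAM of this kind is typically an extended Kalman filter over the vehicle pose and the point-feature positions. As a sketch of the machinery involved (generic textbook form, not the authors' implementation), a single range/bearing observation of a mapped sonar feature updates the joint state as follows:

```python
import numpy as np

def update(x, P, z, R, fi):
    """EKF-SLAM update for one range/bearing observation z = [r, theta].

    x: state [xv, yv, phi, ..., xf, yf, ...]; fi: index of the observed
    feature's x-coordinate in the state vector; R: observation noise."""
    xv, yv, phi = x[0], x[1], x[2]
    dx, dy = x[fi] - xv, x[fi + 1] - yv
    r = np.hypot(dx, dy)
    h = np.array([r, np.arctan2(dy, dx) - phi])        # predicted observation
    H = np.zeros((2, len(x)))
    H[:, 0:3] = [[-dx / r, -dy / r, 0.0],
                 [dy / r**2, -dx / r**2, -1.0]]
    H[:, fi:fi + 2] = [[dx / r, dy / r],
                       [-dy / r**2, dx / r**2]]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    nu = z - h
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))   # wrap the bearing innovation
    x = x + K @ nu
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```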
International Conference on Robotics and Automation | 2004
Stefan B. Williams; Ian Mahon
This paper presents results of the application of the simultaneous localisation and mapping algorithm to data collected by an unmanned underwater vehicle operating on the Great Barrier Reef in Australia. By fusing information from the vehicle's on-board sonar and vision systems, it is possible to use the highly textured reef to provide estimates of the vehicle motion as well as to generate models of the gross structure of the underlying reefs. Terrain-aided navigation promises to revolutionise the ability of marine systems to track underwater bodies in many applications. This work represents a crucial step in the development of underwater technologies capable of long-term, reliable deployment. Results of the application of this technique to the tracking of the vehicle position are shown.
IEEE Robotics & Automation Magazine | 2012
Stefan B. Williams; Oscar Pizarro; Michael V. Jakuba; Craig R. Johnson; N. S. Barrett; Russell C. Babcock; Gary A. Kendrick; Peter D. Steinberg; Andrew Heyward; Peter Doherty; Ian Mahon; Matthew Johnson-Roberson; Daniel Steinberg; Ariell Friedman
We have established an Australia-wide observation program that exhibits recent developments in autonomous underwater vehicle (AUV) systems to deliver precisely navigated time series benthic imagery at selected reference stations on Australia's continental shelf. These observations are designed to help characterize changes in benthic assemblage composition and cover derived from precisely registered maps collected at regular intervals. This information will provide researchers with the baseline ecological data necessary to make quantitative inferences about the long-term effects of climate change and human activities on the benthos. Incorporating a suite of observations that capitalize on the unique capabilities of AUVs into Australia's Integrated Marine Observing System (IMOS) [1] is providing a critical link between oceanographic and benthic processes. IMOS is a nationally coordinated program designed to establish and maintain the research infrastructure required to support Australia's marine science research. It has, and will maintain, a strategic focus on the impact of major boundary currents on continental shelf environments, ecosystems, and biodiversity. The IMOS AUV facility observation program is designed to generate physical and biological observations of benthic variables that cannot be cost-effectively obtained by other means.
International Conference on Robotics and Automation | 2002
Stefan B. Williams; Gamini Dissanayake; Hugh F. Durrant-Whyte
This paper presents a novel approach to the multi-vehicle simultaneous localisation and mapping (SLAM) problem that exploits the manner in which observations are fused into the global map of the environment to manage the computational complexity of the algorithm and improve the data association process. Rather than incorporating every observation directly into the global map of the environment, the constrained local submap filter (CLSF) relies on creating an independent, local submap of the features in the immediate vicinity of the vehicle. This local submap is then periodically fused into the global map of the environment. This representation is shown to reduce the computational complexity of maintaining the global map estimates as well as improving the data association process. This paper examines the prospect of applying the CLSF algorithm to the multi-vehicle SLAM problem.
International Conference on Robotics and Automation | 2002
Stefan B. Williams; Gamini Dissanayake; Hugh F. Durrant-Whyte
This paper presents a novel approach to the simultaneous localisation and mapping algorithm that exploits the manner in which observations are fused into the global map of the environment to manage the computational complexity of the algorithm and improve the data association process. Rather than incorporating every observation directly into the global map of the environment, the constrained local submap filter relies on creating an independent, local submap of the features in the immediate vicinity of the vehicle. This local submap is then periodically fused into the global map of the environment using appropriately formulated constraints between the common feature estimates. This approach is shown to be effective in reducing the computational complexity of maintaining the global map estimates as well as improving the data association process.
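The workflow can be pictured as: fuse observations into a small submap anchored at a local frame, then periodically transform that submap into the global frame and merge it. The sketch below is illustrative only; the merge is shown as a naive average, whereas the CLSF applies properly formulated constraints between the common feature estimates (and carries covariances, omitted here).

```python
import numpy as np

def transform_submap(global_pose, local_feats):
    """Express local-frame feature positions (N x 2) in the global frame.

    global_pose = (x, y, phi): pose of the local frame's origin in the global map."""
    x, y, phi = global_pose
    c, s = np.cos(phi), np.sin(phi)
    Rot = np.array([[c, -s], [s, c]])
    return (Rot @ np.asarray(local_feats).T).T + np.array([x, y])

def fuse_submap(global_map, global_pose, feat_ids, local_feats):
    """Merge a local submap into the global map (placeholder fusion rule)."""
    for fid, f in zip(feat_ids, transform_submap(global_pose, local_feats)):
        if fid in global_map:
            global_map[fid] = 0.5 * (global_map[fid] + f)  # stand-in for a
        else:                                              # constraint-based update
            global_map[fid] = f
    return global_map
```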
International Conference on Robotics and Automation | 2007
Alex Brooks; Tobias Kaupp; Alexei Makarenko; Stefan B. Williams; Anders Orebäck
This chapter describes Orca: an open-source project which applies Component-Based Software Engineering principles to robotics. It provides the means for defining and implementing interfaces such that components developed independently are likely to be interoperable. In addition, it provides a repository of free, reusable components. Orca attempts to be widely applicable by imposing minimal design constraints. This chapter describes lessons learned while using Orca and steps taken to improve the framework based on those lessons. Improvements revolve around middleware issues and the problems encountered while scaling to larger distributed systems. Results are presented from systems that were implemented.