Michael Bosse
Commonwealth Scientific and Industrial Research Organisation
Publications
Featured research published by Michael Bosse.
international conference on computer graphics and interactive techniques | 2001
Chris Buehler; Michael Bosse; Leonard McMillan; Steven J. Gortler; Michael F. Cohen
We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
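The camera-blending idea at the heart of this approach can be sketched in a few lines. The following is a minimal illustration under simplifying assumptions, not the paper's actual penalty function: each input camera is scored by the angle between its ray and the desired viewing ray, the k best are kept, and the weights are normalized so the k-th camera gets weight zero, which makes the blend change smoothly as cameras enter and leave the set. All names here are hypothetical.

```python
import math

def blend_weights(desired_ray, camera_rays, k=3):
    """Score each camera by angular deviation from the desired ray,
    keep the k best, and normalize so weights sum to one (the k-th
    camera receives weight zero for a smooth blending field)."""
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    penalties = sorted((angle(desired_ray, r), i)
                       for i, r in enumerate(camera_rays))[:k]
    thresh = penalties[-1][0]  # k-th smallest penalty
    raw = [(max(0.0, 1.0 - p / thresh) if thresh > 0 else 1.0, i)
           for p, i in penalties]
    total = sum(w for w, _ in raw)
    return {i: w / total for w, i in raw if w > 0}
```

A real implementation would fold in additional penalties (resolution, field of view) and evaluate this per surface point, but the normalization trick is the same.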
international conference on robotics and automation | 2003
Michael Bosse; Paul Newman; John J. Leonard; Martin Soika; Wendelin Feiten; Seth J. Teller
This paper describes Atlas, a hybrid metrical/topological approach to SLAM that achieves efficient mapping of large-scale environments. The representation is a graph of coordinate frames, with each vertex in the graph representing a local frame, and each edge representing the transformation between adjacent frames. In each frame, we build a map that captures the local environment and the current robot pose along with the uncertainties of each. Each map's uncertainties are modeled with respect to its own frame. Probabilities of entities with respect to arbitrary frames are generated by following a path formed by the edges between adjacent frames, computed via Dijkstra's shortest path algorithm. Loop closing is achieved via an efficient map matching algorithm. We demonstrate the technique running in real-time in a large indoor structured environment (2.2 km path length) with multiple nested loops using laser or ultrasonic ranging sensors.
The International Journal of Robotics Research | 2004
Michael Bosse; Paul Newman; John J. Leonard; Seth J. Teller
In this paper we describe Atlas, a hybrid metrical/topological approach to simultaneous localization and mapping (SLAM) that achieves efficient mapping of large-scale environments. The representation is a graph of coordinate frames, with each vertex in the graph representing a local frame and each edge representing the transformation between adjacent frames. In each frame, we build a map that captures the local environment and the current robot pose along with the uncertainties of each. Each map’s uncertainties are modeled with respect to its own frame. Probabilities of entities with respect to arbitrary frames are generated by following a path formed by the edges between adjacent frames, computed using either Dijkstra's shortest path algorithm or breadth-first search. Loop closing is achieved via an efficient map-matching algorithm coupled with a cycle verification step. We demonstrate the performance of the technique for post-processing large data sets, including an indoor structured environment (2.2 km path length) with multiple nested loops using laser or ultrasonic ranging sensors.
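The frame-graph idea can be illustrated with a small sketch. This is a hedged toy version (2-D poses, hypothetical names), not the Atlas implementation: vertices are local frames, edges carry relative transforms, and the pose of one frame with respect to another is obtained by composing transforms along a lowest-weight Dijkstra path.

```python
import heapq
import math

def compose(a, b):
    """Compose 2-D rigid transforms (x, y, theta): a then b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(t):
    """Inverse of a 2-D rigid transform."""
    x, y, th = t
    c, s = math.cos(th), math.sin(th)
    return (-(x * c + y * s), x * s - y * c, -th)

def frame_to_frame(graph, src, dst):
    """graph: {u: [(v, transform_u_to_v, weight), ...]}, with reverse
    edges stored using invert().  Runs Dijkstra from src, composing the
    relative transforms along the way; edge weight would typically grow
    with transform uncertainty."""
    pq = [(0.0, src, (0.0, 0.0, 0.0))]
    seen = set()
    while pq:
        cost, u, t = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            return t
        for v, t_uv, w in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, compose(t, t_uv)))
    return None  # dst unreachable from src
```

In the real system each frame also carries a local map and uncertainties, and the path cost reflects accumulated uncertainty rather than hop count.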
The International Journal of Robotics Research | 2015
Stefan Leutenegger; Simon Lynen; Michael Bosse; Roland Siegwart; Paul Timothy Furgale
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
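The bounded-window bookkeeping described above can be sketched as follows. This is a deliberately simplified illustration with hypothetical names: in a real estimator, dropping the oldest keyframe means marginalizing its states into a linearized prior; here that prior is represented only by a counter, to show the window management.

```python
from collections import deque

class KeyframeWindow:
    """Toy bounded keyframe window: inserting beyond the limit
    'marginalizes' the oldest keyframe (here just counted, standing
    in for the accumulated linear prior)."""

    def __init__(self, max_keyframes=7):
        self.max_keyframes = max_keyframes
        self.keyframes = deque()
        self.marginalized = 0  # stand-in for the marginalization prior

    def insert(self, frame_id, is_keyframe):
        if not is_keyframe:
            return  # non-keyframes are not kept in the window
        self.keyframes.append(frame_id)
        if len(self.keyframes) > self.max_keyframes:
            oldest = self.keyframes.popleft()
            self.marginalize(oldest)

    def marginalize(self, frame_id):
        # Real systems fold the dropped keyframe's landmarks and pose
        # into a prior term so the optimization stays bounded in size.
        self.marginalized += 1
```

The point of the structure is exactly what the abstract claims: the optimization never grows beyond a fixed number of keyframes, regardless of how long the trajectory is.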
IEEE Transactions on Robotics | 2012
Michael Bosse; Robert Zlot; Paul Flick
Three-dimensional perception is a key technology for many robotics applications, including obstacle detection, mapping, and localization. There exist a number of sensors and techniques for acquiring 3-D data, many of which have particular utility for various robotic tasks. We introduce a new design for a 3-D sensor system, constructed from a 2-D range scanner coupled with a passive linkage mechanism, such as a spring. By mounting the other end of the passive linkage mechanism to a moving body, disturbances resulting from accelerations and vibrations of the body propel the 2-D scanner in an irregular fashion, thereby extending the device's field of view outside of its standard scanning plane. The proposed 3-D sensor system is advantageous due to its mechanical simplicity, mobility, low weight, and relatively low cost. We analyze a particular implementation of the proposed device, which we call Zebedee, consisting of a 2-D time-of-flight laser range scanner rigidly coupled to an inertial measurement unit and mounted on a spring. The unique configuration of the sensor system motivates unconventional and specialized algorithms to be developed for data processing. As an example application, we describe a novel 3-D simultaneous localization and mapping solution in which Zebedee is mounted on a moving platform. Using a motion capture system, we have verified the positional accuracy of the sensor trajectory. The results demonstrate that the six-degree-of-freedom trajectory of a passive spring-mounted range sensor can be accurately estimated from laser range data and industrial-grade inertial measurements in real time and that a quality 3-D point cloud map can be generated concurrently using the same data.
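The geometric core of the idea is simple to illustrate: a 2-D scanner measures range and bearing in its own scan plane, and if the IMU supplies the scanner's orientation at each measurement time, each point can be rotated into a world-aligned frame, so the spring-driven wobble sweeps out 3-D coverage. The sketch below is illustrative only, not CSIRO's actual pipeline, and uses a standard ZYX (yaw–pitch–roll) rotation.

```python
import math

def rotation_zyx(roll, pitch, yaw):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def scan_point_to_3d(rng, bearing, roll, pitch, yaw):
    """Lift one (range, bearing) scan return into 3-D using the
    scanner's orientation at the measurement instant."""
    # Point in the scan plane (z = 0 in the scanner frame).
    p = (rng * math.cos(bearing), rng * math.sin(bearing), 0.0)
    R = rotation_zyx(roll, pitch, yaw)
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))
```

The actual Zebedee solution additionally estimates the full 6-DOF trajectory jointly from the laser and inertial data rather than trusting raw IMU orientation.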
international conference on robotics and automation | 2009
Michael Bosse; Robert Zlot
Scan-matching is a technique that can be used for building accurate maps and estimating vehicle motion by comparing a sequence of point cloud measurements of the environment taken from a moving sensor. One challenge that arises in mapping applications where the sensor motion is fast relative to the measurement time is that scans become locally distorted and difficult to align. This problem is common when using 3D laser range sensors, which typically require more scanning time than their 2D counterparts. Existing 3D mapping solutions either eliminate sensor motion by taking a “stop-and-scan” approach, or attempt to correct the motion in an open-loop fashion using odometric or inertial sensors. We propose a solution to 3D scan-matching in which a continuous 6DOF sensor trajectory is recovered to correct the point cloud alignments, producing locally accurate maps and allowing for a reliable estimate of the vehicle motion. Our method is applied to data collected from a 3D spinning lidar sensor mounted on a skid-steer loader vehicle to produce quality maps of outdoor scenes and estimates of the vehicle trajectory during the mapping sequences.
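The motion-correction step can be sketched compactly. This is a hedged toy version in 2-D with hypothetical names: each raw point carries a timestamp within the sweep, and applying the sensor pose interpolated at that timestamp "de-skews" the scan so the points align as if captured instantaneously. (The actual method recovers a continuous 6-DOF trajectory; naive linear interpolation of the angle stands in for proper rotation interpolation.)

```python
import math

def lerp_pose(p0, p1, alpha):
    """Linear interpolation of a 2-D pose (x, y, theta); a real
    continuous-time solver interpolates rotations properly."""
    return tuple(a + alpha * (b - a) for a, b in zip(p0, p1))

def deskew(points, t0, t1, pose0, pose1):
    """points: [(t, x, y)] in the sensor frame, captured between t0
    and t1 while the sensor moved from pose0 to pose1.  Each point is
    transformed by the pose interpolated at its own timestamp."""
    out = []
    for t, x, y in points:
        alpha = (t - t0) / (t1 - t0)
        px, py, th = lerp_pose(pose0, pose1, alpha)
        c, s = math.cos(th), math.sin(th)
        out.append((px + c * x - s * y, py + s * x + c * y))
    return out
```

Scan-matching then alternates between aligning the de-skewed points and refining the trajectory estimate used for the de-skewing.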
The International Journal of Robotics Research | 2008
Michael Bosse; Robert Zlot
Reliable data association techniques for simultaneous localization and mapping (SLAM) are necessary for the generation of large-scale maps in unstructured outdoor environments. Data association techniques are required at two levels: the local level, which forms the inner loop of the mapping algorithm, and the global level, where newly mapped areas are matched to previously mapped areas to detect repeated coverage and close loops. Local map building is achieved using a robust iterative scan matching technique incorporated into an extended Kalman filter where the state consists of the current pose and previous poses sampled periodically and at a fixed lag from the current time. The introduction of states at a fixed time lag significantly reduces the growth of errors in the location estimate and the resultant map. For global matching, we enhance existing histogram cross-correlation techniques, introducing entropy sequences of projection histograms and an exhaustive correlation approach for reliable matching in unstructured environments. This enables loop closure without depending on prior knowledge of map alignment. These data association techniques are incorporated into the Atlas SLAM framework, enabling the generation of accurate two-dimensional laser maps over tens of kilometers in challenging outdoor environments.
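The global-matching ingredients can be illustrated with a simplified sketch (hypothetical names, 1-D histograms): project map points onto an axis to form a histogram, score candidate shifts by exhaustive cross-correlation, and use histogram entropy as a distinctiveness measure, since peaky, low-entropy histograms correlate more reliably than flat ones.

```python
import math

def projection_histogram(points, axis, bin_size, n_bins):
    """Project 2-D points onto an axis and bin the projections."""
    hist = [0] * n_bins
    for p in points:
        proj = sum(a * b for a, b in zip(p, axis))
        b = int(proj / bin_size)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def entropy(hist):
    """Shannon entropy of a histogram (nats)."""
    total = sum(hist)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in hist if c > 0)

def best_shift(h1, h2):
    """Exhaustive circular cross-correlation: the shift of h2 that
    best aligns it with h1."""
    n = len(h1)
    scores = [(sum(h1[i] * h2[(i - s) % n] for i in range(n)), s)
              for s in range(n)]
    return max(scores)[1]
```

In the paper these ideas are extended to sequences of projection histograms over map sub-regions, which is what makes the matching robust in unstructured terrain.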
International Journal of Computer Vision | 2003
Seth J. Teller; Matthew E. Antone; Zachary Bodnar; Michael Bosse; Satyan R. Coorg; Manish Jethwa; Neel Master
We describe a dataset of several thousand calibrated, time-stamped, geo-referenced, high dynamic range color images, acquired under uncontrolled, variable illumination conditions in an outdoor region spanning several hundred meters. The image data is grouped into several regions which have little mutual inter-visibility. For each group, the calibration data is globally consistent on average to roughly five centimeters and 0.1°, or about four pixels of epipolar registration. All image, feature and calibration data is available for interactive inspection and downloading at http://city.lcs.mit.edu/data. Calibrated imagery is of fundamental interest in a variety of applications. We have made this data available in the belief that researchers in computer graphics, computer vision, photogrammetry and digital cartography will find it of value as a test set for their own image registration algorithms, as a calibrated image set for applications such as image-based rendering, metric 3D reconstruction, and appearance recovery, and as input for existing GIS applications.
computer vision and pattern recognition | 2001
Chris Buehler; Michael Bosse; Leonard McMillan
We consider the problem of video stabilization: removing unwanted image perturbations due to unstable camera motions. We approach this problem from an image-based rendering (IBR) standpoint. Given an unstabilized video sequence, the task is to synthesize a new sequence as seen from a stabilized camera trajectory. This task is relatively straightforward if one has a Euclidean reconstruction of the unstabilized camera trajectory and a suitable IBR algorithm. However, it is often not feasible to obtain a Euclidean reconstruction from an arbitrary video sequence. In light of this problem, we describe IBR techniques for non-metric reconstructions, which are often much easier to obtain since they do not require camera calibration. These rendering techniques are well suited to the video stabilization problem. The key idea behind our techniques is that all measurements are specified in the image space, rather than in the non-metric space.
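One image-space ingredient of stabilization is easy to sketch: smooth the jittery trajectory of tracked features, then synthesize the output so features follow the smoothed path. The sketch below is a minimal moving-average illustration with hypothetical names; it shows only the smoothing step, not the paper's non-metric IBR synthesis.

```python
def smooth_track(track, window=5):
    """Moving-average smoothing of an image-space feature track
    [(u, v), ...].  The stabilized view is rendered so the feature
    follows the smoothed path instead of the jittery observed one."""
    half = window // 2
    out = []
    for i in range(len(track)):
        lo, hi = max(0, i - half), min(len(track), i + half + 1)
        n = hi - lo
        out.append((sum(u for u, _ in track[lo:hi]) / n,
                    sum(v for _, v in track[lo:hi]) / n))
    return out
```

Working directly with such image-space quantities is what lets the method avoid a full Euclidean reconstruction of the camera path.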
The International Journal of Robotics Research | 2002
John J. Leonard; Richard J. Rikoski; Paul Newman; Michael Bosse
In this paper we present a technique for mapping partially observable features from multiple uncertain vantage points. The problem of concurrent mapping and localization (CML) is stated as follows. Starting from an initial known position, a mobile robot travels through a sequence of positions, obtaining a set of sensor measurements at each position. The goal is to process the sensor data to produce an estimate of the trajectory of the robot while concurrently building a map of the environment. In this paper, we describe a generalized framework for CML that incorporates temporal as well as spatial correlations. The representation is expanded to incorporate past vehicle positions in the state vector. Estimates of the correlations between current and previous vehicle states are explicitly maintained. This enables the consistent initialization of map features using data from multiple time steps. Updates to the map and the vehicle trajectory can also be performed in batches of data acquired from multiple vantage points. The method is illustrated with sonar data from a testing tank and via experiments with a B21 land mobile robot, demonstrating the ability to perform CML with sparse and ambiguous data.
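The state-augmentation step described above can be sketched directly. This is a hedged toy version with hypothetical names: a copy of the current pose block is appended to the state vector, and the corresponding rows and columns of the covariance are copied, so the correlations between the new (past) pose and every existing state are preserved exactly.

```python
def augment_state(x, P, pose_idx, pose_dim):
    """Delayed-state augmentation: append a copy of the pose block
    starting at pose_idx to the state x, and extend the covariance P
    (list-of-lists) with the matching rows/columns so cross-
    correlations with all existing states are retained."""
    x_new = x + x[pose_idx:pose_idx + pose_dim]
    # Extend every existing row with the pose columns...
    P_new = [row[:] + row[pose_idx:pose_idx + pose_dim] for row in P]
    # ...then append copies of the (already extended) pose rows.
    extra_rows = [P_new[r][:] for r in range(pose_idx, pose_idx + pose_dim)]
    return x_new, P_new + extra_rows
```

With past poses carried in the state like this, a feature observed from several vantage points can be initialized consistently once enough measurements accumulate, which is the mechanism the paper exploits for sparse and ambiguous sonar data.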