Publications


Featured research published by Ingmar Posner.


The International Journal of Robotics Research | 2009

Navigating, Recognizing and Describing Urban Spaces With Vision and Lasers

Paul Newman; Gabe Sibley; Mike Smith; Mark Cummins; Alastair Harrison; Chris Mei; Ingmar Posner; Robbie Shade; Derik Schroeter; Liz Murphy; Winston Churchill; Dave Cole; Ian D. Reid

In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.


IEEE Intelligent Vehicles Symposium | 2013

Toward automated driving in cities using close-to-market sensors: An overview of the V-Charge Project

Paul Timothy Furgale; Ulrich Schwesinger; Martin Rufli; Wojciech Waclaw Derendarz; Hugo Grimmett; Peter Mühlfellner; Stefan Wonneberger; Julian Timpner; Stephan Rottmann; Bo Li; Bastian Schmidt; Thien-Nghia Nguyen; Elena Cardarelli; Stefano Cattani; Stefan Brüning; Sven Horstmann; Martin Stellmacher; Holger Mielenz; Kevin Köser; Markus Beermann; Christian Häne; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Rene Iser; Rudolph Triebel; Ingmar Posner; Paul Newman; Lars C. Wolf; Marc Pollefeys

Future requirements for drastic reduction of CO2 production and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving, and increase the safety of motor vehicle travel. The current state-of-the-art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.


Robotics: Science and Systems | 2015

Voting for Voting in Online Point Cloud Object Detection

Dominic Zeng Wang; Ingmar Posner

This paper proposes an efficient and effective scheme for applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.
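The key idea above — that letting each occupied cell cast weighted votes into the detection windows that would contain it is exactly a convolution over the full grid — can be checked on a toy 2D example. The following sketch is illustrative only (the paper operates on 3D grids at multiple orientations); function names are hypothetical.

```python
import numpy as np

def vote_detect(occupied, weights, grid_shape):
    """Accumulate detection scores by letting each occupied cell cast
    weighted votes into every window anchor whose window covers it.
    `occupied` maps (i, j) -> feature value; `weights` is the detector
    kernel. Cost scales with the number of occupied cells, not grid size."""
    scores = np.zeros(grid_shape)
    kh, kw = weights.shape
    for (i, j), feat in occupied.items():
        for di in range(kh):
            for dj in range(kw):
                # A window anchored at (i - di, j - dj) sees cell (i, j)
                # at kernel offset (di, dj).
                a, b = i - di, j - dj
                if 0 <= a < grid_shape[0] and 0 <= b < grid_shape[1]:
                    scores[a, b] += weights[di, dj] * feat
    return scores

def dense_detect(grid, weights):
    """Dense equivalent: cross-correlate the full grid with the kernel."""
    H, W = grid.shape
    kh, kw = weights.shape
    out = np.zeros((H, W))
    for a in range(H):
        for b in range(W):
            for di in range(kh):
                for dj in range(kw):
                    if a + di < H and b + dj < W:
                        out[a, b] += weights[di, dj] * grid[a + di, b + dj]
    return out
```

On a mostly empty grid the two functions return identical score maps, but the voting version touches only the occupied cells — which is why sparsity makes the full 3D search tractable.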


International Conference on Robotics and Automation | 2012

What could move? Finding cars, pedestrians and bicyclists in 3D laser data

Dominic Zeng Wang; Ingmar Posner; Paul Newman

This paper tackles the problem of segmenting things that could move from 3D laser scans of urban scenes. In particular, we wish to detect instances of classes of interest in autonomous driving applications - cars, pedestrians and bicyclists - amongst significant background clutter. Our aim is to provide the layout of an end-to-end pipeline which, when fed by a raw stream of 3D data, produces distinct groups of points which can be fed to downstream classifiers for categorisation. We postulate that, for the specific classes considered in this work, solving a binary classification task (i.e. separating the data into foreground and background first) outperforms approaches that tackle the multi-class problem directly. This is confirmed using custom and third-party datasets of urban street scenes. While our system is agnostic to the specific clustering algorithm deployed, we explore the use of a Euclidean Minimum Spanning Tree for an end-to-end segmentation pipeline and devise a RANSAC-based edge selection criterion.
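The EMST-based segmentation step mentioned above can be sketched as: build a Euclidean minimum spanning tree over the foreground points, then cut edges that fail a selection criterion, leaving one connected component per object. This minimal sketch uses a fixed length threshold in place of the paper's RANSAC-based criterion; all names are hypothetical.

```python
import numpy as np

def emst_segments(points, max_edge):
    """Segment points by building a Euclidean minimum spanning tree
    (Prim's algorithm) and cutting edges longer than `max_edge`."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    in_tree = np.zeros(n, bool)
    in_tree[0] = True
    best = d[0].copy()            # cheapest connection of each node to the tree
    parent = np.zeros(n, int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((parent[j], j, best[j]))
        in_tree[j] = True
        closer = d[j] < best
        best[closer] = d[j][closer]
        parent[closer] = j
    # Union-find over the kept (short) edges yields the segments.
    label = np.arange(n)
    def find(x):
        while label[x] != x:
            label[x] = label[label[x]]
            x = label[x]
        return x
    for a, b, w in edges:
        if w <= max_edge:
            label[find(a)] = find(b)
    return np.array([find(i) for i in range(n)])
```

Each resulting segment is a candidate object that could then be handed to the downstream binary foreground/background classifier.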


Robotics: Science and Systems | 2008

Fast Probabilistic Labeling of City Maps

Ingmar Posner; Mark Cummins; Paul Newman

This paper introduces a probabilistic, two-stage classification framework for the semantic annotation of urban maps as provided by a mobile robot. During the first stage, local scene properties are considered using a probabilistic bag-of-words classifier. The second stage incorporates contextual information across a given scene via a Markov Random Field (MRF). Our approach is driven by data from an onboard camera and 3D laser scanner and uses a combination of appearance-based and geometric features. By framing the classification exercise probabilistically we are able to execute an information-theoretic bail-out policy when evaluating appearance-based class-conditional likelihoods. This efficiency, combined with low order MRFs resulting from our two-stage approach, allows us to generate scene labels at speeds suitable for online deployment and use. We demonstrate and analyze the performance of our technique on data gathered over almost 17 km of track through a city.
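The bail-out idea above — stop evaluating class-conditional likelihoods once the outcome can no longer change — can be illustrated with a simple margin-based variant. This is a simplification of the paper's information-theoretic policy, with hypothetical names throughout.

```python
def bailout_classify(word_log_liks, margin=10.0):
    """Accumulate per-class log-likelihoods word by word and bail out as
    soon as the gap between the best and second-best class exceeds the
    largest swing the remaining words could produce (here a fixed per-word
    `margin` bound; the paper derives its bound information-theoretically).
    `word_log_liks` is a list of dicts mapping class -> log p(word | class)."""
    classes = list(word_log_liks[0].keys())
    totals = {c: 0.0 for c in classes}
    for t, ll in enumerate(word_log_liks):
        for c in classes:
            totals[c] += ll[c]
        ranked = sorted(totals.values(), reverse=True)
        remaining = len(word_log_liks) - t - 1
        # If even `margin` per remaining word cannot close the gap, stop early.
        if ranked[0] - ranked[1] > margin * remaining:
            break
    return max(totals, key=totals.get)
```

The same answer is returned as a full evaluation would give, but a confident scene can be labelled after only a fraction of its features are scored — the source of the online speed-up.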


International Conference on Robotics and Automation | 2017

Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks

Martin Engelcke; Dushyant Rao; Dominic Zeng Wang; Chi Hay Tong; Ingmar Posner

This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.
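A single layer of the scheme described above pairs feature-centric voting with a ReLU so that the output stays sparse and the next layer can vote again. The toy sketch below is 2D and single-channel (Vote3Deep stacks 3D, multi-channel layers); names are hypothetical.

```python
import numpy as np

def sparse_conv_relu(occupied, weights, bias, grid_shape):
    """One voting layer: only occupied cells cast votes into a centred
    kernel footprint, then a ReLU with a (typically negative) bias drops
    weak activations so the intermediate representation remains sparse."""
    kh, kw = weights.shape
    votes = {}
    for (i, j), feat in occupied.items():
        for di in range(kh):
            for dj in range(kw):
                a, b = i - di + kh // 2, j - dj + kw // 2
                if 0 <= a < grid_shape[0] and 0 <= b < grid_shape[1]:
                    votes[(a, b)] = votes.get((a, b), 0.0) + weights[di, dj] * feat
    # ReLU: keep only positive post-bias activations.
    return {k: v + bias for k, v in votes.items() if v + bias > 0}

def l1_penalty(tensors, lam=1e-3):
    """L1 regulariser added to the training loss to encourage sparsity."""
    return lam * sum(np.abs(t).sum() for t in tensors)
```

Because votes only emanate from occupied cells and the ReLU re-sparsifies the output, the per-layer cost tracks the occupancy of the scene rather than the volume of the grid.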


Robotics and Autonomous Systems | 2008

Online generation of scene descriptions in urban environments

Ingmar Posner; Derik Schroeter; Paul Newman

The ability to extract a rich set of semantic workspace labels from sensor data gathered in complex environments is a fundamental prerequisite to any form of semantic reasoning in mobile robotics. In this paper, we present an online system for the augmentation of maps of outdoor urban environments with such higher-order, semantic labels. The system employs a shallow supervised classification hierarchy to classify scene attributes, consisting of a mixture of 2D/3D geometric and visual scene information, into a range of different workspace classes. The union of classifier responses yields a rich, composite description of the local workspace. We present extensive experimental results, using two large urban data sets collected by our research platform.


International Conference on Robotics and Automation | 2007

Describing Composite Urban Workspaces

Ingmar Posner; Derik Schroeter; Paul Newman

In this paper we present an appearance-based method for augmenting maps of outdoor urban environments with higher-order, semantic labels. Our motivation is to increase the value and utility of the typically low-level representations built by contemporary SLAM algorithms. A supervised learning scheme is employed to train a set of classifiers to respond to common scene attributes given a mixture of geometric and visual scene information. The union of classifier responses yields a composite description of the local workspace. We apply our method to three large data sets.


The International Journal of Robotics Research | 2015

Model-free detection and tracking of dynamic objects with 2D lidar

Dominic Zeng Wang; Ingmar Posner; Paul Newman

We present a new approach to detection and tracking of moving objects with a 2D laser scanner for autonomous driving applications. Objects are modelled with a set of rigidly attached sample points along their boundaries whose positions are initialized with and updated by raw laser measurements, thus allowing a non-parametric representation that is capable of representing objects independent of their classes and shapes. Detection and tracking of such object models are handled in a theoretically principled manner as a Bayes filter where the motion states and shape information of all objects are represented as a part of a joint state which includes in addition the pose of the sensor and geometry of the static part of the world. We derive the prediction and observation models for the evolution of the joint state, and describe how the knowledge of the static local background helps in identifying dynamic objects from static ones in a principled and straightforward way. Dealing with raw laser points poses a significant challenge to data association. We propose a hierarchical approach, and present a new variant of the well-known Joint Compatibility Branch and Bound algorithm to respect and take advantage of the constraints of the problem introduced through correlations between observations. Finally, we calibrate the system systematically on real world data containing 7,500 labelled object examples and validate on 6,000 test cases. We demonstrate its performance over an existing industry standard targeted at the same problem domain as well as a classical approach to model-free object tracking.
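The predict/update cycle over boundary sample points described above can be caricatured in a few lines. This sketch uses a constant-velocity prediction and simple gated nearest-neighbour association in place of the paper's joint-state Bayes filter and Joint Compatibility Branch and Bound variant; all names and parameters are illustrative.

```python
import numpy as np

def predict(points, velocity, dt):
    """Propagate an object's rigidly attached boundary sample points
    under a constant-velocity motion model."""
    return points + velocity * dt

def update(points, measurements, gate=0.5, alpha=0.3):
    """Nudge each predicted sample point toward its nearest raw laser
    return, provided the return falls inside the association gate.
    `alpha` plays the role of a (fixed) filter gain."""
    updated = points.copy()
    for k, p in enumerate(points):
        d = np.linalg.norm(measurements - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < gate:
            updated[k] = (1 - alpha) * p + alpha * measurements[j]
    return updated
```

The non-parametric boundary representation is what lets this style of tracker follow objects of any class or shape; the hard part, as the abstract notes, is associating raw points correctly, which the simple gate here only approximates.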


International Symposium on Experimental Robotics | 2008

Using Scene Similarity for Place Labelling

Ingmar Posner; Derik Schroeter; Paul Newman

This paper is about labelling regions of a mobile robot’s workspace using scene appearance similarity. We do this by operating on a single matrix which expresses the pairwise similarity between all captured scenes. We describe and motivate a sequence of algorithms which, in conjunction with spatial constraints provided by the continuous motion of the vehicle, produce meaningful workspace segmentations. We provide detailed experimental results from various outdoor trials.
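The core object above is the pairwise scene-similarity matrix, exploited together with the spatial continuity of the vehicle's trajectory. A minimal caricature of such a segmentation: start a new place label whenever the current scene is insufficiently similar to its predecessor. This sequential thresholding stands in for the paper's full algorithm sequence; names are hypothetical.

```python
import numpy as np

def segment_by_similarity(S, thresh):
    """Label a temporally ordered scene sequence from its pairwise
    similarity matrix S (S[i, j] = similarity of scenes i and j):
    open a new segment whenever consecutive scenes fall below `thresh`."""
    n = S.shape[0]
    labels = np.zeros(n, dtype=int)
    for i in range(1, n):
        labels[i] = labels[i - 1] + (0 if S[i, i - 1] >= thresh else 1)
    return labels
```

Using only the off-diagonal band encodes the spatial constraint from continuous motion; the full matrix additionally lets revisited places be merged, which a one-pass threshold cannot do.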
