Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Frank Moosmann is active.

Publications


Featured research published by Frank Moosmann.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Randomized Clustering Forests for Image Classification

Frank Moosmann; Eric Nowak; Frédéric Jurie

Some of the most effective recent methods for content-based image classification work by quantizing image descriptors and accumulating histograms of the resulting visual word codes. Large numbers of descriptors and large codebooks are required for good results, and this becomes slow with k-means clustering. We introduce Extremely Randomized Clustering Forests (ERC-Forests), ensembles of randomly created clustering trees, and show that they provide more accurate results, much faster training and testing, and good resistance to background clutter. Second, an efficient image classification method is proposed. It combines ERC-Forests and saliency maps very closely with the extraction of image information: for a given image, a classifier builds a saliency map online and uses it to classify the image. We show on several state-of-the-art image classification tasks that this method can speed up the classification process enormously. Finally, we show that the proposed ERC-Forests can also be used very successfully for learning distances between images. The distance computation algorithm consists of learning the characteristic differences between local descriptors sampled from pairs of the same or different objects. These differences are vector quantized by ERC-Forests, and the similarity measure is computed from this quantization. The similarity measure has been evaluated on four very different datasets and always outperforms state-of-the-art competing approaches.
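
The paper's exact tree construction and parameters are not reproduced here; the Python sketch below only illustrates the underlying idea of an extremely randomized clustering tree used as a vector quantizer (random split dimension, random threshold, leaves acting as visual words) and of accumulating a bag-of-visual-words histogram over a small forest. The class and function names, the depth limit, and the minimum node size are illustrative assumptions.

```python
import numpy as np

class RandomClusteringTree:
    """One extremely randomized clustering tree: every internal node tests a
    randomly chosen descriptor dimension against a randomly drawn threshold,
    and every leaf acts as one visual word."""

    def __init__(self, max_depth=8, min_samples=10, rng=None):
        self.max_depth = max_depth
        self.min_samples = min_samples
        self.rng = rng if rng is not None else np.random.default_rng()
        self.nodes = []      # ('split', dim, thr, left, right) or ('leaf', word_id)
        self.n_leaves = 0

    def fit(self, descriptors):
        self._grow(np.asarray(descriptors, dtype=float), depth=0)
        return self

    def _grow(self, X, depth):
        node_id = len(self.nodes)
        self.nodes.append(None)                     # reserve slot for this node
        if depth >= self.max_depth or len(X) < self.min_samples:
            self.nodes[node_id] = ('leaf', self.n_leaves)
            self.n_leaves += 1
            return node_id
        dim = int(self.rng.integers(X.shape[1]))    # random split dimension
        thr = self.rng.uniform(X[:, dim].min(), X[:, dim].max())
        mask = X[:, dim] < thr
        if mask.all() or not mask.any():            # degenerate split -> make a leaf
            self.nodes[node_id] = ('leaf', self.n_leaves)
            self.n_leaves += 1
            return node_id
        left = self._grow(X[mask], depth + 1)
        right = self._grow(X[~mask], depth + 1)
        self.nodes[node_id] = ('split', dim, thr, left, right)
        return node_id

    def leaf_index(self, x):
        node = self.nodes[0]
        while node[0] == 'split':
            _, dim, thr, left, right = node
            node = self.nodes[left] if x[dim] < thr else self.nodes[right]
        return node[1]


def bow_histogram(forest, descriptors):
    """Bag-of-visual-words histogram: each descriptor votes for one leaf per tree."""
    sizes = [t.n_leaves for t in forest]
    offsets = np.cumsum([0] + sizes[:-1])
    hist = np.zeros(int(sum(sizes)))
    for d in descriptors:
        for tree, off in zip(forest, offsets):
            hist[off + tree.leaf_index(d)] += 1.0
    return hist / max(len(descriptors), 1)
```

A forest would then be built as `forest = [RandomClusteringTree().fit(train_descriptors) for _ in range(5)]`, after which `bow_histogram(forest, image_descriptors)` yields the feature vector passed to an image-level classifier.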


IEEE Intelligent Vehicles Symposium | 2009

Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion

Frank Moosmann; Oliver Pink; Christoph Stiller

Present object detection methods working on 3D range data are so far optimized either for unstructured off-road environments or for flat urban environments. We present a fast algorithm able to deal with tremendous amounts of 3D lidar measurements. It uses a graph-based approach to segment ground and objects from 3D lidar scans using a novel unified, generic criterion based on local convexity measures. Experiments show good results in urban environments, including smoothly bent road surfaces.
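
The abstract does not spell out the convexity test itself. A common way to phrase such a criterion, shown below as a hedged sketch rather than the paper's exact formulation, is to accept two neighboring points as locally convex when each lies on or below the tangent plane of the other, up to a small angular tolerance.

```python
import numpy as np

def locally_convex(p1, n1, p2, n2, angle_eps_deg=5.0):
    """Return True if two neighboring 3D points (with unit surface normals)
    form a locally convex or flat pair, i.e. each point lies at or below the
    tangent plane of the other, within a small angular tolerance."""
    d = p2 - p1
    d = d / (np.linalg.norm(d) + 1e-12)          # unit direction from p1 to p2
    eps = np.sin(np.deg2rad(angle_eps_deg))      # tolerance for sensor noise
    return float(np.dot(n1, d)) <= eps and float(np.dot(n2, -d)) <= eps
```

Segmentation would then proceed by region growing on the scan's neighborhood graph, merging adjacent points whenever the test succeeds; concave transitions, such as the boundary between the ground and an object, fail the test and keep the segments apart.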


International Conference on Robotics and Automation | 2012

Automatic camera and range sensor calibration using a single shot

Andreas Geiger; Frank Moosmann; Omer Car; Bernhard Schuster

As a core robotics and vision problem, camera and range sensor calibration has been researched intensively over the last decades. However, robotics research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. To remove this burden, we present a toolbox with a web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use, since a single image and range scan prove sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state of the art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors such as grayscale and color cameras, the Kinect 3D sensor, and the Velodyne HDL-64 laser scanner show the robustness of our method in different indoor and outdoor settings and under various lighting conditions.
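
As a rough illustration of the camera-to-range part of such a calibration (not the paper's actual algorithm, which additionally handles intrinsics and ambiguous solutions), the snippet below computes the least-squares rigid transform between two sets of corresponding 3D points, e.g. checkerboard corners reconstructed by the camera and the same corners measured by the range sensor, using the standard SVD-based (Kabsch) solution.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    for two (N, 3) arrays of corresponding 3D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```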


IEEE Transactions on Intelligent Transportation Systems | 2012

Team AnnieWAY's Entry to the 2011 Grand Cooperative Driving Challenge

Andreas Geiger; Martin Lauer; Frank Moosmann; Benjamin Ranft; Holger H. Rapp; Christoph Stiller; Julius Ziegler

In this paper, we present the concepts and methods developed for the autonomous vehicle known as AnnieWAY, which is our winning entry to the 2011 Grand Cooperative Driving Challenge. We describe algorithms for sensor fusion, vehicle-to-vehicle communication, and cooperative control. Furthermore, we analyze the performance of the proposed methods and compare them with those of competing teams. We close with our results from the competition and lessons learned.


IEEE Intelligent Vehicles Symposium | 2008

Classification of weather situations on single color images

Martin Roser; Frank Moosmann

Present vision-based driver assistance systems are designed to perform under benign weather conditions. However, limited visibility caused by heavy rain or fog strongly affects vision systems. To improve machine vision in bad weather situations, a reliable detection system is necessary as a foundation. We present an approach that is able to distinguish between multiple weather situations based on the classification of single monocular color images, without any additional assumptions or prior knowledge. The proposed image descriptor clearly outperforms existing descriptors for this task. Experimental results on real traffic images are characterized by high accuracy, efficiency, and versatility with respect to driver assistance systems.
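
The descriptor proposed in the paper is not reproduced here. Purely as a sketch of the overall pipeline, the code below trains a multi-class SVM on a toy global color-histogram descriptor; the histogram layout, the SVM parameters, and labels such as "clear", "rain", or "fog" are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def color_histogram_descriptor(image, bins=8):
    """Toy global descriptor: a joint RGB histogram of the whole image,
    flattened and normalized (the paper's descriptor is considerably richer)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def train_weather_classifier(images, labels):
    """Fit a multi-class RBF-SVM on labelled example images."""
    X = np.vstack([color_histogram_descriptor(img) for img in images])
    return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, labels)
```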


IEEE Intelligent Vehicles Symposium | 2009

Visual features for vehicle localization and ego-motion estimation

Oliver Pink; Frank Moosmann; Alexander Bachmann

This paper introduces a novel method for vehicle pose estimation and motion tracking using visual features. The method combines ideas from research on visual odometry with a feature map that is automatically generated from aerial images into a Visual Navigation System. Given an initial pose estimate, e.g. from a GPS receiver, the system is capable of robustly tracking the vehicle pose in geographical coordinates over time, using image data as the only input. Experiments on real image data have shown that the precision of the position estimate with respect to the feature map typically lies within a few centimeters. This makes the algorithm interesting for a wide range of applications such as navigation, path planning, or lane keeping.


International Conference on Robotics and Automation | 2013

Joint self-localization and tracking of generic objects in 3D range data

Frank Moosmann; Christoph Stiller

Both the estimation of the trajectory of a sensor and the detection and tracking of moving objects are essential tasks for autonomous robots. This work proposes a new algorithm that treats both problems jointly. The sole input is a sequence of dense 3D measurements as returned by multi-layer laser scanners or time-of-flight cameras. A major characteristic of the proposed approach is its applicability to any type of environment, since specific object models are not used at any algorithm stage. More specifically, precise localization in non-flat environments is possible, as well as the detection and tracking of, e.g., trams or recumbent bicycles. Moreover, 3D shape estimation of moving objects is inherent to the proposed method. A thorough evaluation is conducted on a vehicular platform with a mounted Velodyne HDL-64E laser scanner.


IEEE Intelligent Vehicles Symposium | 2007

An integrated simulation framework for cognitive automobiles

Stefan Vacek; R. Nagel; T. Batz; Frank Moosmann

A cognitive automobile is a complex system, and simulations are valuable tools for the development and testing of such complex systems. This paper presents an integrated closed-loop simulation framework which supports the development of a cognitive automobile. The framework aims at simulating complex traffic scenes in inner-city environments. The key features of the simulation are the generation of synthetic data for high-level inference mechanisms, the provision of data for the analysis of car-to-car communication strategies, and the evaluation of cooperative vehicle behavior.


Intelligent Robots and Systems | 2010

Moving on to dynamic environments: Visual odometry using feature classification

Bernd Kitt; Frank Moosmann; Christoph Stiller

Visually estimating a robot's own motion has been an active field of research in recent years. Though impressive results have been reported, some application areas still exhibit huge challenges. Especially for car-like robots in urban environments, even the most robust estimation techniques fail due to a large portion of independently moving objects. Hence, we move one step further and propose a method that combines ego-motion estimation with low-level object detection. We specifically design the method to be general and applicable in real time. Pre-classifying interest points is a key step, which rejects matches on possibly moving objects and reduces the computational load of further steps. Employing an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme yields a robust frame-to-frame motion estimation even when many independently moving objects cover the image. Extensive experiments show the robustness of the proposed approach in highly dynamic environments at speeds of up to 20 m/s.
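
The filter itself (the Iterated Sigma Point Kalman Filter) is not sketched here. The snippet below only illustrates the RANSAC-based frame-to-frame step with generic OpenCV calls, including the idea of discarding matches that a prior classifier has flagged as lying on potentially moving objects; the function name and the static_mask argument are illustrative assumptions.

```python
import cv2

def frame_to_frame_motion(pts_prev, pts_cur, K, static_mask=None):
    """Estimate relative rotation R and translation direction t between two
    frames from matched image points (N, 2 arrays), rejecting outliers with
    RANSAC. Matches pre-classified as moving objects can be masked out."""
    if static_mask is not None:                  # drop matches on dynamic objects
        pts_prev, pts_cur = pts_prev[static_mask], pts_cur[static_mask]
    E, inliers = cv2.findEssentialMat(pts_prev, pts_cur, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_cur, K, mask=inliers)
    return R, t, inliers
```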


International Conference on Computer Vision | 2011

Unsupervised discovery of object classes in 3D outdoor scenarios

Frank Moosmann; Miro Sauerland

Designing object models for a robot's detection system can be very time-consuming since many object classes exist. This paper presents an approach that automatically infers object classes from recorded 3D data and collects training examples. A special focus is put on difficult unstructured outdoor scenarios with object classes ranging from cars and trees to buildings. In contrast to many existing works, it is not assumed that perfect segmentation of the scene is possible. Instead, a novel hierarchical segmentation method is proposed that works together with a novel inference strategy to infer object classes.
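
The paper's hierarchical segmentation and inference strategy are its core contributions and are not reconstructed here. As a rough sketch of the class-discovery step only, the code below clusters per-segment feature vectors into candidate object classes; the choice of features, the clustering method, and the fixed number of clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def discover_object_classes(segment_features, n_classes=10):
    """Group 3D segments into candidate object classes by clustering their
    feature vectors (e.g. bounding-box size, height above ground, shape
    statistics). Returns one class label per segment."""
    X = np.asarray(segment_features, dtype=float)
    # standardize each feature so no single dimension dominates the distances
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    return AgglomerativeClustering(n_clusters=n_classes, linkage='ward').fit_predict(X)
```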

Collaboration


Dive into Frank Moosmann's collaborations.

Top Co-Authors

Christoph Stiller, Karlsruhe Institute of Technology
Oliver Pink, Karlsruhe Institute of Technology
Alexander Bachmann, Karlsruhe Institute of Technology
Bernd Kitt, Karlsruhe Institute of Technology
Bernhard Schuster, Karlsruhe Institute of Technology
Holger H. Rapp, Karlsruhe Institute of Technology
Julius Ziegler, Karlsruhe Institute of Technology
Martin Lauer, Karlsruhe Institute of Technology
Martin Roser, Karlsruhe Institute of Technology