Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Matthias Luber is active.

Publication


Featured research published by Matthias Luber.


International Conference on Robotics and Automation (ICRA) | 2010

People tracking with human motion predictions from social forces

Matthias Luber; Johannes A. Stork; Gian Diego Tipaldi; Kai Oliver Arras

For many tasks in populated environments, robots need to keep track of current and future motion states of people. Most approaches to people tracking make weak assumptions on human motion such as constant velocity or acceleration. But even over a short period, human behavior is more complex and influenced by factors such as the intended goal, other people, objects in the environment, and social rules. This motivates the use of more sophisticated motion models for people tracking especially since humans frequently undergo lengthy occlusion events. In this paper, we consider computational models developed in the cognitive and social science communities that describe individual and collective pedestrian dynamics for tasks such as crowd behavior analysis. In particular, we integrate a model based on a social force concept into a multi-hypothesis target tracker. We show how the refined motion predictions translate into more informed probability distributions over hypotheses and finally into a more robust tracking behavior and better occlusion handling. In experiments in indoor and outdoor environments with data from a laser range finder, the social force model leads to more accurate tracking with up to two times fewer data association errors.
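
As an illustration of the kind of motion model the paper builds on, the sketch below shows a minimal social-force prediction step that could replace a constant-velocity prediction inside a tracker. It is not the authors' implementation; the force terms, parameter values, and function names are illustrative assumptions.

import numpy as np

def social_force_predict(pos, vel, goal, others, dt=0.1, tau=0.5,
                         v_desired=1.3, strength=2.0, falloff=0.5):
    """One social-force prediction step for a single tracked person.

    pos, vel, goal: (2,) arrays; others: list of (2,) positions of nearby people.
    All parameter values are illustrative, not those used in the paper.
    """
    # Driving force: relax towards the desired velocity pointing at the goal.
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    f_goal = (v_desired * direction - vel) / tau

    # Repulsive social forces from other people, decaying with distance.
    f_social = np.zeros(2)
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        f_social += strength * np.exp(-dist / falloff) * (diff / dist)

    # Integrate the resulting acceleration (unit mass) over one time step.
    new_vel = vel + (f_goal + f_social) * dt
    new_pos = pos + new_vel * dt
    return new_pos, new_vel

In a multi-hypothesis tracker, such a prediction would simply take the place of the constant-velocity prediction for each person hypothesis before data association.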


Intelligent Robots and Systems (IROS) | 2011

People tracking in RGB-D data with on-line boosted target models

Matthias Luber; Luciano Spinello; Kai Oliver Arras

People tracking is a key component for robots that are deployed in populated environments. Previous works have used cameras and 2D and 3D range finders for this task. In this paper, we present a 3D people detection and tracking approach using RGB-D data. We combine a novel multi-cue person detector for RGB-D data with an on-line detector that learns individual target models. The two detectors are integrated into a decisional framework with a multi-hypothesis tracker that controls on-line learning through a track interpretation feedback. For on-line learning, we take a boosting approach using three types of RGB-D features and a confidence maximization search in 3D space. The approach is general in that it neither relies on background learning nor a ground plane assumption. For the evaluation, we collect data in a populated indoor environment using a setup of three Microsoft Kinect sensors with a joint field of view. The results demonstrate reliable 3D tracking of people in RGB-D data and show how the framework is able to avoid drift of the on-line detector and increase the overall tracking performance.
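
The sketch below is a toy stand-in for the on-line boosted target model: fixed decision stumps over abstract RGB-D features whose error estimates, and hence voting weights, are updated on-line from track-labelled samples. The feature extraction and the confidence-maximization search in 3D are omitted, and all names and parameters are assumptions rather than the paper's implementation.

import numpy as np

class OnlineBoostedTargetModel:
    """Simplified on-line boosting over single-feature decision stumps."""

    def __init__(self, n_features, n_learners=50, seed=0):
        rng = np.random.default_rng(seed)
        self.dims = rng.integers(0, n_features, size=n_learners)   # feature index per stump
        self.thresholds = rng.normal(0.0, 1.0, size=n_learners)    # fixed stump thresholds
        self.polarity = rng.choice([-1.0, 1.0], size=n_learners)   # stump direction
        self.errors = np.full(n_learners, 0.5)                     # running error estimates

    def _votes(self, x):
        return self.polarity * np.where(x[self.dims] > self.thresholds, 1.0, -1.0)

    def update(self, x, label, lr=0.05):
        """label: +1 for a feature vector of the tracked person, -1 for background."""
        wrong = (self._votes(x) != label).astype(float)
        self.errors = (1.0 - lr) * self.errors + lr * wrong

    def confidence(self, x):
        eps = 1e-6
        alpha = 0.5 * np.log((1.0 - self.errors + eps) / (self.errors + eps))
        return float(alpha @ self._votes(x))

The track interpretation feedback described in the abstract would decide when update() is called with positive or negative labels, so that the model is only trained while the track is believed to be correct.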


International Conference on Robotics and Automation (ICRA) | 2008

Efficient people tracking in laser range data using a multi-hypothesis leg-tracker with adaptive occlusion probabilities

Kai Oliver Arras; Slawomir Grzonka; Matthias Luber; Wolfram Burgard

We present an approach to laser-based people tracking using a multi-hypothesis tracker that detects and tracks legs separately with Kalman filters, constant velocity motion models, and a multi-hypothesis data association strategy. People are defined as high-level tracks consisting of two legs that are found with little model knowledge. We extend the data association so that it explicitly handles track occlusions in addition to detections and deletions. Additionally, we adapt the corresponding probabilities in a situation-dependent fashion so as to reflect the fact that legs frequently occlude each other. Experimental results carried out with a mobile robot illustrate that our approach can robustly and efficiently track multiple people even in situations of high levels of occlusion.
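
A minimal constant-velocity Kalman filter step for a single leg track might look like the sketch below; the noise magnitudes and the time step are placeholder values, and the multi-hypothesis data association and occlusion handling around it are omitted.

import numpy as np

def cv_kalman_step(x, P, z, dt=0.1, q=0.5, r=0.05):
    """One predict/update cycle for a leg with state [px, py, vx, vy].

    z is the associated leg detection [px, py]; q and r are illustrative
    process and measurement noise magnitudes.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = q * np.eye(4)
    R = r * np.eye(2)

    # Predict with the constant-velocity motion model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update with the detected leg position.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

Under an occlusion hypothesis the update step would be skipped and only the prediction applied, with the occlusion probability adapted to the situation as described above.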


Intelligent Robots and Systems (IROS) | 2012

Socially-aware robot navigation: A learning approach

Matthias Luber; Luciano Spinello; Jens Silva; Kai Oliver Arras

The ability to act in a socially-aware way is a key skill for robots that share a space with humans. In this paper we address the problem of socially-aware navigation among people that meets objective criteria such as travel time or path length as well as subjective criteria such as social comfort. In contrast to the model-based approaches typically taken in related work, we pose the problem as an unsupervised learning problem. We learn a set of dynamic motion prototypes from observations of relative motion behavior of humans found in publicly available surveillance data sets. The learned motion prototypes are then used to compute dynamic cost maps for path planning using an any-angle A* algorithm. In the evaluation we demonstrate that the learned behaviors reproduce human relative motion better than a Proxemics-based baseline method with respect to both criteria.
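
The planning side of such an approach can be pictured as searching over a cost map that encodes social discomfort around people. The sketch below uses a plain 8-connected grid A* instead of the any-angle A* variant mentioned in the abstract, and the cost map itself (computed from the learned motion prototypes in the paper) is simply taken as input; everything here is an illustrative simplification.

import heapq
import numpy as np

def plan_on_cost_map(cost, start, goal):
    """A* over a 2D cost map; cost[i, j] is the extra traversal cost of a cell."""
    rows, cols = cost.shape

    def h(a):  # straight-line distance heuristic
        return np.hypot(a[0] - goal[0], a[1] - goal[1])

    open_heap = [(h(start), start)]
    g_best = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            break
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nb = (node[0] + dr, node[1] + dc)
                if not (0 <= nb[0] < rows and 0 <= nb[1] < cols):
                    continue
                step = np.hypot(dr, dc) * (1.0 + cost[nb])
                ng = g_best[node] + step
                if ng < g_best.get(nb, np.inf):
                    g_best[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), nb))

    if goal not in parent:
        return []
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

Because the cost map changes as people move, a planner like this would be re-run at a fixed rate so the path keeps respecting the predicted relative motion of nearby persons.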


International Conference on Robotics and Automation (ICRA) | 2011

Tracking people in 3D using a bottom-up top-down detector

Luciano Spinello; Matthias Luber; Kai Oliver Arras

People detection and tracking is a key component for robots and autonomous vehicles in human environments. While prior work mainly employed image or 2D range data for this task, in this paper, we address the problem using 3D range data. In our approach, a top-down classifier selects hypotheses from a bottom-up detector, both based on sets of boosted features. The bottom-up detector learns a layered person model from a bank of specialized classifiers for different height levels of people that collectively vote into a continuous space. Modes in this space represent detection candidates that each postulate a segmentation hypothesis of the data. In the top-down step, the candidates are classified using features that are computed in voxels of a boosted volume tessellation. We learn the optimal volume tessellation as it enables the method to stably deal with sparsely sampled and articulated objects. We then combine the detector with tracking in 3D for which we take a multi-target multi-hypothesis tracking approach. The method neither needs a ground plane assumption nor relies on background learning. The results from experiments in populated urban environments demonstrate 3D tracking and highly robust people detection up to 20 m with equal error rates of at least 93%.
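
The voting stage of such a bottom-up detector can be sketched as accumulating weighted votes from per-height-level classifiers into a horizontal grid and taking local maxima as detection candidates. The grid resolution, extent, and score threshold below are illustrative choices, and the learned classifiers and the volume tessellation of the top-down step are not shown.

import numpy as np

def vote_and_find_modes(votes, grid_res=0.2, grid_size=100, min_score=3.0):
    """votes: iterable of (x, y, weight) tuples in metres; returns candidate modes."""
    acc = np.zeros((grid_size, grid_size))
    for x, y, w in votes:
        i, j = int(x / grid_res), int(y / grid_res)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            acc[i, j] += w

    candidates = []
    for i in range(1, grid_size - 1):
        for j in range(1, grid_size - 1):
            patch = acc[i - 1:i + 2, j - 1:j + 2]
            # Keep cells that are strong enough and are the maximum of their neighbourhood.
            if acc[i, j] >= min_score and acc[i, j] == patch.max():
                candidates.append((i * grid_res, j * grid_res, float(acc[i, j])))
    return candidates

Each returned candidate would then induce a segmentation hypothesis over the 3D points that voted for it, which the top-down classifier accepts or rejects.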


The International Journal of Robotics Research | 2011

Place-dependent people tracking

Matthias Luber; Gian Diego Tipaldi; Kai Oliver Arras

People detection and tracking are important in many situations where robots and humans work and live together. But unlike targets in traditional tracking problems, people typically move and act under the constraints of their environment. The probabilities and frequencies for when people appear, disappear, walk or stand are not uniform but vary over space, making human behavior strongly place-dependent. In this paper we present a model that encodes spatial priors on human behavior and show how the model can be incorporated into a people-tracking system. We learn a non-homogeneous spatial Poisson process that improves data association in a multi-hypothesis target tracker through more informed probability distributions over hypotheses. We further present a place-dependent motion model whose predictions follow the space-usage patterns that people take and which are described by the learned spatial Poisson process. Large-scale experiments in different indoor and outdoor environments using laser range data demonstrate how both extensions lead to more accurate tracking behavior in terms of data-association errors and number of track losses. The extended tracker is also slightly more efficient than the baseline approach. The system runs in real-time on a typical desktop computer.
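
The core of the spatial prior can be illustrated with a cell-wise non-homogeneous Poisson rate map learned from observed appearance events; the grid parameters below are placeholders, and the paper additionally distinguishes different event types (appear, disappear, walk, stand).

import numpy as np

def learn_rate_map(events, observation_time, grid_res=0.5, grid_shape=(40, 40)):
    """events: list of (x, y) positions where new tracks appeared;
    observation_time: total observation time in seconds.
    Returns the estimated Poisson rate per cell (events per second)."""
    counts = np.zeros(grid_shape)
    for x, y in events:
        i, j = int(x / grid_res), int(y / grid_res)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            counts[i, j] += 1
    return counts / observation_time

def new_track_probability(rate_map, x, y, dt, grid_res=0.5):
    """Probability of at least one appearance in the cell containing (x, y)
    during an interval of length dt, under the learned Poisson rate."""
    i, j = int(x / grid_res), int(y / grid_res)
    lam = rate_map[i, j] * dt
    return 1.0 - np.exp(-lam)

Inside a multi-hypothesis tracker, place-dependent probabilities of this kind can replace uniform new-track and deletion priors when hypotheses are scored.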


Robot Soccer World Cup (RoboCup) | 2006

Successful search and rescue in simulated disaster areas

Alexander Kleiner; Michael Brenner; Tobias Bräuer; Christian Dornhege; Moritz Göbelbecker; Matthias Luber; Johann Prediger; Jörg Stückler; Bernhard Nebel

RoboCupRescue Simulation is a large-scale multi-agent simulation of urban disasters where, in order to save lives and minimize damage, rescue teams must effectively cooperate despite sensing and communication limitations. This paper presents the comprehensive search and rescue approach of the ResQ Freiburg team, the winner in the RoboCupRescue Simulation league at RoboCup 2004. Specific contributions include the predictions of travel costs and civilian life-time, the efficient coordination of an active disaster space exploration, as well as an any-time rescue sequence optimization based on a genetic algorithm. We compare the performances of our team and others in terms of their capability of extinguishing fires, freeing roads from debris, disaster space exploration, and civilian rescue. The evaluation is carried out with information extracted from simulation log files gathered during RoboCup 2004. Our results clearly explain the success of our team, and also confirm the scientific approaches proposed in this paper.
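
The any-time rescue sequence optimization can be pictured as a genetic algorithm over permutations of civilians to rescue; the sketch below uses plain truncation selection and swap mutation with a caller-supplied fitness function, which is a deliberate simplification of the operators and survival predictions used by the team.

import random

def optimize_rescue_sequence(civilians, fitness, generations=200, pop_size=40, seed=0):
    """civilians: list of civilian ids (at least two); fitness: callable mapping an
    ordering to a score to maximize, e.g. the predicted number of survivors."""
    rng = random.Random(seed)
    population = [rng.sample(civilians, len(civilians)) for _ in range(pop_size)]

    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            child = list(rng.choice(survivors))
            i, j = rng.sample(range(len(child)), 2)       # swap mutation keeps a valid permutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        # Any-time property: the best evaluated ordering so far is always available.
        population = survivors + children

    return max(population, key=fitness)

Because every intermediate population contains complete rescue orderings, the agents can act on the current best plan while the optimization keeps running.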


Autonomous Robots | 2009

Classifying dynamic objects

Matthias Luber; Kai Oliver Arras; Christian Plagemann; Wolfram Burgard

For robots operating in real-world environments, the ability to deal with dynamic entities such as humans, animals, vehicles, or other robots is of fundamental importance. The variability of dynamic objects, however, is large in general, which makes it hard to manually design suitable models for their appearance and dynamics. In this paper, we present an unsupervised learning approach to this model-building problem. We describe an exemplar-based model for representing the time-varying appearance of objects in planar laser scans as well as a clustering procedure that builds a set of object classes from given observation sequences. Extensive experiments in real environments demonstrate that our system is able to autonomously learn useful models for, e.g., pedestrians, skaters, or cyclists without being provided with external class information.
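
A heavily simplified view of the exemplar-based classes is sketched below: each track is summarized by a set of appearance feature vectors (its exemplars), tracks are compared by an average nearest-neighbour distance between their exemplar sets, and classes emerge from greedy threshold clustering. The distance, threshold, and clustering scheme are illustrative stand-ins for the procedure described in the paper.

import numpy as np

def exemplar_distance(exemplars_a, exemplars_b):
    """Symmetric average nearest-neighbour distance between two (n, d) exemplar sets."""
    d = np.linalg.norm(exemplars_a[:, None, :] - exemplars_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def cluster_tracks(track_exemplars, threshold=1.0):
    """Greedy threshold (leader) clustering of tracks into object classes."""
    labels = [-1] * len(track_exemplars)
    next_label = 0
    for i, ex_i in enumerate(track_exemplars):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, len(track_exemplars)):
            if labels[j] == -1 and exemplar_distance(ex_i, track_exemplars[j]) < threshold:
                labels[j] = next_label
        next_label += 1
    return labels

A new observation sequence could then be classified by computing the same distance to the exemplars of each learned class and picking the closest one.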


Robotics: Science and Systems (RSS) | 2013

Multi-Hypothesis Social Grouping and Tracking for Mobile Robots.

Matthias Luber; Kai Oliver Arras

Detecting and tracking people and groups of people is a key skill for robots in populated environments. In this paper, we address the problem of detecting and learning socio-spatial relations between individuals and tracking their group formations. In contrast to related work, we track and reason about multiple social grouping hypotheses in a recursive way, assume a mobile sensor that perceives the scene from a first-person perspective, and achieve good tracking performance in real-time using only 2D range data. The method, which relies on an extended multi-hypothesis tracking approach, also improves person-level tracking in two ways: the social grouping information is fed back to predict human motion over learned intra-group constraints and to support data association by adapting track-specific occlusion probabilities. Both measures lead to improved occlusion handling and a better trade-off between false negative and false positive tracks. In experiments with a mobile robot and on large-scale outdoor data sets, we demonstrate how the approach is able to model social grouping and to improve person tracking by a significant reduction of track identifier switches and false negative tracks.
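
A single-hypothesis simplification of the grouping idea is sketched below: people whose tracks stay close and move coherently are linked, and groups are the connected components of that graph. The thresholds are placeholders, and the recursive reasoning over multiple grouping hypotheses, the learned intra-group constraints, and the adapted occlusion probabilities are all omitted.

import numpy as np

def group_tracks(positions, velocities, max_dist=1.5, max_dvel=0.5):
    """positions, velocities: (n, 2) arrays of current person tracks.
    Returns one group label per track."""
    n = len(positions)
    related = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(positions[i] - positions[j]) < max_dist
            coherent = np.linalg.norm(velocities[i] - velocities[j]) < max_dvel
            related[i, j] = related[j, i] = close and coherent

    labels = [-1] * n
    label = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = label
        stack = [seed]
        while stack:                      # flood-fill one connected component
            k = stack.pop()
            for m in range(n):
                if related[k, m] and labels[m] == -1:
                    labels[m] = label
                    stack.append(m)
        label += 1
    return labels

In the full system, the group labels would feed back into the tracker by tightening motion predictions for group members and by raising occlusion probabilities when group members pass in front of each other.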


Robotics: Science and Systems (RSS) | 2008

Classifying Dynamic Objects: An Unsupervised Learning Approach.

Matthias Luber; Kai Oliver Arras; Christian Plagemann; Wolfram Burgard

For robots operating in real-world environments, the ability to deal with dynamic entities such as humans, animals, vehicles, or other robots is of fundamental importance. The variability of dynamic objects, however, is large in general, which makes it hard to manually design suitable models for their appearance and dynamics. In this paper, we present an unsupervised learning approach to this model-building problem. We describe an exemplar-based model for representing the time-varying appearance of objects in planar laser scans as well as a clustering procedure that builds a set of object classes from given training sequences. Extensive experiments in real environments demonstrate that our system is able to autonomously learn useful models for, e.g., pedestrians, skaters, or cyclists without being provided with external class information.

Collaboration


Dive into Matthias Luber's collaborations.
