Publication


Featured research published by Mathias Franzius.


International Conference on Intelligent Transportation Systems | 2010

System approach for multi-purpose representations of traffic scene elements

Jens Schmuedderich; Nils Einecke; Stephan Hasler; Alexander Gepperth; Bram Bolder; Robert Kastner; Mathias Franzius; Sven Rebhan; Benjamin Dittes; Heiko Wersing; Julian Eggert; Jannik Fritsch; Christian Goerick

A major step towards intelligent vehicles lies in the acquisition of an environmental representation of sufficient generality to serve as the basis for a multitude of different assistance-relevant tasks. This acquisition process must reliably cope with the variety of environmental changes inherent to traffic environments. As a step towards this goal, we present our most recent integrated system performing object detection in challenging environments (e.g., inner-city traffic or heavy rain). The system integrates unspecific and vehicle-specific methods for the detection of traffic scene elements, thus creating multiple object hypotheses. Each detection method is modulated by optimized models of typical scene context features, which are used to enhance and suppress hypotheses. A multi-object tracking and fusion process is applied to make the produced hypotheses spatially and temporally coherent. In extensive evaluations we show that the presented system successfully analyzes scene elements under diverse conditions, including challenging weather and changing scenarios. We demonstrate that the generic hypothesis representations allow successful application to a variety of tasks, including object detection, movement estimation, and risk assessment by time-to-contact evaluation.
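
The context-modulated fusion step can be illustrated compactly. The sketch below is a hypothetical rendering of the idea only; the class, function names, and the IoU-based merging rule are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of context-modulated hypothesis fusion. All names and the
# IoU-based merging rule are illustrative assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    box: tuple          # (x, y, w, h) in image coordinates
    confidence: float   # raw detector confidence
    source: str         # e.g. "generic" or "vehicle_specific" detector

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (aw * ah + bw * bh - inter + 1e-9)

def fuse(hypotheses, context_score, iou_thresh=0.5):
    """Modulate each hypothesis by a scene-context score, then keep the
    strongest hypothesis per image region (greedy non-maximum merge)."""
    for h in hypotheses:
        h.confidence *= context_score(h)   # enhance or suppress by context
    hypotheses.sort(key=lambda h: h.confidence, reverse=True)
    fused = []
    for h in hypotheses:
        if all(iou(h.box, f.box) < iou_thresh for f in fused):
            fused.append(h)
    return fused
```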


International Conference on Neural Information Processing | 2013

Outdoor Self-Localization of a Mobile Robot Using Slow Feature Analysis

Benjamin Metka; Mathias Franzius; Ute Bauer-Wersing

We apply slow feature analysis (SFA) to the problem of self-localization with a mobile robot. A similar unsupervised hierarchical model has previously been shown to extract a virtual rat's position as slowly varying features by directly processing the raw, high-dimensional views captured during a training run. The learned representations encode the robot's position, are orientation invariant, and resemble place cells in a rodent's hippocampus.
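
At its core, SFA finds functions of the input whose outputs change as slowly as possible over time. A minimal linear variant, sketched below, conveys the objective; the paper's model is hierarchical and nonlinear, so this simplified version is for illustration only:

```python
# Minimal linear SFA sketch: find projections of the input whose outputs
# vary as slowly as possible over time. The paper's model is hierarchical
# and nonlinear; this simplified version only illustrates the objective.
import numpy as np

def linear_sfa(X, n_components=2):
    """X: (T, D) time series of input vectors. Returns (T, n) slow signals."""
    X = X - X.mean(axis=0)
    # Whiten the input so all directions have unit variance.
    U, s, _ = np.linalg.svd(np.cov(X.T))
    Z = X @ (U / np.sqrt(s + 1e-9))
    # Among whitened directions, pick those whose temporal derivative
    # has the smallest variance: the slowest features.
    dZ = np.diff(Z, axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(dZ.T))
    return Z @ eigvecs[:, :n_components]   # eigh sorts ascending: slowest first
```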


International Conference on Artificial Neural Networks | 2016

Improving Robustness of Slow Feature Analysis Based Localization Using Loop Closure Events

Benjamin Metka; Mathias Franzius; Ute Bauer-Wersing

Hierarchical Slow Feature Analysis (SFA) extracts a spatial representation of the environment by directly processing images from a training run and has been shown to enable self-localization of a mobile robot by encoding its position as slowly varying features. However, in real-world outdoor scenarios other variables, such as global illumination or the location of dynamic objects, may vary on an equal or slower time scale than the position of the robot. To prevent these variables from being encoded, we propose to restructure the temporal order of training samples based on loop closures in the trajectory. Every time the robot passes a previously visited place, earlier recorded images are re-inserted to increase the temporal variation of environmental variables. This re-insertion acts as a feedback signal that, through the slowness objective, drives the model to produce similar outputs for revisited places. Experiments in a simulated outdoor environment demonstrate increased robustness, especially under changing lighting conditions.
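
A minimal sketch of the reordering, assuming the training trajectory is known so loop closures can be found by a simple distance test (the detection mechanism and parameters here are illustrative assumptions):

```python
# Sketch of the proposed reordering (hypothetical implementation): when the
# robot revisits a place, the earlier view of that place is re-inserted into
# the training stream so slowly drifting nuisance variables such as global
# illumination change quickly relative to position.
import numpy as np

def reorder_with_loop_closures(images, positions, radius=0.5, min_gap=50):
    """images: list of T frames; positions: (T, 2) training trajectory."""
    stream = []
    for t, img in enumerate(images):
        stream.append(img)
        past = positions[:max(0, t - min_gap)]    # ignore the recent past
        if len(past) > 0:
            d = np.linalg.norm(past - positions[t], axis=1)
            if d.min() < radius:                  # loop closure detected
                stream.append(images[int(d.argmin())])  # re-insert old view
    return stream
```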


International Conference on Advanced Robotics | 2015

Predicting the long-term robustness of visual features

Benjamin Metka; Annika Besetzny; Ute Bauer-Wersing; Mathias Franzius

Many vision-based localization methods extract local visual features to build a sparse map of the environment and estimate the position of the camera from feature correspondences. However, most features are typically only detectable over short time frames, so most information in the map becomes obsolete over longer periods of time. Long-term localization is therefore a challenging problem, especially in outdoor scenarios where the appearance of the environment can change drastically with the time of day, weather conditions, or seasonal effects. We propose to learn a model of stable and unstable feature characteristics from texture and color information around detected interest points, which makes it possible to predict the robustness of visual features. The model can be incorporated into the conventional feature extraction and matching process to reject potentially unstable features during the mapping phase. This additional filtering step yields more compact maps and therefore reduces the probability of false positive matches, which can cause complete failure of a localization system. The model is trained with recordings of a train journey on the same track across seasons, which facilitates the identification of stable and unstable features. Experiments on data of the same domain demonstrate the generalization capabilities of the learned characteristics.
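
One plausible shape of such a stability filter is sketched below; the specific descriptor (mean color plus gradient energy) and the classifier choice are assumptions for illustration, not the paper's exact pipeline:

```python
# Sketch of the stability filter. The descriptor (mean color plus gradient
# energy) and the classifier choice are assumptions for illustration; the
# paper learns stable/unstable characteristics from texture and color.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_descriptor(image, keypoint, half=16):
    """Mean color and gradient energy of a patch around keypoint (x, y)."""
    x, y = int(keypoint[0]), int(keypoint[1])
    patch = image[y - half:y + half, x - half:x + half].astype(float)
    gy, gx = np.gradient(patch.mean(axis=2))         # grayscale gradients
    return np.concatenate([patch.mean(axis=(0, 1)),  # color statistics
                           [np.mean(gx**2 + gy**2)]])  # texture energy

# Training data: a feature is labeled stable (1) if it was re-matched in
# recordings of the same track across seasons, unstable (0) otherwise.
clf = RandomForestClassifier(n_estimators=100)
# clf.fit(descriptors, labels)
# During mapping, keep a keypoint only if clf predicts it to be stable.
```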


International Conference on Multimodal Interfaces | 2011

Multimodal segmentation of object manipulation sequences with product models

Alexandra Barchunova; Robert Haschke; Mathias Franzius; Helge Ritter

In this paper we propose an approach for the unsupervised segmentation of continuous object manipulation sequences into semantically differing subsequences. The proposed method estimates segment borders based on an integrated consideration of three modalities (tactile feedback, hand posture, audio), yielding robust and accurate results in a single pass. To this end, a Bayesian approach, originally applied by Fearnhead to segment one-dimensional time series data, is extended to allow an integrated segmentation of multimodal sequences. We propose a joint product model which combines modality-specific likelihoods to model segments. Weight parameters control the influence of each modality within the joint model. We discuss the relevance of all modalities based on an evaluation of the temporal and structural correctness of segmentation results obtained from various weight combinations.
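
A product of weighted likelihoods is a weighted sum in log space, which is how the joint model is most naturally computed. The sketch below assumes placeholder Gaussian segment models and example weight values; both are illustrative, not the paper's settings:

```python
# Sketch of the joint product model: a segment's likelihood is the weighted
# product of per-modality likelihoods, i.e. a weighted sum in log space.
# The Gaussian segment model and the weight values are placeholders.
import numpy as np

def gaussian_segment_loglik(x):
    """Log-likelihood of segment x (T, D) under one fitted Gaussian."""
    mu, var = x.mean(axis=0), x.var(axis=0) + 1e-6
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def joint_segment_loglik(segment, weights):
    """segment: dict modality -> (T, D) array of observations."""
    return sum(w * gaussian_segment_loglik(segment[m])
               for m, w in weights.items())

weights = {"tactile": 1.0, "posture": 0.5, "audio": 0.25}  # example weights
```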


International Conference on Artificial Neural Networks | 2010

Learning invariant visual shape representations from physics

Mathias Franzius; Heiko Wersing

3D shape determines an object's physical properties to a large degree. In this article, we introduce an autonomous learning system for categorizing the 3D shape of simulated objects from single views. The system extends an unsupervised bottom-up learning architecture based on the slowness principle with top-down information derived from the physical behavior of objects. The unsupervised bottom-up learning leads to pose-invariant representations. Shape specificity is then integrated as top-down information from the movement trajectories of the objects. As a result, the system can categorize 3D object shape from a single static object view without supervised postprocessing.
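
As one illustration of how a movement trajectory can carry shape information, a simple statistic like the smoothness of rolling separates round from angular objects; this particular statistic is an assumption for illustration, not the paper's feature:

```python
# Sketch of a possible top-down signal (illustrative assumption): statistics
# of an object's simulated movement, e.g. how smoothly it rolls, act as a
# self-generated shape signal for the pose-invariant visual features.
import numpy as np

def rolling_smoothness(trajectory):
    """trajectory: (T, 3) object positions from the physics simulation.
    Round shapes roll smoothly; angular shapes move in jerky steps, so a
    low mean acceleration magnitude hints at a round shape."""
    accel = np.diff(trajectory, n=2, axis=0)
    return np.mean(np.linalg.norm(accel, axis=1))
```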


ACIT - Information and Communication Technology | 2010

Identification of High-Level Object Manipulation Operations from Multimodal Input

Alexandra Barchunova; Mathias Franzius; Michael Pardowitz; Helge Ritter

Object manipulation constitutes a large part of our daily hand movements. Recognition of such movements by a robot in an interactive scenario is an issue that is rapidly gaining attention. In this paper we present an approach to identifying a class of high-level manual object manipulations. Experiments have shown that a naive approach based on the classification of low-level sensor data yields poor performance. We therefore introduce a two-stage procedure that considerably improves identification performance. In the first stage, we estimate an intermediate representation by applying a linear preprocessor to the multimodal low-level sensor data. This mapping calculates estimates of the interaction object's shape, orientation, and weight. In the second stage, we train a classifier to identify high-level object manipulations from this intermediate representation. The devices used in our procedure are: an Immersion CyberGlove II enhanced with five tactile sensors on the fingertips (TouchGlove), nine tactile sensors to measure the change of the object's weight, and a VICON multi-camera system for trajectory recording. We achieved the following recognition rates for 3600 data samples representing a sequence of manual object manipulations: 100% correct labelling of “holding”, 97% of “pouring”, 81% of “squeezing” and 65% of “tilting”.
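
Structurally, the two stages compose a learned linear map with a standard classifier. The sketch below is a minimal rendering under that assumption; the linear map W and the logistic-regression classifier are illustrative stand-ins, not the paper's exact components:

```python
# Sketch of the two-stage procedure. The linear map W and the classifier
# choice are illustrative; the paper's preprocessor estimates shape,
# orientation and weight of the interaction object.
import numpy as np
from sklearn.linear_model import LogisticRegression

def intermediate_representation(raw, W):
    """raw: (T, D) multimodal sensor frames; W: (D, 3) linear map yielding
    per-frame [shape, orientation, weight] estimates."""
    return raw @ W

# Stage 2: identify the manipulation from the intermediate representation.
clf = LogisticRegression(max_iter=1000)
# clf.fit(intermediate_representation(train_raw, W), train_labels)
# predictions = clf.predict(intermediate_representation(test_raw, W))
```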


Field and Service Robotics | 2018

Boundary Wire Mapping on Autonomous Lawn Mowers

Nils Einecke; Jörg Deigmöller; Keiji Muro; Mathias Franzius

Currently, the service robot market mainly consists of floor cleaning and lawn mowing robots. While some cleaning robots already feature SLAM technology for the constrained indoor application, autonomous lawn mowers typically use an electric wire for boundary definition and for homing towards the charging station. An intermediate step towards SLAM for mowers is mapping the boundary wire. In this work, we analyze three types of approaches for estimating the boundary of the working area of an autonomous mower: GNSS, visual odometry, and wheel-yaw odometry. We extend the latter with an orientation loop closure, which gives the best overall result in estimating the metric shape of the boundary.
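
A minimal sketch of wheel-yaw dead reckoning with an orientation loop closure follows. The linear distribution of the heading residual over the loop is an assumption for illustration; the paper's formulation may differ:

```python
# Sketch of wheel-yaw odometry with an orientation loop closure. The linear
# distribution of the heading residual is an assumption for illustration.
import numpy as np

def integrate_odometry(speeds, yaws, dt):
    """Dead reckoning: speeds (T,) in m/s and yaws (T,) in radians give a
    (T, 2) metric path."""
    steps = np.stack([speeds * np.cos(yaws), speeds * np.sin(yaws)], axis=1)
    return np.cumsum(steps * dt, axis=0)

def close_orientation_loop(yaws, turns=1):
    """After driving the boundary loop once, the heading must have changed
    by exactly turns * 2*pi; spread the residual over all samples."""
    residual = (yaws[-1] - yaws[0]) - turns * 2 * np.pi
    return yaws - np.linspace(0.0, residual, len(yaws))
```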


PLOS ONE | 2018

Bio-inspired visual self-localization in real world scenarios using Slow Feature Analysis

Benjamin Metka; Mathias Franzius; Ute Bauer-Wersing

We present a biologically motivated model for visual self-localization which extracts a spatial representation of the environment directly from high-dimensional image data by employing a single unsupervised learning rule. The resulting representation encodes the position of the camera as slowly varying features while being invariant to its orientation, resembling place cells in a rodent's hippocampus. Using an omnidirectional mirror makes it possible to manipulate the image statistics by adding simulated rotational movement for improved orientation invariance. We apply the model in indoor and outdoor experiments and, for the first time, compare its performance against two state-of-the-art visual SLAM methods. The results show that the proposed straightforward model enables precise self-localization with accuracies in the range of 13-33 cm, demonstrating its competitiveness with the established SLAM methods in the tested scenarios.
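
The rotation augmentation has a simple mechanical reading: an unwrapped omnidirectional view turns an in-place rotation into a circular shift of image columns. A sketch under that assumption (the function and parameters are illustrative):

```python
# Sketch of the rotation augmentation: with an unwrapped omnidirectional
# view, in-place rotations can be simulated by circularly shifting image
# columns, pushing the learned features toward orientation invariance.
import numpy as np

def simulate_rotations(panorama, n_views=8):
    """panorama: (H, W, C) unwrapped omnidirectional image; returns n_views
    copies rotated by equally spaced angles."""
    width = panorama.shape[1]
    shifts = np.linspace(0, width, n_views, endpoint=False).astype(int)
    return [np.roll(panorama, s, axis=1) for s in shifts]
```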


Computer Vision and Pattern Recognition | 2017

Embedded Robust Visual Obstacle Detection on Autonomous Lawn Mowers

Mathias Franzius; Mark Dunn; Nils Einecke; Roman Dirnberger

Currently, the only mass-market service robots are floor cleaners and lawn mowers. Although available for more than 20 years, they mostly lack the intelligent functions of modern robot research. In particular, obstacle detection and avoidance is typically reduced to simple physical collision detection. In this work, we discuss a prototype autonomous lawn mower with camera-based, non-contact obstacle avoidance. We devised a low-cost compact module consisting of color cameras and an ARM-based processing board, which can be added to an autonomous lawn mower with minimal effort. To test our system, we conducted a field test with 20 prototype units distributed across eight European countries, with a total mowing time of 3,494 hours. The results show that our proposed system is able to work without expert interaction for a full season and strongly reduces collision events while maintaining good mowing performance. Furthermore, a questionnaire with the testers revealed that most people would favor the camera-based mower over a non-camera-based one.
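
The abstract does not specify the detection algorithm, so the sketch below shows only one plausible low-cost baseline for such a system, a color-based "not grass" test using the common excess-green index; it is not the prototype's actual method:

```python
# A plausible baseline for camera-based obstacle detection on a mower,
# not the prototype's actual algorithm: flag image regions that do not
# look like grass, using the common excess-green index.
import numpy as np

def obstacle_mask(rgb, threshold=0.05):
    """rgb: (H, W, 3) float image in [0, 1]. True where 'not grass'."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    excess_green = 2.0 * g - r - b      # high for grass-like pixels
    return excess_green < threshold     # low greenness -> potential obstacle
```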

Collaboration


Dive into Mathias Franzius's collaborations.

Top Co-Authors

Niko Wilbert
Humboldt University of Berlin

Robert A. Legenstein
Graz University of Technology