Thomas Reineking
University of Bremen
Publications
Featured research published by Thomas Reineking.
International Journal of Approximate Reasoning | 2016
Joachim Clemens; Thomas Reineking; Tobias Kluth
Probability theory has become the standard framework in the field of mobile robotics because of the inherent uncertainty associated with sensing and acting. In this paper, we show that the theory of belief functions, with its ability to distinguish between different types of uncertainty, is able to provide significant advantages over probabilistic approaches in the context of robotics. We do so by presenting solutions to the essential problems of simultaneous localization and mapping (SLAM) and planning based on belief functions. For SLAM, we show how the joint belief function over the map and the robot's poses can be factored and efficiently approximated using a Rao-Blackwellized particle filter, resulting in a generalization of the popular probabilistic FastSLAM algorithm. Our SLAM algorithm produces occupancy grid maps where belief functions explicitly represent additional information about missing and conflicting measurements compared to probabilistic grid maps. Forward and inverse sensor models form the basis of this SLAM algorithm, and we present general evidential models for range sensors like sonar and laser scanners. Using the generated evidential grid maps, we show how optimal decisions can be made for path planning and active exploration. To demonstrate the effectiveness of our evidential approach, we apply it to two real-world datasets where a mobile robot has to explore unknown environments and solve different planning problems. Finally, we provide a quantitative evaluation and show that the evidential approach outperforms a probabilistic one both in terms of map quality and navigation performance.
- A belief-function-based approach to SLAM for mobile robots is presented.
- Different types of uncertainty are explicitly represented in evidential grid maps.
- Optimal navigation and exploration based on evidential grid maps is shown.
- Evidential forward and inverse models for range sensors are provided.
- The approach is evaluated using real-world datasets recorded by a mobile robot.
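The core evidential operation behind such grid maps is combining, per cell, the mass function produced by an inverse sensor model with the cell's accumulated belief. The following is a minimal Python sketch of that idea, not the authors' implementation; the two-element frame encoding ("O" occupied, "F" free, "OF" the whole frame, i.e. ignorance) and the variable names are illustrative assumptions.

```python
# Hypothetical sketch of an evidential grid-cell update on the frame
# {occupied, free}. A mass function is a dict over the focal sets
# "O" (occupied), "F" (free), and "OF" (the whole frame = ignorance).

def combine_dempster(m1, m2):
    """Dempster's rule of combination for the 2-element frame {O, F}."""
    raw = {"O": 0.0, "F": 0.0, "OF": 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == "OF":
                inter = b          # OF ∩ X = X
            elif b == "OF":
                inter = a
            elif a == b:
                inter = a
            else:
                inter = None       # O ∩ F = ∅ → conflicting mass
            if inter is None:
                conflict += wa * wb
            else:
                raw[inter] += wa * wb
    # Normalize by the non-conflicting mass (standard Dempster's rule).
    k = 1.0 - conflict
    return {s: w / k for s, w in raw.items()}

cell = {"O": 0.0, "F": 0.0, "OF": 1.0}         # fully ignorant cell
measurement = {"O": 0.6, "F": 0.1, "OF": 0.3}  # inverse sensor model output
cell = combine_dempster(cell, measurement)
```

Note that, unlike a probabilistic occupancy value, the updated cell keeps an explicit ignorance component `cell["OF"]`, which is what lets the map distinguish "never observed" from "observed to be 50/50".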
International Conference on Spatial Cognition | 2008
Thomas Reineking; Christian Kohlhagen; Christoph Zetzsche
Humans utilize region-based hierarchical representations in the context of navigation. We propose a computational model for representing region hierarchies and define criteria for automatically generating them. We devise a cognitively plausible online wayfinding algorithm exploiting the hierarchical decomposition given by regions. The algorithm allows an agent to derive plans with decreasing detail level along paths, enabling the agent to obtain the next action in logarithmic time and complete solutions in almost linear time. The resulting paths are reasonable approximations of optimal shortest paths.
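A minimal Python sketch of the region idea, under simplifying assumptions (a fixed two-level hierarchy, unweighted graphs, and BFS instead of the paper's algorithm): plan coarsely over regions, and keep fine-grained detail only where it is needed. The graph data and function names are illustrative, not taken from the paper.

```python
from collections import deque

def bfs_path(adj, start, goal):
    """Shortest path by breadth-first search in an unweighted graph."""
    prev = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

# Two-level hierarchy: a coarse region graph and a fine place graph.
region_of = {"a": "R1", "b": "R1", "c": "R2", "d": "R2"}
region_adj = {"R1": ["R2"], "R2": ["R1"]}
place_adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def hierarchical_plan(start, goal):
    """Coarse plan over regions plus a fine plan; in an online setting
    only the current leg of the coarse plan would be refined."""
    coarse = bfs_path(region_adj, region_of[start], region_of[goal])
    fine = bfs_path(place_adj, start, goal)
    return coarse, fine
```

The benefit described in the abstract comes from deferring the fine search: the region-level plan is short, so the next action is available quickly, and detailed planning happens region by region as the agent moves.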
International Journal of Approximate Reasoning | 2016
Thomas Reineking
Obtaining reliable estimates of the parameters of a probabilistic classification model is often a challenging problem because the amount of available training data is limited. In this paper, we present a classification approach based on belief functions that makes the uncertainty resulting from limited amounts of training data explicit and thereby improves classification performance. In addition, we model classification as an active information acquisition problem where features are sequentially selected by maximizing the expected information gain with respect to the current belief distribution, thus reducing uncertainty as quickly as possible. For this, we consider different measures of uncertainty for belief functions and provide efficient algorithms for computing them. As a result, only a small subset of features needs to be extracted without negatively impacting the recognition rate. We evaluate our approach on an object recognition task where we compare different evidential and Bayesian methods for obtaining likelihoods from training data and we investigate the influence of different uncertainty measures on the feature selection process.
- An active classification approach based on belief functions is presented.
- The amount of available training data is reflected by the classification model.
- An information gain strategy for actively selecting features is proposed.
- Algorithms for efficiently computing different belief-function-based uncertainty measures are provided.
- The effectiveness of the approach is demonstrated in an application to object recognition.
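To make "different measures of uncertainty for belief functions" concrete, here is a small Python sketch of two standard such measures; these are common textbook measures, not necessarily the exact set used in the paper, and the mass-function encoding (frozensets mapped to weights) is an illustrative choice.

```python
import math

# A mass function is a dict mapping frozenset focal elements to masses.

def nonspecificity(m):
    """N(m) = sum_A m(A) * log2|A| — the imprecision of the evidence.
    Zero for Bayesian (singleton-only) mass functions."""
    return sum(w * math.log2(len(A)) for A, w in m.items() if w > 0)

def pignistic_entropy(m):
    """Shannon entropy of the pignistic transform BetP, which spreads
    each focal mass uniformly over its elements."""
    betp = {}
    for A, w in m.items():
        for x in A:
            betp[x] = betp.get(x, 0.0) + w / len(A)
    return -sum(p * math.log2(p) for p in betp.values() if p > 0)

m = {frozenset({"cat"}): 0.5, frozenset({"cat", "dog"}): 0.5}
```

In an active setting, such measures play the role that entropy plays in a Bayesian model: the expected reduction of the chosen measure scores how informative extracting a candidate feature would be.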
BELIEF 2014 Proceedings of the Third International Conference on Belief Functions: Theory and Applications - Volume 8764 | 2014
Joachim Clemens; Thomas Reineking
We present an evidential multi-sensor fusion approach for navigating a maneuverable ice probe designed for extraterrestrial sample analysis missions. The probe is equipped with a variety of sensors and has to estimate its own position within the ice as well as a map of its surroundings. The sensor fusion is based on an evidential SLAM approach which produces evidential occupancy grid maps that contain more information about the environment compared to probabilistic grid maps. We describe the different sensor models underlying the algorithm and we present empirical results obtained under controlled conditions in order to analyze the effectiveness of the proposed multi-sensor fusion approach. In particular, we show that the localization error is significantly reduced by combining multiple sensors.
BELIEF 2014 Proceedings of the Third International Conference on Belief Functions: Theory and Applications - Volume 8764 | 2014
Thomas Reineking; Kerstin Schill
This paper presents an object recognition approach based on belief function inference and information gain maximization. A common problem for probabilistic object recognition models is that the parameters of the probability distributions cannot be accurately estimated using the available training data due to high dimensionality. We therefore use belief functions in order to make the reliability of the evidence provided by the training data an explicit part of the recognition model. In contrast to typical classification approaches, we consider recognition as a sequential information-gathering process where a system with dynamic beliefs actively seeks to acquire new evidence. This acquisition process is based on the principle of maximum expected information gain and enables the system to perform optimal actions for reducing uncertainty as quickly as possible. We evaluate our system on a standard object recognition dataset where we investigate the effect of the amount of training data on classification performance by comparing different methods for constructing belief functions from data.
Cognitive Processing | 2009
Johannes Wolter; Thomas Reineking; Christoph Zetzsche; Kerstin Schill
The concept of place is essential to the way humans represent and interact with spatial environments. This raises the question of how "being at a place" can be inferred from sensory information. The investigation of place cells, for example, indicates the importance of visual cues for the robust localization of rodents (O'Keefe and Dostrovsky 1971); however, the exact processing mechanisms remain unclear. The activation of a place cell is primarily determined by the animal's location. Typically, it is independent of the orientation and other conditions like illumination. This kind of independence from certain aspects of the sensory input is a key challenge in the field of pattern recognition, where it is referred to as invariance. A prominent example is recognizing objects invariantly under transformations resulting from changes in the perspective on the object (for example, recognizing the tree in both Fig. 1a and d). In this paper, our goal is to investigate the invariance properties specific to place recognition in order to draw conclusions about the suitability of different image processing techniques. The variance in the visual input perceived at a specific place mainly results from minor changes of the observer's orientation, whereas the variance between places results from changes in position. Strictly speaking, any change in position leads to another place, but, for most purposes, the granularity of a place as a local environment, like in place cells, is desirable. The question thus is: what are the consequences of the different changes for the projection of the environment on the retina (see Fig. 1)? Changes in position orthogonal to the viewing direction (e.g., taking a step to the left) roughly correspond to a translation of the projected pattern. The extent of the translation depends on the depth structure of the perceived scene (motion parallax), which can lead to occlusions and distortions.
Changes in position along the viewing direction lead to changes in scale and similar occlusions/distortions. Changes in the observer's orientation relate to a translation of the pattern on the retina, including minor distortions depending on the lens and on deviations of the rotation axis from the nodal point. Overall, place recognition should be invariant under minor translations of the perceived pattern, but it should be selective with respect to major changes in scale or occlusions resulting from translation by the observer. By contrast, object recognition approaches are typically designed for achieving invariance under changes in scale, partial occlusions and out-of-plane rotations. Having specified the invariance requirements for place recognition, we now try to relate the possible approaches to place recognition to the broader realm of pattern recognition. For this, we suggest a conceptual space with three basic dimensions. In the first dimension, we distinguish between local and global (holistic) approaches. A local representation relies on information extracted at single points or regions in the image instead of processing the visual input as a whole. Such regions of interest (ROI) are ideally highly informative and can be associated with an actual object/landmark or with some other measure like curvature or saliency; see Zetzsche and Barth (1990); Lowe (2004). Object or landmark regions are less common due to the higher computational complexity and limited robustness of their detection. The detection is followed by the extraction of a feature vector describing the ROI. By contrast, global approaches process the visual input as a whole.
International Conference on Spatial Cognition | 2014
Thomas Reineking; Joachim Clemens
We show how a SLAM algorithm based on belief function theory can produce evidential occupancy grid maps that provide a mobile robot with additional information about its environment. While uncertainty in probabilistic grid maps is usually measured by entropy, we show that for evidential grid maps, uncertainty can be expressed in a three-dimensional space and we propose appropriate measures for quantifying uncertainty in these different dimensions. We analyze these measures in a practical mapping example containing typical sources of uncertainty for SLAM. As a result of the evidential representation, the robot is able to distinguish between different sources of uncertainty (e.g., a lack of measurements vs. conflicting measurements) which are indistinguishable in the probabilistic framework.
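To illustrate the idea of a multi-dimensional uncertainty space, here is a hedged Python sketch of one plausible three-way decomposition for a single evidential grid cell; the paper proposes its own measures, and the specific split below (ignorance, conflict, probabilistic ambiguity), the "E" entry for retained empty-set mass, and all names are illustrative assumptions.

```python
import math

# A cell holds masses on "O" (occupied), "F" (free), "OF" (the whole
# frame), and optionally "E" (empty set) if unnormalized conjunctive
# combination is used, so that conflicting mass is kept rather than
# normalized away.

def uncertainty_profile(cell):
    """Illustrative three-way split (not the paper's exact measures)."""
    ignorance = cell.get("OF", 0.0)   # no evidence either way
    conflict = cell.get("E", 0.0)     # contradictory measurements
    # Residual probabilistic ambiguity of the committed mass:
    o, f = cell.get("O", 0.0), cell.get("F", 0.0)
    if o + f > 0:
        p = o / (o + f)
        ambiguity = 0.0 if p in (0.0, 1.0) else \
            -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    else:
        ambiguity = 0.0
    return {"ignorance": ignorance, "conflict": conflict,
            "ambiguity": ambiguity}
```

The point of such a profile is exactly the distinction made in the abstract: an unvisited cell scores high on ignorance, a cell hit by contradictory scans scores high on conflict, and a probabilistic map would collapse both to the same 0.5 occupancy value.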
International Conference on Spatial Cognition | 2010
Thomas Reineking; Johannes Wolter; Konrad Gadzicki; Christoph Zetzsche
Determining one's position within the environment is a basic feature of spatial behavior and spatial cognition. This task is of inherently sensorimotor nature in that it results from a combination of sensory features and motor actions, where the latter comprise exploratory movements to different positions in the environment. Biological agents achieve this in a robust and effortless fashion, which prompted us to investigate a bio-inspired architecture to study the localization process of an artificial agent which operates in virtual spatial environments. The spatial representation in this architecture is based on sensorimotor features that comprise sensory features as well as motor actions. It is hierarchically organized, and its structure can be learned in an unsupervised fashion by an appropriate clustering rule. In addition, the architecture has a temporal belief update mechanism which explicitly utilizes the statistical correlations of actions and locations. The architecture is hybrid in integrating bottom-up processing of sensorimotor features with top-down reasoning, which is able to select optimal motor actions based on the principle of maximum information gain. The architecture operates on two sensorimotor levels: a macro-level, which controls the movements of the agent in space, and a micro-level, which controls its eye movements. As a result, the virtual mobile agent is able to localize itself within an environment using a minimum number of exploratory actions.
International Conference on Spatial Cognition | 2014
David Nakath; Tobias Kluth; Thomas Reineking; Christoph Zetzsche; Kerstin Schill
Spatial interaction of biological agents with their environment is based on the cognitive processing of sensory as well as motor information. There are many models for purely sensory processing, but only a few that integrate sensory and motor information into a unifying sensorimotor approach. Moreover, neither the relations shaping this integration nor the way the integrated information can be used in an underlying representation are yet clear. Therefore, we propose a probabilistic model for integrated processing of sensory and motor information by combining bottom-up feature extraction and top-down action selection embedded in a Bayesian inference approach. The integration of sensory perceptions and motor information brings about two main advantages: (i) their statistical dependencies can be exploited by representing the spatial relationships of the sensor information in the underlying joint probability distribution, and (ii) a top-down process can compute the next most informative region according to an information gain strategy. We evaluated our system in two different object recognition tasks. We found that the integration of sensory and motor information significantly improves active object recognition, in particular when the movements have been chosen by an information gain strategy.
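The "information gain strategy" recurring in these abstracts has a compact Bayesian core: score each candidate action by the expected reduction in entropy of the class posterior. Below is a minimal Python sketch of that scoring, under the simplifying assumption of discrete classes and observations; the function and variable names are illustrative, not the paper's.

```python
import math

def entropy(p):
    """Shannon entropy of a distribution given as {outcome: prob}."""
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

def posterior(prior, lik):
    """Bayes update; lik maps class -> P(obs | class)."""
    post = {c: prior[c] * lik[c] for c in prior}
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

def expected_info_gain(prior, obs_models):
    """Expected entropy reduction for one candidate action, where
    obs_models[obs][c] = P(obs | c) under that action."""
    h0 = entropy(prior)
    gain = 0.0
    for lik in obs_models.values():
        p_obs = sum(prior[c] * lik[c] for c in prior)
        if p_obs > 0:
            gain += p_obs * (h0 - entropy(posterior(prior, lik)))
    return gain
```

Action selection is then simply the argmax of `expected_info_gain` over the available movements; a perfectly discriminative observation yields the full prior entropy as gain, while an observation independent of the class yields zero.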
International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management | 2009
Thomas Reineking; Niclas Schult; Joana Hois
One of the reasons why humans are so successful at interpreting everyday situations is that they are able to combine disparate forms of knowledge. Most artificial systems, by contrast, are restricted to a single representation and hence fail to utilize the complementary nature of multiple sources of information. In this paper, we introduce an information-driven scene categorization system that integrates common sense knowledge provided by a domain ontology with a learned statistical model in order to infer a scene class from recognized objects. We show how the unspecificity of coarse logical constraints and the uncertainty of statistical relations and the object detection process can be modeled using Dempster-Shafer theory and derive the resulting belief update equations. In addition, we define an uncertainty minimization principle for adaptively selecting the most informative object detectors and present classification results for scenes from the LabelMe image database.
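The fusion of a coarse logical constraint with graded statistical evidence can be sketched with Dempster's rule on an arbitrary frame. The following Python snippet is an illustrative toy, not the paper's belief update equations; the scene classes, masses, and the "stove implies indoor" constraint are invented for the example.

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's rule on an arbitrary frame; a mass function maps
    frozenset focal elements to weights."""
    raw, conflict = {}, 0.0
    for (A, wa), (B, wb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize away the conflicting mass.
    return {A: w / (1.0 - conflict) for A, w in raw.items()}

SCENES = frozenset({"kitchen", "office", "street"})
# Coarse ontological constraint: a detected stove implies an indoor scene
# (mass on a *set* of classes expresses unspecificity, not probability).
m_logic = {frozenset({"kitchen", "office"}): 0.9, SCENES: 0.1}
# Learned statistical evidence from the same object detection.
m_stat = {frozenset({"kitchen"}): 0.6, SCENES: 0.4}
belief = dempster(m_logic, m_stat)
```

The key point matches the abstract: the ontology contributes mass on coarse sets (it only rules classes out), the statistical model contributes mass on specific classes, and Dempster-Shafer theory lets both live in one belief state.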