Publications


Featured research published by Markus Eisenbach.


Advanced Video and Signal Based Surveillance | 2012

View Invariant Appearance-Based Person Reidentification Using Fast Online Feature Selection and Score Level Fusion

Markus Eisenbach; Alexander Kolarow; Konrad Schenk; Klaus Debes; Horst-Michael Gross

Fast and robust person reidentification is an important task in multi-camera surveillance and automated access control. We present an efficient appearance-based algorithm that can reidentify a person regardless of occlusions, distance to the camera, and changes in view and lighting. The use of fast online feature selection techniques enables us to perform reidentification in hyper-real-time for a multi-camera system, taking only 10 seconds to evaluate 100 minutes of HD video data. We demonstrate that our approach surpasses the current appearance-based state of the art in reidentification quality and computational speed and sets a new reference in non-biometric reidentification.
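The online feature selection idea can be illustrated with a minimal sketch: for each query, keep only the feature channels that separate the query most clearly from the gallery before fusing their scores. The selection criterion and data layout below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def select_discriminative_features(query, gallery, k=10):
    """Return indices of the k feature channels where the query deviates most
    from the gallery distribution (illustrative selection criterion only)."""
    # query: (F,) feature vector; gallery: (N, F) feature vectors of known persons
    z = np.abs(query - gallery.mean(axis=0)) / (gallery.std(axis=0) + 1e-6)
    return np.argsort(z)[-k:]
```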


International Symposium on Neural Networks | 2015

Evaluation of multi feature fusion at score-level for appearance-based person re-identification

Markus Eisenbach; Alexander Kolarow; Alexander Vorndran; Julia Niebling; Horst-Michael Gross

Robust appearance-based person re-identification can only be achieved by combining multiple diverse features describing the subject. Since individual features perform differently, combining them is not trivial. Often this problem is bypassed by concatenating all feature vectors and learning a distance metric for the combined feature vector. However, to perform well, metric learning approaches need many training samples, which are not available in most real-world applications. In contrast, our approach performs score-level fusion to combine the matching scores of different features. To evaluate which score-level fusion techniques perform best for appearance-based person re-identification, we examine several score normalization and feature weighting approaches on the widely used and very challenging VIPeR dataset. Experiments show that when fusing a large ensemble of features, the proposed score-level fusion approach outperforms linear metric learning approaches that fuse at feature level. Furthermore, a combination of linear metric learning and score-level fusion even outperforms the currently best non-linear kernel-based metric learning approaches, regarding both accuracy and computation time.
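The abstract compares several score normalization and feature weighting schemes; the sketch below shows the general shape of score-level fusion, assuming min-max normalization and fixed weights, which are not necessarily the variants the paper found best.

```python
import numpy as np

def minmax_normalize(scores):
    """Map raw matching scores of one feature to [0, 1] so that differently
    scaled features become comparable before fusion."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def score_level_fusion(per_feature_scores, weights):
    """Weighted-sum fusion of normalized per-feature matching scores.

    per_feature_scores: list of (N,) arrays, one per feature, scoring N gallery persons
    weights:            (F,) array of feature weights
    """
    normalized = np.stack([minmax_normalize(s) for s in per_feature_scores])  # (F, N)
    fused = weights @ normalized                                              # (N,)
    return np.argsort(fused)[::-1], fused   # ranking of gallery persons, fused scores
```

In such a scheme the weights are typically chosen from how well each feature separates matching and non-matching pairs on a validation set; features that separate poorly receive small weights instead of being dropped entirely.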


KI'11: Proceedings of the 34th Annual German Conference on Advances in Artificial Intelligence | 2011

Comparison of laser-based person tracking at feet and upper-body height

Konrad Schenk; Markus Eisenbach; Alexander Kolarow; Horst-Michael Gross

In this paper, a systematic comparative analysis of laser-based tracking methods at feet and upper-body height is performed. To this end, we created a well-defined dataset, including challenging but realistic person movement trajectories that occur in public operational environments, recorded with multiple laser range finders. In order to evaluate and compare the tracking results, we applied and adapted a performance metric known from the computer vision field. The dataset in combination with this performance metric enables us to perform systematic and repeatable experiments for benchmarking laser-based person trackers.


Autonomous Robots | 2017

ROREAS: robot coach for walking and orientation training in clinical post-stroke rehabilitation--prototype implementation and evaluation in field trials

Horst-Michael Gross; Andrea Scheidig; Klaus Debes; Erik Einhorn; Markus Eisenbach; Steffen Mueller; Thomas Schmiedel; Thanh Q. Trinh; Christoph Weinrich; Tim Wengefeld; Andreas Bley; Christian Märtin

This paper describes the objectives and the state of implementation of the ROREAS project, which aims at developing a socially assistive robot coach for walking and orientation training of stroke patients in clinical rehabilitation. The robot coach is to autonomously accompany patients during the exercises in which they practice their mobility skills. This requires strongly user-centered, polite, and attentive social navigation and interaction abilities that can motivate the patients to start, continue, and regularly repeat their self-training. The paper gives an overview of the training scenario and describes the constraints and requirements arising from the scenario and the operational environment. Moreover, it presents the mobile robot ROREAS and gives an overview of the robot's system architecture and the required human- and situation-aware navigation and interaction skills. Finally, it describes our three-stage approach to conducting function and user tests in the clinical environment: pre-tests with technical staff, followed by function tests with clinical staff and user trials with volunteers from the group of stroke patients. It also presents the results of the tests conducted so far.


Intelligent Robots and Systems | 2012

Vision-based hyper-real-time object tracker for robotic applications

Alexander Kolarow; Michael Brauckmann; Markus Eisenbach; Konrad Schenk; Erik Einhorn; Klaus Debes; Horst-Michael Gross

Fast vision-based object and person tracking is important for various applications in mobile robotics and human-robot interaction. While current state-of-the-art methods use descriptive features for visual tracking, we propose a novel approach using a sparse template-based feature set drawn from homogeneous regions on the object to be tracked. Using only a small number of simple features without complex descriptors, in combination with logarithmic search, the tracker runs in hyper-real-time on HD images without parallelized hardware. Detailed benchmark experiments show that it outperforms most other state-of-the-art approaches for real-time object and person tracking in quality and runtime. In the experiments we also show the robustness of the tracker and evaluate the effects of different initialization methods, feature sets, and parameters. Although we focus on the scenario of person and object tracking in robot applications, the proposed tracker can be used for a variety of other tracking tasks.
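The combination of a sparse pixel template with a logarithmic (coarse-to-fine) search can be sketched as follows. This is a minimal illustration assuming grayscale images, a plain sum-of-absolute-differences cost, and a translation-only search; the paper's feature selection from homogeneous regions and its motion model are richer than this.

```python
import numpy as np

def sparse_cost(image, template_vals, offsets, cx, cy):
    """Sum of absolute differences between the stored sparse template samples and
    the image sampled at candidate position (cx, cy).

    offsets: (K, 2) int array of (dx, dy) sample positions relative to the center
    template_vals: (K,) gray values recorded at those offsets when initializing
    """
    ys = np.clip(cy + offsets[:, 1], 0, image.shape[0] - 1)
    xs = np.clip(cx + offsets[:, 0], 0, image.shape[1] - 1)
    return np.abs(image[ys, xs] - template_vals).sum()

def log_search(image, template_vals, offsets, x0, y0, step=32):
    """Logarithmic search: evaluate a 3x3 neighborhood at the current step size,
    move to the best candidate, then halve the step until it reaches 1 pixel."""
    x, y = x0, y0
    while step >= 1:
        candidates = [(x + dx * step, y + dy * step)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = min(candidates,
                   key=lambda p: sparse_cost(image, template_vals, offsets, p[0], p[1]))
        step //= 2
    return x, y
```

Because only a few dozen pixels are touched per candidate and the number of candidates grows logarithmically with the search radius, the per-frame cost stays far below that of dense descriptor matching.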


Intelligent Robots and Systems | 2015

User recognition for guiding and following people with a mobile robot in a clinical environment

Markus Eisenbach; Alexander Vorndran; Sven Sorge; Horst-Michael Gross

Rehabilitative follow-up care is important for stroke patients to regain their motor and cognitive skills. We aim to develop a robotic rehabilitation assistant for walking exercises in the late stages of rehabilitation. The robotic rehab assistant is to accompany inpatients during their self-training, practicing both mobility and spatial orientation skills. To keep contact with the patient, even after temporary full occlusions, robust user re-identification is essential. Therefore, we implemented a person re-identification module that continuously re-identifies the patient while using only a small share of the robot's processing resources. It is robust to varying illumination and occlusions. State-of-the-art performance is confirmed on a standard benchmark dataset as well as on a recorded scenario-specific dataset. Additionally, the benefit of using a visual re-identification component is verified by live tests with the robot in a stroke rehab clinic.


Advanced Video and Signal Based Surveillance | 2013

APFel: The intelligent video analysis and surveillance system for assisting human operators

Alexander Kolarow; Konrad Schenk; Markus Eisenbach; Michael Dose; Michael Brauckmann; Klaus Debes; Horst-Michael Gross

The rising need for security in recent years has led to an increased use of surveillance cameras in both public and private areas. The increasing amount of footage makes it necessary to assist human operators with automated systems that monitor and analyze the video data in reasonable time. In this paper we summarize our work of the past three years in the field of intelligent and automated surveillance. Our proposed system extends the common active monitoring of camera footage into an intelligent, automated investigative person search and walk-path reconstruction of a selected person within hours of image data. Our system is evaluated and tested under lifelike conditions in real-world surveillance scenarios. Our experiments show that with our system an operator can reconstruct a case in a fraction of the time needed to manually search the recorded data.


Intelligent Robots and Systems | 2012

Automatic calibration of a stationary network of laser range finders by matching movement trajectories

Konrad Schenk; Alexander Kolarow; Markus Eisenbach; Klaus Debes; Horst-Michael Gross

Laser-based detection and tracking of persons can be used for numerous tasks. While a single laser range finder (LRF) is sufficient for detecting and tracking persons on a mobile robot platform, a network of multiple LRFs is required to observe persons in larger spaces. Calibrating multiple LRFs into a global coordinate system is usually done by hand in a time-consuming procedure. An automatic calibration mechanism for such a sensor network is introduced in this paper. Without the need of prior knowledge about the environment, this mechanism is able to obtain the positions and orientations of all LRFs in a global coordinate system. By comparing the person tracks determined by each individual LRF unit and matching them, constraints between the LRF units can be calculated. We are able to estimate the poses of all LRFs by resolving these constraints. We evaluate and compare our method to the current state-of-the-art approach, both methodically and experimentally. Experiments show that our calibration approach outperforms it.
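Once corresponding track points from two LRFs have been associated, the relative pose between the two sensors can be estimated with a standard 2-D rigid alignment (Kabsch/Procrustes). The sketch below shows only that single constraint-estimation step and assumes the correspondences are already given; the paper's track matching and global resolution of all pairwise constraints are not shown.

```python
import numpy as np

def estimate_relative_pose(points_a, points_b):
    """Estimate rotation R and translation t so that points_b[i] ~ R @ points_a[i] + t.

    points_a, points_b: (N, 2) arrays of corresponding track points observed by
    LRF A and LRF B (already associated in time).
    """
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)   # centroids
    H = (points_a - ca).T @ (points_b - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```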


International Joint Conference on Neural Networks | 2016

Cooperative multi-scale Convolutional Neural Networks for person detection

Markus Eisenbach; Daniel Seichter; Tim Wengefeld; Horst-Michael Gross

Robust person detection is required by many computer vision applications. We present a deep learning approach that combines three Convolutional Neural Networks to detect people at different scales; this is the first time that a multi-resolution model has been combined with deep learning techniques in the pedestrian detection domain. The networks learn features from raw pixel information, which is also rare for pedestrian detection. Due to the use of multiple Convolutional Neural Networks at different scales, the learned features are specific for far, medium, and near scales respectively, and thus the overall performance is improved. Furthermore, we show that neural approaches can also be applied successfully to the remaining processing steps of classification and non-maximum suppression. The evaluation on the most popular Caltech pedestrian detection benchmark shows that the proposed method can compete with state-of-the-art methods without using Caltech training data and without fine-tuning. This shows that our method generalizes well to domains it was not trained on.
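A minimal sketch of the multi-resolution idea, assuming PyTorch, three small illustrative CNNs, and a simple routing rule based on proposal height; the layer configuration and the height thresholds are assumptions for illustration, not the networks from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_scale_net(h, w):
    """A small binary pedestrian classifier for one scale range (input size h x w)."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * (h // 4) * (w // 4), 64), nn.ReLU(),
        nn.Linear(64, 2),
    )

# One network per scale range: (name, minimum proposal height in pixels, input size, net).
SCALE_NETS = [
    ("near",   128, (128, 64), make_scale_net(128, 64)),
    ("medium",  64, (64, 32),  make_scale_net(64, 32)),
    ("far",      0, (32, 16),  make_scale_net(32, 16)),
]

def pedestrian_probability(crop, height_px):
    """Route a proposal crop (3 x H x W tensor) to the network responsible for
    its scale and return the pedestrian probability."""
    for _, min_h, size, net in SCALE_NETS:
        if height_px >= min_h:
            x = F.interpolate(crop.unsqueeze(0), size=size)  # resize to the net's input
            return torch.softmax(net(x), dim=1)[0, 1]
```

Training each network only on proposals from its own height range is what lets the learned filters specialize for far, medium, and near pedestrians instead of averaging over all scales.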


Advanced Video and Signal Based Surveillance | 2012

Automatic Calibration of Multiple Stationary Laser Range Finders Using Trajectories

Konrad Schenk; Alexander Kolarow; Markus Eisenbach; Klaus Debes; Horst-Michael Gross

Laser-based detection and tracking of persons can be used for numerous tasks, such as statistical measurements for determining bottlenecks in public buildings, optimizing passenger flow, or planning camera placement. In larger spaces, a network of multiple LRFs is required to fulfill these tasks. Calibrating multiple LRFs into a global coordinate system is usually done by hand in a time-consuming procedure. In this paper, we address the problem of automatically calibrating such a sensor network. We introduce an automatic calibration mechanism, which is able to obtain the positions and orientations of all LRFs in a global coordinate system without any prior knowledge of the scene. Our approach is based on comparing person tracks determined by each individual LRF unit and matching them in order to obtain constraints between the LRF units. By resolving these constraints, we are able to estimate the poses of all LRFs. We evaluate and compare our method to the current state-of-the-art approach, both methodically and experimentally. Experiments show that our calibration approach outperforms it.

Collaboration


Dive into Markus Eisenbach's collaborations.

Top Co-Authors

Horst-Michael Gross, Technische Universität Ilmenau
Klaus Debes, Technische Universität Ilmenau
Tim Wengefeld, Technische Universität Ilmenau
Thanh Q. Trinh, Technische Universität Ilmenau
Andrea Scheidig, Technische Universität Ilmenau
Steffen Mueller, Technische Universität Ilmenau
Christian Märtin, Augsburg University of Applied Sciences
Christoph Weinrich, Technische Universität Ilmenau
Ronny Stricker, Technische Universität Ilmenau