Publication


Featured research published by Bertram Taetz.


International Conference on Computer Vision | 2015

Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation

Christian Bailer; Bertram Taetz; Didier Stricker

Modern large displacement optical flow algorithms usually use an initialization by either sparse descriptor matching techniques or dense approximate nearest neighbor fields. While the latter have the advantage of being dense, they have the major disadvantage of being very outlier-prone as they are not designed to find the optical flow, but the visually most similar correspondence. In this article we present a dense correspondence field approach that is much less outlier-prone and thus much better suited for optical flow estimation than approximate nearest neighbor fields. Our approach does not require explicit regularization, smoothing (like median filtering) or a new data term. Instead we solely rely on patch matching techniques and a novel multi-scale matching strategy. We also present enhancements for outlier filtering. We show that our approach is better suited for large displacement optical flow estimation than modern descriptor matching techniques. We do so by initializing EpicFlow with our approach instead of its originally used state-of-the-art descriptor matching technique. We significantly outperform the original EpicFlow on MPI-Sintel, KITTI 2012, KITTI 2015 and Middlebury. In this extended article of our earlier conference publication we further improve our approach in matching accuracy as well as runtime and present more experiments and insights.
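The dense-matching idea at the core of this approach can be illustrated with a brute-force patch search (a minimal toy sketch using plain SSD matching, not the paper's propagation or multi-scale scheme; function names are hypothetical):

```python
import numpy as np

def patch_distance(img1, img2, p, q, r=3):
    """Sum of squared differences between the (2r+1)x(2r+1) patches
    centred at p in img1 and q in img2."""
    a = img1[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1]
    b = img2[q[0] - r:q[0] + r + 1, q[1] - r:q[1] + r + 1]
    return float(np.sum((a - b) ** 2))

def best_match(img1, img2, p, search=5, r=3):
    """Exhaustively search a (2*search+1)^2 neighbourhood in img2 for
    the patch most similar to the patch at p in img1; returns the
    displacement (flow vector) of the best match."""
    best, best_d = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            q = (p[0] + dy, p[1] + dx)
            d = patch_distance(img1, img2, p, q, r)
            if d < best_d:
                best_d, best = d, (dy, dx)
    return best
```

The paper's contribution is precisely to avoid this kind of exhaustive, purely appearance-driven search in favour of a multi-scale, outlier-robust matching strategy.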


Sensors | 2016

On Inertial Body Tracking in the Presence of Model Calibration Errors

Markus Miezal; Bertram Taetz; Gabriele Bleser

In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU-centered EKF-based method yielded comparable results with lower computational complexity.
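As a much-simplified illustration of the gyroscope/accelerometer fusion underlying such trackers, consider a 1-D complementary filter (this is not any of the EKF or optimization-based methods compared in the paper; parameter values are arbitrary):

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse an angular rate signal (rad/s) with an absolute angle
    measurement (rad): integrate the gyroscope for smooth short-term
    tracking, and blend in the accelerometer-derived angle to bound
    long-term drift."""
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out.append(angle)
    return out
```

The EKF and sliding-window optimization methods in the paper generalize this trade-off to full 6-DoF segment states and explicit noise models.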


International Conference on Optoelectronics and Microelectronics | 2017

Development of an Inertial Motion Capture System for Clinical Application

Gabriele Bleser; Bertram Taetz; Markus Miezal; Corinna A. Christmann; Daniel Steffen; Katja Regenspurger

The ability to capture human motion based on wearable sensors has a wide range of applications, e.g., in healthcare, sports, well-being, and workflow analysis. This article focuses on the development of an online-capable system for accurately capturing joint kinematics based on inertial measurement units (IMUs) and its clinical application, with a focus on locomotion analysis for rehabilitation. The article approaches the topic from the technology and application perspectives and fuses both points of view. It presents, in a self-contained way, previous results from three studies as well as new results concerning the technological development of the system. It also correlates these with new results from qualitative expert interviews with medical practitioners and movement scientists. The interviews were conducted for the purpose of identifying relevant application scenarios and requirements for the technology used. As a result, the potentials of the system for the different identified application scenarios are discussed and necessary next steps are deduced from this analysis.


International Conference on Machine Vision | 2017

A framework for an accurate point cloud based registration of full 3D human body scans

Vladislav Golyanik; Gerd Reis; Bertram Taetz; Didier Stricker

Alignment of 3D human body scans is a challenging problem in computer vision with various applications. While being extensively studied for the mesh-based case, it remains an involved problem when the scans lack topology. In this paper, we propose a practical solution to the point cloud based registration of 3D human scans and a 3D human template. We adopt recent advances in point set registration with prior matches and design a fully automated registration framework. Our framework consists of several steps including establishment of prior matches, alignment of point clouds into a common reference frame, global non-rigid registration, partial non-rigid registration, and a post-processing step. We can handle large point clouds with significant variations in appearance automatically and achieve high registration accuracy, which is shown experimentally. Finally, we demonstrate a pipeline for treatment of social pathologies with animatable virtual avatars as an exemplary real-world application of the new framework.
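Given prior matches, the rigid part of aligning point clouds into a common reference frame can be sketched with the standard Kabsch algorithm (an assumption about that step only; the paper's pipeline continues with non-rigid registration):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    given one-to-one prior correspondences (matching rows of P and Q)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```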


International Symposium on Visual Computing | 2016

Two Phase Classification for Early Hand Gesture Recognition in 3D Top View Data

Aditya Tewari; Bertram Taetz; Frederic Grandidier; Didier Stricker

This work classifies top-view hand gestures observed by a Time of Flight (ToF) camera using a Long Short-Term Memory (LSTM) neural network architecture. We demonstrate a performance improvement by a two-phase classification: we reduce the number of classes to be separated in each phase and combine the output probabilities. The modified system architecture achieves an average cross-validation accuracy of 90.75% on a 9-gesture dataset. This is demonstrated to be an improvement over the single all-class LSTM approach. The networks are trained to predict the class label continuously during the sequence. A frame-based gesture prediction, using accumulated gesture probabilities per frame of the video sequence, is introduced. This eliminates the latency incurred by predicting the gesture only at the end of the sequence, as is usually the case with majority-voting-based methods.
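The frame-based prediction from accumulated gesture probabilities can be sketched as follows (a minimal illustration; accumulating in log space is an implementation choice, not necessarily the paper's):

```python
import numpy as np

def running_prediction(frame_probs):
    """Accumulate per-frame class probabilities (in log space) and emit
    a class label at every frame, so a confident gesture can be
    reported before the sequence ends.

    frame_probs: array of shape (n_frames, n_classes), rows sum to 1.
    """
    logp = np.zeros(frame_probs.shape[1])
    labels = []
    for p in frame_probs:
        logp += np.log(p + 1e-12)      # running evidence per class
        labels.append(int(np.argmax(logp)))
    return labels
```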


Sensors | 2018

Validity, Test-Retest Reliability and Long-Term Stability of Magnetometer Free Inertial Sensor Based 3D Joint Kinematics

Wolfgang Teufl; Markus Miezal; Bertram Taetz; Michael Fröhlich; Gabriele Bleser

The present study investigates an algorithm for the calculation of 3D joint angles based on inertial measurement units (IMUs), omitting magnetometer data. Validity, test-retest reliability, and long-term stability are evaluated in reference to an optical motion capture (OMC) system. Twenty-eight healthy subjects performed a 6 min walk test. Three-dimensional joint kinematics of the lower extremity was recorded simultaneously by means of seven IMUs and an OptiTrack OMC system. To evaluate the performance, the root mean squared error (RMSE), mean range of motion error (ROME), coefficient of multiple correlations (CMC), Bland-Altman (BA) analysis, and intraclass correlation coefficient (ICC) were calculated. For all joints, the RMSE was lower than 2.40°, and the ROME was lower than 1.60°. The CMC revealed good to excellent waveform similarity. Reliability was moderate to excellent with ICC values of 0.52–0.99 for all joints. Error measures did not increase over time. When considering soft tissue artefacts, RMSE and ROME increased by an average of 2.2° ± 1.5° and 2.9° ± 1.7°. This study revealed an excellent correspondence of a magnetometer-free IMU system with an OMC system when excluding soft tissue artefacts.
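The RMSE and mean range of motion error (ROME) measures can be computed per joint as follows (a sketch of the standard definitions, assuming time-aligned angle traces in degrees):

```python
import numpy as np

def rmse(a, b):
    """Root mean squared error between two joint-angle traces (deg)."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def rom_error(a, b):
    """Absolute difference between the ranges of motion (max - min)
    of the two traces, the quantity behind the ROME measure."""
    return float(abs((np.max(a) - np.min(a)) - (np.max(b) - np.min(b))))
```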


Sensors | 2018

IMU-to-Segment Assignment and Orientation Alignment for the Lower Body Using Deep Learning

Tobias Zimmermann; Bertram Taetz; Gabriele Bleser

Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports and human computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues to obtain biomechanical joint angles. Existing approaches for I2S assignment usually rely on hand-crafted features and shallow classification approaches (e.g., support vector machines), with no agreement regarding the most suitable features for the assignment task. Moreover, estimating the complete orientation alignment of an IMU relative to the segment it is attached to using a machine learning approach has not been shown in the literature so far. This is likely due to the high amount of training data that would have to be recorded to suitably represent possible IMU alignment variations. In this work, we propose online approaches for solving the assignment and alignment tasks for an arbitrary number of IMUs with respect to a biomechanical lower body model using a deep learning architecture and windows of 128 gyroscope and accelerometer data samples. For this, we combine convolutional neural networks (CNNs) for local filter learning with long short-term memory (LSTM) recurrent networks as well as gated recurrent units (GRUs) for learning time dynamic features. The assignment task is cast as a classification problem, while the alignment task is cast as a regression problem. In this framework, we demonstrate the feasibility of augmenting a limited amount of real IMU training data with simulated alignment variations and IMU data for improving the recognition/estimation accuracies.
With the proposed approaches and final models we achieved 98.57% average accuracy over all segments for the I2S assignment task (100% when excluding left/right switches) and an average median angle error over all segments and axes of 2.91° for the I2S alignment task.
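The windowing assumed as network input can be sketched as follows (a hypothetical helper; only the window length of 128 samples is stated in the abstract, the stride and the 6-channel gyroscope+accelerometer layout are assumptions):

```python
import numpy as np

def make_windows(imu_stream, win=128, stride=64):
    """Slice a (T, 6) stream of gyroscope+accelerometer samples into
    overlapping (win, 6) windows, the per-sensor input format assumed
    for the CNN/RNN classifier."""
    T = imu_stream.shape[0]
    starts = range(0, T - win + 1, stride)
    return np.stack([imu_stream[s:s + win] for s in starts])
```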


International Symposium on Mixed and Augmented Reality | 2017

[POSTER] A Probabilistic Combination of CNN and RNN Estimates for Hand Gesture Based Interaction in Car

Aditya Tewari; Bertram Taetz; Frederic Grandidier; Didier Stricker

Hand gesture recognition is performed on top-view hand images observed by a Time of Flight (ToF) camera in a car. The work attempts to solve two important problems of touchless interaction inside a car: first, low-latency identification of gestures which are unobtrusive for the driver; second, reducing the labelled data required to train learning-based solutions, which is particularly important because labelling gesture sequences is expensive and demanding. This work improves fast detection of hand gestures by correcting the probability estimate of a Long Short-Term Memory (LSTM) network with the pose prediction made by a Convolutional Neural Network (CNN). Weak models for hand gesture classes based on five hand poses are designed to assist in the prediction-correction scheme. A training procedure to reduce the labelled data required for hand pose classification is also introduced. This method utilises a statistical property of the dataset to identify a good initialization of weights for the CNN; here we demonstrate this using the Principal Component Analysis (PCA) embedding of non-labelled hand pose sequences. While solving a nine-class hand gesture problem we demonstrate an accuracy of 89.50%, which is better than existing systems. We also show that a PCA-embedding-based initialization improves the classification performance of the CNN-based pose classifier.
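The probabilistic correction of the LSTM's gesture estimate by the CNN's pose estimate might look as follows (a sketch under the assumption of a simple pose-to-gesture compatibility matrix; the paper's weak models are more elaborate, and all names here are hypothetical):

```python
import numpy as np

def combine(p_gesture_lstm, p_pose_cnn, pose_to_gesture):
    """Re-weight the LSTM's gesture probabilities by how compatible
    each gesture is with the CNN's current hand-pose estimate, then
    renormalise.

    p_gesture_lstm: (n_gestures,) LSTM output distribution.
    p_pose_cnn:     (n_poses,) CNN output distribution.
    pose_to_gesture: (n_poses, n_gestures) compatibility matrix.
    """
    p = p_gesture_lstm * (pose_to_gesture.T @ p_pose_cnn)
    return p / p.sum()
```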


International Conference on Robotics and Automation | 2017

Real-time inertial lower body kinematics and ground contact estimation at anatomical foot points for agile human locomotion

Markus Miezal; Bertram Taetz; Gabriele Bleser

The ability to accurately capture locomotion is relevant in various use cases, in particular in the sports and health area. With the major goal of providing a measurement system that can deliver different types of relevant information (3D body segment kinematics, spatiotemporal locomotion parameters, and locomotion patterns) in-field and in real-time, we propose a novel probabilistic (single-plane) ground contact estimation method, using four contact points defined through a biomechanical foot model, and integrate this into an existing inertial motion capturing method. The resulting method is quantitatively evaluated on simulated and real IMU data in comparison to an optical motion capture system on walking, running, and jumping sequences. The results show its ability to maintain a good average 3D kinematics estimation error on low- and high-acceleration locomotion, whereas many previous accuracy studies restrict themselves to movements with low to moderate global accelerations, such as upper body activities or slow locomotion. Moreover, a qualitative evaluation of the estimated ground contact probabilities demonstrates the method's ability to provide consistent information for deriving spatiotemporal locomotion parameters as well as locomotion patterns (e.g., over-pronation/-supination) simultaneously with the 3D kinematics.
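A heuristic version of a per-point ground-contact probability, driven by the estimated height and speed of a foot point, might look like this (illustrative logistic thresholds only; this is not the paper's probabilistic model, and all parameter values are assumptions):

```python
import math

def contact_probability(height, speed, h0=0.02, v0=0.1, k=200.0):
    """Soft single-plane ground-contact score for one foot point:
    close to 1 when the point is near the floor plane (height below
    ~h0 metres) and nearly static (speed below ~v0 m/s)."""
    p_height = 1.0 / (1.0 + math.exp(k * (height - h0)))
    p_speed = 1.0 / (1.0 + math.exp((speed - v0) / v0 * 5.0))
    return p_height * p_speed
```

Evaluating such a score at the four biomechanical foot points would yield the kind of per-point contact probabilities the method feeds back into the motion capture estimator.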


International Conference on Information Fusion | 2016

Towards self-calibrating inertial body motion capture

Bertram Taetz; Gabriele Bleser; Markus Miezal

Collaboration

Top co-authors of Bertram Taetz:

- Gabriele Bleser (Kaiserslautern University of Technology)
- Markus Miezal (Kaiserslautern University of Technology)
- Corinna A. Christmann (Kaiserslautern University of Technology)
- Aditya Tewari (German Research Centre for Artificial Intelligence)
- Alexandra Hoffmann (Kaiserslautern University of Technology)
- Didier Stricker (Kaiserslautern University of Technology)
- Michael Fröhlich (Kaiserslautern University of Technology)
- Sigrid Leyendecker (University of Erlangen-Nuremberg)
- Vladislav Golyanik (Kaiserslautern University of Technology)
- Wolfgang Teufl (Kaiserslautern University of Technology)