Christian Nissler
German Aerospace Center
Publications
Featured research published by Christian Nissler.
2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC) | 2016
Martin J. Schuster; Christoph Brand; Sebastian G. Brunner; Peter Lehner; Josef Reill; Sebastian Riedel; Tim Bodenmüller; Kristin Bussmann; Stefan Büttner; Andreas Dömel; Werner Friedl; Iris Lynne Grixa; Matthias Hellerer; Heiko Hirschmüller; Michael Kassecker; Zoltan-Csaba Marton; Christian Nissler; Felix Ruess; Michael Suppa; Armin Wedler
The task of planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators suitable for extraterrestrial environment conditions. In this work, we present the Light Weight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the use of stereo cameras as its main sensor ensures its applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and for the autonomous pickup and assembly of objects with its arm. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of our LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore a moon-like rough-terrain environment, locate and collect two objects, and assemble them after transport to a third object - which the LRU did on its first try, in half of the allotted time and fully autonomously.
Frontiers in Neurorobotics | 2016
Christian Nissler; Nikoleta Mouriki; Claudio Castellini
One of the crucial problems found in the scientific community of assistive/rehabilitation robotics nowadays is that of automatically detecting what a disabled subject (for instance, a hand amputee) wants to do, exactly when she wants to do it, and strictly for the time she wants to do it. This problem, commonly called “intent detection,” has traditionally been tackled using surface electromyography, a technique which suffers from a number of drawbacks, including the changes in the signal induced by sweat and muscle fatigue. With the advent of realistic, physically plausible augmented- and virtual-reality environments for rehabilitation, this approach does not suffice anymore. In this paper, we explore a novel method to solve the problem, which we call Optical Myography (OMG). The idea is to visually inspect the human forearm (or stump) to reconstruct what fingers are moving and to what extent. In a psychophysical experiment involving ten intact subjects, we used visual fiducial markers (AprilTags) and a standard web camera to visualize the deformations of the surface of the forearm, which then were mapped to the intended finger motions. As ground truth, a visual stimulus was used, avoiding the need for finger sensors (force/position sensors, datagloves, etc.). Two machine-learning approaches, a linear and a non-linear one, were comparatively tested in settings of increasing realism. The results indicate an average error in the range of 0.05–0.22 (root mean square error normalized over the signal range), in line with similar results obtained with more mature techniques such as electromyography. If further successfully tested in the large, this approach could lead to vision-based intent detection of amputees, with the main application of letting such disabled persons dexterously and reliably interact in an augmented-/virtual-reality setup.
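The error measure reported above, root mean square error normalized over the signal range (NRMSE), can be sketched in a few lines; the synthetic signal below is purely illustrative and not data from the study.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean square error normalized over the range of the true signal."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (np.max(y_true) - np.min(y_true))

# Illustrative check: a prediction off by a constant 0.1 on a unit-range signal
t = np.linspace(0.0, 1.0, 100)
print(round(nrmse(t, t + 0.1), 3))  # 0.1
```

A value of 0.05-0.22 on this scale therefore means the reconstruction error is 5-22% of the full excursion of the target signal.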
IEEE International Conference on Rehabilitation Robotics (ICORR) | 2015
Christian Nissler; Nikoleta Mouriki; Claudio Castellini; Vasileios Belagiannis; Nassir Navab
Given the recent progress in the development of computer vision, it is nowadays possible to optically track features of the human body with unprecedented precision. We take this as a starting point to build a novel human-machine interface for the disabled. In this particular work we explore the possibility of visually inspecting the human forearm to detect which fingers are moving, and to what extent. In a psychophysical experiment with ten intact subjects, we tracked the deformations of the surface of the forearm to try and reconstruct intended finger motions. Ridge Regression was used for the reconstruction. The results are highly promising, with an average error in the range of 0.13 to 0.2 (normalized root mean square error). If further successfully tested in the large, this approach could represent a fully fledged alternative to traditional interfaces such as surface electromyography.
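The Ridge Regression step described above can be sketched as follows: a regularized linear map from tracked surface displacements to finger activations. The feature dimensions, regularization weight, and synthetic data are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge Regression: solve (X^T X + lam*I) w = X^T y for the weights w."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_predict(X, w):
    return X @ w

# Illustrative synthetic data: 200 frames of, say, 2D displacements of
# 4 surface markers (8 features) mapped to 2 finger-activation targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=(8, 2))
y = X @ w_true + 0.01 * rng.normal(size=(200, 2))

w = ridge_fit(X, y, lam=0.1)
pred = ridge_predict(X, w)
```

The regularization term `lam` trades fit accuracy against robustness to noisy or correlated marker trajectories, which matters when markers deform together with the skin.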
IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) | 2015
Christian Nissler; Florian Krebs
Carbon fiber reinforced plastics play a key role in current and future aircraft construction because of their favorable strength-to-weight ratio. Due to the growing requirements of this market, automation of the production process is necessary. Because of the high unit volumes and the accuracy required, the use of robots with increased working accuracy is essential. The German Aerospace Center has developed, in an internal cooperation, an end effector for the camera-based determination of its position and orientation in space. This article covers the design and structure of the end effector and first experimental results.
Archive | 2017
Christian Nissler; Mathilde Connan; Markus Nowak; Claudio Castellini
Tactile myography is a promising method for dexterous myocontrol. It stems from the idea of detecting muscle activity, and hence the desired actions to be performed by a prosthesis, via the muscle deformations induced by said activity, using a tactile sensor on the stump. Tactile sensing is high-resolution force / pressure sensing; such a technique promises to yield a rich flow of information about an amputated subject’s intent. In this work we propose a preliminary comparison between tactile myography and surface electromyography enforcing simultaneous and proportional control during an online target-reaching experiment. Six intact subjects and a trans-radial amputee were engaged in repeated hand opening / closing, wrist flexion / extension and wrist pronation / supination, to various degrees of activation. Albeit limited, the results we show indicate that tactile myography enforces an almost uniformly better performance than sEMG.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013
Christian Nissler; Zoltan-Csaba Marton; Michael Suppa
This paper presents a method for finding the largest, connected, smooth surface in noisy depth images. Formulating the fitting in a Sample Consensus approach allows the use of RANSAC (or any other similar estimator) and makes the method tolerant to a low percentage of inliers in the input. It can therefore be used to simultaneously segment and model the surface of interest. This is important in applications like analyzing physical properties of carbon-fiber-reinforced polymer (CFRP) structures using depth cameras. Employing bivariate polynomials for modeling turns out to be advantageous, making it possible to capture the variations along the two principal directions on the surface. However, fitting them efficiently using RANSAC is not straightforward. We present the necessary pre- and post-processing, distance and normal direction checks, and degree optimization (lowering the order of the polynomial), and evaluate how these improve results. Finally, to improve the initial estimate provided by RANSAC and to stabilize the results, an Expectation Maximization (EM) strategy is employed to converge to the best solution. The method was tested both on high-quality data and on real-world scenes captured by an RGB-D camera.
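The core RANSAC loop for a bivariate polynomial surface can be sketched roughly as follows. The polynomial degree, inlier threshold, and synthetic point cloud are illustrative assumptions; the paper's normal-direction checks, degree optimization, and EM refinement are omitted here.

```python
import numpy as np

def poly_design(x, y, deg=2):
    """Bivariate polynomial basis up to total degree `deg`: 1, y, y^2, x, xy, x^2, ..."""
    return np.column_stack([x**i * y**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

def ransac_surface(pts, deg=2, n_iter=200, thresh=0.05, seed=0):
    """Find coefficients of z = p(x, y) supported by the most inliers."""
    rng = np.random.default_rng(seed)
    x, y, z = pts.T
    A = poly_design(x, y, deg)
    n_min = A.shape[1]                       # minimal sample size for a hypothesis
    best_inliers, best_coef = None, None
    for _ in range(n_iter):
        idx = rng.choice(len(pts), n_min, replace=False)
        coef, *_ = np.linalg.lstsq(A[idx], z[idx], rcond=None)
        inliers = np.abs(A @ coef - z) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_coef = inliers, coef
    # refit on all inliers of the best hypothesis
    coef, *_ = np.linalg.lstsq(A[best_inliers], z[best_inliers], rcond=None)
    return coef, best_inliers

# Illustrative check: a smooth surface with 30% gross outliers added
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = rng.uniform(-1, 1, 300)
z = 0.5 + 0.3 * x - 0.2 * y + 0.1 * x * y
z[:90] += rng.uniform(1, 2, 90)              # simulate outlier depth readings
coef, inliers = ransac_surface(np.column_stack([x, y, z]))
```

Because each hypothesis is fit from a minimal sample, the loop survives the low inlier ratios mentioned in the abstract; the final least-squares refit over all inliers plays the role that the EM refinement plays in the full method.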
Archive | 2017
Christian Nissler; Imran Badshah; Claudio Castellini; Wadim Kehl; Nassir Navab
In order to improve the accuracy and reliability of myocontrol (control of prosthetic devices using signals gathered from the human body), novel kinds of sensors able to detect muscular activity are being explored. In particular, Optical Myography (OMG) consists of optically tracking and decoding the deformations happening at the surface of the body whenever muscles are activated. OMG potentially requires no devices to be worn, but since it is an advanced problem of computer vision, it incurs a number of other drawbacks, e.g., changing illumination, identification of markers, and frame tear and drop. In this work we propose an improvement to OMG as it has been recently introduced, namely relaxing the need for precise positioning and orientation of the markers on the body surface. The small size of the markers and their curvature while adhering to the surface of the forearm can lead to missed detections and misdetections of their orientation; here we instead detect the deformations by applying a Convolutional Neural Network to the region of interest around the feature source, segmented from the forearm. The classification-based approach yields results similar to those obtained by other classification-based modalities, reaching accuracies in the range of 96.21% to 99.30% on 10 intact subjects.
2017 First IEEE International Conference on Robotic Computing (IRC) | 2017
Christian Nissler; Zoltan-Csaba Marton
Journal of Intelligent and Robotic Systems | 2017
Martin J. Schuster; Sebastian Georg Brunner; Kristin Bussmann; Stefan Büttner; Andreas Dömel; Matthias Hellerer; Hannah Lehner; Peter Lehner; Oliver Porges; Josef Reill; Sebastian Riedel; Mallikarjuna Vayugundla; Bernhard Vodermayer; Tim Bodenmüller; Christoph Brand; Werner Friedl; Iris Lynne Grixa; Heiko Hirschmüller; Michael Kaßecker; Zoltan-Csaba Marton; Christian Nissler; Felix Ruess; Michael Suppa; Armin Wedler
IEEE International Conference on Robotics and Automation (ICRA) | 2017
Shashank Govinderaj; Jeremi Gancet; Mark Post; Raul Dominguez; Fabrice Souvannavong; Simon Lacroix; Michal Smisek; Javier Hidalgo-Carrio; Bilal Wehbe; Alexander Fabisch; Andrea De Maio; Nassir W. Oumer; Vincent Bissonnette; Zoltan-Csaba Marton; Sandeep Kottath; Christian Nissler; Xiu Yan; Rudolph Triebel; Francesco Nuzzolo