Jörn Vogel
German Aerospace Center
Publications
Featured research published by Jörn Vogel.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2011
Jörn Vogel; Claudio Castellini; Patrick van der Smagt
In this paper we describe and practically demonstrate a robotic arm/hand system that is controlled in real time in 6D Cartesian space through measured human muscular activity. The soft-robotics control architecture of the robotic system ensures safe physical human-robot interaction as well as stable behaviour while operating in an unstructured environment. Muscular control is realised via surface electromyography, a non-invasive and simple way to gather human muscular activity from the skin. A standard supervised machine learning system is used to create a map from muscle activity to hand position, orientation and grasping force, which can then be evaluated in real time; the existence of such a map is guaranteed by gravity compensation and low-speed movement. No kinematic or dynamic model of the human arm is necessary, which makes the system quickly adaptable to anyone. Numerical validation shows that the system achieves good movement precision. Live evaluation and demonstration of the system at a robotic trade fair is reported and confirms the validity of the approach, which has potential applications in muscle-disorder rehabilitation or in teleoperation where a close-range, safe master/slave interaction is required and/or where optical/magnetic position tracking cannot be employed.
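The supervised map from muscle activity to pose described above could be sketched as a ridge regression evaluated once per control cycle. The feature dimensions, the bundling of pose and grasp force into one seven-dimensional output, and the regularization constant are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fit_emg_to_pose(emg_features, poses, reg=1e-3):
    """Fit a linear ridge-regression map from EMG feature vectors to
    6D Cartesian pose (x, y, z, roll, pitch, yaw) plus grasp force.

    emg_features: (N, F) array of per-sample EMG features
    poses:        (N, 7) array of target pose + force samples
    """
    X = np.hstack([emg_features, np.ones((len(emg_features), 1))])  # bias column
    # Closed-form ridge solution: W = (X^T X + reg * I)^-1 X^T Y
    A = X.T @ X + reg * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ poses)

def decode_pose(W, emg_sample):
    """Evaluate the learned map on one EMG feature vector (one real-time step)."""
    return np.append(emg_sample, 1.0) @ W
```

Because the map is linear, a single matrix-vector product per cycle suffices, which is what makes real-time evaluation trivial.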
The International Journal of Robotics Research | 2015
Jörn Vogel; Sami Haddadin; Beata Jarosiewicz; John D. Simeral; Daniel Bacher; Leigh R. Hochberg; John P. Donoghue; P. van der Smagt
Fully autonomous applications of modern robotic systems are still constrained by limitations in sensory data processing, scene interpretation, and automated reasoning. However, their use as assistive devices for people with upper-limb disabilities has become possible with recent advances in “soft robotics”, that is, interaction control, physical human–robot interaction, and reflex planning. In this context, impedance and reflex-based control has generally been understood to be a promising approach to safe interaction robotics. To create semi-autonomous assistive devices, we propose a decision-and-control architecture for hand–arm systems with “soft robotics” capabilities, which can then be used via human–machine interfaces (HMIs). We validated the functionality of our approach within the BrainGate2 clinical trial, in which an individual with tetraplegia used our architecture to control a robotic hand–arm system under neural control via a multi-electrode array implanted in the motor cortex. The neuroscience results of this research have previously been published by Hochberg et al. In this paper we present our assistive decision-and-control architecture and demonstrate how the semi-autonomous assistive behavior can help the user. In our framework the robot is controlled through a multi-priority Cartesian impedance controller and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this we employ a decision-and-control architecture that uses sensory information available from the robotic system to evaluate the current state of task execution. Based on a set of available assistive skills, our architecture provides support in object interaction and manipulation and thereby enhances the usability of the robotic system for use with HMIs. 
The goal of our development is to provide an easy-to-use robotic system for people with physical disabilities and thereby enable them to perform simple tasks of daily living. In an exemplary real-world task, the participant was able to serve herself a beverage autonomously for the first time since her brainstem stroke, which she suffered approximately 14 years prior to this research.
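The multi-priority Cartesian impedance controller with collision detection and reflex reaction described above is a full control stack; a minimal, single-priority sketch of the underlying spring-damper law and a threshold-based collision check, with illustrative signatures and gains, might look like:

```python
import numpy as np

def impedance_wrench(x, x_des, xdot, xdot_des, K, D):
    """Basic Cartesian impedance law: the commanded wrench pulls the
    end-effector toward the desired pose like a spring-damper.

    x, x_des:       (6,) current / desired pose (position + orientation error coords)
    xdot, xdot_des: (6,) current / desired Cartesian velocity
    K, D:           (6, 6) stiffness and damping matrices
    """
    return K @ (x_des - x) + D @ (xdot_des - xdot)

def reflex_check(tau_ext, threshold):
    """Collision detection: flag a reflex reaction when the estimated
    external joint torque exceeds a per-joint threshold."""
    return np.any(np.abs(tau_ext) > threshold)
```

In the real architecture, this basic law is embedded in a task hierarchy and combined with virtual workspace limits; the sketch only shows the innermost spring-damper behavior.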
Journal of Neural Engineering | 2013
L L Bologna; J Pinoteau; J-B Passot; J A Garrido; Jörn Vogel; E. Ros Vidal; A Arleo
OBJECTIVE Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. APPROACH The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. MAIN RESULTS Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest that second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to fast, high-performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covary with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. SIGNIFICANCE This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.
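The downstream probabilistic decoder is described only abstractly above. A generic Poisson naive-Bayes classifier over spike counts is a common choice for this kind of discrimination; the sketch below is such a generic decoder, not necessarily the paper's model.

```python
import numpy as np

def fit_poisson_nb(counts, labels):
    """Estimate per-class mean firing rates and class priors for a
    Poisson naive-Bayes decoder.
    counts: (N, C) spike counts per neuron; labels: (N,) class ids."""
    classes = np.unique(labels)
    rates = np.array([counts[labels == c].mean(axis=0) + 1e-6 for c in classes])
    priors = np.array([(labels == c).mean() for c in classes])
    return classes, rates, priors

def decode(counts, classes, rates, priors):
    """Return the most probable class for one spike-count vector, using
    log P(class | counts) ∝ log prior + sum_i (k_i log r_i - r_i);
    the constant log k_i! term is the same for every class and is dropped."""
    loglik = counts @ np.log(rates).T - rates.sum(axis=1) + np.log(priors)
    return classes[np.argmax(loglik)]
```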
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2013
Jörn Vogel; Justin Bayer; Patrick van der Smagt
The development of new, light robotic systems has opened up a wealth of human-robot interaction applications. In particular, the use of robot manipulators as personal assistants for the disabled is realistic and affordable, but still requires research on the brain-computer interface. Based on our previous work with tetraplegic individuals, we investigate the use of low-cost yet stable surface electromyography (sEMG) interfaces for individuals with Spinal Muscular Atrophy (SMA), a disease leading to the death of neuronal cells in the anterior horn of the spinal cord; with sEMG, we can record the remaining active muscle fibers. We show the ability of two individuals with SMA to actively control a robot in 3.5D, continuously decoded through sEMG, after a few minutes of training, allowing them to regain some independence in daily life. Although movement is not nearly as fast as natural, unimpaired movement, reach and grasp success rates are near 100% after 50 s of movement.
IEEE International Conference on Robotics and Automation (ICRA) | 2017
Annette Hagengruber; Hannes Höppner; Jörn Vogel
This paper examines the capability to incorporate spatial force feedback to the human toe when teleoperating a robotic arm in a force task. Due to the growing complexity of teleoperated systems, new means of feedback are becoming increasingly important. To investigate the viability of spatial toe feedback, experiments with 12 subjects were conducted. The participants had to teleoperate a DLR Light-Weight Robot (LWR) via optical tracking of one finger in order to push a toy train. The orientation of the rail was unknown to the subject and had to be explored, in the absence of visual feedback, using the haptic feedback: a three-dimensional spatial force applied to the toe, reflecting the contact forces at the robotic end-effector. The rail was mounted in one of four possible orientations (differing by 45°). The main task of the experiment was to identify the present orientation. In our study, subjects could successfully identify the orientation of the rail in more than two thirds of all trials (68%). In almost half of the trials (44%), the subjects were able to move the train along the rails long enough to reach the bumpers at the end and identify them as such. With no feedback at all, the first metric has a chance level of 25%, and reaching the bumper can be considered impossible. Thus, we conclude that humans can incorporate spatial force feedback to the toe into their sensorimotor loop.
IEEE International Conference on Robotics and Automation (ICRA) | 2016
Jörn Vogel; Katharina Hertkorn; Rohit U. Menon; Maximo A. Roa
This paper proposes a scheme to provide flexible semi-autonomous grasping capabilities to an assistive robotic manipulator. The testbed consists of a five-finger robotic hand mounted on a robotic arm. During teleoperation, the position of the hand is continuously controlled in the three translational degrees of freedom, and the user has no direct influence over the rotational behavior. The proposed semi-autonomy scheme assists the user in moving and orienting the hand towards the object, and automates the grasping process when it is triggered. The velocity commands issued by the user are enhanced using virtual fixtures, which are not preprogrammed to support a single approach direction to the (known) object, but are adapted online according to the intended movement. The approach is validated with a psychophysical user study in which the participants grasp objects in a simulation environment using a SpaceMouse interface. This setting serves as a testbed for the target application, in which disabled subjects will control the real robotic system with an interface based on bio-signals. The user study compares the semi-autonomous and pure-teleoperation modes in terms of objective and subjective measures, showing an increase in performance and a decrease in workload for the proposed semi-autonomous mode.
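The online-adapted virtual fixtures can be sketched as blending the user's translational velocity command with whichever candidate approach direction it best agrees with, while preserving the commanded speed. The selection rule, the candidate-direction representation, and the blending gain below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fixture_assist(v_user, approach_dirs, gain=0.5):
    """Blend the user's velocity command with the virtual-fixture direction
    it most agrees with, selected online from candidate approach directions
    to the known object. The commanded speed is preserved.

    v_user:        (3,) user velocity command
    approach_dirs: (M, 3) unit candidate approach directions
    gain:          0 = pure teleoperation, 1 = full fixture guidance
    """
    speed = np.linalg.norm(v_user)
    if speed < 1e-9:
        return v_user
    v_hat = v_user / speed
    # Pick the candidate direction best aligned with the current command.
    best = approach_dirs[np.argmax(approach_dirs @ v_hat)]
    blended = (1 - gain) * v_hat + gain * best
    return speed * blended / np.linalg.norm(blended)
```

Because the fixture direction is re-selected every cycle from the user's own command, the assistance adapts to the intended movement rather than enforcing one preprogrammed approach.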
IEEE International Conference on Robotics and Automation (ICRA) | 2016
Roman Weitschat; Alexander Dietrich; Jörn Vogel
Motion planning in robotics is a very large field of research. Many different approaches have been developed to create smooth trajectories for robot movement. For example, there are optimization algorithms that optimize kinematic or dynamic properties of a trajectory. Furthermore, nonlinear programming methods such as optimal control, as well as polynomial-based methods, are widely used for trajectory generation. Most of these techniques calculate a trajectory in advance, or they are limited to point-to-point motions in which the robot needs to stop when switching to the next target point, especially when interpolating in rotational space. In this paper, we combine a low-pass filter and spherical linear interpolation to realize a velocity-limited online trajectory generator for robot orientations in quaternion space. We use the developed motion generator for mirroring a human arm motion with a robot, recorded by low-frequency visual tracking. Using the proposed method, we can replicate the motion of the operator's arm with very little delay and thereby achieve an easy-to-use interface. Furthermore, as we can strictly limit the velocity of the generated motion, the approach can safely be used in human-robot collaboration applications.
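The combination of a low-pass filter and spherical linear interpolation can be sketched as one update step per control cycle: filter toward the latest (low-frequency) target orientation, then clamp the resulting angular step to the velocity limit. The filter constant and clamping scheme below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    dot = min(dot, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-8:
        return q1
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def orientation_step(q_cur, q_target, dt, omega_max, alpha=0.1):
    """One cycle of a velocity-limited online orientation generator:
    low-pass filter toward the target (alpha in (0, 1]) via slerp, then
    clamp the rotation step to at most omega_max * dt radians.

    q_cur, q_target: (4,) unit quaternions; returns the next setpoint.
    """
    q_filt = slerp(q_cur, q_target, alpha)          # first-order filter step
    ang = 2.0 * np.arccos(min(abs(np.dot(q_cur, q_filt)), 1.0))
    max_step = omega_max * dt
    if ang <= max_step or ang < 1e-9:
        return q_filt
    return slerp(q_cur, q_filt, max_step / ang)     # enforce velocity limit
```

Calling `orientation_step` at the control rate yields a smooth setpoint stream even when the tracked target arrives at a much lower frequency, which is exactly what the mirroring application requires.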
IEEE International Conference on Robotics and Automation (ICRA) | 2017
Jörn Vogel; Naohiro Takemura; Hannes Höppner; Patrick van der Smagt; Gowrishankar Ganesh
Tool-held hitting tasks, like hammering a nail or striking a ball with a bat, require humans and robots to purposely collide and transfer momentum from their limbs to the environment. Due to the vibrational dynamics, every tool has a location where a hit is most efficient: it results in minimal tool vibrations and, consequently, maximum energy transfer to the environment. In sports, this location is often referred to as the “sweet spot” of a bat or racquet. Our recent neuroscience study suggests that humans optimize hits by using the jerk and torque felt at their hand. Motivated by this result, in this work we first analyze the vibrational dynamics of an end-effector-held bat to understand the signature projected by a sweet spot onto the jerk and torque sensed at the end-effector. We then use this analysis to develop a controller for a robotic “baseball hitter”. The controller enables the robot hitter to iteratively adjust its swing trajectory to ensure that contact with the ball occurs at the sweet spot of the bat. We tested the controller on the DLR LWR III manipulator with three different bats. Like a human, our robot hitter is able to optimize the energy transfer, specifically to maximize the ball velocity during hits, by using its end-effector position and torque sensors, without any prior knowledge of the shape, size or material of the held bat.
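The iterative swing adjustment can be sketched as a derivative-free hill-descent on a vibration cost measured after each hit (e.g. the post-impact jerk/torque amplitude at the end-effector). The one-dimensional contact-offset parameterization and the keep-or-reverse update rule below are assumptions for illustration, not the paper's controller.

```python
def sweet_spot_search(measure_cost, d0=0.0, step=0.05, n_hits=20, shrink=0.5):
    """Iteratively adjust the contact offset along the bat to minimise a
    vibration cost measured after each hit. Simple hill-descent: keep
    stepping in a direction while the cost drops; otherwise reverse the
    direction and shrink the step size.

    measure_cost: callable offset -> scalar vibration cost for one hit
    d0, step:     initial offset along the bat [m] and initial step size
    """
    d = d0
    cost = measure_cost(d)           # first exploratory hit
    direction = 1.0
    for _ in range(n_hits - 1):
        d_new = d + direction * step
        c_new = measure_cost(d_new)  # one physical hit per iteration
        if c_new < cost:
            d, cost = d_new, c_new   # improvement: keep going this way
        else:
            direction = -direction   # overshoot: turn around, refine
            step *= shrink
    return d
```

Each call to `measure_cost` corresponds to one physical swing, so the search converges on the sweet spot over a small number of hits without any model of the bat.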
IEEE International Conference on Robotics and Automation (ICRA) | 2017
Roman Weitschat; Jörn Vogel; Sophie Lantermann; Hannes Höppner
A fundamental problem in human-robot collaboration is to ensure safety for humans located in the workspace of the robot. Several new robots, referred to as collaborative robots, are entering the market. Most of these so-called co-bots have similar properties: they are small, lightweight and designed with large rounded edges to ensure safety in the case of a collision with a human. Equipped with torque sensors, external torque observers, tactile skins, etc., they are able to stop when an emergency occurs. In the development of more and more co-bots, the main focus lies on the robot itself. But to make a robot deployable, a special tool for a defined task is needed. These tools are often sharp-edged and dangerous in case of a collision with a human. In this paper we present a new safety module for robots to ensure safety for different tools in collaborative tasks. This module, inflated with air during robot motion, covers mounted tools and carried workpieces. When the robot is stationary or moving very slowly, the safety module retracts and the tool is uncovered. In our experiments we found that we can increase the velocity up to 1 m/s while satisfying the requirements of ISO/TS 15066 and retaining the full functionality of the tool.
ACM/IEEE International Conference on Human-Robot Interaction (HRI) | 2017
Annette Hagengruber; Daniel Leidner; Jörn Vogel
Neuromuscular diseases, stroke, or trauma can lead to reduced neural function that severely inhibits limb functionality. When the disease is far advanced, people cannot manage their daily lives independently and become reliant on 24-hour care. In this situation, assistive technology, like a robotic manipulator mounted on a wheelchair, can provide help and relief. However, control of such a device is usually achieved with a joystick, which requires residual hand and finger function. This prevents many people with tetraplegia from efficient use of such assistive technology. An alternative to the joystick is given by Brain-Computer Interfaces (BCIs). It has been shown that noninvasive interfaces like electroencephalography can be used to achieve control over low-dimensional devices like power wheelchairs [3]. More complex tasks like control of assistive robotic devices have been demonstrated with invasive interfaces, like the BrainGate Neural Interface System [1]. We investigate the use of surface electromyography (EMG) as an interface for assistive robotic devices. It is a comparatively cheap and easy-to-apply technology. We could show that people with tetraplegia due to severe Spinal Muscular Atrophy (SMA) can still achieve control over a robotic manipulator (e.g., drinking from a bottle) by recording remaining muscular activity [4]. To investigate the use of EMG as an interface to assistive technology, we developed the research platform EDAN (EMG-controlled Daily Assistant). It consists of a robotic manipulator mounted on a state-of-the-art power wheelchair. We use a torque-controlled robotic arm (DLR-LWR 3), which is well suited for safe physical interaction with humans and the environment. The five-finger hand mounted on the robotic arm allows for stable grasping of a variety of everyday objects. The focus of our research is twofold.
On the one hand, we investigate the use of EMG as a non-invasive interface to provide people with control over assistive systems. On the other hand, we develop methods to simplify the usage of such systems with the support of artificial intelligence. Manual control of robotic manipulators is rather slow and cumbersome, especially when controlled with a noisy interface like a BCI. Artificial intelligence can help to significantly improve the usability of such systems. A shared control ap-