Alexander Stumpf
Technische Universität Darmstadt
Publications
Featured research published by Alexander Stumpf.
Journal of Field Robotics | 2015
Stefan Kohlbrecher; Alberto Romay; Alexander Stumpf; Anant Gupta; Oskar von Stryk; Felipe Bacim; Doug A. Bowman; Alex K. Goins; Ravi Balasubramanian; David C. Conner
Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster response scenarios. The overarching philosophy was to make our operators full team members, not mere supervisors. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time, we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. This paper describes the high-level approach, including the OCS design and major onboard components, and presents our DRC Trials results. The paper concludes with a number of lessons learned that are being applied to the final phase of the competition and are useful for related projects as well.
IEEE-RAS International Conference on Humanoid Robots | 2014
Alexander Stumpf; Stefan Kohlbrecher; David C. Conner; Oskar von Stryk
In recent years, the number of life-size humanoids as well as their mobility capabilities have steadily grown. Stable walking motion and control for humanoid robots are already well-investigated research topics. This raises the question of how navigation problems in complex and unstructured environments can be solved using a given black-box walking controller, provided proper perception and modeling of the environment are available. In this paper we present a complete system for supervised footstep planning, including perception, world modeling, a 3D planner and an operator interface, to enable a humanoid robot to perform sequences of steps to traverse uneven terrain. A proper height map and surface normal estimation are directly obtained from point cloud data. A search-based planning approach (ARA*) is extended to sequences of footsteps in full 3D space (6 DoF). The planner utilizes a black-box walking controller without knowledge of its implementation details. Results are presented for an Atlas humanoid robot during Team ViGIR's participation in the 2013 DARPA Robotics Challenge Trials.
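The search-based approach described above can be sketched as a weighted-A* inner loop (the core of ARA*, which repeatedly runs such searches with a shrinking heuristic inflation) over footstep states. This is an illustrative planar sketch, not the paper's implementation; the step set, discretization, and cost model are assumptions, and terrain feasibility checks are omitted:

```python
import heapq
import math

# Hypothetical discretized footstep state: (x, y, yaw bin) on a coarse grid.
# The real planner searches full 6-DoF foot poses against a height map.
STEP_SET = [(0.25, 0.0, 0), (0.15, 0.1, 1), (0.15, -0.1, -1), (0.1, 0.0, 0)]
YAW_RES = math.pi / 8

def neighbors(state):
    """Expand a state by applying each relative step in the current heading."""
    x, y, k = state
    yaw = k * YAW_RES
    for dx, dy, dk in STEP_SET:
        nx = x + dx * math.cos(yaw) - dy * math.sin(yaw)
        ny = y + dx * math.sin(yaw) + dy * math.cos(yaw)
        yield (round(nx, 2), round(ny, 2), (k + dk) % 16), math.hypot(dx, dy)

def weighted_astar(start, goal_xy, eps=2.0, tol=0.2):
    """One inner search of ARA*: A* with the heuristic inflated by eps >= 1,
    trading solution quality for speed (solution cost <= eps * optimal)."""
    def h(s):
        return math.hypot(s[0] - goal_xy[0], s[1] - goal_xy[1])
    open_heap = [(eps * h(start), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while open_heap:
        _, cost, s = heapq.heappop(open_heap)
        if h(s) <= tol:  # goal region reached; reconstruct step sequence
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n, step_cost in neighbors(s):
            new_g = cost + step_cost
            if new_g < g.get(n, float("inf")):
                g[n] = new_g
                parent[n] = s
                heapq.heappush(open_heap, (new_g + eps * h(n), new_g, n))
    return None
```

Lowering `eps` toward 1.0 on successive searches is what makes ARA* anytime: early inflated searches return a usable plan quickly, later ones refine it.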
IEEE-RAS International Conference on Humanoid Robots | 2014
Alberto Romay; Stefan Kohlbrecher; David C. Conner; Alexander Stumpf; Oskar von Stryk
Humanoid robotic manipulation in unstructured environments is a challenging problem. Limited perception, communications and environmental constraints present challenges that prevent fully autonomous or purely teleoperated robots from reliably interacting with their environment. In order to achieve higher reliability in manipulation, we present an approach involving remote human supervision. The strengths of both the human operator and the humanoid robot are leveraged through a user interface that allows the operator to perceive the remote environment through an aggregated world model based on onboard sensing, while the robot can efficiently receive perceptual and semantic information from the operator. A template-based manipulation approach has been successfully applied to the Atlas humanoid robot; we show real-world footage of the results obtained in the 2013 DARPA Robotics Challenge Trials.
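The idea behind template-based manipulation is that the operator aligns a known object model to the sensor data, and the robot derives grasp poses from the template rather than from raw perception. A minimal sketch of that data structure follows; all names, fields, and the planar pose composition are illustrative assumptions, not the actual template library:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Pose:
    """Simplified pose: position plus planar yaw (a real system uses SE(3))."""
    x: float
    y: float
    z: float
    yaw: float = 0.0

@dataclass
class ObjectTemplate:
    """An object model carrying grasp poses defined relative to the template."""
    name: str
    mesh: str                                        # path to a reference mesh
    grasp_poses: list = field(default_factory=list)  # poses in template frame

    def grasps_in_world(self, template_pose):
        """Compose the operator-aligned template pose with each relative
        grasp pose to obtain world-frame grasp targets (planar for brevity)."""
        c, s = math.cos(template_pose.yaw), math.sin(template_pose.yaw)
        out = []
        for g in self.grasp_poses:
            out.append(Pose(
                template_pose.x + c * g.x - s * g.y,
                template_pose.y + s * g.x + c * g.y,
                template_pose.z + g.z,
                template_pose.yaw + g.yaw))
        return out
```

This split is what lets the operator contribute semantic knowledge (object identity and placement) while the robot handles the metric details of grasping.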
IEEE-RAS International Conference on Humanoid Robots | 2016
Alexander Stumpf; Stefan Kohlbrecher; David C. Conner; Oskar von Stryk
Humanoid robots benefit from their anthropomorphic shape when operating in human-made environments. In order to achieve human-like capabilities, robots must be able to perceive, understand and interact with the surrounding world. Humanoid locomotion in uneven terrain is a challenging task, as it requires sophisticated world model generation, motion planning and control algorithms, and their integration. In recent years, much progress in world modeling and motion control has been achieved. This paper presents one of the very first open source frameworks for full 3D footstep planning available for ROS which integrates the perception and locomotion systems of humanoid bipedal robots. The framework is designed to be usable for different types of humanoid robots with different perception and locomotion capabilities with minimal implementation effort. In order to integrate with almost any humanoid walking controller, the system can easily be extended with additional functionality that may be needed by low-level motion algorithms. It also provides sophisticated human-robot interaction that enables the operator to direct the planner towards improved solutions, delivers monitoring data to the operator, and gives debugging feedback to developers. The provided software package consists of three major blocks that address world model generation, planning, and interfacing low-level motion algorithms. The framework has been successfully applied to four different full-size humanoid robots.
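The controller-agnostic integration described above is typically achieved with a plugin interface: the planner emits an abstract step sequence, and a robot-specific adapter translates it for the onboard walking controller. A minimal sketch, assuming hypothetical class and method names (not the framework's actual API):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Step:
    """One planned footstep: planar pose plus the stance foot."""
    x: float
    y: float
    yaw: float
    foot: str  # "left" or "right"

class WalkControllerInterface(ABC):
    """Adapter layer: a robot-specific plugin only implements execute(),
    so the planner never depends on controller internals."""
    @abstractmethod
    def execute(self, plan: list) -> bool:
        ...

class LoggingController(WalkControllerInterface):
    """Trivial plugin that records the commands it would send to hardware."""
    def __init__(self):
        self.sent = []

    def execute(self, plan):
        for step in plan:
            self.sent.append(f"{step.foot}:({step.x:.2f},{step.y:.2f},{step.yaw:.2f})")
        return True
```

Swapping robots then means swapping the plugin, while the perception, planning, and operator-interface blocks stay unchanged.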
IEEE-RAS International Conference on Humanoid Robots | 2015
Alberto Romay; Achim Stein; Martin Oehler; Alexander Stumpf; Stefan Kohlbrecher; Oskar von Stryk; David C. Conner
Among the eight tasks of the DARPA Robotics Challenge (DRC), the driving task was one of the most challenging. Obstacles in the course prevented straight driving, and restricted communications limited the situational awareness of the operator. In this video we show how Team Hector and Team ViGIR successfully completed the driving task with different robot platforms, THOR-Mang and Atlas respectively, using the same software and compliant steering adapter. Our driving user interface presents to the operator image views from the cameras and driving aids such as wheel positioning and the turn-radius path of the wheels. The operator uses a standard computer game joystick to command steering wheel angles and gas pedal pressure. Steering wheel angle positions are generated off-line and interpolated on-line on the robot's onboard computer. The compliant steering adapter accommodates end-effector positioning errors. Gas pedal pressure is applied through a binary joint position of the robot's leg. Commands are generated in the operator control station and sent as target positions to the robot. The driving user interface also provides feedback on the current steering wheel position. Video footage with descriptions from the driving interface, the robot's camera and LIDAR perception, and external task monitoring is presented.
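The off-line/on-line split for steering can be sketched as keyframe interpolation: a few steering-wheel angles are mapped off-line to arm joint configurations, and intermediate angles are interpolated on-line. The keyframe values and three-joint arm below are purely illustrative assumptions:

```python
import bisect

# Hypothetical off-line generated keyframes: steering-wheel angle (rad)
# mapped to an arm joint configuration that holds the wheel at that angle.
KEYFRAMES = [
    (-1.0, [0.10, -0.40, 0.90]),
    (0.0,  [0.00,  0.00, 1.20]),
    (1.0,  [-0.10, 0.40, 0.90]),
]

def joint_targets(wheel_angle):
    """On-line step: clamp the commanded wheel angle to the keyframe range
    and linearly interpolate joint positions between the bracketing frames."""
    angles = [a for a, _ in KEYFRAMES]
    wheel_angle = max(min(wheel_angle, angles[-1]), angles[0])
    i = bisect.bisect_left(angles, wheel_angle)
    if i == 0:
        return list(KEYFRAMES[0][1])
    a0, q0 = KEYFRAMES[i - 1]
    a1, q1 = KEYFRAMES[i]
    t = (wheel_angle - a0) / (a1 - a0)
    return [x0 + t * (x1 - x0) for x0, x1 in zip(q0, q1)]
```

Precomputing the keyframes keeps the expensive inverse-kinematics work off the low-bandwidth operator loop; only a scalar wheel angle needs to be transmitted.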
IEEE-RAS International Conference on Humanoid Robots | 2014
Alberto Romay; Stefan Kohlbrecher; Alexander Stumpf; Oskar von Stryk; Felipe Bacim; Doug A. Bowman; Alex K. Goins; Ravi Balasubramanian; David C. Conner
Summary form only given. Team ViGIR entered the 2013 DARPA Robotics Challenge (DRC) with a focus on developing software to enable an operator to guide a humanoid robot through the series of challenge tasks emulating disaster scenarios. We designed our operator control station (OCS) to allow multiple operators to request and share information as needed to maintain situational awareness under bandwidth constraints, while directing the robot to perform tasks with most planning and control taking place onboard the robot. Given the limited development time, we leveraged a number of open source libraries in both our onboard software and our OCS design; this included significant use of the Robot Operating System (ROS) libraries and toolchain. The DRC consisted of eight tasks; this video shows our approach for the Hose task, in which the remote operator guides the robot through the OCS to walk to a reel and pick up a hose, then walk with the hose towards a wye and attach the hose to it. Synchronized footage from both the Hose scenario and a screencast of the OCS shows step by step how the operator interacts with the robot through template-based manipulation and high-level semantic commands such as defining waypoints or objects to be grasped.
Archive | 2018
David C. Conner; Stefan Kohlbrecher; Philipp Schillinger; Alberto Romay; Alexander Stumpf; Spyros Maniatopoulos; Hadas Kress-Gazit; Oskar von Stryk
This chapter discusses the common reactive high-level behavioral control system used by Team ViGIR and Team Hector on separate robots in the 2015 DARPA Robotics Challenge (DRC) Finals. We present an approach that allows one or more human operators to share control authority with a high-level behavior controller in the form of a finite state machine (automaton). This collaborative autonomy leverages the relative strengths of the robotic system and the (remote) human operators; it increases the reliability of the human-robot team performance and decreases the task completion time. This approach is well-suited to disaster scenarios due to the unstructured nature of the environment. The system allows the operators to adjust the robotic system's autonomy on-the-fly in response to changing circumstances, and to modify pre-defined behaviors as needed. To enable these high-level behaviors, we introduce our system designs for several of the lower-level system capabilities such as footstep planning and template-based object manipulation. We evaluate the proposed approach in the context of our two teams' participation in the DRC Finals using two different humanoid platforms, and in systematic experiments conducted in the lab afterward. We present a discussion of the lessons learned during the DRC, especially those related to transitioning between operator-centered control and behavior-centered control during competition. Finally, we describe ongoing research that extends the systems developed during the DRC. All of our described software is available as open source software.
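The shared-control-authority idea can be illustrated with a tiny state machine in which transitions above the current autonomy level block until an operator confirms them. This is a conceptual sketch under assumed names, not the teams' actual behavior engine:

```python
class State:
    """A behavior state with outcome-labeled transitions."""
    def __init__(self, name, transitions):
        self.name = name
        self.transitions = transitions  # outcome -> next state name

class Behavior:
    """Finite state machine whose autonomy level gates risky transitions."""
    def __init__(self, states, initial, autonomy_level=1):
        self.states = {s.name: s for s in states}
        self.current = initial
        self.autonomy_level = autonomy_level

    def step(self, outcome, required_autonomy=0, operator_ok=False):
        """Advance on `outcome`. A transition whose required autonomy exceeds
        the current level waits until the operator explicitly confirms it."""
        if required_autonomy > self.autonomy_level and not operator_ok:
            return self.current  # blocked: await operator confirmation
        self.current = self.states[self.current].transitions[outcome]
        return self.current

# Illustrative behavior: walk -> grasp -> done, with a gated grasp transition.
walk = State("walk", {"reached": "grasp"})
grasp = State("grasp", {"grasped": "done"})
done = State("done", {})
b = Behavior([walk, grasp, done], "walk", autonomy_level=1)
b.step("reached", required_autonomy=0)                    # runs autonomously
b.step("grasped", required_autonomy=2)                    # blocks on operator
b.step("grasped", required_autonomy=2, operator_ok=True)  # operator confirms
```

Raising `autonomy_level` on-the-fly lets the same behavior run fully autonomously when conditions are good and fall back to supervised stepping when they are not.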
Robot Soccer World Cup | 2017
Nicolai Ommer; Alexander Stumpf; Oskar von Stryk
Online adaptation of motion models enables autonomous robots to move more accurately in the presence of unknown disturbances. This paper proposes a new adaptive compensation feedforward controller capable of learning a compensation motion model online, without any prior knowledge, to counteract non-modeled disturbances such as slippage or hardware malfunctions. The controller is able to prevent motion errors a priori and is well suited for real hardware due to its high adaptation rate. It can be used in conjunction with any motion model, as only motion errors are compensated. A simple interface enables quick deployment on other robot systems, as demonstrated in the RoboCup Small Size and Rescue Robot leagues.
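The compensation principle can be sketched as follows: for each commanded velocity, the controller learns an additive correction from the observed motion error and applies it feedforward on the next command. The binning, learning rule, and rate below are illustrative assumptions, not the paper's formulation:

```python
class AdaptiveCompensation:
    """Sketch of an adaptive compensation feedforward term: learns an additive
    correction per discretized velocity command with no prior model, so a
    systematic disturbance (e.g. constant slippage) is cancelled over time."""
    def __init__(self, resolution=0.1, rate=0.5):
        self.resolution = resolution
        self.rate = rate          # adaptation rate (learning step size)
        self.correction = {}      # command bin -> learned additive correction

    def _bin(self, v):
        return round(v / self.resolution)

    def command(self, desired):
        """Feedforward step: apply the learned correction before execution."""
        return desired + self.correction.get(self._bin(desired), 0.0)

    def update(self, desired, measured):
        """Adaptation step: nudge the correction toward the observed error."""
        b = self._bin(desired)
        error = desired - measured
        self.correction[b] = self.correction.get(b, 0.0) + self.rate * error
```

Because only the error is learned, any underlying motion model can sit beneath this layer unchanged; against a plant with 20% slip, the correction converges so that the measured velocity matches the desired one.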
Robot Soccer World Cup | 2014
Stefan Kohlbrecher; Florian Kunz; Dorothea Koert; Christian Rose; Paul Manns; Kevin Daun; Johannes Schubert; Alexander Stumpf; Oskar von Stryk
Having participated in the RoboCup Rescue Real Robot League competition for approximately five years, the members of Team Hector Darmstadt have always focused on robot autonomy for Urban Search and Rescue (USAR). In 2014, the team won the RoboCup RRL competition, marking the first time a team with a strong focus on autonomy won the championship. This paper describes both the underlying research and the open source developments that made this success possible, as well as ongoing work focused on increasing rescue robot performance.
Frontiers in Robotics and AI | 2016
Stefan Kohlbrecher; Alexander Stumpf; Alberto Romay; Philipp Schillinger; Oskar von Stryk; David C. Conner