Publications


Featured research published by Andreas Bihlmaier.


Simulation, Modeling, and Programming for Autonomous Robots | 2014

Robot Unit Testing

Andreas Bihlmaier; Heinz Wörn

We introduce Robot Unit Testing (RUT) as a methodology to bring modern testing methods into robotics. Through RUT, the range of robotics software that can be automatically tested is extended beyond current practice. A robotics simulator is used to bridge the gap between well-automated tests that only check a robot's software and time-consuming, inherently manual tests on physical robots. An in-depth realization of RUT is shown, based on the Robot Operating System (ROS) framework and the Gazebo simulator, which were chosen for their prominence in robotics research and their inherent suitability for the RUT methodology.
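
As a flavor of what an RUT-style test could look like, here is a minimal sketch using Python's unittest together with rostest, assuming a Gazebo simulation is already running; the topic names, message types and tolerances are illustrative assumptions rather than the paper's actual test suite.

```python
#!/usr/bin/env python
# Minimal sketch of a Robot Unit Test (RUT). Assumes a Gazebo simulation is
# already running and exposes the hypothetical topics /arm_controller/command
# and /joint_states; names and tolerances are illustrative only.
import unittest
import rospy
from std_msgs.msg import Float64
from sensor_msgs.msg import JointState


class TestArmReachesTarget(unittest.TestCase):
    def test_joint_reaches_commanded_position(self):
        rospy.init_node('robot_unit_test', anonymous=True)
        pub = rospy.Publisher('/arm_controller/command', Float64, queue_size=1)
        rospy.sleep(1.0)           # give the publisher time to connect
        pub.publish(Float64(0.5))  # command the (hypothetical) joint to 0.5 rad
        rospy.sleep(3.0)           # let the simulated controller settle
        state = rospy.wait_for_message('/joint_states', JointState, timeout=5.0)
        self.assertAlmostEqual(state.position[0], 0.5, delta=0.05)


if __name__ == '__main__':
    import rostest
    rostest.rosrun('rut_examples', 'test_arm_reaches_target', TestArmReachesTarget)
```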


Artificial Intelligence Review | 2013

Towards Cognitive Medical Robotics in Minimal Invasive Surgery

Oliver Weede; Andreas Bihlmaier; Jessica Hutzl; Beat P. Müller-Stich; Heinz Wörn

To date, medical robots for minimally invasive surgery do not provide assistance that is appropriate to the workflow of the intervention. We present a simple concept of a cognitive system derived from classic closed-loop control. As an implementation, we present a cognitive medical robot system using lightweight robots with redundant kinematics. The robot system includes several control modes and human-machine interfaces. We focus on describing knowledge acquisition about the workflow of an intervention and present two example applications utilizing the acquired knowledge: autonomous camera guidance and planning of minimally invasive port (trocar) positions in combination with an initial robot setup. Port planning is described as an optimization problem. The autonomous camera system includes a mid-term movement prediction for the ongoing intervention. The cognitive approach to a medical robot system includes taking the environment into account. The goal is to create a system that acts like a human assistant, who perceives the situation, understands the context based on his knowledge, and acts appropriately.
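
To illustrate the idea of port planning as an optimization problem, the toy sketch below minimizes an invented cost over candidate trocar positions with SciPy; the cost terms and target points are placeholders, not the authors' formulation.

```python
# Illustrative sketch only: trocar (port) placement posed as an optimization
# problem. The cost terms and anatomical targets below are invented
# placeholders, not the paper's actual planning criteria.
import numpy as np
from scipy.optimize import minimize

targets = np.array([[0.05, 0.10, 0.00],   # hypothetical intra-abdominal targets (m)
                    [0.08, 0.02, 0.03]])

def port_cost(port_xyz):
    """Penalize long instrument paths and ports far from the abdominal wall (toy metric)."""
    port = np.asarray(port_xyz)
    reach = np.linalg.norm(targets - port, axis=1).sum()  # total reach to all targets
    wall_penalty = abs(port[2] - 0.15)                    # assumed abdominal wall at z = 0.15 m
    return reach + 10.0 * wall_penalty

result = minimize(port_cost, x0=[0.0, 0.0, 0.15], method='Nelder-Mead')
print('suggested port position:', result.x)
```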


Robot Operating System (ROS) - The Complete Reference (Volume 1). Ed.: A. Koubaa | 2016

ROS-Based Cognitive Surgical Robotics

Andreas Bihlmaier; Tim Beyl; Philip Nicolai; Mirko Kunze; Julien Mintenbeck; Luzie Schreiter; Thorsten Brennecke; Jessica Hutzl; Jörg Raczkowsky; Heinz Wörn

This case study describes our ROS-based setup for robot-assisted (minimally invasive) surgery. The system includes different perception components (Kinects, time-of-flight cameras, endoscopic cameras, marker-based trackers, ultrasound), input devices (Force Dimension haptic input devices), robots (KUKA LWRs, Universal Robots UR5, ViKY endoscope holder), surgical instruments and augmented reality displays. Apart from bringing together the individual components in a modular and flexible setup, many subsystems have been developed based on combinations of the single components. These subsystems include a bimanual telemanipulator, multi-Kinect people tracking, knowledge-based endoscope guidance and ultrasound tomography. The platform is not a research project in itself, but a basic infrastructure used for various research projects. We want to show how to build a large robotics platform, in fact a complete lab setup, based on ROS that is flexible and modular enough to support research on different robotics-related questions concurrently. The whole setup runs on ROS Indigo and Ubuntu Trusty (14.04). A repository of already open-sourced components is available at https://github.com/KITmedical.
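
As a hint of how such subsystems emerge from wiring independent component nodes together over topics, here is a toy relay node; the topic names and message types are hypothetical, and the real component interfaces live in the linked KITmedical repositories.

```python
#!/usr/bin/env python
# Toy illustration of modular composition in a ROS setup: a subsystem is formed
# by connecting existing component nodes via topics. Topic names and message
# types here are hypothetical, not the actual platform interfaces.
import rospy
from geometry_msgs.msg import PoseStamped

def on_tracked_instrument(pose):
    # Forward the tracked instrument pose as a viewpoint target for a
    # (hypothetical) endoscope-holder controller node.
    target_pub.publish(pose)

if __name__ == '__main__':
    rospy.init_node('endoscope_guidance_relay')
    target_pub = rospy.Publisher('/endoscope_holder/target_pose', PoseStamped, queue_size=1)
    rospy.Subscriber('/marker_tracker/instrument_pose', PoseStamped, on_tracked_instrument)
    rospy.spin()
```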


International Conference on Multisensor Fusion and Integration for Intelligent Systems | 2012

Robustness, scalability and flexibility: Key-features in modular self-reconfigurable mobile robotics

Rene Matthias; Andreas Bihlmaier; Heinz Wörn

In this paper we address some of the most important aspects of modular self-reconfigurable mobile robotics. Related work indicates that it is not sufficient to just have a flexible, scalable and robust platform; it is also necessary to preserve these capabilities at higher levels of organization to benefit from them. Hence, we analyze the way in which similar platforms and the corresponding software are implemented. Then we describe the way our own platform is implemented within the SYMBRION and REPLICATOR projects. Afterwards we show how we manage to preserve the robustness and flexibility for use by other researchers at higher levels of organization. To conclude, we provide measurements that show the general adequacy of our platform architecture to cope with the challenges posed by multi-modular self-reconfigurable robotics.


International Workshop on Robot Motion and Control | 2013

Automated planning as a new approach for the self-reconfiguration of mobile modular robots

Andreas Bihlmaier; Lutz Winkler; Heinz Wörn

We present a new approach to the solution of the self-reconfiguration problem for mobile modular robots (MMRs). The solution describes self-reconfiguration as a planning problem that can be tackled by an automated planner. In addition to using the planner's advanced domain-independent search heuristics, we introduce domain-specific heuristics into the domain description at a higher conceptual level. An explicit optimality measure is part of the given domain description. The planner can cope with difficult self-reconfigurations, e.g. those involving the building of helper organisms. The abstract symbolic plan is executed by a behavior-based robot controller, where each robot is seen as an agent that has access to local information only. The position which a robot takes in the final configuration is determined by swarm mechanisms at runtime. A coordination instance broadly monitors the self-reconfiguration. The feasibility and advantages of this approach compared to previous work on the self-reconfiguration of MMRs are shown by planning and executing self-reconfigurations in simulation for several organism families with different reconfiguration complexities.
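
The following toy sketch conveys the general idea of casting reconfiguration as symbolic planning with a domain-specific heuristic; the one-dimensional slot domain and A* search are illustrative stand-ins, not the paper's planner or domain encoding.

```python
# Toy sketch of self-reconfiguration as symbolic planning. The "domain" (modules
# occupying slots on a 1-D line) and the heuristic are purely illustrative.
from heapq import heappush, heappop

def plan(start, goal):
    """A* search over symbolic states; a state is a tuple of module slot indices."""
    def heuristic(state):
        # Domain-specific heuristic: total slot distance to the goal configuration.
        return sum(abs(a - b) for a, b in zip(state, goal))

    frontier = [(heuristic(start), start, [])]
    visited = {start}
    while frontier:
        _, state, actions = heappop(frontier)
        if state == goal:
            return actions
        for i, slot in enumerate(state):
            for step in (-1, 1):                       # operator: move module i one slot
                new_slot = slot + step
                if new_slot < 0 or new_slot in state:  # keep slots unique and non-negative
                    continue
                nxt = tuple(new_slot if j == i else s for j, s in enumerate(state))
                if nxt not in visited:
                    visited.add(nxt)
                    heappush(frontier, (len(actions) + 1 + heuristic(nxt),
                                        nxt, actions + [(i, new_slot)]))
    return None

print(plan(start=(0, 1, 2), goal=(2, 3, 4)))  # sequence of (module, target slot) moves
```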


Robot Operating System (ROS) - The Complete Reference (Volume 1). Ed.: A. Koubaa | 2016

Advanced ROS Network Introspection (ARNI)

Andreas Bihlmaier; Matthias Hadlich; Heinz Wörn

This tutorial chapter gives an introduction to Advanced ROS Network Introspection (ARNI), which was released as a solution for monitoring large ROS-based robotic installations. In the spirit of infrastructure monitoring (like Nagios), we generate metadata about all hosts, nodes, topics and connections in order to monitor and specify the state of distributed robot software based on ROS. Out of the box, ARNI provides a more in-depth view of what is going on within the ROS computation graph. Any existing ROS node and host can be introspected without prior modification or recompilation. By running an additional node on each host of the ROS network, this extends from live network properties to host- and node-specific ones. Furthermore, it is possible to define reference values for the state of all ROS components based on their metadata attributes. Subsequently, ARNI provides a mechanism to take countermeasures when a violated specification is detected. All features are modular and can be used without modifying existing ROS software. ARNI was written for ROS Indigo and this tutorial has been tested on Ubuntu Trusty (14.04). A link to the source code repository, together with complementary information, is available at http://wiki.ros.org/arni.
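
The sketch below illustrates the underlying idea of comparing live metadata against reference values and reacting to violations; it is a simplified stand-in, not ARNI's actual implementation, and the topic name and reference rate are assumptions.

```python
#!/usr/bin/env python
# Simplified illustration of reference-based monitoring: compare a topic's live
# message rate against a specified reference and warn on violation. Not ARNI's
# actual implementation (see wiki.ros.org/arni); topic and rate are assumed.
import rospy

REFERENCE_HZ = 25.0          # expected publication rate (assumed value)
TOPIC = '/camera/image_raw'  # hypothetical topic to watch

class RateMonitor(object):
    def __init__(self):
        self.count = 0
        rospy.Subscriber(TOPIC, rospy.AnyMsg, self.on_msg)   # type-agnostic subscription
        rospy.Timer(rospy.Duration(1.0), self.check)         # check once per second

    def on_msg(self, _msg):
        self.count += 1

    def check(self, _event):
        if self.count < 0.8 * REFERENCE_HZ:   # tolerate 20% deviation from the reference
            rospy.logwarn('%s below reference rate: %d Hz', TOPIC, self.count)
        self.count = 0

if __name__ == '__main__':
    rospy.init_node('rate_monitor')
    RateMonitor()
    rospy.spin()
```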


Robotics, Automation and Mechatronics | 2015

Learning surgical know-how: Dexterity for a cognitive endoscope robot

Andreas Bihlmaier; Heinz Wörn

A successful surgery requires working cooperation between the surgeon, the anesthetist and the operating room staff. In minimally invasive surgery a further cooperation is essential: the teamwork between surgeon and camera assistant. Because the surgeon has to handle two instruments, he is unable to guide the endoscope at the same time. Thus the surgeon has to rely on the assistant to provide a proper view of the anatomical structures he is operating on. Unfortunately, in practice the team often does not have much shared teamwork experience. Good positioning of the endoscope does not follow simple control rules, but is highly dependent on the current task and the individual surgical technique. In some cases both instruments should be in the center of the field of view; in others only one instrument is visible at the edge of the image. This paper describes how this endoscope guidance know-how can be learned from the assistant and made available to a cognitive camera guidance robot. As a result, the surgeon can rely on an assistance system that works based on recorded surgical know-how instead of manually programmed actions.
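
A rough sketch of the learning idea, reduced to mapping instrument tip positions to a preferred camera target with a scikit-learn regressor; the features, synthetic data and model choice are assumptions, not the method evaluated in the paper.

```python
# Rough sketch: learn a mapping from surgical context (here only instrument tip
# positions) to a preferred endoscope viewpoint, as a human assistant would
# choose it. Features, model and data are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical recordings: rows = [left tip xyz, right tip xyz] -> camera target xyz
X = np.random.rand(500, 6)          # stand-in for recorded instrument poses
y = 0.5 * (X[:, 0:3] + X[:, 3:6])   # stand-in "assistant behaviour": look between the tips

model = RandomForestRegressor(n_estimators=100).fit(X, y)

current_instruments = np.array([[0.10, 0.20, 0.05, 0.30, 0.25, 0.04]])
print('suggested camera target:', model.predict(current_instruments)[0])
```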


Simulation, Modeling, and Programming for Autonomous Robots | 2016

A data-driven large-scale optimization approach for task-specific physics realism in real-time robotics simulation

Andreas Bihlmaier; Kai J. Kohlhoff

Physics-based simulation of robots requires models of the simulated robots and their environment. For realistic simulation behavior, these models must be accurate: their physical properties, such as geometric and kinematic values, as well as dynamic parameters such as mass, inertia matrix and friction, must be modelled. Unfortunately, this problem is hard for at least two reasons. First, physics engines designed for real-time simulation of rigid bodies cannot accurately describe many common real-world phenomena, e.g. (drive) friction and grasping. Second, the prime candidate solution to the model parameter problem, classical parameter identification algorithms, although well-studied and efficient, often necessitate a significant manual engineering effort and may not be applicable due to application constraints. Thus, we present a data-driven general-purpose tool which allows model parameters to be optimized for (task-specific) realistic simulation behavior. Our approach directly uses the simulator and the model under optimization to improve model parameters. The optimization process is highly distributed and uses a hybrid optimization approach based on metaheuristics and the Ceres non-linear least squares solver. The user only has to provide a configuration file that specifies which model parameters to optimize, together with realism criteria and a set of reference recordings from the real robot system.
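
The conceptual core can be sketched as fitting simulation parameters to a reference recording; in the toy example below a stub replaces the simulator and SciPy's least-squares solver stands in for the distributed metaheuristic/Ceres pipeline described above.

```python
# Conceptual sketch of data-driven parameter tuning: adjust model parameters so
# that simulated trajectories match a reference recording from the real robot.
# The "simulator" is a stub and the data is synthetic; both are placeholders.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 5.0, 200)
reference = np.exp(-0.7 * t) / 2.0 + 0.01 * np.random.randn(t.size)  # stand-in recording

def simulate(friction, mass):
    """Placeholder for running the simulator with candidate parameters."""
    return np.exp(-friction * t) / mass   # toy dynamics instead of a physics engine

def residuals(params):
    friction, mass = params
    return simulate(friction, mass) - reference   # mismatch to the reference recording

fit = least_squares(residuals, x0=[0.1, 1.0], bounds=([0.0, 0.1], [10.0, 50.0]))
print('optimized (friction, mass):', fit.x)
```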


International Conference on Intelligent Autonomous Systems | 2016

Hierarchical Task Networks as Domain-Specific Language for Planning Surgical Interventions

Andreas Bihlmaier; Luzie Schreiter; Jörg Raczkowsky; Heinz Wörn

This paper addresses the challenges of defining surgical workflows. Surgical workflows have to deal with medical and technical aspects at different levels of abstraction in order to ensure safety. We propose hierarchical task networks (HTN) as a unifying domain-specific language (DSL) for the definition of surgical workflows. The DSL describes relations and dependencies in state sequences and surgical actions for complex workflows at varying levels of detail. With an HTN planner we are able to decompose high-level steps into primitive actions and identify all possible workflows together with their paths through the intervention. This information can be used to identify missing or inaccurate information in the literature and consequently improve the workflow and safety of the surgical intervention. By means of a case study, we present a detailed HTN-based DSL for laparoscopic cholecystectomy to show the advantage of our particular approach to workflow modeling.
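
A minimal sketch of HTN-style decomposition as a workflow DSL is given below; the task names and methods are illustrative fragments, not the paper's full laparoscopic cholecystectomy model.

```python
# Minimal sketch of hierarchical task network (HTN) decomposition as a workflow
# DSL. Task names and methods are illustrative fragments only.
METHODS = {  # compound task -> ordered subtasks
    'cholecystectomy': ['create_access', 'dissect', 'close'],
    'create_access':   ['place_trocars', 'insert_endoscope'],
    'dissect':         ['expose_cystic_duct', 'clip_and_cut'],
}

def decompose(task):
    """Recursively expand compound tasks into a flat sequence of primitive actions."""
    if task not in METHODS:          # primitive action: executed directly
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose('cholecystectomy'))
# ['place_trocars', 'insert_endoscope', 'expose_cystic_duct', 'clip_and_cut', 'close']
```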


Studies in Health Technology and Informatics | 2016

Hybrid Rendering Architecture for Realtime and Photorealistic Simulation of Robot-Assisted Surgery

Sebastijan Müller; Andreas Bihlmaier; Stephan Irgenfried; Heinz Wörn

In this paper we present a method for combining realtime and non-realtime (photorealistic) rendering with open source software. Realtime rendering provides sufficient realism and is a good choice for most simulation and regression testing purposes in robot-assisted surgery. However, for proper end-to-end testing of the system, some computer vision algorithms require high-fidelity images that capture more minute details of the real scene. One of the central practical obstacles to combining both worlds in a uniform way is creating models that are suitable for both rendering paradigms. We build a modeling pipeline from open source tools that builds on established, open standards for data exchange. The result is demonstrated through a unified model of the medical OpenHELP phantom used in the Gazebo robotics simulator, which can at the same time be rendered with more visual fidelity in the Cycles raytracer.
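
As a sketch of the shared-model idea, the same mesh referenced by a Gazebo SDF model can be loaded into Blender and rendered with Cycles via its Python API; the file paths and render settings are assumptions, and the paper's OpenHELP pipeline involves more modeling steps than shown here.

```python
# Sketch of the "shared model, two renderers" idea: a COLLADA mesh that a Gazebo
# SDF model references can also be loaded into Blender and rendered with the
# Cycles raytracer. Paths, lighting and camera setup are assumptions.
# Run with: blender --background --python render_model_cycles.py
import bpy

MESH = '/path/to/openhelp_phantom.dae'   # hypothetical mesh also used by the Gazebo model

bpy.ops.wm.collada_import(filepath=MESH)          # load the shared model
bpy.context.scene.render.engine = 'CYCLES'        # switch to the photorealistic path tracer
bpy.context.scene.cycles.samples = 256            # raise sample count for less noise
bpy.context.scene.render.filepath = '/tmp/openhelp_cycles.png'
bpy.ops.render.render(write_still=True)           # write the high-fidelity still image
```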

Collaboration


Dive into Andreas Bihlmaier's collaborations.

Top Co-Authors

Heinz Wörn (Karlsruhe Institute of Technology)
Jessica Hutzl (Karlsruhe Institute of Technology)
Sebastian Bodenstedt (Karlsruhe Institute of Technology)
Stefanie Speidel (Karlsruhe Institute of Technology)
Jörg Raczkowsky (Karlsruhe Institute of Technology)
Luzie Schreiter (Karlsruhe Institute of Technology)