Publication


Featured research published by Christian Schlette.


International Conference on Intelligent Robotics and Applications | 2012

Control by 3D simulation – a new eRobotics approach to control design in automation

Juergen Rossmann; Michael Schluse; Christian Schlette; Ralf Waspe

This paper introduces new so-called "control by 3D simulation" concepts, which form the basis for the simulation-based development of complex control algorithms, e.g. in the field of robotics and automation. A controller design can be developed, parameterized, tested and verified using so-called Virtual Testbeds until it performs adequately in simulation. Then a stripped-down version of the same simulation system uses the same simulation model and the same simulation algorithms on the real hardware, implementing a real-time capable controller. This results in an integrated development approach that brings simulation technology onto the real hardware to bridge the gap between simulation and real-world operation. In this way, Virtual Testbeds and control by 3D simulation provide major building blocks in the emerging field of eRobotics to keep the ever-increasing complexity of current computer-aided solutions manageable.
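The core idea above, that the identical model and stepping code are first verified in a Virtual Testbed and then reused as the real-time controller, can be illustrated with a minimal sketch. All names here (`SimulationModel`, `verify_in_testbed`, the proportional `step` rule) are hypothetical illustrations, not the paper's actual implementation:

```python
class SimulationModel:
    """Toy stand-in for a shared simulation model: the SAME step()
    code is meant to run in the Virtual Testbed and, stripped down,
    as the real-time controller on the hardware."""
    def __init__(self, gain):
        self.gain = gain

    def step(self, state, target):
        # one control/simulation cycle: move the state toward the target
        return state + self.gain * (target - state)

def verify_in_testbed(model, state, target, cycles):
    """Run the controller in simulation until its behavior is trusted."""
    for _ in range(cycles):
        state = model.step(state, target)
    return state

model = SimulationModel(gain=0.5)               # parameterized in the testbed
final_state = verify_in_testbed(model, 0.0, 1.0, 4)
# the identical model.step() would then be looped on the real hardware
```

The point of the sketch is the reuse: only the surrounding infrastructure changes between simulation and deployment, never the model or the algorithm.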


International Conference on Robotics and Automation | 2011

Making planned paths look more human-like in humanoid robot manipulation planning

Franziska Zacharias; Christian Schlette; Florian Schmidt; Christoph Borst; Jürgen Rossmann; Gerd Hirzinger

It contradicts human expectations when humanoid robots move awkwardly during manipulation tasks. The unnatural motion may be caused by awkward start or goal configurations, or by the probabilistic path planning processes that are often used. This paper shows that the choice of an arm's target configuration strongly affects planning time and how human-like a planned path appears. Human-like goal configurations are found using a criterion from ergonomics research. The knowledge of which poses of the Tool Center Point (TCP) can be reached in a natural manner is encapsulated in a restricted reachability map for the robot arm.


Künstliche Intelligenz | 2014

Mental Models for Intelligent Systems: eRobotics Enables New Approaches to Simulation-Based AI

Jürgen Roßmann; Linus Atorf; Malte Rast; Georgij Grinshpun; Christian Schlette

eRobotics is a newly evolving branch of e-Systems engineering, providing tools to support the whole life cycle of robotic applications by means of electronic media. With the eRobotics methodology, the target system and its environment can be modeled, validated, and calibrated to achieve a close-to-reality simulation. In this contribution, we present simulation-based mental models for autonomous systems as a foundation for new approaches to prediction and artificial intelligence. We formulate a methodology to construct optimization problems within simulation environments in order to assist autonomous systems in action planning. We illustrate the usefulness and performance of this approach through various examples in different fields. As an application in space robotics, we focus on climbing strategies of a legged mobile exploration robot. Furthermore, we enable skillful interaction control in service robotics and address energy consumption issues. The contribution concludes with a detailed discussion of the concept presented here.
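The methodology of constructing an optimization problem inside a simulation, then letting the autonomous system pick the action with the best simulated outcome, can be sketched as follows. The cost function, the terrain model, and the foothold candidates are all invented for illustration; the paper's actual simulations are full 3D models:

```python
def terrain_height(x):
    # hypothetical terrain: a ramp that levels off into a plateau
    return 0.5 * x if x < 1.0 else 0.5

def simulate_step_cost(foothold):
    """Surrogate for a full simulation run: score a candidate foothold
    by a local slope penalty plus remaining distance to the goal at x=2.0."""
    slope = abs(terrain_height(foothold) - terrain_height(foothold - 0.1)) / 0.1
    distance_to_goal = abs(2.0 - foothold)
    return 2.0 * slope + distance_to_goal

def plan_by_simulation(candidates):
    """Evaluate each candidate action in simulation, pick the cheapest."""
    return min(candidates, key=simulate_step_cost)

best = plan_by_simulation([0.2, 0.6, 1.2, 1.8])
```

In the real system the scalar cost function would be replaced by a full simulation rollout, but the planning loop, simulate each candidate and minimize, stays the same.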


Archive | 2009

Model-Based Programming “by Demonstration” – Fast Setup of Robot Systems (ProDemo)

Jürgen Roßmann; Henning Ruf; Christian Schlette

This article describes a new integrated approach to robot programming which combines online and offline methods in an efficient, synergetic way. It aims at reducing the cost, the effort and the steepness of the learning curve to set up robotic systems, which are key issues to support the economic use of robots in small and medium enterprises. The innovative approach of the system lies in the use of a task-oriented description and modeling of the work cell as well as the intuitive commanding of the program flow. The input of a tracking system is used to define trajectories and to model obstacles in the work cell. All tasks are coordinated intuitively via a Graphical User Interface, which includes visual programming. A simulation system is then used for collision checking, visualization and optimization of the programs. This paper focuses on MMI’s developments of the GUI and the modeling capabilities of the system.


Proceedings of SPIE | 2015

Virtual commissioning of automated micro-optical assembly

Christian Schlette; Daniel Losch; Sebastian Haag; Daniel Zontar; Jürgen Roßmann; Christian Brecher

In this contribution, we present a novel approach to enable virtual commissioning for process developers in micro-optical assembly. Our approach aims at supporting micro-optics experts to effectively develop assisted or fully automated assembly solutions without detailed prior experience in programming, while at the same time enabling them to easily implement their own libraries of expert schemes and algorithms for handling optical components. Virtual commissioning is enabled by a 3D simulation and visualization system in which the functionalities and properties of automated systems are modeled, simulated and controlled based on multi-agent systems. For process development, our approach supports event-, state- and time-based visual programming techniques for the agents and allows for their kinematic motion simulation in combination with looped-in simulation results for the optical components. First results have been achieved by simply switching the agents to command the real hardware setup after successful process implementation and validation in the virtual environment. We evaluated and adapted our system to meet the requirements set by industrial partners: laser manufacturers as well as hardware suppliers of assembly platforms. The concept is applied to the automated assembly of optical components for optically pumped semiconductor lasers and the positioning of optical components for beam shaping.


International Conference on Computer Modelling and Simulation | 2013

A Virtual Testbed for Human-Robot Interaction

Juergen Rossmann; Linus Atorf; Malte Rast; Christian Schlette

Many research efforts in human-robot interaction (HRI) have so far focussed on the mechanical design of intrinsically safe robots, as well as impedance control for tasks in which human and robot work together. However, comparatively little attention is paid to an approach that ensures safety and permits close HRI while dispensing with the necessity of the human's physical presence in close proximity to the robot. In this work we acquire real human manipulation gestures using Microsoft's Kinect sensor and project them in real time into the workspace of a simulated, impedance-controlled robot manipulator. This way, we can remotely and therefore safely superimpose human motion over the robot's dynamic motion, wherever the human operator is located. The simulated robot state is then transferred to the real robot as input, so as to physically perform the intended task. The Virtual Testbed approach is not only useful for HRI pre-analysis, testing and validation, but may be particularly advantageous for telepresence, industrial and hazardous tasks as well as training purposes. Simulation results are provided to show the effectiveness of the approach.
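The superimposition step described above, blending a remotely acquired human gesture onto the simulated robot's motion before forwarding the simulated state to the real robot, can be sketched in a few lines. The blending gain and safety limit are illustrative assumptions, not values from the paper, and a real system would work on full joint or Cartesian states rather than a 3-vector:

```python
def superimpose(robot_pose, human_offset, gain=0.5, limit=0.2):
    """Blend a human gesture offset onto the simulated robot pose,
    clamping each axis so the commanded correction stays within a
    safe envelope (a crude stand-in for impedance-style limiting)."""
    clamped = [max(-limit, min(limit, gain * d)) for d in human_offset]
    return [r + c for r, c in zip(robot_pose, clamped)]

# simulated robot state; the result would be sent to the real robot
cmd = superimpose([0.4, 0.0, 0.3], [0.6, -0.1, 0.0])
```

Because the clamp runs in simulation before anything reaches the hardware, an erratic gesture can never command an unsafe jump on the real robot.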


Production Engineering | 2014

A new benchmark for pose estimation with ground truth from virtual reality

Christian Schlette; Anders Buch; Eren Erdal Aksoy; Thomas Steil; Jeremie Papon; Thiusius Rajeeth Savarimuthu; Florentin Wörgötter; Norbert Krüger; Jürgen Roßmann

The development of programming paradigms for industrial assembly currently gets fresh impetus from approaches in human demonstration and programming-by-demonstration. Major low- and mid-level prerequisites for machine vision and learning in these intelligent robotic applications are pose estimation, stereo reconstruction and action recognition. As a basis for the machine vision and learning involved, pose estimation is used for deriving object positions and orientations and thus target frames for robot execution. Our contribution introduces and applies a novel benchmark for typical multi-sensor setups and algorithms in the field of demonstration-based automated assembly. The benchmark platform is equipped with a multi-sensor setup consisting of stereo cameras and depth scanning devices (see Fig. 1). The dimensions and abilities of the platform have been chosen in order to reflect typical manual assembly tasks. Following the eRobotics methodology, a simulatable 3D representation of this platform was modelled in virtual reality. Based on a detailed camera and sensor simulation, we generated a set of benchmark images and point clouds with controlled levels of noise as well as ground truth data such as object positions and time stamps. We demonstrate the application of the benchmark to evaluate our latest developments in pose estimation, stereo reconstruction and action recognition and publish the benchmark data for objective comparison of sensor setups and algorithms in industry.
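Evaluating a pose estimator against simulated ground truth, as the benchmark above enables, typically means reporting a translation error and a rotation error per object. A minimal sketch of such a metric, reduced to 3D position plus a single yaw angle for illustration (real benchmarks compare full 6-DoF poses):

```python
import math

def pose_error(est_t, gt_t, est_yaw, gt_yaw):
    """Translation error (Euclidean distance) and absolute rotation
    error (wrapped to [-pi, pi]) against a ground-truth pose."""
    trans_err = math.dist(est_t, gt_t)
    rot_err = abs((est_yaw - gt_yaw + math.pi) % (2 * math.pi) - math.pi)
    return trans_err, rot_err

# estimated pose vs. ground truth exported from the simulation
t_err, r_err = pose_error([0.10, 0.02, 0.00], [0.10, 0.00, 0.00],
                          math.radians(92), math.radians(90))
```

The wrap-around in the rotation term matters: without it, an estimate of 359° against a ground truth of 1° would score as a near-maximal error instead of 2°.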


Systems, Man and Cybernetics | 2018

Teaching a Robot the Semantics of Assembly Tasks

Thiusius Rajeeth Savarimuthu; Anders Buch; Christian Schlette; Nils Wantia; Jürgen Roßmann; David Martínez Martínez; Guillem Alenyà; Carme Torras; Aleš Ude; Bojan Nemec; Aljaž Kramberger; Florentin Wörgötter; Eren Erdal Aksoy; Jeremie Papon; Simon Haller; Justus H. Piater; Norbert Krüger

We present a three-level cognitive system in a learning-by-demonstration context. The system allows for learning and transfer on the sensorimotor level as well as the planning level. The fundamentally different data structures associated with these two levels are connected by an efficient mid-level representation based on so-called "semantic event chains." We describe details of the representations and quantify the effect of the associated learning procedures for each level under different amounts of noise. Moreover, we demonstrate the performance of the overall system by three demonstrations that have been performed at a project review. The described system has a technology readiness level (TRL) of 4, which in an ongoing follow-up project will be raised to TRL 6.
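The mid-level "semantic event chain" idea can be sketched compactly: for each object pair, record how their spatial relation evolves over key frames, then keep only the transitions, which are the semantically meaningful events. The objects, relation symbols, and helper below are illustrative assumptions, not the paper's exact encoding:

```python
# one row per object pair: relation over key frames
# ("N" = not touching, "T" = touching)
chain = {("hand", "peg"): ["N", "T", "T", "N"],
         ("peg", "hole"): ["N", "N", "T", "T"]}

def transitions(row):
    """Keep only the relation changes: the 'events' of the chain."""
    return [(a, b) for a, b in zip(row, row[1:]) if a != b]

events = {pair: transitions(row) for pair, row in chain.items()}
```

Here the extracted events read as "hand grasps peg, peg meets hole, hand releases peg", i.e. a task description that is independent of exact trajectories and thus transferable between the sensorimotor and planning levels.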


Proceedings of SPIE | 2015

Minimal-effort planning of active alignment processes for beam-shaping optics

Sebastian Haag; Matthias Schranner; Tobias Müller; Daniel Zontar; Christian Schlette; Daniel Losch; Christian Brecher; Jürgen Roßmann

In science and industry, the alignment of beam-shaping optics is usually a manual procedure. Many industrial applications utilizing beam-shaping optical systems require more scalable production solutions, and therefore effort has been invested in research regarding the automation of optics assembly. In previous works, the authors and other researchers have proven the feasibility of automated alignment of beam-shaping optics such as collimation lenses or homogenization optics. Nevertheless, the planning efforts as well as the additional knowledge from the fields of automation and control required for such alignment processes are immense. This paper presents a novel approach to planning active alignment processes of beam-shaping optics, with the focus on minimizing the planning efforts for active alignment. The approach utilizes optical simulation and the genetic programming paradigm from computer science for automatically extracting features from a basis of simulated data that have a high correlation coefficient with respect to the individual degrees of freedom of alignment. The strategy is capable of finding active alignment strategies that can be executed by an automated assembly system. The paper presents a tool making the algorithm available to end users and discusses the results of planning the active alignment of the well-known assembly of a fast-axis collimator. The paper concludes with an outlook on the transferability to other use cases, such as application-specific intensity distributions, which will benefit from reduced planning efforts.
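The selection criterion above, keep the candidate feature whose correlation coefficient with a degree of freedom is highest, can be illustrated with a small Pearson-correlation scorer. The sample data (a linear centroid-shift feature vs. an unrelated noise feature) is invented for illustration; the paper evolves the candidate features themselves via genetic programming:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

dof = [0.0, 0.1, 0.2, 0.3]          # simulated misalignment of one DoF
feature_a = [1.0, 1.4, 1.8, 2.2]    # e.g. spot centroid shift: linear in dof
feature_b = [0.3, 0.1, 0.4, 0.2]    # e.g. unrelated intensity fluctuation
features = {"a": feature_a, "b": feature_b}
best_feature = max(features, key=lambda f: abs(pearson(dof, features[f])))
```

A feature that correlates strongly with a single degree of freedom gives the alignment controller a direct, monotonic signal to drive that axis, which is exactly what makes the resulting strategy executable without control-engineering expertise.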


Emerging Technologies and Factory Automation | 2016

3D simulation-based user interfaces for a highly-reconfigurable industrial assembly cell

Christian Schlette; Daniel Losch; Georgij Grinshpun; Markus Emde; Ralf Waspe; Nils Wantia; Jürgen Roßmann

Although SMEs would benefit from robotic solutions in assembly, the required investments and efforts for their implementation are often too risky and costly for them. Here, the Horizon 2020 project "ReconCell" aims at developing a new type of highly-reconfigurable multi-robot assembly cell which addresses the particular needs of SMEs. At the Institute for Man-Machine Interaction (MMI), we are developing 3D simulation-based user interfaces for ReconCell as the central technology to enable the fast, easy and safe programming of the system. ReconCell builds heavily on previous developments that are transferred from research and prepared for industrial partners with real use cases and demands. Thus, in this contribution, we describe MMI's software platform that will be the basis of the desired user interfaces for robot simulation and control, assembly simulation and execution, Visual Programming and sensor simulation.

Collaboration


Christian Schlette's top co-authors and their affiliations.

Linus Atorf

RWTH Aachen University

Markus Emde

RWTH Aachen University

Nils Wantia

RWTH Aachen University
