Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Claus Lenz is active.

Publications


Featured research published by Claus Lenz.


robot and human interactive communication | 2008

Joint-action for humans and industrial robots for assembly tasks

Claus Lenz; Suraj Nair; Markus Rickert; Alois Knoll; Wolfgang Rösel; Jürgen Gast; Alexander Bannat; Frank Wallhoff

This paper presents a concept of a smart working environment designed to allow true joint-actions of humans and industrial robots. The proposed system perceives its environment with multiple sensor modalities and acts in it with an industrial robot manipulator to assemble capital goods together with a human worker. In combination with the reactive behavior of the robot, safe collaboration between the human and the robot is possible. Furthermore, the system anticipates human behavior, based on knowledge databases and decision processes, ensuring an effective collaboration between the human and robot. As a proof of concept, we introduce a use case where an arm is assembled and mounted on a robot's body.


IEEE Transactions on Automation Science and Engineering | 2011

Artificial Cognition in Production Systems

Alexander Bannat; Thibault Bautze; Michael Beetz; Juergen Blume; Klaus Diepold; Christoph Ertelt; Florian Geiger; Thomas Gmeiner; Tobias Gyger; Alois Knoll; Christian Lau; Claus Lenz; Martin Ostgathe; Gunther Reinhart; Wolfgang Roesel; Thomas Ruehr; Anna Schuboe; Kristina Shea; Ingo Stork genannt Wersborg; Sonja Stork; William Tekouo; Frank Wallhoff; Mathey Wiesbeck; Michael F. Zaeh

Today's manufacturing and assembly systems have to be flexible to adapt quickly to an increasing number and variety of products and changing market volumes. To manage these dynamics, several production concepts (e.g., flexible, reconfigurable, changeable, or autonomous manufacturing and assembly systems) were proposed and partly realized in recent years. This paper presents the general principles of autonomy and the proposed concepts, methods, and technologies to realize cognitive planning, cognitive control, and cognitive operation of production systems. Starting with an introduction to the historical context of different paradigms of production (e.g., the evolution of production and planning systems), different approaches for the design, planning, and operation of production systems are outlined, and future trends towards fully autonomous components of a production system as well as autonomous parts and products are discussed. In flexible production systems with manual and automatic assembly tasks, human-robot cooperation is an opportunity for an ergonomic and economic manufacturing system, especially for low lot sizes. The state of the art and a cognitive approach in this area are outlined. Furthermore, introducing self-optimizing and self-learning control systems is a crucial factor for cognitive systems. These principles are demonstrated by quality assurance and process control in laser welding, used to perform improved quality monitoring. Finally, as the integration of human workers into the workflow of a production system is of the highest priority for efficient production, worker guidance systems for manual assembly with environmentally and situationally triggered paths on state-based graphs are described in this paper.


machine vision applications | 2008

A Unifying Software Architecture for Model-based Visual Tracking

Giorgio Panin; Claus Lenz; Suraj Nair; Erwin Roth; Thomas Friedlhuber; Alois Knoll

In this paper we propose a general, object-oriented software architecture for model-based visual tracking. The library is general purpose with respect to object model, estimated pose parameters, visual modalities employed, number of cameras and objects, and tracking methodology. The base class structure provides the necessary building blocks for implementing a wide variety of both known and novel tracking systems, integrating different visual modalities, such as color, motion, and edge maps, in a multi-level fashion, ranging from pixel-level segmentation up to local feature matching and maximum-likelihood object pose estimation. The proposed structure allows integrating known data association algorithms for simultaneous, multiple-object tracking tasks, as well as data fusion techniques for robust, multi-sensor tracking; within these contexts, parallelization of each tracking algorithm can also be easily accomplished. Application of the proposed architecture is demonstrated through the definition and practical implementation of several tasks, all specified in terms of a self-contained description language.


Advanced Engineering Informatics | 2010

A skill-based approach towards hybrid assembly

Frank Wallhoff; Jürgen Blume; Alexander Bannat; Wolfgang Rösel; Claus Lenz; Alois Knoll

Efficient cooperation of humans and industrial robots is based on a common understanding of the task as well as the perception and anticipation of the partner's next action. In this article, a hybrid assembly station is presented in which an industrial robot can learn new tasks from worker instructions. The learned task is performed by the robot and the human worker together in a shared workspace. This workspace is monitored using multi-sensory perception for detecting persons as well as objects. The environmental data are processed within the collision avoidance module to provide safety for persons and equipment. The real-time capable software architecture and the orchestration of the involved modules by a knowledge-based system controller are presented. Finally, the functionality is demonstrated within an experimental cell in a real-world production scenario.


PLOS ONE | 2012

Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

Aleksandra Kupferberg; Markus Huber; Bartosz Helfer; Claus Lenz; Alois Knoll; Stefan Glasauer

Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and in the perception of another's movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one's own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of the appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of a human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents.


intelligent robots and systems | 2009

Constraint task-based control in industrial settings

Claus Lenz; Markus Rickert; Giorgio Panin; Alois Knoll

Direct physical human-robot interaction has become a central part of robotics research today. To exploit the potential of humans and robots working together as a team in industrial settings, the most important issues are safety for the human and an easy way to describe tasks for the robot. In this work, we present a hierarchically structured control approach for industrial robots in joint-action scenarios. Multiple atomic tasks, including dynamic collision avoidance, operational position, and posture, can be combined in an arbitrary order while respecting the constraints of higher-priority tasks. The control flow is based on the theory of orthogonal projection using nullspaces and constrained least-squares optimization. To validate the approach, we present three collaboration scenarios between a human and an industrial robot.
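The nullspace-projection idea behind this kind of prioritized task control can be sketched in a few lines. The following is a hypothetical toy example (a planar 2-DOF arm with a 1x2 task Jacobian; all names and numbers are invented for illustration), not the paper's controller: a secondary joint-velocity preference is projected into the nullspace of the primary task, so it can never disturb the primary task velocity.

```python
# Toy sketch of prioritized task control via nullspace projection:
# qdot = J1+ * v1 + (I - J1+ J1) * qdot2
# (hypothetical 2-DOF example, not the controller from the paper)

def pinv_row(j):
    """Moore-Penrose pseudoinverse of a 1x2 row vector (as a 2x1 column)."""
    n = j[0] ** 2 + j[1] ** 2
    return [j[0] / n, j[1] / n]

def nullspace_blend(j1, v1, qdot2):
    """Combine a primary task velocity v1 (via Jacobian j1) with a
    secondary joint-velocity preference qdot2 projected into the
    nullspace of the primary task."""
    j1p = pinv_row(j1)
    # joint velocities realizing the primary task exactly
    qp = [j1p[0] * v1, j1p[1] * v1]
    # nullspace projector N = I - J1+ J1
    n = [[1 - j1p[0] * j1[0], -j1p[0] * j1[1]],
         [-j1p[1] * j1[0], 1 - j1p[1] * j1[1]]]
    return [qp[0] + n[0][0] * qdot2[0] + n[0][1] * qdot2[1],
            qp[1] + n[1][0] * qdot2[0] + n[1][1] * qdot2[1]]

qdot = nullspace_blend([1.0, 0.5], 0.2, [0.3, -0.3])
# J1 * qdot == v1 holds regardless of the secondary preference
```

Because the projector annihilates any component of the secondary velocity that would affect the primary task, lower-priority tasks (such as a posture preference) only act in the remaining degrees of freedom.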


PLOS ONE | 2013

Spatiotemporal movement planning and rapid adaptation for manual interaction.

Markus Huber; Aleksandra Kupferberg; Claus Lenz; Alois Knoll; Thomas Brandt; Stefan Glasauer

Many everyday tasks require the ability of two or more individuals to coordinate their actions with others to increase efficiency. Such an increase in efficiency can often be observed even after only very few trials. Previous work suggests that such behavioral adaptation can be explained within a probabilistic framework that integrates sensory input and prior experience. Even though higher cognitive abilities such as intention recognition have been described as probabilistic estimation depending on an internal model of the other agent, it is not clear whether much simpler daily interaction is consistent with a probabilistic framework. Here, we investigate whether the mechanisms underlying efficient coordination during manual interactions can be understood as probabilistic optimization. For this purpose we studied in several experiments a simple manual handover task, concentrating on the action of the receiver. We found that the duration until the receiver reacts to the handover decreases over trials, but strongly depends on the position of the handover. We then replaced the human deliverer by different types of robots to further investigate the influence of the delivering movement on the reaction of the receiver. Durations were found to depend on movement kinematics and the robot’s joint configuration. Modeling the task was based on the assumption that the receiver’s decision to act is based on the accumulated evidence for a specific handover position. The evidence for this handover position is collected from observing the hand movement of the deliverer over time and, if appropriate, by integrating this sensory likelihood with a prior expectation that is updated over trials. The close match of model simulations and experimental results shows that the efficiency of handover coordination can be explained by an adaptive probabilistic fusion of a priori expectation and online estimation.
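The prior-likelihood integration described in the abstract can be illustrated under a strong simplifying assumption: if both the prior over the handover position and the observation likelihood are taken as Gaussian (the paper's actual model is an evidence-accumulation process, not this two-line fusion), the optimal estimate is a precision-weighted average.

```python
# Toy sketch of probabilistic fusion of a prior and an observation,
# assuming Gaussian forms (invented numbers; not the paper's model).

def fuse(mu_prior, var_prior, mu_obs, var_obs):
    """Precision-weighted fusion of a Gaussian prior and observation."""
    w = var_obs / (var_prior + var_obs)          # weight on the prior
    mu = w * mu_prior + (1 - w) * mu_obs         # fused mean
    var = var_prior * var_obs / (var_prior + var_obs)  # fused variance
    return mu, var

# a sharp prior (learned over trials) pulls the estimate toward the
# expected handover position; the fused variance is always smaller
mu, var = fuse(0.40, 0.01, 0.50, 0.04)   # positions in metres
```

As the prior sharpens over repeated trials (smaller `var_prior`), the fused estimate stabilizes earlier, which is consistent with the decreasing reaction durations reported above.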


intelligent robots and systems | 2011

Human workflow analysis using 3D occupancy grid hand tracking in a human-robot collaboration scenario

Claus Lenz; Alice Sotzek; Thorsten Röder; Helmuth Radrich; Alois Knoll; Markus Huber; Stefan Glasauer

In this work, we present a Hidden Markov Model (HMM) based workflow analysis of an assembly task jointly performed by a human and an assistive robotic system. In an experiment, subjects had to assemble a tower by combining six cubes with several bolts on their own, without the influence of a robot or any other technical device. To estimate the current action of the human, we trained composite HMMs. After successful evaluation on disjoint experimental data sets, the models were transferred to the assistive robotic system JAHIR, where the same assembly task was executed. A new 3D occupancy grid approach was used to determine the hand positions of the worker. These positions were then used to compute the inputs of the analysis HMMs. The workflow of the right hand could be recognized with an accuracy of 92.26%, which is nearly as good as the recognition rate of the reference experiments.
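The action-estimation step behind such a workflow analysis can be illustrated with a toy discrete HMM and a forward pass. The states, emission symbols, and probabilities below are invented for illustration; the paper uses trained composite HMMs over 3D hand positions, not this two-state model.

```python
# Toy sketch of HMM-based action estimation: a forward pass computing
# the posterior over hidden action states given an observation sequence
# (invented numbers and states; not the paper's trained models).

def forward(pi, A, B, obs):
    """Return P(state | observations) after the last observation.
    pi: initial state distribution, A: transition matrix,
    B: emission matrix, obs: sequence of discrete observation symbols."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[sp] * A[sp][s] for sp in range(n)) * B[s][o]
                 for s in range(n)]
    z = sum(alpha)
    return [a / z for a in alpha]

# states: 0 = "reach for cube", 1 = "insert bolt"
pi = [0.5, 0.5]
A = [[0.8, 0.2], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]      # emission prob. of a discretised hand region
post = forward(pi, A, B, [0, 0, 1])
# post[1] > post[0]: the last observation shifts belief toward "insert bolt"
```

The same forward recursion, run over discretised hand positions from the occupancy grid, is one standard way to decide which action model currently best explains the worker's movement.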


international conference on image processing | 2007

Robust Multi-Modal Group Action Recognition in Meetings from Disturbed Videos with the Asynchronous Hidden Markov Model

Marc Al-Hames; Claus Lenz; Stephan Reiter; Joachim Schenk; Frank Wallhoff; Gerhard Rigoll

The asynchronous hidden Markov model (AHMM) models the joint likelihood of two observation sequences, even if the streams are not synchronized. We explain this concept and how the model is trained by the EM algorithm. We then show how the AHMM can be applied to the analysis of group action events in meetings from both clear and disturbed data. The AHMM outperforms an early fusion HMM by 5.7% recognition rate (a relative error reduction of 38.5%) for clear data. For occluded data, the improvement is on average 6.5% recognition rate (a relative error reduction of 40%). Thus, asynchrony is a dominant factor in meeting analysis, even if the data is disturbed. The AHMM exploits this and is therefore much more robust against disturbances.


robot and human interactive communication | 2014

Mechanisms and capabilities for human robot collaboration

Claus Lenz; Alois Knoll

This paper deals with the concept of a collaborative human-robot workspace in production environments, recapitulating and complementing the work of the author presented in [1]. Different aspects of collaboration are discussed and applied in an exemplary scenario. Modalities including visualizations and audio are used to inform the human worker about the next assembly steps and the current status of the system. The robot supplies the human worker with needed parts in an adaptive manner to prevent errors and to increase ergonomic benefits. Further, the human worker can intuitively interact with and adjust the robot using projected menus on the worktable and by force-guidance of the robot. All these functions are brought together in an overall architecture.

Collaboration


Dive into Claus Lenz's collaborations.

Top Co-Authors


Cornelia Wendt

Bundeswehr University Munich
