Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bernhard Jung is active.

Publications


Featured research published by Bernhard Jung.


IEEE Robotics & Automation Magazine | 2012

Physical Human-Robot Interaction: Mutual Learning and Adaptation

Shuhei Ikemoto; Heni Ben Amor; Takashi Minato; Bernhard Jung; Hiroshi Ishiguro

Close physical interaction between robots and humans is a particularly challenging aspect of robot development. For successful interaction and cooperation, the robot must have the ability to adapt its behavior to the human counterpart. Based on our earlier work, we present and evaluate a computationally efficient machine learning algorithm that is well suited for such close-contact interaction scenarios. We show that this algorithm helps to improve the quality of the interaction between a robot and a human caregiver. To this end, we present two human-in-the-loop learning scenarios that are inspired by human parenting behavior, namely, an assisted standing-up task and an assisted walking task.
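
The abstract does not name the learning algorithm here; purely as an illustration of the human-in-the-loop idea, a minimal sketch might perturb behavior parameters and keep only changes that improve a caregiver-rated success score. All names, shapes, and the search scheme below are hypothetical, not the authors' method:

```python
import numpy as np

def adapt_behavior(evaluate, theta, n_trials=50, step=0.1, seed=None):
    """Hypothetical human-in-the-loop adaptation sketch (not the
    authors' algorithm): random local search over behavior
    parameters, keeping a candidate only if the caregiver-rated
    trial improves on the best score so far.

    evaluate(theta) -> float is assumed to run one assisted trial
    (e.g. an assisted standing-up attempt) and return a success
    score derived from the human partner's feedback."""
    rng = np.random.default_rng(seed)
    best_score = evaluate(theta)
    for _ in range(n_trials):
        candidate = theta + step * rng.standard_normal(theta.shape)
        score = evaluate(candidate)
        if score > best_score:          # keep improvements only
            theta, best_score = candidate, score
    return theta, best_score

# Toy usage with a synthetic objective standing in for human feedback:
# adapt_behavior(lambda t: -np.sum(t**2), np.ones(4))
```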


IEEE Virtual Reality Conference | 2007

Grasp Recognition with Uncalibrated Data Gloves - A Comparison of Classification Methods

Guido Heumer; Heni Ben Amor; Matthias Weber; Bernhard Jung

This paper presents a comparison of various classification methods for the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often not perfect. In contrast, the present study aims at evaluating recognition methods that do not require prior calibration of the data glove, by using raw sensor readings as input features and mapping them directly to different categories of hand shapes. An experiment was carried out in which test persons wearing a data glove had to grasp physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was analyzed with 28 classifiers including different types of neural networks, decision trees, Bayes nets, and lazy learners. Each classifier was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) We show that a reasonably good to highly reliable recognition of grasp types can be achieved - depending on whether or not the glove user is among those training the classifier - even with uncalibrated data gloves. (2) We identify the best performing classification methods for recognition of various grasp types. To conclude, cumbersome calibration processes before productive usage of data gloves can be spared in many situations.
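
For a flavor of this evaluation setup, here is a minimal scikit-learn analogue: raw, uncalibrated sensor vectors as features, grasp types as labels, and cross-validated scores for a decision tree and a lazy (k-nearest-neighbor) learner. The data shapes and sensor count are hypothetical stand-ins; the study itself evaluated 28 classifiers:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier   # a "lazy learner"
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in data: 500 grasps x 22 raw glove sensor values
# (no calibration applied), each labeled with one of 6 grasp types.
rng = np.random.default_rng(0)
X = rng.random((500, 22))
y = rng.integers(0, 6, size=500)

for clf in (DecisionTreeClassifier(), KNeighborsClassifier(n_neighbors=5)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```

The paper's six settings correspond to different train/test splits, notably whether the glove wearer appears in the training data; in this framework that would be expressed through grouped rather than plain cross-validation.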


Artificial Intelligence Review | 1996

Dynamic conceptualization in a mechanical-object assembly environment

Ipke Wachsmuth; Bernhard Jung

In an experimental setting of mechanical-object assembly, the CODY ("Concept Dynamics") project is concerned with the development of knowledge representations and inference methods that are able to dynamically conceptualize the situation in the task environment. A central aim is to enable an artificial agent to understand and process natural-language instructions of a human partner. Instructions may build on the current perception of the assembly environment on the one hand, and on the other on the knowledge-based understanding of grouped structures in the developing construct. To this end, a dynamic conceptualization must integrate information describing not only the types of the objects involved, but also their changing functional roles when becoming part of structured assemblies. We have developed an operational knowledge representation formalism, COAR ("Concepts for Objects, Assemblies, and Roles"), by which processes of dynamic conceptualization in sequences of assembly steps can be formally reconstructed. Inferences concern the assertion or retraction of aggregate representations in a dynamic knowledge base, as well as the computation of role changes for the individual objects involved. The structural representations integrate situated spatial features and relations, such as position, size, distance, or orthogonality, which are inferred on demand from a geometry description of the task environment. The capacity of our approach has been evaluated in a 3D computer graphics simulation environment.

Footnote 1: A running demonstration of the virtual assembly workbench can be seen in our contribution to the IJCAI-95 Videotape Program, cf. (Cao et al., 1995).

Footnote 2: Some readers may still want to refer to this "false" propeller as a propeller. This is not the point in our current work. Our point is to provide means for recognizing a propeller that is correctly built according to its definition. Should the right-hand-side assemblage in Fig. 4 be recognizable as a propeller, we could allow for a corresponding "slack" in the routine that evaluates orthogonality. Yet another point is how reference could be established to the right-hand-side assemblage by a description like "the propeller on the right". This topic concerns future work.
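
COAR is a full knowledge representation formalism; purely as a toy illustration of its core idea, that an object's functional role changes when it becomes part of an aggregate, a sketch might look as follows. All class names and the single inference rule are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Obj:
    name: str
    obj_type: str                  # intrinsic type, e.g. "bar"
    role: Optional[str] = None     # situated role, e.g. "rotor-blade"

@dataclass
class Assembly:
    concept: str                   # aggregate concept, e.g. "propeller"
    parts: List[Obj] = field(default_factory=list)

def connect(kb, a, b):
    """Assert a new aggregate in the knowledge base and recompute
    roles; the single rule below is a stand-in for COAR's inferences."""
    agg = Assembly("propeller", [a, b])
    kb.append(agg)
    if a.obj_type == "bar" and b.obj_type == "hub":
        a.role = "rotor-blade"     # the bar's role changes in context
    return agg

kb = []
blade, hub = Obj("bar-1", "bar"), Obj("hub-1", "hub")
connect(kb, blade, hub)
print(blade.role)                  # -> rotor-blade
```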


Intelligent Robots and Systems | 2013

Learning responsive robot behavior by imitation

Heni Ben Amor; David Vogt; Marco Ewerton; Erik Berger; Bernhard Jung; Jan Peters

In this paper we present a new approach for learning responsive robot behavior by imitation of human interaction partners. Extending previous work on robot imitation learning, which has so far mostly concentrated on learning from demonstrations by a single actor, we simultaneously record the movements of two humans engaged in ongoing interaction tasks and learn compact models of the interaction. The extracted interaction models can thereafter be used by a robot to engage in a similar interaction with a human partner. We present two algorithms for deriving interaction models from motion capture data, as well as experimental results on a humanoid robot.
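
The two algorithms are not spelled out in the abstract. One classical way to realize such an interaction model, sketched here under that assumption rather than as the authors' method, is to learn a low-dimensional joint model over both partners' poses and condition it on the observed human at runtime. Dimensions and data are placeholders:

```python
import numpy as np

# Hypothetical training data: T frames of two humans interacting,
# each frame = human pose (d_h dims) concatenated with partner pose (d_r dims).
T, d_h, d_r = 1000, 30, 30
data = np.random.randn(T, d_h + d_r)          # stand-in for motion capture

mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
W = Vt[:5].T                                   # 5 principal components

def respond(human_pose):
    """Infer the responding partner's pose from the observed human pose
    by conditioning the joint low-dimensional model on the human block."""
    W_h, W_r = W[:d_h], W[d_h:]
    z, *_ = np.linalg.lstsq(W_h, human_pose - mean[:d_h], rcond=None)
    return mean[d_h:] + W_r @ z

print(respond(np.random.randn(d_h)).shape)     # -> (30,), the robot's response
```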


Advanced Robotics | 2015

Estimation of perturbations in robotic behavior using dynamic mode decomposition

Erik Berger; Mark Sastuba; David Vogt; Bernhard Jung; Heni Ben Amor

Physical human–robot interaction tasks require robots that can detect and react to external perturbations caused by the human partner. In this contribution, we present a machine learning approach for detecting, estimating, and compensating for such external perturbations using only input from standard sensors. This machine learning approach makes use of Dynamic Mode Decomposition (DMD), a data processing technique developed in the field of fluid dynamics, which is applied to robotics for the first time. DMD is able to isolate the dynamics of a nonlinear system and is therefore well suited for separating noise from regular oscillations in sensor readings during cyclic robot movements. In a training phase, a DMD model for behavior-specific parameter configurations is learned. During task execution, the robot must estimate the external forces exerted by a human interaction partner. We compare the DMD-based approach to other interpolation schemes. A variant, sparsity-promoting DMD, is particularly well suited for high-noise sensors. Results of a user study show that our DMD-based machine learning approach can be used to design physical human–robot interaction techniques that not only result in robust robot behavior but also offer high usability.
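
For reference, the generic exact-DMD computation (not necessarily the paper's variant) can be written in a few lines of NumPy; the rank r and the synthetic sensor data below are placeholders:

```python
import numpy as np

def dmd(X, r):
    """Exact Dynamic Mode Decomposition (generic textbook form).
    X: (n_sensors, n_timesteps) snapshot matrix, e.g. sensor readings
    from cyclic robot movements. Returns eigenvalues and modes of the
    best-fit linear operator A with X[:, 1:] ~= A @ X[:, :-1]."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vt = np.linalg.svd(X1, full_matrices=False)
    U, S, Vt = U[:, :r], S[:r], Vt[:r]            # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vt.conj().T / S   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vt.conj().T / S @ W              # exact DMD modes
    return eigvals, modes

# Placeholder data: 8 "sensors", 200 time steps of a noisy oscillation.
t = np.linspace(0, 20, 200)
X = np.sin(t) * np.random.rand(8, 1) + 0.05 * np.random.randn(8, 200)
eigvals, modes = dmd(X, r=4)
print(np.abs(eigvals))   # eigenvalues near |lambda| = 1 are persistent oscillations
```

Deviations between sensor readings predicted by such a model and the readings actually observed are one natural signal for estimating external perturbations.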


Conference of the Industrial Electronics Society | 1998

Utilize speech and gestures to realize natural interaction in a virtual environment

Marc Erich Latoschik; Martin Fröhlich; Bernhard Jung; Ipke Wachsmuth

Virtual environments are a new means for human-computer interaction. Whereas techniques for visual presentation have reached a high level of maturity, many of the input devices and interaction techniques still tend to be awkward for this new medium. Where the borders between real and artificial environments vanish, a more natural way of interaction is desirable. To this end, we investigate the benefits of integrated speech- and gesture-based interfaces for interacting with virtual environments. Our research results are applied within a virtual construction scenario, where 3D visualized mechanical objects can be spatially rearranged and assembled using speech- and gesture-based communication.
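
As a toy illustration of speech-gesture integration, not the system described in the paper, a deictic word such as "that" can be bound to the pointing gesture closest in time. All names and the timing model are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PointingEvent:
    time: float
    target: str        # scene object hit by the pointing ray

def resolve(utterance, gestures, speech_time):
    """Toy multimodal fusion: bind each deictic word to the
    pointing gesture closest to it in time."""
    command = []
    for word in utterance.split():
        if word in ("this", "that", "there"):
            nearest = min(gestures, key=lambda g: abs(g.time - speech_time))
            command.append(nearest.target)
        else:
            command.append(word)
        speech_time += 0.4          # crude per-word time advance
    return " ".join(command)

gestures = [PointingEvent(1.2, "screw-3"), PointingEvent(2.0, "bar-7")]
print(resolve("attach that to there", gestures, speech_time=1.0))
# -> attach screw-3 to bar-7
```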


Intelligent Virtual Agents | 2003

FlurMax: An interactive virtual agent for entertaining visitors in a hallway

Bernhard Jung; Stefan Kopp

FlurMax, a virtual agent, inhabits a hallway at the University of Bielefeld. He resides in a wide-screen panel equipped with a video camera to track and interact with visitors using speech, gesture, and emotional facial expressions. For example, FlurMax will detect the presence of visitors and greet them with a friendly wave, saying "Hello, I am Max". FlurMax also recognizes simple gestures of passers-by, such as waving, and produces natural multimodal behaviors in response. FlurMax's behavior selection is controlled by a simple emotional/motivational system which gradually changes his mood between states like happy, bored, surprised, and neutral.
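
A minimal sketch of such an emotional/motivational system, assuming a single arousal value that decays toward boredom and is raised by external events; thresholds, event names, and the one-dimensional model are hypothetical, not FlurMax's actual implementation:

```python
class MoodSystem:
    """Toy emotional/motivational system: a scalar arousal value
    decays over time, events raise it, and the discrete mood is
    read off from the current level."""

    def __init__(self):
        self.arousal = 0.5                     # start out neutral-ish

    def tick(self, dt=1.0, decay=0.05):
        """Called every time step; mood drifts toward 'bored'."""
        self.arousal = max(0.0, self.arousal - decay * dt)

    def on_event(self, kind):
        """External stimuli (visitor detected, wave seen) raise arousal."""
        boost = {"visitor": 0.2, "wave": 0.4}.get(kind, 0.1)
        self.arousal = min(1.0, self.arousal + boost)

    @property
    def mood(self):
        if self.arousal > 0.8:
            return "surprised"
        if self.arousal > 0.5:
            return "happy"
        if self.arousal > 0.2:
            return "neutral"
        return "bored"

m = MoodSystem()
m.on_event("wave")
print(m.mood)       # arousal 0.9 -> "surprised"
```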


Robot and Human Interactive Communication | 2009

Physical interaction learning: Behavior adaptation in cooperative human-robot tasks involving physical contact

Shuhei Ikemoto; Heni Ben Amor; Takashi Minato; Hiroshi Ishiguro; Bernhard Jung

In order for humans and robots to engage in direct physical interaction, several requirements have to be met. Among others, robots need to be able to adapt their behavior in order to facilitate interaction with a human partner. This can be achieved using machine learning techniques. However, most machine learning scenarios to date do not address the question of how learning can be achieved for tightly coupled, physical touch interactions between the learning agent and a human partner. This paper presents an example of such a human-in-the-loop learning scenario and proposes a computationally cheap learning algorithm for this purpose. The efficiency of this method is evaluated in an experiment where human caregivers help an android robot to stand up.


International Conference on Artificial Reality and Telexistence | 2006

An animation system for imitation of object grasping in virtual reality

Matthias Weber; Guido Heumer; Heni Ben Amor; Bernhard Jung

Interactive virtual characters are nowadays commonplace in games, animations, and Virtual Reality (VR) applications. However, relatively little work has so far considered the animation of interactive object manipulations performed by virtual humans. In this paper, we first present a hierarchical control architecture incorporating plans, behaviors, and motor programs that enables virtual humans to accurately manipulate scene objects using different grasp types. Furthermore, as a second main contribution, we introduce a method by which virtual humans learn to imitate object manipulations performed by human VR users. To this end, movements of the VR user are analyzed and processed into abstract actions. A new data structure called grasp events is used for storing information about user interactions with scene objects. High-level plans are instantiated based on grasp events to drive the virtual humans' animation. Due to their high-level representation, recorded manipulations often naturally adapt to new situations without losing plausibility.
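
The grasp-event structure is not specified in the abstract; a plausible minimal encoding, with entirely hypothetical fields, might be:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GraspEvent:
    """Hypothetical minimal encoding of a 'grasp event': what the
    VR user did to which scene object, and when."""
    timestamp: float
    object_id: str           # scene object being manipulated
    grasp_type: str          # e.g. a Schlesinger grasp class
    hand_pose: Tuple[float, ...]  # glove sensor readings at grasp time
    phase: str               # "reach", "grasp", "move", or "release"

event = GraspEvent(3.7, "wheel-2", "cylindrical", (0.1, 0.8, 0.5), "grasp")
```

A stream of such events can then be matched against high-level plan templates to re-instantiate the recorded manipulation in a new scene.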


Presence: Teleoperators & Virtual Environments | 2008

Grasp recognition for uncalibrated data gloves: A machine learning approach

Guido Heumer; Heni Ben Amor; Bernhard Jung

This paper presents a comparison of various machine learning methods applied to the problem of recognizing grasp types involved in object manipulations performed with a data glove. Conventional wisdom holds that data gloves need calibration in order to obtain accurate results. However, calibration is a time-consuming process, inherently user-specific, and its results are often not perfect. In contrast, the present study aims at evaluating recognition methods that do not require prior calibration of the data glove. Instead, raw sensor readings are used as input features that are directly mapped to different categories of hand shapes. An experiment was carried out in which test persons wearing a data glove had to grasp physical objects of different shapes corresponding to the various grasp types of the Schlesinger taxonomy. The collected data was comprehensively analyzed using numerous classification techniques provided in an open-source machine learning toolbox. Evaluated machine learning methods are composed of (a) 38 classifiers including different types of function learners, decision trees, rule-based learners, Bayes nets, and lazy learners; (b) data preprocessing using principal component analysis (PCA) with varying degrees of dimensionality reduction; and (c) five meta-learning algorithms under various configurations where selection of suitable base classifier combinations was informed by the results of the foregoing classifier evaluation. Classification performance was analyzed in six different settings, representing various application scenarios with differing generalization demands. The results of this work are twofold: (1) We show that a reasonably good to highly reliable recognition of grasp types can be achieved, depending on whether or not the glove user is among those training the classifier, even with uncalibrated data gloves. (2) We identify the best performing classification methods for the recognition of various grasp types. To conclude, cumbersome calibration processes before productive usage of data gloves can be spared in many situations.
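
A compact scikit-learn analogue of the PCA preprocessing and meta-learning stages, with a few illustrative base classifiers standing in for the study's much larger set drawn from an open-source toolbox:

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# PCA dimensionality reduction followed by a simple voting meta-learner
# over base classifiers of the kinds named in the abstract (illustrative
# choices and component count, not the study's configurations).
model = make_pipeline(
    PCA(n_components=10),
    VotingClassifier([
        ("tree", DecisionTreeClassifier()),
        ("bayes", GaussianNB()),
        ("knn", KNeighborsClassifier()),
    ]),
)
# model.fit(X_train, y_train); model.score(X_test, y_test)
```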

Collaboration


Dive into Bernhard Jung's collaborations.

Top Co-Authors

Heni Ben Amor, Arizona State University
David Vogt, Freiberg University of Mining and Technology
Erik Berger, Freiberg University of Mining and Technology
Guido Heumer, Freiberg University of Mining and Technology
Matthias Weber, Freiberg University of Mining and Technology
Matthias Lenk, Freiberg University of Mining and Technology
Steve Grehl, Freiberg University of Mining and Technology
Henry Lehmann, Freiberg University of Mining and Technology