Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hartwig Holzapfel is active.

Publication


Featured research published by Hartwig Holzapfel.


Intelligent Robots and Systems | 2004

Natural human-robot interaction using speech, head pose and gestures

Rainer Stiefelhagen; Christian Fügen; R. Gieselmann; Hartwig Holzapfel; Kai Nickel; Alex Waibel

In this paper we present our ongoing work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing and visual perception of a user, which includes the recognition of pointing gestures as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. In order to demonstrate and measure the usefulness of such technologies for human-robot interaction, all components have been integrated on a mobile robot platform and have been used for real-time human-robot interaction in a kitchen scenario.
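One of the perception components mentioned above is recognition of 3D pointing gestures. A common way to ground such a gesture against known object positions is to pick the object closest to the pointing ray. The sketch below is only an illustration of that idea under assumed inputs (hand position, pointing direction, and a hypothetical kitchen object map); it is not the authors' implementation.

```python
import numpy as np

def closest_pointed_object(origin, direction, objects, max_dist=0.5):
    """Return the known object nearest to the pointing ray, or None.

    origin, direction: 3D hand position and pointing direction.
    objects: mapping of object name -> 3D position (e.g. items on a kitchen table).
    max_dist: reject objects farther than this from the ray (metres).
    """
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best, best_d = None, max_dist
    for name, pos in objects.items():
        v = np.asarray(pos, dtype=float) - np.asarray(origin, dtype=float)
        t = float(np.dot(v, direction))
        if t < 0:  # object lies behind the pointing hand
            continue
        d = float(np.linalg.norm(v - t * direction))  # perpendicular distance to the ray
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical kitchen-scenario usage:
objects = {"cup": (1.2, 0.3, 0.9), "plate": (1.0, -0.2, 0.9)}
print(closest_pointed_object((0.0, 0.0, 1.0), (1.0, 0.25, -0.05), objects))  # -> "cup"
```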


IEEE Transactions on Robotics | 2007

Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot

Rainer Stiefelhagen; Hazim Kemal Ekenel; Christian Fügen; Petra Gieselmann; Hartwig Holzapfel; Florian Kraft; Kai Nickel; Michael Voit; Alex Waibel

In this paper, we present our work in building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as the recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as interactive learning of dialogue strategies. The work and the components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.
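One of the experiments mentioned above is automatic addressee determination during human-human-robot interaction. A typical cue for this is whether the speaker's head is oriented toward the robot while speech is detected. The following sketch only illustrates that cue; the angle convention, tolerance, and function name are assumptions made here, not taken from the paper.

```python
def is_addressing_robot(head_yaw_deg, robot_bearing_deg, speaking, tolerance_deg=20.0):
    """Heuristic addressee check: an utterance is taken as directed to the robot if
    the person is speaking and their head yaw points toward the robot's bearing
    within a tolerance. Angles are in degrees in the same reference frame."""
    if not speaking:
        return False
    diff = (head_yaw_deg - robot_bearing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(diff) <= tolerance_deg

print(is_addressing_robot(12.0, 0.0, speaking=True))   # True: looking roughly at the robot
print(is_addressing_robot(75.0, 0.0, speaking=True))   # False: looking at another person
```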


International Conference on Multimodal Interfaces | 2004

Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures

Hartwig Holzapfel; Kai Nickel; Rainer Stiefelhagen

This paper presents an architecture for fusion of multimodal input streams for natural interaction with a humanoid robot, as well as results from a user study with our system. The presented fusion architecture consists of an application-independent parser of input events and application-specific rules. In the presented user study, people could interact with a robot in a kitchen scenario, using speech and gesture input. In the study, we could observe that our fusion approach is very tolerant of falsely detected pointing gestures, because we use speech as the main modality and pointing gestures mainly for disambiguation of objects. In the paper we also report on the temporal correlation of speech and gesture events as observed in the user study.
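The key behaviour described in the abstract, speech as the primary modality with pointing gestures used only to disambiguate object references inside a temporal window, can be illustrated with a small sketch. The event types, window size, and field names below are assumptions for illustration, not the paper's rule formalism.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpeechEvent:
    time: float
    referent: Optional[str]   # resolved object name, or None if ambiguous ("that cup")
    candidates: List[str]     # objects the utterance could refer to

@dataclass
class GestureEvent:
    time: float
    pointed_object: str       # object hypothesis from the pointing-gesture recognizer

def fuse(speech: SpeechEvent, gestures: List[GestureEvent], window: float = 2.0) -> Optional[str]:
    """Speech-driven fusion: speech is the main modality; a pointing gesture is consulted
    only when the spoken reference is ambiguous, and only if it occurred within a temporal
    window around the utterance. Gestures that match no spoken candidate are ignored, which
    makes the scheme tolerant of falsely detected pointing gestures."""
    if speech.referent is not None:
        return speech.referent
    for g in gestures:
        if abs(g.time - speech.time) <= window and g.pointed_object in speech.candidates:
            return g.pointed_object
    return None  # still ambiguous: a dialogue manager would ask a clarification question

print(fuse(SpeechEvent(5.0, None, ["cup", "plate"]), [GestureEvent(5.4, "cup")]))  # -> "cup"
```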


Robotics and Autonomous Systems | 2008

A dialogue approach to learning object descriptions and semantic categories

Hartwig Holzapfel; Daniel Neubig; Alex Waibel

Acquiring new knowledge through interactive learning mechanisms is a key ability for humanoid robots in a natural environment. Such learning mechanisms need to be performed autonomously, through interaction with the environment or with other agents/humans. In this paper, we describe a dialogue approach and a dynamic object model for learning semantic categories, object descriptions, and new words, and for integrating with visual perception to ground learned objects in the real world. The presented system has been implemented and evaluated on the humanoid robot Armar III.
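The notion of a dynamic object model that accumulates names, categories, and descriptions from dialogue, and links them to visual percepts for grounding, can be sketched as a small data structure. The class and field names below are illustrative assumptions, not the model described in the paper.

```python
class ObjectModel:
    """Minimal dynamic object model: each entry keeps a learned name (new word),
    semantic categories, descriptive phrases, and links to visual percepts for grounding."""

    def __init__(self):
        self.objects = {}

    def learn_from_dialogue(self, name, category=None, description=None, percept_id=None):
        entry = self.objects.setdefault(
            name, {"categories": set(), "descriptions": [], "percepts": []}
        )
        if category:
            entry["categories"].add(category)
        if description:
            entry["descriptions"].append(description)
        if percept_id is not None:
            entry["percepts"].append(percept_id)  # link to a visual detection for grounding
        return entry

model = ObjectModel()
model.learn_from_dialogue("mug", category="container", description="the green one", percept_id=42)
print(model.objects["mug"])
```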


IEEE-RAS International Conference on Humanoid Robots | 2005

A cognitive architecture for a humanoid robot: a first approach

Catherina Burghart; Ralf Mikut; Rainer Stiefelhagen; Tamim Asfour; Hartwig Holzapfel; Peter Steinhaus; Ruediger Dillmann

In the future, intelligent humanoid robotic systems are expected to take part in humans' everyday lives. Researchers therefore strive to supply robots with adequate artificial intelligence in order to achieve natural and intuitive interaction between human beings and robotic systems. Within the German Humanoid Project we focus on learning and cooperating multimodal robotic systems. In this paper we present a first cognitive architecture for our humanoid robot: the architecture is a mixture of a hierarchical three-layered form on the one hand and a composition of behaviour-specific modules on the other hand. Perception, learning, planning of actions, motor control, and human-like communication play an important role in the robotic system and are embedded step by step in our architecture.
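The combination of a three-layered hierarchy with behaviour-specific modules can be pictured with a very small sketch: a deliberative layer plans, a coordination layer dispatches to behaviour modules, and a reactive layer executes. Layer names, the canned plan, and the behaviours are assumptions for illustration only.

```python
class Deliberative:
    """Top layer: task planning. The plan here is canned purely for illustration."""
    def plan(self, goal):
        return ["approach_user", "greet", "take_order"]

class Coordination:
    """Middle layer: selects and sequences behaviour-specific modules."""
    def __init__(self, behaviours):
        self.behaviours = behaviours
    def execute(self, step):
        self.behaviours[step]()

class Reactive:
    """Bottom layer: fast perception-action and motor-level routines (stubbed)."""
    def approach_user(self): print("navigating toward the user")
    def greet(self):         print("waving and saying hello")
    def take_order(self):    print("starting a dialogue")

reactive = Reactive()
coord = Coordination({"approach_user": reactive.approach_user,
                      "greet": reactive.greet,
                      "take_order": reactive.take_order})
for step in Deliberative().plan("serve_user"):
    coord.execute(step)
```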


International Conference on Multimodal Interfaces | 2002

Integrating emotional cues into a framework for dialogue management

Hartwig Holzapfel; Christian Fuegen; Matthias Denecke; Alex Waibel

Emotions are very important in human-human communication but are usually ignored in human-computer interaction. Recent work focuses on recognition and generation of emotions as well as emotion-driven behavior. Our work focuses on the use of emotions in dialogue systems that can be used with speech input as well as in multi-modal environments. We describe a framework for using emotional cues in a dialogue system and their informational characterization. We describe emotion models that can be integrated into the dialogue system and can be used in different domains and tasks. We plan to apply the dialogue system to model multi-modal human-computer interaction with a humanoid robotic system.
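One way to make such emotional cues actionable is to condition the dialogue strategy on the recognized emotion. The sketch below is a minimal illustration of that idea; the (label, valence) representation, threshold, and strategy names are assumptions made here, not the framework from the paper.

```python
def choose_strategy(intent, emotion, valence_threshold=-0.3):
    """Pick a dialogue strategy conditioned on an emotional cue.
    `emotion` is assumed to be a (label, valence) pair from an emotion recognizer,
    e.g. ("anger", -0.7); the names and threshold are illustrative only."""
    label, valence = emotion
    if valence < valence_threshold:
        # user sounds frustrated or angry: apologize, simplify, offer a clarification
        return {"act": "apologize_and_clarify", "intent": intent, "verbosity": "low"}
    return {"act": "answer", "intent": intent, "verbosity": "normal"}

print(choose_strategy("request_timetable", ("anger", -0.7)))
print(choose_strategy("request_timetable", ("neutral", 0.0)))
```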


International Conference on Machine Learning | 2005

The “FAME” interactive space

Florian Metze; Petra Gieselmann; Hartwig Holzapfel; Tobias Kluge; Ivica Rogina; Alex Waibel; Matthias Wölfel; James L. Crowley; Patrick Reignier; Dominique Vaufreydaz; François Bérard; Bérangère Cohen; Joëlle Coutaz; Sylvie Rouillard; Victoria Arranz; Manuel Bertran; Horacio Rodríguez

This paper describes the FAME multi-modal demonstrator, which integrates multiple communication modes - vision, speech and object manipulation - by combining the physical and virtual worlds to provide support for multi-cultural or multi-lingual communication and problem solving. The major challenges are automatic perception of human actions and understanding of dialogs between people from different cultural or linguistic backgrounds. The system acts as an information butler, which demonstrates context awareness using computer vision, speech and dialog modeling. The integrated computer-enhanced human-to-human communication has been publicly demonstrated at the FORUM2004 in Barcelona and at IST2004 in The Hague. Specifically, the Interactive Space described features an Augmented Table for multi-cultural interaction, which allows several users at the same time to perform multi-modal, cross-lingual document retrieval of audio-visual documents previously recorded by an Intelligent Cameraman during a week-long seminar.


KI'06: Proceedings of the 29th Annual German Conference on Artificial Intelligence | 2006

A robot learns to know people: first contacts of a robot

Hartwig Holzapfel; Thomas Schaaf; Hazim Kemal Ekenel; Christoph Schaa; Alex Waibel

Acquiring knowledge about persons is a key functionality for humanoid robots. In a natural environment, the robot not only interacts with different people whom it recognizes and knows; it will also have to interact with unknown persons, and by acquiring information about them, the robot can memorize these persons and provide extended personalized services. Today, researchers build systems to recognize a person's face, voice and other features. Most of these systems depend on pre-collected data. We think that, with the given technology, it is about time to build a system that collects data autonomously and thus gets to know and learns to recognize persons completely on its own. This paper describes the integration of different perceptual and dialogue components and their individual functionality to build a robot that can contact persons, learn their names, and learn to recognize them in future encounters.
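The first-contact flow described above, try to identify the person, otherwise ask for the name and enrol the observed features, can be sketched in a few lines. The recognizer interface (identify/enroll), the exact-match stand-in, and the confidence threshold are all assumptions for illustration, not the system built in the paper.

```python
class SimpleRecognizer:
    """Stand-in recognizer with an identify()/enroll() interface (an assumed API)."""
    def __init__(self):
        self.db = {}
    def enroll(self, name, face, voice):
        self.db[name] = (face, voice)
    def identify(self, face, voice):
        for name, (f, v) in self.db.items():
            if f == face and v == voice:      # toy exact match instead of real biometrics
                return name, 1.0
        return None, 0.0

class PersonMemory:
    """Toy first-contact flow: unknown person -> ask the name and enrol the observed
    face/voice features; known person -> greet by name."""
    def __init__(self, recognizer):
        self.recognizer = recognizer
    def on_contact(self, face, voice, ask_name):
        name, conf = self.recognizer.identify(face, voice)
        if name is not None and conf > 0.8:
            return f"Hello again, {name}!"
        name = ask_name()                     # dialogue turn: "What is your name?"
        self.recognizer.enroll(name, face, voice)
        return f"Nice to meet you, {name}."

memory = PersonMemory(SimpleRecognizer())
print(memory.on_contact("face_1", "voice_1", ask_name=lambda: "Hartwig"))  # first contact
print(memory.on_contact("face_1", "voice_1", ask_name=lambda: "Hartwig"))  # recognized now
```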


ACM Multimedia | 2008

Confidence based multimodal fusion for person identification

Philipp W.L. Große; Hartwig Holzapfel; Alex Waibel

Person identification is of great interest for various kinds of applications and interactive systems. In our system we use face recognition and voice recognition on data recorded in an interactive dialogue system. In such a system, sequential images and sequential utterances can be used to improve recognition accuracy over single hypotheses. The presented approach uses confidence-based fusion to combine sequence hypotheses, to fuse the modalities, and to provide a reliability measure of the classification quality that can be used to decide when to trust and when to ignore classification results.
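The core idea, accumulating face and voice hypotheses over a sequence with confidence weights and rejecting the result when the fused reliability is too low, can be illustrated with a short sketch. The triple format, weighting scheme, and threshold below are assumptions, not the exact method evaluated in the paper.

```python
from collections import defaultdict

def fuse_identity(face_hyps, voice_hyps, reject_threshold=0.6):
    """Confidence-weighted fusion of sequential face and voice hypotheses.
    Each hypothesis is a (person_id, score, confidence) triple, one per image or
    utterance. Scores are accumulated weighted by confidence; if the best fused,
    confidence-normalized score falls below `reject_threshold`, the classification
    is rejected rather than trusted."""
    fused = defaultdict(float)
    total = 0.0
    for person, score, conf in list(face_hyps) + list(voice_hyps):
        fused[person] += conf * score
        total += conf
    if not fused or total == 0.0:
        return None
    best = max(fused, key=fused.get)
    reliability = fused[best] / total
    return best if reliability >= reject_threshold else None

face = [("anna", 0.9, 0.8), ("bob", 0.4, 0.3)]   # (id, score, confidence) per image
voice = [("anna", 0.7, 0.6)]                     # per utterance
print(fuse_identity(face, voice))                # -> "anna"
```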


IEEE-RAS International Conference on Humanoid Robots | 2004

A way out of dead end situations in dialogue systems for human-robot interaction

Hartwig Holzapfel; Petra Gieselmann

In this paper, we present a strategy for resolving difficult situations in human-robot dialogues where the user input is inconsistent with the current discourse. Reasons for the inconsistency are analyzed in detail and a set of rules is implemented to take all of them into account. In a user test, we evaluated the success of the strategy, which can reduce the communication problems resulting from misrecognized user utterances in human-robot communication.
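A rule set of this kind can be pictured as a small function that checks the hypothesized user input against the open discourse expectations and selects a recovery move. The rule names, state fields, and dialogue acts below are assumptions chosen for illustration, not the rules evaluated in the paper.

```python
def recover_from_dead_end(parsed_input, discourse):
    """Tiny illustrative rule set for dialogue dead ends: check the recognized input
    against the current discourse state and choose a recovery move when inconsistent."""
    if parsed_input is None or not parsed_input.get("slots"):
        return {"act": "ask_repeat"}                       # nothing usable was recognized
    expected = discourse.get("expected_slots", set())
    given = set(parsed_input["slots"])
    if (given - expected) and not (given & expected):
        # the input answers a different question than the one currently open
        return {"act": "clarify_topic", "mention": sorted(given - expected)}
    if parsed_input.get("contradicts", False):
        return {"act": "confirm_change"}                   # user seems to revise an earlier value
    return {"act": "continue"}

print(recover_from_dead_end({"slots": {"colour"}}, {"expected_slots": {"object"}}))
```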

Collaboration


Dive into Hartwig Holzapfel's collaborations.

Top Co-Authors

Alex Waibel, Karlsruhe Institute of Technology
Petra Gieselmann, Karlsruhe Institute of Technology
Rainer Stiefelhagen, Karlsruhe Institute of Technology
Catherina Burghart, Karlsruhe Institute of Technology
Kai Nickel, Karlsruhe Institute of Technology
Ralf Mikut, Karlsruhe Institute of Technology
Christian Fügen, Karlsruhe Institute of Technology
Hazim Kemal Ekenel, Istanbul Technical University
Christian Fuegen, Karlsruhe Institute of Technology
Daniel Neubig, Karlsruhe Institute of Technology