
Publication


Featured research published by Ipke Wachsmuth.


Intelligent Virtual Agents | 2005

A conversational agent as museum guide: design and evaluation of a real-world application

Stefan Kopp; Lars Gesellensetter; Nicole C. Krämer; Ipke Wachsmuth

This paper describes an application of the conversational agent Max in a real-world setting. The agent is employed as a guide in a public computer museum, where he engages with visitors in natural face-to-face communication, provides them with information about the museum or the exhibition, and conducts natural small talk conversations. The design of the system is described with a focus on how the conversational behavior is achieved. Logfiles from interactions between Max and museum visitors were analyzed for the kinds of dialogue people are willing to have with Max. Results indicate that Max engages people in interactions where they are likely to use human-like communication strategies, suggesting the attribution of sociality to the agent.
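
To make the logfile analysis concrete, here is a minimal, hypothetical Python sketch of tallying dialogue categories in logged visitor utterances. The log format, category names, and cue phrases are illustrative assumptions, not the authors' actual coding scheme.

```python
from collections import Counter

# Illustrative cue phrases per dialogue category (assumptions, not the
# authors' coding scheme).
CATEGORIES = {
    "greeting": ("hello", "hi ", "good morning"),
    "smalltalk": ("how are you", "what is your name", "do you like"),
    "exhibit_query": ("exhibit", "museum", "computer"),
}

def categorize(utterance: str) -> str:
    """Assign a logged visitor utterance to the first matching category."""
    text = utterance.lower()
    for category, cues in CATEGORIES.items():
        if any(cue in text for cue in cues):
            return category
    return "other"

def tally(log_lines):
    """Count how often each dialogue category occurs in a logfile."""
    return Counter(categorize(line) for line in log_lines)

print(tally(["Hello Max!", "Do you like visitors?", "Where is the exhibit?"]))
```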


Computer Animation and Virtual Worlds | 2004

Synthesizing multimodal utterances for conversational agents

Stefan Kopp; Ipke Wachsmuth

Conversational agents are supposed to combine speech with non-verbal modalities for intelligible multimodal utterances. In this paper, we focus on the generation of gesture and speech from XML-based descriptions of their overt form. An incremental production model is presented that combines the synthesis of synchronized gestural, verbal, and facial behaviors with mechanisms for linking them in fluent utterances with natural co-articulation and transition effects. In particular, an efficient kinematic approach for animating hand gestures from shape specifications is presented, which provides fine adaptation to temporal constraints that are imposed by cross-modal synchrony.
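
As an illustration of driving gesture timing from an XML description of overt form, here is a minimal Python sketch. The element and attribute names (utterance, gesture, affiliate) are assumptions for illustration and do not reproduce the paper's actual markup language.

```python
import xml.etree.ElementTree as ET

# A made-up XML utterance spec: a gesture is tied to a speech "affiliate".
SPEC = """
<utterance>
  <speech>This is the <affiliate id="obj"/>exhibit</speech>
  <gesture affiliate="obj" hand="right" shape="point"/>
</utterance>
"""

def plan(spec_xml, word_onsets):
    """Align each gesture stroke with the onset of its speech affiliate."""
    root = ET.fromstring(spec_xml)
    plans = []
    for gesture in root.iter("gesture"):
        onset = word_onsets[gesture.get("affiliate")]  # seconds into speech
        plans.append({"shape": gesture.get("shape"),
                      "hand": gesture.get("hand"),
                      "stroke_at": onset})
    return plans

# Assume a speech synthesizer reported that the affiliate starts at 0.9 s.
print(plan(SPEC, {"obj": 0.9}))
```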


Archive | 1998

Gesture and Sign Language in Human-Computer Interaction

Ipke Wachsmuth; Martin Fröhlich

Contents:

Invited Paper
- Research on Computer Science and Sign Language: Ethical Aspects

Gesture Recognition
- An Inertial Measurement Framework for Gesture Recognition and Applications
- Interpretation of Shape-Related Iconic Gestures in Virtual Environments
- Real-Time Gesture Recognition by Means of Hybrid Recognizers
- Development of a Gesture Plug-In for Natural Dialogue Interfaces
- A Natural Interface to a Virtual Environment through Computer Vision-Estimated Pointing Gestures

Recognition of Sign Language
- Towards an Automatic Sign Language Recognition System Using Subunits
- Signer-Independent Continuous Sign Language Recognition Based on SRN/HMM
- A Real-Time Large Vocabulary Recognition System for Chinese Sign Language
- The Recognition of Finger-Spelling for Chinese Sign Language
- Overview of Capture Techniques for Studying Sign Language Phonetics

Gesture and Sign Language Synthesis
- Models with Biological Relevance to Control Anthropomorphic Limbs: A Survey
- Lifelike Gesture Synthesis and Timing for Conversational Agents
- SignSynth: A Sign Language Synthesis Application Using Web3D and Perl
- Synthetic Animation of Deaf Signing Gestures
- From a Typology of Gestures to a Procedure for Gesture Production
- A Signing Avatar on the WWW

Nature and Notation of Sign Language
- Iconicity in Sign Language: A Theoretical and Methodological Point of View
- Notation System and Statistical Analysis of NMS in JSL
- Head Movements and Negation in Greek Sign Language
- Study on Semantic Representations of French Sign Language Sentences
- SignWriting-Based Sign Language Processing

Gestural Action & Interaction
- Visual Attention towards Gestures in Face-to-Face Interaction vs. on Screen
- Labeling of Gestures in SmartKom - The Coding System
- Evoking Gestures in SmartKom - Design of the Graphical User Interface
- Quantitative Analysis of Non-obvious Performer Gestures
- Interactional Structure Applied to the Identification and Generation of Visual Interactive Behavior: Robots that (Usually) Follow the Rules
- Are Praxical Gestures Semiotised in Service Encounters?

Applications Based on Gesture Control
- Visually Mediated Interaction Using Learnt Gestures and Camera Control
- Gestural Control of Sound Synthesis and Processing Algorithms
- Juggling Gestures Analysis for Music Control
- Hand Postures for Sonification Control
- Comparison of Feedforward (TDRBF) and Generative (TDRGBN) Network for Gesture Based Control

Further Contributions
- Research challenges in gesture: Open issues and unsolved problems
- Progress in sign language recognition
- Movement phases in signs and co-speech gestures, and their transcription by human coders
- Classifying two dimensional gestures in interactive systems
- Are listeners paying attention to the hand gestures of an anthropomorphic agent? An evaluation using a gaze tracking method
- Gesture-based and haptic interaction for human skill acquisition
- High performance real-time gesture recognition using Hidden Markov Models
- Velocity profile based recognition of dynamic gestures with discrete Hidden Markov Models
- Video-based sign language recognition using Hidden Markov Models
- Corpus of 3D natural movements and sign language primitives of movement
- On the use of context and a priori knowledge in motion analysis for visual gesture recognition
- Automatic estimation of body regions from video images
- Rendering gestures as line drawings
- Investigating the role of redundancy in multimodal input systems
- Gesture recognition of the upper limbs - From signal to symbol
- Exploiting distant pointing gestures for object selection in a virtual environment
- An intuitive two-handed gestural interface for computer supported product design
- Detection of fingertips in human hand movement sequences
- Neural architecture for gesture-based human-machine-interaction
- Robotic gesture recognition
- Image based recognition of gaze direction using adaptive methods
- Towards a dialogue system based on recognition and synthesis of Japanese sign language
- The recognition algorithm with non-contact for Japanese sign language using morphological analysis
- Special topics of gesture recognition applied in intelligent home environments
- BUILD-IT: An intuitive design tool based on direct object manipulation


Autonomous Agents and Multi-Agent Systems | 2010

Affective computing with primary and secondary emotions in a virtual human

Christian Becker-Asano; Ipke Wachsmuth

We introduce the WASABI ([W]ASABI [A]ffect [S]imulation for [A]gents with [B]elievable [I]nteractivity) Affect Simulation Architecture, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of each secondary emotion's connotative meaning in PAD space, we not only assure their mood-congruent elicitation, but also combine them with facial expressions, which are concurrently driven by primary emotions. Results of an empirical study suggest that human players in a card game scenario judge our virtual human MAX significantly older when secondary emotions are simulated in addition to primary ones.
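
The PAD categorization step can be sketched as a nearest-anchor lookup. The following Python fragment is a minimal illustration under assumed anchor coordinates; it is not WASABI's actual mapping.

```python
import math

# Illustrative (pleasure, arousal, dominance) anchors in [-1, 1]^3.
# These coordinates are assumptions, not WASABI's actual values.
PRIMARY_ANCHORS = {
    "happy":   ( 0.8,  0.8,  0.5),
    "angry":   (-0.8,  0.8,  0.8),
    "sad":     (-0.6, -0.4, -0.6),
    "bored":   ( 0.0, -0.8,  0.0),
    "relaxed": ( 0.6, -0.4,  0.4),
}

def categorize_pad(p, a, d):
    """Return the primary emotion whose anchor is nearest in PAD space."""
    return min(PRIMARY_ANCHORS,
               key=lambda e: math.dist((p, a, d), PRIMARY_ANCHORS[e]))

print(categorize_pad(0.7, 0.6, 0.4))  # -> 'happy'
```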


Agent-Directed Simulation | 2004

Simulating the Emotion Dynamics of a Multimodal Conversational Agent

Christian Becker; Stefan Kopp; Ipke Wachsmuth

We describe an implemented system for the simulation and visualisation of the emotional state of a multimodal conversational agent called Max. The focus of the presented work lies on modeling a coherent course of emotions over time. The basic idea of the underlying emotion system is the linkage of two interrelated psychological concepts: an emotion axis, representing short-lived system states, and an orthogonal mood axis that stands for an undirected, longer-lasting system state. A third axis was added to realize a dimension of boredom. To enhance the believability and lifelikeness of Max, the emotion system has been integrated into the agent's architecture. As a result, Max's facial expression, gesture, speech, and secondary behaviors as well as his cognitive functions are modulated by the emotion system that, in turn, is affected by information arising at various levels within the agent's architecture.
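
A minimal Python sketch of such coupled emotion-mood-boredom dynamics follows; the decay constants are illustrative assumptions, not the system's actual parameters.

```python
def step(state, stimulus=0.0, dt=0.1):
    """Advance the (emotion, mood, boredom) state by one time step."""
    emotion, mood, boredom = state
    emotion += stimulus - 0.5 * emotion * dt     # short-lived reaction, decays fast
    mood += 0.1 * (emotion - mood) * dt          # mood slowly follows emotion
    if stimulus == 0.0:
        boredom = min(1.0, boredom + 0.02 * dt)  # idleness breeds boredom
    else:
        boredom = 0.0                            # any event resets boredom
    return emotion, mood, boredom

state = (0.0, 0.0, 0.0)
state = step(state, stimulus=0.6)   # a positive event
for _ in range(50):                 # then five seconds of idle decay
    state = step(state)
print(state)
```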


Journal for Research in Mathematics Education | 1988

Identifying Fractions on Number Lines

George W. Bright; Merlyn J. Behr; Thomas R. Post; Ipke Wachsmuth



International Journal of Social Robotics | 2012

Generation and Evaluation of Communicative Robot Gesture

Maha Salem; Stefan Kopp; Ipke Wachsmuth; Katharina J. Rohlfing; Frank Joublin

How is communicative gesture behavior in robots perceived by humans? Although gesture is crucial in social interaction, this research question is still largely unexplored in the field of social robotics. Thus, the main objective of the present work is to investigate how gestural machine behaviors can be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform are tackled. We present a framework that enables a humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, while not being limited to a predefined repertoire of motor actions. Secondly, the achieved flexibility in robot gesture is exploited in controlled experiments. To gain a deeper understanding of how communicative robot gesture might impact and shape human perception and evaluation of human-robot interaction, we conducted a between-subjects experimental study using the robot in a joint task scenario. We manipulated the non-verbal behaviors of the robot in three experimental conditions, so that it would refer to objects by utilizing either (1) unimodal (i.e., speech only) utterances, (2) congruent multimodal (i.e., semantically matching speech and gesture) or (3) incongruent multimodal (i.e., semantically non-matching speech and gesture) utterances. Our findings reveal that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance.
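
The three-condition between-subjects design can be illustrated with a small Python sketch. The condition labels follow the abstract; the balanced random assignment logic is an assumption for illustration, not the authors' procedure.

```python
import random

CONDITIONS = ("unimodal_speech",
              "congruent_multimodal",
              "incongruent_multimodal")

def assign_conditions(participant_ids, seed=42):
    """Randomly assign each participant to exactly one condition,
    keeping group sizes as balanced as possible."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

print(assign_conditions(range(12)))  # 4 participants per condition
```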


Proceedings of Computer Animation 2002 (CA 2002) | 2002

Model-based animation of co-verbal gesture

Stefan Kopp; Ipke Wachsmuth

Virtual conversational agents are supposed to combine speech with non-verbal modalities for intelligible and believable utterances. However, the automatic synthesis of co-verbal gestures still struggles with several problems, such as achieving naturalness in procedurally generated animations, flexibility in pre-defined movements, and synchronization with speech. In this paper we focus on generating complex multimodal utterances including gesture and speech from XML-based descriptions of their overt form. We describe a coordination model that reproduces co-articulation and transition effects in both modalities. In particular, an efficient kinematic approach to creating gesture animations from shape specifications is presented, which provides fine adaptation to temporal constraints that are imposed by cross-modal synchrony.
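
The adaptation of a stroke to a speech-imposed time window can be sketched as a simple retiming of a fixed spatial path. The Python fragment below is a minimal sketch; the waypoint-list trajectory representation is an assumption, not the paper's kinematic model.

```python
def retime_stroke(waypoints, duration):
    """Spread spatial waypoints evenly over `duration` seconds, so the
    stroke ends exactly at the cross-modal synchrony deadline.
    waypoints: list of (x, y, z) hand positions defining the stroke path."""
    n = len(waypoints)
    if n < 2:
        raise ValueError("a stroke needs at least two waypoints")
    dt = duration / (n - 1)
    return [(i * dt, wp) for i, wp in enumerate(waypoints)]

# Fit a pointing stroke into the 0.4 s before the affiliated word's onset.
path = [(0.0, 0.0, 0.0), (0.2, 0.3, 0.4), (0.3, 0.5, 0.6)]
print(retime_stroke(path, duration=0.4))
```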


Lecture Notes in Artificial Intelligence 4930 | 2008

Modelling Communication with Robots and Virtual Humans

Ipke Wachsmuth; Günther Knoblich



AI Magazine | 2008

Embodied communication in humans and machines

Ipke Wachsmuth; Günther Knoblich

The challenge to develop an integrated perspective of embodiment in communication has been taken up by an international research group hosted by Bielefeld University's Center for Interdisciplinary Research (ZiF, Zentrum für interdisziplinäre Forschung) from October 2005 through September 2006. An international conference was held there on 12-15 January 2005 to define a research agenda that will explicitly address embodied communication in humans and machines.

Collaboration


Dive into Ipke Wachsmuth's collaborations.

Top Co-Authors

Bernhard Jung
Freiberg University of Mining and Technology

Merlyn J. Behr
Northern Illinois University