Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ramin Yaghoubzadeh is active.

Publications


Featured research published by Ramin Yaghoubzadeh.


Intelligent Virtual Agents | 2013

Virtual Agents as Daily Assistants for Elderly or Cognitively Impaired People

Ramin Yaghoubzadeh; Marcel Kramer; Karola Pitsch; Stefan Kopp

People with cognitive impairments have problems organizing their daily life autonomously. A virtual agent serving as a daily calendar assistant could provide valuable support, but this requires that these special user groups accept such a system and can interact with it successfully. In this paper we present studies to elucidate these questions for elderly users as well as cognitively impaired users. Results from interviews and focus groups show that acceptance can be increased by way of a participatory design method. Actual interaction studies with a prototype demonstrate the feasibility of spoken-language interaction and reveal strategies to mitigate understanding problems.


Journal on Multimodal User Interfaces | 2013

An architecture for fluid real-time conversational agents: integrating incremental output generation and input processing

Stefan Kopp; Herwin van Welbergen; Ramin Yaghoubzadeh; Hendrik Buschmeier

Embodied conversational agents still do not achieve the fluidity and smoothness of natural conversational interaction. One main reason is that current systems often respond with large latencies and in inflexible ways. We argue that to overcome these problems, real-time conversational agents need to be based on an underlying architecture that provides two essential features for fast and fluent behavior adaptation: a close bi-directional coordination between input processing and output generation, and incrementality of processing at both stages. We propose an architectural framework for conversational agents, the Artificial Social Agent Platform (ASAP), providing these two ingredients for fluid real-time conversation. The overall architectural concept is described, along with specific means of specifying incremental behavior in BML and technical implementations of different modules. We show how phenomena of fluid real-time conversation, like adapting to user feedback or smooth turn-keeping, can be realized with ASAP, and we describe in detail an example real-time interaction with the implemented system.
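The central architectural idea, bi-directional coordination between incremental input processing and output generation, can be pictured with a minimal sketch. The Python below is an illustration under invented names (OutputPlan, IncrementalAgent, on_input_increment), not the ASAP implementation: the point is that input increments arriving mid-utterance can revise the not-yet-realized part of the output plan.

```python
# Minimal sketch of incremental input/output coordination; all names are
# hypothetical and this is NOT the ASAP codebase. Input arrives in small
# increments while output is underway, and only the not-yet-spoken part
# of the plan may be revised -- which is what enables fluid adaptation.

from dataclasses import dataclass, field


@dataclass
class OutputPlan:
    chunks: list = field(default_factory=list)
    position: int = 0  # index of the chunk currently being realized

    def revise_remaining(self, new_chunks):
        # Replace only the part that has not been spoken yet.
        self.chunks = self.chunks[:self.position] + new_chunks


class IncrementalAgent:
    def __init__(self):
        self.plan = OutputPlan(chunks=["So on Monday,", "we planned", "the doctor visit."])

    def on_input_increment(self, word, confidence):
        """Called for every partial ASR result, even while output runs."""
        if word == "no" and confidence > 0.7:
            # Negative feedback: repair immediately instead of finishing
            # the originally planned utterance first.
            self.plan.revise_remaining(["oh, sorry,", "what should we change?"])

    def tick(self):
        """Realize one output chunk per processing cycle."""
        if self.plan.position < len(self.plan.chunks):
            print(self.plan.chunks[self.plan.position])
            self.plan.position += 1


agent = IncrementalAgent()
agent.tick()                         # "So on Monday,"
agent.on_input_increment("no", 0.9)  # user objects mid-utterance
agent.tick()                         # repaired continuation, not the old plan
```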


Intelligent Virtual Agents | 2014

AsapRealizer 2.0: The Next Steps in Fluent Behavior Realization for ECAs

Herwin van Welbergen; Ramin Yaghoubzadeh; Stefan Kopp

Natural human interaction is highly dynamic and responsive: interlocutors produce utterances incrementally, smoothly switch speaking turns with virtually no delay, make use of on-the-fly adaptation and (self) interruptions, execute movement in tight synchrony, etc. We present the conglomeration of our research efforts in enabling the realization of such fluent interactions for Embodied Conversational Agents in the behavior realizer ‘AsapRealizer 2.0’ and show how it provides fluent realization capabilities that go beyond the state-of-the-art.
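One of the fluency phenomena mentioned above, graceful handling of interruptions, can be illustrated with a small sketch. This is not AsapRealizer code; the Gesture class and its three-phase model are simplified stand-ins for the realizer's far richer behavior representation.

```python
# Hypothetical illustration of graceful interruption: an interrupted gesture
# skips ahead to its retraction phase instead of freezing mid-stroke, so the
# arm returns to rest along a natural path. Not AsapRealizer code.

class Gesture:
    PHASES = ["preparation", "stroke", "retraction"]

    def __init__(self, name):
        self.name = name
        self.phase_index = 0
        self.done = False

    def step(self):
        """Advance one phase per realization cycle."""
        if self.done:
            return
        print(f"{self.name}: {self.PHASES[self.phase_index]}")
        if self.phase_index < len(self.PHASES) - 1:
            self.phase_index += 1
        else:
            self.done = True

    def interrupt(self):
        # Graceful: jump to retraction rather than stopping abruptly.
        if not self.done:
            self.phase_index = len(self.PHASES) - 1


g = Gesture("pointing")
g.step()       # preparation
g.interrupt()  # e.g. the user starts speaking
g.step()       # retraction: the stroke is skipped, but there is no frozen pose
```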


Intelligent Virtual Agents | 2015

Adaptive Grounding and Dialogue Management for Autonomous Conversational Assistants for Elderly Users

Ramin Yaghoubzadeh; Karola Pitsch; Stefan Kopp

People with age-related or congenital cognitive impairments require assistance in daily tasks to enable them to maintain a self-determined lifestyle in their own home. We developed and evaluated a prototype of an autonomous spoken dialogue assistant to support these user groups in the domain of week planning. Based on insights from previous work with a Wizard-of-Oz (WOz) study, we designed a dialogue system which caters to the interactional needs of these user groups. Subjects were able to interact successfully with the system and rated it as equivalent in terms of robustness and usability compared to the WOz prototype.
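The adaptive grounding behavior such a system needs can be pictured as a confidence-dependent policy. The sketch below is a toy version with invented thresholds and phrasing, not the evaluated dialogue system: low recognition confidence triggers an explicit confirmation, medium confidence an implicit one, and high confidence lets the dialogue proceed.

```python
# Toy adaptive grounding policy; thresholds and phrasing are invented for
# illustration and do not reproduce the paper's system.

def grounding_move(understood_value, confidence,
                   explicit_below=0.5, implicit_below=0.8):
    if confidence < explicit_below:
        # Explicit confirmation: halt and verify before committing.
        return f"Did you say {understood_value}? Please answer yes or no."
    if confidence < implicit_below:
        # Implicit confirmation: echo the value so the user can object.
        return f"Okay, {understood_value}. And at what time?"
    return "Okay. And at what time?"


print(grounding_move("Tuesday", 0.35))  # explicit check
print(grounding_move("Tuesday", 0.65))  # implicit check
print(grounding_move("Tuesday", 0.95))  # proceed without checking
```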


Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT 2016) | 2016

Towards graceful turn management in human-agent interaction for people with cognitive impairments

Ramin Yaghoubzadeh; Stefan Kopp

A conversational approach to spoken human-machine interaction, the primary and most stable mode of interaction for many people with cognitive impairments, can require proactive control of the interactive flow from the system side. While spoken technology has primarily focused on unimodal spoken interruptions to this end, we propose a multimodal embodied approach with a virtual agent, incorporating an increasingly salient superposition of gestural, facial and paraverbal cues, in order to more gracefully signal turn taking. We implemented and evaluated this in a pilot study with five people with cognitive impairments. We present initial statistical results and promising insights from qualitative analysis which indicate that the basic approach works.
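The "increasingly salient superposition" of cues can be thought of as an escalation ladder in which each level adds a modality on top of the previous ones. The concrete cues and their ordering below are hypothetical, chosen only to make the mechanism concrete.

```python
# Hypothetical escalation ladder for reclaiming the turn: each level
# superimposes an additional, more salient cue. Cue names and ordering
# are illustrative, not the paper's exact design.

ESCALATION = [
    {"gaze": "to_user"},                                    # subtle
    {"gaze": "to_user", "gesture": "raise_hand"},           # + gestural cue
    {"gaze": "to_user", "gesture": "raise_hand",
     "paraverbal": "uh,"},                                  # + paraverbal cue
    {"gaze": "to_user", "gesture": "raise_hand",
     "paraverbal": "excuse me,"},                           # most salient
]


def turn_taking_cues(level):
    """Return the superimposed cue set for the given escalation level."""
    return ESCALATION[min(level, len(ESCALATION) - 1)]


# While the user keeps the floor, the agent escalates one level per cycle.
for level in range(5):
    print(level, turn_taking_cues(level))
```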


Intelligent Virtual Agents | 2011

Creating familiarity through adaptive behavior generation in human-agent interaction

Ramin Yaghoubzadeh; Stefan Kopp

Embodied conversational agents should make use of an adaptive behavior generation mechanism which is able to gradually refine its repertoire to behaviors the individual user understands and accepts. We present a probabilistic model that takes into account possible sociocommunicative effects of utterances while selecting the behavioral form.
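A toy version of that selection mechanism is sketched below: the agent keeps a per-form estimate of how likely this user is to understand and accept each behavioral form, picks the best one (with occasional exploration), and refines the estimate from observed reactions. The priors, the epsilon-greedy rule, and the incremental update are all invented for illustration; the paper's probabilistic model is more elaborate.

```python
# Toy adaptive behavior-form selection; priors, exploration rate, and update
# rule are invented for illustration, not the paper's model.

import random

# Estimated P(user understands & accepts | form), per individual user
acceptance = {"iconic_gesture": 0.5, "verbal_only": 0.7, "gesture_plus_speech": 0.6}
counts = {form: 2 for form in acceptance}  # pseudo-counts behind the priors


def select_form(epsilon=0.1):
    # epsilon-greedy: rarely used forms still get explored and refined
    if random.random() < epsilon:
        return random.choice(list(acceptance))
    return max(acceptance, key=acceptance.get)


def observe(form, understood):
    # Incremental update of the running acceptance estimate
    counts[form] += 1
    acceptance[form] += (float(understood) - acceptance[form]) / counts[form]


form = select_form()
observe(form, understood=True)
print(form, round(acceptance[form], 3))
```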


Intelligent Virtual Agents | 2016

The Effect of an Intelligent Virtual Agent’s Nonverbal Behavior with Regard to Dominance and Cooperativity

Carolin Straßmann; Astrid M. Rosenthal-von der Pütten; Ramin Yaghoubzadeh; Raffael Kaminski; Nicole C. Krämer

In order to design successful human-agent interaction, knowledge about the effects of a virtual agent's behavior is important. The presented study therefore investigates how different nonverbal behaviors affect person perception of the agent, with a focus on dominance and cooperativity. An online study with 190 participants was conducted in which 23 nonverbal behaviors from four experimental conditions (dominant, submissive, cooperative, and non-cooperative behavior) were compared. The results confirm that nonverbal behavior strongly affects users' person perception. Data analyses reveal that symbolic gestures such as crossing the arms, placing the hands on the hips, or touching one's neck influence dominance perception most effectively. Regarding perceived cooperativity, expressivity has the most pronounced effect.


Human Centered Robot Systems, Cognition, Interaction, Technology | 2009

Social Motorics – Towards an Embodied Basis of Social Human-Robot Interaction

Amir Sadeghipour; Ramin Yaghoubzadeh; Andreas Rüter; Stefan Kopp

In this paper we present a biologically inspired model for social behavior recognition and generation. Based on a unified sensorimotor representation, it integrates hierarchical motor knowledge structures, probabilistic forward models for predicting observations, and inverse models for motor learning. With a focus on hand gestures, results of initial evaluations against real-world data are presented.
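The forward-model idea can be made concrete with a tiny example: each known gesture predicts the next observation of an ongoing movement, and prediction errors re-weight the competing hypotheses, so recognition and prediction share one representation. The trajectories and noise model below are fabricated purely for illustration.

```python
# Minimal analogue of forward-model-based gesture recognition; the prototype
# trajectories, the Gaussian error model, and sigma are all made up.

import math

# Prototype wrist-height trajectories for two known gestures (hypothetical)
prototypes = {
    "wave":  [0.2, 0.8, 0.2, 0.8, 0.2],
    "point": [0.2, 0.5, 0.9, 0.9, 0.9],
}
belief = {g: 0.5 for g in prototypes}  # uniform prior over gestures


def update(belief, t, observation, sigma=0.15):
    """Each forward model predicts step t; prediction errors reweight."""
    for g, proto in prototypes.items():
        error = observation - proto[t]
        belief[g] *= math.exp(-(error ** 2) / (2 * sigma ** 2))
    total = sum(belief.values())
    return {g: b / total for g, b in belief.items()}


observed = [0.2, 0.55, 0.85]  # partial, still-ongoing movement
for t, obs in enumerate(observed):
    belief = update(belief, t, obs)
print(belief)  # probability mass shifts toward "point" before the gesture ends
```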


Intelligent Virtual Agents | 2016

flexdiam – Flexible Dialogue Management for Incremental Interaction with Virtual Agents (Demo Paper)

Ramin Yaghoubzadeh; Stefan Kopp

We present a demonstration system for incremental spoken human–machine dialogue for task-centric domains that includes a controller for verbal and nonverbal behavior for virtual agents. The dialogue management components can handle uncertainty in input and resolve it interactively with high responsivity, and state tracking is aware of momentary events such as interruptions by the user. Aside from adaptable dialogue strategies, such as for grounding, the system includes a multimodal floor management controller that attempts to limit the influence of idiosyncratic dialogue behavior on the part of our primary user groups – older adults and people with cognitive impairments – both of which have previously participated in pilot studies using the platform.
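How such interruption-awareness might look in the dialogue state can be sketched as follows. The FloorManager class and its event names are hypothetical, not the flexdiam API; the point is that a user barge-in while the agent speaks is recorded as a momentary event the dialogue manager can react to, instead of being lost.

```python
# Hypothetical sketch of interruption-aware floor tracking; class, method,
# and event names are invented and do not reflect the flexdiam API.

class FloorManager:
    """Tracks who holds the floor and records barge-ins as momentary events."""

    def __init__(self):
        self.holder = "agent"
        self.pending_events = []

    def on_user_speech_start(self):
        if self.holder == "agent":
            # User barged in while the agent was speaking: record the event
            # and yield the floor rather than talking over the user.
            self.pending_events.append("user_barge_in")
        self.holder = "user"

    def on_user_speech_end(self):
        self.holder = "free"

    def agent_may_speak(self):
        return self.holder in ("agent", "free")


fm = FloorManager()
fm.on_user_speech_start()    # interruption while the agent holds the floor
print(fm.pending_events)     # ['user_barge_in'] -> the manager can react to it
print(fm.agent_may_speak())  # False: stay quiet until the user is done
```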


Proceedings of the 7th Workshop on Speech and Language Processing for Assistive Technologies (SLPAT 2016) | 2016

flexdiam – flexible dialogue management for problem-aware, incremental spoken interaction for all user groups (demo paper)

Ramin Yaghoubzadeh; Stefan Kopp

The dialogue management framework flexdiam was designed to afford people across a wide spectrum of cognitive capabilities access to a spoken-dialogue controlled assistive system, aiming for a conversational speech style combined with incremental feedback and information update. The architecture is able to incorporate uncertainty and natural repair mechanisms in order to fix problems quickly in an interactive process – with flexibility with respect to individual users’ capabilities. It was designed and evaluated in a user-centered approach in cooperation with a large health care provider. We present the architecture and showcase the resulting autonomous prototype for schedule management and accessible communication.
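The interactive repair of uncertain input can be reduced to a simple decision: commit when one interpretation clearly dominates, otherwise ask a targeted clarification question. The margin and the question phrasing below are invented for illustration; the actual system's repair mechanisms are richer.

```python
# Toy illustration of interactive uncertainty resolution; the commit margin
# and question phrasing are invented, not taken from flexdiam.

def resolve(hypotheses, commit_margin=0.3):
    """hypotheses: (interpretation, probability) pairs, sorted descending."""
    best, second = hypotheses[0], hypotheses[1]
    if best[1] - second[1] >= commit_margin:
        return ("commit", best[0])
    # Too close to call: repair interactively with an alternative question.
    return ("clarify", f"Did you mean {best[0]} or {second[0]}?")


print(resolve([("Tuesday", 0.8), ("Thursday", 0.2)]))   # ('commit', 'Tuesday')
print(resolve([("Tuesday", 0.5), ("Thursday", 0.45)]))  # clarification question
```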

Collaboration


Dive into Ramin Yaghoubzadeh's collaborations.

Top Co-Authors

Karola Pitsch

University of Duisburg-Essen

Carolin Straßmann

University of Duisburg-Essen

Nicole C. Krämer

University of Duisburg-Essen

Christiane Opfermann

University of Duisburg-Essen
