
Publication


Featured research published by Yasmine Arafa.


Intelligent User Interfaces | 2003

Scripting embodied agents behaviour with CML: character markup language

Yasmine Arafa; Abe Mamdani

Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid the rapid incorporation of lifelike characters/agents into online applications or virtual reality worlds. This multi-modal scripting language is designed to be easily understood by human animators and easily generated by a software process such as a software agent. CML is constructed jointly on the motion and multi-modal capabilities of virtual life-like figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.
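
To make the idea concrete, here is a minimal Python sketch of how a software process might emit a CML-style script. The tag and attribute names (cml, character, expression, gesture, speech) are illustrative assumptions, not the published CML schema, which the abstract does not spell out.

```python
# A minimal sketch of an agent process emitting a CML-style animation
# script. Tag/attribute names are illustrative guesses, not the actual
# CML specification.
import xml.etree.ElementTree as ET

def build_cml_script(character_name: str, utterance: str) -> str:
    """Compose a small CML-style script as an XML string."""
    root = ET.Element("cml")
    character = ET.SubElement(root, "character", name=character_name)

    # Multi-modal behaviour: a facial expression, a gesture, and speech
    # scheduled against a shared timeline (times in milliseconds).
    ET.SubElement(character, "expression", type="smile", start="0", duration="1500")
    ET.SubElement(character, "gesture", type="wave", start="200", duration="1000")
    speech = ET.SubElement(character, "speech", start="300")
    speech.text = utterance

    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(build_cml_script("assistant", "Hello, how can I help you today?"))
```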


Adaptive Agents and Multi-Agent Systems | 2003

Character animation scripting languages: a comparison

Yasmine Arafa; Kaveh Kamyab; Ebrahim Mamdani

Designing lifelike animated agents presents a challenging agenda for research. Such agent metaphors will only be widely applicable to real-time applications when there is a standardised way to map underlying engines to the visual presentation of the agents. As a number of such scripting languages are now emerging, the research community needs to examine and agree upon the requirements of, and expectations from, them. In this paper we address the current fragmentation in the field of embodied character animation and representation. We outline the functions and specifications of markup languages for scripting the animation, and representing the attributes, of virtual characters.


Intelligent User Interfaces | 2000

Virtual personal service assistants: towards real-time characters with artificial hearts

Yasmine Arafa; Abe Mamdani

In recent years there has been a growing consensus that new-generation interfaces should turn their focus to the human element by adding an affective dimension. Affective generation of autonomous agent behaviour aspires to give computer interfaces emotional states that relate to and take into account both user and system-environment considerations: internally, through computational models of artificial hearts (emotion and personality), and externally, through believable multi-modal expression augmented with quasi-human characteristics. Computational models of affect address the problem of how agents arrive at a given affective state. Much of this work targets the entertainment domain and generally does not address the requirements of multi-agent systems, where behaviour changes dynamically based on agent goals as well as shared data and knowledge. This paper discusses one of the requirements for real-time realisation of Personal Service Assistant interface characters. We describe an approach to enabling the computational perception required for the automated generation of affective behaviour in multi-agent real-time environments. It extends a current agent communication language so that messages not only convey the semantic content of a knowledge exchange but also communicate affective attitudes about the shared knowledge.
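
As a rough illustration of an affect-carrying agent message, the Python sketch below attaches an affective attitude to an ACL-style message. The field names (performative, affect, intensity) and the appraisal rule are hypothetical, made up for illustration; standard ACLs such as FIPA-ACL or KQML define no affect slot.

```python
# A sketch of extending an agent communication language message so it
# carries an affective attitude alongside its semantic content. The
# schema and appraisal rule below are hypothetical.
from dataclasses import dataclass

@dataclass
class AffectiveMessage:
    performative: str          # e.g. "inform", "request" (ACL speech act)
    sender: str
    receiver: str
    content: str               # the semantic content of the exchange
    affect: str = "neutral"    # affective attitude about the content
    intensity: float = 0.0     # 0.0 (none) .. 1.0 (strong)

def appraise(msg: AffectiveMessage) -> str:
    """Toy appraisal: map the communicated affect to a visible behaviour."""
    if msg.affect == "joy" and msg.intensity > 0.5:
        return "smile"
    if msg.affect == "concern":
        return "frown"
    return "idle"

msg = AffectiveMessage("inform", "sales_agent", "psa",
                       "Your order has shipped.", affect="joy", intensity=0.8)
print(appraise(msg))  # -> "smile"
```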


Life-Like Characters | 2004

Toward a Unified Scripting Language: Lessons Learned from Developing CML and AML

Yasmine Arafa; Kaveh Kamyab; Ebrahim Mamdani

Life-like animated agents present a challenging ongoing agenda for research. Such agent metaphors will only be widely applicable to online applications when there is a standardized way to map underlying engines to the visual presentation of the agents. This chapter delineates the functions and specifications of two markup languages for scripting the animation of virtual characters. The first is the Character Markup Language (CML), an XML-based character attribute definition and animation scripting language for embodied agents, designed to aid the rapid incorporation of life-like agents into online applications or virtual reality worlds. CML is constructed jointly on the motion and multi-modal capabilities of virtual human figures. The second is the Avatar Markup Language (AML), also an XML-based multi-modal scripting language, designed to be easily understood by human animators and easily generated by a software process such as an agent. We illustrate the constructs of the two languages and look at some examples of usage. The experience gained through developing two languages with different approaches yet similar aims highlights the need for a degree of unification, especially given that a number of other similar languages exist, as illustrated in other parts of this book. We attempt to define metrics for comparing a set of these languages, with the aim of identifying salient constructs for a unified scripting language.
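
The Python sketch below shows what such a comparison metric might look like in practice: a shared checklist of constructs scored per language. The dimensions and the per-language feature sets are illustrative placeholders, not findings reported in the chapter.

```python
# A sketch of a comparison metric for character scripting languages:
# a common checklist of constructs and a coverage score. Dimensions
# and feature sets are hypothetical placeholders.
DIMENSIONS = [
    "xml_based",          # is the language XML-based?
    "timeline_control",   # explicit timing/synchronisation constructs
    "facial_expression",  # facial animation primitives
    "gesture",            # body gesture primitives
    "speech_sync",        # lip-sync / speech alignment
    "tool_generatable",   # easily generated by a software agent
]

def coverage(features: dict) -> float:
    """Fraction of the shared dimensions a language covers."""
    return sum(features.get(d, False) for d in DIMENSIONS) / len(DIMENSIONS)

# Hypothetical feature sets for two languages under comparison.
cml = {d: True for d in DIMENSIONS}
aml = dict(cml, timeline_control=False)

for name, feats in [("CML", cml), ("AML", aml)]:
    print(f"{name}: {coverage(feats):.0%} of shared constructs")
```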


Active Media Technology | 2001

Building Multi-modal Personal Sales Agents as Interfaces to E-commerce Applications

Yasmine Arafa; Abe Mamdani

The research presented explores a new paradigm for human-computer interaction with electronic retailing applications: one that deploys face-to-face interaction with intelligent, visual, lifelike, multi-modal conversational agents that take on the role of electronic sales assistants. This paper discusses the motivations for enriching current e-commerce application interfaces with multi-modal interface agents and the technical development issues they raise, as realised in the design and development of the MAPPA (EU project EP28831) system architecture. The paper addresses three distinct components of an overall framework for developing lifelike, multi-modal agents for real-time and dynamic applications: knowledge representation and manipulation, grounded affect models, and the convergence of both into support for multimedia visualisation of lifelike, social behaviour. The research presents a novel specification for such a medium and a functional agent-based system scenario (e-commerce) implemented with it, setting forth a framework for building multi-modal interface agents and yielding a conversational form of human-machine interaction that may have potential for shaping tomorrow's interface to the world of e-commerce.


Systems, Man and Cybernetics | 2000

Face-to-face interaction with an electronic personal sales assistant

Yasmine Arafa; Abe Mamdani

The research presented explores a new paradigm for e-commerce human-computer interaction, presentation, and personalisation. It describes face-to-face conversational interaction with intelligent, visual, lifelike electronic sales assistant agents. This paradigm joins two research areas: that of visual and conversational interface agents, which provide an intuitive metaphor for human-computer interaction based on face-to-face conversation; and that of knowledge-based, market-aware, user-aware personal agents that can adapt to individual consumer preferences and requirements in a service-based electronic market. By combining these two areas we yield a conversational form of human-machine interaction that has the potential to shape tomorrow's interface to the world of e-commerce.


International Conference on Multimodal Interfaces | 2002

Multi-modal embodied agents scripting

Yasmine Arafa; Abe Mamdani

Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language, Character Mark-up Language (CML), is an XML-based character attribute definition and animation scripting language designed to aid the rapid incorporation of lifelike characters/agents into online applications or virtual reality worlds. This multi-modal scripting language is designed to be easily understood by human animators and easily generated by a software process such as a software agent. CML is constructed jointly on the motion and multi-modal capabilities of virtual life-like figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.


Systems, Man and Cybernetics | 1999

Virtual personal service assistants: real-time characters with artificial hearts

Yasmine Arafa; Abe Mamdani

There has been a growing consensus that new-generation interfaces should turn their focus to the human element by enriching human-computer communication with an affective dimension. Affective generation of autonomous agent behaviour aspires to give computer interfaces emotional states that relate to and take into account both user and system-environment considerations: internally, through computational models of artificial hearts, and externally, through believable multi-modal expression augmented with quasi-human characteristics. Computational models of affect address the problems of how agents arrive at a given affective state and how these states are expressed through natural multimodal communicative interaction. The paper discusses one of the requirements for real-time realisation of personal service assistant interface characters. We describe an operational approach to enabling the computational perception required for the automated generation of affective behaviour through inter-agent communication in multi-agent real-time environments. The research investigates the potential of extending current agent communication languages so that they not only convey the semantic content of a knowledge exchange but also communicate affective attitudes about the shared knowledge. This provides a necessary component of the framework required for real-time autonomous agent development, with which we may bridge the gap between current research in psychological theory and the practical implementation of social multi-agent systems.


International Conference on Multimodal Interfaces | 2000

A Framework for Supporting Multimodal Conversational Characters in a Multi-agent System

Yasmine Arafa; Abe Mamdani

This paper discusses a computational framework for enabling multimodal conversational interface agents embodied in lifelike characters within a multi-agent environment. It is generally argued that one of the problems with such interface characters today is their inability to respond believably or adequately to the context of an interaction and the surrounding environment. Affective behaviour is used to better express responses to interaction context and to provide more believable visual expressive responses. We describe an operational approach to enabling the computational perception required for the automated generation of affective behaviour through inter-agent communication in multi-agent real-time environments. The research investigates the potential of extending current agent communication languages so that they not only convey the semantic content of a knowledge exchange but also communicate affective attitudes about the shared knowledge. This provides a necessary component of the framework required for autonomous agent development, with which we may bridge the gap between current research in psychological theory and the practical implementation of social multi-agent systems.


Systems, Man and Cybernetics | 1999

Modelling interface agents for personality-based behaviour

Yasmine Arafa; Patricia Charlton; Abe Mamdani

Visually synthesising the metaphor of life-like visual personal service assistants (PSAs) is the focus of much current research. Often the complex visual requirements of the PSA mean that little attention is given to the underlying infrastructure necessary to support the reasoning requirements of such a metaphor. From a software engineering perspective, this lack of support means that many designs and implementations are application-specific, so their re-use is limited. To address this problem, our research goes behind the scenes of visual stage representation to provide a technical infrastructure enabling the creation of more believable life-like interface characters, based on a structured meta-representation called the Asset Description Language (ADL). The language is extended to enable agents to communicate emotions with other agents (software or human). This explicit representation of emotions within communication means that an agent can reason about them and respond appropriately, either visually to a human or through communication with another agent, and this response permits an agent to exhibit personality. Through the provision of a structured meta-representation we aim to evaluate the effect of personality-based behaviour on interactive user interfaces as well as on other service agents in a multi-agent environment. Because the representation provides basic, general design primitives that are not application-specific, re-use of our infrastructure by other developers is possible.
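
As a loose illustration, the Python sketch below models an ADL-like asset description extended with emotion annotations that an agent can reason over to exhibit personality. The schema and the response rule are hypothetical, since the abstract does not specify ADL's actual syntax.

```python
# A sketch of a structured meta-representation (in the spirit of the
# Asset Description Language, ADL) extended with emotion annotations.
# The schema and mapping below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AssetDescription:
    asset_id: str
    attributes: Dict[str, str] = field(default_factory=dict)
    # The extension: emotion annotations other agents can reason over.
    emotions: Dict[str, float] = field(default_factory=dict)  # name -> intensity

def respond(desc: AssetDescription) -> str:
    """Toy personality-based response: pick a behaviour from the
    strongest annotated emotion."""
    if not desc.emotions:
        return "neutral"
    dominant = max(desc.emotions, key=desc.emotions.get)
    return {"joy": "enthusiastic", "fear": "cautious"}.get(dominant, "neutral")

offer = AssetDescription("holiday_package_42",
                         attributes={"price": "499", "currency": "EUR"},
                         emotions={"joy": 0.7, "fear": 0.1})
print(respond(offer))  # -> "enthusiastic"
```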

Collaboration


Dive into Yasmine Arafa's collaboration.

Top Co-Authors

Abe Mamdani, Imperial College London
Kaveh Kamyab, Imperial College London
Jeremy Pitt, Imperial College London
Pat Fehin, Imperial College London
Simon Martin, Imperial College London