Andrew Marriott
Curtin University
Publication
Featured research published by Andrew Marriott.
active media technology | 2001
Andrew Marriott; Simon Beard; John Stallo; Quoc Huynh
The computer revolution in Active Media Technology has recently made it possible to have Talking Head interfaces to applications and information. Users may, with plain English queries, interact with a lifelike computer generated image that responds to them with computer generated speech using textual information coming from a knowledge base. This paper details the research being done at Curtin University in creating a Virtual Human Markup Language (VHML) that allows these interactive Talking Heads to be directed by text marked up in XML. This direction makes the interaction more effective. The language is designed to accommodate the various aspects of Human-Computer Interaction with regards to Facial Animation, Body Animation, Dialogue Manager interaction, Text to Speech production, Emotional Representation plus Hyper and Multi Media information. This paper also points to audio and visual examples of the use of the language as well as user evaluation of an interactive Talking Head that uses VHML. VHML is currently being used in several Talking Head applications as well as a Mentoring System. Finally we discuss planned future experiments using VHML for two Talking Head demonstrations / evaluations. The VHML development and implementation is part of a three-year European Union Fifth Framework project called InterFace.
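The abstract describes Talking Heads directed by text marked up in XML. A minimal sketch of how such markup could be parsed into rendering directions follows; the element names (`happy`, `emphasis`, etc.) are illustrative assumptions for this sketch, not the official VHML tag set defined in the paper:

```python
import xml.etree.ElementTree as ET

# Hypothetical VHML-style fragment. Tag and attribute names here are
# illustrative assumptions, not the actual VHML specification.
vhml_text = """
<vhml>
  <person disposition="friendly">
    <p>
      <happy intensity="medium">Hello, welcome to Curtin University.</happy>
      <emphasis>How</emphasis> can I help you today?
    </p>
  </person>
</vhml>
"""

def extract_directions(xml_string):
    """Walk the marked-up text and collect (tag, text) pairs that a
    Talking Head renderer could map to facial gestures and prosody."""
    root = ET.fromstring(xml_string)
    directions = []
    for elem in root.iter():
        if elem.text and elem.text.strip():
            directions.append((elem.tag, elem.text.strip()))
    return directions

print(extract_directions(vhml_text))
```

A renderer would map each tag to a facial expression for the animation engine and a prosody setting for the speech synthesiser, which is the "direction" the abstract refers to.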
affective computing and intelligent interaction | 2005
He Xiao; Donald Reid; Andrew Marriott; E. K. Gulland
Curtin University’s Talking Heads (TH) combine an MPEG-4 compliant Facial Animation Engine (FAE), a Text To Emotional Speech Synthesiser (TTES), and a multi-modal Dialogue Manager (DM), that accesses a Knowledge Base (KB) and outputs Virtual Human Markup Language (VHML) text which drives the TTES and FAE. A user enters a question and an animated TH responds with a believable and affective voice and actions. However, this response to the user is normally marked up in VHML by the KB developer to produce the required facial gestures and emotional display. A real person does not react by fixed rules but on personality, beliefs, good and bad previous experiences, and training. This paper reviews personality theories and models relevant to THs, and then discusses the research at Curtin over the last five years in implementing and evaluating personality models. Finally the paper proposes an active, adaptive personality model to unify that work.
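The pipeline described above (a Dialogue Manager querying a Knowledge Base and emitting VHML that drives the speech synthesiser and animation engine) can be sketched as follows. All function names, the KB structure, and the trivial emotion rule are illustrative assumptions, not the actual Curtin implementation:

```python
def dialogue_manager(question, knowledge_base):
    """Look up an answer and wrap it in VHML-style markup.
    In the real system the KB developer authors the markup by hand;
    here a trivial rule picks the emotion tag instead, standing in
    for the adaptive personality model the paper proposes."""
    answer = knowledge_base.get(question, "I am sorry, I do not know.")
    emotion = "happy" if question in knowledge_base else "sad"
    return f"<{emotion}>{answer}</{emotion}>"

def talking_head_respond(question, knowledge_base):
    """One turn of the TH pipeline: DM -> VHML -> (TTES + FAE).
    A real TTES/FAE would synthesise emotional speech and MPEG-4
    facial animation from this markup; we just return the markup."""
    return dialogue_manager(question, knowledge_base)

kb = {"What is VHML?": "A markup language for directing Talking Heads."}
print(talking_head_respond("What is VHML?", kb))
```

The point the abstract makes is that the emotion choice should come from a personality model rather than fixed hand-authored markup; in this sketch that would mean replacing the one-line `emotion` rule with state that reflects traits and previous interactions.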
Life-like characters | 2004
Andrew Marriott; Simon Beard
With emotion described as “the organism’s interface to the world outside” (Scherer [45]), there has been great interest in the role of emotion in speech and gestures in making Human—Virtual Human interfaces more effective. Miller [33] suggests that only 7% of a message is sent through words: the remainder is sent through facial expressions (55%) and vocal intonation (38%). Therefore in both analysis of human conversations and in the synthesis of Virtual Humans, expressive emotion and gestures need to be catered for to ensure that the intent of the message is not lost. A radical paradigm change occurred in going from text entry to the mouse-pointer concepts of a Graphical User Interface. In a similar way, it is now necessary for a total user input paradigm, adding video and audio input to the existing methods, to become the predominant Computer Human Interaction (CHI) of the future. This complete interaction is referred to as a gestalt user interface: an interface that should be reactive to, and proactive of, the perceived desires of the user through emotion and gesture. The formal specification, development, implementation, and evaluation of a gestalt User Interface (gUI) language is necessary to provide a stable, consistent base for future research into multi-modal Human interfaces in general, and specifically to Embodied Character Agents. This chapter details some early research on language design and also an implementation evaluation.
Proceedings of the Third Annual Conference of AI, Simulation, and Planning in High Autonomy Systems 'Integrating Perception, Planning and Action'. | 1992
Andrew Marriott; Toto Widyanto
This paper presents a method of applying a graphical locomotion model to a behavioural animation system. The locomotion models (actors) are driven by their motives and needs, aided by their visual perception systems: they are capable of detecting corners and edges of the environment, so they can move without colliding with any obstacle. Each actor may regard other actors as friendly or frightening, and on that basis may decide to approach, avoid, grasp, or eat them. The graphical model must be capable of performing these actions in a realistic manner. The 2-D nature of the behavioural animation system is implemented in 3-D by assuming that the actors are anchored to the 2-D plane. This still allows flexible locomotion for most models.
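The approach/avoid decision described above can be sketched with a simple steering rule. The `Actor` attributes and the unit-vector update are illustrative assumptions for this sketch, not the paper's actual model:

```python
import math

class Actor:
    """Minimal 2-D actor for the approach/avoid rule in the abstract.
    attitudes maps another actor's name to "friendly" or "frightening"."""
    def __init__(self, name, x, y, attitudes):
        self.name = name
        self.x, self.y = x, y
        self.attitudes = attitudes

    def step_towards(self, other, speed=1.0):
        """Take one step: move toward friendly actors, away from
        frightening ones, along the unit vector between them."""
        dx, dy = other.x - self.x, other.y - self.y
        dist = math.hypot(dx, dy) or 1.0  # avoid division by zero
        sign = 1.0 if self.attitudes.get(other.name) == "friendly" else -1.0
        self.x += sign * speed * dx / dist
        self.y += sign * speed * dy / dist

a = Actor("a", 0.0, 0.0, {"b": "friendly"})
b = Actor("b", 10.0, 0.0, {"a": "frightening"})
a.step_towards(b)  # a approaches b
b.step_towards(a)  # b flees a
print(a.x, b.x)
```

Anchoring actors to a plane, as the paper does, means only `x` and `y` are updated even when the graphical model is rendered in 3-D.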
Journal of Research and Practice in Information Technology | 2000
Andrew Marriott; Simon Beard; H. Haddad; Roberto Pockaj; John Stallo; Quoc Huynh; B. Tschirren
Archive | 2003
Andrew Marriott
international conference on semantic computing | 2001
Andrew Marriott; Simon Beard; John Stallo; Quoc Huynh
Internet commerce and software agents: cases, technologies and opportunities | 2001
Andrew Marriott; Roberto Pockaj; Craig M. Parker
Archive | 2006
Andrew Marriott; Amel Holic; Donald Reid
Archive | 2003
Igor S. Pandzic; Michele Cannella; Franck Davoine; Robert Forchheimer; Fabio Lavagetto; Haibo Li; Andrew Marriott; Sotiris Malassiotis; Montse Pardàs; Roberto Pockaj; Gael Sannier