Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christine L. Lisetti is active.

Publication


Featured research published by Christine L. Lisetti.


International Conference on Robotics and Automation | 2002

Emotion-based control of cooperating heterogeneous mobile robots

Robin R. Murphy; Christine L. Lisetti; Russell Tardif; Liam Irish; Aaron Gage

Previous experiences show that it is possible for agents such as robots cooperating asynchronously on a sequential task to enter deadlock, where one robot does not fulfil its obligations in a timely manner due to hardware or planning failure, unanticipated delays, etc. Our approach uses a formal multilevel hierarchy of emotions where emotions both modify active behaviors at the sensory-motor level and change the set of active behaviors at the schematic level. The resulting implementation of a team of heterogeneous robots using a hybrid deliberative/reactive architecture produced the desired emergent societal behavior. Data collected at two different public venues illustrate how a dependent agent selects new behaviors (e.g., stop serving, move to intercept the refiller) to compensate for delays from a subordinate agent (e.g., blocked by the audience). The subordinate also modifies the intensity of its active behaviors in response to feedback from the dependent agent. The agents communicate asynchronously through the Knowledge Query and Manipulation Language (KQML) via wireless Ethernet.
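The abstract describes a two-level mechanism: emotions scale the intensity of currently active behaviors (sensory-motor level) and, above some threshold, change which behaviors are active at all (schematic level). The Python sketch below illustrates only that idea; the emotion variable, thresholds, and behavior names are invented for the example and are not the authors' implementation.

# Toy sketch of emotion-modulated behavior selection, loosely following the
# two-level idea in the abstract. All names and thresholds are illustrative.

class EmotionState:
    def __init__(self):
        self.frustration = 0.0  # grows while the subordinate agent is delayed

    def update(self, waiting_seconds):
        # Delay feedback from the subordinate raises frustration.
        self.frustration = min(1.0, self.frustration + 0.1 * waiting_seconds)

class DependentAgent:
    def __init__(self):
        self.emotion = EmotionState()
        self.active_behaviors = {"serve": 1.0}  # behavior -> gain

    def step(self, waiting_seconds):
        self.emotion.update(waiting_seconds)
        f = self.emotion.frustration
        # Sensory-motor level: emotion modifies the intensity of active behaviors.
        for b in self.active_behaviors:
            self.active_behaviors[b] = max(0.0, 1.0 - f)
        # Schematic level: high frustration changes the active behavior set,
        # e.g. stop serving and go intercept the refiller.
        if f > 0.7:
            self.active_behaviors = {"stop_serving": 1.0, "intercept_refiller": f}
        return self.active_behaviors

agent = DependentAgent()
for delay in (0, 2, 5, 8):  # increasing delays from the subordinate
    print(agent.step(delay))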


User Modeling and User-Adapted Interaction | 2002

Modeling Multimodal Expression of User's Affective Subjective Experience

Nadia Bianchi-Berthouze; Christine L. Lisetti

With the growing importance of information technology in our everyday life, new types of applications are appearing that require the understanding of information in a broad sense. Information that includes affective and subjective content plays a major role not only in an individual’s cognitive processes but also in an individual’s interaction with others. We identify three key points to be considered when developing systems that capture affective information: embodiment (experiencing physical reality), dynamics (mapping experience and emotional state with its label) and adaptive interaction (conveying emotive response, responding to a recognized emotional state). We present two computational systems that implement those principles: MOUE (Model Of User Emotions) is an emotion recognition system that recognizes the user’s emotion from his/her facial expressions, and from it, adaptively builds semantic definitions of emotion concepts using the user’s feedback; MIKE (Multimedia Interactive Environment for Kansei communication) is an interactive adaptive system that, along with the user, co-evolves a language for communicating over subjective impressions.


International Conference on User Modeling, Adaptation, and Personalization | 2001

Emotions and Personality in Agent Design and Modeling

Piotr J. Gmytrasiewicz; Christine L. Lisetti

Our research combines the principled paradigm of rational agent design based on decision theory with formal definitions of the emotional states and personality of an artificial intelligent agent. We view the emotional states as the agent's decision-making modes, predisposing the agent to make its choices in a specific, yet rational, way. A change of emotional state, say due to an external stimulus, invokes a transformation of the agent's decision-making behavior. We define personality as consisting of the agent's emotional states together with the specification of the transitions taking place among those states. To model the personalities and emotional states of other agents and humans, we additionally provide a definition of personality models of other agents. Our definition allows the personality models to be learned over the course of multiple interactions with users and other agents.
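The paper treats each emotional state as a rational decision-making mode and personality as the structure of transitions among those states. The Python sketch below illustrates that framing only; the states, stimuli, and risk weights are invented and do not reproduce the paper's formal definitions.

# Illustrative sketch: emotional states as decision-making modes (here, different
# risk attitudes over expected utility) and personality as transitions among them.
# States, stimuli, and numbers are invented for the example.

PERSONALITY = {                      # state -> {stimulus: next_state}
    "calm":    {"threat": "fearful", "success": "calm"},
    "fearful": {"all_clear": "calm", "threat": "fearful"},
}

RISK_WEIGHT = {"calm": 1.0, "fearful": 0.5}   # fearful mode discounts risky payoffs

def choose(state, actions):
    """Pick the action maximizing utility under the current decision-making mode."""
    w = RISK_WEIGHT[state]
    return max(actions, key=lambda a: a["safe_payoff"] + w * a["risky_payoff"])

state = "calm"
actions = [{"name": "bold", "safe_payoff": 0.0, "risky_payoff": 10.0},
           {"name": "cautious", "safe_payoff": 6.0, "risky_payoff": 0.0}]

print(choose(state, actions)["name"])   # 'bold' in the calm mode
state = PERSONALITY[state]["threat"]    # an external stimulus changes the mode
print(choose(state, actions)["name"])   # 'cautious' in the fearful mode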


Systems, Man, and Cybernetics | 2004

A social informatics approach to human-robot interaction with a service social robot

Christine L. Lisetti; Sarah M. Brown; Kaye Alvarez; Andreas H. Marpaung

The development of an autonomous social robot, Cherry, is occurring in tandem with studies gauging potential users' preferences, likes, dislikes, and perceptions of her features. Thus far, results have indicated that individuals 1) believe that service robots with emotion and personality capabilities would make them more acceptable in everyday roles in human life, 2) prefer that robots communicate via human-like facial expressions, voice, and text-based media, 3) become more positive about the idea of service and social robots after exposure to the technology, and 4) find the appearance and facial features of Cherry pleasing. The results of these studies provide the basis for future research efforts, which are discussed.


Intelligent User Interfaces | 1998

Panel on affect and emotion in the user interface

Barbara Hayes-Roth; Gene Ball; Christine L. Lisetti; Rosalind W. Picard; Andrew Stern

Intelligence. So much of our technology revolves around intelligence: technology in support of intellectual activities; the goal of engineering artificial intelligence; the need for intelligence in the user interface. And yet, so much of everyday life is really about affect and emotion: differences in performance under conditions that are supportive, threatening, or punishing; the challenges of conflict resolution and cooperation among heterogeneous groups of people; the implicit messages of body language and conversational style; the spirit-sustaining texture of our affective relationships with family and friends.


ACM Transactions on Management Information Systems | 2013

I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions

Christine L. Lisetti; Reza Amini; Ugan Yasavur; Naphtali Rishe

We discuss our approach to developing a novel modality for the computer-delivery of Brief Motivational Interventions (BMIs) for behavior change in the form of a personalized On-Demand VIrtual Counselor (ODVIC), accessed over the internet. ODVIC is a multimodal Embodied Conversational Agent (ECA) that empathically delivers an evidence-based behavior change intervention by adapting, in real-time, its verbal and nonverbal communication messages to those of the user during their interaction. We currently focus our work on excessive alcohol consumption as a target behavior, and our approach is adaptable to other target behaviors (e.g., overeating, lack of exercise, narcotic drug use, non-adherence to treatment). We based our current approach on a successful existing patient-centered brief motivational intervention for behavior change---the Drinker's Check-Up (DCU)---whose computer-delivery with a text-only interface has been found effective in reducing alcohol consumption in problem drinkers. We discuss the results of users' evaluation of the computer-based DCU intervention delivered with a text-only interface compared to the same intervention delivered with two different ECAs (a neutral one and one with some empathic abilities). Users rate the three systems in terms of acceptance, perceived enjoyment, and intention to use the system, among other dimensions. We conclude with a discussion of how our positive results encourage our long-term goals of on-demand conversations, anytime, anywhere, with virtual agents as personal health and well-being helpers.


Proceedings of the 1st ACM International Workshop on Human-Centered Multimedia | 2006

Toward multimodal fusion of affective cues

Marco Paleari; Christine L. Lisetti

During face-to-face communication, it has been suggested that as much as 70% of what people communicate when talking directly with others is through paralanguage involving multiple modalities combined together (e.g. voice tone and volume, body language). In an attempt to render human-computer interaction more similar to human-human communication and enhance its naturalness, research on sensory acquisition and interpretation of single modalities of human expression has seen ongoing progress over the last decade. This progress is making research on artificial sensor fusion of multiple modalities an increasingly important domain, both to reach better accuracy for congruent messages and to be able to detect incongruent messages across multiple modalities (incongruence being itself a message about the nature of the information being conveyed). Accurate interpretation of emotional signals - quintessentially multimodal - would hence particularly benefit from multimodal sensor fusion and interpretation algorithms. In this paper we provide a state of the art of multimodal fusion and describe one way to implement a generic framework for multimodal emotion recognition. The system is developed within the MAUI framework [31] and Scherer's Component Process Theory (CPT) [49, 50, 51, 24, 52], with the goal of being modular and adaptive. We want the designed framework to be able to accept different single- and multi-modality recognition systems and to automatically adapt the fusion algorithm to find optimal solutions. The system also aims to be adaptive to channel (and system) reliability.
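As one simplified reading of decision-level fusion with per-channel reliability, the Python sketch below combines per-modality emotion distributions, weighting each channel by an estimated reliability. The modalities, labels, and weights are invented; this is not the fusion algorithm of the MAUI-based framework itself.

# Decision-level fusion sketch: combine per-modality emotion distributions,
# weighting each channel by an estimated reliability. Purely illustrative.

def fuse(channel_probs, reliability):
    """channel_probs: {modality: {emotion: prob}}, reliability: {modality: weight}."""
    emotions = {e for probs in channel_probs.values() for e in probs}
    fused = {e: 0.0 for e in emotions}
    total_w = sum(reliability.values())
    for modality, probs in channel_probs.items():
        w = reliability[modality] / total_w
        for e in emotions:
            fused[e] += w * probs.get(e, 0.0)
    norm = sum(fused.values()) or 1.0
    return {e: p / norm for e, p in fused.items()}

face  = {"joy": 0.7, "anger": 0.2, "neutral": 0.1}
voice = {"joy": 0.4, "anger": 0.5, "neutral": 0.1}
print(fuse({"face": face, "voice": voice},
           {"face": 0.8, "voice": 0.4}))   # face channel deemed more reliable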


Journal of Visual Languages and Computing | 2006

MAUI avatars: Mirroring the user's sensed emotions via expressive multi-ethnic facial avatars

Fatma Nasoz; Christine L. Lisetti

In this paper we describe the multimodal affective user interface (MAUI) we created to capture its users' emotional physiological signals via wearable computers and visualize the categorized signals in terms of recognized emotion. MAUI aims at (1) giving feedback to the users about their emotional states via various modalities (e.g. mirroring the users' facial expressions and describing the emotional state verbally via an anthropomorphic avatar) and (2) animating the avatar's facial expressions based on the users' captured signals. We first describe a version of MAUI which we developed as an in-house tool for developing and testing affective computing research. We also discuss applications for which building intelligent user interfaces similar to MAUI can be useful, and we suggest ways of adapting the MAUI approach to fit those specific applications.
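A minimal sketch of the sense-recognize-mirror loop described above, assuming a placeholder classifier over wearable signals and a hand-written emotion-to-expression table; none of the names or mappings below come from MAUI's actual implementation.

# Sketch of the sense -> recognize -> mirror loop; the mapping table and the
# classifier stub are placeholders, not MAUI's actual implementation.

EXPRESSION = {                      # emotion label -> avatar facial parameters
    "joy":     {"smile": 1.0, "brow_raise": 0.3},
    "sadness": {"smile": 0.0, "brow_raise": 0.0, "frown": 0.8},
    "neutral": {"smile": 0.2, "brow_raise": 0.1},
}

def recognize(gsr, heart_rate, temperature):
    """Placeholder classifier standing in for the wearable-signal recognizer."""
    if heart_rate > 100 and gsr > 5.0:
        return "joy"
    if heart_rate < 65 and temperature < 35.5:
        return "sadness"
    return "neutral"

def mirror(signal_sample):
    emotion = recognize(**signal_sample)
    params = EXPRESSION[emotion]
    print(f"avatar shows {emotion}: {params}")  # stands in for animating the avatar

mirror({"gsr": 6.2, "heart_rate": 110, "temperature": 36.4})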


ACM Multimedia | 2002

Multimodal affective driver interfaces for future cars

Fatma Nasoz; Onur Ozyer; Christine L. Lisetti; Neal Finkelstein

In this paper, we uncover a new potential application for multimedia technologies: car interfaces for enhanced driver safety. We also describe the experiment we conducted in order to map certain physiological signals (galvanic skin response, heart beat, and temperature) to certain emotions (Neutral, Anger, Fear, Sadness, and Frustration). We present the results we obtained and describe how we apply them to our Multimodal Affective Driver Interface for drivers of future cars.
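To illustrate the kind of mapping the experiment targets (physiological features to an emotion label), here is a toy nearest-neighbour classifier in Python. The labeled points are fabricated for the example; the paper's own mapping was derived from its experimental data, not from this rule of thumb.

# Toy nearest-neighbour mapping from (GSR, heart rate, temperature) features to an
# emotion label. The sample values below are fabricated for illustration.

import math

LABELED = [  # (gsr_microsiemens, heart_rate_bpm, temp_celsius, emotion)
    (2.0,  70, 36.5, "Neutral"),
    (7.5, 110, 36.9, "Anger"),
    (6.8, 115, 36.2, "Fear"),
    (3.0,  62, 36.0, "Sadness"),
    (5.5,  95, 36.7, "Frustration"),
]

def classify(gsr, hr, temp):
    def dist(sample):
        g, h, t, _ = sample
        return math.sqrt((gsr - g) ** 2 + ((hr - h) / 10) ** 2 + (temp - t) ** 2)
    return min(LABELED, key=dist)[3]

print(classify(gsr=7.0, hr=108, temp=36.8))   # -> 'Anger' with these toy points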


International Conference on User Modeling, Adaptation, and Personalization | 2007

A User Model of Psycho-physiological Measure of Emotion

Olivier Villon; Christine L. Lisetti

The interpretation of physiological signals in terms of emotion requires an appropriate mapping between physiological features and emotion representations. We present a user model associating psychological and physiological representations of emotion in order to bring findings from the psychophysiology domain into user-modeling computational techniques. We discuss results from an experiment involving 40 subjects, in which we used bio-sensors to obtain physiological measures of emotion.
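One common way to realize such a mapping, sketched below under the assumption of a per-user baseline and a dimensional (valence/arousal) representation, is to normalize signals against the user's resting values and project the deviations onto valence and arousal. The coefficients are invented for illustration and are not the model proposed in the paper.

# Illustrative per-user model: normalize physiological features against the user's
# baseline, then project onto a valence/arousal representation of emotion.
# Coefficients and baselines are invented for the example.

from dataclasses import dataclass

@dataclass
class UserBaseline:
    gsr: float         # resting galvanic skin response
    heart_rate: float  # resting heart rate

def to_valence_arousal(sample, baseline):
    # Use deviations from the user's own baseline, not absolute values.
    d_gsr = sample["gsr"] - baseline.gsr
    d_hr = sample["heart_rate"] - baseline.heart_rate
    arousal = max(-1.0, min(1.0, 0.15 * d_gsr + 0.02 * d_hr))
    valence = max(-1.0, min(1.0, -0.05 * d_hr))   # crude placeholder relation
    return {"valence": valence, "arousal": arousal}

user = UserBaseline(gsr=2.0, heart_rate=68)
print(to_valence_arousal({"gsr": 6.0, "heart_rate": 98}, user))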

Collaboration


Dive into Christine L. Lisetti's collaborations.

Top Co-Authors

Ugan Yasavur
Florida International University

Fatma Nasoz
University of Central Florida

Naphtali Rishe
Florida International University

Reza Amini
Florida International University

Piotr J. Gmytrasiewicz
University of Illinois at Chicago

Andreas H. Marpaung
University of Central Florida

Eva Hudlicka
University of Massachusetts Amherst

Ian Horswill
Northwestern University

Cédric Buche
École nationale d'ingénieurs de Brest