
Publication


Featured research published by Jina Lee.


intelligent virtual agents | 2008

Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents

David R. Traum; Stacy Marsella; Jonathan Gratch; Jina Lee; Arno Hartholt

We present a model of negotiation for virtual agents that extends previous work to be more human-like and applicable to a broader range of situations, including more than two negotiators with different goals, and negotiating over multiple options. The agents can dynamically change their negotiating strategies based on the current values of several parameters and factors that can be updated in the course of the negotiation. We have implemented this model and done preliminary evaluation within a prototype training system and a three-party negotiation with two virtual humans and one human.
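
As a rough illustration of the parameter-driven strategy switching the abstract describes, the sketch below picks a negotiation stance from a few state variables updated as the negotiation unfolds; the state variables, thresholds, and strategy names are assumptions for illustration, not the paper's model.

```python
# Sketch only: parameter-driven strategy selection for a negotiating agent.
# State variables, thresholds, and strategy names are assumptions for
# illustration, not the model published in the paper.
from dataclasses import dataclass, replace

@dataclass
class NegotiationState:
    utility: float   # estimated value of the current offer (0..1)
    trust: float     # trust in the other negotiators (0..1)
    control: float   # perceived control over the outcome (0..1)

def choose_strategy(s: NegotiationState) -> str:
    """Pick a negotiation stance from the current parameter values."""
    if s.utility > 0.7 and s.trust > 0.5:
        return "accept"        # the offer on the table is good enough
    if s.trust < 0.3:
        return "avoid"         # disengage when trust collapses
    if s.control > 0.6:
        return "compete"       # push for a better deal
    return "collaborate"       # otherwise search for joint gains

def update(s: NegotiationState, offer_value: float, concession: float) -> NegotiationState:
    """Blend in the latest offer and concession after each negotiation move."""
    return replace(
        s,
        utility=0.8 * s.utility + 0.2 * offer_value,
        trust=min(1.0, s.trust + 0.1 * concession),
    )

state = NegotiationState(utility=0.4, trust=0.6, control=0.5)
state = update(state, offer_value=0.9, concession=0.5)
print(choose_strategy(state))
```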


intelligent virtual agents | 2006

Nonverbal behavior generator for embodied conversational agents

Jina Lee; Stacy Marsella

Believable nonverbal behaviors for embodied conversational agents (ECA) can create a more immersive experience for users and improve the effectiveness of communication. This paper describes a nonverbal behavior generator that analyzes the syntactic and semantic structure of the surface text as well as the affective state of the ECA and annotates the surface text with appropriate nonverbal behaviors. A number of video clips of people conversing were analyzed to extract the nonverbal behavior generation rules. The system works in real-time and is user-extensible so that users can easily modify or extend the current behavior generation rules.
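
A minimal sketch of a rule-based behavior pass in this spirit is shown below; the patterns and behavior names are invented for illustration and are not the generator's actual rule set.

```python
# Sketch only: a keyword- and affect-driven rule pass that annotates surface
# text with nonverbal behaviors. The patterns and behavior names are invented
# for illustration and are not the generator's actual rule set.
import re

RULES = [
    (re.compile(r"\b(yes|right|exactly)\b", re.I), "head_nod"),
    (re.compile(r"\b(no|not|never)\b", re.I),      "head_shake"),
    (re.compile(r"\b(i|me|my)\b", re.I),           "self_point"),
    (re.compile(r"\b(you|your)\b", re.I),          "deictic_you"),
]

def annotate(text: str, affect: str = "neutral") -> list:
    """Return (matched span, behavior) annotations for one utterance."""
    behaviors = [(m.group(0), beh) for pat, beh in RULES for m in pat.finditer(text)]
    if affect == "sad":
        behaviors.append((text, "gaze_down"))  # affective state biases gaze/posture
    return behaviors

print(annotate("No, you never told me that.", affect="sad"))
```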


intelligent virtual agents | 2007

The Rickel Gaze Model: A Window on the Mind of a Virtual Human

Jina Lee; Stacy Marsella; David R. Traum; Jonathan Gratch; Brent J. Lance

Gaze plays a large number of cognitive, communicative and affective roles in face-to-face human interaction. To build a believable virtual human, it is imperative to construct a gaze model that generates realistic gaze behaviors. However, it is not enough to merely imitate a person's eye movements. The gaze behaviors should reflect the internal states of the virtual human, and users should be able to infer them by observing the behaviors. In this paper, we present a gaze model driven by cognitive operations; the model processes the virtual human's reasoning, dialog management, and goals to generate behaviors that reflect the agent's inner thoughts. It has been implemented in our virtual human system and operates in real-time. The gaze model introduced in this paper was originally designed and developed by Jeff Rickel but has since been extended by the authors.
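
The sketch below illustrates, with invented state and target names, how an agent's cognitive state might be mapped to a gaze target in the spirit of this model; it is not the Rickel gaze model itself.

```python
# Sketch only: mapping an agent's internal (cognitive) state to a gaze target.
# The state and target names are assumptions for illustration, not the
# Rickel gaze model itself.
GAZE_BY_STATE = {
    "listening": "speaker",            # attend to whoever holds the floor
    "planning_utterance": "avert_up",  # look away while formulating speech
    "monitoring_task": "task_object",  # check progress on the joint task
    "yielding_turn": "addressee",      # return gaze to hand over the turn
}

def gaze_target(cognitive_state: str) -> str:
    """Fall back to looking at the speaker when no rule applies."""
    return GAZE_BY_STATE.get(cognitive_state, "speaker")

print(gaze_target("planning_utterance"))
```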


intelligent virtual agents | 2012

Incremental dialogue understanding and feedback for multiparty, multimodal conversation

David R. Traum; David DeVault; Jina Lee; Zhiyang Wang; Stacy Marsella

In order to provide comprehensive listening behavior, virtual humans engaged in dialogue need to incrementally listen, interpret, understand, and react to what someone is saying, in real time, as they are saying it. In this paper, we describe an implemented system for engaging in multiparty dialogue, including incremental understanding and a range of feedback. We present an FML message extension for feedback in multiparty dialogue that can be connected to a feedback realizer. We also describe how the important aspects of that message are calculated by different modules involved in partial input processing as a speaker is talking in a multiparty dialogue.
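
The snippet below is only a hypothetical illustration of a listener-feedback message carrying the kinds of fields the abstract mentions (addressee, grounding level, confidence); it does not reproduce the FML extension defined in the paper.

```python
# Hypothetical illustration only: assembling a listener-feedback message with
# the kinds of fields the abstract mentions (addressee, grounding level,
# confidence). This is not the FML extension defined in the paper.
import xml.etree.ElementTree as ET

def feedback_message(agent: str, addressee: str, grounding: str, confidence: float) -> str:
    fml = ET.Element("fml")
    ET.SubElement(fml, "feedback", {
        "agent": agent,                      # the listener producing the feedback
        "addressee": addressee,              # whose speech the feedback is about
        "type": grounding,                   # e.g. "attend", "understand", "agree"
        "confidence": f"{confidence:.2f}",   # strength of the partial interpretation
    })
    return ET.tostring(fml, encoding="unicode")

print(feedback_message("brad", "user1", "understand", 0.62))
```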


intelligent virtual agents | 2011

Towards more comprehensive listening behavior: beyond the bobble head

Zhiyang Wang; Jina Lee; Stacy Marsella

Realizing effective listening behavior in virtual humans has become a key area of research, especially as research has sought to realize more complex social scenarios involving multiple participants and bystanders. A human listener's nonverbal behavior is conditioned by a variety of factors, from the current speaker's behavior to the listener's role and desire to participate in the conversation and unfolding comprehension of the speaker. Similarly, we seek to create virtual humans able to provide feedback based on their participatory goals and their partial understanding of, and reaction to, the relevance of what the speaker is saying as the speaker speaks. Based on a survey of existing psychological literature as well as recent technological advances in recognition and partial understanding of natural language, we describe a model of how to integrate these factors into a virtual human that behaves consistently with these goals. We then discuss how the model is implemented into a virtual human architecture and present an evaluation of behaviors used in the model.
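
A minimal sketch of feedback selection from conversational role, participation goals, and incremental comprehension is given below; the factor names and thresholds are assumptions, not the published model.

```python
# Sketch only: choosing listener feedback from conversational role, desire to
# participate, and incremental comprehension. Factor names and thresholds are
# assumptions, not the published model.
def listener_feedback(role: str, wants_turn: bool,
                      comprehension: float, relevance: float) -> str:
    if role == "bystander":
        return "none"                  # bystanders keep feedback minimal
    if comprehension < 0.3:
        return "confusion"             # e.g. a frown or puzzled head tilt
    if relevance > 0.7 and wants_turn:
        return "strong_backchannel"    # nod plus a verbal "right" to bid for the turn
    return "nod"                       # default acknowledgment

print(listener_feedback("side_participant", wants_turn=True,
                        comprehension=0.8, relevance=0.9))
```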


intelligent virtual agents | 2012

Modeling speaker behavior: a comparison of two approaches

Jina Lee; Stacy Marsella

Virtual agents are autonomous software characters that support social interactions with human users. With the emergence of better graphical representation and control over the virtual agent's embodiment, communication through nonverbal behaviors has become an active research area. Researchers have taken different approaches to author the behaviors of virtual agents. In this work, we present our machine learning-based approach to model nonverbal behaviors, in which we explore several different learning techniques (HMM, CRF, LDCRF) to predict speakers' head nods and eyebrow movements. Quantitative measurements show that LDCRF yields the best learning rate for both head nods and eyebrow movements. An evaluation study was also conducted to compare the behaviors generated by the machine learning-based models described in this paper to a literature-based model.
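
Since LDCRF implementations are not widely available, the sketch below stands in with a linear-chain CRF from sklearn-crfsuite to show the sequence-labeling framing of head-nod prediction; the features and labels are toy placeholders, not the paper's corpus or pipeline.

```python
# Rough stand-in, not the authors' pipeline: head-nod prediction framed as
# sequence labeling over per-token features. LDCRF is not available in common
# Python libraries, so a linear-chain CRF from sklearn-crfsuite is used here;
# the features and labels below are toy placeholders.
import sklearn_crfsuite

def token_features(words):
    """Per-token features; the published models used richer syntactic cues."""
    return [{"word": w.lower(), "is_first": i == 0} for i, w in enumerate(words)]

X_train = [token_features("yes i completely agree with that".split())]
y_train = [["nod", "nod", "none", "nod", "none", "none"]]  # toy nod labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict([token_features("i agree".split())]))
```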


affective computing and intelligent interaction | 2009

Learning models of speaker head nods with affective information

Jina Lee; Helmut Prendinger; Alena Neviarouskaya; Stacy Marsella

During face-to-face conversation, the speaker's head is continually in motion. These movements serve a variety of important communicative functions, and may also be influenced by our emotions. The goal of this work is to build a domain-independent model of speakers' head movements and investigate the effect of using affective information during the learning process. Once the model is learned, it can later be used to generate head movements for virtual agents. In this paper, we describe our machine learning approach to predict speakers' head nods using an annotated corpus of face-to-face human interaction and emotion labels generated by an affect recognition model. We describe the feature selection process, training process, and the comparison of results of the learned models under varying conditions. The results show that using affective information can help predict head nods better than when no affective information is used.
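
The toy sketch below, on synthetic data, mirrors the with/without-affect comparison the abstract describes; the features, labels, and classifier are stand-ins, not the paper's learned models.

```python
# Toy sketch on synthetic data: comparing a head-nod classifier trained with
# and without an affect-label feature, echoing the comparison the abstract
# describes. Features, labels, and the classifier are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
lexical = rng.random((n, 3))               # stand-in lexical/prosodic features
affect = rng.integers(0, 3, size=(n, 1))   # affect label from a recognizer (0=neutral, 1=pos, 2=neg)
y = (lexical[:, 0] + 0.5 * (affect[:, 0] == 1)
     + rng.normal(0, 0.2, n) > 0.8).astype(int)  # synthetic "nod" target

for name, X in [("no affect", lexical),
                ("with affect", np.hstack([lexical, affect]))]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"{name}: mean accuracy {acc:.2f}")
```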


Autonomous Agents and Multi-Agent Systems | 2013

Multi-party, multi-role comprehensive listening behavior

Zhiyang Wang; Jina Lee; Stacy Marsella

Realizing effective listening behavior in virtual humans has become a key area of research, especially as research has sought to realize more complex social scenarios involving multiple participants and bystanders. A human listener’s nonverbal behavior is conditioned by a variety of factors, from current speaker’s behavior to the listener’s role and desire to participate in the conversation and unfolding comprehension of the speaker. Similarly, we seek to create virtual humans able to provide feedback based on their participatory goals and their unfolding understanding of, and reaction to, the relevance of what the speaker is saying as the speaker speaks. Based on a survey of existing psychological literature as well as recent technological advances in recognition and partial understanding of natural language, we describe a model of how to integrate these factors into a virtual human that behaves consistently with these goals. We then discuss how the model is implemented into a virtual human architecture and present an evaluation of behaviors used in the model.


adaptive agents and multi agents systems | 2009

Learning a model of speaker head nods using gesture corpora

Jina Lee; Stacy Marsella


intelligent virtual agents | 2011

Modeling side participants and bystanders: the importance of being a laugh track

Jina Lee; Stacy Marsella

Collaboration


Dive into Jina Lee's collaboration.

Top Co-Authors

David R. Traum
University of Southern California

Zhiyang Wang
University of Southern California

David DeVault
University of Southern California

Jonathan Gratch
University of Southern California

Arno Hartholt
University of Southern California

Helmut Prendinger
National Institute of Informatics