
Publications


Featured research published by Sin-Hwa Kang.


Computer Animation and Virtual Worlds | 2010

Virtual humans elicit socially anxious interactants' verbal self-disclosure

Sin-Hwa Kang; Jonathan Gratch

Realistic character animation requires elaborate rigging built on top of high-quality 3D models. Sophisticated anatomically based rigs are often the choice of visual effects studios where life-like animation of CG characters is the primary objective. However, rigging a character with a muscular-skeletal system is a very involved and time-consuming process, even for professionals. Although there have been recent research efforts to automate all or some parts of the rigging process, the complexity of anatomically based rigging nonetheless opens up new research challenges. We propose a new method to automate anatomically based rigging that transfers an existing rig from one character to another. The method is based on data interpolation in the surface and volume domains, through which various rigging elements can be transferred between different models. As it only requires a small number of corresponding input feature points, users can produce highly detailed rigs for a variety of desired characters with ease.
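The interpolation step described in the abstract can be illustrated with a small sketch. This is an illustrative assumption, not the paper's actual implementation: it uses a Gaussian radial basis function to define a smooth spatial warp from a few corresponding feature points on a source and target character, which could then carry rig elements (e.g., joint positions) from one model to the other. The function name `rbf_warp` and the kernel choice are hypothetical.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, query, eps=1.0):
    """Fit a Gaussian-RBF deformation mapping src_pts -> dst_pts,
    then apply it to query points (e.g., joint positions of a rig)."""
    src_pts = np.asarray(src_pts, dtype=float)
    dst_pts = np.asarray(dst_pts, dtype=float)
    query = np.asarray(query, dtype=float)
    # Pairwise distances between the corresponding feature points
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    phi = np.exp(-(eps * d) ** 2)  # Gaussian kernel matrix
    # Solve for weights that exactly reproduce the feature displacements
    w = np.linalg.solve(phi, dst_pts - src_pts)
    # Evaluate the learned displacement field at the query points
    dq = np.linalg.norm(query[:, None, :] - src_pts[None, :, :], axis=-1)
    return query + np.exp(-(eps * dq) ** 2) @ w

src = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
dst = [[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]]
# A feature point maps exactly onto its correspondence:
print(rbf_warp(src, dst, [[1, 0, 0]]))  # approximately [[2, 0, 0]]
```

Because an RBF interpolant passes exactly through its data sites, each feature point lands on its counterpart, while points in between are deformed smoothly.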


intelligent virtual agents | 2011

It's in their eyes: a study on female and male virtual humans' gaze

Philipp Kulms; Nicole C. Krämer; Jonathan Gratch; Sin-Hwa Kang

Social psychological research demonstrates that the same behavior may lead to different evaluations depending on whether it is shown by a man or a woman. With a view to design decisions for virtual humans, it is relevant to test whether this pattern also applies to gendered virtual humans. In a 2×2 between-subjects experiment we manipulated the Rapport Agent's gaze behavior and its gender in order to test whether female agents in particular are evaluated more negatively when they do not show gender-specific immediacy behavior and avoid gazing at the interaction partner. Instead of this interaction effect we found two main effects: gaze avoidance was evaluated negatively, and female agents were rated more positively than male agents.


Computers in Human Behavior | 2013

The impact of avatar realism and anonymity on effective communication via mobile devices

Sin-Hwa Kang; James H. Watt

This research investigates the impact of using anonymous avatars on social communication quality during small-screen mobile audio/visual communication. Elements of behavioral and visual realism of avatars are defined, as is an elaborated three-component measure of communication quality called Social Copresence. In an experiment, 196 participants took part in a social interaction using a simulated mobile device with varied levels of avatar visual and behavioral realism. Higher levels of avatar Kinetic Conformity and Fidelity produced increased perceived Social Richness of Medium, while higher avatar Anthropomorphism produced higher levels of Psychological Copresence and Interactant Satisfaction with Communication. Increased levels of avatar Anonymity decreased Social Copresence, but the decreases were smaller when avatars possessed higher levels of visual and behavioral realism.


intelligent virtual agents | 2015

A Platform for Building Mobile Virtual Humans

Andrew W. Feng; Anton Leuski; Stacy Marsella; Dan Casas; Sin-Hwa Kang; Ari Shapiro

We describe an authoring framework for developing virtual humans in mobile applications. The framework abstracts many elements needed for virtual human generation and interaction, such as rapid development of nonverbal behavior, lip syncing to speech, dialogue management, access to speech transcription services, and access to mobile sensors such as the microphone, gyroscope, and location components.


Computer Animation and Virtual Worlds | 2017

Motion recognition of self and others on realistic 3D avatars

Sahil Narang; Andrew Best; Andrew W. Feng; Sin-Hwa Kang; Dinesh Manocha; Ari Shapiro

Current 3D capture and modeling technology can rapidly generate highly photo-realistic 3D avatars of human subjects. However, while the avatars look like their human counterparts, their movements often fail to mimic those counterparts' actual motion, owing to existing challenges in accurate motion capture and retargeting. A better understanding of the factors that influence the perception of biological motion would be valuable for creating virtual avatars that capture the essence of their human subjects. To investigate these issues, we captured 22 subjects walking in an open space. We then performed a study where participants were asked to identify their own motion in varying visual representations and scenarios. Similarly, participants were asked to identify the motion of familiar individuals. Unlike prior studies that used captured footage with simple "point-light" displays, we rendered the motion on photo-realistic 3D virtual avatars of the subjects. We found that self-recognition was significantly higher for virtual avatars than for point-light representations. Users were more confident of their responses when identifying their motion presented on their virtual avatar. Recognition rates varied considerably between motion types for recognition of others, but not for self-recognition. Overall, our results are consistent with previous studies that used recorded footage and offer key insights into the perception of motion rendered on virtual avatars.


intelligent virtual agents | 2015

Smart Mobile Virtual Humans: “Chat with Me!”

Sin-Hwa Kang; Andrew W. Feng; Anton Leuski; Dan Casas; Ari Shapiro

In this study, we explore whether people will talk with 3D animated virtual humans on a smartphone for a longer amount of time, as a sign of feeling rapport [5], compared to non-animated or audio-only characters in everyday life. Previous studies [2, 7, 10] found that users prefer animated characters in emotionally engaged interactions when the characters are displayed on mobile devices, but those findings came from a lab setting. We aimed to reach a broad range of users outside the lab, in natural settings, to investigate the potential of our virtual human on smartphones to facilitate casual yet emotionally engaging conversation. Although the literature has not reached a consensus regarding the ideal gaze patterns for a virtual human, one thing researchers agree on is that inappropriate gaze can negatively impact conversations, at times even more than receiving no visual feedback at all [1, 4]. Everyday life may bring awkwardness or discomfort in reaction to continuous mutual gaze; on the other hand, gaze aversion can make a speaker think their partner is not listening. Our work further aims to address the question of what constitutes appropriate eye gaze in emotionally engaged interactions.


Computer Animation and Virtual Worlds | 2017

Social influence of humor in virtual human counselor's self-disclosure

Sin-Hwa Kang; David M. Krum; Peter Khooshabeh; Thai Phan; Chien-Yen Chang; Ori Amir; Rebecca Lin

We explored the social influence of humor in a virtual human counselor's self-disclosure while also varying the ethnicity of the virtual counselor. In a 2 × 3 experiment (humor and ethnicity of the virtual human counselor), participants experienced counseling interview interactions via Skype on a smartphone. We measured user responses to and perceptions of the virtual human counselor. The results demonstrate that humor positively affects user responses to and perceptions of a virtual counselor. The results further suggest that matching styles of humor with a virtual counselor's ethnicity influences user responses and perceptions. These results offer insight into the effective design and development of realistic and believable virtual human counselors. Furthermore, they illuminate the potential use of humor to enhance self-disclosure in human–agent interactions.


motion in games | 2016

Study comparing video-based characters and 3D-based characters on mobile devices for chat

Sin-Hwa Kang; Andrew W. Feng; Mike Seymour; Ari Shapiro

This study explores presentation techniques for a chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with an animated virtual character as opposed to a real human video character capable of displaying realistic backchannel behaviors. Additionally, an audio-only interface is compared with the two types of characters. The findings of our study suggest that people are more socially attracted to a 3D animated character that does not display backchannel behaviors than to a real human video character that presents realistic backchannel behaviors. People engage more in conversation, by talking for a longer amount of time, when they interact with a 3D animated virtual human that exhibits realistic backchannel behaviors, compared to communicating with a real human video character that does not display backchannel behaviors.


intelligent virtual agents | 2011

Modeling nonverbal behavior of a virtual counselor during intimate self-disclosure

Sin-Hwa Kang; Candy L. Sidner; Jonathan Gratch; Ron Artstein; Lixing Huang; Louis-Philippe Morency

Humans often share personal information with others in order to create social connections. Sharing personal information is especially important in counseling interactions [2]. Research studying the relationship between intimate self-disclosure and human behavior critically informs the development of virtual agents that create rapport with human interaction partners. One significant example of this application is using virtual agents as counselors in psychotherapeutic situations. The capability of expressing different intimacy levels is key for a successful virtual counselor to reciprocally induce disclosure in clients. Nonverbal behavior is considered critical for indicating intimacy [1] and is important when designing a social virtual agent such as a counselor. One key research question is how to properly express intimate self-disclosure. In this study, our main goal is to find what types of interviewees' nonverbal behavior are associated with different intimacy levels of verbal self-disclosure. Thus, we investigated humans' nonverbal behavior associated with self-disclosure in an interview setting (with intimate topics).


international conference on distributed, ambient, and pervasive interactions | 2017

Social Impact of Enhanced Gaze Presentation Using Head Mounted Projection

David M. Krum; Sin-Hwa Kang; Thai Phan; Lauren Cairco Dukes; Mark T. Bolas

Projected displays can present life-sized imagery of a virtual human character that can be seen by multiple observers. However, typical projected displays can only render that virtual human from a single viewpoint, regardless of whether head tracking is employed. This results in the virtual human being rendered from an incorrect perspective for most individuals in a group of observers. This can produce perceptual miscues, such as the "Mona Lisa" effect, causing the virtual human to appear as if it is simultaneously gazing and pointing at all observers in the room regardless of their location. This may be detrimental to training scenarios in which all trainees must accurately assess where the virtual human is looking or pointing a weapon. In this paper, we discuss our investigations into the presentation of eye gaze using REFLCT, a previously introduced head-mounted projective display. REFLCT uses head-tracked, head-mounted projectors and retroreflective screens to present personalized, perspective-correct imagery to multiple users without the occlusion of a traditional head-mounted display. We examined how head-mounted projection for enhanced presentation of eye gaze might facilitate or otherwise affect social interactions during a multi-person guessing game of "Twenty Questions."

Collaboration


Dive into Sin-Hwa Kang's collaborations.

Top Co-Authors

Jonathan Gratch (University of Southern California)
Andrew W. Feng (University of Southern California)
Ari Shapiro (University of Southern California)
James H. Watt (Rensselaer Polytechnic Institute)
David M. Krum (University of Southern California)
Mark T. Bolas (University of Southern California)
Ning Wang (University of Southern California)
Anton Leuski (University of Southern California)
Candy L. Sidner (Worcester Polytechnic Institute)