
Publication


Featured research published by Junghyun Ahn.


The Visual Computer | 2010

From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text

Stéphane Gobron; Junghyun Ahn; Georgios Paltoglou; Mike Thelwall; Daniel Thalmann

This paper presents a novel concept: a graphical representation of human emotion extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotion of a 3D virtual human (VH). The extraction of emotion is based on data-mining statistics from large cyberspace databases. Second, we propose methods to optimize this computational pipeline so that real-time virtual reality rendering can be achieved on common PCs. Third, we use the Poisson distribution to transform database-extracted lexical and language parameters into coherent intensities of valence and arousal, the parameters of Russell's circumplex model of emotion. The last contribution is a practical color interpretation of emotion that influences the emotional aspect of rendered VHs. To test our method's efficiency, computational statistics related to classical or atypical cases of emotion are provided. To evaluate our approach, we applied our method to diverse areas such as cyberspace forums, comics, and theater dialogs.
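As an illustration of the Poisson-based transfer step, here is a minimal sketch that maps a database-extracted lexical count onto a bounded intensity; the rate parameter and the normalization are assumptions for illustration, not values from the paper.

    import math

    def poisson_pmf(k, lam):
        """Probability of observing k events under a Poisson law with mean lam."""
        return math.exp(-lam) * lam ** k / math.factorial(k)

    def lexical_to_intensity(count, lam=3.0):
        """Map a database-extracted lexical count onto a [0, 1] intensity.

        Counts near the assumed mean lam get the highest intensity, mimicking
        the transfer of raw frequencies into coherent valence/arousal values
        (lam is a hypothetical parameter, not the paper's).
        """
        peak = poisson_pmf(int(lam), lam)  # normalize by the modal probability
        return min(poisson_pmf(count, lam) / peak, 1.0)

    # Example: a word observed twice in the emotional database
    valence_intensity = lexical_to_intensity(2)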


International Conference on Computational Linguistics | 2013

Damping sentiment analysis in online communication: discussions, monologs and dialogs

Mike Thelwall; Kevan Buckley; Georgios Paltoglou; Marcin Skowron; David Garcia; Stéphane Gobron; Junghyun Ahn; Arvid Kappas; Dennis Küster; Janusz A. Hołyst

Sentiment analysis programs are now sometimes used to detect patterns of sentiment use over time in online communication and to help automated systems interact better with users. Nevertheless, it seems that no previous published study has assessed whether the position of individual texts within ongoing communication can be exploited to help detect their sentiments. This article assesses apparent sentiment anomalies in ongoing communication (texts assigned a sentiment strength significantly different from the average of previous texts) to see whether their classification can be improved. The results suggest that a damping procedure to reduce sudden large changes in sentiment can improve classification accuracy, but that the optimal procedure will depend on the type of texts processed.
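A minimal sketch of one possible damping rule, assuming a running average of previous texts and a hypothetical anomaly threshold; the paper evaluates several procedures, and the tuned parameters differ.

    def damp_sentiment(scores, threshold=2.0, factor=0.5):
        """Pull a sentiment score that jumps far from the running average of
        previous texts back toward it, reducing sudden large changes
        (threshold and factor are illustrative, not the paper's values)."""
        damped = []
        for score in scores:
            if damped:
                avg = sum(damped) / len(damped)    # average of previous texts
                if abs(score - avg) > threshold:   # apparent sentiment anomaly
                    score = avg + factor * (score - avg)
            damped.append(score)
        return damped

    # Sentiment strengths for consecutive posts in a discussion thread
    print(damp_sentiment([2, 2, 5, 2, 3]))  # -> [2, 2, 3.5, 2, 3]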


Eurographics | 2011

An interdisciplinary VR-architecture for 3D chatting with non-verbal communication

Stéphane Gobron; Junghyun Ahn; Quentin Silvestre; Daniel Thalmann; Stefan Rank; Marcin Skowron; Georgios Paltoglou; Mike Thelwall

The communication between avatar and agent has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system, and the emotional model. Non-verbal communication, such as facial expression, gaze, or head orientation, is crucial to simulate realistic behavior, but it remains a neglected aspect in the simulation of virtual societies. In response, this paper presents the modularity necessary to allow virtual human (VH) conversation with consistent facial expressions, whether between two users through their avatars, between an avatar and an agent, or even between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting. First, a global architecture that combines components from several research fields. Second, real-time analysis and management of emotions that allows interactive dialogues with non-verbal communication. Third, a model of a virtual emotional mind, called emoMind, that makes it possible to simulate individual emotional characteristics. To conclude, we briefly describe a user test whose full analysis is beyond the scope of the present paper.
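To make the modularity concrete, the sketch below chains three stand-in modules, from text analysis to an emoMind-style state update to rendering; every function name, word list, and parameter here is a hypothetical stand-in, not the system's actual API.

    def extract_cues(text):
        """Stand-in text analysis: score a hypothetical word list."""
        positive, negative = {"great", "happy"}, {"bad", "angry"}
        words = text.lower().split()
        valence = sum(w in positive for w in words) - sum(w in negative for w in words)
        return {"valence": float(valence), "arousal": float(abs(valence))}

    def update_emomind(state, cues, inertia=0.7):
        """Blend new cues into a VH's emotional state (inertia is illustrative)."""
        return {k: inertia * state.get(k, 0.0) + (1 - inertia) * v
                for k, v in cues.items()}

    def render_expression(state):
        """Placeholder for the graphics back end: report the target expression."""
        print("facial expression target:", state)

    # One chat turn flowing through the three modules
    state = update_emomind({}, extract_cues("I am so happy today"))
    render_expression(state)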


Articulated Motion and Deformable Objects | 2012

An event-based architecture to manage virtual human non-verbal communication in 3D chatting environment

Stéphane Gobron; Junghyun Ahn; David Garcia; Quentin Silvestre; Daniel Thalmann; Ronan Boulic

Non-verbal communication (NVC) makes up about two-thirds of all communication between two people or between one speaker and a group of listeners. However, this fundamental aspect of communicating is mostly omitted in 3D social forums or virtual-world-oriented games. This paper proposes an answer by presenting a multi-user 3D-chatting system enriched with motion-related NVC. This event-based architecture tries to recreate a context by extracting emotional cues from dialogs and derives potential virtual human body expressions from that event-triggered context model. We structure the paper by expounding the system architecture that enables modeling NVC in a multi-user 3D-chatting environment. We then present the transition from dialog-based emotional cues to body language, and the management of NVC events in the context of a virtual reality client-server system. Finally, we illustrate the results with graphical scenes and a statistical analysis representing the increase of events due to NVC.
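The event-triggered flow can be pictured as a queue of emotional-cue events consumed on the server side; the sketch below is hypothetical, and the cue-to-gesture table is invented for illustration.

    from collections import deque

    # Hypothetical mapping from an emotional cue to a body-language suggestion
    CUE_TO_GESTURE = {
        "joy": "open posture, raised head",
        "anger": "forward lean, tense shoulders",
        "surprise": "step back, raised arms",
    }

    events = deque()

    def on_dialog(avatar, cue):
        """Client side: queue an NVC event when a dialog line carries a cue."""
        events.append((avatar, cue))

    def dispatch():
        """Server side: consume events and derive body expressions."""
        while events:
            avatar, cue = events.popleft()
            print(avatar, "->", CUE_TO_GESTURE.get(cue, "neutral idle"))

    on_dialog("avatar_1", "joy")
    on_dialog("avatar_2", "surprise")
    dispatch()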


Computer Animation and Virtual Worlds | 2013

Asymmetric facial expressions: revealing richer emotions for embodied conversational agents

Junghyun Ahn; Stéphane Gobron; Daniel Thalmann; Ronan Boulic

In this paper, we propose a method to achieve effective facial emotional expressivity for embodied conversational agents by considering two types of asymmetry when exploiting the valence–arousal–dominance representation of emotions. Indeed, the asymmetry of facial expressions helps to convey complex emotional feelings such as conflicting and/or hidden emotions due to social conventions. To achieve such a higher degree of facial expression in a generic way, we propose a new model for mapping the valence–arousal–dominance emotion model onto a set of 12 scalar facial part actions built mostly by combining pairs of antagonist action units from the Facial Action Coding System. The proposed linear model can automatically drive a large number of autonomous virtual humans or support the interactive design of complex facial expressions over time. By design, our approach produces symmetric facial expressions, as expected for most of the emotional spectrum. However, more complex ambivalent feelings can be produced when differing emotions are applied on the left and right sides of the face. We conducted an experiment on static images produced by our approach to compare the expressive power of symmetric and asymmetric facial expressions for a set of eight basic and complex emotions. Results confirm both the pertinence of our general mapping for expressing basic emotions and the significant improvement brought by asymmetry for expressing ambivalent feelings.
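As a rough illustration of the linear model, the sketch below multiplies a valence-arousal-dominance triple by a 12 x 3 matrix to obtain the facial part action intensities, and produces an ambivalent expression by driving the two halves of the face with different emotions. The coefficients are placeholders, not the paper's calibrated mapping.

    # 12 facial part actions driven linearly by (valence, arousal, dominance).
    # Placeholder coefficients; the paper builds the actions mostly from
    # antagonist FACS action-unit pairs and calibrates the mapping.
    M = [[0.5, 0.3, -0.1] for _ in range(12)]  # 12 x 3 matrix

    def vad_to_fpa(vad):
        """Linear map from a VAD triple to 12 scalar facial part actions."""
        return [sum(m * x for m, x in zip(row, vad)) for row in M]

    def asymmetric_face(vad_left, vad_right):
        """Express an ambivalent feeling by applying different emotions
        to the left and right sides of the face."""
        return {"left": vad_to_fpa(vad_left), "right": vad_to_fpa(vad_right)}

    # Hidden emotion: polite positive valence on one side, felt negative
    # valence on the other (values are made up).
    face = asymmetric_face((0.8, 0.4, 0.2), (-0.6, 0.5, -0.3))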


Motion in Games | 2011

Long term real trajectory reuse through region goal satisfaction

Junghyun Ahn; Stéphane Gobron; Quentin Silvestre; Horesh Ben Shitrit; Mirko Raca; Julien Pettré; Daniel Thalmann; Pascal Fua; Ronan Boulic

This paper is motivated by the objective of improving the realism of real-time simulated crowds by reducing short-term collision avoidance through long-term anticipation of pedestrian trajectories. To this end, we choose to reuse outdoor pedestrian trajectories obtained by non-invasive means. This initial step is achieved by analyzing the recordings of multiple synchronized video cameras. In a second, off-line stage, we fit trajectory segments that are as long as possible within predefined paths made of a succession of region goals. The concept of region goal is exploited to enforce the principle of "sufficient satisfaction": it allows the pedestrians to relax the prescribed trajectory to the traversal of successive region goals. Even if a fitted trajectory is modified due to collision avoidance, we are still able to perform long-term trajectory anticipation and distribute the collision-avoidance shift over a long distance.
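The principle of "sufficient satisfaction" can be pictured as a point-in-disc test: a region goal counts as reached as soon as the pedestrian enters the region, so the replayed trajectory need not be followed exactly. A minimal sketch with circular regions, all values made up:

    import math

    def in_region(position, center, radius):
        """A region goal is satisfied when the pedestrian enters the disc."""
        return math.dist(position, center) <= radius

    def advance_goal(position, goals, current):
        """Step along the succession of region goals once the current
        one is sufficiently satisfied."""
        center, radius = goals[current]
        if in_region(position, center, radius):
            return min(current + 1, len(goals) - 1)
        return current

    # A path made of two circular region goals (centers and radii are made up)
    goals = [((0.0, 0.0), 1.0), ((5.0, 0.0), 1.5)]
    idx = advance_goal((0.4, 0.3), goals, 0)  # inside the first region -> 1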


Virtual Reality Continuum and its Applications in Industry | 2012

Within-crowd immersive evaluation of collision avoidance behaviors

Junghyun Ahn; Nan Wang; Daniel Thalmann; Ronan Boulic

In this paper, we first present our crowd simulation method, Trajectory Variant Shift (TVS), based on the re-use of real pedestrian trajectories. We detail how to re-use and shift these trajectories to avoid collisions while retaining the liveliness of the captured data. Second, we conducted a user study in a four-screen CAVE to compare our approach with three others while the subject stands within the crowd to perform a visual search task (waiting for a specific person). Results confirm that our approach is considered as good as the state of the art regarding subjects' spatial awareness within the crowd, and better regarding not only the perceived liveliness of the crowd but also the comfort in the CAVE.
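One way to picture the trajectory shift is as a time offset applied to a replayed trajectory until it keeps a minimum clearance from the trajectories already placed; the sketch below is a toy version with made-up parameters, not the actual TVS procedure.

    import math

    def min_separation(traj_a, traj_b):
        """Smallest frame-wise distance between two sampled trajectories."""
        return min(math.dist(p, q) for p, q in zip(traj_a, traj_b))

    def find_shift(traj, placed, clearance=0.6, max_shift=30):
        """Delay the replay of a captured trajectory (a shift in frames)
        until it stays clear of an already placed one (clearance and
        search range are illustrative)."""
        for shift in range(max_shift):
            if min_separation(traj[shift:], placed) >= clearance:
                return shift
        return max_shift

    # Two captured 2D trajectories sampled at the same rate (toy data)
    a = [(0.1 * t, 0.0) for t in range(100)]
    b = [(0.1 * t, 0.1) for t in range(100)]
    print(find_shift(a, b))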


Motion in Games | 2012

Conveying Real-Time Ambivalent Feelings through Asymmetric Facial Expressions

Junghyun Ahn; Stéphane Gobron; Daniel Thalmann; Ronan Boulic

Achieving effective facial emotional expressivity within a real-time rendering constraint requires leveraging all possible sources of inspiration, especially observations of real individuals. One of them is the frequent asymmetry of facial expressions of emotions, which makes it possible to express complex emotional feelings such as suspicion, a smirk, or an emotion hidden due to social conventions. To achieve such a higher degree of facial expression, we propose a new model for mapping emotions onto a small set of 1D Facial Part Actions (FPAs) that act on antagonist muscle groups or on individual head-orientation degrees of freedom. The proposed linear model can automatically drive a large number of autonomous virtual humans or support the interactive design of complex facial expressions over time.


Applied Artificial Intelligence | 2017

Evaluating the Sensitivity to Virtual Characters Facial Asymmetry in Emotion Synthesis

Nan Wang; Junghyun Ahn; Ronan Boulic

The use of expressive Virtual Characters is an effective complementary means of communication for social networks offering a multi-user 3D-chatting environment. In such contexts, the facial expression channel offers a rich medium for translating the ongoing emotions conveyed by text-based exchanges. However, until recently, only purely symmetric facial expressions have been considered for that purpose. In this article we examine human sensitivity to facial asymmetry in the expression of both basic and complex emotions. The rationale for introducing asymmetry into the display of facial expressions stems from two well-established observations in cognitive neuroscience: first, that the expression of basic emotions generally displays a small asymmetry; second, that more complex emotions, such as ambivalent feelings, may be reflected in the partial display of different, potentially opposite, emotions on each side of the face. A frequent occurrence of the second case results from the conflict between the truly felt emotion and the one that should be displayed due to social conventions. Our main hypothesis is that a much larger expressive and emotional space can be automatically synthesized only by means of facial asymmetry when modeling emotions with a general Valence-Arousal-Dominance dimensional approach. We also want to explore general human sensitivity to the introduction of a small degree of asymmetry into the expression of basic emotions. We conducted an experiment presenting 64 pairs of static facial expressions, one symmetric and one asymmetric, illustrating eight emotions (three basic and five complex ones), alternately for a male and a female character. Each emotion was presented four times, by swapping the symmetric and asymmetric positions and by mirroring the asymmetrical expression. Participants were asked to grade, on a continuous scale, the correctness of each facial expression with respect to a short definition. Results confirm the potential of introducing facial asymmetry for a subset of the complex emotions. Guidelines are proposed for designers of embodied conversational agents and emotionally reflective avatars.
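The 64-pair design can be enumerated mechanically: 8 emotions x 2 characters x 2 position swaps x 2 mirrorings. A hypothetical reconstruction (the emotion labels are placeholders; the paper used three basic and five complex emotions):

    from itertools import product

    emotions = ["anger", "joy", "sadness",             # basic (placeholder labels)
                "suspicion", "ambivalence", "pride",
                "shame", "hidden anger"]               # complex (placeholder labels)
    characters = ["male", "female"]
    positions = ["symmetric left", "symmetric right"]  # side holding the symmetric image
    mirrors = ["original", "mirrored"]                 # asymmetric expression mirrored or not

    trials = list(product(emotions, characters, positions, mirrors))
    assert len(trials) == 64  # 8 x 2 x 2 x 2 presentations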


The Florida AI Research Society | 2011

No Peanut! Affective Cues for the Virtual Bartender

Marcin Skowron; Georgios Paltoglou; Junghyun Ahn; Stéphane Gobron

Collaboration


Dive into Junghyun Ahn's collaboration.

Top Co-Authors

Daniel Thalmann
École Polytechnique Fédérale de Lausanne

Stéphane Gobron
École Polytechnique Fédérale de Lausanne

Ronan Boulic
École Polytechnique Fédérale de Lausanne

Quentin Silvestre
École Polytechnique Fédérale de Lausanne

Marcin Skowron
Austrian Research Institute for Artificial Intelligence

Georgios Paltoglou
University of Wolverhampton

Mike Thelwall
University of Wolverhampton

Arvid Kappas
Jacobs University Bremen

Nan Wang
École Polytechnique Fédérale de Lausanne