
Publication


Featured research published by Isabelle Hupont.


Quality of Multimedia Experience | 2015

How do new visual immersive systems influence gaming QoE? A use case of serious gaming with Oculus Rift

Isabelle Hupont; Joaquin Gracia; Luis Miguel Sanagustín; Miguel Angel Gracia

The recent market introduction of low-cost yet high-fidelity head-mounted displays with a stereoscopic 3D perspective, such as the Oculus Rift, opens the door to novel virtual reality experiences for gaming. However, QoE metrics and methodologies for this kind of platform are still unexplored. This paper presents a comparative study of how these novel displays impact gaming QoE relative to conventional 2D PC screens. In an experiment in which 22 users played a virtual forklift-driving serious game in both environments, we demonstrate that the Oculus Rift broadly increases the sense of immersion in the 3D world as well as perceived game usability. Affective factors are also deeply impacted by this platform, which sharply raises amazement, astonishment and excitement levels. A worrying aspect, nevertheless, is the high percentage of participants who report feelings of nausea after wearing the goggles.


Pattern Analysis and Applications | 2013

Facial emotional classification: from a discrete perspective to a continuous emotional space

Isabelle Hupont; Sandra Baldassarri; Eva Cerezo

User emotion detection is a very useful input for developing affective computing strategies in modern human-computer interaction. In this paper, an effective system for facial emotional classification is described. The main distinguishing feature of our work is that the system does not simply provide a classification in terms of a set of discrete emotional labels, but operates in a continuous 2D emotional space, enabling a wide range of intermediate emotional states to be obtained. As output, an expressional face is represented as a point in a 2D space characterized by evaluation and activation factors. The classification method is based on a novel combination of five classifiers and takes human assessment into consideration when evaluating the results. The system has been tested on an extensive universal database, so it is capable of analyzing any subject, male or female, of any age and ethnicity. The results are very encouraging and show that our classification strategy is consistent with the emotional classification mechanisms of the human brain.
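The discrete-to-continuous mapping this abstract describes can be illustrated with a short sketch. The Python below is not the paper's method: the prototype coordinates and the confidence-weighted averaging are illustrative assumptions in the spirit of evaluation-activation spaces.

```python
# Minimal sketch: mapping discrete emotion confidences to a single point
# in a 2D (evaluation, activation) space. The prototype coordinates are
# illustrative placeholders, not the values used in the paper.

# Hypothetical prototype location of each Ekman category in the 2D space,
# roughly following valence/arousal intuition (range: -1..1).
PROTOTYPES = {
    "happiness": (0.8, 0.5),
    "surprise": (0.4, 0.9),
    "anger": (-0.6, 0.8),
    "fear": (-0.7, 0.6),
    "sadness": (-0.7, -0.4),
    "disgust": (-0.6, 0.2),
    "neutral": (0.0, 0.0),
}

def to_2d(confidences: dict[str, float]) -> tuple[float, float]:
    """Confidence-weighted average of category prototypes -> 2D point."""
    total = sum(confidences.values()) or 1.0
    evaluation = sum(c * PROTOTYPES[e][0] for e, c in confidences.items()) / total
    activation = sum(c * PROTOTYPES[e][1] for e, c in confidences.items()) / total
    return evaluation, activation

# Example: a face classified as mostly happy with a hint of surprise.
print(to_2d({"happiness": 0.7, "surprise": 0.2, "neutral": 0.1}))
```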


Systems, Man and Cybernetics | 2010

Sensing facial emotions in a continuous 2D affective space

Isabelle Hupont; Eva Cerezo; Sandra Baldassarri

The interpretation of user facial expressions is a very useful method for emotional sensing and constitutes an indispensable part of affective human-computer interface design. Facial expressions are often classified into one of several basic emotion categories. This categorical approach is poorly suited to faces with blended emotions and cannot measure the intensity of a given emotion. This paper presents an effective system for facial emotional classification in which facial expressions are evaluated with a psychological, two-dimensional continuous affective approach. At its output, an expressional face is represented as a point in a 2D space characterized by evaluation and activation factors. The proposed system first classifies the expression into discrete categories using a novel combination of classifiers; the result is subsequently mapped into a 2D space so that intermediate emotional states can be considered. The system has been tested on an extensive universal database, and human assessment has been taken into consideration in the evaluation of the results.


Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots | 2009

Hybrid text affect sensing system for emotional language analysis

Rafael Del-Hoyo; Isabelle Hupont; Francisco J. Lacueva; David Abadía

It is argued that for a computer to interact with humans it needs the communication skills of humans. A key aspect of intelligent human-computer interaction is the affective side of communication, and language is one of the main ways humans express emotions. To analyze the emotions in language, it is necessary to study both the general tone of the conversation and its semantic content. This paper explores the influence of affective information about words on sentiment analysis and presents a hybrid statistical-semantic system for opinion detection in Spanish-language texts. Affect sensing in language content is very important for achieving realistic interaction with intelligent virtual agents; however, it remains a largely unexplored field.
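As a rough illustration of what a hybrid statistical-semantic combination might look like, the sketch below blends a hypothetical affective-lexicon valence score with the score of an unspecified statistical classifier; the lexicon entries and the blending weight alpha are invented for illustration and are not the paper's resources.

```python
# Toy sketch of a hybrid statistical-semantic polarity score.
# The affective lexicon and the blending weight are hypothetical.

AFFECTIVE_LEXICON = {  # word -> valence in [-1, 1] (illustrative entries)
    "excelente": 0.9, "bueno": 0.6, "malo": -0.6, "horrible": -0.9,
}

def lexicon_score(tokens: list[str]) -> float:
    """Mean valence of the affect-bearing words (the 'semantic' cue)."""
    hits = [AFFECTIVE_LEXICON[t] for t in tokens if t in AFFECTIVE_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def hybrid_score(tokens: list[str], statistical_score: float,
                 alpha: float = 0.5) -> float:
    """Blend a statistical classifier's polarity with the lexicon cue."""
    return alpha * statistical_score + (1 - alpha) * lexicon_score(tokens)

# 'statistical_score' would come from e.g. a trained n-gram classifier.
print(hybrid_score(["el", "servicio", "es", "excelente"], statistical_score=0.4))
```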


Proceedings of the Third COST 2102 International Training School Conference on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues | 2010

Sentic Avatar: multimodal affective conversational agent with common sense

Erik Cambria; Isabelle Hupont; Amir Hussain; Eva Cerezo; Sandra Baldassarri

The capability of perceiving and expressing emotions through different modalities is a key issue for the enhancement of human-computer interaction. In this paper we present a novel architecture for the development of intelligent multimodal affective interfaces. It is based on the integration of Sentic Computing, a new opinion mining and sentiment analysis paradigm based on AI and Semantic Web techniques, with a facial emotional classifier and Maxine, a powerful multimodal animation engine for managing virtual agents and 3D scenarios. One of the main distinguishing features of the system is that it does not simply perform emotional classification in terms of a set of discrete emotional labels, but operates in a continuous 2D emotional space, enabling the different affective extraction modules to be integrated in a simple and scalable way.


Articulated Motion and Deformable Objects | 2008

Effective Emotional Classification Combining Facial Classifiers and User Assessment

Isabelle Hupont; Sandra Baldassarri; Rafael Del Hoyo; Eva Cerezo

An effective method for the automatic classification of facial expressions into emotional categories is presented. The system classifies the user's facial expression in terms of the six Ekman universal emotions (plus the neutral one), giving a membership confidence value for each emotional category. The method is capable of analyzing any subject, male or female, of any age and ethnicity. The classification strategy is based on a combination (weighted majority voting) of the five most commonly used classifiers. Another significant difference from other works is that human assessment is taken into account in the evaluation of the results. The information obtained from the users' classification makes it possible to verify the validity of our results and to increase the performance of our method.
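The weighted-majority-voting combination mentioned here lends itself to a brief sketch. The classifier outputs and weights below are hypothetical stand-ins, not the five classifiers the paper actually combines.

```python
# Sketch of weighted majority voting over several categorical classifiers.
# Classifier outputs and weights are illustrative placeholders.

EMOTIONS = ["happiness", "surprise", "anger", "fear", "sadness",
            "disgust", "neutral"]

def weighted_vote(predictions: list[dict[str, float]],
                  weights: list[float]) -> dict[str, float]:
    """Combine per-classifier confidence vectors into membership values."""
    combined = {e: 0.0 for e in EMOTIONS}
    for pred, w in zip(predictions, weights):
        for emotion, confidence in pred.items():
            combined[emotion] += w * confidence
    total = sum(combined.values()) or 1.0
    return {e: v / total for e, v in combined.items()}  # normalized

# Two toy classifiers disagreeing; weights reflect assumed reliability.
votes = weighted_vote(
    [{"happiness": 0.8, "neutral": 0.2}, {"surprise": 0.6, "happiness": 0.4}],
    weights=[0.7, 0.3],
)
print(max(votes, key=votes.get), votes)
```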


Archive | 2008

Affective Embodied Conversational Agents for Natural Interaction

Eva Cerezo; Sandra Baldassarri; Isabelle Hupont; Francisco J. Serón

Human-computer intelligent interaction is an emerging field aimed at providing natural ways for humans to use computers as aids. It is argued that for a computer to interact with humans it needs the communication skills of humans. One of these skills is the affective aspect of communication, which is recognized to be a crucial part of human intelligence and has been argued to be more fundamental to human behaviour and success in social life than intellect (Vesterinen, 2001; Pantic, 2005). Embodied conversational agents, ECAs (Cassell et al., 2000), are graphical interfaces capable of using verbal and non-verbal modes of communication to interact with users in computer-based environments. These agents sometimes appear as just an animated talking face, perhaps displaying simple facial expressions and, when using speech synthesis, some kind of lip synchronization; at other times they have sophisticated 3D graphical representations with complex body movements and facial expressions. An important strand of emotion-related research in human-computer interaction is the simulation of emotional expressions by embodied computer agents (Creed & Beale, 2005). The basic requirement for a computer to express emotions is to have channels of communication, such as voice and image, and the ability to communicate affection over those channels. Interface designers therefore often emulate multimodal human-human communication by including emotional expressions and statements in their interfaces through the use of textual content, speech (synthetic and recorded) and synthetic facial expressions, making the agents truly “social actors” (Reeves & Nass, 1996). Several studies have shown that our ability to recognize the emotional facial expressions of embodied computer agents is very similar to our ability to identify human facial expressions (Bartneck, 2001). Regarding the agent’s voice, experiments have demonstrated that subjects can recognize the emotional expressions of an agent (Creed & Beale, 2006) whose voice varies widely in pitch, tempo and loudness and whose facial expressions match the emotion being expressed. But what about the impact of these social actors? Recent research focuses on the psychological impact of affective agents endowed with the ability to behave empathically with the user (Brave et al., 2005; Isbister, 2006; Yee et al., 2007; Prendinger & Ishizuka, 2004; Picard, 2003). The findings demonstrate that bringing about empathic agents is important in


Ambient Intelligence | 2012

Emotional facial sensing and multimodal fusion in a continuous 2D affective space

Eva Cerezo; Isabelle Hupont; Sandra Baldassarri; Sergio Ballano

This paper deals with two main research focuses of Affective Computing: facial emotion recognition and the multimodal fusion of affective information coming from different channels. The facial sensing system implements an emotional classification mechanism that combines, in a novel and robust manner, the five classifiers most commonly used in the field of affect sensing, outputting for each facial expression an associated weight for each of the six Ekman universal emotional categories plus the neutral one. The system is able to analyze any subject, male or female, of any age and ethnicity, and has been validated by means of statistical evaluation strategies such as cross-validation, classification accuracy ratios and confusion matrices. The categorical facial sensing system has subsequently been expanded to a continuous 2D affective space, which has also made it possible to address the problem of multimodal human affect recognition. A novel fusion methodology able to fuse any number of affective modules, with very different time-scales and output labels, is proposed. It relies on the 2D Whissell affective space and outputs a continuous emotional path characterizing the user's affective progress over time. A Kalman filtering technique controls this path in real time to ensure the system's temporal consistency and robustness. Moreover, the methodology adapts to temporal changes in the reliability of the different inputs. The potential of the multimodal fusion methodology is demonstrated by fusing dynamic affective information extracted from different channels (video, typed-in text and emoticons) of an instant messaging tool.
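A minimal sketch of confidence-weighted fusion of asynchronous modality estimates, in the spirit of (but not reproducing) the methodology described above: the Estimate fields, the exponential staleness decay and the half-life parameter are assumptions introduced for illustration.

```python
# Sketch of confidence-weighted fusion of asynchronous affect estimates
# from several channels into one 2D (evaluation, activation) point.
# Field names, confidences and the decay scheme are assumptions.

from dataclasses import dataclass

@dataclass
class Estimate:
    evaluation: float   # x-coordinate in the 2D affective space
    activation: float   # y-coordinate
    confidence: float   # module's self-reported reliability in [0, 1]
    age: float          # seconds since the estimate was produced

def fuse(estimates: list[Estimate], half_life: float = 2.0) -> tuple[float, float]:
    """Fuse estimates, down-weighting stale ones (different time-scales)."""
    weights = [e.confidence * 0.5 ** (e.age / half_life) for e in estimates]
    total = sum(weights) or 1.0
    ev = sum(w * e.evaluation for w, e in zip(weights, estimates)) / total
    ac = sum(w * e.activation for w, e in zip(weights, estimates)) / total
    return ev, ac

# Video is fresh and confident; the typed-text cue is older and weaker.
print(fuse([Estimate(0.6, 0.4, confidence=0.9, age=0.1),
            Estimate(-0.2, 0.1, confidence=0.5, age=3.0)]))
```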


2011 IEEE Workshop on Affective Computational Intelligence (WACI) | 2011

Scalable multimodal fusion for continuous affect sensing

Isabelle Hupont; Sergio Ballano; Sandra Baldassarri; Eva Cerezo

The success of affective interfaces lies in the fusion of emotional information coming from different modalities. This paper proposes a scalable methodology for fusing multiple affect sensing modules that allows new modules to be added subsequently without retraining the existing ones. It relies on a two-dimensional affective model and outputs a continuous emotional path characterizing the user's affective progress over time.


International Conference on Human-Computer Interaction | 2011

Recognizing emotions from video in a continuous 2D space

Sergio Ballano; Isabelle Hupont; Eva Cerezo; Sandra Baldassarri

This paper proposes an effective system for continuous facial affect recognition from video. The system operates in a continuous 2D emotional space characterized by evaluation and activation factors. For each video frame, it uses a classification method able to output the exact location (2D point coordinates) of a still facial image in that space. It also exploits a Kalman filtering technique to control the 2D point's movement across the affective space over time and to improve the method's robustness by predicting future locations during temporary facial occlusions or inaccurate tracking.
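The occlusion-handling idea (predicting the 2D point when a frame yields no measurement) can be sketched with a standard constant-velocity Kalman filter; the transition model, frame rate and noise covariances below are illustrative choices, not the paper's parameters.

```python
# Sketch of Kalman tracking of the 2D affective point with predict-only
# updates during occlusions. Noise parameters are illustrative; the state
# is (evaluation, activation) plus their velocities.

import numpy as np

DT = 1.0 / 25.0                      # assumed frame period (25 fps)
F = np.array([[1, 0, DT, 0],         # constant-velocity transition model
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],          # we only measure the 2D position
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3                 # process noise (illustrative)
R = np.eye(2) * 5e-2                 # measurement noise (illustrative)

x = np.zeros(4)                      # initial state
P = np.eye(4)                        # initial covariance

def step(measurement):
    """One filter step; pass None when the face is occluded."""
    global x, P
    x = F @ x                        # predict
    P = F @ P @ F.T + Q
    if measurement is not None:      # update only when a frame yields a point
        y = np.asarray(measurement) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                     # smoothed (evaluation, activation)

for z in [(0.5, 0.2), (0.55, 0.25), None, None, (0.6, 0.3)]:
    print(step(z))                   # None frames rely on prediction alone
```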

Collaboration


Dive into Isabelle Hupont's collaborations.

Top Co-Authors


Eva Cerezo

University of Zaragoza


Erik Cambria

Nanyang Technological University


Cristina Manresa-Yee

University of the Balearic Islands


Francisco J. Perales

University of the Balearic Islands


Javier Varona

University of the Balearic Islands
