Héctor Perez Martínez
University of Malta
Publication
Featured research published by Héctor Perez Martínez.
IEEE Computational Intelligence Magazine | 2013
Héctor Perez Martínez; Yoshua Bengio; Georgios N. Yannakakis
More than 15 years after the early studies in Affective Computing (AC) [1], the problem of detecting and modeling emotions in the context of human-computer interaction (HCI) remains complex and largely unexplored. The detection and modeling of emotion is, primarily, the study and use of artificial intelligence (AI) techniques for the construction of computational models of emotion. The key challenges one faces when attempting to model emotion [2] are inherent in the vague definitions and fuzzy boundaries of emotion, and in the modeling methodology followed. In this context, open research questions are still present in all key components of the modeling process. These include, first, the appropriateness of the modeling tool employed to map emotional manifestations and responses to annotated affective states; second, the processing of signals that express these manifestations (i.e., model input); and third, the way affective annotation (i.e., model output) is handled. This paper touches upon all three key components of an affective model (i.e., input, model, output) and introduces the use of deep learning (DL) [3], [4], [5] methodologies for affective modeling from multiple physiological signals.
User Modeling and User-adapted Interaction | 2010
Georgios N. Yannakakis; Héctor Perez Martínez; Arnav Jhala
Information about interactive virtual environments, such as games, is perceived by users through a virtual camera. While most interactive applications let users control the camera, in complex navigation tasks within 3D environments users often get frustrated with the interaction. In this paper, we propose the inclusion of camera control as a vital component of affective adaptive interaction in games. We investigate the impact of camera viewpoints on the psychophysiology of players through preference surveys collected from a test game. Data are collected from players of a 3D prey/predator game in which player experience is directly linked to camera settings. Computational models of the discrete affective states of fun, challenge, boredom, frustration, excitement, anxiety and relaxation are built on biosignal (heart rate, blood volume pulse and skin conductance) features to predict the pairwise self-reported emotional preferences of the players. For this purpose, automatic feature selection and neuro-evolutionary preference learning are combined, providing highly accurate affective models. The performance of the artificial neural network models on unseen data reveals accuracies above 80% for the majority of the discrete affective states examined. The generality of the obtained models is tested in different test-bed game environments, and the use of the generated models for creating adaptive affect-driven camera control in games is discussed.
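The neuro-evolutionary preference learning described above can be sketched as a simple (1+1) evolution strategy that evolves the weights of a small neural network so its output agrees with pairwise preference reports. The network topology, feature values, and preference data below are illustrative toy assumptions, not the paper's actual setup:

```python
import math
import random

random.seed(0)

def mlp(w, x):
    # tiny fixed-topology net: 2 inputs, 2 tanh hidden units, 1 linear output
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def fitness(w, prefs):
    # fraction of pairs (preferred, other) that the net ranks correctly
    return sum(mlp(w, a) > mlp(w, b) for a, b in prefs) / len(prefs)

# illustrative physiological feature pairs (e.g. normalised mean HR, mean SC);
# in each pair the first session was reported as preferred (toy data)
prefs = [((0.9, 0.2), (0.1, 0.8)),
         ((0.8, 0.3), (0.2, 0.9)),
         ((0.7, 0.1), (0.3, 0.7))]

best = [random.uniform(-1, 1) for _ in range(9)]
for _ in range(500):  # (1+1) evolution strategy: mutate, keep if no worse
    cand = [w + random.gauss(0, 0.3) for w in best]
    if fitness(cand, prefs) >= fitness(best, prefs):
        best = cand

print(fitness(best, prefs))
```

Because the fitness counts correctly ordered pairs rather than a regression error, no gradient is needed, which is what makes the neuro-evolutionary approach a natural fit for preference data.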
IEEE Transactions on Affective Computing | 2014
Héctor Perez Martínez; Georgios N. Yannakakis; John Hallam
How should affect be appropriately annotated and how should machine learning best be employed to map manifestations of affect to affect annotations? What is the use of ratings of affect for the study of affective computing and how should we treat them? These are the key questions this paper attempts to address by investigating the impact of dissimilar representations of annotated affect on the efficacy of affect modelling. In particular, we compare several different binary-class and pairwise preference representations for automatically learning from ratings of affect. The representations are compared and tested on three datasets: one synthetic dataset (testing “in vitro”) and two affective datasets (testing “in vivo”). The synthetic dataset couples a number of attributes with generated rating values. The two affective datasets contain physiological and contextual user attributes, and speech attributes, respectively; these attributes are coupled with ratings of various affective and cognitive states. The main results of the paper suggest that ratings (when used) should be naturally transformed to ordinal (ranked) representations for obtaining more reliable and generalisable models of affect. The findings of this paper have a direct impact on affect annotation and modelling research but, most importantly, challenge the traditional state-of-practice in affective computing and psychometrics at large.
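The ordinal transformation the paper advocates can be sketched as turning per-session ratings into pairwise preference constraints. The session labels, rating scale, and the `eps` indifference margin below are illustrative assumptions:

```python
def ratings_to_pairs(sessions, eps=0.0):
    """Convert per-session affect ratings into pairwise preferences.

    Returns (i, j) pairs meaning session i was rated strictly higher than
    session j (by more than eps, a hypothetical indifference margin).
    Ties generate no constraint, which is one reason the ordinal view
    discards less-reliable information than the raw rating values.
    """
    pairs = []
    for i, (si, ri) in enumerate(sessions):
        for j, (sj, rj) in enumerate(sessions):
            if i != j and ri - rj > eps:
                pairs.append((si, sj))
    return pairs

# toy ratings of, e.g., reported frustration on a 1-5 scale (illustrative)
ratings = [("A", 4), ("B", 2), ("C", 4), ("D", 5)]
pairs = ratings_to_pairs(ratings)
print(pairs)
```

A preference learner is then trained on these ordered pairs rather than on the absolute rating values, sidestepping inter-subject differences in how the rating scale is used.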
International Conference on Multimodal Interfaces | 2011
Héctor Perez Martínez; Georgios N. Yannakakis
Temporal data from multimodal interaction such as speech and bio-signals cannot be easily analysed without a preprocessing phase through which some key characteristics of the signals are extracted. Typically, standard statistical signal features such as average values are calculated prior to the analysis and, subsequently, are presented either to a multimodal fusion mechanism or a computational model of the interaction. This paper proposes a feature extraction methodology which is based on frequent sequence mining within and across multiple modalities of user input. The proposed method is applied for the fusion of physiological signals and gameplay information in a game survey dataset. The obtained sequences are analysed and used as predictors of user affect resulting in computational models of equal or higher accuracy compared to the models built on standard statistical features.
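A heavily simplified sketch of the sequence-mining idea: events from multiple modalities are fused into one ordered stream, and subsequences that recur above a support threshold become candidate features. This toy version counts only contiguous n-grams, whereas a full frequent-sequence miner also handles gaps; the event names are illustrative:

```python
from collections import Counter

def frequent_subsequences(stream, length=2, min_support=2):
    """Count contiguous length-n subsequences of a fused multimodal
    event stream and keep those meeting a minimum support threshold.
    A simplified stand-in for frequent sequence mining."""
    counts = Counter(tuple(stream[i:i + length])
                     for i in range(len(stream) - length + 1))
    return {seq: n for seq, n in counts.items() if n >= min_support}

# fused stream: gameplay events (kill, pickup) interleaved with discretised
# physiological events (hr_up = heart-rate increase) -- illustrative data
stream = ["kill", "hr_up", "pickup", "kill", "hr_up", "kill", "hr_up", "pickup"]
freq = frequent_subsequences(stream)
print(freq)
```

The support count of each frequent subsequence (e.g. how often a heart-rate rise follows a kill) can then serve as a predictor of reported affect.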
Affective Computing and Intelligent Interaction | 2011
Héctor Perez Martínez; Maurizio Garbarino; Georgios N. Yannakakis
This paper examines the generality of features extracted from heart rate (HR) and skin conductance (SC) signals as predictors of self-reported player affect expressed as pairwise preferences. Artificial neural networks are trained to accurately map physiological features to expressed affect in two dissimilar and independent game surveys. The performance of the obtained affective models, which are trained on one game, is tested on the unseen physiological and self-reported data of the other game. Results in this early study suggest that there exist features of HR and SC, such as average HR and one- and two-step SC variation, that are able to predict affective states across games of different genres and dissimilar game mechanics.
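The features named in the abstract can be sketched as follows; the exact definitions below (mean absolute one- and two-step differences) and the sample values are illustrative assumptions, not necessarily the paper's precise formulations:

```python
from statistics import mean

def hr_sc_features(hr, sc):
    """Compute cross-game candidate features from raw sample lists:
    average heart rate, and mean absolute one- and two-step
    skin-conductance variation (illustrative definitions)."""
    return {
        "mean_hr": mean(hr),
        "sc_var_1": mean(abs(sc[i + 1] - sc[i]) for i in range(len(sc) - 1)),
        "sc_var_2": mean(abs(sc[i + 2] - sc[i]) for i in range(len(sc) - 2)),
    }

# toy sample windows: HR in beats/min, SC in microsiemens (illustrative)
feats = hr_sc_features(hr=[72, 75, 80, 78], sc=[0.30, 0.32, 0.31, 0.35])
print(feats)
```

Because such features describe the overall level and short-term variability of the signals rather than game-specific events, they are plausible candidates for transferring across games, which is the paper's central question.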
Affective Computing and Intelligent Interaction | 2009
Héctor Perez Martínez; Arnav Jhala; Georgios N. Yannakakis
Information about interactive virtual environments, such as games, is perceived by users through a virtual camera. While most interactive applications let users control the camera, in complex navigation tasks within 3D environments users often get frustrated with the interaction. In this paper, we motivate the inclusion of camera control as a vital component of affective adaptive interaction in games and investigate the impact of camera viewpoints on the psychophysiology of players through an evaluation game survey experiment. The statistical analysis presented demonstrates that emotional responses and physiological indexes are affected by camera settings.
Frontiers in ICT | 2015
Georgios N. Yannakakis; Héctor Perez Martínez
Are ratings of any use in human-computer interaction and user studies at large? If ratings are of limited use, is there a better alternative for quantitative subjective assessment? Beyond the intrinsic shortcomings of human reporting, there are a number of supplementary limitations and fundamental methodological flaws associated with rating-based questionnaires --- i.e. questionnaires that ask participants to rate their level of agreement with a given statement, such as a Likert item. While the effect of these pitfalls has been largely downplayed, recent findings from diverse areas of study question the reliability of using ratings. Rank-based questionnaires --- i.e. questionnaires that ask participants to rank two or more options --- appear as the evident alternative that not only eliminates the core limitations of ratings but also simplifies the use of sound methodologies that yield more reliable models of the underlying reported construct: user emotion, preference, or opinion. This paper collects recent findings from various disciplines interlinked with psychometrics and offers a quick guide to the use, processing and analysis of rank-based questionnaires, given the unique advantages they offer. The paper challenges the traditional state-of-practice in human-computer interaction and psychometrics, directly contributing towards a paradigm shift in subjective reporting.
International Conference on Multimodal Interfaces | 2014
Héctor Perez Martínez; Georgios N. Yannakakis
Multimodal datasets often feature a combination of continuous signals and a series of discrete events. For instance, when studying human behaviour it is common to annotate actions performed by the participant alongside several other modalities, such as video recordings of the face or physiological signals. These events are nominal, infrequent and not sampled at a continuous rate, while signals are numeric and often sampled at short fixed intervals. This fundamentally different nature complicates the analysis of the relation among these modalities, which is therefore often studied only after each modality has been summarised or reduced. This paper investigates a novel approach that models the relation between such modality types while bypassing the need to summarise each modality independently. For that purpose, we introduce a deep learning model based on convolutional neural networks, adapted to process multiple modalities at different time resolutions, which we name deep multimodal fusion. Furthermore, we introduce and compare three alternative methods (convolution, training and pooling fusion) to integrate sequences of events with continuous signals within this model. We evaluate deep multimodal fusion on a game user dataset in which player physiological signals are recorded in parallel with game events. Results suggest that the proposed architecture can appropriately capture multimodal information, as it yields higher prediction accuracies than single-modality models. In addition, pooling fusion, based on a novel filter-pooling method, appears to provide the most effective fusion approach for the investigated types of data.
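The intuition behind pooling fusion can be sketched outside a neural network: average-pool the dense signal per window, but max-pool the sparse event track over the same window, so a single event anywhere in the window survives downsampling. This is a toy stand-in for the paper's filter-pooling inside a convolutional net, with illustrative data:

```python
def pooling_fusion(signal, events, window):
    """Fuse a continuous signal with an aligned sparse event track.
    Per non-overlapping window: mean of the signal samples, max of the
    binary event track (so rare events are not averaged away).
    Simplified sketch, not the paper's filter-pooling layer."""
    fused = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        evt = events[start:start + window]
        fused.append((sum(chunk) / window, max(evt)))
    return fused

# 8 skin-conductance samples and an aligned binary event track
# (1 = a game event, e.g. "player died", fired at that sample) -- toy data
sc = [0.30, 0.31, 0.35, 0.34, 0.33, 0.32, 0.32, 0.40]
ev = [0, 0, 1, 0, 0, 0, 0, 1]
fused = pooling_fusion(sc, ev, window=4)
print(fused)
```

Mean-pooling the event track instead would dilute an isolated event to a small fraction, which illustrates why a max-style pooling suits nominal, infrequent events.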
Computational Intelligence and Games | 2010
Héctor Perez Martínez; Kenneth Hullett; Georgios N. Yannakakis
In this paper we propose a methodology for improving the accuracy of models that predict self-reported player pairwise preferences. Our approach extends neuro-evolutionary preference learning by embedding a player modeling module for the prediction of player preferences. Player types are identified using self-organization and feed the preference learner. Our experiments on a dataset derived from a game survey of subjects playing a 3D prey/predator game demonstrate that the proposed player-model-driven preference learning approach significantly improves the performance of preference learning and shows promise for the construction of more accurate cognitive and affective models.
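The self-organization step can be sketched as competitive learning over player feature vectors, where each prototype unit converges to a "player type". With the neighbourhood shrunk to the winning unit only, this degenerates to online k-means, a deliberate simplification of a full self-organizing map; the features and data below are illustrative:

```python
def som_train(data, epochs=20, lr=0.5):
    """Competitive learning over player feature vectors: two prototype
    units compete for each player and the winner moves toward that
    player. Winner-only neighbourhood, i.e. online k-means (a
    simplification of the paper's self-organization stage)."""
    units = [list(data[0]), list(data[-1])]  # deterministic init (illustrative)
    for _ in range(epochs):
        for x in data:
            w = player_type(units, x)
            for d in range(len(x)):
                units[w][d] += lr * (x[d] - units[w][d])
        lr *= 0.9  # decay the learning rate so prototypes settle
    return units

def player_type(units, x):
    # index of the nearest prototype = the player's type
    return min(range(len(units)),
               key=lambda u: sum((units[u][d] - x[d]) ** 2 for d in range(len(x))))

# toy gameplay features per player: (kills per minute, exploration ratio)
players = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
units = som_train(players)
print(units)
```

Each player's type index can then be appended to the input of the preference learner, which is the embedding step the paper proposes.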
Proceedings of the 3rd international workshop on Affective interaction in natural environments | 2010
Héctor Perez Martínez; Georgios N. Yannakakis
Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method is capable of picking subsets of features that generate more accurate affective models.
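A minimal sketch of genetic search over feature subsets: individuals are binary inclusion masks, and the fitness function scores a mask (in the paper, by the accuracy of the affective model trained on the selected features; here a toy surrogate stands in). Selection scheme, operators, and feature names below are illustrative assumptions:

```python
import random

random.seed(2)

def ga_feature_select(features, evaluate, pop_size=10, gens=30, p_mut=0.2):
    """Global genetic search over binary feature-inclusion masks:
    truncation selection, uniform crossover, bit-flip mutation.
    'evaluate' is treated as a black-box fitness function."""
    n = len(features)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [ai if random.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]  # uniform crossover
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])  # bit-flip mutation
        pop = parents + children
    best = max(pop, key=evaluate)
    return [f for f, keep in zip(features, best) if keep]

# toy surrogate for model accuracy: two features are informative, the rest
# only add noise (a stand-in for training an affective model per mask)
features = ["mean_hr", "sc_var", "noise_1", "noise_2", "noise_3"]
def evaluate(mask):
    return 0.4 * mask[0] + 0.4 * mask[1] - 0.05 * sum(mask[2:])

selected = ga_feature_select(features, evaluate)
print(selected)
```

Unlike sequential forward selection, which commits to one feature at a time, the population explores many subsets in parallel, which is what makes this a global search.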