Publication


Featured research published by Mojtaba Khomami Abadi.


IEEE Transactions on Affective Computing | 2018

ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors

Ramanathan Subramanian; Julia Wache; Mojtaba Khomami Abadi; Radu L. Vieriu; Stefan Winkler; Nicu Sebe

We present ASCERTAIN, a multimodal databASe for impliCit pERsonaliTy and Affect recognitIoN using commercial physiological sensors. To our knowledge, ASCERTAIN is the first database to connect personality traits and emotional states via physiological responses. ASCERTAIN contains big-five personality scales and emotional self-ratings of 58 users along with their Electroencephalogram (EEG), Electrocardiogram (ECG), Galvanic Skin Response (GSR) and facial activity data, recorded using off-the-shelf sensors while viewing affective movie clips. We first examine relationships between users' affective ratings and personality scales in the context of prior observations, and then study linear and non-linear physiological correlates of emotion and personality. Our analysis suggests that the emotion-personality relationship is better captured by non-linear rather than linear statistics. We finally attempt binary emotion and personality trait recognition using physiological features. Experimental results cumulatively confirm that personality differences are better revealed when comparing user responses to emotionally homogeneous videos, and above-chance recognition is achieved for both affective and personality dimensions.
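A minimal sketch of the kind of binary recognition pipeline the abstract describes, assuming pre-extracted per-user physiological features and binary labels obtained by a median split of self-ratings or trait scores; the synthetic data, feature dimensionality, and Gaussian Naive Bayes classifier are illustrative assumptions, not the paper's exact method.

# Binary high/low recognition of an affective or personality dimension from
# physiological features. Synthetic data stands in for the EEG/ECG/GSR/facial
# features described above; the median-split labelling and the classifier are
# illustrative choices, not ASCERTAIN's actual pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_users, n_features = 58, 32                    # 58 participants; feature count is hypothetical
X = rng.normal(size=(n_users, n_features))      # per-user physiological features
trait_scores = rng.normal(size=n_users)         # e.g. one big-five trait or a valence self-rating

# Median split turns continuous scores into binary high/low labels.
y = (trait_scores > np.median(trait_scores)).astype(int)

clf = make_pipeline(StandardScaler(), GaussianNB())
scores = cross_val_score(clf, X, y, cv=5)       # mean accuracy above ~0.5 indicates above-chance recognition
print(f"mean CV accuracy: {scores.mean():.2f}")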


Affective Computing and Intelligent Interaction | 2013

User-centric Affective Video Tagging from MEG and Peripheral Physiological Responses

Mojtaba Khomami Abadi; Seyed Mostafa Kia; Ramanathan Subramanian; Paolo Avesani; Nicu Sebe

This paper presents a new multimodal database, and the associated results, for characterizing affect (valence, arousal, and dominance) using magnetoencephalogram (MEG) brain signals and peripheral physiological signals (horizontal EOG, ECG, trapezius EMG). We attempt single-trial classification of affect in movie and music video clips using emotional responses extracted from eighteen participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses; (ii) clip arousal is better predicted by MEG, while peripheral physiological signals are more effective for predicting valence; and (iii) prediction performance is better for movie clips than for music video clips.
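A minimal sketch of single-trial affect classification comparing two modalities (MEG-derived features versus peripheral physiological features), assuming features have already been extracted per trial; the synthetic data, trial counts, and SVM classifier are assumptions for illustration, not the paper's actual features or model.

# Compare single-trial arousal classification from MEG features vs. peripheral
# physiological features. Synthetic arrays replace real extracted features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 36 * 18                              # e.g. 36 clips x 18 participants (illustrative)
meg = rng.normal(size=(n_trials, 64))           # per-trial MEG features
peripheral = rng.normal(size=(n_trials, 12))    # per-trial EOG/ECG/EMG features
arousal = rng.integers(0, 2, size=n_trials)     # binary high/low arousal labels per trial

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
for name, X in [("MEG", meg), ("peripheral", peripheral)]:
    acc = cross_val_score(clf, X, arousal, cv=5).mean()
    print(f"{name}: single-trial arousal accuracy = {acc:.2f}")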


International Conference on Image Analysis and Processing | 2015

Movie Genre Classification by Exploiting MEG Brain Signals

Pouya Ghaemmaghami; Mojtaba Khomami Abadi; Seyed Mostafa Kia; Paolo Avesani; Nicu Sebe

Genre classification is an essential part of multimedia content recommender systems. In this study, we provide experimental evidence that genre classification can be performed from recorded brain signals. A brain-decoding paradigm is employed to classify the magnetoencephalography (MEG) data presented in [1] into four genre classes: Comedy, Romantic, Drama, and Horror. Our results show that: 1) there is a significant correlation between the audio-visual features of movies and the corresponding brain signals, especially in the visual and temporal lobes; and 2) the genre of movie clips can be classified from the MEG signal with an accuracy significantly above chance level. On top of that, we show that combining multimedia features with MEG-based features achieves the best accuracy. Our study provides a first step towards user-centric media content retrieval using brain signals.
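A minimal sketch of four-class genre classification with feature-level fusion of multimedia and MEG-based descriptors, in the spirit of the combination the abstract mentions; the synthetic data, fusion by concatenation, and logistic-regression classifier are illustrative assumptions rather than the paper's setup.

# Early fusion (concatenation) of audio-visual and MEG-derived features for
# 4-class genre prediction (Comedy / Romantic / Drama / Horror).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_clips = 160                                   # illustrative number of clips
av_feats = rng.normal(size=(n_clips, 40))       # audio-visual descriptors per clip
meg_feats = rng.normal(size=(n_clips, 60))      # MEG-derived descriptors per clip
genre = rng.integers(0, 4, size=n_clips)        # genre label per clip

fused = np.hstack([av_feats, meg_feats])        # feature-level fusion by concatenation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("fused CV accuracy:", cross_val_score(clf, fused, genre, cv=5).mean())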


International Conference on Image Processing | 2015

Cluster encoding for modelling temporal variation in video

Negar Rostamzadeh; Jasper R. R. Uijlings; Ionut Mironica; Mojtaba Khomami Abadi; Bogdan Ionescu; Nicu Sebe

Classical Bag-of-Words methods represent videos by modeling the variation of local visual descriptors throughout the video. This approach mixes variation in time and space indiscriminately, even though these dimensions are fundamentally different. Therefore, in this paper we present a novel video representation that explicitly captures variation over time. We do this by first creating frame-based features using standard Bag-of-Words techniques. To model the variation in time of these frame-based features, we introduce Hard and Soft Cluster Encoding, novel variation-modeling techniques inspired by the Fisher Kernel [1] and VLAD [2]. Results on the Rochester ADL [3] and Blip10k [4] datasets show that our method yields improvements of 6.6% and 7.4%, respectively, over our baselines. On Blip10k we outperform the state of the art by 3.6% when using only visual features.
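A rough sketch of a hard cluster encoding over frame-based features, in the VLAD-inspired spirit the abstract describes: each frame's Bag-of-Words histogram is assigned to its nearest cluster centre, and per-cluster residuals are accumulated into one fixed-length video descriptor. This is an illustrative reconstruction of the general idea, not the paper's exact formulation; the dimensions and normalisation are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def hard_cluster_encode(frame_feats, kmeans):
    """frame_feats: (n_frames, d) frame-level BoW features for one video."""
    k, d = kmeans.n_clusters, frame_feats.shape[1]
    assign = kmeans.predict(frame_feats)            # hard assignment of each frame to a cluster
    enc = np.zeros((k, d))
    for c in range(k):
        members = frame_feats[assign == c]
        if len(members):                            # sum of residuals to centre c (VLAD-style)
            enc[c] = (members - kmeans.cluster_centers_[c]).sum(axis=0)
    enc = enc.ravel()
    return enc / (np.linalg.norm(enc) + 1e-12)      # L2-normalised fixed-length video descriptor

rng = np.random.default_rng(3)
all_frames = rng.random((2000, 50))                 # pooled training frames: 50-dim BoW histograms
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_frames)

video = rng.random((120, 50))                       # one video: 120 frames
print(hard_cluster_encode(video, km).shape)         # (8 * 50,) encoding, independent of video length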


Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia | 2014

A Multi-task Learning Framework for Time-continuous Emotion Estimation from Crowd Annotations

Mojtaba Khomami Abadi; Azad Abad; Ramanathan Subramanian; Negar Rostamzadeh; Elisa Ricci; Jagannadan Varadarajan; Nicu Sebe

We propose multi-task learning (MTL) for time-continuous, or dynamic, emotion (valence and arousal) estimation in movie scenes. Since compiling annotated training data for dynamic emotion prediction is tedious, we employ crowdsourcing for this purpose. Even though the crowdworkers come from varied demographics, we demonstrate that MTL can effectively discover (1) consistent patterns in their dynamic emotion perception, and (2) the low-level audio and video features that contribute to their valence and arousal (VA) elicitation. Finally, we show that MTL-based regression models, which simultaneously learn the relationship between low-level audio-visual features and high-level VA ratings from a collection of movie scenes, can predict VA ratings for time-contiguous snippets from each scene more effectively than scene-specific models.
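A minimal sketch of multi-task regression for time-continuous emotion estimation, under the assumption that each crowd annotator's valence trace is treated as one task and that the tasks share a common support over low-level audio-visual features; the MultiTaskLasso model and all dimensions are illustrative choices, not the paper's MTL formulation.

# Each annotator's continuous valence trace is one regression task; the
# multi-task penalty selects the same small set of audio-visual features
# across all annotators.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(4)
n_frames, n_feats, n_annotators = 500, 20, 6
X = rng.normal(size=(n_frames, n_feats))            # per-frame audio-visual features
W_true = np.zeros((n_feats, n_annotators))
W_true[:5] = rng.normal(size=(5, n_annotators))     # annotators share the same few relevant features
Y = X @ W_true + 0.1 * rng.normal(size=(n_frames, n_annotators))  # one valence trace per annotator

mtl = MultiTaskLasso(alpha=0.05).fit(X, Y)
shared_support = np.flatnonzero(np.any(np.abs(mtl.coef_) > 1e-6, axis=0))
print("features selected jointly across annotators:", shared_support)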


IEEE Transactions on Affective Computing | 2015

DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses

Mojtaba Khomami Abadi; Ramanathan Subramanian; Seyed Mostafa Kia; Paolo Avesani; Ioannis Patras; Nicu Sebe


Affective Computing and Intelligent Interaction | 2013

Multimodal Engagement Classification for Affective Cinema

Mojtaba Khomami Abadi; Jacopo Staiano; Alessandro Cappelletti; Massimo Zancanaro; Nicu Sebe


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Inference of personality traits and affect schedule by analysis of spontaneous reactions to affective videos

Mojtaba Khomami Abadi; Juan Abdon Miranda Correa; Julia Wache; Heng Yang; Ioannis Patras; Nicu Sebe


International Conference on Multimedia Retrieval | 2016

A Quality Adaptive Multimodal Affect Recognition System for User-Centric Multimedia Indexing

Rishabh Gupta; Mojtaba Khomami Abadi; Jesús Alejandro Cárdenes Cabré; Fabio Morreale; Tiago H. Falk; Nicu Sebe


arXiv: Neurons and Cognition | 2017

AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups

Juan Abdon Miranda-Correa; Mojtaba Khomami Abadi; Nicu Sebe; Ioannis Patras

Collaboration


Dive into Mojtaba Khomami Abadi's collaboration.

Top Co-Authors

Ioannis Patras

Queen Mary University of London

Elisa Ricci

Fondazione Bruno Kessler
