Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mohamed Abouelenien is active.

Publications


Featured research published by Mohamed Abouelenien.


International Conference on Multimodal Interfaces | 2014

Deception detection using a multimodal approach

Mohamed Abouelenien; Verónica Pérez-Rosas; Rada Mihalcea; Mihai Burzo

In this paper we address the automatic identification of deceit by using a multimodal approach. We collect deceptive and truthful responses using a multimodal setting where we acquire data using a microphone, a thermal camera, as well as physiological sensors. Among all available modalities, we focus on three modalities, namely language use, physiological response, and thermal sensing. To our knowledge, this is the first work to integrate these specific modalities to detect deceit. Several experiments are carried out in which we first select representative features for each modality, and then we analyze joint models that integrate several modalities. The experimental results show that the combination of features from different modalities significantly improves the detection of deceptive behaviors as compared to the use of one modality at a time. Moreover, the use of non-contact modalities proved to be comparable with, and sometimes better than, existing contact-based methods. The proposed method increases the efficiency of detecting deceit by avoiding human involvement, in an attempt to move towards a completely automated non-invasive deception detection process.
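The feature-level fusion the paper describes, combining features from several modalities into one joint representation, can be sketched as simple vector concatenation; the toy arrays and dimensions below are hypothetical stand-ins, not the paper's actual features:

```python
import numpy as np

def fuse_features(modalities):
    """Feature-level (early) fusion: concatenate each sample's
    per-modality feature vectors into a single joint vector."""
    return np.concatenate(modalities, axis=1)

# Hypothetical toy features for 4 responses across three modalities
linguistic    = np.array([[0.2, 0.8], [0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])
physiological = np.array([[1.0], [0.2], [0.8], [0.1]])
thermal       = np.array([[0.5, 0.5, 0.1], [0.3, 0.2, 0.9],
                          [0.6, 0.4, 0.2], [0.2, 0.1, 0.8]])

joint = fuse_features([linguistic, physiological, thermal])
print(joint.shape)  # (4, 6): 2 + 1 + 3 features per sample
```

A classifier trained on `joint` sees all modalities at once, which is what lets cross-modal feature combinations outperform any single modality.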


International Conference on Multimodal Interfaces | 2015

Deception Detection using Real-life Trial Data

Verónica Pérez-Rosas; Mohamed Abouelenien; Rada Mihalcea; Mihai Burzo

Hearings of witnesses and defendants play a crucial role when reaching court trial decisions. Given the high-stakes nature of trial outcomes, implementing accurate and effective computational methods to evaluate the honesty of court testimonies can offer valuable support during the decision-making process. In this paper, we address the identification of deception in real-life trial data. We introduce a novel dataset consisting of videos collected from public court trials. We explore the use of verbal and non-verbal modalities to build a multimodal deception detection system that aims to discriminate between truthful and deceptive statements provided by defendants and witnesses. We achieve classification accuracies in the range of 60-75% when using a model that extracts and fuses features from the linguistic and gesture modalities. In addition, we present a human deception detection study where we evaluate the human capability of detecting deception in trial hearings. The results show that our system outperforms the human capability of identifying deceit.


Empirical Methods in Natural Language Processing | 2015

Verbal and Nonverbal Clues for Real-life Deception Detection

Verónica Pérez-Rosas; Mohamed Abouelenien; Rada Mihalcea; Yao Xiao; Cj Linton; Mihai Burzo

Deception detection has been receiving an increasing amount of attention from the computational linguistics, speech, and multimodal processing communities. One of the major challenges encountered in this task is the availability of data, and most of the research work to date has been conducted on acted or artificially collected data. The generated deception models are thus lacking real-world evidence. In this paper, we explore the use of multimodal real-life data for the task of deception detection. We develop a new deception dataset consisting of videos from real-life scenarios, and build deception tools relying on verbal and nonverbal features. We achieve classification accuracies in the range of 77-82% when using a model that extracts and fuses features from the linguistic and visual modalities. We show that these results outperform the human capability of identifying deceit.


IEEE Transactions on Information Forensics and Security | 2017

Detecting Deceptive Behavior via Integration of Discriminative Features From Multiple Modalities

Mohamed Abouelenien; Verónica Pérez-Rosas; Rada Mihalcea; Mihai Burzo

Deception detection has received an increasing amount of attention in recent years, due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which have been shown to falsely accuse the innocent and free the guilty in multiple cases. In this paper, we explore a multimodal deception detection approach that relies on a novel data set of 149 multimodal recordings, and integrates multiple physiological, linguistic, and thermal features. We test the system on different domains, to measure its effectiveness and determine its limitations. We also perform feature analysis using a decision tree model, to gain insights into the features that are most effective in detecting deceit. Our experimental results indicate that our multimodal approach is a promising step toward creating a feasible, non-invasive, and fully automated deception detection system.
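Decision-tree feature analysis of the kind mentioned above ranks features by split criteria such as information gain. A minimal sketch with hypothetical binarized cues (the feature names and values are illustrative, not the paper's actual features):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Gain used by decision-tree splits: label entropy minus the
    weighted entropy after splitting on the feature's values."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Hypothetical binarized cues over 8 recordings (1 = cue present)
deceptive   = [1, 1, 1, 1, 0, 0, 0, 0]  # ground-truth label
gaze_shift  = [1, 1, 1, 0, 1, 0, 0, 0]  # moderately informative cue
pause_count = [1, 0, 1, 0, 0, 1, 0, 1]  # uninformative cue

print(information_gain(gaze_shift, deceptive) >
      information_gain(pause_count, deceptive))  # True
```

Ranking features by gain this way is how a decision tree exposes which cues carry the most information about deceit.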


Pervasive Technologies Related to Assistive Environments | 2016

Human Acute Stress Detection via Integration of Physiological Signals and Thermal Imaging

Mohamed Abouelenien; Mihai Burzo; Rada Mihalcea

Daily pressure, work load, and family responsibilities among other factors impose increasing levels of stress on different individuals. Hence, detecting stress as early as possible can potentially reduce the severe consequences and risks that someone may experience. In this paper, we develop a novel dataset to detect acute stress using 50 subjects. We additionally analyze different features extracted automatically from the thermal and physiological modalities. Furthermore, we develop a system that integrates both thermal and physiological features for improved stress detection rates. Our system achieves promising results exceeding 75% accuracy and has the potential to be further improved by adding additional modalities, which can provide a useful and reliable approach in early detection of stress.


ASME 2014 International Mechanical Engineering Congress and Exposition, IMECE 2014 | 2014

Using Infrared Thermography and Biosensors to Detect Thermal Discomfort in a Building’s Inhabitants

Mihai Burzo; Mohamed Abouelenien; Verónica Pérez-Rosas; Cakra Wicaksono; Yong Tao; Rada Mihalcea

This paper lays the grounds for a new methodology for detecting thermal discomfort, which can potentially reduce a building's energy usage while improving the comfort of its inhabitants. The paper describes our explorations in automatic human discomfort prediction using physiological signals directly collected from a building's inhabitants. Using infrared thermography, as well as several other bio-sensors (galvanic skin response, heart rate tracker, respiration rate tracker), we record a building's inhabitants under various thermal conditions (hot, cold, neutral), and consequently build a multimodal model that can automatically detect thermal discomfort. The paper makes three important contributions. First, we introduce a novel dataset, consisting of sensorial measurements of human behavior under varied comfort/discomfort conditions. The changes in physiological signals of the human body are monitored for several subjects, for different comfort levels in an indoor environment. Second, using the dataset obtained in the first step, we build a model that identifies the relationship between human factors, as tracked through infrared thermography and other bio-sensors, and environmental conditions related to discomfort. Third, we measure the correlation between sensorial measurements collected from the user and self-reported levels of discomfort, and hence identify the sensorial measurements that are predictive of human discomfort. The final goal is to automatically predict the level of discomfort of a building inhabitant without any explicit input from the user. This human-centered discomfort prediction model is expected to enable innovative adaptive control scenarios for built environment conditions in real time, as well as a significant reduction in building energy usage directly related to human occupancy and desired comfort levels.


International Conference on Computing, Communication and Networking Technologies | 2012

Feature and decision level fusion for action recognition

Mohamed Abouelenien; Yiwen Wan; Abdullah Saudagar

Classification of actions by human actors from video enables new technologies in diverse areas such as surveillance and content-based retrieval. We propose and evaluate alternative models, one based on feature-level fusion and the second on decision-level fusion. Both models employ direct classification, inferring the nature of the action from low-level features. Interest points are assumed to have salient 3D (spatial plus temporal) gradients that distinguish them from their neighborhoods. They are identified using three distinct 3D interest-point detectors. Each detected interest point set is represented as a bag-of-words descriptor. The feature-level fusion model concatenates descriptors subsequently used as input to a classifier. The decision-level fusion uses an ensemble and majority voting scheme. Public data sets consisting of hundreds of action videos were used in testing. Within the test videos, multiple actors performed various actions including walking, running, jogging, handclapping, boxing, and waving. Performance comparison showed very high classification accuracy for both models, with feature-level fusion having an edge. For feature-level fusion, the novelty is the fused histogram of visual words derived from different sets of interest points detected by different saliency detectors. For decision fusion, besides AdaBoost, a majority voting scheme is also utilized in ensemble classifiers based on support vector machines, k-nearest neighbor, and decision trees. The main contribution, however, is the comparison between the models and, drilling down, the performance of different base classifiers and different interest point detectors for human motion recognition.
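The decision-level fusion with majority voting described above can be sketched in a few lines; the base-classifier predictions below are made-up examples, not results from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Decision-level fusion: each base classifier votes a label
    per sample; the fused label is the most common vote."""
    fused = []
    for sample_votes in zip(*predictions):
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Hypothetical per-clip predictions from three base classifiers
# (e.g., an SVM, a k-nearest-neighbor model, and a decision tree)
svm_preds  = ["walk", "run", "box", "wave", "jog"]
knn_preds  = ["walk", "jog", "box", "wave", "run"]
tree_preds = ["run",  "run", "box", "clap", "jog"]

print(majority_vote([svm_preds, knn_preds, tree_preds]))
# ['walk', 'run', 'box', 'wave', 'jog']
```

In contrast, feature-level fusion would concatenate the bag-of-words descriptors before training a single classifier, so the fusion happens before, rather than after, classification.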


Pervasive Technologies Related to Assistive Environments | 2015

Cascaded multimodal analysis of alertness related features for drivers safety applications

Mohamed Abouelenien; Mihai Burzo; Rada Mihalcea

Drowsy driving has a strong influence on road traffic safety. Relying on improvements in sensorial technologies, a multimodal approach can provide features that are more effective in detecting the level of alertness of drivers. In this paper, we analyze a multimodal alertness dataset that contains physiological, environmental, and vehicular features provided by Ford to determine the effect of following a multimodal approach compared to relying on single modalities. Moreover, we propose a cascaded system that uses sequential feature selection, time-series feature extraction, and decision fusion to capture discriminative patterns in the data. Our experimental results confirm the effectiveness of our system in improving alertness detection rates and provide guidelines on the specific modalities and approaches that can be used for improved alertness detection.
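The sequential feature selection step of such a cascade can be sketched as greedy forward selection. The correlation-based criterion and the data below are illustrative stand-ins; a real system would score candidate subsets with a cross-validated classifier:

```python
import numpy as np

def criterion(X_sel, y):
    # Toy criterion: |correlation| of the mean selected feature with the
    # label; a stand-in for a proper cross-validated classifier score.
    return abs(np.corrcoef(X_sel.mean(axis=1), y)[0, 1])

def forward_select(X, y, k):
    """Greedy sequential forward selection: repeatedly add the
    feature whose inclusion maximizes the criterion."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: criterion(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: column 0 tracks the alertness label most closely
y = np.array([0, 0, 1, 1, 0, 1])
X = np.array([[0.1, 0.5, 0.7],
              [0.0, 0.4, 0.9],
              [0.9, 0.6, 0.4],
              [1.0, 0.5, 0.2],
              [0.2, 0.4, 0.6],
              [0.8, 0.6, 0.3]])

picked = forward_select(X, y, 2)
print(picked[0])  # 0: column 0 correlates best with y, so it is chosen first
```

The selected feature subset would then feed the time-series extraction and decision fusion stages of the cascade.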


Proceedings of the 2015 ACM on Workshop on Multimodal Deception Detection | 2015

Trimodal Analysis of Deceptive Behavior

Mohamed Abouelenien; Rada Mihalcea; Mihai Burzo

The need arises for a more reliable deception detection system to address the shortcomings of traditional polygraph tests and their reliance on physiological indicators of deceit. This paper describes a new deception detection dataset, provides a novel comparison between three modalities for identifying deception, including the visual, thermal, and physiological domains, and analyzes whether certain facial areas are more capable of indicating deceit. Our experimental results show promising performance, especially with the thermal modality, and provide guidelines for our data collection process and future work.


Archive | 2018

Multimodal deception detection

Mihai Burzo; Mohamed Abouelenien

Deception detection has received an increasing amount of attention due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which have been proven to falsely accuse the innocent and free the guilty in multiple cases. More recent work on deception has expanded to other applications, such as deception detection in social media, interviews, or deception in daily life. Moreover, recent research on deception detection has brought together scientists from fields as diverse as computational linguistics, speech processing, computer vision, psychology, and physiology, which makes this problem particularly appealing for multimodal processing. This chapter will overview the state-of-the-art in multimodal deception detection, covering physiological (e.g., biosensors and thermal imaging), visual (e.g., facial expressions and gestures), speech (e.g., pitch and pause length), and linguistic modalities. We will describe the features that are typically extracted from each of these modalities, as well as means to combine these modalities into an overall system that can detect deception in multimodal content. We will cover methods that make use of lab recordings, as well as methods that rely on real-life data (e.g., recent work on multimodal deception detection from trial data). In general, a multimodal approach where features from different streams are integrated is found to lead to an improved performance as compared to the use of single modalities.

Collaboration


Dive into Mohamed Abouelenien's collaborations.

Top Co-Authors

Mihai Burzo

University of Michigan

Xiaohui Yuan

University of North Texas
