

Publication


Featured research published by Emad Barsoum.


International Conference on Multimodal Interfaces | 2016

Training deep networks for facial expression recognition with crowd-sourced label distribution

Emad Barsoum; Cha Zhang; Cristian Canton Ferrer; Zhengyou Zhang

Crowdsourcing has become a widely adopted scheme to collect ground truth labels. However, it is a well-known problem that these labels can be very noisy. In this paper, we demonstrate how to learn a deep convolutional neural network (DCNN) from noisy labels, using facial expression recognition as an example. More specifically, we have 10 taggers label each input image, and compare four different approaches to utilizing the multiple labels: majority voting, multi-label learning, probabilistic label drawing, and cross-entropy loss. We show that the traditional majority voting scheme does not perform as well as the last two approaches, which fully leverage the label distribution. An enhanced FER+ data set with multiple labels for each face image will also be shared with the research community.
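The contrast between majority voting and losses that use the full label distribution can be illustrated with a small NumPy sketch. This is not the paper's implementation; the function names and the toy tagger counts below are hypothetical, and only the final loss computation is shown, assuming per-image counts from 10 taggers.

```python
import numpy as np

def soft_cross_entropy(logits, label_dist):
    """Cross-entropy against a full label distribution (soft labels)."""
    # Log-softmax, shifted by the row max for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(label_dist * log_p).sum(axis=1).mean()

def majority_vote_targets(tag_counts):
    """Collapse per-image tagger counts to a single hard label,
    discarding the disagreement information."""
    return tag_counts.argmax(axis=1)

# Toy example: 2 images, 3 emotion classes, 10 taggers per image.
counts = np.array([[6, 3, 1],
                   [4, 4, 2]], dtype=float)
dist = counts / counts.sum(axis=1, keepdims=True)  # label distribution
logits = np.array([[2.0, 0.5, -1.0],
                   [0.2, 0.1, -0.5]])               # network outputs

loss = soft_cross_entropy(logits, dist)   # uses all tagger votes
hard = majority_vote_targets(counts)      # keeps only the winner per image
```

The second image (a 4-4-2 split) shows why the distribution matters: majority voting must pick one winner, while the soft cross-entropy target retains the taggers' genuine ambiguity.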


International Conference on Multimodal Interfaces | 2016

Emotion recognition in the wild from videos using images

Sarah Adel Bargal; Emad Barsoum; Cristian Canton Ferrer; Cha Zhang

This paper presents the implementation details of the proposed solution to the Emotion Recognition in the Wild 2016 Challenge, in the category of video-based emotion recognition. The proposed approach takes as input the video stream from the audio-video trimmed clips provided by the challenge and produces the emotion label corresponding to the video sequence. This output is encoded as one of seven classes: the six basic emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise) and Neutral. Overall, the system consists of several pipelined modules: face detection, image pre-processing, deep feature extraction, feature encoding and, finally, an SVM classifier. The system achieves 59.42% validation accuracy, surpassing the competition baseline of 38.81%. On test data, it achieves a 56.66% recognition rate, also surpassing the competition baseline of 40.47%.
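The feature-encoding stage of such a pipeline can be sketched as statistical pooling of per-frame deep features into one fixed-length video descriptor. This is a hedged illustration, not the system's actual encoder: the function name, feature dimensions, and mean-plus-standard-deviation pooling are assumptions standing in for whatever encoding the paper uses.

```python
import numpy as np

def encode_video(frame_feats):
    """Pool per-frame deep features (T x D) into one fixed-length
    video-level descriptor by concatenating the per-dimension mean
    and standard deviation across time."""
    return np.concatenate([frame_feats.mean(axis=0),
                           frame_feats.std(axis=0)])

rng = np.random.default_rng(0)
frame_feats = rng.normal(size=(30, 64))  # stand-in for 30 frames of 64-dim CNN features
vid = encode_video(frame_feats)           # 128-dim video-level encoding
```

A fixed-length encoding like this is what makes the final SVM stage possible, since the classifier needs the same input dimensionality regardless of clip length.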


International Conference on Acoustics, Speech, and Signal Processing | 2017

Automatic speech emotion recognition using recurrent neural networks with local attention

Seyedmahdad Mirsamadi; Emad Barsoum; Cha Zhang

Automatic emotion recognition from speech is a challenging task which relies heavily on the effectiveness of the speech features used for classification. In this work, we study the use of deep learning to automatically discover emotionally relevant features from speech. It is shown that using a deep recurrent neural network, we can learn both the short-time frame-level acoustic features that are emotionally relevant, as well as an appropriate temporal aggregation of those features into a compact utterance-level representation. Moreover, we propose a novel strategy for feature pooling over time which uses local attention in order to focus on specific regions of a speech signal that are more emotionally salient. The proposed solution is evaluated on the IEMOCAP corpus, and is shown to provide more accurate predictions compared to existing emotion recognition algorithms.
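The attention-based temporal pooling described above can be sketched in a few lines: a score per frame, a softmax over time, and an attention-weighted mean as the utterance-level representation. This is a minimal sketch, not the paper's model; the scoring by a single learned vector `w` and all dimensions below are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(frame_feats, w):
    """Attention pooling over time: score each frame with a learned
    vector w, normalize scores with softmax, and return the
    attention-weighted mean of the frame features."""
    scores = frame_feats @ w    # (T,) one salience score per frame
    alpha = softmax(scores)     # (T,) attention weights, sum to 1
    return alpha @ frame_feats  # (D,) utterance-level representation

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))  # stand-in for 50 frames of 16-dim RNN outputs
w = rng.normal(size=16)            # stand-in for a learned attention vector
utt = attention_pool(feats, w)
```

Because the weights form a convex combination, frames the scorer finds more salient dominate the pooled representation, which is the intended effect of focusing on emotionally salient regions of the signal.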


Archive | 2012

Visual indication of graphical user interface relationship

Emad Barsoum; Chad Wesley Wahlin


Archive | 2012

Client rendering of latency sensitive game features

David James Quinn; Emad Barsoum; Charles Claudius Marais; John Raymond Justice; Krassimir E. Karamfilov; Roderick M. Toll


Archive | 2011

Calculating metabolic equivalence with a computing device

Emad Barsoum; Ron Forbes; Tommer Leyvand; Tim Gerken


Archive | 2016

Two-dimensional infrared depth sensing

Ben Butler; Vladimir Tankovich; Cem Keskin; Sean Ryan Fanello; Shahram Izadi; Emad Barsoum; Simon P. Stachniak; Yichen Wei


Computer Vision and Pattern Recognition | 2017

HP-GAN: Probabilistic 3D Human Motion Prediction via GAN

Emad Barsoum; John R. Kender; Zicheng Liu


Archive | 2012

Client side processing of game controller input

Krassimir E. Karamfilov; Emad Barsoum; Charles Claudius Marais; John Raymond Justice; David James Quinn; Roderick M. Toll


arXiv: Computer Vision and Pattern Recognition | 2016

Articulated Hand Pose Estimation Review.

Emad Barsoum

