Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Fatemeh Noroozi is active.

Publication


Featured research published by Fatemeh Noroozi.


IEEE Transactions on Affective Computing | 2017

Audio-Visual Emotion Recognition in Video Clips

Fatemeh Noroozi; Marina Marjanovic; Angelina Njeguš; Sergio Escalera; Gholamreza Anbarjafari

This paper presents a multimodal emotion recognition system, which is based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies and prosodic features are extracted. For the visual part, two strategies are considered. First, facial landmarks’ geometric relations, i.e., distances and angles, are computed. Second, we summarize each emotional video into a reduced set of key-frames, which are learned in order to visually discriminate between the emotions. In order to do so, a convolutional neural network is applied to key-frames summarizing videos. Finally, confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements by our proposed system in comparison to current alternatives, defining the current state-of-the-art in all three databases.
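
The late fusion/stacking scheme described in this abstract can be illustrated with a minimal sketch: per-modality classifiers are trained separately, their confidence (class-probability) outputs are concatenated into a new feature space, and a meta-classifier is learned on it for the final emotion label. This is an illustration only, not the authors' code; the variable names (X_audio, X_visual) and the scikit-learn classifiers are assumptions, and the random matrices stand in for the MFCC/prosodic and landmark/key-frame features the paper actually extracts.

# Minimal sketch (not the authors' implementation) of the late fusion / stacking
# idea described above, assuming precomputed per-modality feature matrices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n = 200
X_audio = rng.rand(n, 40)    # stand-in for MFCCs, filter bank energies, prosody
X_visual = rng.rand(n, 60)   # stand-in for landmark distances/angles, key-frame CNN scores
y = rng.randint(0, 6, n)     # six basic emotion labels

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# One base classifier per modality.
clf_audio = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_audio[idx_train], y[idx_train])
clf_visual = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_visual[idx_train], y[idx_train])

# Confidence outputs of all classifiers define a new feature space ...
def fused(i):
    return np.hstack([clf_audio.predict_proba(X_audio[i]),
                      clf_visual.predict_proba(X_visual[i])])

# ... which a meta-classifier learns for the final emotion prediction (stacking).
meta = LogisticRegression(max_iter=1000).fit(fused(idx_train), y[idx_train])
print("fused accuracy:", meta.score(fused(idx_test), y[idx_test]))

In a full stacking setup the meta-classifier would be trained on out-of-fold predictions of the base classifiers to avoid information leakage; the split above is only to keep the sketch short.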


International Journal of Speech Technology | 2017

Vocal-based emotion recognition using random forests and decision tree

Fatemeh Noroozi; Tomasz Sapiński; Dorota Kamińska; Gholamreza Anbarjafari

This paper proposes a new vocal-based emotion recognition method using random forests, where pairs of features computed on the whole speech signal, namely pitch, intensity, the first four formants, the first four formant bandwidths, mean autocorrelation, mean noise-to-harmonics ratio and standard deviation, are used in order to recognize the emotional state of a speaker. The proposed technique adopts random forests to represent the speech signals, along with the decision-tree approach, in order to classify them into different categories. The emotions are broadly categorised into six groups: happiness, fear, sadness, neutral, surprise, and disgust. The Surrey Audio-Visual Expressed Emotion database is used. According to the experimental results using leave-one-out cross-validation, by means of combining the most significant prosodic features, the proposed method achieves an average recognition rate of 66.28%, and at the highest level, a recognition rate of 78% has been obtained, which belongs to the happiness voice signals. The proposed method has …
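
A minimal sketch of this pipeline, with scikit-learn stand-ins: a random forest of decision trees trained on per-utterance prosodic features and evaluated with leave-one-out cross-validation. The feature extraction is stubbed with random data here; in practice the thirteen features named above would be measured from the speech signal with a tool such as Praat or an equivalent library.

# Minimal sketch (not the authors' code) of the random-forest pipeline above:
# per-utterance prosodic features, a forest of decision trees, and
# leave-one-out cross-validation. Features are stubbed with random data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.RandomState(0)
n_utterances = 120
# 13 features: pitch, intensity, 4 formants, 4 formant bandwidths,
# mean autocorrelation, mean noise-to-harmonics ratio, standard deviation.
X = rng.rand(n_utterances, 13)
y = rng.randint(0, 6, n_utterances)   # happiness, fear, sadness, neutral, surprise, disgust

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("leave-one-out recognition rate: %.2f%%" % (100 * scores.mean()))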


Seventh International Conference on Graphic and Image Processing (ICGIP 2015) | 2015

Block based image compression technique using rank reduction and wavelet difference reduction

Anastasia Bolotnikova; Pejman Rasti; Andres Traumann; Iiris Lüsi; Morteza Daneshmand; Fatemeh Noroozi; Kadri Samuel; Suman Sarkar; Gholamreza Anbarjafari


Signal Processing and Communications Applications Conference | 2017

Speech-based emotion recognition and next reaction prediction

Fatemeh Noroozi; Neda Akrami; Gholamreza Anbarjafari



International Conference on Pattern Recognition | 2016

Fusion of classifier predictions for audio-visual emotion recognition

Fatemeh Noroozi; Marina Marjanovic; Angelina Njeguš; Sergio Escalera; Gholamreza Anbarjafari


arXiv: Computer Vision and Pattern Recognition | 2018

3D Scanning: A Comprehensive Survey

Morteza Daneshmand; Ahmed Helmi; Egils Avots; Fatemeh Noroozi; Fatih Alisinanoglu; Hasan Sait Arslan; Jelena Gorbova; Rain Eric Haamer; Cagri Ozcinar; Gholamreza Anbarjafari



IEEE Transactions on Affective Computing | 2018

Survey on Emotional Body Gesture Recognition

Ciprian A. Corneanu; Fatemeh Noroozi; Dorota Kamińska; Tomasz Sapiński; Sergio Escalera; Gholamreza Anbarjafari


Journal of the Audio Engineering Society | 2017

Supervised Vocal-Based Emotion Recognition Using Multiclass Support Vector Machine, Random Forests, and Adaboost

Fatemeh Noroozi; Dorota Kamińska; Tomasz Sapiński; Gholamreza Anbarjafari



EURASIP Journal on Image and Video Processing | 2017

Recognition of identical twins using fusion of various facial feature extractors

Ayman Afaneh; Fatemeh Noroozi; Önsen Toygar


IEEE International Conference on Automatic Face & Gesture Recognition | 2018

Deep Multimodal Pain Recognition: A Database and Comparison of Spatio-Temporal Visual Modalities

Mohammad Ahsanul Haque; Ruben B. Bautista; Fatemeh Noroozi; Kaustubh Kulkarni; Christian B. Laursen; Ramin Irani; Marco Bellantonio; Sergio Escalera; Gholamreza Anbarjafari; Kamal Nasrollahi; Ole Kæseler Andersen; Erika G. Spaich; Thomas B. Moeslund


Collaboration


Dive into Fatemeh Noroozi's collaborations.

Top Co-Authors

Dorota Kamińska

Lodz University of Technology

Tomasz Sapiński

Lodz University of Technology
