Fatemeh Noroozi
University of Tartu
Publications
Featured research published by Fatemeh Noroozi.
IEEE Transactions on Affective Computing | 2017
Fatemeh Noroozi; Marina Marjanovic; Angelina Njeguš; Sergio Escalera; Gholamreza Anbarjafari
This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-frequency cepstral coefficients, filter bank energies, and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e., distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, and a convolutional neural network is trained on these key-frames to visually discriminate between the emotions. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space on which the final emotion label is predicted, in a late fusion/stacking fashion. Experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements over current alternatives, setting the state of the art on all three databases.
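As a rough illustration of the late fusion/stacking step this abstract describes, the sketch below concatenates per-modality classifier confidences into a new feature space and trains a final classifier on it. This is a minimal sketch under assumed data shapes, not the authors' implementation; the logistic-regression meta-classifier and all variable names are illustrative assumptions.

```python
# Minimal sketch of the late fusion/stacking scheme outlined above:
# per-modality confidence outputs are concatenated into a new feature
# space on which a final classifier predicts the emotion label.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_confidences(modality_probs):
    """Concatenate per-modality class-probability matrices (n_samples x n_classes) column-wise."""
    return np.hstack(modality_probs)

# Hypothetical confidence outputs of the audio and visual classifiers,
# e.g. obtained via predict_proba on held-out folds (6 emotion classes).
rng = np.random.default_rng(0)
audio_probs = rng.random((100, 6))
visual_probs = rng.random((100, 6))
labels = rng.integers(0, 6, size=100)

# Train the stacking classifier on the fused confidence features.
meta_features = stack_confidences([audio_probs, visual_probs])
meta_clf = LogisticRegression(max_iter=1000).fit(meta_features, labels)
predicted_emotions = meta_clf.predict(meta_features)
```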
International Journal of Speech Technology | 2017
Fatemeh Noroozi; Tomasz Sapiński; Dorota Kamińska; Gholamreza Anbarjafari
This paper proposes a new vocal-based emotion recognition method using random forests, where pairs of features extracted from the whole speech signal, namely pitch, intensity, the first four formants, the first four formant bandwidths, mean autocorrelation, mean noise-to-harmonics ratio, and standard deviation, are used to recognize the emotional state of a speaker. The proposed technique adopts random forests to represent the speech signals, along with the decision-tree approach, in order to classify them into different categories. The emotions are broadly categorised into six groups: happiness, fear, sadness, neutral, surprise, and disgust. The Surrey Audio-Visual Expressed Emotion database is used. According to the experimental results using leave-one-out cross-validation, by combining the most significant prosodic features, the proposed method achieves an average recognition rate of 66.28%, and at the highest level, a recognition rate of 78% is obtained, for the happiness voice signals.
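For concreteness, here is a minimal sketch of the random-forest classification with leave-one-out cross-validation described in this abstract, assuming a precomputed matrix of prosodic features per utterance; the feature extraction itself is omitted, and all shapes and names are illustrative assumptions rather than the authors' setup.

```python
# Minimal sketch of random-forest emotion classification evaluated with
# leave-one-out cross-validation; prosodic feature extraction (pitch,
# intensity, formants, etc.) is assumed to have been done already.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

EMOTIONS = ["happiness", "fear", "sadness", "neutral", "surprise", "disgust"]

# Hypothetical feature matrix: one row per utterance, one column per
# prosodic descriptor (pitch, intensity, four formants and their
# bandwidths, mean autocorrelation, mean NHR, standard deviation).
rng = np.random.default_rng(0)
X = rng.random((120, 13))
y = rng.integers(0, len(EMOTIONS), size=120)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # accuracy per left-out sample
print(f"LOOCV accuracy: {scores.mean():.2%}")
```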
Seventh International Conference on Graphic and Image Processing (ICGIP 2015) | 2015
Anastasia Bolotnikova; Pejman Rasti; Andres Traumann; Iiris Lüsi; Morteza Daneshmand; Fatemeh Noroozi; Kadri Samuel; Suman Sarkar; Gholamreza Anbarjafari
Signal Processing and Communications Applications Conference | 2017
Fatemeh Noroozi; Neda Akrami; Gholamreza Anbarjafari
International Conference on Pattern Recognition | 2016
Fatemeh Noroozi; Marina Marjanovic; Angelina Njeguš; Sergio Escalera; Gholamreza Anbarjafari
arXiv: Computer Vision and Pattern Recognition | 2018
Morteza Daneshmand; Ahmed Helmi; Egils Avots; Fatemeh Noroozi; Fatih Alisinanoglu; Hasan Sait Arslan; Jelena Gorbova; Rain Eric Haamer; Cagri Ozcinar; Gholamreza Anbarjafari
IEEE Transactions on Affective Computing | 2018
Ciprian A. Corneanu; Fatemeh Noroozi; Dorota Kamińska; Tomasz Sapiński; Sergio Escalera; Gholamreza Anbarjafari
Journal of the Audio Engineering Society | 2017
Fatemeh Noroozi; Dorota Kamińska; Tomasz Sapiński; Gholamreza Anbarjafari
EURASIP Journal on Image and Video Processing | 2017
Ayman Afaneh; Fatemeh Noroozi; Önsen Toygar
IEEE International Conference on Automatic Face and Gesture Recognition | 2018
Mohammad Ahsanul Haque; Ruben B. Bautista; Fatemeh Noroozi; Kaustubh Kulkarni; Christian B. Laursen; Ramin Irani; Marco Bellantonio; Sergio Escalera; Gholamreza Anbarjafari; Kamal Nasrollahi; Ole Kæseler Andersen; Erika G. Spaich; Thomas B. Moeslund