Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Anastasia Pampouchidou is active.

Publication


Featured research published by Anastasia Pampouchidou.


ACM Multimedia | 2016

Depression Assessment by Fusing High and Low Level Features from Audio, Video, and Text

Anastasia Pampouchidou; Olympia Simantiraki; Amir Fazlollahi; Matthew Pediaditis; Dimitris Manousos; Alexandros Roniotis; Giorgos A. Giannakakis; Fabrice Meriaudeau; Panagiotis G. Simos; Kostas Marias; Fan Yang; Manolis Tsiknakis

Depression is a major cause of disability worldwide. The present paper reports the results of our participation in the depression sub-challenge of the sixth Audio/Visual Emotion Challenge (AVEC 2016), which was designed to compare feature modalities (audio, visual, interview transcript-based) in gender-based and gender-independent modes using a variety of classification algorithms. In our approach, both high- and low-level features were assessed in each modality. Audio features were extracted from the low-level descriptors provided by the challenge organizers. Several visual features were extracted and assessed, including dynamic characteristics of facial elements (using Landmark Motion History Histograms and Landmark Motion Magnitude), global head motion, and eye blinks. These features were combined with statistically derived features from pre-extracted features (emotions, action units, gaze, and pose). Both speech rate and word-level semantic content were also evaluated. Classification results are reported using four different classification schemes: i) gender-based models for each individual modality, ii) the feature fusion model, iii) the decision fusion model, and iv) the posterior probability classification model. Proposed approaches outperforming the reference classification accuracy include the one utilizing statistical descriptors of low-level audio features. This approach achieved F1-scores of 0.59 for identifying depressed and 0.87 for identifying non-depressed individuals on the development set, and 0.52/0.81, respectively, on the test set.
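The fusion schemes named in the abstract can be illustrated with a toy sketch. The numbers below are invented, and the two fusion rules (majority vote over per-modality hard decisions; thresholded mean of soft posteriors) are generic illustrations of decision fusion and posterior-probability fusion, not the paper's exact models.

```python
import numpy as np

# Toy posterior probabilities P(depressed) from three modality classifiers
# (audio, visual, text) for four subjects -- illustrative numbers only.
posteriors = np.array([
    [0.80, 0.65, 0.70],
    [0.30, 0.40, 0.20],
    [0.55, 0.45, 0.60],
    [0.10, 0.25, 0.15],
])

# Decision fusion: each modality casts its own hard decision, then majority vote.
votes = (posteriors >= 0.5).astype(int)
decision_fused = (votes.sum(axis=1) >= 2).astype(int)

# Posterior-probability fusion: average the soft scores, then threshold once.
posterior_fused = (posteriors.mean(axis=1) >= 0.5).astype(int)
```

The difference matters when modalities disagree with different confidence: posterior averaging lets one very confident modality outweigh two lukewarm ones, while majority voting treats all three equally.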


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

Extraction of facial features as indicators of stress and anxiety.

Matthew Pediaditis; Giorgos A. Giannakakis; Franco Chiarugi; Dimitris Manousos; Anastasia Pampouchidou; Eirini Christinaki; Galateia Iatraki; Eleni Kazantzaki; Panagiotis G. Simos; Kostas Marias; Manolis Tsiknakis

Stress and anxiety heavily affect human wellbeing and health. Under chronic stress, the human body and mind suffer by constantly mobilizing all of their resources for defense. Such a stress response can also be caused by anxiety. Moreover, excessive worrying and high anxiety can lead to depression and even suicidal thoughts. The typical tools for assessing these psycho-somatic states are questionnaires, but due to their shortcomings, being subjective and prone to bias, new, more robust methods based on facial expression analysis have emerged. Going beyond the typical detection of six basic emotions, this study aims to elaborate a set of facial features for the detection of stress and/or anxiety. It employs multiple methods that target each facial region individually. The features are selected and classification performance is measured on a dataset consisting of 23 subjects. The results showed that with feature sets of 9 and 10 features, an overall accuracy of 73% is reached.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2016

Video-based depression detection using local Curvelet binary patterns in pairwise orthogonal planes

Anastasia Pampouchidou; Kostas Marias; Manolis Tsiknakis; Panagiotis G. Simos; Fan Yang; Guillaume Lemaitre; Fabrice Meriaudeau

Depression is an increasingly prevalent mood disorder, which is why computer-based depression assessment has been attracting growing research attention in recent years. The present work proposes two algorithms for depression detection, one frame-based and the other video-based, both employing the Curvelet transform and Local Binary Patterns. The main advantage of these methods is their significantly lower computational requirements, as the extracted features are of very low dimensionality. This is achieved by modifying the previously proposed algorithm, which considers Three Orthogonal Planes, to consider only Pairwise Orthogonal Planes. Performance of the algorithms was tested on the benchmark dataset provided by the Audio/Visual Emotion Challenge 2014, with the person-specific system achieving 97.6% classification accuracy and the person-independent one yielding promising preliminary results of 74.5% accuracy. The paper concludes with open issues, proposed solutions, and future plans.
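As a rough illustration of the Pairwise-Orthogonal-Planes idea, the sketch below computes basic LBP histograms on only two planes of a T×H×W video volume: the XY plane (spatial appearance) and the XT plane (temporal dynamics), instead of all three orthogonal planes. This is a minimal sketch under stated assumptions: the paper's Curvelet transform step is omitted, and the slice choices and function names here are illustrative, not the authors' implementation.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor Local Binary Pattern codes for a 2D array."""
    c = img[1:-1, 1:-1]                      # center pixels (border excluded)
    codes = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbors, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def lbp_pairwise_planes(volume):
    """Concatenate LBP histograms from the XY (middle frame) and XT
    (time-vs-width slice) planes of a T x H x W volume: two planes, not three."""
    xy = volume[volume.shape[0] // 2]        # H x W appearance slice
    xt = volume[:, volume.shape[1] // 2, :]  # T x W dynamics slice
    hists = [np.bincount(lbp_image(p).ravel(), minlength=256) for p in (xy, xt)]
    return np.concatenate(hists)

rng = np.random.default_rng(0)
vid = rng.integers(0, 256, size=(16, 32, 32))  # toy 16-frame video
feat = lbp_pairwise_planes(vid)                # 512-dim descriptor
```

Dropping the YT plane (and, in general, using fewer planes) is what shrinks the descriptor and the computation, which is the low-dimensionality advantage the abstract refers to.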


International Conference on Signal and Image Processing Applications | 2015

Designing a framework for assisting depression severity assessment from facial image analysis

Anastasia Pampouchidou; Kostas Marias; Manolis Tsiknakis; Panagiotis G. Simos; Fan Yang; Fabrice Meriaudeau

Depression is one of the most common mental disorders, affecting millions of people worldwide. Developing adjunct tools that aid depression assessment is expected to improve overall health outcomes and reduce treatment costs. To this end, platforms designed for automatic and non-invasive depression assessment could help detect signs of the disease on a regular basis, without requiring the physical presence of a mental health professional. Despite the different approaches found in the literature, both in terms of methods and algorithms, a fully satisfactory system for the automatic assessment of depression severity has yet to be presented. This paper describes a proposed algorithm for dynamically analyzing facial expressions using robust descriptors in order to compose a novel feature selection as well as an effective classification process. Additionally, a preliminary evaluation of the system is presented, applying local curvelet binary patterns in three orthogonal planes for depression severity assessment.


IEEE Transactions on Affective Computing | 2017

Automatic Assessment of Depression Based on Visual Cues: A Systematic Review

Anastasia Pampouchidou; Panagiotis G. Simos; Kostas Marias; Fabrice Meriaudeau; Fan Yang; Matthew Pediaditis; Manolis Tsiknakis

Automatic depression assessment based on visual cues is a rapidly growing research domain. The present exhaustive review of existing approaches as reported in over sixty publications during the last ten years focuses on image processing and machine learning algorithms. Visual manifestations of depression, various procedures used for data collection, and existing datasets are summarized. The review outlines methods and algorithms for visual feature extraction, dimensionality reduction, decision methods for classification and regression approaches, as well as different fusion strategies. A quantitative meta-analysis of reported results, relying on performance metrics robust to chance, is included, identifying general trends and key unresolved issues to be considered in future studies of automatic depression assessment utilizing visual cues alone or in combination with vocal or verbal cues.


MindCare/Fabulous | 2016

Stress Detection from Speech Using Spectral Slope Measurements

Olympia Simantiraki; Giorgos A. Giannakakis; Anastasia Pampouchidou; Manolis Tsiknakis

Automatic detection of emotional stress is an active research domain that has recently drawn increasing attention, mainly in the fields of computer science, linguistics, and medicine. In this study, stress is automatically detected by employing speech-derived features. Related studies utilize features such as overall intensity, MFCCs, the Teager Energy Operator, and pitch. The present study proposes a novel set of features based on the spectral tilt of the glottal source and of the speech signal itself. The proposed features rely on the Probability Density Function of the estimated spectral slopes, and consist of the three most probable slopes of the glottal source, as well as the corresponding three slopes of the speech signal, obtained at the word level. The performance of the proposed method is evaluated on the simulated dataset of the SUSAS corpus, achieving a recognition accuracy of 92.06% when the Random Forests classifier is used.
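A simplified sketch of the slope-based feature idea: fit a line to the log-magnitude spectrum of each frame, collect the per-frame slopes, and keep the most probable values from their empirical distribution. This is an assumption-heavy illustration; the paper works at the word level and also estimates the glottal source (not done here), and all function names and parameters below are invented for the sketch.

```python
import numpy as np

def frame_spectral_slope(frame, sr):
    """Slope of a first-order line fit to the log-magnitude spectrum (dB/Hz)."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    log_mag = 20 * np.log10(spec + 1e-10)    # small offset avoids log(0)
    slope, _ = np.polyfit(freqs, log_mag, 1)
    return slope

def most_probable_slopes(signal, sr, frame_len=512, hop=256, k=3, bins=50):
    """Histogram (empirical PDF) of per-frame slopes; return the k bin centers
    with the highest counts, mimicking the 'most probable slopes' idea."""
    slopes = [frame_spectral_slope(signal[i:i + frame_len], sr)
              for i in range(0, len(signal) - frame_len, hop)]
    counts, edges = np.histogram(slopes, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argsort(counts)[-k:][::-1]]

sr = 16000
t = np.arange(sr) / sr
speech_like = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)  # toy 1 s signal
feats = most_probable_slopes(speech_like, sr)               # 3 slope features
```

In the paper's setup the same extraction would be run twice, once on the inverse-filtered glottal source and once on the raw speech, giving the six-feature set the abstract describes.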


International Conference on Imaging Systems and Techniques | 2016

Automated characterization of mouth activity for stress and anxiety assessment

Anastasia Pampouchidou; Matthew Pediaditis; Franco Chiarugi; Kostas Marias; Panagiotis G. Simos; Fan Yang; Fabrice Meriaudeau; Manolis Tsiknakis

Non-verbal information portrayed by human facial expressions encompasses, apart from emotional cues, information relevant to psychophysical status. Mouth activities in particular have been found to correlate with signs of several conditions: depressed people smile less, while fatigued people yawn more. In this paper, we present a semi-automated, robust, and efficient algorithm for extracting mouth activity from video recordings based on Eigen-features and template matching. The algorithm was evaluated for mouth openings and mouth deformations on a minimum-specification dataset of 640×480 resolution and 15 fps. The extracted features were the signals of mouth expansion (openness estimation) and correlation (deformation estimation). The achieved classification accuracy reached 89.17%. A second series of experiments, for the preliminary evaluation of the proposed algorithm in assessing stress/anxiety, used an additional dataset. The proposed algorithm showed consistent performance across both datasets, indicating high robustness. Furthermore, normalized openings per minute and average openness intensity were extracted as video-based features, showing a significant difference between video recordings of stressed/anxious versus relaxed subjects.
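The correlation (deformation) signal can be sketched with plain normalized cross-correlation against a closed-mouth template. This is a hypothetical minimal version: the paper's Eigen-feature representation and mouth localization are omitted, and all names and data below are illustrative stand-ins.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a patch and a template (-1..1)."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def mouth_correlation_signal(frames, template):
    """Per-frame correlation of the mouth region against a closed-mouth
    template; the value drops when the mouth deforms (opens, smiles, yawns)."""
    return np.array([ncc(f, template) for f in frames])

rng = np.random.default_rng(1)
template = rng.random((20, 30))          # stand-in for a closed-mouth crop
frames = [template + 0.05 * rng.random((20, 30)) for _ in range(5)]
frames.append(rng.random((20, 30)))      # one strongly deformed frame
signal = mouth_correlation_signal(frames, template)
```

Thresholding or tracking dips in such a signal is one simple way to turn per-frame correlations into the mouth-deformation events the abstract counts.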


Conference of the International Speech Communication Association | 2017

Glottal Source Features for Automatic Speech-Based Depression Assessment.

Olympia Simantiraki; Paulos Charonyktakis; Anastasia Pampouchidou; Manolis Tsiknakis; Martin Cooke


Special Session on Signals and Signs Understanding for Personalized Guidance to Promote Healthy Lifestyles | 2016

Facial Signs and Psycho-physical Status Estimation for Well-being Assessment

Franco Chiarugi; Galateia Iatraki; Eirini Christinaki; Dimitris Manousos; Giorgos A. Giannakakis; Matthew Pediaditis; Anastasia Pampouchidou; Kostas Marias; Manolis Tsiknakis


International Conference of the IEEE Engineering in Medicine and Biology Society | 2017

Facial geometry and speech analysis for depression detection

Anastasia Pampouchidou; Olympia Simantiraki; Calliope-Marina Vazakopoulou; Charikleia Chatzaki; Matthew Pediaditis; Anna Maridaki; Kostas Marias; Panagiotis G. Simos; Fan Yang; Fabrice Meriaudeau; Manolis Tsiknakis

Collaboration


Dive into Anastasia Pampouchidou's collaborations.

Top Co-Authors

Manolis Tsiknakis, Technological Educational Institute of Crete
Fan Yang, University of Burgundy
Anna Maridaki, Technological Educational Institute of Crete
Calliope-Marina Vazakopoulou, Technological Educational Institute of Crete
Alexandros Roniotis, Technological Educational Institute of Crete