Theodoros Kostoulas
University of Patras
Publications
Featured research published by Theodoros Kostoulas.
Journal of Mental Health | 2012
Fernando Fernández-Aranda; Susana Jiménez-Murcia; Juan José Santamaría; Katarina Gunnard; Antonio Soto; Elias Kalapanidas; Richard Bults; Costas Davarakis; Todor Ganchev; Roser Granero; Dimitri Konstantas; Theodoros Kostoulas; Tony Lam; Mikkel Lucas; Cristina Masuet-Aumatell; Maher H. Moussa; Jeppe Nielsen; Eva Penelo
Background: Previous review studies have suggested that computer games can serve as an alternative or additional form of treatment in several areas (schizophrenia, asthma or motor rehabilitation). Although several naturalistic studies have shown the usefulness of serious video games in the treatment of some abnormal behaviours, there is a lack of serious games specifically designed for treating mental disorders. Aim: The purpose of our project was to develop and evaluate a serious video game designed to remediate attitudinal, behavioural and emotional processes of patients with impulse-related disorders. Method and results: The video game was created and developed within the European research project PlayMancer. It aims to demonstrate the potential to change underlying attitudinal, behavioural and emotional processes in patients with impulse-related disorders. New interaction modes were provided by newly developed components, such as emotion recognition from speech, face and physiological reactions, while specific impulsive reactions were elicited. The video game uses biofeedback to help patients learn relaxation skills, acquire better self-control strategies and develop new emotion regulation strategies. In this article, we present a description of the video game, its rationale, user requirements, usability and preliminary data in several mental disorders.
Expert Systems With Applications | 2012
Theodoros Kostoulas; Iosif Mporas; Otilia Kocsis; Todor Ganchev; Nikos Katsaounos; Juan José Santamaría; Susana Jiménez-Murcia; Fernando Fernández-Aranda; Nikos Fakotakis
We describe a novel design, implementation and evaluation of a speech interface, as part of a platform for the development of serious games. The speech interface consists of a speech recognition component and an emotion recognition from speech component. It relies on a platform designed and implemented to support the development of serious games for the cognitive-based treatment of patients with mental disorders. The implementation of the speech interface is based on the Olympus/RavenClaw framework, which has been extended for the needs of the specific serious games and the respective application domain by integrating new components, such as emotion recognition from speech. The evaluation of the speech interface utilized a purposely collected domain-specific dataset. The speech recognition experiments show that emotional speech moderately affects the performance of the speech interface. Furthermore, the emotion detectors demonstrated satisfactory performance for the emotion states of interest, Anger and Boredom, and contributed towards successful modelling of the patients' emotional status. A recent evaluation of the serious games showed that the patients started to develop new coping styles for negative emotions in normal stressful life situations.
Text, Speech and Dialogue | 2010
Theodoros Kostoulas; Todor Ganchev; Alexandros Lazaridis; Nikos Fakotakis
In the present work we aim at performance optimization of a speaker-independent emotion recognition system through a speech feature selection process. Specifically, relying on the speech feature set defined in the Interspeech 2009 Emotion Challenge, we studied the relative importance of the individual speech parameters and, based on their ranking, selected a subset of speech parameters that offered advantageous performance. The affect-emotion recognizer utilized here relies on a GMM-UBM-based classifier. In all experiments, we followed the experimental setup defined by the Interspeech 2009 Emotion Challenge, utilizing the FAU Aibo Emotion Corpus of spontaneous, emotionally coloured speech. The experimental results indicate that the correct choice of the speech parameters can lead to better performance than the baseline.
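The per-feature ranking step described above can be illustrated with a univariate criterion. This is only a sketch under assumptions: the paper does not specify its ranking measure, so a Fisher-style between/within-class variance ratio is used here as a stand-in, with hypothetical toy data.

```python
import numpy as np

def fisher_score(X, y):
    """Rank features by a per-feature Fisher criterion:
    between-class variance over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

def select_top_k(X, y, k):
    """Keep the indices of the k highest-ranked feature columns."""
    ranking = np.argsort(fisher_score(X, y))[::-1]
    return ranking[:k]

# Hypothetical data: feature 0 separates the classes, feature 1 is noise.
X = np.array([[0.0, 5.0], [0.1, 1.0], [5.0, 5.0], [5.1, 1.0]])
y = np.array([0, 0, 1, 1])
selected = select_top_k(X, y, k=1)  # → array([0])
```

The selected column indices would then be used to slice the full Challenge feature set before training the GMM-UBM classifier.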
Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction | 2008
Theodoros Kostoulas; Todor Ganchev; Nikos Fakotakis
In the present work we report results from ongoing research activity in the area of speaker-independent emotion recognition. Experiments examine the behavior of a detector of negative emotional states over non-acted and acted speech. Furthermore, a score-level fusion of two classifiers at the utterance level is applied, in an attempt to improve the performance of the emotion recognizer. Experimental results demonstrate significant differences in recognizing emotions in acted versus real-world speech.
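Score-level fusion at the utterance level can be sketched as a weighted combination of the two classifiers' scores. This is a minimal illustration, not the paper's exact scheme: the fusion weight, the min-max score normalisation, and the decision threshold are all assumptions.

```python
import numpy as np

def fuse_scores(scores_a, scores_b, weight=0.5):
    """Weighted sum of two classifiers' per-utterance scores,
    after min-max normalising each score stream to [0, 1]."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)
    return weight * norm(scores_a) + (1.0 - weight) * norm(scores_b)

def detect_negative(scores_a, scores_b, threshold=0.5, weight=0.5):
    """Flag an utterance as carrying negative emotion when the
    fused score exceeds the decision threshold."""
    return fuse_scores(scores_a, scores_b, weight) > threshold

# Hypothetical scores from two classifiers over two utterances.
fused = fuse_scores([0.0, 10.0], [0.0, 1.0])       # → array([0., 1.])
decisions = detect_negative([0.0, 10.0], [0.0, 1.0])  # → [False, True]
```

Normalising each stream first matters because the two classifiers may produce scores on very different scales.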
PLOS ONE | 2014
Salomé Tárrega; Ana B. Fagundo; Susana Jiménez-Murcia; Roser Granero; Cristina Giner-Bartolomé; Laura Forcano; Isabel Sánchez; Juan José Santamaría; Maher Ben-Moussa; Nadia Magnenat-Thalmann; Dimitri Konstantas; Mikkel Lucas; Jeppe Lund Nielsen; Richard Bults; Tony Lam; Theodoros Kostoulas; Nikos Fakotakis; Nadine Riesco; Ines Wolz; Josep Comin-Colet; Valentina Cardi; Janet Treasure; José Antonio Fernández-Formoso; José M. Menchón; Fernando Fernández-Aranda
Expression of emotional state is considered to be a core facet of an individual's emotional competence. Emotional processing in bulimia nervosa (BN) has not often been studied and has not been considered from a broad perspective. This study aimed at examining implicit and explicit emotional expression in BN patients, in the acute state and after recovery. Sixty-three female participants were included: 22 BN, 22 recovered BN (R-BN), and 19 healthy controls (HC). The clinical cases were drawn from consecutive admissions and diagnosed according to DSM-IV-TR diagnostic criteria. Self-reported (explicit) emotional expression was measured with the State-Trait Anger Expression Inventory-2, the State-Trait Anxiety Inventory, and the Symptom Check List-90 items-Revised. Emotional facial expression (implicit) was recorded by means of an integrated camera (by detecting Facial Feature Tracking) during a 20-minute therapeutic video game. In the acute illness, explicit emotional expression [anxiety (p<0.001) and anger (p<0.05)] was increased. In the recovered group this was decreased to an intermediate level between the acute illness and healthy controls [anxiety (p<0.001) and anger (p<0.05)]. In the implicit measurement of emotional expression, patients with acute BN expressed more joy (p<0.001) and less anger (p<0.001) than both healthy controls and those in the recovered group. These findings suggest that there are differences in the implicit and explicit emotional processing in BN, which are significantly reduced after recovery, suggesting an improvement in emotion regulation.
International Conference on Tools with Artificial Intelligence | 2007
Iosif Mporas; Todor Ganchev; Mihalis Siafarikas; Theodoros Kostoulas
In this work, we present a comparative evaluation of the practical value of some recently proposed speech parameterizations on the speech recognition task. Specifically, in a common experimental setup we evaluate recent discrete wavelet-packet transform (DWPT)-based speech features against traditional techniques, such as the Mel-frequency cepstral coefficients (MFCC) and perceptual linear predictive (PLP) cepstral coefficients that presently dominate the speech recognition field. The relative ranking of eleven sets of speech features is presented.
Quality of Multimedia Experience | 2015
Theodoros Kostoulas; Guillaume Chanel; Michal Muszynski; Patrizia Lombardo; Thierry Pun
Affective computing is an important research area of computer science, with strong ties to the humanities in particular. In this work we detail recent research activities towards determining moments of aesthetic importance in movies on the basis of the reactions of multiple spectators. These reactions correspond to the multimodal reaction profile of a group of people and are computed from their physiological and behavioral signals. The highlight identification system using the reaction profile is evaluated against annotated aesthetic moments. The proposed architecture shows significant ability to determine moments of aesthetic importance, despite the challenges resulting from its operation in an ecological situation, i.e. real-life recordings of the reactions of spectators watching a film in a movie theater.
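The idea of a group-level reaction profile can be sketched as follows. This is a simplified assumption-laden illustration, not the paper's pipeline: each spectator's signal is z-scored, the frame-wise mean across spectators forms the profile, and frames where the profile deviates strongly are flagged as candidate highlights.

```python
import numpy as np

def reaction_profile(signals):
    """signals: (n_spectators, n_frames) array of per-spectator
    physiological/behavioral measurements. The group profile is the
    frame-wise mean after z-scoring each spectator's signal."""
    z = (signals - signals.mean(axis=1, keepdims=True)) / \
        (signals.std(axis=1, keepdims=True) + 1e-12)
    return z.mean(axis=0)

def highlight_frames(signals, n_std=1.0):
    """Flag frames where the group profile deviates from its own
    mean by more than n_std standard deviations."""
    p = reaction_profile(signals)
    return np.where(np.abs(p - p.mean()) > n_std * p.std())[0]

# Hypothetical recording: 3 spectators, 10 frames, a shared
# reaction spike at frame 5.
signals = np.zeros((3, 10))
signals[:, 5] = 10.0
peaks = highlight_frames(signals)  # → array([5])
```

Per-spectator z-scoring is the key step: it keeps one physiologically reactive spectator from dominating the group profile.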
Panhellenic Conference on Informatics | 2009
Iosif Mporas; Todor Ganchev; Theodoros Kostoulas; Katia Lida Kermanidis; Nikos Fakotakis
In the present work we study the performance of a speech recognizer for the Greek language, in a smart-home environment. This recognizer operates in spoken interaction scenarios, where the users are able to control various home appliances. In contrast to command and control systems, in our application the users speak spontaneously, beyond the use of a standardized set of isolated commands. The operational performance was tested over various environmental conditions, for two different types of microphones. In all experiments, regardless of the difference in the word error rates obtained for different scenarios, a task completion rate of 100% was observed.
International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems | 2008
Theodoros Kostoulas; Iosif Mporas; Todor Ganchev; Nikos Fakotakis
The present work studies the effect of emotional speech on a smart-home application. Specifically, we evaluate the recognition performance of the automatic speech recognition component of a smart-home dialogue system for various categories of emotional speech. The experimental results reveal that word recognition rate for emotional speech varies significantly across different emotion categories.
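Word recognition rate comparisons like the one above rest on the standard word error rate, computed via word-level edit distance. A minimal pure-Python sketch (the utterances below are hypothetical):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words (substitutions + insertions +
    deletions) divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Two substitutions against a 4-word reference → WER = 0.5
wer = word_error_rate("turn on the lights", "turn off the light")
```

Comparing such WER values per emotion category is what reveals the variation the abstract reports.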
International Conference on Tools with Artificial Intelligence | 2007
Theodoros Kostoulas; Todor Ganchev; Iosif Mporas; Nikos Fakotakis
In the present work we evaluate a detector of negative emotional states (DNES) that serves the purpose of enhancing a spoken dialogue system operating in a smart-home environment. The DNES component is based on Gaussian mixture models (GMMs) and a set of commonly used speech features. In a comprehensive performance evaluation, we utilized a well-known acted speech database and real-world speech recordings. The real-world speech was collected during the interaction of naive users with our smart-home spoken dialogue system. The experimental results show that the accuracy of recognizing negative emotions on the real-world data is lower than that reported when testing on the acted speech database, though still promising, considering that humans are often unable to distinguish the emotions of others from speech alone.
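The GMM-based detection decision can be sketched as a log-likelihood ratio between a "negative" model and a "neutral" model. As a simplifying assumption, each class here is a single diagonal-covariance Gaussian (a one-component stand-in for the paper's GMMs), and the toy 1-D feature values are hypothetical.

```python
import numpy as np

class DiagonalGaussian:
    """Single diagonal-covariance Gaussian: a one-component
    stand-in for a per-class GMM."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6  # floor to avoid division by zero
        return self

    def log_likelihood(self, x):
        return float(-0.5 * np.sum(
            np.log(2 * np.pi * self.var) + (x - self.mean) ** 2 / self.var))

def detect_negative_emotion(x, model_neg, model_neutral, threshold=0.0):
    """Decide 'negative' when the log-likelihood ratio exceeds threshold."""
    return model_neg.log_likelihood(x) - model_neutral.log_likelihood(x) > threshold

# Hypothetical 1-D features: negative speech clusters near 5, neutral near 0.
model_neg = DiagonalGaussian().fit(np.array([[4.9], [5.0], [5.1]]))
model_neu = DiagonalGaussian().fit(np.array([[-0.1], [0.0], [0.1]]))
is_negative = detect_negative_emotion(np.array([5.0]), model_neg, model_neu)  # → True
```

In practice each class model would be a multi-component GMM trained on frame-level speech features, with utterance-level scores obtained by summing frame log-likelihoods.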