Stylianos Asteriadis
Maastricht University
Publications
Featured research published by Stylianos Asteriadis.
Multimedia Tools and Applications | 2009
Stylianos Asteriadis; Paraskevi K. Tzouveli; Kostas Karpouzis; Stefanos D. Kollias
Most e-learning environments that utilize user feedback or profiles collect such information through questionnaires, which very often yields incomplete answers and sometimes deliberately misleading input. In this work, we present a mechanism that compiles feedback related to the behavioral state of the user (e.g. level of interest) while reading an electronic document. This is achieved with a non-intrusive scheme that uses a simple web camera to detect and track head, eye and hand movements, and estimates the level of interest and engagement with a neuro-fuzzy network initialized from evidence from the Theory of Mind and trained on expert-annotated data. The user does not need to interact with the proposed system and can act as if she were not monitored at all. The proposed scheme is tested in an e-learning environment, in order to adapt the presentation of the content to the user profile and current behavioral state. Experiments in a testbed that tracks children's reading performance show that the proposed system detects reading- and attention-related user states very effectively.
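The mapping from observed cues to an interest estimate described above can be illustrated with a toy rule set. This is a minimal sketch, not the trained network from the paper: the cue names, weights, and rules below are purely illustrative stand-ins for the expert-annotated, Theory-of-Mind-initialized neuro-fuzzy model.

```python
import numpy as np

def interest_level(gaze_on_text, head_motion, hand_on_face):
    """Toy fuzzy-style estimate of reading interest from non-intrusive cues.

    All inputs are normalised to [0, 1]:
      gaze_on_text - fraction of time the gaze stays on the document
      head_motion  - amount of head movement (restlessness)
      hand_on_face - fraction of time a hand rests on the face
    Returns an interest score in [0, 1].
    """
    # Hand-tuned rules standing in for the trained neuro-fuzzy network:
    # steady gaze raises interest; restlessness and a hand resting on the
    # face (treated here as a disengagement cue) lower it.
    score = (0.6 * gaze_on_text
             + 0.25 * (1.0 - head_motion)
             + 0.15 * (1.0 - hand_on_face))
    return float(np.clip(score, 0.0, 1.0))

print(interest_level(0.9, 0.1, 0.0))  # attentive reader: high score
print(interest_level(0.2, 0.8, 0.9))  # distracted reader: low score
```

In the actual system the rule weights would come from training on annotated data rather than being fixed by hand.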
Artificial Intelligence Applications and Innovations | 2007
George Caridakis; Ginevra Castellano; Loic Kessous; Amaryllis Raouzaiou; Lori Malatesta; Stylianos Asteriadis; Kostas Karpouzis
In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality. Then the data were fused at the feature level and at the decision level. Fusing the multimodal data substantially increased recognition rates compared with the unimodal systems: the multimodal approach gave an improvement of more than 10% over the most successful unimodal system. Furthermore, fusion at the feature level yielded better results than fusion at the decision level.
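The two fusion schemes compared above can be sketched as follows. This is an illustrative toy, not the paper's setup: a nearest-centroid scorer stands in for the Bayesian classifier, and the two synthetic "modalities" replace the real facial, gestural and speech features.

```python
import numpy as np

def centroid_posteriors(train_X, train_y, test_X):
    """Toy per-class scores from distances to class centroids (a stand-in
    for the Bayesian classifier used in the paper)."""
    classes = np.unique(train_y)
    cents = np.stack([train_X[train_y == c].mean(0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - cents[None], axis=-1)
    scores = np.exp(-d)                      # closer centroid -> higher score
    return scores / scores.sum(1, keepdims=True), classes

# Two synthetic modalities (e.g. "face" and "speech" features) for 2 classes.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
face = rng.normal(y[:, None], 0.3, (40, 4))
speech = rng.normal(y[:, None] * 2, 0.5, (40, 3))

# Feature-level fusion: concatenate modality features, then classify once.
fused = np.concatenate([face, speech], axis=1)
p_feat, classes = centroid_posteriors(fused, y, fused)

# Decision-level fusion: classify each modality, then combine posteriors.
p_face, _ = centroid_posteriors(face, y, face)
p_speech, _ = centroid_posteriors(speech, y, speech)
p_dec = (p_face + p_speech) / 2

print("feature-level accuracy:", (classes[p_feat.argmax(1)] == y).mean())
print("decision-level accuracy:", (classes[p_dec.argmax(1)] == y).mean())
```

The structural difference is the point: feature-level fusion lets one classifier see cross-modal correlations, while decision-level fusion only combines per-modality opinions.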
Pattern Recognition | 2009
Stylianos Asteriadis; Nikos Nikolaidis; Ioannis Pitas
A novel method for eye and mouth detection and for eye-center and mouth-corner localization, based on geometrical information, is presented in this paper. First, a face detector is applied to detect the facial region, and the edge map of this region is calculated. The distance vector field of the face is then extracted by assigning to every facial image pixel a vector pointing to the closest edge pixel. The x and y components of these vectors are used to detect the eye and mouth regions. Luminance information is used for eye-center localization, after removing unwanted effects such as specular highlights, whereas the hue channel of the lip area is used for the detection of the mouth corners. The proposed method has been tested on the XM2VTS and BioID databases, with very good results.
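The distance vector field construction described above can be sketched in a few lines of numpy. The brute-force nearest-edge search below is for clarity only (a real implementation would use a distance transform); the function and variable names are illustrative.

```python
import numpy as np

def distance_vector_field(edge_map):
    """For every pixel, the (dy, dx) vector to the closest edge pixel.

    edge_map: 2-D boolean array, True where an edge was detected.
    Returns an (H, W, 2) array of offsets; edge pixels map to (0, 0).
    """
    edge_coords = np.argwhere(edge_map)           # (E, 2) edge-pixel positions
    h, w = edge_map.shape
    pixels = np.indices((h, w)).reshape(2, -1).T  # (H*W, 2) all positions
    # Brute-force squared distance from every pixel to every edge pixel.
    d2 = ((pixels[:, None, :] - edge_coords[None, :, :]) ** 2).sum(-1)
    nearest = edge_coords[d2.argmin(axis=1)]      # closest edge per pixel
    return (nearest - pixels).reshape(h, w, 2)

# Toy edge map with a single edge pixel at row 0, column 2.
edges = np.zeros((3, 3), dtype=bool)
edges[0, 2] = True
dvf = distance_vector_field(edges)
print(dvf[2, 0])  # offset from pixel (2, 0) to the edge pixel at (0, 2)
```

The x and y components of such a field (the two channels of the returned array) are what the method scans to locate the eye and mouth regions.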
Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots | 2009
Stylianos Asteriadis; Dimitris Soufleros; Kostas Karpouzis; Stefanos D. Kollias
We present a new dataset, ideal for testing head pose and eye gaze estimation algorithms. The dataset was recorded with a monocular system, and no information regarding camera or environment parameters is provided, making it ideal for testing algorithms that do not utilize such information and require no specialized hardware.
IEEE Transactions on Systems, Man, and Cybernetics | 2013
Noor Shaker; Stylianos Asteriadis; Georgios N. Yannakakis; Kostas Karpouzis
Estimating affective and cognitive states in conditions of rich human-computer interaction, such as in games, is a field of growing academic and commercial interest. Entertainment and serious games can benefit from recent advances in the field, since access to predictors of the current state of the player (or learner) provides useful information for feeding adaptation mechanisms that aim to maximize engagement or learning effects. In this paper, we introduce a large data corpus derived from 58 participants playing the popular Super Mario Bros platform game and attempt to create accurate models of player experience for this game genre. Features extracted from player gameplay behavior, from game levels, and from player visual characteristics are used as potential indicators of reported affect, expressed as pairwise preferences between different game sessions. Using neuroevolutionary preference learning and automatic feature selection, highly accurate models of reported engagement, frustration, and challenge are constructed (model accuracies reach 91%, 92%, and 88%, respectively). As a further step, the derived player experience models can be used to personalize the game level to desired levels of engagement, frustration, and challenge, as game content is mapped to player experience through the behavioral and expressivity patterns of each player.
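The pairwise-preference evaluation underlying these models can be sketched as follows. The scoring model, feature names, and weights are purely illustrative; the paper uses neuroevolutionary preference learning, not a fixed linear model.

```python
import numpy as np

def preference_accuracy(score_fn, pairs):
    """Fraction of pairwise preferences that a scoring model reproduces.

    pairs: (features_preferred, features_other) tuples, where the first
    session of each pair was reported as, e.g., more engaging.
    """
    correct = sum(score_fn(a) > score_fn(b) for a, b in pairs)
    return correct / len(pairs)

# Toy linear "engagement" model over gameplay features; the feature
# names and weights are purely illustrative.
w = np.array([0.8, -0.5, 0.3])            # e.g. [jumps, deaths, coins]
score = lambda x: float(w @ x)

pairs = [(np.array([5, 1, 3]), np.array([2, 4, 1])),
         (np.array([6, 0, 2]), np.array([1, 3, 0]))]
print(preference_accuracy(score, pairs))  # 1.0 when all pairs are ordered correctly
```

Training then amounts to searching for a model (in the paper, an evolved neural network over selected features) that maximizes exactly this pairwise agreement with the reported preferences.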
Journal on Multimodal User Interfaces | 2010
Christopher E. Peters; Stylianos Asteriadis; Kostas Karpouzis
This paper investigates the use of a gaze-based interface for testing simple shared attention behaviours during an interaction scenario with a virtual agent. The interface is non-intrusive, operating in real-time using a standard web-camera for input, monitoring users' head directions and resolving them to screen coordinates in real-time. We use the interface to investigate user perception of the agent's behaviour during a shared attention scenario. Our aim is to identify important factors to be considered when constructing engagement models, which must account not only for behaviour in isolation, but also for the context of the interaction, as is the case in shared attention situations.
International Journal of Computer Vision | 2014
Stylianos Asteriadis; Kostas Karpouzis; Stefanos D. Kollias
Estimating a person's focus of attention depends highly on his/her gaze direction. Here, we propose a new method for estimating visual focus of attention using head rotation, as well as fuzzy fusion of head rotation and eye gaze estimates, in a fully automatic manner, without the need for any special hardware or a priori knowledge regarding the user, the environment or the setup. Instead, the proposed system is designed to function under everyday conditions, using only simple hardware such as a normal web camera. Our system targets a human-computer interaction setting in which a person faces a monitor with a camera mounted on top. To this end, we propose two novel techniques, based on local and appearance information, for estimating head rotation, and we adaptively fuse them in a common framework. The system recognizes head rotational movement under translational movements of the user in any direction, without any knowledge or a priori estimate of the user's distance from the camera or of the camera's intrinsic parameters.
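The idea of fuzzily weighting the eye-gaze estimate against head rotation can be sketched with a toy membership function. The 45-degree frontal range, the additive head-plus-eye gaze model, and the confidence input are all illustrative assumptions, not the paper's actual fuzzy rules.

```python
import numpy as np

def fuse_gaze(head_yaw, eye_yaw, eye_confidence):
    """Illustrative fuzzy fusion of head-rotation and eye-gaze estimates.

    head_yaw, eye_yaw: degrees (eye_yaw is the eye rotation within the head).
    eye_confidence: tracking reliability in [0, 1].
    Returns the fused gaze direction in degrees.
    """
    # Fuzzy membership for "head is frontal": eyes are trusted less as the
    # head turns away, since the tracked eye region degrades (the 45-degree
    # range is an illustrative choice, not the paper's).
    frontal = float(np.clip(1.0 - abs(head_yaw) / 45.0, 0.0, 1.0))
    w_eyes = frontal * eye_confidence
    return w_eyes * (head_yaw + eye_yaw) + (1.0 - w_eyes) * head_yaw

print(fuse_gaze(0.0, 10.0, 1.0))   # frontal head, reliable eyes: use eye gaze
print(fuse_gaze(45.0, 10.0, 1.0))  # large rotation: fall back to head pose
```

The design choice being illustrated is graceful degradation: when the eye estimate is unreliable, the fused output smoothly collapses to head pose alone.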
International Conference on Human Computer Interaction | 2009
Stylianos Asteriadis; Kostas Karpouzis; Stefanos D. Kollias
In this paper, we present our work on estimating a person's engagement with the information displayed on a computer monitor. Deciding whether a user is attentive or not, and frustrated or not, helps adapt the displayed information in special environments such as e-learning. The aim of the current work is to develop a method that works user-independently, does not necessitate special lighting conditions, and requires only a computer and a web camera in terms of hardware.
International Conference on Artificial Neural Networks | 2008
Stylianos Asteriadis; Kostas Karpouzis; Stefanos D. Kollias
Recognizing user attention in front of a monitor, or during a specific task, is a crucial issue in many applications, ranging from e-learning to driving. Visual input is very important for extracting information about a user's attention when recorded with a camera. However, intrusive equipment (special helmets, glasses equipped with cameras recording the eye movements, etc.) imposes constraints on users' spontaneity, especially when the target group consists of underage users. In this paper, we propose a system for inferring user attention (state) in front of a computer monitor using only a simple camera. The system can be used in real-time applications and needs no calibration in terms of camera parameters. It functions under normal lighting conditions and needs no per-user adaptation.
Proceedings of the 2010 Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2010
Stylianos Asteriadis; Kostas Karpouzis; Stefanos D. Kollias
Head pose, together with eye gaze, is a reliable indicator of the focus of attention of a person standing in front of a camera, with applications ranging from driver attention estimation to meeting environments. As a gaze indicator, eye gaze is most often difficult to detect in non-intrusive or non-specialized environments and, when detectable, must be combined with head pose. Moreover, successfully tracking the rotation angles of the head typically requires either a priori knowledge of the equipment setup parameters or specialized hardware that can be intrusive. Here, we propose a novel facial feature tracker that uses Distance Vector Fields (DVFs) and, combined with a new technique for face tracking, successfully detects facial feature positions over an image sequence and estimates head pose parameters. Our technique needs no a priori knowledge of camera or environmental parameters.