Efthymios Alepis
University of Piraeus
Publications
Featured research published by Efthymios Alepis.
International Conference on Information Technology: New Generations | 2008
George A. Tsihrintzis; Maria Virvou; Efthymios Alepis; Ioanna-Ourania Stathopoulou
In this paper, we investigate the possibility of improving the accuracy of visual-facial emotion recognition through the use of additional (complementary) keyboard-stroke information. The investigation is based on two empirical studies that we have conducted involving human subjects and human observers, concerned with the recognition of emotions from a visual-facial modality and from keyboard-stroke information, respectively. They were motivated by the relative shortage of previous empirical work on the strengths and weaknesses of each modality, which must be understood before we can determine the extent to which keyboard-stroke information complements and improves the emotion recognition accuracy of the visual-facial modality. Specifically, our research focused on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger and disgust, as well as the emotionless state which we refer to as neutral. We have found that the visual-facial modality may allow the recognition of certain states, such as neutral and surprise, with sufficient accuracy. However, its accuracy in recognizing anger and happiness can be improved significantly if assisted by keyboard-stroke information.
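The fusion step the abstract alludes to can be pictured as a late combination of per-emotion confidence scores from the two channels. The following Python sketch is purely illustrative: the weights, function names, and the assumption that each recognizer outputs a confidence per emotional state are ours, not the paper's.

```python
# A minimal late-fusion sketch, assuming each modality yields a
# confidence score per emotional state. Weights are illustrative.

EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

def fuse(visual: dict, keyboard: dict, w_visual: float = 0.6) -> str:
    """Combine two per-emotion confidence distributions by weighted
    average and return the most likely emotional state."""
    w_keyboard = 1.0 - w_visual
    combined = {
        e: w_visual * visual.get(e, 0.0) + w_keyboard * keyboard.get(e, 0.0)
        for e in EMOTIONS
    }
    return max(combined, key=combined.get)

# Example: the visual channel favors "surprise", the keyboard channel
# favors "anger"; fusion arbitrates between them.
visual = {"surprise": 0.5, "anger": 0.2, "neutral": 0.3}
keyboard = {"anger": 0.7, "neutral": 0.2, "surprise": 0.1}
print(fuse(visual, keyboard))  # "anger" under these illustrative values
```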
Expert Systems With Applications | 2011
Efthymios Alepis; Maria Virvou
Web-based education is particularly appropriate for remote teaching and learning at any time and place, away from classrooms, and does not necessarily require the presence of a human instructor. The need for time and place independence is even greater in some cases, such as for medical instructors, who are usually doctors that have to treat patients on top of their tutoring duties. However, this independence from real teachers and classrooms may negatively influence students, who may feel deprived of the benefits of human-human interaction. In this paper we describe a novel approach for incorporating affective characteristics into e-learning through an authoring tool. The authoring tool incorporates and adapts principles of a cognitive theory for modeling possible emotional states that a tutoring agent may use for educational purposes. Medical instructors may use this authoring tool to create their own educational characters that will interact affectively with their students in the e-learning environment.
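To make the authoring idea concrete, one can imagine the instructor defining rules that map a recognized student emotion to a behaviour of the animated agent. The rule format and all names below are hypothetical; the paper builds on a cognitive theory of emotions whose details the abstract does not spell out.

```python
# A hedged sketch of what instructor-authored affective rules might
# look like; purely illustrative, not the paper's actual tool.

from dataclasses import dataclass

@dataclass
class AgentBehaviour:
    emotion: str    # emotional state the animated agent displays
    utterance: str  # what the agent says to the student

# Instructor-authored rules: recognized student state -> agent response.
AUTHORED_RULES = {
    "sadness":   AgentBehaviour("sympathy", "Don't worry, let's try an easier example."),
    "anger":     AgentBehaviour("calm", "Take your time; we can revisit this topic."),
    "happiness": AgentBehaviour("happiness", "Well done! Ready for the next exercise?"),
}

def respond(student_emotion: str) -> AgentBehaviour:
    # Fall back to a neutral behaviour for states with no authored rule.
    return AUTHORED_RULES.get(student_emotion,
                              AgentBehaviour("neutral", "Let's continue."))
```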
Knowledge-Based Systems | 2010
Ioanna-Ourania Stathopoulou; Efthymios Alepis; George A. Tsihrintzis; Maria Virvou
Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger and disgust, as well as the emotionless state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
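Since the studies above report that each modality recognizes some states better than others, a natural refinement of simple fusion is to weight the modalities differently per emotion. The sketch below assumes such per-emotion weights; the numeric values are invented for illustration and are not the paper's measurements.

```python
# Per-emotion fusion sketch: the visual-facial weight varies by state,
# the keyboard weight is its complement. Values are illustrative only.

EMOTIONS = ["neutral", "happiness", "sadness", "surprise", "anger", "disgust"]

# Hypothetical visual-facial weight per emotion, e.g. neutral and
# surprise lean on the visual channel, anger on the keyboard channel.
W_VISUAL = {"neutral": 0.8, "surprise": 0.8, "happiness": 0.4,
            "anger": 0.3, "sadness": 0.5, "disgust": 0.5}

def fuse_per_emotion(visual: dict, keyboard: dict) -> dict:
    """Return combined per-emotion scores using per-emotion weights."""
    return {
        e: W_VISUAL[e] * visual.get(e, 0.0)
           + (1.0 - W_VISUAL[e]) * keyboard.get(e, 0.0)
        for e in EMOTIONS
    }
```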
Archive | 2008
Efthymios Alepis; Maria Virvou; Katerina Kabassi; Dimitriou St
This chapter presents an affective bi-modal Intelligent Tutoring System (ITS) with emphasis on the early stages of its creation. Affective ITSs are expected to provide a more human-like interaction between students and educational software. The ITS is named Edu-Affe-Mikey and its tutoring domain is Medicine. The two modes of interaction presented in this chapter are the keyboard and the microphone. Emotions of students are recognized by each modality separately and then evidence from the two modalities is combined through a decision making theory. After emotion recognition has been performed, Edu-Affe-Mikey dynamically adapts its tutoring behaviour by selecting an appropriate emotion for an animated tutoring agent. In this respect, the student's interaction with the ITS becomes both affective and adaptive, since the system performs both affect recognition and affect generation.
Intelligent Decision Technologies | 2007
Efthymios Alepis; Maria Virvou; Katerina Kabassi
This paper presents the development process of an Intelligent Tutoring System that performs emotion recognition on the basis of two modalities: keyboard and microphone. The system uses a multi-criteria theory to combine information about the user's emotions from the two modalities. In this paper we focus on presenting and discussing the experimental studies that were conducted for the purposes of the design of the system. The experimental studies yielded results concerning the way human observers recognize emotions in other humans; we then show how these results were used to design the reasoning mechanism of the Intelligent Tutoring System that performs emotion recognition.
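The abstract names a multi-criteria theory without specifying which one. As a hedged illustration, the sketch below assumes Simple Additive Weighting (SAW), a common multi-criteria method: each candidate emotion is scored as a weighted sum over criteria derived from the keyboard and microphone evidence. The criteria names and weights are assumptions, not the paper's.

```python
# SAW sketch: score each candidate emotion as a weighted sum of
# normalized criterion values. Criteria and weights are illustrative.

CRITERIA_WEIGHTS = {"keyboard_evidence": 0.45, "audio_evidence": 0.55}

def saw_score(criteria_values: dict) -> float:
    """Weighted sum of criterion values, each assumed to lie in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in criteria_values.items())

candidates = {
    "anger":   {"keyboard_evidence": 0.7, "audio_evidence": 0.6},
    "sadness": {"keyboard_evidence": 0.2, "audio_evidence": 0.5},
}
best = max(candidates, key=lambda e: saw_score(candidates[e]))
print(best)  # "anger" under these illustrative values
```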
International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2007
Maria Virvou; George A. Tsihrintzis; Efthymios Alepis; Ioanna-Ourania Stathopoulou; Katerina Kabassi
In this paper, we present and discuss two empirical studies that we have conducted involving human subjects and human observers concerning the recognition of emotions from audio-lingual and visual-facial modalities. Many researchers agree that these modalities are complementary to each other and that the combination of the two can improve the accuracy in affective user models. However, there is a shortage of empirical work concerning the strengths and weaknesses of each modality so that more accurate recognizers can be built. In our research, we have investigated the recognition of emotions from the above mentioned modalities with respect to six basic emotional states, namely happiness, sadness, surprise, anger and disgust, as well as the emotionless state which we refer to as neutral. We have found that certain states, such as neutral, happiness and surprise, are more clearly recognized from the visual-facial modality, whereas sadness and disgust are more clearly recognized from the audio-lingual modality.
International Conference on Tools with Artificial Intelligence | 2010
Efthymios Alepis; Ioanna-Ourania Stathopoulou; Maria Virvou; George A. Tsihrintzis; Katerina Kabassi
Towards building a multimodal affect recognition system, we have built a facial expression recognition system and an audio-lingual affect recognition system. In this paper, we present and discuss the development and evaluation process of the two subsystems, concerning the recognition of emotions from audio-lingual and visual-facial modalities. Many researchers agree that these modalities are complementary to each other and that the combination of the two can improve the accuracy in affective user models. Therefore, in this paper we present a combination of the two modes using multi-criteria decision making theories. The resulting system takes advantage of the strengths of each mode and is more accurate in emotion recognition.
International Journal of Interactive Mobile Technologies (iJIM) | 2009
Efthymios Alepis; Maria Virvou; Katerina Kabassi
One important field where mobile technology can make significant contributions is education. However, one criticism of mobile education is that students receive impersonal teaching. Affective computing may offer a solution to this problem. In this paper we describe an affective bi-modal educational system for mobile devices. Specifically, we describe a novel approach for combining information from two modalities, namely the keyboard and the microphone, through a multi-criteria decision making theory.
Multimedia Tools and Applications | 2012
Efthymios Alepis; Maria Virvou
In this paper, we investigate an object oriented (OO) architecture for multimodal emotion recognition in interactive applications through mobile phones or handheld devices. Mobile phones differ from desktop computers in that they do not perform any emotion recognition processing themselves, whereas desktop computers can. In our approach, mobile phones therefore pass all collected data to a server, which then performs the emotion recognition. The object oriented architecture that we have created combines evidence from multiple modalities of interaction, namely the mobile device's keyboard and the mobile device's microphone, as well as data from emotion stereotypes. Moreover, the OO method classifies them into well-structured objects with their own properties and methods. The resulting emotion detection server is capable of using and handling information transmitted from different mobile sources of multimodal data during human-computer interaction. As a test bed for the affective mobile interaction we have used an educational application that is incorporated into the mobile system.
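The object-oriented decomposition described here can be sketched as data classes for each modality plus a server-side detector that fuses them. All class and method names below are assumptions for illustration; the paper does not publish its actual API, and the fusion logic shown is a placeholder.

```python
# A structural sketch, assuming modality data is modeled as objects
# the mobile client transmits and a server-side detector consumes.

from dataclasses import dataclass, field

@dataclass
class ModalityData:
    """Base class for data a mobile client sends to the server."""
    user_id: str
    timestamp: float

@dataclass
class KeyboardData(ModalityData):
    typing_speed: float = 0.0    # characters per second
    backspace_rate: float = 0.0  # corrections per keystroke

@dataclass
class MicrophoneData(ModalityData):
    mean_pitch_hz: float = 0.0
    mean_energy: float = 0.0

@dataclass
class EmotionStereotype:
    """Prior expectations about a user group's emotional tendencies."""
    priors: dict = field(default_factory=dict)

class EmotionDetectionServer:
    """Performs the recognition that the phones themselves do not."""

    def detect(self, kb: KeyboardData, mic: MicrophoneData,
               stereotype: EmotionStereotype) -> str:
        # Placeholder fusion; a real system would score all six states.
        if kb.backspace_rate > 0.3 and mic.mean_energy > 0.7:
            return "anger"
        return max(stereotype.priors, key=stereotype.priors.get,
                   default="neutral")
```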
New Directions in Intelligent Interactive Multimedia | 2008
Efthymios Alepis; Maria Virvou; Katerina Kabassi
This paper describes a novel research approach for affective reasoning that aims at recognizing user emotions within an educational application. The approach is based on information about users that arises from two modalities (keyboard and microphone) and is processed through a combination of the user stereotypes theory and a decision making theory. The resulting system is called Educational Affective Tutor (EAT). EAT is an educational system that helps students learn geography and supports bi-modal interaction. The main focus of this paper is to show how affect recognition is designed based on an empirical study aimed at finding common user reactions that express users' feelings while they interact with computers.
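One plausible reading of combining stereotype theory with decision making is to treat the stereotype as a prior over emotional states and the two modalities as evidence. The sketch below assumes a simple multiplicative combination with renormalization; EAT's actual mechanism may differ, and all values are invented.

```python
# Stereotype-plus-evidence sketch: multiply a stereotype prior by
# per-modality evidence, then renormalize. Illustrative only.

STEREOTYPE_PRIOR = {"neutral": 0.4, "happiness": 0.2, "anger": 0.2,
                    "sadness": 0.1, "surprise": 0.05, "disgust": 0.05}

def combine(prior: dict, keyboard: dict, microphone: dict) -> dict:
    """Fuse prior and per-modality evidence into a normalized
    distribution; a small floor avoids zeroing out unseen states."""
    raw = {e: prior[e] * keyboard.get(e, 1e-3) * microphone.get(e, 1e-3)
           for e in prior}
    total = sum(raw.values())
    return {e: v / total for e, v in raw.items()}

scores = combine(STEREOTYPE_PRIOR,
                 keyboard={"anger": 0.6, "neutral": 0.3},
                 microphone={"anger": 0.5, "sadness": 0.4})
print(max(scores, key=scores.get))  # "anger" under these values
```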