Evgenios Vlachos
Aalborg University
Publications
Featured research published by Evgenios Vlachos.
international conference on social robotics | 2012
Evgenios Vlachos; Henrik Schärfe
This work presents a method for designing facial interfaces for sociable android robots with respect to the fundamental rules of human affect expression. Extending the work of Paul Ekman in a robotic direction, we follow the judgment-based approach for evaluating facial expressions to test in which case an android robot like the Geminoid|DK (a duplicate of an Original person) reveals emotions convincingly: when following an empirical perspective, or when following a theoretical one. The methodology includes the processes of acquiring the empirical data and gathering feedback on them. Our findings are based on the results derived from a number of judgments, and suggest that before programming the facial expressions of a Geminoid, the Original should pass through the proposed procedure. According to our recommendations, the facial expressions of an android should be tested by judges, even in cases where no Original is involved in the creation of the android face.
international conference on human-computer interaction | 2013
Evgenios Vlachos; Henrik Schärfe
Our society is on the borderline of the information era, experiencing a transition towards a robotic one. Humanoid and android robots are entering our everyday lives at a steady pace, taking up roles related to companionship, partnership, wellness, healthcare, and education, among others. The fusion of information technology, ubiquitous computing, robotics, and android science has generated the Geminoid Reality. The Geminoid is a teleoperated android robot, connected to a computer network, that works as a duplicate of an existing person. A motion-capture system tracks the facial expressions and head movements of the operator and transmits them to the robot, overriding at run time the preprogrammed configurations of the robot's actuators. The Geminoid Reality combines the Visual Reality (the users' and robot's points of view) with an Augmented one (the operator's point of view) into a new kind of mixed reality involving physical embodiment and representation, giving rise to the ownership transfer and blended presence phenomena.
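The run-time override described in the abstract can be sketched in a few lines. This is a minimal illustration, not the actual Geminoid control software; the actuator names and values are invented for the example.

```python
# Idle pose the robot holds when no operator data arrives (hypothetical values).
PREPROGRAMMED_POSE = {"brow": 0.2, "jaw": 0.0, "neck_pan": 0.0}

def next_actuator_command(mocap_frame, preprogrammed=PREPROGRAMMED_POSE):
    """Return the command sent to the robot's actuators.

    If the motion-capture system delivers a value for an actuator,
    the operator's movement overrides the preprogrammed configuration;
    actuators without operator data fall back to the idle pose.
    """
    command = dict(preprogrammed)   # start from the preprogrammed pose
    if mocap_frame:                 # operator data takes priority at run time
        command.update(mocap_frame)
    return command

# Operator raises the brow and turns the head; the jaw stays at its idle value.
print(next_actuator_command({"brow": 0.8, "neck_pan": 0.5}))
```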
International Journal of Social Robotics | 2016
Elizabeth Jochum; Evgenios Vlachos; Anja Christoffersen; Sally Grindsted Nielsen; Ibrahim A. Hameed; Zheng-Hua Tan
This paper describes an innovative approach for studying interaction between humans and care robots. Using live theatrical performance, we developed a play that depicts a plausible, future care scenario between a human and a socially assistive robot. We used an expanded version of the Godspeed Questionnaire to measure the audiences' reactions to the robot, the observed interactions between the human and the robot, and their overall reactions to the performance. We present our results and propose a methodology and guidelines for using applied theatre as a platform to study human-robot interaction (HRI). Unlike other HRI studies, the subject of our research is not the user who interacts with the robot but rather the audiences observing the HRI. We consider the technical and artistic challenges of designing and staging a believable care scenario that could potentially influence the perception and acceptance of care robots. This study marks a first step towards designing a robust framework for combining applied theatre with HRI research.
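Godspeed-style instruments score each subscale as the mean of its 5-point semantic-differential items. The sketch below shows that scoring step; the item names, subscale selection, and ratings are made up for illustration and are not taken from the paper's extended questionnaire.

```python
from statistics import mean

# Two example subscales with hypothetical item identifiers
# (the real Godspeed Questionnaire has five subscales and more items).
GODSPEED_SUBSCALES = {
    "anthropomorphism": ["fake_natural", "machinelike_humanlike"],
    "likeability": ["dislike_like", "unfriendly_friendly"],
}

def score_responses(responses, subscales=GODSPEED_SUBSCALES):
    """Average one respondent's 5-point item ratings into subscale scores."""
    return {name: mean(responses[item] for item in items)
            for name, items in subscales.items()}

# One invented respondent.
ratings = {"fake_natural": 4, "machinelike_humanlike": 3,
           "dislike_like": 5, "unfriendly_friendly": 4}
print(score_responses(ratings))
# e.g. {'anthropomorphism': 3.5, 'likeability': 4.5}
```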
international conference on social computing | 2014
Evgenios Vlachos; Henrik Schärfe
The topic of human-robot interaction (HRI) is an important part of human-computer interaction (HCI). Robots are increasingly used in a social context, and in this paper we try to formulate a research agenda concerning ethical issues around social HRI in order to be prepared for future scenarios where robots may be a naturally integrated part of human society. We outline different paradigms to describe the role of social robots in communication processes with humans, and connect HRI with the topic of persuasive technology in health care, to critically reflect on the potential benefits of using social robots as persuasive agents. The ability of a robotic system to conform to the demands (behaviors, understanding, roles, and tasks) that arise from the place in which the robot is designed to perform affects the user and his or her sense of place attachment. Places are constantly changing, and so do interactions; thus robotic systems should continually adjust to change by modifying their behavior accordingly.
robot and human interactive communication | 2015
Evgenios Vlachos; Henrik Schärfe
Expectation and intention understanding through nonverbal behavior is a key topic of interest in socially embedded robots. This study presents the results of an open-ended evaluation method pertaining to the interpretation of android facial expressions by adult subjects through an online survey with video stimuli. An open-ended question yields more spontaneous answers regarding the situations that can be associated with the synthetic emotional displays of an android face. The robot used was the Geminoid-DK, communicating the six basic emotions. The filtered results revealed situations highly relevant to the portrayed facial expressions for the emotions of Surprise, Fear, Anger, and Happiness, and less relevant for the emotions of Disgust and Sadness. Statistical analysis indicated a moderate degree of correlation between the emotions of Fear and Surprise, and a high degree of correlation between the pair Disgust-Sadness. With a set of facial expressions validated prior to nonverbal emotional communication, androids and other humanoids can convey more accurate messages to their interaction partners and overcome the limitations of their currently limited affective interfaces.
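The correlation analysis mentioned above can be illustrated with a plain Pearson coefficient over per-stimulus attribution counts. The counts below are invented for the example and do not reproduce the study's data.

```python
def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical counts of how often each emotion was attributed
# to the same five video stimuli.
disgust = [3, 7, 2, 8, 5]
sadness = [4, 6, 3, 9, 5]
print(round(pearson(disgust, sadness), 3))  # a high positive correlation
```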
robot and human interactive communication | 2015
Evgenios Vlachos; Elizabeth Jochum; Henrik Schärfe
This paper presents the results of a field experiment focused on the head orientation behavior of users in short-term dyadic interactions with a male android robot in a playful context, as well as on the duration of the interactions. The robotic trials took place in an art exhibition where participants approached the robot either in groups or alone, and were free to engage, or not, in conversation. Our initial hypothesis, that participants in groups would show increased rates of head turning behavior (since the turn-taking activity would include more participants) in contrast to those who came alone, was not confirmed. Analysis of the results indicated that, on the one hand, gender did not play any significant role in head orientation, a behavior tightly connected to attention direction, while on the other hand, female participants spent significantly more time with the robot than male participants. The findings suggest that androids have the ability to maintain the focus of attention during short-term interactions within a playful context, and that robots can be sufficiently studied in art settings. This study provides insight into how users communicate with an android robot, and into how to design meaningful human-robot social interaction for real-life situations.
Volume 3: Engineering Systems; Heat Transfer and Thermal Engineering; Materials and Tribology; Mechatronics; Robotics | 2014
Evgenios Vlachos; Henrik Schärfe
Humans have adjusted their space, their actions, and their performed tasks according to their morphology, abilities, and limitations. Thus, the properties of a social robot should fit within these predetermined boundaries when, and if, it is beneficial for the user and the notion of the task. On such occasions, android and humanoid hand models should have a similar structure, functions, and performance to the human hand. In this paper we present the anatomy and the key functionalities of the human hand, followed by a literature review on android/humanoid hands for grasping and manipulating objects, as well as prosthetic hands, in order to inform roboticists about the latest available technology and assist their efforts to describe the state of the art in this field.
agent and multi agent systems technologies and applications | 2015
Evgenios Vlachos; Henrik Schärfe
Using the face as their primary affective interface, android robots and other agents embody emotional facial expressions and convey messages about their identity, gender, age, race, and attractiveness. We examine whether androids can convey emotionally relevant information via their static facial signals, just as humans do. Based on the fact that social information can be accurately identified from still images of nonexpressive unknown faces, a judgment paradigm was employed to discover and compare the style of facial expressions of the Geminoid-DK android (modeled after an actual human) and its Original (the actual human). The emotional judgments were gathered through an online survey with video stimuli and questionnaires, following a forced-choice design. Analysis of the results indicated that the emotional judgments for the Geminoid-DK depend highly on the emotional judgments initially made for the Original, suggesting that androids inherit the same style of facial expression as their originals. Our findings support the case for designing android faces after specific actual persons who portray facial features that are familiar to the users and relevant to the notion of the robotic task, in order to increase the chance of sustaining a more emotional interaction.
intelligent networking and collaborative systems | 2012
Evgenios Vlachos
This work presents a method that provides adaptive support to those engaged in learning activities. It proposes a way of acquiring content knowledge in a specific domain by using Learning Objects (LOs) and suggests a pattern for designing and connecting these LOs to create a course. The Spiral-in Method (SiM) encloses pedagogical and didactic potential, addresses issues concerning both the educator and the group of learners, and implements personalization mechanisms. The methodology structures the design process into four distinct phases: fragmentation, coordination, combination, and grouping. The starting point for this quest for knowledge is estimated through the decomposition of a subject matter, combined with a series of questions that (1) set the goals and preferences of the learners and (2) extract information about their prior knowledge of the subject matter. According to the answers given, LOs are created and connected in a linear structure, like a spiral. The LOs are grouped together into lessons that attempt to satisfy short-term learning outcomes. The spiral has to be fully wrapped for mastery of the subject matter.
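A possible data model for the spiral structure described above: LOs produced by fragmentation are kept in a linear sequence and grouped into lessons, each tied to a short-term outcome. This is an illustrative sketch, not the paper's implementation; all names, fragments, and goals are invented.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    topic: str

@dataclass
class Lesson:
    outcome: str                                  # short-term learning outcome
    objects: list = field(default_factory=list)   # LOs grouped into this lesson

def build_spiral(fragments, goals, group_size=2):
    """Turn subject-matter fragments into LOs and group them into lessons.

    `fragments` comes from decomposing the subject matter (fragmentation);
    `goals` supplies one short-term outcome per lesson (grouping).
    """
    los = [LearningObject(topic) for topic in fragments]
    lessons = []
    for i, outcome in enumerate(goals):
        chunk = los[i * group_size:(i + 1) * group_size]
        lessons.append(Lesson(outcome, chunk))
    return lessons

# A toy course on discrete mathematics.
course = build_spiral(["sets", "relations", "functions", "graphs"],
                      ["define structures", "apply structures"])
print([lesson.outcome for lesson in course])
```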
International Journal of Social Robotics | 2018
Zheng-Hua Tan; Nicolai Bæk Thomsen; Xiaodong Duan; Evgenios Vlachos; Sven Ewan Shepstone; Morten Højfeldt Rasmussen; Jesper Lisby Højvang
We present one way of constructing a social robot such that it is able to interact with humans using multiple modalities. The robotic system is able to direct attention towards the dominant speaker using sound source localization and face detection; it is capable of identifying persons using face recognition and speaker identification; and it is able to communicate and engage in a dialog with humans by using speech recognition, speech synthesis, and different facial expressions. The software is built upon the open-source Robot Operating System (ROS) framework and is made publicly available. Furthermore, the electrical parts (sensors, laptop, base platform, etc.) are standard components, thus allowing the system to be replicated. The design of the robot is unique, and we justify why this design is suitable for our robot and its intended use. By making the software, hardware, and design accessible to everyone, we make research in social robotics available to a broader audience. To evaluate the properties and the appearance of the robot, we invited users to interact with it in pairs (active interaction partner/observer) and collected their responses via an extended version of the Godspeed Questionnaire. The results suggest an overall positive impression of the robot and the interaction experience, as well as significant differences in responses based on type of interaction and gender.
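One common way to fuse sound source localization with face detection for speaker attention is to turn toward the detected face whose bearing lies closest to the estimated sound direction. The sketch below shows that idea only; it is not the published ROS code, and the function name, tolerance, and angles are assumptions for the example.

```python
def attend_to_speaker(sound_bearing_deg, face_bearings_deg, tolerance=15.0):
    """Pick the detected face nearest the localized sound direction.

    Returns the chosen face bearing (degrees), or None if no face lies
    within `tolerance` degrees of the sound source.
    """
    if not face_bearings_deg:
        return None
    best = min(face_bearings_deg, key=lambda f: abs(f - sound_bearing_deg))
    return best if abs(best - sound_bearing_deg) <= tolerance else None

# Two faces detected at -30 and 20 degrees; speech localized near 25 degrees.
print(attend_to_speaker(25.0, [-30.0, 20.0]))  # → 20.0
```

A real system would also smooth the bearings over time and fall back to an idle gaze behavior when the speaker is lost.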