Publication


Featured research published by Konstantinos Tsiakas.


Pervasive Technologies Related to Assistive Environments | 2015

A multimodal adaptive session manager for physical rehabilitation exercising

Konstantinos Tsiakas; Manfred Huber; Fillia Makedon

Physical exercise is an essential part of any rehabilitation plan. The subject must commit to a daily exercising routine, as well as to frequent contact with the therapist. Rehabilitation plans can be quite expensive and time-consuming. On the other hand, tele-rehabilitation systems can be helpful and efficient for both subjects and therapists. In this paper, we present ReAdapt, an adaptive module for a tele-rehabilitation system that takes into consideration the progress and performance of the exercising, utilizing multi-sensing data, and adjusts the session difficulty, resulting in a personalized session. Multimodal data such as speech, facial expressions and body motion are collected during exercising and fed to the system, which decides on the exercise and session difficulty. We formulate the problem as a Markov Decision Process and apply a Reinforcement Learning algorithm to train and evaluate the system on simulated data.
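
The abstract gives no implementation details, but a minimal sketch of the general approach it describes (a difficulty-adaptation MDP trained with tabular Q-learning against a simulated user) might look like the following; all state, action, and reward definitions here are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: adapting session difficulty as a tabular Q-learning problem
# against a simulated user. All state, action, and reward definitions below are
# illustrative assumptions, not the paper's formulation.
import random
from collections import defaultdict

ACTIONS = (-1, 0, 1)            # decrease, keep, or increase session difficulty
DIFF_LEVELS = range(5)          # discretized difficulty levels 0..4

def simulate_user(difficulty):
    """Toy simulated user: performance (0-2) drops when difficulty is far from skill."""
    skill = 2
    return max(0, 2 - abs(difficulty - skill))

def reward(performance, difficulty):
    """Reward high performance, with a small bonus for sustaining higher difficulty."""
    return performance + 0.1 * difficulty

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(2000):
    difficulty = random.choice(list(DIFF_LEVELS))
    performance = simulate_user(difficulty)
    for step in range(10):                      # one simulated session
        state = (difficulty, performance)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        difficulty = min(max(difficulty + action, 0), max(DIFF_LEVELS))
        performance = simulate_user(difficulty)
        r = reward(performance, difficulty)
        next_state = (difficulty, performance)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])

# After training, the greedy policy pushes difficulty toward the simulated user's skill level.
```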


International Conference on Social Robotics | 2016

Adaptive Robot Assisted Therapy Using Interactive Reinforcement Learning

Konstantinos Tsiakas; Maria Dagioglou; Vangelis Karkaletsis; Fillia Makedon

In this paper, we present an interactive learning and adaptation framework that facilitates the adaptation of an interactive agent to a new user. We argue that Interactive Reinforcement Learning methods can be utilized and integrated into the adaptation mechanism, enabling the agent to refine its learned policy in order to cope with different users. We illustrate our framework with a use case in the domain of Robot Assisted Therapy. We present results of the learning and adaptation experiments against different simulated users, which motivate our work, and we discuss future directions towards the definition and implementation of the proposed framework.
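
As a concrete illustration of one common Interactive Reinforcement Learning mechanism, the hypothetical sketch below warm-starts a Q-table learned for a previous user and refines it with a reward shaped by human feedback; the signal names and weighting are assumptions and not necessarily the method used in the paper.

```python
# Hypothetical sketch of one Interactive RL mechanism: refining a Q-table that was
# learned for a previous user, using a reward shaped by (possibly absent) human
# feedback. Signal names and the weighting factor are illustrative assumptions.
from collections import defaultdict

def interactive_q_update(Q, state, action, env_reward, human_feedback,
                         next_state, actions, alpha=0.1, gamma=0.9, beta=0.5):
    """Q-learning update where human feedback in {-1, 0, +1} (or None) shapes the reward."""
    shaped = env_reward + (beta * human_feedback if human_feedback is not None else 0.0)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (shaped + gamma * best_next - Q[(state, action)])

# Usage: warm-start from the policy learned for a previous user, then refine online.
Q = defaultdict(float)                  # in practice, initialized from the old user's Q-values
actions = ["easier", "same", "harder"]
interactive_q_update(Q, state="disengaged", action="easier", env_reward=1.0,
                     human_feedback=+1,  # e.g., a therapist approves this adaptation
                     next_state="engaged", actions=actions)
```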


Intelligent User Interfaces | 2016

Facilitating Safe Adaptation of Interactive Agents using Interactive Reinforcement Learning

Konstantinos Tsiakas

In this paper, we propose a learning framework for the adaptation of an interactive agent to a new user. We focus on applications where safety and personalization are essential, such as rehabilitation systems and Robot Assisted Therapy. We argue that interactive learning methods can be utilized and combined within the Reinforcement Learning framework, aiming at a safe and tailored interaction.


Pervasive Technologies Related to Assistive Environments | 2016

An Interactive Learning and Adaptation Framework for Adaptive Robot Assisted Therapy

Konstantinos Tsiakas; Michalis Papakostas; Benjamin Chebaa; Dylan Ebert; Vangelis Karkaletsis; Fillia Makedon

In this paper, we present an interactive learning and adaptation framework. The framework combines Interactive Reinforcement Learning methods to effectively adapt and refine a learned policy to cope with new users. We argue that implicit feedback provided by the primary user and guidance from a secondary user can be integrated into the adaptation mechanism, resulting in a tailored and safe interaction. We illustrate this framework with a use case in Robot Assisted Therapy, presenting a Robot Yoga Trainer that monitors a yoga training session and adjusts the session parameters, based on human motion activity recognition and evaluation through depth data, to help the user complete the session, following a Reinforcement Learning approach.
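
To make the two feedback channels concrete, here is a hypothetical sketch in which guidance from a secondary user biases action selection while implicit feedback from the primary user shapes the reward; the signal names, weights, and toy values are assumptions, not the paper's design.

```python
# Hypothetical sketch: guidance from a secondary user biases action selection,
# while implicit feedback from the primary user shapes the reward. The signal
# names, weights, and toy values are illustrative assumptions.
import random

def select_action(q_values, guidance=None, epsilon=0.1):
    """Epsilon-greedy selection; an explicit suggestion from a secondary user wins."""
    if guidance is not None:
        return guidance                          # e.g., the therapist picks the next pose
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)

def shaped_reward(task_reward, implicit_feedback, weight=0.3):
    """Blend the task reward with implicit engagement/affect feedback in [-1, 1]."""
    return task_reward + weight * implicit_feedback

# Usage with toy values
q = {"hold_pose": 0.4, "next_pose": 0.7, "rest": 0.1}
action = select_action(q, guidance=None)                    # no guidance: greedy/exploratory
r = shaped_reward(task_reward=1.0, implicit_feedback=-0.5)  # the user appears fatigued
```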


IEEE Symposium Series on Computational Intelligence | 2016

An intelligent Interactive Learning and Adaptation framework for robot-based vocational training

Konstantinos Tsiakas; Maher Abujelala; Alexandros Lioulemes; Fillia Makedon

In this paper, we propose an Interactive Learning and Adaptation framework for Human-Robot Interaction in a vocational setting. We show how Interactive Reinforcement Learning (RL) techniques can be applied to such HRI applications in order to promote effective interaction. We present the framework through two different use cases in a vocational setting. In the first use case, the robot acts as a trainer, assisting the user while the user solves the Towers of Hanoi problem. In the second use case, a robot and a human operator collaborate towards solving a synergistic construction or assembly task. We show how RL is used in the proposed framework and discuss its effectiveness in the two vocational use cases: Robot-Assisted Training and Human-Robot Collaboration.
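
For the first use case, a minimal, hypothetical way to frame the trainer's decision problem is shown below: the robot observes puzzle progress and user hesitation and chooses an assistance level. These state, action, and reward definitions are illustrative assumptions only.

```python
# Hypothetical sketch of a state/action space for the Towers of Hanoi trainer:
# the robot observes puzzle progress and user hesitation and picks an assistance level.
# These definitions are illustrative assumptions, not the paper's formulation.
from dataclasses import dataclass

ASSISTANCE_ACTIONS = ("no_hint", "verbal_hint", "show_next_move", "encourage")

@dataclass(frozen=True)
class TrainerState:
    optimal_remaining: int   # length of the shortest remaining solution
    last_move_wrong: bool    # whether the last move deviated from an optimal plan
    idle_seconds: int        # how long the user has hesitated

def assistance_reward(prev: TrainerState, curr: TrainerState, action: str) -> float:
    """Reward puzzle progress and penalize unnecessary interventions (illustrative)."""
    progress = prev.optimal_remaining - curr.optimal_remaining
    intervention_cost = 0.0 if action == "no_hint" else 0.2
    return progress - intervention_cost
```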


International Conference on Universal Access in Human-Computer Interaction | 2014

A Dialogue System for Ensuring Safe Rehabilitation

Alexandros Papangelis; Georgios Galatas; Konstantinos Tsiakas; Alexandros Lioulemes; Dimitrios Zikos; Fillia Makedon

Dialogue Systems (DS) are intelligent user interfaces, able to provide intuitive and natural interaction with their users through a variety of modalities. Here, we present a DS whose purpose is to ensure that patients are consistently and correctly performing rehabilitative exercises in a tele-rehabilitation scenario. More specifically, our DS operates in collaboration with a remote rehabilitation system, where users suffering from injuries, degenerative disorders and other conditions perform exercises at home under the remote supervision of a therapist. The DS interacts with the users and makes sure that they perform their prescribed exercises correctly and according to the protocol specified by the therapist. To this end, various sensors are utilized, such as Microsoft's Kinect and the Wi-Patch, among others.


Pervasive Technologies Related to Assistive Environments | 2015

A multimodal adaptive dialogue manager for depressive and anxiety disorder screening: a Wizard-of-Oz experiment

Konstantinos Tsiakas; Lynette Watts; Cyril Lutterodt; Theodoros Giannakopoulos; Alexandros Papangelis; Robert J. Gatchel; Vangelis Karkaletsis; Fillia Makedon

In this paper, we present an Adaptive Multimodal Dialogue System for Depressive and Anxiety Disorders Screening (DADS). The system interacts with the user through verbal and non-verbal communication to elicit the information needed to make referrals and recommendations for depressive and anxiety disorders, while encouraging the user and keeping them calm. We model the problem as interconnected Markov Decision Processes, using sub-goals to deal with the large state space. We present the problem formulation and the experimental procedure for collecting training data and training the system, following a Wizard-of-Oz methodology.
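
A minimal, hypothetical sketch of such a sub-goal decomposition is shown below: a top-level manager selects the next sub-goal, and each sub-goal is handled by its own smaller MDP/policy. The sub-goal names, actions, and user-state flags are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a sub-goal decomposition for the screening dialogue:
# a top-level manager picks the next sub-goal, and each sub-goal is handled by its
# own smaller MDP/policy. Sub-goal names, actions, and flags are illustrative assumptions.
SUB_GOAL_ACTIONS = {
    "rapport":              ["greet", "small_talk", "reassure"],
    "depression_screening": ["ask_item", "clarify_item", "confirm_answer"],
    "anxiety_screening":    ["ask_item", "clarify_item", "confirm_answer"],
    "referral":             ["summarize", "recommend", "close"],
}

def top_level_policy(completed_sub_goals, user_state):
    """Choose the next sub-goal; the selected sub-goal's MDP then runs until it terminates."""
    if user_state.get("distressed"):
        return "rapport"                         # keep the user calm before continuing
    for goal in ("rapport", "depression_screening", "anxiety_screening", "referral"):
        if goal not in completed_sub_goals:
            return goal
    return None                                  # dialogue finished

# Usage with toy values
print(top_level_policy(completed_sub_goals={"rapport"}, user_state={"distressed": False}))
```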


Pervasive Technologies Related to Assistive Environments | 2015

An interactive framework for learning user-object associations through human-robot interaction

Michalis Papakostas; Konstantinos Tsiakas; Natalie Parde; Vangelis Karkaletsis; Fillia Makedon

A great deal of recent research has focused on social and assistive robots that can achieve a more natural and realistic interaction between the agent and its environment. Following this direction, this paper aims to establish a computational framework that can associate objects with their uses and their basic characteristics in an automated manner. The goal is to continually enrich the robot's knowledge regarding objects that are important to the user, through verbal interaction. We address the problem of learning correlations between object properties and human needs by associating visual with verbal information. Although the visual information can be acquired directly by the robot, the verbal information is acquired via interaction with a human user. Users provide descriptions of the objects for which the robot has captured visual information, and these two sources of information are combined automatically. We present a general model for learning these associations using Gaussian Mixture Models. Since learning is based on a probabilistic model, the approach handles uncertainty, redundancy, and irrelevant information. We illustrate the capabilities of our approach by presenting the results of an initial experiment run in a laboratory environment, and we describe the set of modules that support the proposed framework.
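
The abstract does not specify features or tooling; the following hypothetical sketch only illustrates the general idea of fitting a Gaussian Mixture Model over joint visual and verbal feature vectors and reading the posterior responsibilities as a measure of uncertainty (here with scikit-learn and toy data).

```python
# Hypothetical sketch of the probabilistic-association idea with scikit-learn:
# fit a Gaussian Mixture Model over joint visual+verbal feature vectors and read the
# posterior responsibilities as a measure of uncertainty. Features and data are toy values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy joint features: first 3 dims = visual descriptors, last 2 dims = encoded verbal cues
cups    = np.hstack([rng.normal(0.0, 0.1, (30, 3)), rng.normal(1.0, 0.1, (30, 2))])
bottles = np.hstack([rng.normal(1.0, 0.1, (30, 3)), rng.normal(0.0, 0.1, (30, 2))])
X = np.vstack([cups, bottles])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

# For a new observation, the posterior over components expresses how confident the
# model is about which learned association the object belongs to.
new_obs = np.hstack([rng.normal(0.0, 0.1, 3), rng.normal(1.0, 0.1, 2)]).reshape(1, -1)
print("component responsibilities:", gmm.predict_proba(new_obs).round(3))
```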


Pervasive Technologies Related to Assistive Environments | 2018

v-CAT: A Cyberlearning Framework for Personalized Cognitive Skill Assessment and Training

Michalis Papakostas; Konstantinos Tsiakas; Maher Abujelala; Morris D. Bell; Fillia Makedon

Recent research has shown that hundreds of millions of workers worldwide may lose their jobs to robots and automation by 2030, impacting over 40 developed and emerging countries and affecting more than 800 types of jobs. While automation promises to increase productivity and relieve workers from tedious or heavy-duty tasks, it can also widen the gap, leaving behind workers who lack automation training. In this project, we propose to build a technology-based, personalized vocational cyberlearning training system, in which the user is assessed while immersed in a simulated workplace/factory task environment while the system collects and analyzes multisensory cognitive, behavioral and physiological data. Such a system will produce recommendations to support targeted vocational training decision-making. The focus is on collecting and analyzing specific neurocognitive functions, including working memory, attention, cognitive overload and cognitive flexibility. Collected data are analyzed to reveal, in an iterative fashion, relationships between physiological and cognitive performance metrics, and how these relate to work-related behavioral patterns that require special vocational training.


Pervasive Technologies Related to Assistive Environments | 2018

A Taxonomy in Robot-Assisted Training: Current Trends, Needs and Challenges

Konstantinos Tsiakas; Vangelis Karkaletsis; Fillia Makedon

In this paper, we present a taxonomy in Robot-Assisted Training, a growing body of research in Human-Robot Interaction which focuses on how robotic agents and devices can be used to enhance users' performance during a cognitive or physical training task. The proposed taxonomy includes a set of parameters that characterize such systems, in order to highlight the current research trends and needs for the design, development and evaluation of Robot-Assisted Training systems. Towards this direction, we review related taxonomies in Human-Robot Interaction, as well as recent works and applications in Robot-Assisted Training. The motivation of this research is to identify and discuss issues and challenges, focusing on the personalization aspects of a Robot-Assisted Training system.

Collaboration


Top co-authors of Konstantinos Tsiakas:

Fillia Makedon | University of Texas at Arlington
Michalis Papakostas | University of Texas at Arlington
Maher Abujelala | University of Texas at Arlington
Natalie Parde | University of North Texas
Alexandros Lioulemes | University of Texas at Arlington
Dimitrios Zikos | University of Texas at Arlington
Theodoros Giannakopoulos | National and Kapodistrian University of Athens
Alexandros Papangelis | University of Texas at Arlington