Publications
Featured research published by Nicholas Vretos.
Artificial Intelligence in Education | 2018
Dorothea Tsatsou; Andrew Pomazanskyi; Enrique Hortal; Evaggelos Spyrou; Helen C. Leligou; Stylianos Asteriadis; Nicholas Vretos; Petros Daras
This paper introduces an end-to-end solution for dynamic adaptation of the learning experience for learners with different personal needs, based on their behavioural and affective reactions to the learning activities. Personal needs refer to what learners already know, what they need to learn, their intellectual and physical capacities, and their learning styles.
Advanced Video and Signal Based Surveillance | 2017
Federico Alvarez; Mirela Popa; Nicholas Vretos; Alberto Belmonte-Hernandez; Stelios Asteriadis; Vassilis Solachidis; Triana Mariscal; Dario Dotti; Petros Daras
The analysis of multimodal data collected by innovative imaging sensors, Internet of Things (IoT) devices and user interactions can provide smart, automatic distant monitoring of patients and reveal valuable insights for early detection and/or prevention of events related to their health. In this paper, we present a platform called ICT4LIFE which, starting from low-level data capturing and multimodal fusion to extract relevant features, can perform high-level reasoning to provide relevant data on the monitoring and evolution of the patient, and trigger proper actions to improve the patient's quality of life.
2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) | 2017
Dorothea Tsatsou; Nicholas Vretos; Petros Daras
In next-generation technology-enhanced learning environments, intelligent educational systems can benefit from tapping into multi-agent, adaptive, gamified learning experiences, which transform the traditional instructional paradigm from classroom-based learning to personalised learning in any setting, whether collective or individual. Such settings enable learning targeted to each individual's learning styles and needs, through the use of autonomous technological agents as actuators of the learning process. Learning components that respond to the needs of such an educational framework should provide capabilities for adaptive, affective and interactive learning, automatic feedback and automatic assessment of the learner's behavioural state. A novel methodology is proposed to model such components, which focuses on the representation and management of learning objects (LOs) for any educational domain, any type of learner and learning style and any learning methodology, while fostering non-linearity in the educational process. This methodology is supported by a strategy for modelling and adapting re-usable learning objectives, coupled with an ontology that enables scalable and personalized decision-making over learning activities on autonomous devices, enabling dynamic modularisation of learning material during the learning process.
2017 12th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP) | 2017
Dimitrios Antonaras; Charis Pavlidis; Nicholas Vretos; Petros Daras
Previous studies of robots used in learning environments suggest that the interaction between learner and robot can enhance the learning procedure towards better engagement of the learner. Moreover, intelligent robots can also adapt their behavior during a learning process according to certain criteria, resulting in increased cognitive learning gains. Motivated by these results, we propose a novel Human Robot Interaction framework in which the robot adjusts its behavior to the affect state of the learner. Our framework uses the theory of flow to label different affect states (i.e., engagement, boredom and frustration) and adapt the robot's actions. Based on the automatic recognition of these states through visual cues, our method adapts the learning actions taking place at that moment and performed by the robot. This keeps the learner engaged in the learning process most of the time. To recognize the affect state of the user, a two-step approach is followed: we first recognize the facial expressions of the learner and then map these to an affect state. Our algorithm performs well even in situations where the environment is noisy due to the presence of more than one person and/or where the face is partially occluded.
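The two-step pipeline described in this abstract (facial expression recognized first, then mapped to a flow-theory affect state that drives the robot's action) could be sketched as follows. The expression labels, mapping tables, and function names are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of the two-step affect-adaptation pipeline:
# facial expression -> affect state (flow theory) -> robot action.

# Step 1 output (from a visual expression classifier) mapped to an affect state.
EXPRESSION_TO_AFFECT = {
    "smile": "engagement",
    "neutral": "boredom",
    "frown": "frustration",
}

# Step 2: each affect state triggers an adaptation of the robot's behavior.
AFFECT_TO_ACTION = {
    "engagement": "continue_current_activity",
    "boredom": "increase_difficulty",
    "frustration": "decrease_difficulty",
}

def select_robot_action(expression: str) -> str:
    """Map a recognized facial expression to an adaptive robot action,
    defaulting to engagement when the expression is unrecognized."""
    affect = EXPRESSION_TO_AFFECT.get(expression, "engagement")
    return AFFECT_TO_ACTION[affect]
```

In a real system the expression label would come from a facial-expression classifier robust to occlusion and multiple faces, as the abstract describes; the lookup tables here merely illustrate the state-to-action adaptation logic.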
Pervasive Technologies Related to Assistive Environments | 2018
Vassilis Solachidis; Ioannis Paliokas; Nicholas Vretos; Konstantinos Votis; Ulises Cortés; Dimitrios Tzovaras
This paper compares two methodological approaches derived from the EU Horizon 2020 funded projects CAREGIVERSPROMMD (C-MMD) and ICT4LIFE. Both approaches were initiated in 2016 with the ambition to provide new integrated care services to people living with cognitive impairments, including dementia, Alzheimer's and Parkinson's disease, as well as to their home caregivers, towards a long-term increase in quality of life and autonomy at home. We outline the disparities and similarities between the non-pharmacological interventions introduced by the two projects to foster treatment adherence. Both approaches have developed software solutions, including social platforms, notifications, Serious Games, user monitoring and support services, aimed at developing the concepts of self-care, active patients and integrated care. Despite their differences, both projects can benefit from knowledge and technology exchange, sharing of pilot results and, where possible, user exchange in the near future.
Virtual Reality | 2018
Nicholas Vretos; Petros Daras; Stylianos Asteriadis; Enrique Hortal; Esam Ghaleb; Evaggelos Spyrou; Helen C. Leligou; Panagiotis Karkazis; Panagiotis Trakadas; Kostantinos Assimakopoulos
Currently, in all augmented reality (AR) or virtual reality (VR) educational experiences, the evolution of the experience (game, exercise or other) and the assessment of the user's performance are based on her/his (re)actions, which are continuously traced/sensed. In this paper, we propose the exploitation of the sensors available in AR/VR systems to enhance current AR/VR experiences by taking into account the user's affect state as it changes in real time. Adapting the difficulty level of the experience to the user's affect state fosters engagement, which is a crucial issue in educational environments, and prevents boredom and anxiety. The user's cues are processed, enabling dynamic user profiling. The detection of the affect state based on different sensing inputs, since diverse sensing devices exist in different AR/VR systems, is investigated, and techniques that have undergone validation using state-of-the-art sensors are presented.
IEEE MultiMedia | 2018
Federico Alvarez; Mirela Popa; Vassilios Solachidis; Gustavo Hernandez-Penaloza; Alberto Belmonte-Hernandez; Stylianos Asteriadis; Nicholas Vretos; Marcos Quintana; Thomas Theodoridis; Dario Dotti; Petros Daras
The analysis of multimodal data collected by innovative imaging sensors, Internet of Things devices, and user interactions can provide smart and automatic distant monitoring of Parkinson's and Alzheimer's patients and reveal valuable insights for early detection and/or prevention of events related to their health. This article describes a novel system that involves data capturing and multimodal fusion to extract relevant features, analyze data, and provide useful recommendations. The system gathers signals from diverse sources in health monitoring environments, understands the user's behavior and context, and triggers proper actions for improving the patient's quality of life. The system offers a multimodal, multi-patient, versatile approach not present in current developments. It also offers comparable or improved results for detection of abnormal behavior in daily motion. The system was implemented and tested during 10 weeks in real environments involving 18 patients.
Advanced Video and Signal Based Surveillance | 2017
Thomas Theodoridis; Vassilis Solachidis; Nicholas Vretos; Petros Daras; Dario Dotti; Mirela Popa; Gustavo Hernández; Federico Alvarez; Alejandro Gonzalez Paton; Angel Lopez
The ICT4Life Open Source framework contains libraries for acquiring and processing data from different sensors, machine learning algorithms for activity recognition, as well as methods for fusing multiple modalities at either an early or a late stage. The main purpose of the introduced system is to enable easy customization of patient monitoring using different types of sensors. Furthermore, by allowing easy integration of new sensors or types of activities, the proposed subsystem supports the development of new solutions for diseases other than those considered in the ICT4Life project.
Advanced Video and Signal Based Surveillance | 2017
Alberto Belmonte-Hernandez; Vassilis Solachidis; Thomas Theodoridis; Gustavo Hernandez-Penaloza; G. Conti; Nicholas Vretos; Federico Alvarez; Petros Daras
In this paper, a novel multi-modal method for person identification in indoor environments is presented. The approach relies on matching the skeletons detected by a Kinect v2 device with wearable devices equipped with inertial sensors. Movement features such as yaw and pitch changes are employed to associate a particular Kinect skeleton with the person wearing the wearable. The entire process of sensor calibration, feature extraction, synchronization and matching is detailed in this work. Six detection scenarios were defined to assess the proposed method. Experimental results have shown high accuracy in the association process.
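The core association step, matching each Kinect skeleton to the wearable whose orientation-change signal moves in step with it, could be sketched as below. This is a minimal illustration using Pearson correlation on synchronized signals; the function names, data layout, and correlation-based matching criterion are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def match_skeletons_to_wearables(skeleton_signals, wearable_signals):
    """Associate each Kinect skeleton with the wearable whose yaw/pitch
    change signal correlates best with it.

    skeleton_signals / wearable_signals: dicts mapping an id to a 1-D
    numpy array of synchronized orientation-change samples.
    Returns a dict {skeleton_id: wearable_id}.
    """
    matches = {}
    for sid, s in skeleton_signals.items():
        best_wid, best_corr = None, -np.inf
        for wid, w in wearable_signals.items():
            # Pearson correlation between the two movement signals:
            # the wearable carried by this person should track the
            # skeleton's motion most closely.
            corr = np.corrcoef(s, w)[0, 1]
            if corr > best_corr:
                best_wid, best_corr = wid, corr
        matches[sid] = best_wid
    return matches
```

A production system would also need the calibration and time-synchronization steps the abstract mentions, and a globally consistent (one-to-one) assignment rather than this greedy per-skeleton choice.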
Studies in health technology and informatics | 2017
Alejandro Sánchez-Rico; Pascal Garel; Isabella Notarangelo; Marcos Quintana; Gustavo Hernández; Stylianos Asteriadis; Mirela Popa; Nicholas Vretos; Vassilis Solachidis; Marta Burgos; Ariane Girault
An integrated care ICT platform to support patients, caregivers and health/social professionals in the care of dementia and Parkinson's disease with training, empowerment, sensor-based data analysis and cooperation services based on user-friendly interfaces.