
Publications


Featured research published by Michalis Papakostas.


Pervasive Technologies Related to Assistive Environments | 2015

Monitoring breathing activity and sleep patterns using multimodal non-invasive technologies

Michalis Papakostas; James Staud; Fillia Makedon; Vangelis Metsis

The monitoring of sleep behavioral patterns is of major importance for various reasons, such as the detection and treatment of sleep disorders, the assessment of the effect of different medical conditions or medications on sleep quality, and the assessment of mortality risks associated with sleeping patterns in adults and children. Sleep monitoring by itself is a difficult problem due to both privacy and technical considerations. The proposed system uses a combination of non-invasive sensors to assess and report sleep patterns and breathing activity: a contact-based pressure mattress and a non-contact 2D image acquisition device. To evaluate our system, we used real data collected in the Heracleia Lab's assistive living apartment. Our system uses Machine Learning and Computer Vision techniques to automatically analyze the collected data, recognize sleep patterns, and track breathing behavior. It is non-invasive, as it does not disrupt the user's usual sleeping behavior, and it can be used both at the clinic and at home with minimal cost. Going one step further, we developed a mobile application for visualizing the analyzed data and monitoring the patient's sleep status remotely.
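
The abstract does not detail how breathing activity is derived from the pressure mattress. As a rough, hypothetical illustration of the kind of signal analysis involved, the sketch below estimates a breathing rate from a single pressure channel by locating the dominant spectral peak in a typical respiration band; the sampling rate, band limits, and single-channel setup are assumptions made for this example, not details from the paper.

```python
# Illustrative sketch (not the authors' code): estimate breathing rate from one
# pressure-mat channel via the dominant spectral peak in ~0.1-0.5 Hz.
import numpy as np

def breathing_rate_bpm(pressure_signal: np.ndarray, fs: float = 10.0) -> float:
    """Return estimated breaths per minute from a 1-D pressure time series."""
    x = pressure_signal - pressure_signal.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)                # plausible breathing band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Example: a synthetic 0.25 Hz (15 breaths/min) signal over 5 minutes
t = np.arange(0, 300, 1 / 10.0)
print(breathing_rate_bpm(np.sin(2 * np.pi * 0.25 * t)))   # ~15.0
```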


Intelligent User Interfaces | 2017

CogniLearn: A Deep Learning-based Interface for Cognitive Behavior Assessment

Srujana Gattupalli; Dylan Ebert; Michalis Papakostas; Fillia Makedon; Vassilis Athitsos

This paper proposes a novel system for assessing physical exercises specifically designed for cognitive behavior monitoring. The proposed system provides decision support to experts to help with early childhood development. Our work is based on the well-established Head-Toes-Knees-Shoulders (HTKS) framework, which is known for its sufficient psychometric properties and its ability to assess cognitive dysfunctions. HTKS serves as a useful measure of behavioral self-regulation. Our system, CogniLearn, automates the capture and motion analysis of users performing the HTKS game and provides detailed evaluations using state-of-the-art computer vision and deep learning-based techniques for activity recognition and evaluation. The proposed system is supported by an intuitive, specifically designed user interface that helps human experts cross-validate and/or refine their diagnosis. To evaluate our system, we created a novel dataset, which we made publicly available to encourage further experimentation. The dataset consists of 15 subjects performing 4 different variations of the HTKS task and contains in total more than 60,000 RGB frames, of which 4,443 are fully annotated.
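
As a purely illustrative sketch of how frame-level motion analysis for HTKS could work, the snippet below guesses which body part the hands are touching from 2D body keypoints by nearest distance. The keypoint names and the distance rule are assumptions for this example; CogniLearn's actual pipeline relies on deep learning-based activity recognition as described above.

```python
# Hypothetical example: pick the HTKS target nearest to either hand in one frame.
import numpy as np

TARGETS = ["head", "toes", "knees", "shoulders"]

def touched_part(keypoints: dict[str, np.ndarray]) -> str:
    """keypoints maps names ('left_wrist', 'head', ...) to (x, y) coordinates."""
    hands = [keypoints["left_wrist"], keypoints["right_wrist"]]
    # For each target, keep the smaller of the two hand-to-target distances.
    dists = {t: min(np.linalg.norm(h - keypoints[t]) for h in hands) for t in TARGETS}
    return min(dists, key=dists.get)

frame = {
    "left_wrist": np.array([0.48, 0.12]), "right_wrist": np.array([0.55, 0.13]),
    "head": np.array([0.50, 0.10]), "shoulders": np.array([0.50, 0.25]),
    "knees": np.array([0.50, 0.75]), "toes": np.array([0.50, 0.95]),
}
print(touched_part(frame))  # 'head'
```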


Computation | 2017

Deep Visual Attributes vs. Hand-Crafted Audio Features on Multidomain Speech Emotion Recognition

Michalis Papakostas; Evaggelos Spyrou; Theodoros Giannakopoulos; Giorgos Siantikos; Dimitrios Sgouropoulos; Phivos Mylonas; Fillia Makedon

Emotion recognition from speech may play a crucial role in many applications related to human–computer interaction, or in understanding the affective state of users in tasks where other modalities such as video or physiological parameters are unavailable. In general, a human's emotions may be recognized using several modalities, such as analyzing facial expressions, speech, or physiological parameters (e.g., electroencephalograms, electrocardiograms). However, measuring these modalities may be difficult, obtrusive, or require expensive hardware. In that context, speech may be the best alternative modality in many practical applications. In this work we present an approach that uses a Convolutional Neural Network (CNN) functioning as a visual feature extractor and trained on raw speech information. In contrast to traditional machine learning approaches, CNNs identify the important features of the input themselves, making hand-crafted feature engineering optional in many tasks. In this paper, no features other than the spectrogram representations are required; hand-crafted features were extracted only to validate our method. Moreover, the approach does not require any linguistic model and is not specific to any particular language. We compare the proposed approach using cross-language datasets and demonstrate that it provides superior results compared to traditional approaches that use hand-crafted features.
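
A minimal sketch of the kind of CNN described, taking a spectrogram as a single-channel image and producing emotion class scores, is given below. The layer sizes, input resolution, and the number of emotion classes are assumptions made for illustration, not the configuration reported in the paper.

```python
# Illustrative sketch: spectrogram in, emotion class scores out.
import torch
import torch.nn as nn

class SpectrogramEmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):          # x: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(x))

model = SpectrogramEmotionCNN()
dummy = torch.randn(8, 1, 128, 128)    # batch of 8 single-channel spectrograms
print(model(dummy).shape)               # torch.Size([8, 4])
```

The point of the sketch is the shape of the pipeline: the network consumes the spectrogram directly, so no hand-crafted acoustic features are needed.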


Signal-Image Technology and Internet-Based Systems | 2016

Short-Term Recognition of Human Activities Using Convolutional Neural Networks

Michalis Papakostas; Theodoros Giannakopoulos; Fillia Makedon; Vangelis Karkaletsis

This paper proposes a deep learning classification method for frame-wise recognition of human activities using raw color (RGB) information. In particular, we present a Convolutional Neural Network (CNN) classification approach for recognizing three basic motion activity classes that cover the vast majority of human activities in the context of a home monitoring environment, namely: sitting, walking, and standing up. A real-world, fully annotated dataset has been compiled in the context of an assisted living home environment. Through extensive experimentation we have highlighted the benefits of deep learning architectures over traditional shallow classifiers operating on hand-crafted features for the task of activity recognition. Our approach demonstrates the robustness and quality of CNN classifiers, which lies in learning highly invariant features. Our ultimate goal is to tackle the challenging task of activity recognition in environments characterized by high levels of inherent noise.
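
Complementing the description above, the sketch below shows one simple way frame-wise CNN predictions could be smoothed over a short temporal window with a majority vote. The window length is an assumption for illustration; whether and how such post-processing is applied is not stated in the abstract.

```python
# Illustrative sketch: sliding majority vote over frame-wise class predictions.
from collections import Counter

def smooth_predictions(frame_labels: list[str], window: int = 5) -> list[str]:
    """Replace each frame label with the majority label in a centered window."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        segment = frame_labels[max(0, i - half): i + half + 1]
        smoothed.append(Counter(segment).most_common(1)[0][0])
    return smoothed

raw = ["sitting", "sitting", "walking", "sitting", "sitting", "walking", "walking"]
print(smooth_predictions(raw))   # isolated 'walking' blips get voted away
```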


Pervasive Technologies Related to Assistive Environments | 2016

An Interactive Learning and Adaptation Framework for Adaptive Robot Assisted Therapy

Konstantinos Tsiakas; Michalis Papakostas; Benjamin Chebaa; Dylan Ebert; Vangelis Karkaletsis; Fillia Makedon

In this paper, we present an interactive learning and adaptation framework. The framework combines Interactive Reinforcement Learning methods to effectively adapt and refine a learned policy to cope with new users. We argue that implicit feedback provided by the primary user and guidance from a secondary user can be integrated into the adaptation mechanism, resulting in a tailored and safe interaction. We illustrate this framework with a use case in Robot Assisted Therapy, presenting a Robot Yoga Trainer that monitors a yoga training session and adjusts the session parameters, based on human motion activity recognition and evaluation from depth data, to assist the user in completing the session, following a Reinforcement Learning approach.
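
A minimal, hypothetical sketch of an interactive Q-learning update in the spirit of this framework is shown below: the environment reward is blended with implicit feedback from the primary user and guidance from a secondary user before the standard Q-learning update. The state and action sets, blending weights, and learning parameters are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch of interactive Q-learning with human feedback and guidance.
import random
from collections import defaultdict

ACTIONS = ["easier", "same", "harder"]      # session-difficulty adjustments
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)                       # Q[(state, action)] -> value

def choose_action(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)        # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, env_reward, user_feedback, guidance, next_state):
    """Blend environment reward, implicit user feedback, and expert guidance."""
    shaped = env_reward + 0.5 * user_feedback + 0.5 * guidance
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (shaped + GAMMA * best_next - Q[(state, action)])

state = "struggling"
a = choose_action(state)
update(state, a, env_reward=0.0, user_feedback=-1.0, guidance=-1.0, next_state="ok")
```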


Pervasive Technologies Related to Assistive Environments | 2016

A Survey of Sensing Modalities for Human Activity, Behavior, and Physiological Monitoring

Alexandros Lioulemes; Michalis Papakostas; Shawn N. Gieser; Theodora Toutountzi; Maher Abujelala; Sanika Gupta; Christopher Collander; Christopher McMurrough; Fillia Makedon

In this paper, we present a survey of emerging technologies for non-invasive human activity, behavior, and physiological sensing. The survey focuses on technologies that are close to entering the commercial market, or have only recently become available. We intend for this survey to give researchers in any field relevant to human data collection an overview of currently accessible devices and sensing modalities, their capabilities, and how the technologies will mature with time.


Pervasive Technologies Related to Assistive Environments | 2015

An interactive framework for learning user-object associations through human-robot interaction

Michalis Papakostas; Konstantinos Tsiakas; Natalie Parde; Vangelis Karkaletsis; Fillia Makedon

A great deal of recent research has focused on social and assistive robots that can achieve a more natural and realistic interaction between the agent and its environment. Following this direction, this paper aims to establish a computational framework that can associate objects with their uses and their basic characteristics in an automated manner. The goal is to continually enrich the robot's knowledge regarding objects that are important to the user, through verbal interaction. We address the problem of learning correlations between object properties and human needs by associating visual with verbal information. Although the visual information can be acquired directly by the robot, the verbal information is acquired via interaction with a human user. Users provide descriptions of the objects for which the robot has captured visual information, and these two sources of information are combined automatically. We present a general model for learning these associations using Gaussian Mixture Models. Since learning is based on a probabilistic model, the approach handles uncertainty, redundancy, and irrelevant information. We illustrate the capabilities of our approach by presenting the results of an initial experiment run in a laboratory environment, and we describe the set of modules that support the proposed framework.
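
The sketch below is an illustrative example of fitting a Gaussian Mixture Model over joint visual and verbal feature vectors, so that a new object's association can be read off as a posterior over components. The toy feature dimensions and the two-component setup are assumptions for this example, not the paper's actual representation.

```python
# Illustrative sketch: GMM over joint visual + verbal features (toy data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy joint features: [visual descriptor (2-D), verbal descriptor (2-D)]
drinking_objects = rng.normal(loc=[0, 0, 1, 1], scale=0.1, size=(50, 4))
writing_objects  = rng.normal(loc=[1, 1, 0, 0], scale=0.1, size=(50, 4))
X = np.vstack([drinking_objects, writing_objects])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# A new object: the posterior over components gives a soft association,
# which naturally handles uncertainty and redundant or irrelevant dimensions.
new_object = np.array([[0.05, -0.02, 0.9, 1.1]])
print(gmm.predict_proba(new_object))
```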


International Symposium on Parallel and Distributed Processing and Applications | 2015

Visual sentiment analysis for brand monitoring enhancement

Theodoros Giannakopoulos; Michalis Papakostas; Stavros J. Perantonis; Vangelis Karkaletsis

Brand monitoring and reputation management are vital tasks in all modern business intelligence frameworks. However, recent related technologies rely mostly on the textual aspect of online content in order to extract the underlying sentiment with respect to particular brands. In this work, we demonstrate a sentiment analysis method in the context of a brand monitoring framework, breaking the text-only barrier in the field. Towards this end, a wide range of visual features is extracted, some of which focus on the underlying semiotics and aesthetics of the images. In addition, we employ textual information embedded in the images under study by adopting text mining techniques that focus on extracting sentiment. We evaluate classification on the binary task (negative vs. positive sentiment) and propose a fusion approach that combines the two modalities. Finally, the evaluation has been carried out in the context of two different use cases, namely: (a) a general image sentiment classifier for brand and advertising images, and (b) a brand-specific classification procedure, in which the brand of the input images is known a priori. Results show that visual-based sentiment classification of brand and advertising information can outperform the respective text-based classification. In addition, fusing the two modalities leads to a significant performance boost.
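
A hypothetical sketch of a late-fusion step like the one described, combining sentiment probabilities from a visual classifier and a text-based classifier with a weighted average, is shown below. The fusion weight is an assumption; the paper's actual fusion scheme is not specified in the abstract.

```python
# Illustrative sketch: weighted late fusion of two per-modality sentiment scores.
def fuse_sentiment(p_pos_visual: float, p_pos_text: float, w_visual: float = 0.6) -> str:
    """Return 'positive' or 'negative' from two per-modality probabilities."""
    p_pos = w_visual * p_pos_visual + (1.0 - w_visual) * p_pos_text
    return "positive" if p_pos >= 0.5 else "negative"

# Visual model is fairly confident the ad image is positive; text is neutral.
print(fuse_sentiment(p_pos_visual=0.8, p_pos_text=0.5))   # 'positive'
```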


Pervasive Technologies Related to Assistive Environments | 2018

v-CAT: A Cyberlearning Framework for Personalized Cognitive Skill Assessment and Training

Michalis Papakostas; Konstantinos Tsiakas; Maher Abujelala; Morris D. Bell; Fillia Makedon

Recent research has shown that hundreds of millions of workers worldwide may lose their jobs to robots and automation by 2030, impacting over 40 developed and emerging countries and affecting more than 800 types of jobs. While automation promises to increase productivity and relieve workers from tedious or heavy-duty tasks, it can also widen the gap, leaving behind workers who lack automation training. In this project, we propose to build a technology-based, personalized vocational cyberlearning training system in which the user is assessed while immersed in a simulated workplace/factory task environment, while the system collects and analyzes multisensory cognitive, behavioral, and physiological data. Such a system will produce recommendations to support targeted vocational training decision-making. The focus is on collecting and analyzing data on specific neurocognitive functions, including working memory, attention, cognitive overload, and cognitive flexibility. Collected data are analyzed to reveal, in an iterative fashion, relationships between physiological and cognitive performance metrics, and how these relate to work-related behavioral patterns that require special vocational training.


Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction | 2018

Towards a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks

Konstantinos Tsiakas; Michalis Papakostas; James Ford; Fillia Makedon

This paper outlines the development of a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks. While fatigue is a common symptom across several chronic neurological diseases, such as multiple sclerosis (MS), traumatic brain injury (TBI), cerebral palsy (CP), and others, it remains poorly understood for various reasons, including its subjectivity and variability among individuals. Towards this end, we propose a task-driven data collection framework for multimodal fatigue analysis in the domain of MS, combining behavioral, sensory, and subjective measures while users perform a set of both physical and cognitive tasks, including assessment tests and Activities of Daily Living (ADLs).
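
As a purely illustrative sketch, the snippet below shows how one record in such a multimodal collection protocol might be structured, tying behavioral, sensory, and subjective measures to a single task. The field names and types are assumptions for illustration, not the framework's actual schema.

```python
# Hypothetical record structure for one trial in a multimodal fatigue study.
from dataclasses import dataclass, field

@dataclass
class FatigueTrial:
    participant_id: str
    task: str                       # e.g. "9-hole peg test", "n-back", an ADL
    task_type: str                  # "physical" or "cognitive"
    reaction_times_ms: list[float] = field(default_factory=list)   # behavioral
    heart_rate_bpm: list[float] = field(default_factory=list)      # sensory
    self_reported_fatigue: int = 0  # subjective, e.g. a 0-10 rating scale

trial = FatigueTrial(
    participant_id="P01", task="n-back", task_type="cognitive",
    reaction_times_ms=[512.0, 498.5, 640.2], heart_rate_bpm=[72, 74, 75],
    self_reported_fatigue=6,
)
print(trial)
```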

Collaboration


Dive into Michalis Papakostas's collaborations.

Top Co-Authors

Fillia Makedon (University of Texas at Arlington)
Konstantinos Tsiakas (University of Texas at Arlington)
Theodoros Giannakopoulos (National and Kapodistrian University of Athens)
Natalie Parde (University of North Texas)
Dylan Ebert (University of Texas at Arlington)
Evaggelos Spyrou (National Technical University of Athens)
Maher Abujelala (University of Texas at Arlington)
Adam Hair (University of North Texas)