Publication


Featured research published by Konstantinos Avgerinakis.


Intelligent Environments | 2013

Recognition of Activities of Daily Living for Smart Home Environments

Konstantinos Avgerinakis; Alexia Briassouli; Ioannis Kompatsiaris

The recognition of Activities of Daily Living (ADL) from video can prove particularly useful in assisted living and smart home environments, as behavioral and lifestyle profiles can be constructed through the recognition of ADLs over time. Existing methods for recognizing ADLs often have a very high computational cost, which makes them unsuitable for real-time or near-real-time applications. In this work we present a novel method for recognizing ADLs with accuracy comparable to the state of the art, at a lower computational cost. Comprehensive testing of the best existing descriptors, encoding methods and BoW/SVM-based classification methods takes place to determine the optimal recognition solution. A statistical method for determining the temporal duration of extracted trajectories is also introduced, to streamline the recognition process and make it less ad hoc. Experiments take place on benchmark ADL datasets and on a newly introduced set of ADL recordings of elderly people with dementia as well as healthy individuals. Our algorithm achieves accurate recognition rates, comparable to or better than the state of the art, at a lower computational cost.
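As a rough illustration of the BoW/SVM classification step mentioned above (not the authors' exact descriptors, vocabulary size or classifier settings), the sketch below builds a visual vocabulary with k-means, encodes each video's local descriptors as a word histogram, and trains a linear SVM on placeholder data.

```python
# Hedged sketch of a Bag-of-Words + SVM activity classifier.
# Descriptors, vocabulary size and data are placeholders, not the paper's setup.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Assume each video yields a variable number of local motion/appearance
# descriptors (e.g. 64-D); here they are random stand-ins.
train_videos = [rng.normal(size=(rng.integers(50, 200), 64)) for _ in range(20)]
train_labels = rng.integers(0, 4, size=20)          # 4 hypothetical ADL classes

# 1. Build the visual vocabulary from all training descriptors.
vocab_size = 32
kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
kmeans.fit(np.vstack(train_videos))

def bow_histogram(descriptors):
    """Encode a set of local descriptors as a normalized visual-word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / (hist.sum() + 1e-9)

# 2. Encode each video and train a multiclass linear SVM.
X_train = np.array([bow_histogram(d) for d in train_videos])
clf = LinearSVC(C=1.0).fit(X_train, train_labels)

# 3. Classify a new video from its local descriptors.
test_video = rng.normal(size=(120, 64))
print("predicted activity class:", clf.predict([bow_histogram(test_video)])[0])
```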


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Semantic Event Fusion of Different Visual Modality Concepts for Activity Recognition

Carlos Fernando Crispim-Junior; Vincent Buso; Konstantinos Avgerinakis; Georgios Meditskos; Alexia Briassouli; Jenny Benois-Pineau; Ioannis Kompatsiaris; Francois Bremond

Combining multimodal concept streams from heterogeneous sensors is a problem only superficially explored for activity recognition. Most studies explore simple sensors in nearly perfect conditions, where temporal synchronization is guaranteed. Sophisticated fusion schemes adopt problem-specific graphical representations of events that are generally deeply linked with their training data and focused on a single sensor. This paper proposes a hybrid framework between knowledge-driven and probabilistic-driven methods for event representation and recognition. It separates semantic modeling from raw sensor data by using an intermediate semantic representation, namely concepts. It introduces an algorithm for sensor alignment that uses concept similarity as a surrogate for the inaccurate temporal information of real-life scenarios. Finally, it proposes the combined use of an ontology language, to overcome the rigidity of previous approaches at model definition, and a probabilistic interpretation of ontological models, which equips the framework with a mechanism to handle noisy and ambiguous concept observations, an ability that most knowledge-driven methods lack. We evaluate our contributions on multimodal recordings of elderly people carrying out instrumental activities of daily living (IADLs). Results demonstrate that the proposed framework outperforms baseline methods both in event recognition performance and in delimiting the temporal boundaries of event instances.
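The following sketch illustrates, under assumed inputs, the general idea of aligning two concept streams by concept similarity rather than by their raw timestamps; the cosine measure, the time window and the toy streams are illustrative assumptions, not the paper's alignment algorithm.

```python
# Hedged sketch: align two sensor concept streams by concept similarity
# instead of trusting their (possibly drifting) timestamps.
# The streams, window and cosine measure are illustrative assumptions.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def align_streams(stream_a, stream_b, max_lag=2.0):
    """For each (time, concept_vector) in stream_a, pick the entry of stream_b
    within +/- max_lag seconds whose concept vector is most similar."""
    pairs = []
    for t_a, v_a in stream_a:
        candidates = [(t_b, v_b) for t_b, v_b in stream_b if abs(t_b - t_a) <= max_lag]
        if not candidates:
            continue
        t_b, v_b = max(candidates, key=lambda c: cosine(v_a, c[1]))
        pairs.append((t_a, t_b, cosine(v_a, v_b)))
    return pairs

# Toy example: two modalities emitting 3-concept score vectors.
a = [(0.0, np.array([0.9, 0.1, 0.0])), (1.0, np.array([0.1, 0.8, 0.1]))]
b = [(0.4, np.array([0.8, 0.2, 0.0])), (1.6, np.array([0.0, 0.9, 0.1]))]
for t_a, t_b, sim in align_streams(a, b):
    print(f"a@{t_a:.1f}s matched to b@{t_b:.1f}s (similarity {sim:.2f})")
```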


Journal of Ambient Intelligence and Smart Environments | 2015

Activities of daily living recognition using optimal trajectories from motion boundaries

Konstantinos Avgerinakis; Alexia Briassouli; Ioannis Kompatsiaris

The recognition of activities of daily living (ADLs) refers to the classification of activities commonly carried out in daily life, which are of particular interest in numerous applications, such as health monitoring, smart home environments and surveillance systems. We introduce a novel method for activity recognition, which achieves high recognition rates in ADL scenarios, comparable to, or better than, the state of the art (SoA). Meaningful areas of interest, the motion boundary activity areas, are introduced for dense sampling of interest points, significantly reducing the computational cost of action representation. Interest points are tracked over time by an enhanced Kanade-Lucas-Tomasi (KLT) tracker and accumulated in a three-dimensional trajectory structure, from which multi-scale descriptors are formed. The temporal length of trajectories is determined online in an adaptive, non-ad-hoc manner, by applying sequential statistical change detection on motion features via the Cumulative Sum (CUSUM) method. We thus build a multi-scale hybrid local-global appearance and motion descriptor, invariant to temporal changes and supplemented with global location information. Encoding is performed using both the standard Bag-of-Visual-Words (BoVW) and the Fisher encoding scheme, while a multiclass Support Vector Machine (SVM) is trained for recognition. Extensive experimentation took place on both benchmark and our own datasets, demonstrating that our algorithm is robust to a wide range of viewing/recording conditions and human activities, and achieves human activity recognition accuracy that is comparable to or better than the SoA.
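A minimal sketch of CUSUM-style sequential change detection on a one-dimensional motion feature is given below; it can be read as deciding when a trajectory's motion statistics have changed enough to terminate it. The drift and threshold parameters and the synthetic signal are assumptions, not the values used in the paper.

```python
# Hedged sketch of CUSUM-style sequential change detection on a 1-D motion
# feature, used here to decide when a trajectory's motion statistics change.
# The drift, threshold and synthetic signal are illustrative assumptions.
import numpy as np

def cusum_change_point(signal, drift=0.05, threshold=2.0):
    """Return the first index where the cumulative deviation from the running
    mean exceeds `threshold`, or None if no change is detected."""
    mean = signal[0]
    g_pos = g_neg = 0.0
    for i, x in enumerate(signal[1:], start=1):
        mean += (x - mean) / (i + 1)              # running estimate of the mean
        g_pos = max(0.0, g_pos + (x - mean) - drift)
        g_neg = max(0.0, g_neg - (x - mean) - drift)
        if g_pos > threshold or g_neg > threshold:
            return i
    return None

# Synthetic motion-magnitude signal: slow motion, then a sudden speed-up.
rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0.2, 0.05, 60), rng.normal(1.0, 0.05, 40)])
print("change detected at frame:", cusum_change_point(signal))
```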


Conference on Multimedia Modeling | 2014

VERGE: An Interactive Search Engine for Browsing Video Collections

Anastasia Moumtzidou; Konstantinos Avgerinakis; Evlampios E. Apostolidis; Vera Aleksic; Fotini Markatopoulou; Christina Papagiannopoulou; Stefanos Vrochidis; Vasileios Mezaris; Reinhard Busch; Ioannis Kompatsiaris

This paper presents the VERGE interactive video retrieval engine, which is capable of searching and browsing video content. The system integrates several content-based analysis and retrieval modules, such as video shot segmentation and scene detection, concept detection, clustering and visual similarity search, into a user-friendly interface that supports the user in browsing through the collection in order to retrieve the desired clip.


ACM Multimedia | 2013

Activity detection and recognition of daily living events

Konstantinos Avgerinakis; Alexia Briassouli; Ioannis Kompatsiaris

Activity recognition is one of the most active topics within computer vision. Despite its popularity, its application in real-life scenarios is limited, because many methods are not entirely automated and require high computational resources to infer information. In this work, we contribute two novel algorithms: (a) one for automatic video sequence segmentation, elsewhere referred to as activity spotting or activity detection, and (b) one for reducing the computational cost of activity representation. Two Bag-of-Words (BoW) representation schemes were tested for recognition purposes. A set of experiments was performed, both on publicly available datasets of activities of daily living (ADL) and on our own ADL dataset with both healthy subjects and people with dementia, recorded in realistic, life-like environments that are more challenging than those of benchmark datasets. Our method is shown to provide results better than, or comparable with, the state of the art (SoA), while we also contribute a realistic ADL dataset to the community.
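The paper's segmentation algorithm is not reproduced here; purely as a generic illustration of what activity spotting means, the sketch below cuts a long recording into active segments by thresholding a per-frame motion-energy signal (a simple baseline, not the proposed method).

```python
# Hedged, generic illustration of activity spotting: cut a long recording into
# active segments by thresholding a per-frame motion-energy signal. This is a
# simple baseline for illustration, not the segmentation algorithm of the paper.
import numpy as np

def spot_activities(motion_energy, threshold=0.5, min_len=5):
    """Return (start, end) frame index pairs of segments whose motion energy
    stays above `threshold` for at least `min_len` frames."""
    active = motion_energy > threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

# Synthetic signal: idle, activity, idle, activity.
rng = np.random.default_rng(3)
energy = np.concatenate([rng.uniform(0, 0.2, 40), rng.uniform(0.8, 1.0, 30),
                         rng.uniform(0, 0.2, 40), rng.uniform(0.8, 1.0, 20)])
print(spot_activities(energy))   # prints [(40, 70), (110, 130)]
```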


International Conference on Image Processing | 2015

Moving camera human activity localization and recognition with motionplanes and multiple homographies

Konstantinos Avgerinakis; Katerina Adam; Alexia Briassouli; Yiannis Kompatsiaris

Camera motion is often present in videos, due to shaking, jitter or ego-motion. This creates the need for a reliable motion compensation algorithm that leads to accurate spatial activity localization and activity recognition. We detect background areas using SLIC superpixels and multiple homographies, and introduce motionplanes, defined as background areas that are undergoing distinct camera motions. Dense trajectories are sampled from these regions and histograms of appearance and motion are extracted to build a representation scheme for activity recognition. Experiments on two publicly available unconstrained video datasets of human activities recorded with a moving camera show that our algorithm not only outperforms most state-of-the-art activity localization algorithms but also leads to high activity recognition accuracy.
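As one building block of such pipelines, the hedged sketch below estimates a single global homography between two frames with ORB matches and RANSAC, and warps the previous frame onto the current one; the grouping of SLIC superpixels into multiple motion planes is omitted, and the feature and matcher choices are assumptions rather than the paper's configuration.

```python
# Hedged sketch of homography-based camera-motion compensation between two
# grayscale frames, one ingredient of approaches like the one above. The SLIC
# superpixel grouping into multiple motion planes is omitted for brevity.
import cv2
import numpy as np

def compensate_camera_motion(prev_gray, curr_gray):
    """Estimate a single global homography with ORB matches + RANSAC and
    warp the previous frame onto the current one. Inputs are uint8 images."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Residual differences after warping are candidates for foreground motion.
    return cv2.absdiff(curr_gray, warped), H
```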


ACM Multimedia | 2010

Real time illumination invariant motion change detection

Konstantinos Avgerinakis; Alexia Briassouli; Ioannis Kompatsiaris

An original approach for real-time detection of changes in motion is presented, which can lead to the detection and recognition of events. Current video change detection focuses on shot changes, which depend on appearance rather than motion. Changes in motion are detected in pixels that are found to be active via the kurtosis. Statistical modeling of the motion data shows that the Laplace distribution provides the most accurate fit. The Laplace model of the motion is used in a sequential change detection test, which detects the changes in real time. False alarm detection determines whether a detected change is indeed induced by motion or by varying scene illumination. This leads to precise detection of changes in motion for many videos where shot change detection is shown to fail. Experiments show that the proposed method finds meaningful changes in real time, even under conditions of varying scene illumination.
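The sketch below illustrates, on synthetic data, two ingredients named in the abstract: flagging active pixels via the kurtosis of temporal frame differences, and fitting a Laplace model to the motion data of those pixels. The thresholds and the synthetic data are illustrative assumptions; the sequential test itself is not reproduced.

```python
# Hedged sketch: flag "active" pixels via the kurtosis of their temporal
# frame differences and fit a Laplace model to the active motion data.
# Thresholds and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis, laplace

rng = np.random.default_rng(2)

# Synthetic stack of frame differences: T frames of a small H x W region,
# with a moving patch producing heavy-tailed differences in the center.
T, H, W = 100, 32, 32
diffs = rng.normal(0.0, 1.0, size=(T, H, W))
diffs[:, 12:20, 12:20] += rng.laplace(0.0, 4.0, size=(T, 8, 8))

# Per-pixel excess kurtosis over time; heavy-tailed (active) pixels score high.
k = kurtosis(diffs, axis=0, fisher=True)
active = k > 1.0

# Fit a Laplace distribution to the motion data of the active pixels,
# to be used later in a sequential change detection test.
loc, scale = laplace.fit(diffs[:, active].ravel())
print(f"active pixels: {active.sum()}, Laplace loc={loc:.2f}, scale={scale:.2f}")
```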


Signal Processing: Image Communication | 2017

Efficient motion estimation methods for fast recognition of activities of daily living

Stergios Poularakis; Konstantinos Avgerinakis; Alexia Briassouli; Ioannis Kompatsiaris

This work proposes a framework for the efficient recognition of activities of daily living (ADLs), captured by static color cameras, that is applicable in real-world scenarios. Our method reduces the computational cost of ADL recognition in both the compressed and the uncompressed domain by introducing system-level improvements to state-of-the-art activity recognition methods. Costly dense optical flow (OF) based motion estimation is replaced with faster motion estimation methods, namely fast block matching as well as motion vectors drawn directly from the compressed video domain (MPEG vectors). This results in increased computational efficiency, with minimal loss in recognition accuracy. To prove the effectiveness of our approach, we provide an extensive, in-depth investigation of the trade-offs between computational cost, compression efficiency and recognition accuracy, tested on benchmark and real-world ADL video datasets.
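For illustration, the sketch below implements plain exhaustive-search block matching, the kind of cheaper alternative to dense optical flow referred to above; the block size and search range are assumptions, not the parameters evaluated in the paper.

```python
# Hedged sketch of exhaustive-search block matching, a cheaper alternative to
# dense optical flow for motion estimation. Block size and search range are
# illustrative assumptions, not the parameters used in the paper.
import numpy as np

def block_matching(prev_frame, curr_frame, block=8, search=4):
    """For each block of the previous frame, find the (dy, dx) displacement in
    the current frame minimizing the sum of absolute differences (SAD)."""
    h, w = prev_frame.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev_frame[y:y + block, x:x + block].astype(float)
            best, best_vec = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = curr_frame[yy:yy + block, xx:xx + block].astype(float)
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_vec = sad, (dy, dx)
            vectors[by, bx] = best_vec
    return vectors

# Toy example: a bright square shifted by (2, 3) pixels between frames.
prev = np.zeros((32, 32)); prev[8:16, 8:16] = 255
curr = np.zeros((32, 32)); curr[10:18, 11:19] = 255
print(block_matching(prev, curr)[1, 1])   # [2 3] for the block containing the square
```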


Conference on Multimedia Modeling | 2016

VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval

Anastasia Moumtzidou; Evlampios E. Apostolidis; Foteini Markatopoulou; Anastasia Ioannidou; Ilias Gialampoukidis; Konstantinos Avgerinakis; Stefanos Vrochidis; Vasileios Mezaris; Ioannis Kompatsiaris; Ioannis Patras

This paper presents the VERGE interactive search engine, which is capable of browsing and searching video content. The system integrates content-based analysis and retrieval modules such as video shot segmentation, concept detection and clustering, as well as visual similarity and object-based search.


Advanced Video and Signal Based Surveillance | 2016

A hybrid framework for online recognition of activities of daily living in real-world settings

Farhood Negin; Michal Koperski; Carlos Fernando Crispim; Francois Bremond; Serhan Cosar; Konstantinos Avgerinakis

Many supervised approaches report state-of-the-art results for recognizing short-term actions in manually clipped videos by utilizing fine body motion information. The main downside of these approaches is that they are not applicable in real-world settings. The challenge is different when it comes to unstructured scenes and long-term videos. Unsupervised approaches have been used to model long-term activities, but their main pitfall is a limited ability to handle subtle differences between similar activities, since they mostly rely on global motion information. In this paper, we present a hybrid approach for long-term human activity recognition that recognizes activities more precisely than unsupervised approaches. It enables the processing of long-term videos by automatically clipping them and performing online recognition. The performance of our approach has been tested on two Activities of Daily Living (ADL) datasets. Experimental results are promising compared to existing approaches.

Collaboration


Dive into Konstantinos Avgerinakis's collaborations.

Top Co-Authors

Ioannis Kompatsiaris (Information Technology Institute)
Stefanos Vrochidis (Information Technology Institute)
Anastasia Moumtzidou (Information Technology Institute)
Evlampios E. Apostolidis (Information Technology Institute)
Anastasios Karakostas (Aristotle University of Thessaloniki)
Vasileios Mezaris (Aristotle University of Thessaloniki)
Yiannis Kompatsiaris (Information Technology Institute)
Georgios Meditskos (Aristotle University of Thessaloniki)
Stelios Andreadis (Aristotle University of Thessaloniki)