Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Evlampios E. Apostolidis is active.

Publication


Featured research published by Evlampios E. Apostolidis.


International Conference on Acoustics, Speech, and Signal Processing | 2014

Fast shot segmentation combining global and local visual descriptors

Evlampios E. Apostolidis; Vasileios Mezaris

This paper introduces an algorithm for fast temporal segmentation of videos into shots. The proposed method detects abrupt and gradual transitions based on the visual similarity of neighboring frames of the video. The descriptive efficiency of both local (SURF) and global (HSV histogram) descriptors is exploited for assessing frame similarity, while GPU-based processing is used to accelerate the analysis. Specifically, abrupt transitions are initially detected between successive video frames where there is a sharp change in the visual content, expressed by a very low similarity score. The calculated scores are then further analysed to identify frame sequences where a progressive change of the visual content takes place and, in this way, gradual transitions are detected. Finally, a post-processing step is performed to identify outliers caused by object/camera movement and camera flashes. The experiments show that the proposed algorithm achieves high accuracy while being capable of faster-than-real-time analysis.
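
The cut-detection logic described above can be sketched using the global-descriptor half alone: intersection of normalised HSV histograms between consecutive frames, with a cut declared wherever similarity drops sharply. The SURF matching, gradual-transition analysis and GPU acceleration of the actual method are omitted, and the threshold value is an illustrative assumption, not one from the paper.

```python
def histogram_similarity(h1, h2):
    """Histogram intersection of two L1-normalised colour histograms.
    Identical histograms score 1.0; disjoint ones score 0.0."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def detect_abrupt_transitions(histograms, threshold=0.5):
    """Declare a cut between frames i and i+1 whenever the similarity
    of their histograms falls below the (assumed) threshold."""
    return [i for i in range(len(histograms) - 1)
            if histogram_similarity(histograms[i], histograms[i + 1]) < threshold]

# Two synthetic "scenes" with a hard cut between frames 2 and 3:
scene_a = [0.7, 0.1, 0.1, 0.1]
scene_b = [0.05, 0.05, 0.1, 0.8]
cuts = detect_abrupt_transitions([scene_a] * 3 + [scene_b] * 3)
```

A real implementation would compute the histograms from decoded frames and combine this score with local-descriptor matching before thresholding.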


ACM Multimedia | 2014

Automatic fine-grained hyperlinking of videos within a closed collection using scene segmentation

Evlampios E. Apostolidis; Vasileios Mezaris; Mathilde Sahuguet; Benoit Huet; Barbora Cervenková; Daniel Stein; Stefan Eickeler; José Luis Redondo García; Raphaël Troncy; Lukás Pikora

This paper introduces a framework for establishing links between related media fragments within a collection of videos. A set of analysis techniques is applied for extracting information from different types of data. Visual-based shot and scene segmentation is performed for defining media fragments at different granularity levels, while visual cues are detected from keyframes of the video via concept detection and optical character recognition (OCR). Keyword extraction is applied on textual data such as the output of OCR, subtitles and metadata. This set of results is used for the automatic identification and linking of related media fragments. The proposed framework exhibited competitive performance in the Video Hyperlinking sub-task of MediaEval 2013, indicating that video scene segmentation can provide more meaningful segments, compared to other decomposition methods, for hyperlinking purposes.
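
Reduced to its simplest form, the final linking step can be illustrated by connecting media fragments whose extracted keyword sets overlap. The similarity measure (Jaccard) and the threshold below are illustrative assumptions, not the paper's exact method.

```python
def jaccard(a, b):
    """Jaccard similarity of two keyword collections (0.0 to 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def link_fragments(fragments, threshold=0.2):
    """fragments: dict mapping fragment id -> extracted keywords.
    Returns the pairs of fragments deemed related (keyword overlap
    above the assumed threshold)."""
    ids = sorted(fragments)
    return [(x, y)
            for i, x in enumerate(ids) for y in ids[i + 1:]
            if jaccard(fragments[x], fragments[y]) >= threshold]

# Hypothetical fragments with keywords from OCR/subtitles/metadata:
links = link_fragments({
    "scene1": ["election", "debate"],
    "scene2": ["debate", "tv"],
    "scene3": ["football"],
})
```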


Conference on Multimedia Modeling | 2014

VERGE: An Interactive Search Engine for Browsing Video Collections

Anastasia Moumtzidou; Konstantinos Avgerinakis; Evlampios E. Apostolidis; Vera Aleksic; Fotini Markatopoulou; Christina Papagiannopoulou; Stefanos Vrochidis; Vasileios Mezaris; Reinhard Busch; Ioannis Kompatsiaris

This paper presents the VERGE interactive video retrieval engine, which is capable of searching and browsing video content. The system integrates several content-based analysis and retrieval modules, such as video shot segmentation and scene detection, concept detection, clustering and visual similarity search, into a user-friendly interface that supports the user in browsing through the collection in order to retrieve the desired clip.


International Conference on Multimedia and Expo | 2013

Fast object re-detection and localization in video for spatio-temporal fragment creation

Evlampios E. Apostolidis; Vasileios Mezaris; Ioannis Kompatsiaris

This paper presents a method for the detection and localization of instances of user-specified objects within a video or a collection of videos. The proposed method is based on the extraction and matching of SURF descriptors in video frames and further incorporates a number of improvements so as to enhance both the detection accuracy and the time efficiency of the process. Specifically, (a) GPU-based processing is introduced for specific parts of the object re-detection pipeline, (b) a new video-structure-based sampling technique is employed for limiting the number of frames that need to be processed and (c) improved robustness to scale variations is achieved by generating and employing additional instances of the object of interest based on the one originally provided by the user. The experimental results show that the algorithm achieves high levels of detection accuracy, while the overall processing time required makes it suitable for quick instance-based labeling of video and for the creation of object-based spatio-temporal fragments.
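
The descriptor-matching core of such a re-detection pipeline can be sketched as brute-force nearest-neighbour matching with a ratio test: a query descriptor counts as matched only when its nearest frame descriptor is clearly closer than the second nearest. The plain float vectors below stand in for SURF descriptors, and the ratio and match-count thresholds are illustrative assumptions.

```python
import math

def match_descriptors(query, frame, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test.
    query, frame: lists of descriptor vectors (tuples of floats).
    Returns (query_index, frame_index) pairs that pass the test."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((math.dist(q, f), fi) for fi, f in enumerate(frame))
        (d1, fi1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:  # nearest neighbour is unambiguous
            matches.append((qi, fi1))
    return matches

def object_present(query, frame, min_matches=3):
    """Declare the object re-detected when enough descriptors match."""
    return len(match_descriptors(query, frame)) >= min_matches

# Hypothetical 2-D descriptors: the frame contains noisy copies of the
# object's descriptors plus one unrelated descriptor.
obj = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
frm = [(0.05, 0.0), (1.0, 0.05), (0.0, 1.1), (5.0, 5.0)]
found = object_present(obj, frm)
```

Localization would follow by fitting a geometric transform (e.g. a homography) to the matched keypoint positions, which this sketch leaves out.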


Proceedings of the First International Workshop on Multimedia Verification | 2017

The InVID Plug-in: Web Video Verification on the Browser

Denis Teyssou; Jean-Michel Leung; Evlampios E. Apostolidis; Konstantinos Apostolidis; Symeon Papadopoulos; Markos Zampoglou; Olga Papadopoulou; Vasileios Mezaris

This paper presents a novel open-source browser plug-in that aims at supporting journalists and news professionals in their efforts to verify user-generated video. The plug-in, which is the result of an iterative design thinking methodology, brings together a number of sophisticated multimedia analysis components and third-party services, with the goal of speeding up established verification workflows and making it easy for journalists to access the results of different services that were previously used as standalone tools. The tool has been downloaded several hundred times and is currently used by journalists worldwide, after being tested for a few months by journalists and media researchers at Agence France-Presse (AFP) and Deutsche Welle (DW). The tool has already helped debunk a number of fake videos.


Multimedia Tools and Applications | 2017

A web-based tool for fast instance-level labeling of videos and the creation of spatiotemporal media fragments

Anastasia Ioannidou; Evlampios E. Apostolidis; Chrysa Collyda; Vasileios Mezaris

This paper presents a web-based interactive tool for time-efficient instance-level spatiotemporal labeling of videos, based on the re-detection of manually selected objects of interest that appear in them. The developed tool allows the user to select a number of instances of the object, by spatially demarcating it in the video frames, and to provide a short description of the selected object. These instances are given as input to the object re-detection module of the tool, which detects and spatially demarcates re-occurrences of the object in the video frames. The video segments that contain detected instances of the given object can then be considered as object-related media fragments, annotated with the user-provided information about the object. A key component of such a tool is an algorithm that performs the re-detection of the object throughout the video frames. The first part of this work therefore presents our study of different approaches to object re-detection and the approach finally developed, which combines the recently proposed BRISK descriptors with a descriptor-matching strategy that relies on the LSH algorithm. The second part of this work is dedicated to the description of the implemented tool, introducing the supported functionalities and demonstrating its use for object-specific labeling of videos. A set of experiments and a user study regarding the efficiency of the introduced object re-detection method and the performance of the developed tool indicate that the proposed framework can be used for accurate and time-efficient instance-based annotation of videos and the creation of object-related spatiotemporal media fragments.
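
A minimal sketch of LSH-style matching for binary descriptors such as BRISK: descriptors are indexed into several hash tables, each keyed by a random subset of bit positions, so near-identical descriptors are likely to share a bucket in at least one table; bucket candidates are then verified with exact Hamming distance. The table count, key length and distance cut-off are assumptions for illustration, not the paper's parameters.

```python
import random

def build_lsh_tables(descriptors, num_tables=4, bits_per_key=8, seed=0):
    """Index binary descriptors (tuples of 0/1 bits) into hash tables
    keyed by random bit subsets."""
    rng = random.Random(seed)
    dim = len(descriptors[0])
    tables = []
    for _ in range(num_tables):
        idx = rng.sample(range(dim), bits_per_key)
        buckets = {}
        for i, d in enumerate(descriptors):
            key = tuple(d[j] for j in idx)
            buckets.setdefault(key, []).append(i)
        tables.append((idx, buckets))
    return tables

def lsh_nearest(query, tables, descriptors, max_hamming=8):
    """Collect bucket candidates across all tables, then verify them with
    exact Hamming distance; return the best index, or None."""
    candidates = set()
    for idx, buckets in tables:
        candidates.update(buckets.get(tuple(query[j] for j in idx), []))
    best, best_d = None, max_hamming + 1
    for i in candidates:
        d = sum(a != b for a, b in zip(query, descriptors[i]))
        if d < best_d:
            best, best_d = i, d
    return best

# Three toy 32-bit "descriptors"; a query identical to one of them
# always lands in its buckets and verifies with Hamming distance 0.
descs = [(0,) * 32, (1,) * 32, (0,) * 16 + (1,) * 16]
tables = build_lsh_tables(descs)
hit = lsh_nearest((1,) * 32, tables, descs)
```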


Conference on Multimedia Modeling | 2016

VERGE: A Multimodal Interactive Search Engine for Video Browsing and Retrieval

Anastasia Moumtzidou; Evlampios E. Apostolidis; Foteini Markatopoulou; Anastasia Ioannidou; Ilias Gialampoukidis; Konstantinos Avgerinakis; Stefanos Vrochidis; Vasileios Mezaris; Ioannis Kompatsiaris; Ioannis Patras

This paper presents the VERGE interactive search engine, which is capable of browsing and searching video content. The system integrates content-based analysis and retrieval modules for video shot segmentation, concept detection, clustering, visual similarity and object-based search.


Conference on Multimedia Modeling | 2015

VERGE: A Multimodal Interactive Video Search Engine

Anastasia Moumtzidou; Konstantinos Avgerinakis; Evlampios E. Apostolidis; Fotini Markatopoulou; Konstantinos Apostolidis; Stefanos Vrochidis; Vasileios Mezaris; Ioannis Kompatsiaris; Ioannis Patras

This paper presents the VERGE interactive video retrieval engine, which is capable of searching video content. The system integrates several content-based analysis and retrieval modules, such as video shot boundary detection, concept detection, clustering and visual similarity search.


International Conference on Multimedia Retrieval | 2017

VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos

Chrysa Collyda; Evlampios E. Apostolidis; Alexandros Pournaras; Foteini Markatopoulou; Vasileios Mezaris; Ioannis Patras

This paper presents the VideoAnalysis4ALL tool, which supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two different granularities, namely shots and scenes, and annotates each fragment by evaluating the existence of several hundred high-level visual concepts in the keyframes extracted from these fragments. Through this analysis, the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-thread / multi-core architectures, reduce the time required for analysis to approximately one third of the video's duration, thus making the analysis three times faster than real-time processing.
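
In its simplest form, the concept-based annotation step amounts to keeping, for each fragment's keyframes, the concepts whose detector scores clear a confidence threshold. The threshold and the example concept names are illustrative assumptions.

```python
def annotate_fragment(concept_scores, threshold=0.6):
    """concept_scores: dict mapping concept name -> detector confidence
    in [0, 1] for one fragment's keyframes. Returns the concepts that
    clear the (assumed) threshold, highest-scoring first."""
    kept = [(c, s) for c, s in concept_scores.items() if s >= threshold]
    return [c for c, s in sorted(kept, key=lambda t: (-t[1], t[0]))]

# Hypothetical detector output for one shot:
labels = annotate_fragment({"outdoor": 0.9, "car": 0.7, "night": 0.2})
```

A real system would aggregate scores over all keyframes of a fragment (e.g. by max-pooling) before thresholding.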


Conference on Multimedia Modeling | 2018

A Motion-Driven Approach for Fine-Grained Temporal Segmentation of User-Generated Videos

Konstantinos Apostolidis; Evlampios E. Apostolidis; Vasileios Mezaris

This paper presents an algorithm for the temporal segmentation of user-generated videos into visually coherent parts that correspond to individual video capturing activities. The latter include camera pan and tilt, change in focal length and camera displacement. The proposed approach identifies the aforementioned activities by extracting and evaluating the region-level spatio-temporal distribution of the optical flow over sequences of neighbouring video frames. The performance of the algorithm was evaluated with the help of a newly constructed ground-truth dataset, against several state-of-the-art techniques and variations of them. Extensive evaluation indicates the competitiveness of the proposed approach in terms of detection accuracy, and highlights its suitability for analysing large collections of data in a time-efficient manner.
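
The region-level flow analysis can be caricatured as follows: given per-region mean optical-flow vectors, a weak average flow suggests a static camera, while the dominant axis of a strong average flow separates pan (horizontal) from tilt (vertical). This is a drastic simplification of the paper's spatio-temporal analysis, with an assumed magnitude threshold.

```python
def classify_camera_motion(flow_vectors, magnitude_threshold=0.5):
    """flow_vectors: per-region mean optical-flow vectors (dx, dy).
    Returns 'static', 'pan' or 'tilt' based on the average flow."""
    n = len(flow_vectors)
    mx = sum(v[0] for v in flow_vectors) / n
    my = sum(v[1] for v in flow_vectors) / n
    if (mx * mx + my * my) ** 0.5 < magnitude_threshold:
        return "static"
    return "pan" if abs(mx) >= abs(my) else "tilt"

# A 3x3 grid of regions all flowing right suggests a leftward pan:
motion = classify_camera_motion([(2.0, 0.0)] * 9)
```

Distinguishing zoom (change in focal length) and camera displacement would additionally require looking at how flow direction varies across regions, e.g. radial patterns for zoom, which this sketch omits.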

Collaboration


Dive into Evlampios E. Apostolidis's collaborations.

Top Co-Authors

Vasileios Mezaris (Aristotle University of Thessaloniki)
Ioannis Kompatsiaris (Information Technology Institute)
Anastasia Moumtzidou (Information Technology Institute)
Konstantinos Avgerinakis (Information Technology Institute)
Stefanos Vrochidis (Information Technology Institute)
Ioannis Patras (Queen Mary University of London)
Foteini Markatopoulou (Queen Mary University of London)