Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Vasileios Mezaris is active.

Publication


Featured research published by Vasileios Mezaris.


IEEE Transactions on Circuits and Systems for Video Technology | 2004

Real-time compressed-domain spatiotemporal segmentation and ontologies for video indexing and retrieval

Vasileios Mezaris; Ioannis Kompatsiaris; Nikolaos V. Boulgouris; Michael G. Strintzis

In this paper, a novel algorithm is presented for the real-time, compressed-domain, unsupervised segmentation of image sequences and is applied to video indexing and retrieval. The segmentation algorithm uses motion and color information directly extracted from the MPEG-2 compressed stream. An iterative rejection scheme based on the bilinear motion model is used to effect foreground/background segmentation. Following that, meaningful foreground spatiotemporal objects are formed by initially examining the temporal consistency of the output of iterative rejection, clustering the resulting foreground macroblocks to connected regions and finally performing region tracking. Background segmentation to spatiotemporal objects is additionally performed. MPEG-7 compliant low-level descriptors describing the color, shape, position, and motion of the resulting spatiotemporal objects are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. This, combined with a relevance feedback mechanism, allows the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and the retrieval of relevant video segments. Desired spatial and temporal relationships between the objects in multiple-keyword queries can also be expressed, using the shot ontology. Experimental results of the application of the segmentation algorithm to known sequences demonstrate the efficiency of the proposed segmentation approach. Sample queries reveal the potential of employing this segmentation algorithm as part of an object-based video indexing and retrieval scheme.
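The iterative rejection step above can be sketched in a few lines. This is only an illustrative stand-in: for brevity it fits a purely translational global-motion model (the mean inlier vector) rather than the paper's bilinear model, and the function name, block layout and threshold are invented for the example.

```python
def iterative_rejection(motion_vectors, threshold=1.0, max_iters=10):
    """Split macroblocks into background (fits the global motion) and
    foreground (rejected outliers).

    motion_vectors: dict mapping (x, y) block position -> (vx, vy).
    Simplification: the global model here is purely translational
    (the mean inlier vector), not the paper's bilinear model.
    """
    inliers = set(motion_vectors)
    for _ in range(max_iters):
        # Fit the global-motion model to the current inlier set.
        mx = sum(motion_vectors[p][0] for p in inliers) / len(inliers)
        my = sum(motion_vectors[p][1] for p in inliers) / len(inliers)
        # Reject blocks whose residual w.r.t. the global model is large.
        kept = {p for p in inliers
                if (motion_vectors[p][0] - mx) ** 2
                 + (motion_vectors[p][1] - my) ** 2 <= threshold ** 2}
        if kept == inliers:   # converged: no new rejections
            break
        inliers = kept
    foreground = set(motion_vectors) - inliers
    return foreground, inliers
```

On a mostly static grid, the blocks whose vectors disagree with the dominant motion end up as foreground.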


International Conference on Image Processing | 2003

An ontology approach to object-based image retrieval

Vasileios Mezaris; Ioannis Kompatsiaris; Michael G. Strintzis

In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image of the collection, and query-by-example approaches, which assume that the user queries for images similar to one already at their disposal.
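The mapping from low-level features to the intermediate-level vocabulary can be illustrated roughly as follows. The feature names, value ranges and qualitative labels here are hypothetical stand-ins, not the paper's actual object ontology.

```python
# Hypothetical vocabulary: each low-level feature is quantized into
# qualitative intermediate-level descriptors via [lo, hi) value ranges.
ONTOLOGY = {
    "luminance": [("dark", 0.0, 0.33), ("medium", 0.33, 0.66), ("bright", 0.66, 1.01)],
    "size":      [("small", 0.0, 0.1), ("medium", 0.1, 0.4), ("large", 0.4, 1.01)],
}

def to_intermediate(region):
    """Map a region's numeric low-level features to qualitative descriptors."""
    out = {}
    for feature, bins in ONTOLOGY.items():
        value = region[feature]
        for label, lo, hi in bins:
            if lo <= value < hi:
                out[feature] = label
                break
    return out

def matches_query(region, query):
    """First filtering pass: a region survives only if every qualitative
    descriptor in the keyword definition is satisfied."""
    qual = to_intermediate(region)
    return all(qual.get(k) in allowed for k, allowed in query.items())
```

A keyword such as "sky" would then be defined as a set of allowed qualitative values per feature, and clearly irrelevant regions are rejected before any low-level matching.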


IEEE Transactions on Circuits and Systems for Video Technology | 2005

Knowledge-assisted semantic video object detection

Stamatia Dasiopoulou; Vasileios Mezaris; Ioannis Kompatsiaris; Vasileios-Kyriakos Papastathis; Michael G. Strintzis

An approach to knowledge-assisted semantic video object detection based on a multimedia ontology infrastructure is presented. Semantic concepts in the context of the examined domain are defined in an ontology, enriched with qualitative attributes (e.g., color homogeneity), low-level features (e.g., color model components distribution), object spatial relations, and multimedia processing methods (e.g., color clustering). Semantic Web technologies are used for knowledge representation in the RDF(S) metadata standard. Rules in F-logic are defined to describe how tools for multimedia analysis should be applied, depending on concept attributes and low-level features, for the detection of video objects corresponding to the semantic concepts defined in the ontology. This supports flexible and managed execution of various application- and domain-independent multimedia analysis tasks. Furthermore, this semantic analysis approach can be used in semantic annotation and transcoding systems, which take into consideration the user's environment, including preferences, devices used, available network bandwidth and content identity. The proposed approach was tested for the detection of semantic objects on video data of three different domains.
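The rule-driven selection of analysis tools can be caricatured in plain Python rather than F-logic over RDF(S); the attribute names, rule conditions and tool names below are invented purely for illustration.

```python
# Toy stand-in for the paper's F-logic rules: each rule inspects a
# concept's attributes and, if its condition holds, proposes an
# analysis tool to run.  Names are hypothetical.
RULES = [
    (lambda c: c.get("color_homogeneity") == "high", "color_clustering"),
    (lambda c: "dominant_motion" in c, "motion_segmentation"),
]

def plan_analysis(concept):
    """Return the ordered list of tools whose rule conditions the
    concept's attributes satisfy."""
    return [tool for condition, tool in RULES if condition(concept)]
```

The point of the real system is that such rules keep the analysis tools themselves domain-independent: only the ontology and rules change per domain.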


International Journal of Pattern Recognition and Artificial Intelligence | 2004

Still Image Segmentation Tools for Object-based Multimedia Applications

Vasileios Mezaris; Ioannis Kompatsiaris; Michael G. Strintzis

In this paper, a color image segmentation algorithm and an approach to large-format image segmentation are presented, both focused on breaking down images to semantic objects for object-based multimedia applications. The proposed color image segmentation algorithm performs the segmentation in the combined intensity-texture-position feature space in order to produce connected regions that correspond to the real-life objects shown in the image. A preprocessing stage of conditional image filtering and a modified K-means-with-connectivity-constraint pixel classification algorithm are used to allow for seamless integration of the different pixel features. Unsupervised operation of the segmentation algorithm is enabled by means of an initial clustering procedure. The large-format image segmentation scheme employs the aforementioned segmentation algorithm, providing an elegant framework for the fast segmentation of relatively large images. In this framework, the segmentation algorithm is applied to reduced versions of the original images, in order to speed up the completion of the segmentation, resulting in a coarse-grained segmentation mask. The final fine-grained segmentation mask is produced with partial reclassification of the pixels of the original image to the already formed regions, using a Bayes classifier. As shown by experimental evaluation, this novel scheme provides fast segmentation with high perceptual segmentation quality.
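The coarse-to-fine large-format scheme can be sketched as follows, under simplifying assumptions: grayscale intensities, nearest-neighbour upsampling of the coarse mask, and a nearest-mean maximum-likelihood rule standing in for the paper's Bayes classifier. All names are illustrative.

```python
def upscale_mask(coarse_mask, factor):
    """Nearest-neighbour upsampling of the coarse segmentation mask
    computed on the reduced image."""
    return [[coarse_mask[r // factor][c // factor]
             for c in range(len(coarse_mask[0]) * factor)]
            for r in range(len(coarse_mask) * factor)]

def reclassify(image, mask):
    """Reassign each full-resolution pixel to the region whose mean
    intensity is closest (a nearest-mean stand-in for the paper's
    Bayes classifier)."""
    # Estimate each region's mean intensity from the upsampled mask ...
    sums, counts = {}, {}
    for img_row, mask_row in zip(image, mask):
        for px, lab in zip(img_row, mask_row):
            sums[lab] = sums.get(lab, 0.0) + px
            counts[lab] = counts.get(lab, 0) + 1
    means = {lab: sums[lab] / counts[lab] for lab in sums}
    # ... then pick the most likely region for every pixel.
    return [[min(means, key=lambda lab: abs(px - means[lab])) for px in img_row]
            for img_row in image]
```

Pixels that the blocky upsampled mask mislabels near region borders are pulled back to the region whose statistics they actually match.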


EURASIP Journal on Advances in Signal Processing | 2004

Region-based image retrieval using an object ontology and relevance feedback

Vasileios Mezaris; Ioannis Kompatsiaris; Michael G. Strintzis

An image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions and endow the indexing and retrieval system with content-based functionalities. Low-level descriptors for the color, position, size, and shape of each region are subsequently extracted. These arithmetic descriptors are automatically associated with appropriate qualitative intermediate-level descriptors, which form a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and their relations in a human-centered fashion. When querying for a specific semantic object (or objects), the intermediate-level descriptor values associated with both the semantic object and all image regions in the collection are initially compared, resulting in the rejection of most image regions as irrelevant. Following that, a relevance feedback mechanism, based on support vector machines and using the low-level descriptors, is invoked to rank the remaining potentially relevant image regions and produce the final query results. Experimental results and comparisons demonstrate, in practice, the effectiveness of our approach.
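The second stage of querying, ranking the regions that survive the qualitative pre-filtering, can be caricatured as below. The signed scoring function is a crude geometric stand-in for the paper's SVM-based relevance feedback, and all names are illustrative.

```python
def relevance_feedback_rank(candidates, positives, negatives):
    """Rank surviving regions by a signed score: distance to the nearest
    user-marked negative example minus distance to the nearest positive.
    A crude geometric stand-in for the paper's SVM-based feedback.

    candidates: list of (region_id, feature_vector) pairs.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def score(feat):
        return (min(dist(feat, n) for n in negatives)
                - min(dist(feat, p) for p in positives))

    # Highest score (closest to positives, furthest from negatives) first.
    return sorted(candidates, key=lambda item: score(item[1]), reverse=True)
```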


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Temporal Video Segmentation to Scenes Using High-Level Audiovisual Features

Panagiotis Sidiropoulos; Vasileios Mezaris; Ioannis Kompatsiaris; Hugo Meinedo; Miguel Bugalho; Isabel Trancoso

In this paper, a novel approach to video temporal decomposition into semantic units, termed scenes, is presented. In contrast to previous temporal segmentation approaches that employ mostly low-level visual or audiovisual features, we introduce a technique that jointly exploits low-level and high-level features automatically extracted from the visual and the auditory channel. This technique is built upon the well-known method of the scene transition graph (STG), first by introducing a new STG approximation that features reduced computational cost, and then by extending the unimodal STG-based temporal segmentation technique to a method for multimodal scene segmentation. The latter exploits, among others, the results of a large number of TRECVID-type trained visual concept detectors and audio event detectors, and is based on a probabilistic merging process that combines multiple individual STGs while at the same time diminishing the need for selecting and fine-tuning several STG construction parameters. The proposed approach is evaluated on three test datasets, comprising TRECVID documentary films, movies, and news-related videos, respectively. The experimental results demonstrate the improved performance of the proposed approach in comparison to other unimodal and multimodal techniques of the relevant literature and highlight the contribution of high-level audiovisual features toward improved video segmentation to scenes.
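The core scene transition graph intuition, linking similar shots within a temporal window and cutting a scene wherever no link crosses, can be sketched as follows. The dot-product similarity and the parameter values are toy choices, not the paper's.

```python
def scenes_from_shots(shot_features, window=3, threshold=0.8):
    """Group a shot sequence into scenes, STG-style: link any two shots
    that are similar and at most `window` apart; a scene ends at every
    position no link crosses."""
    def sim(a, b):                      # toy similarity: dot product
        return sum(x * y for x, y in zip(a, b))

    n = len(shot_features)
    boundaries, reach = [], 0           # reach = furthest shot linked so far
    for i in range(n):
        for j in range(i + 1, min(n, i + window + 1)):
            if sim(shot_features[i], shot_features[j]) >= threshold:
                reach = max(reach, j)
        if i == reach:                  # no link crosses shot i: boundary
            boundaries.append(i)
            reach = i + 1
    scenes, start = [], 0               # boundaries -> (start, end) spans
    for b in boundaries:
        scenes.append((start, b))
        start = b + 1
    return scenes
```

The multimodal method builds many such graphs (one per feature modality) and merges their boundary decisions probabilistically.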


IEEE Transactions on Circuits and Systems for Video Technology | 2004

Video object segmentation using Bayes-based temporal tracking and trajectory-based region merging

Vasileios Mezaris; Ioannis Kompatsiaris; Michael G. Strintzis

A novel unsupervised video object segmentation algorithm is presented, aiming to segment a video sequence into objects: spatiotemporal regions representing a meaningful part of the sequence. The proposed algorithm consists of three stages: initial segmentation of the first frame using color, motion, and position information, based on a variant of the K-means-with-connectivity-constraint algorithm; a temporal tracking algorithm, using a Bayes classifier and rule-based processing to reassign changed pixels to existing regions and to efficiently handle the introduction of new regions; and a trajectory-based region merging procedure that employs the long-term trajectory of regions, rather than the motion at the frame level, so as to group them into objects with different motion. As shown by experimental evaluation, this scheme can efficiently segment video sequences with fast-moving or newly appearing objects. A comparison with other methods shows segmentation results corresponding more accurately to the real objects appearing in the image sequence.
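The trajectory-based merging stage might be sketched like this, assuming each region's long-term trajectory is given as a list of per-frame centroids. The distance threshold and the union-find grouping are illustrative choices, not the paper's exact criterion.

```python
def merge_by_trajectory(trajectories, max_mean_dist=2.0):
    """Group regions into objects when their long-term trajectories move
    together, i.e. the mean per-frame distance between their centroids
    stays below a threshold.  trajectories: region_id -> [(x, y), ...].
    """
    ids = list(trajectories)
    parent = {r: r for r in ids}        # union-find forest

    def find(r):
        while parent[r] != r:
            r = parent[r]
        return r

    def mean_dist(a, b):
        return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(a, b)) / len(a)

    # Union every pair of regions whose trajectories stay close.
    for i, r in enumerate(ids):
        for s in ids[i + 1:]:
            if mean_dist(trajectories[r], trajectories[s]) <= max_mean_dist:
                parent[find(s)] = find(r)
    groups = {}
    for r in ids:
        groups.setdefault(find(r), []).append(r)
    return sorted(groups.values())
```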


Ultrasound in Medicine and Biology | 2008

Image analysis techniques for automated IVUS contour detection

Maria Papadogiorgaki; Vasileios Mezaris; Yiannis S. Chatzizisis; George D. Giannoglou; Ioannis Kompatsiaris

Intravascular ultrasound (IVUS) constitutes a valuable technique for the diagnosis of coronary atherosclerosis. The detection of lumen and media-adventitia borders in IVUS images represents a necessary step towards the reliable quantitative assessment of atherosclerosis. In this work, a fully automated technique for the detection of lumen and media-adventitia borders in IVUS images is presented. This comprises two different steps for contour initialization, one for each contour of interest, followed by a procedure for the refinement of the detected contours. Intensity information, as well as the result of texture analysis generated by means of a multilevel discrete wavelet frames decomposition, is used in two different techniques for contour initialization. For subsequently producing smooth contours, three techniques based on low-pass filtering and radial basis functions are introduced. The different combinations of the proposed methods are experimentally evaluated on large datasets of IVUS images derived from human coronary arteries. It is demonstrated that the proposed segmentation approaches can quickly and reliably perform automated segmentation of IVUS images.
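Of the smoothing techniques, the low-pass filtering variant is the simplest to sketch, assuming the contour is expressed in polar form as one radius per angular sample around the catheter centre. The circular moving average below is an illustrative low-pass filter, not the paper's exact one.

```python
def smooth_contour(radii, window=5):
    """Low-pass filter a closed IVUS contour given in polar form (one
    radius per angular sample) with a circular moving average.

    `window` is assumed odd; the wrap-around indexing keeps the contour
    closed, so an isolated spike is spread over its neighbours.
    """
    n, half = len(radii), window // 2
    return [sum(radii[(i + k) % n] for k in range(-half, half + 1)) / window
            for i in range(n)]
```

Because every sample contributes to exactly `window` averages, the total (and hence the mean radius) is preserved while sharp detection noise is attenuated.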


EURASIP Journal on Image and Video Processing | 2015

Special issue on animal and insect behaviour understanding in image sequences

Concetto Spampinato; Giovanni Maria Farinella; Bastiaan Johannes Boom; Vasileios Mezaris; Margrit Betke; Robert B. Fisher

Imaging systems are nowadays used increasingly in a range of ecological monitoring applications, in particular for biological, fishery, geological and physical surveys. These technologies have radically improved the ability to capture high-resolution images in challenging environments and, consequently, to manage natural resources effectively. Unfortunately, advances in imaging devices have not been followed by improvements in automated analysis systems, which are necessary because of the need for time-consuming and expensive inputs by human observers. This analytical 'bottleneck' greatly limits the potential of these technologies and increases demand for automatic content analysis approaches to enable proactive provision of analytical information. On the other hand, the study of behaviour by processing visual data has become an active research area in computer vision. The visual information gathered from image sequences is extremely useful for understanding the behaviour of the different objects in the scene, as well as how they interact with each other or with the surrounding environment. However, whilst a large number of video analysis techniques have been developed specifically for investigating events and behaviour in human-centred applications, very little attention has been paid to the understanding of other live organisms, such as animals and insects, although a huge amount of video data is routinely recorded; e.g. the Fish4Knowledge project (www.fish4knowledge.eu) and the wide range of nest cams (http://watch.birds.cornell.edu/nestcams/home/index) continuously monitor, respectively, underwater reefs and bird nests (there also exist variants focusing on wolves, badgers, foxes, etc.). The automated analysis of visual data in real-life environments for animal and insect behaviour understanding poses several challenges for computer vision researchers.


Content-Based Multimedia Indexing | 2009

An Empirical Study of Multi-label Learning Methods for Video Annotation

Anastasios Dimou; Grigorios Tsoumakas; Vasileios Mezaris; Ioannis Kompatsiaris; Ioannis P. Vlahavas

This paper presents an experimental comparison of different approaches to learning from multi-labeled video data. We compare state-of-the-art multi-label learning methods on the MediaMill Challenge dataset. We employ MPEG-7 and SIFT-based global image descriptors independently and in conjunction, using variations of the stacking approach for their fusion. We evaluate the results comparing the different classifiers using both MPEG-7 and SIFT-based descriptors and their fusion. A variety of multi-label evaluation measures is used to explore the advantages and disadvantages of the examined classifiers. Results give rise to interesting conclusions.
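The fusion-by-stacking idea can be caricatured in a few lines: first-level classifiers, one per descriptor type (e.g. MPEG-7 and SIFT-based), emit a score per label, and a meta-level combines them into the final multi-label decision. The linear meta-level with fixed weights below is an assumption made for illustration; in a stacking setup the meta-level would itself be learned.

```python
def stack_predictions(first_level_scores, weights, threshold=0.5):
    """Combine per-label scores from several first-level classifiers.

    first_level_scores: list of {label: score} dicts, one per descriptor
    type; weights: one weight per classifier.  Returns the set of labels
    whose fused score clears the threshold (the multi-label decision).
    """
    labels = first_level_scores[0].keys()
    fused = {lab: sum(w * scores[lab]
                      for w, scores in zip(weights, first_level_scores))
             for lab in labels}
    return {lab for lab, s in fused.items() if s >= threshold}
```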

Collaboration


Dive into Vasileios Mezaris's collaborations.

Top Co-Authors

Ioannis Kompatsiaris

Information Technology Institute

Michael G. Strintzis

Aristotle University of Thessaloniki

Ioannis Patras

Queen Mary University of London

Georgios Th. Papadopoulos

Aristotle University of Thessaloniki

Nikolaos Gkalelis

Information Technology Institute

Foteini Markatopoulou

Queen Mary University of London

Anastasios Dimou

Aristotle University of Thessaloniki

Panagiotis Sidiropoulos

Aristotle University of Thessaloniki

Evlampios E. Apostolidis

Information Technology Institute