Michael G. Strintzis
Aristotle University of Thessaloniki
Publication
Featured research published by Michael G. Strintzis.
European Semantic Web Conference | 2005
Stephan Bloehdorn; Kosmas Petridis; Carsten Saathoff; Nikos Simou; Vassilis Tzouvaras; Yannis S. Avrithis; Siegfried Handschuh; Yiannis Kompatsiaris; Steffen Staab; Michael G. Strintzis
Annotation of multimedia documents has typically been pursued in two different directions: previous approaches have focused either on low-level descriptors, such as dominant color, or on the content dimension and corresponding annotations, such as person or vehicle. In this paper, we present a software environment to bridge between the two directions. M-OntoMat-Annotizer allows for linking low-level MPEG-7 visual descriptions to conventional Semantic Web ontologies and annotations. We use M-OntoMat-Annotizer in order to construct ontologies that include prototypical instances of high-level domain concepts together with a formal specification of corresponding visual descriptors. Thus, we formalize the interrelationship of high- and low-level multimedia concept descriptions, allowing for new kinds of multimedia content analysis and reasoning.
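As a rough illustration of the linking idea (not the actual M-OntoMat-Annotizer vocabulary or the MPEG-7 schema), the following sketch attaches a dominant-colour style descriptor to a prototypical instance of a domain concept in an RDF graph; all namespaces and property names are hypothetical placeholders.

```python
# Minimal sketch: a prototype instance of a high-level concept linked to a
# low-level visual descriptor. Namespaces and properties are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/domain#")    # hypothetical domain ontology
VDO = Namespace("http://example.org/visual#")   # hypothetical visual-descriptor ontology

g = Graph()
g.bind("ex", EX)
g.bind("vdo", VDO)

# A prototypical instance of the high-level concept "Vehicle".
proto = EX.vehicle_prototype_01
g.add((proto, RDF.type, EX.Vehicle))

# A dominant-colour style descriptor extracted from a sample image region.
desc = VDO.dominantColour_01
g.add((desc, RDF.type, VDO.DominantColourDescriptor))
g.add((desc, VDO.rgbValue, Literal("128,32,32")))
g.add((desc, VDO.percentage, Literal(0.64)))

# The formal link between the concept prototype and its low-level description.
g.add((proto, VDO.hasVisualDescriptor, desc))

print(g.serialize(format="turtle"))
```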
Medical Informatics Europe | 1998
Nicos Maglaveras; T. Stamkopoulos; Konstantinos I. Diamantaras; C. Pappas; Michael G. Strintzis
The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.
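A minimal sketch of one of the reviewed ideas, NLPCA realised as an autoencoder-style network with a narrow bottleneck, is shown below. The data is synthetic and stands in for fixed-length ECG segments; the real work uses beats from the MIT-BIH and European ST-T databases.

```python
# Sketch: nonlinear PCA via a bottlenecked multilayer network (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))          # stand-in for 20-sample ECG segments

# Autoencoder 20 -> 8 -> 2 -> 8 -> 20; the 2-unit layer holds the
# nonlinear principal components.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)

def nonlinear_components(model, X):
    """Forward-propagate only up to the 2-unit bottleneck layer."""
    h = X
    for W, b in list(zip(model.coefs_, model.intercepts_))[:2]:
        h = np.tanh(h @ W + b)
    return h                             # shape (n_samples, 2)

Z = nonlinear_components(ae, X)
print(Z.shape)
```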
Pattern Recognition Letters | 2003
Filareti Tsalakanidou; Dimitrios Tzovaras; Michael G. Strintzis
In the present paper a face recognition technique is developed based on depth and colour information. The main objective of the paper is to evaluate three different approaches (colour, depth, combination of colour and depth) for face recognition and quantify the contribution of depth. The proposed face recognition technique is based on the implementation of the principal component analysis algorithm and the extraction of depth and colour eigenfaces. Experimental results show significant gains attained with the addition of depth information.
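The following sketch illustrates the general eigenface-style approach under simplifying assumptions (synthetic data, intensity instead of full colour): PCA is applied separately to intensity and depth images and the two projections are concatenated for nearest-neighbour matching. It is not the authors' exact pipeline.

```python
# Sketch: PCA "eigenfaces" on intensity and depth, combined for matching.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_train, h, w = 40, 32, 32
gallery_gray  = rng.random((n_train, h * w))   # flattened intensity images
gallery_depth = rng.random((n_train, h * w))   # flattened depth maps
labels = np.arange(n_train)                    # one identity per gallery image

pca_gray  = PCA(n_components=20).fit(gallery_gray)
pca_depth = PCA(n_components=20).fit(gallery_depth)

gallery_feat = np.hstack([pca_gray.transform(gallery_gray),
                          pca_depth.transform(gallery_depth)])

def identify(probe_gray, probe_depth):
    """Return the gallery label whose combined eigenface projection is closest."""
    feat = np.hstack([pca_gray.transform(probe_gray[None]),
                      pca_depth.transform(probe_depth[None])])
    dists = np.linalg.norm(gallery_feat - feat, axis=1)
    return labels[np.argmin(dists)]

# A slightly perturbed copy of gallery image 3 should match identity 3.
print(identify(gallery_gray[3] + 0.01 * rng.random(h * w), gallery_depth[3]))
```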
IEEE Transactions on Circuits and Systems for Video Technology | 2004
Vasileios Mezaris; Ioannis Kompatsiaris; Nikolaos V. Boulgouris; Michael G. Strintzis
In this paper, a novel algorithm is presented for the real-time, compressed-domain, unsupervised segmentation of image sequences and is applied to video indexing and retrieval. The segmentation algorithm uses motion and color information directly extracted from the MPEG-2 compressed stream. An iterative rejection scheme based on the bilinear motion model is used to effect foreground/background segmentation. Following that, meaningful foreground spatiotemporal objects are formed by initially examining the temporal consistency of the output of iterative rejection, clustering the resulting foreground macroblocks to connected regions and finally performing region tracking. Background segmentation to spatiotemporal objects is additionally performed. MPEG-7 compliant low-level descriptors describing the color, shape, position, and motion of the resulting spatiotemporal objects are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. This, combined with a relevance feedback mechanism, allows the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) and the retrieval of relevant video segments. Desired spatial and temporal relationships between the objects in multiple-keyword queries can also be expressed, using the shot ontology. Experimental results of the application of the segmentation algorithm to known sequences demonstrate the efficiency of the proposed segmentation approach. Sample queries reveal the potential of employing this segmentation algorithm as part of an object-based video indexing and retrieval scheme.
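A minimal sketch of the iterative-rejection idea follows, not the authors' exact procedure: macroblock motion vectors are fitted by a global bilinear motion model, blocks that repeatedly disagree with the fit are rejected as foreground, and the model is refitted. The motion vectors here are synthetic; in the paper they are read directly from the MPEG-2 stream.

```python
# Sketch: foreground/background separation by iterative rejection against a
# global bilinear motion model (synthetic macroblock motion field).
import numpy as np

def fit_bilinear(xy, mv):
    """Least-squares fit of dx = a0 + a1*x + a2*y + a3*x*y (and likewise for dy)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y])
    params, *_ = np.linalg.lstsq(A, mv, rcond=None)   # params has shape (4, 2)
    return params

def iterative_rejection(xy, mv, thresh=1.0, max_iter=10):
    """Fit the global model, reject outlier blocks, refit, repeat until stable."""
    basis = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1], xy[:, 0] * xy[:, 1]])
    keep = np.ones(len(xy), dtype=bool)
    for _ in range(max_iter):
        params = fit_bilinear(xy[keep], mv[keep])
        resid = np.linalg.norm(mv - basis @ params, axis=1)
        new_keep = resid < thresh
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return ~keep                                       # True -> foreground macroblock

# Toy example: a 16x16 grid of macroblocks with a global pan plus one moving object.
gx, gy = np.meshgrid(np.arange(16), np.arange(16))
xy = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)
mv = np.tile([2.0, 0.0], (len(xy), 1))                    # global pan of 2 pixels
mv[(gx.ravel() < 4) & (gy.ravel() < 4)] = [8.0, 5.0]      # object moving differently
print(iterative_rejection(xy, mv).sum(), "foreground macroblocks")
```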
IEEE Transactions on Information Forensics and Security | 2008
Iordanis Mpiperis; Sotiris Malassiotis; Michael G. Strintzis
In this paper, we explore bilinear models for jointly addressing 3D face and facial expression recognition. An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed. Fitting these models to unknown faces enables us to perform face recognition invariant to facial expressions and facial expression recognition with unknown identity. A quantitative evaluation of the proposed technique is conducted on the publicly available BU-3DFE face database in comparison with our previous work on face recognition and other state-of-the-art algorithms for facial expression recognition. Experimental results demonstrate an overall 90.5% facial expression recognition rate and an 86% rank-1 face recognition rate.
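A minimal sketch of the decoupling idea is given below, using an asymmetric bilinear (style x content) model in the spirit of Tenenbaum and Freeman rather than the authors' elastically deformable model pipeline; faces are synthetic vectors indexed by (expression, identity), and an SVD of the stacked data matrix yields expression-specific bases and expression-invariant identity vectors.

```python
# Sketch: asymmetric bilinear model separating expression (style) from identity (content).
import numpy as np

rng = np.random.default_rng(2)
n_expr, n_id, dim, k = 6, 10, 200, 5     # expressions, identities, face dim, model rank

# Synthetic data: each face = expression basis applied to an identity vector + noise.
true_bases = rng.normal(size=(n_expr, dim, k))
true_ids = rng.normal(size=(k, n_id))
faces = np.stack([true_bases[e] @ true_ids + 0.01 * rng.normal(size=(dim, n_id))
                  for e in range(n_expr)])          # shape (n_expr, dim, n_id)

# Stack expressions along the row dimension, then factorize with an SVD.
Y = faces.reshape(n_expr * dim, n_id)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
A = (U[:, :k] * s[:k]).reshape(n_expr, dim, k)   # expression-specific bases
B = Vt[:k]                                       # identity vectors, one column per person

def identify(probe, expr_idx):
    """Expression-invariant recognition: fit the probe with the known expression
    basis and match the resulting coefficients against the identity vectors."""
    coeffs, *_ = np.linalg.lstsq(A[expr_idx], probe, rcond=None)
    return np.argmin(np.linalg.norm(B - coeffs[:, None], axis=0))

probe = faces[3, :, 7]                  # identity 7 shown with expression 3
print(identify(probe, expr_idx=3))      # expected: 7
```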
International Conference on Image Processing | 2003
Vasileios Mezaris; Ioannis Kompatsiaris; Michael G. Strintzis
In this paper, an image retrieval methodology suited for search in large collections of heterogeneous images is presented. The proposed approach employs a fully unsupervised segmentation algorithm to divide images into regions. Low-level features describing the color, position, size and shape of the resulting regions are extracted and are automatically mapped to appropriate intermediate-level descriptors forming a simple vocabulary termed object ontology. The object ontology is used to allow the qualitative definition of the high-level concepts the user queries for (semantic objects, each represented by a keyword) in a human-centered fashion. When querying, clearly irrelevant image regions are rejected using the intermediate-level descriptors; following that, a relevance feedback mechanism employing the low-level features is invoked to produce the final query results. The proposed approach bridges the gap between keyword-based approaches, which assume the existence of rich image captions or require manual evaluation and annotation of every image in the collection, and query-by-example approaches, which assume that the user queries for images similar to one already at the user's disposal.
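The following sketch illustrates the mapping from low-level region features to intermediate-level qualitative descriptors and the rejection of clearly irrelevant regions for a keyword query. The concept definitions, feature names and thresholds are illustrative only, not the paper's actual vocabulary.

```python
# Sketch: object-ontology-style qualitative descriptors and region filtering.
def intermediate_descriptors(region):
    """Map numeric region features to qualitative terms (illustrative ranges)."""
    r, g, b = region["mean_rgb"]
    return {
        "colour": "green" if g > max(r, b) else ("blue" if b > max(r, g) else "warm"),
        "size": "large" if region["area_ratio"] > 0.3 else "small",
        "vertical_position": "high" if region["centroid_y"] < 0.33 else
                             ("low" if region["centroid_y"] > 0.66 else "middle"),
    }

# Hypothetical qualitative definitions of two high-level concepts.
ontology = {
    "sky":     {"colour": {"blue"},  "vertical_position": {"high"}},
    "foliage": {"colour": {"green"}, "size": {"large", "small"}},
}

def candidate_regions(regions, keyword):
    """Keep only regions whose descriptors are compatible with the queried concept."""
    concept = ontology[keyword]
    return [r for r in regions
            if all(intermediate_descriptors(r)[k] in allowed
                   for k, allowed in concept.items())]

regions = [
    {"mean_rgb": (90, 120, 200), "area_ratio": 0.4, "centroid_y": 0.2},
    {"mean_rgb": (60, 150, 60),  "area_ratio": 0.1, "centroid_y": 0.7},
]
print(len(candidate_regions(regions, "sky")))   # -> 1
```

In the full system, the regions surviving this qualitative filtering are then ranked by relevance feedback over the low-level features.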
IEEE Transactions on Signal Processing | 1998
T. Stamkopoulos; Kostas I. Diamantaras; Nikolaos Maglaveras; Michael G. Strintzis
The detection of ischemic cardiac beats from a patient's electrocardiogram (ECG) signal is based on the characteristics of a specific part of the beat called the ST segment. The correct classification of the beats relies heavily on the efficient and accurate extraction of the ST segment features. An algorithm is developed for this feature extraction based on nonlinear principal component analysis (NLPCA). NLPCA is a method for nonlinear feature extraction that is usually implemented by a multilayer neural network. It has been observed to have better performance, compared with linear principal component analysis (PCA), in complex problems where the relationships between the variables are not linear. In this paper, the NLPCA techniques are used to classify each segment into one of two classes: normal and abnormal (ST+, ST-, or artifact). During the algorithm training phase, only normal patterns are used, and for classification purposes, we use only two nonlinear features for each ST segment. The distribution of these features is modeled using a radial basis function network (RBFN). Test results using the European ST-T database show that using only two nonlinear components and a training set of 1000 normal samples from each file produces a correct classification rate of approximately 80% for the normal beats and higher than 90% for the ischemic beats.
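A minimal sketch of the normal-only training idea follows, with synthetic 2-D features standing in for the two nonlinear components: the distribution of normal ST-segment features is modelled by a radial-basis-function expansion, and beats whose density score falls below a threshold learned from normal data are flagged as abnormal.

```python
# Sketch: RBF density model of normal features, used for abnormal-beat detection.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
normal = rng.normal(loc=0.0, scale=0.3, size=(1000, 2))     # "normal" training features

# RBF centres and a common width are chosen from the normal data only.
centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(normal).cluster_centers_
sigma = 0.5

def rbf_score(x):
    """Sum of Gaussian kernels centred on the normal-class prototypes."""
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum(axis=1)

threshold = np.percentile(rbf_score(normal), 5)     # accept ~95% of normal beats

ischemic_like = rng.normal(loc=2.0, scale=0.3, size=(50, 2))  # shifted "abnormal" features
print("fraction flagged abnormal:", (rbf_score(ischemic_like) < threshold).mean())
```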
IEEE Transactions on Image Processing | 2001
Nikolaos V. Boulgouris; Dimitrios Tzovaras; Michael G. Strintzis
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
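The sketch below shows a single row-column lifting prediction step with a simple two-tap average predictor, not the optimal predictors derived in the paper: odd samples are predicted from their even neighbours and only integer residuals are kept, so the step is exactly invertible for lossless coding.

```python
# Sketch: one lossless lifting prediction step along rows (integer arithmetic).
import numpy as np

def lift_predict_rows(img):
    """Split each row into even/odd samples and replace odd samples by residuals."""
    even, odd = img[:, ::2].astype(np.int32), img[:, 1::2].astype(np.int32)
    right = np.roll(even, -1, axis=1)
    right[:, -1] = even[:, -1]                # replicate the boundary sample
    pred = (even + right) // 2                # two-tap average predictor
    return even, odd - pred                   # residuals are small for smooth images

def inverse_lift_rows(even, resid):
    right = np.roll(even, -1, axis=1)
    right[:, -1] = even[:, -1]
    odd = resid + (even + right) // 2
    out = np.empty((even.shape[0], even.shape[1] + odd.shape[1]), dtype=np.int32)
    out[:, ::2], out[:, 1::2] = even, odd
    return out

img = (np.arange(8 * 8).reshape(8, 8) % 37).astype(np.int32)   # toy "image"
even, resid = lift_predict_rows(img)
assert np.array_equal(inverse_lift_rows(even, resid), img)      # perfectly invertible
print(np.abs(resid).mean(), "vs", np.abs(img[:, 1::2]).mean())  # decorrelated residuals
```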
IEEE Transactions on Circuits and Systems for Video Technology | 2002
George A. Triantafyllidis; Dimitrios Tzovaras; Michael G. Strintzis
A novel frequency-domain technique for image blocking artifact detection and reduction is presented. The algorithm first detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the discrete cosine transform (DCT) coefficients are modeled by a Laplacian probability function. Then, for each block affected by blocking artifacts, its DC and AC coefficients are recalculated for artifact reduction. To achieve this, a closed-form representation of the optimal correction of the DCT coefficients is produced by minimizing a novel enhanced form of the mean squared difference of slope for every frequency separately. This correction of each DCT coefficient depends on the eight neighboring coefficients in the subband-like representation of the DCT transform and is constrained by the quantization upper and lower bound. Experimental results illustrating the performance of the proposed method are presented and evaluated.
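For illustration, the sketch below applies the difference-of-slope idea at 8x8 block boundaries in the pixel domain, rather than the paper's frequency-domain formulation with Laplacian-modelled quantization error; the threshold is illustrative.

```python
# Sketch: detecting visible blocking artifacts from slope mismatch at block boundaries.
import numpy as np

def boundary_slope_mismatch(img, block=8):
    """For each vertical block boundary, compare the slope across the boundary
    with the slopes just inside the neighbouring blocks."""
    img = img.astype(np.float64)
    scores = []
    for c in range(block, img.shape[1], block):
        across = img[:, c] - img[:, c - 1]            # slope across the boundary
        inside_l = img[:, c - 1] - img[:, c - 2]      # slope inside the left block
        inside_r = img[:, c + 1] - img[:, c]          # slope inside the right block
        expected = 0.5 * (inside_l + inside_r)
        scores.append(np.mean((across - expected) ** 2))
    return np.array(scores)

# Toy image: a smooth horizontal ramp plus an artificial step at one block boundary.
img = np.tile(np.arange(64, dtype=np.float64), (64, 1))
img[:, 32:] += 20.0                                   # simulated blocking step at column 32
scores = boundary_slope_mismatch(img)
print(np.where(scores > 10.0)[0])                     # -> boundary index 3 (column 32)
```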
IEEE Transactions on Circuits and Systems for Video Technology | 1997
Dimitrios Tzovaras; Nikos Grammalidis; Michael G. Strintzis
An object-based coding scheme is proposed for the coding of a stereoscopic image sequence using motion and disparity information. A hierarchical block-based motion estimation approach is used for initialization, while disparity estimation is performed using a pixel-based hierarchical dynamic programming algorithm. A split-and-merge segmentation procedure based on three-dimensional (3-D) motion modeling is then used to determine regions with similar motion parameters. The segmentation part of the algorithm is interleaved with the estimation part in order to optimize the coding performance of the procedure. Furthermore, a technique is examined for propagating the segmentation information over time. A 3-D motion-compensated prediction technique is used for both intensity and depth image sequence coding. Error images and depth maps are encoded using discrete cosine transform (DCT) and Huffman methods. Alternatively, an efficient wireframe depth modeling technique may be used to convey depth information to the receiver. Motion and wireframe model parameters are then quantized and transmitted to the decoder along with the segmentation information. As a straightforward application, the use of the depth map information for the generation of intermediate views at the receiver is also discussed. The performance of the proposed compression methods is evaluated experimentally and is compared to other stereoscopic image sequence coding schemes.
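The sketch below shows single-level block-based motion estimation by exhaustive SAD search, only to illustrate the kind of block matching that initialises such a scheme; the paper's estimator is hierarchical and is combined with dynamic-programming disparity estimation.

```python
# Sketch: full-search block matching with sum-of-absolute-differences (SAD).
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """Return per-block motion vectors minimising the SAD against the previous frame."""
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue                       # skip candidates outside the frame
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(target - cand).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

# Toy frames: the second frame is the first shifted by (2, 3) pixels.
rng = np.random.default_rng(4)
prev = rng.integers(0, 256, size=(32, 32), dtype=np.int32)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(block_motion(prev, curr)[1, 1])      # interior block, expected [-2 -3]
```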