Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Grigorios Tsagkatakis is active.

Publication


Featured research published by Grigorios Tsagkatakis.


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Online Distance Metric Learning for Object Tracking

Grigorios Tsagkatakis; Andreas E. Savakis

Tracking an object without any prior information regarding its appearance is a challenging problem. Modern tracking algorithms treat tracking as a binary classification problem between the object class and the background class. The binary classifier can be learned offline, if a specific object model is available, or online, if there is no prior information about the object's appearance. In this paper, we propose the use of online distance metric learning in combination with nearest neighbor classification for object tracking. We assume that the previous appearances of the object and the background are clustered so that a nearest neighbor classifier can be used to distinguish between the new appearance of the object and the appearance of the background. To support the classification, we employ a distance metric learning (DML) algorithm that learns to separate the object from the background. We utilize the first few frames to build an initial model of the object and the background, and subsequently update the model at every frame during the course of tracking, so that changes in the appearance of the object and the background are incorporated into the model. Furthermore, instead of using only the previous frame as the object's model, we utilize a collection of previous appearances encoded in a template library to estimate the similarity under variations in appearance. In addition to using the online DML algorithm to learn the object/background model, we propose a novel feature representation of image patches, based on the extraction of scale invariant features over a regular grid coupled with dimensionality reduction using random projections. This representation is both robust, capitalizing on the reproducibility of the scale invariant features, and fast, performing the tracking in a reduced dimensional space. The proposed tracking algorithm was tested under challenging conditions and achieved state-of-the-art performance.
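As a rough illustration of the feature pipeline described above (not the authors' implementation), the sketch below compresses hypothetical patch descriptors with a data-independent Gaussian random projection and labels a query patch by nearest neighbor against a small object/background template library; all dimensions, data, and names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each image patch is described by a 1280-D
# grid-of-descriptors vector, compressed to 64-D by a random projection.
D_HIGH, D_LOW = 1280, 64

# Data-independent Gaussian random projection matrix (fixed once).
R = rng.normal(0.0, 1.0 / np.sqrt(D_LOW), size=(D_LOW, D_HIGH))

def project(x):
    """Map a high-dimensional patch descriptor to the reduced space."""
    return R @ x

# Toy template library: a few stored appearances of the object and the
# background, already projected (labels: 1 = object, 0 = background).
templates = rng.normal(size=(6, D_HIGH))
labels = np.array([1, 1, 1, 0, 0, 0])
library = np.array([project(t) for t in templates])

def classify(patch):
    """Nearest-neighbor object/background decision in the projected space."""
    z = project(patch)
    dists = np.linalg.norm(library - z, axis=1)
    return labels[np.argmin(dists)]

# A patch close to the first object template should be labeled object.
query = templates[0] + 0.01 * rng.normal(size=D_HIGH)
print(classify(query))  # -> 1
```

A real tracker would also update the template library every frame and replace the fixed Euclidean distance with the learned metric; both are omitted here for brevity.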


International Conference on Computer Vision | 2011

Manifold based Sparse Representation for robust expression recognition without neutral subtraction

Raymond W. Ptucha; Grigorios Tsagkatakis; Andreas E. Savakis

This paper exploits the discriminative power of manifold learning in conjunction with the parsimony of sparse signal representation to perform robust facial expression recognition. By utilizing an ℓ1 reconstruction error and a statistical mixture model, both accuracy and tolerance to occlusion improve without the need to perform neutral frame subtraction. Initially, facial features are mapped onto a low dimensional manifold using supervised Locality Preserving Projections. Then an ℓ1 optimization is employed to relate surface projections to training exemplars, where reconstruction models on facial regions determine the expression class. Experiments follow the protocols of the recently published extended Cohn-Kanade and GEMEP-FERA datasets. Results demonstrate that posed datasets overemphasize the mouth region, while spontaneous datasets rely more on the upper cheek and eye regions. Despite these differences, the proposed method overcomes previous limitations of sparse methods for facial expression recognition and produces state-of-the-art results on both types of datasets.
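The ℓ1 classification step can be sketched with a generic sparse-representation classifier. This is not the paper's code: it uses a plain ISTA solver for the ℓ1 problem, a toy two-class dictionary of made-up "exemplars", and classifies by per-class reconstruction residual:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (hypothetical sizes): 2 expression classes, 5 training
# exemplars each, features already reduced to 20 dimensions.
d, per_class, n_classes = 20, 5, 2
A = np.column_stack([rng.normal(size=(d, per_class)) + 3.0 * c
                     for c in range(n_classes)])
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
atom_class = np.repeat(np.arange(n_classes), per_class)

def ista(A, y, lam=0.05, steps=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (a standard l1 solver)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(y):
    """Pick the class whose atoms best reconstruct y from the sparse code."""
    yn = y / np.linalg.norm(y)
    x = ista(A, yn)
    residuals = [np.linalg.norm(A[:, atom_class == c] @ x[atom_class == c] - yn)
                 for c in range(n_classes)]
    return int(np.argmin(residuals))
```

A query that lies in the span of the class-1 exemplars, e.g. `src_classify(A[:, atom_class == 1] @ np.ones(per_class))`, is assigned to class 1 because its class-1 residual is far smaller than the class-0 residual.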


European Conference on Computer Vision | 2010

Sparse representations and distance learning for attribute based category recognition

Grigorios Tsagkatakis; Andreas E. Savakis

While traditional approaches to object recognition require training examples from each class and class-specific classifiers, in real-world situations the sheer number of image classes makes this task daunting. A novel approach to object recognition is attribute-based classification, where instead of training classifiers to recognize specific object class instances, classifiers are trained on attributes of the object images, and these attributes are subsequently used for object recognition. The attribute-based paradigm offers significant advantages, including the ability to train classifiers without any visual examples. We begin by discussing a scenario for object recognition on mobile devices where the attribute prediction and the attribute-to-class mapping are decoupled in order to meet the resource constraints of mobile systems. We then present two extensions of the attribute-based classification paradigm, introducing alternative approaches to attribute prediction and attribute-to-class mapping. For attribute prediction, we employ the recently proposed Sparse Representations Classification scheme, which offers significant benefits compared to previous SVM-based approaches, such as increased accuracy and elimination of the training stage. For the attribute-to-class mapping, we employ a distance metric learning algorithm that automatically infers the significance of each attribute instead of assuming uniform attribute importance. The benefits of the proposed extensions are validated through experimental results.
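The attribute-to-class mapping with non-uniform attribute importance can be sketched as follows. The signatures, weights, and attribute names are invented for illustration; in the paper the weights would come from the distance metric learning step rather than being hand-picked:

```python
import numpy as np

# Hypothetical attribute signatures: each class is described by binary
# attributes (e.g. "has stripes", "is metallic"), not by visual examples.
class_signatures = np.array([
    [1, 0, 1, 0],   # class 0
    [0, 1, 1, 1],   # class 1
    [1, 1, 0, 0],   # class 2
], dtype=float)

# Instead of uniform importance, assume a metric-learning step produced
# per-attribute weights (chosen by hand here for illustration).
attr_weights = np.array([2.0, 1.0, 0.5, 1.0])

def attribute_to_class(predicted_attrs):
    """Map a vector of predicted attribute scores to the class whose
    signature is closest under the weighted distance."""
    diffs = class_signatures - predicted_attrs
    dists = np.sqrt((attr_weights * diffs ** 2).sum(axis=1))
    return int(np.argmin(dists))

# Noisy attribute predictions close to class 0's signature map to class 0.
print(attribute_to_class(np.array([0.9, 0.1, 0.8, 0.2])))  # -> 0
```

Decoupling matters on mobile devices because the signature table and weights are tiny compared to the attribute classifiers, so the mapping can run on-device even when attribute prediction is offloaded.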


Image and Signal Processing for Remote Sensing XXI | 2015

Deep learning for multi-label land cover classification

Konstantinos Karalas; Grigorios Tsagkatakis; Michalis Zervakis; Panagiotis Tsakalides

Whereas single-label classification has been a highly active topic in optical remote sensing, much less effort has been devoted to the multi-label framework, where pixels are associated with more than one label, an approach closer to reality than single-label classification. Given the complexity of this problem, identifying representative features extracted from raw images is of paramount importance. In this work, we investigate feature learning as a feature extraction process in order to identify the underlying explanatory patterns hidden in low-level satellite data for the purpose of multi-label classification. Sparse autoencoders composed of a single hidden layer, as well as stacked in a greedy layer-wise fashion, form the core of our approach. The results suggest that learning such sparse and abstract representations can aid both remote sensing and multi-label problems. The results presented in the paper correspond to a novel real dataset of annotated spectral imagery that naturally leads to the multi-label formulation.
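A single-hidden-layer autoencoder of the kind stacked in this approach can be sketched in a few lines. This is a simplified variant, not the paper's model: it uses a tanh hidden layer with an L1 penalty on the activations rather than the KL-divergence sparsity term usually paired with sigmoid units, and random data in place of satellite pixels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sparse autoencoder: sizes and hyperparameters are illustrative.
n, d, h = 200, 16, 8            # samples, input dim, hidden dim
X = rng.normal(size=(n, d))     # stand-in for low-level satellite features

W1 = rng.normal(0, 0.1, size=(d, h))
W2 = rng.normal(0, 0.1, size=(h, d))
lr, lam = 0.01, 1e-3            # learning rate, sparsity weight

def forward(X):
    H = np.tanh(X @ W1)         # hidden (to-be-sparse) representation
    return H, H @ W2            # reconstruction of the input

def loss(X):
    H, Xhat = forward(X)
    return ((Xhat - X) ** 2).mean() + lam * np.abs(H).mean()

before = loss(X)
for _ in range(300):            # plain gradient descent with manual backprop
    H, Xhat = forward(X)
    E = 2 * (Xhat - X) / X.size            # d(mse)/d(Xhat)
    gW2 = H.T @ E
    dH = E @ W2.T + lam * np.sign(H) / H.size
    gW1 = X.T @ (dH * (1 - H ** 2))        # tanh derivative
    W1 -= lr * gW1
    W2 -= lr * gW2
after = loss(X)
```

Stacking means training a second autoencoder on the hidden codes `H` of the first, then fine-tuning the whole stack; the hidden layer of the deepest autoencoder is what feeds the multi-label classifier.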


International Conference on Pattern Recognition | 2010

Manifold Modeling with Learned Distance in Random Projection Space for Face Recognition

Grigorios Tsagkatakis; Andreas E. Savakis

In this paper, we propose the combination of manifold learning and distance metric learning for the generation of a representation that is both discriminative and informative, and we demonstrate that this approach is effective for face recognition. Initial dimensionality reduction is achieved using random projections, a computationally efficient and data independent linear transformation. Distance metric learning is then applied to increase the separation between classes and improve the accuracy of nearest neighbor classification. Finally, a manifold learning method is used to generate a mapping between the randomly projected data and a low dimensional manifold. Face recognition results suggest that the combination of distance metric learning and manifold learning can increase performance. Furthermore, random projections can be applied as an initial step without significantly affecting the classification accuracy.
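A minimal illustration of why a learned metric improves nearest-neighbor separation, using whitening by the within-class covariance (an RCA-style stand-in for the paper's metric learning step; the data and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two classes that differ along the first feature only; the
# second feature is high-variance noise shared by both classes.
n = 100
X0 = np.column_stack([rng.normal(0.0, 0.3, n), rng.normal(0.0, 5.0, n)])
X1 = np.column_stack([rng.normal(2.0, 0.3, n), rng.normal(0.0, 5.0, n)])

# Simple metric learning: whiten by the within-class covariance so that
# noisy, class-irrelevant directions are downweighted.
Cw = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
M = np.linalg.inv(Cw)

def mahalanobis(a, b):
    diff = a - b
    return float(np.sqrt(diff @ M @ diff))

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

# Between-class separation relative to within-class spread is larger
# under the learned metric than under plain Euclidean distance.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
spread_e = np.mean([euclidean(x, m0) for x in X0])
spread_m = np.mean([mahalanobis(x, m0) for x in X0])
ratio_e = euclidean(m0, m1) / spread_e
ratio_m = mahalanobis(m0, m1) / spread_m
```

In the pipeline above, this metric step would sit between the random projection and the manifold mapping, which is exactly where the separation it buys helps nearest-neighbor classification most.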


International Midwest Symposium on Circuits and Systems | 2012

Low vision assistance using face detection and tracking on android smartphones

Andreas E. Savakis; Mark Stump; Grigorios Tsagkatakis; Roy W. Melton; Gary Behm; Gwen Sterns

This paper presents a low vision assistance system for individuals with blind spots in their visual field. The system identifies prominent faces in the field of view and redisplays them in regions that are visible to the user. As part of the system performance evaluation, we compare various algorithms for face detection and tracking on an Android smartphone, a netbook and a high-performance workstation representative of cloud computing. We examine processing time and energy consumption on all three platforms to determine the tradeoff between processing on a smartphone versus a cloud desktop after compression and transmission. Our results demonstrate that Viola-Jones face detection combined with Lucas-Kanade tracking achieves the best performance and efficiency.


International Conference on Image Processing | 2009

Random Projections for face detection under resource constraints

Grigorios Tsagkatakis; Andreas E. Savakis

Face detection is a key component in numerous computer vision applications. Most face detection algorithms achieve real-time performance through some form of dimensionality reduction of the input data, such as Principal Component Analysis. In this paper, we explore the emerging method of Random Projections (RP), a data-independent linear projection method, for dimensionality reduction in the context of face detection. The benefits of random projections include the computational efficiency obtained by implementing matrix multiplications with a small number of integer additions or subtractions. These savings are of great significance in resource-constrained environments, such as wireless video sensor networks. Experimental results suggest that RP can achieve performance comparable to that of traditional dimensionality reduction techniques for face detection using support vector machines.
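The "integer additions or subtractions" claim comes from sparse random projection matrices whose entries are drawn from {+1, 0, -1}; a sketch in the style of Achlioptas' construction (dimensions chosen arbitrarily for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Achlioptas-style sparse random projection: entries from {+1, 0, -1}
# with probabilities {1/6, 2/3, 1/6}, so the matrix-vector product
# reduces to a small number of additions and subtractions of inputs.
d_high, d_low = 1024, 256
R = rng.choice([1.0, 0.0, -1.0], p=[1/6, 2/3, 1/6], size=(d_low, d_high))
scale = np.sqrt(3.0 / d_low)        # preserves norms in expectation

def rp(x):
    return scale * (R @ x)          # per output: only +/- of input entries

# Pairwise distances between descriptors are approximately preserved,
# which is what downstream classifiers (e.g. SVMs) rely on.
x, y = rng.normal(size=d_high), rng.normal(size=d_high)
orig = np.linalg.norm(x - y)
proj = np.linalg.norm(rp(x) - rp(y))
```

Since about two thirds of `R` is zero, each projected coordinate touches roughly a third of the inputs and needs no multiplications at all (the single `scale` factor can be folded into the classifier), which is the appeal on sensor-node hardware.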


International Symposium on Visual Computing | 2010

Face recognition using sparse representations and manifold learning

Grigorios Tsagkatakis; Andreas E. Savakis

Manifold learning is a novel approach in non-linear dimensionality reduction that has shown great potential in numerous applications and has gained ground compared to linear techniques. In addition, sparse representations have been recently applied on computer vision problems with success, demonstrating promising results with respect to robustness in challenging scenarios. A key concept shared by both approaches is the notion of sparsity. In this paper we investigate how the framework of sparse representations can be applied in various stages of manifold learning. We explore the use of sparse representations in two major components of manifold learning: construction of the weight matrix and classification of test data. In addition, we investigate the benefits that are offered by introducing a weighting scheme on the sparse representations framework via the weighted LASSO algorithm. The underlying manifold learning approach is based on the recently proposed spectral regression framework that offers significant benefits compared to previously proposed manifold learning techniques. We present experimental results on these techniques in three challenging face recognition datasets.


International Conference on Image Processing | 2011

Manifold learning for simultaneous pose and facial expression recognition

Raymond W. Ptucha; Grigorios Tsagkatakis; Andreas E. Savakis

Research on facial expression recognition has steadily been moving from analysis of deliberate frontal expressions to analysis of unconstrained spontaneous expressions. This shift has spawned complex 3D models and computationally expensive geometric methods that prevent usage on resource-constrained platforms such as smartphones. This paper presents manifold learning techniques for accurate multi-view facial expression recognition on low-resolution 2D images. Our results indicate that mixed-class local pose and expression manifold methods perform better than global expression techniques and work just as well as fusing results from multiple manifolds.


International Conference on Pattern Recognition | 2010

Face detection in resource constrained wireless systems

Grigorios Tsagkatakis; Andreas E. Savakis

Face detection is one of the most popular areas of computer vision, partly due to its many applications, such as surveillance, human-computer interaction and biometrics. Recent developments in distributed wireless systems offer new embedded platforms for vision that are characterized by limitations in processing power, memory, bandwidth and available power. Migrating traditional face detection algorithms to this new environment requires taking these additional constraints into consideration. In this work, we investigate how image compression, a key processing step in many resource-constrained environments, affects the classification performance of face detection systems. Toward that end, we explore the effects of three well-known image compression techniques, namely JPEG, JPEG2000 and SPIHT, on face detection based on support vector machines and AdaBoost cascade classifiers (Viola-Jones). We also examine the effects of H.264/MPEG-4 AVC video compression on Viola-Jones face detection.

Collaboration


Dive into Grigorios Tsagkatakis's collaborations.

Top Co-Authors

Andreas E. Savakis (Rochester Institute of Technology)

Michalis Zervakis (Technical University of Crete)

Gary Behm (National Technical Institute for the Deaf)

Raymond W. Ptucha (Rochester Institute of Technology)

Mark Stump (Rochester Institute of Technology)

Roy W. Melton (Rochester Institute of Technology)