Publication


Featured research published by Gabriel Tsechpenakis.


Hawaii International Conference on System Sciences | 2005

Blob Analysis of the Head and Hands: A Method for Deception Detection

Shan Lu; Gabriel Tsechpenakis; Dimitris N. Metaxas; Matthew L. Jensen; John Kruse

Behavioral indicators of deception and behavioral state are extremely difficult for humans to analyze. Blob analysis, a method for analyzing the movement of the head and hands based on skin-color identification, is presented and validated across numerous skin tones. A proof-of-concept study then uses blob analysis to explore behavioral-state identification in the detection of deception.
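The core of skin-color blob analysis can be illustrated as a color threshold followed by a centroid computation. A minimal sketch; the color bounds below are illustrative assumptions, not the paper's calibrated skin-tone model:

```python
import numpy as np

def skin_blob_centroid(image, lower=(120, 60, 40), upper=(255, 180, 140)):
    """Return the centroid (row, col) of pixels inside an RGB skin-color range.

    `image` is an (H, W, 3) uint8 array; the bounds are illustrative only.
    """
    lo, hi = np.array(lower), np.array(upper)
    mask = np.all((image >= lo) & (image <= hi), axis=-1)  # (H, W) blob mask
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic frame: a 10x10 "skin" patch on a dark background.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:50, 60:70] = (200, 120, 90)  # within the assumed skin range
centroid = skin_blob_centroid(frame)
```

Tracking the blob centroid frame to frame yields the head/hand movement signals the method analyzes.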


IEEE Transactions on Neural Networks | 2010

The Infinite Hidden Markov Random Field Model

Sotirios P. Chatzis; Gabriel Tsechpenakis

Hidden Markov random field (HMRF) models are widely used for image segmentation, as they arise naturally in problems calling for a spatially constrained clustering scheme. A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of region clusters derived by the image segmentation procedure. Existing methods, including likelihood- or entropy-based criteria and reversible-jump Markov chain Monte Carlo methods, tend to yield noisy model-size estimates while imposing heavy computational requirements. Recently, Dirichlet process (DP, infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori; infinite mixture models based on the original DP, or on spatially constrained variants of it, have been applied to unsupervised image segmentation with promising results. Motivated by these advances, and to resolve the aforementioned issues of HMRF models, in this paper we introduce a nonparametric Bayesian formulation for the HMRF model, the infinite HMRF model, formulated on the basis of a joint Dirichlet process mixture (DPM) and Markov random field (MRF) construction. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally demonstrate its advantages over competing methodologies.


IEEE Transactions on Biomedical Engineering | 2008

A Novel Computational Approach for Simultaneous Tracking and Feature Extraction of C. elegans Populations in Fluid Environments

Gabriel Tsechpenakis; Laura Bianchi; Dimitris N. Metaxas; Monica Driscoll

The nematode Caenorhabditis elegans (C. elegans) is a genetic model widely used to dissect conserved basic biological mechanisms of development and nervous system function. C. elegans locomotion is under complex neuronal regulation and is impacted by genetic and environmental factors; thus, its analysis is expected to shed light on how genetic, environmental, and pathophysiological processes control behavior. To date, computer-based approaches have been used to analyze C. elegans locomotion; however, none of these is both high resolution and high throughput. We used computer vision methods to develop a novel automated approach for analyzing C. elegans locomotion. Our method provides information on position, trajectory, and body shape during locomotion and is designed to efficiently track multiple animals (C. elegans) in cluttered images and under lighting variations. We used this method to describe in detail, for the first time, C. elegans movement in liquid, and to analyze six unc-8, one mec-4, and one odr-1 mutants. We report features of nematode swimming not previously noted and show that our method detects differences in the swimming profiles of mutants that appear at first glance similar.
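A core step in tracking multiple animals is associating detections across frames. That association can be sketched as greedy nearest-neighbor matching of centroids; the paper's actual method is considerably richer (body-shape features, clutter and lighting handling), so treat this as a toy illustration:

```python
import numpy as np

def match_detections(prev, curr):
    """Greedily match each previous centroid to the nearest unused current one.

    prev: (N, 2), curr: (M, 2) arrays of (x, y) positions.
    Returns a list of (prev_index, curr_index) pairs.
    """
    pairs = []
    unused = list(range(len(curr)))
    for i, p in enumerate(prev):
        if not unused:
            break
        dists = [np.linalg.norm(p - curr[j]) for j in unused]
        j = unused.pop(int(np.argmin(dists)))
        pairs.append((i, j))
    return pairs

# Two animals whose detections arrive in swapped order in the next frame.
prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[10.5, 9.5], [0.5, 0.2]])
pairs = match_detections(prev, curr)
```

Chaining these per-frame matches yields each animal's trajectory, from which swimming features can be extracted.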


Hawaii International Conference on System Sciences | 2005

An Approach for Intent Identification by Building on Deception Detection

Judee K. Burgoon; Mark Adkins; John Kruse; Matthew L. Jensen; Thomas O. Meservy; Douglas P. Twitchell; Amit V. Deokar; Jay F. Nunamaker; Shan Lu; Gabriel Tsechpenakis; Dimitris N. Metaxas; Robert Younger

Past research in deception detection at the University of Arizona has guided the investigation of intent detection. A theoretical foundation and model for the analysis of intent detection is proposed. Available test beds for intent analysis are discussed and two proof-of-concept studies exploring nonverbal communication within the context of deception detection and intent analysis are shared.


IEEE Transactions on Intelligent Transportation Systems | 2009

Detecting Concealment of Intent in Transportation Screening: A Proof of Concept

Judee K. Burgoon; Douglas P. Twitchell; Matthew L. Jensen; Thomas O. Meservy; Mark Adkins; John Kruse; Amit V. Deokar; Gabriel Tsechpenakis; Shan Lu; Dimitris N. Metaxas; Jay F. Nunamaker Jr.; Robert Younger

Transportation and border security systems have a common goal: to allow law-abiding people to pass through security and detain those people who intend to harm. Understanding how intention is concealed and how it might be detected should help in attaining this goal. In this paper, we introduce a multidisciplinary theoretical model of intent concealment along with three verbal and nonverbal automated methods for detecting intent: message feature mining, speech act profiling, and kinesic analysis. This paper also reviews a program of empirical research supporting this model, including several previously published studies and the results of a proof-of-concept study. These studies support the model by showing that aspects of intent can be detected at a rate that is higher than chance. Finally, this paper discusses the implications of these findings in an airport-screening scenario.


Computer Vision and Pattern Recognition | 2007

CRF-driven Implicit Deformable Model

Gabriel Tsechpenakis; Dimitris N. Metaxas

We present a topology-independent solution for segmenting objects with texture patterns of any scale, using an implicit deformable model driven by conditional random fields (CRFs). Our model integrates region and edge information as image-driven terms, whereas the probabilistic shape and internal (smoothness) terms use representations similar to the level-set based methods. The evolution of the model is solved as a MAP estimation problem, where the target conditional probability is decomposed into the internal term and the image-driven term. For the latter, we use discriminative CRFs at two scales, pixel- and patch-based, to obtain smooth probability fields based on the corresponding image features. The advantages and novelties of our approach are (i) the integration of CRFs with implicit deformable models in a tightly coupled scheme, (ii) the use of CRFs, which avoids ambiguities in the probability fields, (iii) the handling of local feature variations by updating the model interior statistics and processing at different spatial scales, and (iv) the independence from topology. We demonstrate the performance of our method on a wide variety of images, from the zebra and cheetah examples to the left and right ventricles in cardiac images.
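The implicit (level-set) formulation keeps the contour as the zero level of a signed distance function, and a probability field drives its evolution. A minimal gradient-free sketch of one evolution step, not the paper's CRF-driven formulation (the probability field here is a made-up classifier output):

```python
import numpy as np

# Signed distance function of a circle: negative inside, positive outside.
h, w, r = 64, 64, 10
yy, xx = np.mgrid[0:h, 0:w]
dist = np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)
phi = dist - r

# Hypothetical foreground-probability field (here: a larger disk).
prob = (dist < 20).astype(float)

# One explicit step: lower phi (expand the contour) where prob > 0.5,
# raise it (shrink) where prob < 0.5.
dt = 1.0
phi_next = phi - dt * (prob - 0.5)

area_before = int((phi < 0).sum())       # pixels inside the old contour
area_after = int((phi_next < 0).sum())   # pixels inside the evolved contour
```

Because the contour is implicit, it can split or merge freely during such evolution, which is what makes the model topology-independent.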


International Conference on Computer Vision | 2007

Coupling CRFs and Deformable Models for 3D Medical Image Segmentation

Gabriel Tsechpenakis; Jianhua Wang; Brandon Mayer; Dimitris N. Metaxas

In this paper we present a hybrid probabilistic framework for 3D image segmentation, using Conditional Random Fields (CRFs) and implicit deformable models. Our 3D deformable model uses voxel intensity and higher scale textures as data-driven terms, while the shape is formulated implicitly using the Euclidean distance transform. The data-driven terms are used as observations in a 3D discriminative CRF, which drives the model evolution based on a simple graphical model. In this way, we solve the model evolution as a joint MAP estimation problem for the 3D label field of the CRF and the 3D shape of the deformable model. We demonstrate the performance of our approach in the estimation of the volume of the human tear menisci from images obtained with optical coherence tomography.


International Conference on Multimedia and Expo | 2005

HMM-Based Deception Recognition from Visual Cues

Gabriel Tsechpenakis; Dimitris N. Metaxas; Mark Adkins; John Kruse; Judee K. Burgoon; Matthew L. Jensen; Thomas O. Meservy; Douglas P. Twitchell; Amit V. Deokar; Jay F. Nunamaker

Behavioral indicators of deception and behavioral state are extremely difficult for humans to analyze. This research effort attempts to leverage automated systems to augment humans in detecting deception by analyzing nonverbal behavior on video. By tracking the face and hands of an individual, it is anticipated that objective behavioral indicators of deception can be isolated, extracted, and synthesized to create a more accurate means for detecting human deception. Blob analysis, a method for analyzing the movement of the head and hands based on skin-color identification, is presented. A proof-of-concept study uses blob analysis to extract visual cues and events throughout the examined videos. These cues are integrated using a hierarchical hidden Markov model to explore behavioral-state identification in the detection of deception, mainly involving the detection of agitated and over-controlled behaviors.
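The hierarchical HMM is beyond a short sketch, but the basic forward recursion underlying HMM state inference, scoring a sequence of observed visual cues under hidden behavioral states, looks as follows. All states, cue types, and probabilities are made up for illustration:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    pi: (S,) initial state probabilities; A: (S, S) transition matrix;
    B: (S, O) emission probabilities; obs: sequence of observation indices.
    """
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Two hypothetical hidden states (agitated, over-controlled) and two cue types.
pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
likelihood = forward_likelihood(pi, A, B, [0, 0, 1])
```

Comparing such likelihoods across candidate behavioral-state models is the standard way an HMM classifies an observed cue sequence.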


Computer Vision and Image Understanding | 2006

Learning-based dynamic coupling of discrete and continuous trackers

Gabriel Tsechpenakis; Dimitris N. Metaxas; Carol Neidle

We present a data-driven dynamic coupling between discrete and continuous methods for tracking objects with many degrees of freedom, which overcomes the limitations of previous techniques. In our approach, two trackers work in parallel, and the coupling between them is based on the tracking error. We use a model-based continuous method to achieve accurate results and, in cases of failure, we re-initialize the model using our discrete tracker. This method maintains the accuracy of a more tightly coupled system while increasing its efficiency. At any given frame, our discrete tracker uses the current and several previous frames to search a database for the best matching solution. For improved robustness, object configuration sequences, rather than single configurations, are stored in the database. We apply our framework to the problem of 3D hand tracking from image sequences and to the discrimination between fingerspelling and continuous signs in American Sign Language.
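The coupling logic reduces to: follow the continuous tracker, monitor its error, and when the error exceeds a threshold, re-initialize from the nearest stored configuration. A schematic one-dimensional version with made-up error values and a toy database:

```python
import numpy as np

def track_with_reinit(errors, states, database, threshold=0.5):
    """Follow continuous-tracker states; on high error, snap to the nearest
    stored configuration (the discrete tracker's lookup). Toy 1D values.
    """
    out = []
    for err, s in zip(errors, states):
        if err > threshold:
            # Discrete tracker: nearest-neighbor search over stored configurations.
            s = database[np.argmin(np.abs(database - s))]
        out.append(float(s))
    return out

errors = [0.1, 0.2, 0.9, 0.1]         # per-frame tracking error
states = [1.0, 1.1, 5.0, 1.3]         # continuous estimates (frame 3 drifted)
database = np.array([1.0, 1.2, 1.4])  # stored valid configurations
trajectory = track_with_reinit(errors, states, database)
```

Only the high-error frame is corrected, so the continuous tracker's accuracy is preserved everywhere else, which is the efficiency argument made above.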


IEEE Transactions on Image Processing | 2009

CoCRF Deformable Model: A Geometric Model Driven by Collaborative Conditional Random Fields

Gabriel Tsechpenakis; Dimitris N. Metaxas

We present a hybrid framework for integrating deformable models with learning-based classification, for image segmentation with region ambiguities. We show how a region-based geometric model is coupled with conditional random fields (CRF) in a simple graphical model, such that the model evolution is driven by a dynamically updated probability field. We define the model shape with the signed distance function, while we formulate the internal energy with a C1 continuity constraint, a shape prior, and a term that forces the zero level of the shape function towards a connected form. The latter can be seen as a term that forces different closed curves on the image plane to merge, and, therefore, our model inherently carries the property of merging regions. We calculate the image likelihood that drives the evolution using a collaborative formulation of conditional random fields (CoCRF), which is updated during the evolution in an online learning manner. The CoCRF infers class posteriors to regions with feature ambiguities by assessing the joint appearance of neighboring sites, and using the classification confidence to regulate the inference. The novelties of our approach are (i) the tight coupling of deformable models with classification, combining the estimation of smooth region boundaries with the robustness of the probabilistic region classification, (ii) the handling of feature variations, by updating the region statistics in an online learning manner, and (iii) the improvement of the region classification using our CoCRF. We demonstrate the performance of our method in a variety of images with clutter, region inhomogeneities, boundary ambiguities, and complex textures, from the zebra and cheetah examples to medical images.
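The "collaborative" inference idea, letting confident neighboring sites disambiguate a low-confidence site, can be caricatured as confidence-weighted averaging of per-site class posteriors. A hypothetical 1D sketch, not the paper's CoCRF formulation:

```python
import numpy as np

def collaborative_smooth(post):
    """Replace each site's posterior with the confidence-weighted mean of its
    1D neighborhood; confidence = distance of the posterior from 0.5.
    post: (N,) foreground posteriors in [0, 1].
    """
    conf = np.abs(post - 0.5)  # ambiguous sites contribute little weight
    out = post.copy()
    for i in range(len(post)):
        lo, hi = max(0, i - 1), min(len(post), i + 2)
        w = conf[lo:hi] + 1e-6
        out[i] = np.sum(w * post[lo:hi]) / w.sum()
    return out

post = np.array([0.9, 0.5, 0.9])  # the middle site is ambiguous
smoothed = collaborative_smooth(post)
```

The ambiguous middle site inherits its confident neighbors' foreground evidence, illustrating how neighborhood confidence can regulate class inference under feature ambiguity.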

Collaboration


Dive into Gabriel Tsechpenakis's collaboration network.

Top Co-Authors

Stefanos D. Kollias

National Technical University of Athens


Jianhua Wang

Bascom Palmer Eye Institute
