
Publication


Featured research published by Theodora A. Varvarigou.


IEEE Transactions on Fuzzy Systems | 2008

A Fuzzy Clustering Approach Toward Hidden Markov Random Field Models for Enhanced Spatially Constrained Image Segmentation

Sotirios P. Chatzis; Theodora A. Varvarigou

Hidden Markov random field (HMRF) models have been widely used for image segmentation, as they arise naturally in problems where a spatially constrained clustering scheme, taking into account the mutual influences of neighboring sites, is required. Fuzzy c-means (FCM) clustering has also been successfully applied in several image segmentation applications. In this paper, we combine the benefits of these two approaches by proposing a novel treatment of HMRF models, formulated on the basis of a fuzzy clustering principle. We approach the HMRF model treatment problem as an FCM-type clustering problem, effected by introducing the explicit assumptions of the HMRF model into the fuzzy clustering procedure. Our approach utilizes a fuzzy objective function regularized by Kullback-Leibler divergence information, and is facilitated by application of a mean-field-like approximation of the MRF prior. We experimentally demonstrate the superiority of the proposed approach over competing methodologies, considering a series of synthetic and real-world image segmentation applications.
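
As a rough illustration of the clustering principle described above, the sketch below implements the generic KL-regularized fuzzy membership update that this family of methods builds on; it is not code from the paper. The function name, the regularization weight lam, and the way the site-wise priors are produced (in the paper, via a mean-field approximation of the MRF prior) are assumptions, and the MRF computation itself is omitted.

import numpy as np

def kl_regularized_memberships(distances, priors, lam):
    # distances[i, j]: dissimilarity of pixel/site i to cluster j
    # priors[i, j]: prior membership of site i in cluster j (e.g. obtained
    # from a mean-field approximation of an MRF prior, computed elsewhere)
    # Minimizing sum(u * d) + lam * KL(u || priors) under sum_j u[i, j] = 1
    # gives memberships proportional to priors * exp(-distances / lam).
    u = priors * np.exp(-distances / lam)
    return u / u.sum(axis=1, keepdims=True)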


Future Generation Computer Systems | 2007

Efficient task replication and management for adaptive fault tolerance in mobile Grid environments

Antonios Litke; Dimitrios Skoutas; Konstantinos Tserpes; Theodora A. Varvarigou

Fault-tolerant Grid computing is of vital importance as the Grid and Mobile computing worlds converge to the Mobile Grid computing paradigm. We present an efficient scheme based on task replication, which utilizes the Weibull reliability function of the Grid resources to estimate the number of replicas that must be scheduled in order to guarantee a specific fault tolerance level for the Grid environment. The additional workload produced by the replication is handled by a resource management scheme which is based on the knapsack formulation and which aims to maximize the utilization and profit of the Grid infrastructure. The proposed model has been evaluated through simulation and has demonstrated its suitability for use in a middleware approach in future mobile Grid environments.
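
A minimal sketch of the replica-count estimation idea, assuming a two-parameter Weibull reliability function and independent replica failures; the function names, the parameter values, and the exact acceptance rule are illustrative assumptions rather than the paper's formulation.

import math

def weibull_reliability(t, scale, shape):
    # Probability that a resource is still operational after time t,
    # under a two-parameter Weibull model: R(t) = exp(-(t / scale)^shape).
    return math.exp(-((t / scale) ** shape))

def replicas_needed(task_duration, scale, shape, target_level):
    # Assuming independent replicas, at least one of n copies finishes with
    # probability 1 - (1 - R(t))^n; pick the smallest n meeting the target.
    r = weibull_reliability(task_duration, scale, shape)
    n = 1
    while 1.0 - (1.0 - r) ** n < target_level:
        n += 1
    return n

# e.g. a 2-hour task on resources with scale 10 h and shape 1.5, 99.9% target
print(replicas_needed(2.0, 10.0, 1.5, 0.999))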


IEEE Transactions on Parallel and Distributed Systems | 2007

Fair Scheduling Algorithms in Grids

Nikolaos D. Doulamis; Anastasios D. Doulamis; Emmanouel A. Varvarigos; Theodora A. Varvarigou

In this paper, we propose a new algorithm for fair scheduling, and we compare it to other scheduling schemes such as the earliest deadline first (EDF) and the first come first served (FCFS) schemes. Our algorithm uses a max-min fair sharing approach for providing fair access to users. When there is no shortage of resources, the algorithm assigns to each task enough computational power for it to finish within its deadline. When there is congestion, the main idea is to fairly reduce the CPU rates assigned to the tasks so that the share of resources that each user gets is proportional to the user's weight. The weight of a user may be defined as the user's contribution to the infrastructure, the price they are willing to pay for services, or any other socioeconomic consideration. In our algorithms, all tasks whose requirements are lower than their fair share CPU rate are served at their demanded CPU rates. However, the CPU rates of tasks whose requirements are larger than their fair share CPU rate are reduced to fit the total available computational capacity in a fair manner. Three different versions of fair scheduling are adopted in this paper: the simple fair task order (SFTO), which schedules the tasks according to their respective fair completion times; the adjusted fair task order (AFTO), which refines the SFTO policy by ordering the tasks using the adjusted fair completion time; and the max-min fair share (MMFS) scheduling policy, which simultaneously addresses the problem of finding a fair task order and assigning a processor to each task based on a max-min fair sharing policy. Experimental results and comparisons with traditional scheduling schemes such as the EDF and the FCFS are presented using three different error criteria. Validation of the simulations using real experiments of tasks generated from 3D image-rendering processes is also provided. The three proposed scheduling schemes can be integrated into existing grid computing architectures.
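
A compact sketch of weighted max-min fair sharing of a total CPU capacity, the core idea underlying the SFTO/AFTO/MMFS policies above; the data layout and function name are assumptions, and the task-ordering and processor-assignment parts of the algorithms are not shown.

def max_min_fair_rates(demands, weights, capacity):
    # Weighted max-min fair allocation of a total CPU capacity: tasks that
    # demand less than their fair share receive their demand, and the surplus
    # is redistributed among the remaining tasks in proportion to their weights.
    rates = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active:
        total_weight = sum(weights[i] for i in active)
        share = {i: remaining * weights[i] / total_weight for i in active}
        satisfied = [i for i in active if demands[i] <= share[i]]
        if not satisfied:
            for i in active:          # congestion: everyone gets their fair share
                rates[i] = share[i]
            break
        for i in satisfied:           # satisfied tasks free up capacity
            rates[i] = demands[i]
            remaining -= demands[i]
            active.remove(i)
    return rates

# three tasks demanding 2, 5 and 9 CPU units, weights 1, 1 and 2, capacity 12
print(max_min_fair_rates([2.0, 5.0, 9.0], [1.0, 1.0, 2.0], 12.0))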


Journal of Systems and Software | 2011

The effects of scheduling, workload type and consolidation scenarios on virtual machine performance and their prediction through optimized artificial neural networks

George Kousiouris; Tommaso Cucinotta; Theodora A. Varvarigou

The aim of this paper is to study and predict the effect of a number of critical parameters on the performance of virtual machines (VMs). These parameters include allocation percentages, real-time scheduling decisions and co-placement of VMs when these are deployed concurrently on the same physical node, as dictated by the server consolidation trend and the recent advances in Cloud computing systems. Different combinations of VM workload types are investigated in relation to the aforementioned factors in order to find the optimal allocation strategies. Moreover, different levels of memory sharing are applied, based on the coupling of VMs to cores on a multi-core architecture. For all the aforementioned cases, the effect on the score of specific benchmarks running inside the VMs is measured. Finally, a black-box method based on genetically optimized artificial neural networks is applied in order to investigate the ability to predict the degradation prior to execution, and it is compared to the linear regression method.
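
As a toy stand-in for the black-box predictor described above (not the authors' genetically optimized network), the sketch below runs a very small genetic search over the hidden-layer width of a scikit-learn MLP that maps placement and scheduling features to a degradation score; all names, ranges, and the fitness rule are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def evolve_mlp_width(X, y, generations=10, pop_size=8, seed=0):
    # Tiny genetic search over one hyperparameter (hidden-layer width) of an
    # MLP predicting a benchmark-degradation score from VM placement features.
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=seed)
    pop = rng.integers(2, 64, size=pop_size)
    for _ in range(generations):
        scores = []
        for width in pop:
            net = MLPRegressor(hidden_layer_sizes=(int(width),),
                               max_iter=2000, random_state=seed)
            scores.append(net.fit(X_tr, y_tr).score(X_val, y_val))
        order = np.argsort(scores)[::-1]            # best first
        parents = pop[order[:pop_size // 2]]        # selection
        children = np.clip(parents + rng.integers(-4, 5, size=parents.size),
                           2, 64)                   # mutation
        pop = np.concatenate([parents, children])
    return int(pop[0])                              # best width found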


Journal of Systems and Software | 2012

A Self-adaptive hierarchical monitoring mechanism for Clouds

Gregory Katsaros; George Kousiouris; Spyridon V. Gogouvitis; Dimosthenis Kyriazis; Andreas Menychtas; Theodora A. Varvarigou

While Cloud computing offers the potential to dramatically reduce the cost of software services through the commoditization of IT assets and on-demand usage patterns, one has to consider that Future Internet applications raise the need for environments that can facilitate real-time behavior and interactivity, and thus pose specific requirements to the underlying infrastructure. The latter should be able to efficiently adapt resource provisioning to the dynamic Quality of Service (QoS) demands of such applications. In this direction, we present in this paper a monitoring system that facilitates on-the-fly self-configuration in terms of both the monitoring time intervals and the monitoring parameters. The proposed approach forms a multi-layered monitoring framework for measuring QoS at both the application and infrastructure levels, targeting trigger events for runtime adaptability of resource provisioning estimation and decision making. In addition, we demonstrate the operation of the implemented mechanism and evaluate its effectiveness using a real-world application scenario, namely Film Post Production.
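
A minimal sketch of the self-configuration idea for the monitoring interval (not the paper's actual rule): shorten the sampling period when a monitored QoS metric is changing quickly and lengthen it when the metric is stable; the thresholds and bounds are illustrative assumptions.

def next_interval(current_s, prev_value, new_value,
                  change_threshold=0.10, min_s=5.0, max_s=300.0):
    # Relative change of the monitored metric since the last sample.
    change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
    if change > change_threshold:
        return max(min_s, current_s / 2.0)   # metric is volatile: sample faster
    return min(max_s, current_s * 2.0)       # metric is stable: back off

# e.g. CPU load jumped from 0.30 to 0.55 while sampling every 60 s
print(next_interval(60.0, 0.30, 0.55))       # -> 30.0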


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Robust Sequential Data Modeling Using an Outlier Tolerant Hidden Markov Model

Sotirios P. Chatzis; Dimitrios I. Kosmopoulos; Theodora A. Varvarigou

Hidden Markov (chain) models using finite Gaussian mixture models as their hidden state distributions have been successfully applied in sequential data modeling and classification applications. Nevertheless, Gaussian mixture models are well known to be highly intolerant to the presence of atypical data within the fitting data sets used for their estimation. Finite Student's t-mixture models have recently emerged as a heavier-tailed, robust alternative to Gaussian mixture models, overcoming these hurdles. To exploit these merits of Student's t-mixture models in the context of a sequential data modeling setting, we introduce, in this paper, a novel hidden Markov model where the hidden state distributions are considered to be finite mixtures of multivariate Student's t-densities. We derive an algorithm for estimation of the model parameters under a maximum likelihood framework, assuming full, diagonal, and factor-analyzed covariance matrices. The advantages of the proposed model over conventional approaches are experimentally demonstrated through a series of sequential data modeling applications.
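
The building block of such a model is the multivariate Student's t density used for the emission mixtures; a short sketch of its log-density is given below (the standard formula, not code from the paper).

import numpy as np
from scipy.special import gammaln

def multivariate_t_logpdf(x, mu, sigma, nu):
    # Log-density of a d-dimensional Student's t distribution with mean mu,
    # scale matrix sigma and nu degrees of freedom; small nu gives the heavy
    # tails that make the mixture tolerant to outliers.
    d = len(mu)
    diff = np.asarray(x, float) - np.asarray(mu, float)
    maha = diff @ np.linalg.solve(sigma, diff)
    _, logdet = np.linalg.slogdet(sigma)
    return (gammaln((nu + d) / 2.0) - gammaln(nu / 2.0)
            - 0.5 * (d * np.log(nu * np.pi) + logdet)
            - 0.5 * (nu + d) * np.log1p(maha / nu))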


Future Generation Computer Systems | 2008

An innovative workflow mapping mechanism for Grids in the frame of Quality of Service

Dimosthenis Kyriazis; Konstantinos Tserpes; Andreas Menychtas; Antonios Litke; Theodora A. Varvarigou

The advent of Grid environments has made feasible the solution of computationally intensive problems in a reliable and cost-effective way. As workflow systems carry out more complex and mission-critical applications, Quality of Service (QoS) analysis serves to ensure that each application meets user requirements. In that frame, we present a novel algorithm which allows the mapping of workflow processes to Grid provided services, assuring at the same time end-to-end provision of QoS based on user-defined parameters and preferences. We also demonstrate the operation of the implemented algorithm and evaluate its effectiveness using a Grid scenario based on a 3D image rendering application.
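
As a much simplified illustration of the mapping problem (an exhaustive search, not the paper's algorithm), the sketch below picks one candidate service per workflow process so that user-defined deadline and budget constraints hold while a quality score is maximized; the field names and the additive QoS model are assumptions.

from itertools import product

def map_workflow(candidates, deadline, budget):
    # candidates: one list per workflow process, each entry a dict with
    # 'service', 'time', 'cost' and 'quality' describing a Grid service offer.
    best_combo, best_quality = None, float('-inf')
    for combo in product(*candidates):
        total_time = sum(s['time'] for s in combo)
        total_cost = sum(s['cost'] for s in combo)
        if total_time <= deadline and total_cost <= budget:
            quality = sum(s['quality'] for s in combo)
            if quality > best_quality:
                best_combo, best_quality = combo, quality
    return best_combo  # None if no assignment meets the constraints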


Computer Communications | 2007

Adjusted fair scheduling and non-linear workload prediction for QoS guarantees in grid computing

Nikolaos D. Doulamis; Anastasios D. Doulamis; Antonios Litke; Athanasios Panagakis; Theodora A. Varvarigou; Emmanouel A. Varvarigos

In this paper, we propose an efficient non-linear task workload prediction mechanism incorporated with a fair scheduling algorithm for task allocation and resource management in Grid computing. Workload prediction is accomplished in a Grid middleware approach using a non-linear model expressed as a finite series of known functional components, using concepts of functional analysis. The coefficients of the functional components are obtained using a training set of appropriate samples, the pairs of which are estimated based on a runtime estimation model that relies on a least squares approximation scheme. The advantages of the proposed non-linear task workload prediction scheme are that (i) it is not constrained by analysis of source code (analytical methods), which is practically impossible to implement in complicated real-life applications, and (ii) it does not exploit the variations of the workload statistics as the statistical approaches do. The predicted task workload is then exploited by a novel scheduling algorithm, enabling fair Quality of Service oriented resource management so that some tasks are not favored over others. The algorithm is based on estimating the adjusted fair completion times of the tasks for task order selection and on an earliest completion time strategy for the grid resource assignment. Experimental results and comparisons with traditional scheduling approaches, as implemented in the framework of the European Union funded research projects GRIA and GRIDLAB grid infrastructures, show that the proposed method outperforms them.
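
A small sketch of the prediction step under the stated assumptions: the task workload is modeled as a weighted sum of known functional components of the task parameters, and the weights are fitted by least squares on training samples. The particular basis functions and names below are illustrative, not the paper's.

import numpy as np

def fit_workload_coefficients(task_params, workloads, basis):
    # Design matrix: one column per known functional component evaluated on
    # the task parameters; the coefficients come from a least squares fit.
    Phi = np.column_stack([f(task_params) for f in basis])
    coeffs, *_ = np.linalg.lstsq(Phi, workloads, rcond=None)
    return coeffs

def predict_workload(task_params, coeffs, basis):
    Phi = np.column_stack([f(task_params) for f in basis])
    return Phi @ coeffs

# e.g. a polynomial basis over a scalar task-size parameter
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x ** 2]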


Computers in Industry | 2001

Automated inspection of gaps on the automobile production line through stereo vision and specular reflection

Dimitrios I. Kosmopoulos; Theodora A. Varvarigou

One of the most difficult tasks in the later stages of automobile assembly is the dimensional inspection of the gaps between the car body and the various panels fitted on it (doors, motor-hood, etc.). The employment of an automatic gap-measuring system would reduce the costs significantly and would offer high flexibility. However, this task is still performed by humans, and only a few, still experimental, automatic systems have been reported. In this paper, we introduce a system for automated gap inspection that employs computer vision. It is capable of measuring the lateral and the range dimension of the gap (width and flush, respectively). The measurement installation consists of two calibrated stereo cameras and two infrared LED lamps, used for highlighting the edges of the gap through specular reflection. The gap is measured as the 3D distance between the highlighted edges. This method has significant advantages over laser-based gap-measuring systems, mainly due to its color independence. Our approach has been analytically described in 2D and extensively evaluated using synthetic as well as real gaps. The results obtained verify its robustness and its applicability in an industrial environment.
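
Once the two highlighted gap edges have been reconstructed in 3D by the stereo pair, splitting their separation into width and flush reduces to simple vector geometry; the sketch below illustrates that step under the assumption that the panel surface normal is available from the measurement setup (the function name and interface are illustrative).

import numpy as np

def gap_width_and_flush(edge_a, edge_b, surface_normal):
    # edge_a, edge_b: 3D points on the two highlighted gap edges.
    # Width is the in-plane (lateral) separation; flush is the out-of-plane
    # offset along the panel surface normal.
    n = np.asarray(surface_normal, float)
    n = n / np.linalg.norm(n)
    d = np.asarray(edge_b, float) - np.asarray(edge_a, float)
    flush = abs(d @ n)
    width = np.linalg.norm(d - (d @ n) * n)
    return width, flush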


IEEE Transactions on Signal Processing | 2008

Signal Modeling and Classification Using a Robust Latent Space Model Based on t Distributions

Sotirios P. Chatzis; Dimitrios I. Kosmopoulos; Theodora A. Varvarigou

Factor analysis is a statistical covariance modeling technique based on the assumption of normally distributed data. A mixture of factor analyzers can hence be viewed as a special case of Gaussian (normal) mixture models, providing a mathematically sound framework for attribute space dimensionality reduction. A significant shortcoming of mixtures of factor analyzers is the vulnerability of normal distributions to outliers. Recently, the replacement of normal distributions with the heavier-tailed Student's t-distributions has been proposed as a way to mitigate these shortcomings, and the treatment of the resulting model under an expectation-maximization (EM) algorithm framework has been conducted. In this paper, we develop a Bayesian approach to factor analysis modeling based on Student's t-distributions. We derive a tractable variational inference algorithm for this model by expressing the Student's t-distributed factor analyzers as a marginalization over additional latent variables. Our innovative approach provides an efficient and more robust alternative to EM-based methods, resolving their singularity and overfitting proneness problems, while allowing for the automatic determination of the optimal model size. We demonstrate the superiority of the proposed model over well-known covariance modeling techniques in a wide range of signal processing applications.
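
The latent-variable trick mentioned in the abstract, expressing a Student's t-distributed factor model as a Gaussian scaled by a Gamma-distributed latent precision, can be illustrated with a short generative sketch; the variational inference itself is not shown, and the parameter names are assumptions.

import numpy as np

def sample_t_factor_model(n, W, mu, psi, nu, seed=0):
    # Draw n samples from a Student's t factor model via its Gaussian scale
    # mixture form: u ~ Gamma(nu/2, rate=nu/2), then x | u ~ N(mu, (W W' + Psi) / u).
    rng = np.random.default_rng(seed)
    cov = W @ W.T + np.diag(psi)                 # factor loadings plus diagonal noise
    u = rng.gamma(nu / 2.0, 2.0 / nu, size=n)    # numpy's gamma takes shape, scale
    return np.array([rng.multivariate_normal(mu, cov / ui) for ui in u])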

Collaboration


Dive into Theodora A. Varvarigou's collaboration.

Top Co-Authors

Andreas Menychtas, National Technical University of Athens
George Kousiouris, National Technical University of Athens
Kleopatra Konstanteli, National Technical University of Athens
Athanasios Voulodimos, National Technical University of Athens
Spyridon V. Gogouvitis, National Technical University of Athens
Vassiliki Andronikou, National Technical University of Athens
Nikolaos D. Doulamis, Technical University of Crete