
Publications


Featured research published by Theodore Andronikos.


International Parallel and Distributed Processing Symposium | 2006

Dynamic multi phase scheduling for heterogeneous clusters

Florina M. Ciorba; Theodore Andronikos; Ioannis Riakiotakis; Anthony T. Chronopoulos; George K. Papakonstantinou

Distributed computing systems are a viable and less expensive alternative to parallel computers. However, concurrent programming methods in distributed systems have not been studied as extensively as for parallel computers. Some of the main research issues are how to deal with scheduling and load balancing of such a system, which may consist of heterogeneous computers. In the past, a variety of dynamic scheduling schemes suitable for parallel loops (with independent iterations) on heterogeneous computer clusters have been obtained and studied. However, no study of dynamic schemes for loops with iteration dependencies has been reported so far. In this work we study the problem of scheduling loops with iteration dependencies for heterogeneous (dedicated and non-dedicated) clusters. The presence of iteration dependencies incurs an extra degree of difficulty and makes the development of such schemes quite a challenge. We extend three well known dynamic schemes (CSS, TSS and DTSS) by introducing synchronization points at certain intervals so that processors compute in pipelined fashion. Our scheme is called dynamic multi-phase scheduling (DMPS) and we apply it to loops with iteration dependencies. We implemented our new scheme on a network of heterogeneous computers and studied its performance. Through extensive testing on two real-life applications (the heat equation and the Floyd-Steinberg algorithm), we show that the proposed method is efficient for parallelizing nested loops with dependencies on heterogeneous systems.
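A minimal sketch of the flavor of such a scheme, not the authors' implementation: chunk sizes are produced by the standard TSS rule (first chunk N/2P, decreasing linearly to 1), and a hypothetical helper marks the points along the dependent dimension at which a processor would exchange boundary data with its successor, enabling pipelined execution.

```python
import math

def tss_chunks(n_iterations, n_procs):
    """Trapezoid Self-Scheduling: chunk sizes decrease linearly from
    N/(2P) down to 1 (the standard TSS rule)."""
    first, last = max(1, n_iterations // (2 * n_procs)), 1
    n_chunks = math.ceil(2 * n_iterations / (first + last))
    decr = (first - last) / max(1, n_chunks - 1)
    chunks, size, remaining = [], float(first), n_iterations
    while remaining > 0:
        c = min(remaining, max(last, round(size)))
        chunks.append(c)
        remaining -= c
        size -= decr
    return chunks

def sync_points(dep_dim_len, interval):
    """Hypothetical helper: indices along the dependent dimension at which
    a processor exchanges boundary data with the processor that owns the
    next chunk, so that chunks with cross-iteration dependencies can be
    executed in a pipelined fashion."""
    return list(range(interval, dep_dim_len + 1, interval))

print(tss_chunks(10_000, 8)[:5], sync_points(1_000, 200))
```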


Journal of Parallel and Distributed Computing | 1999

Optimal Scheduling for UET/UET-UCT Generalized n-Dimensional Grid Task Graphs

Theodore Andronikos; Nectarios Koziris; George K. Papakonstantinou; Panayotis Tsanakas

The n-dimensional grid is one of the most representative patterns of data flow in parallel computation. Many scientific algorithms, which require nearest neighbor communication in a lattice space, are modeled by a task graph with the properties of a simple or enhanced grid. The two most frequently used scheduling models for grids are the unit execution time-zero communication delay (UET) and the unit execution time-unit communication time (UET-UCT). In this paper we introduce an enhanced model of the n-dimensional grid by adding extra diagonal edges and allowing unequal boundaries for each dimension. For this generalized grid topology we establish the optimal makespan for both cases of UET/UET-UCT grids. Then we give a closed formula that calculates the minimum number of processors required to achieve the optimal makespan. Finally, we propose a low-complexity optimal time and processor scheduling strategy for both cases.
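The closed formulas derived in the paper are not reproduced here. Purely as an illustration of the underlying model, the sketch below computes the length of the longest dependence path, which lower-bounds the UET makespan, in a small generalized grid with optional diagonal edges; the grid sizes are arbitrary example values.

```python
from itertools import product

def uet_critical_path(dims, diagonals=True):
    """Length (in tasks) of the longest dependence path of an n-dimensional
    grid task graph under unit execution time.  Each node depends on its
    predecessor along every axis and, when `diagonals` is set, also on the
    full diagonal predecessor.  Lexicographic order is a valid processing
    order because every edge points toward larger coordinates."""
    longest = {}
    for node in product(*(range(d) for d in dims)):
        preds = []
        for axis in range(len(dims)):
            if node[axis] > 0:
                p = list(node); p[axis] -= 1
                preds.append(tuple(p))
        if diagonals and all(c > 0 for c in node):
            preds.append(tuple(c - 1 for c in node))
        longest[node] = 1 + max((longest[p] for p in preds), default=0)
    return max(longest.values())

# example 3-dimensional grid with unequal boundaries
print(uet_critical_path((4, 3, 2)), uet_critical_path((4, 3, 2), diagonals=False))
```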


Parallel Algorithms and Applications | 1997

Lower Time and Processor Bounds for Efficient Mapping of Uniform Dependence Algorithms into Systolic Arrays

Theodore Andronikos; Nectarios Koziris; Zacharias Tsiatsoulis; George K. Papakonstantinou; Panayotis Tsanakas

One of the most promising research areas is the automatic parallelization of sequential algorithms, where the primary objective is the execution of the algorithm in optimal parallel time. For this purpose, methods of detecting and exploiting all inherent parallelism must be devised. Once optimal execution time is ensured, other prerequisites, e.g., the minimization of the number of processing elements (in the case of systolic arrays) or the minimization of the communication overhead (in the case of distributed memory architectures), should be accomplished too. In this paper we study the automatic parallelization of DO(FOR)-loops; we propose an algorithm that partitions the index space into distinct dependence chains and assigns them to different processing elements. We estimate that our method is always optimal in time and, for a specific subclass of nested DO(FOR)-loops, is also optimal in the number of systolic cells.
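As a rough illustration of the chain-partitioning idea (the paper's actual algorithm and its optimality argument are not reproduced), the sketch below splits a 2D index space into the chains induced by a single uniform dependence vector; each chain could then be mapped to its own processing element. The dependence vector (1, 1) is an arbitrary example.

```python
def dependence_chains(n1, n2, dep=(1, 1)):
    """Partition the index space {0..n1-1} x {0..n2-1} into chains: two
    points are in the same chain iff one is reachable from the other by
    repeatedly adding the uniform dependence vector `dep`."""
    chains = []
    for i in range(n1):
        for j in range(n2):
            # a chain starts at a point whose predecessor lies outside the space
            if 0 <= i - dep[0] < n1 and 0 <= j - dep[1] < n2:
                continue
            chain, x, y = [], i, j
            while 0 <= x < n1 and 0 <= y < n2:
                chain.append((x, y))
                x, y = x + dep[0], y + dep[1]
            chains.append(chain)
    return chains

# a 4x3 index space with dependence (1, 1) splits into 6 independent diagonals
for chain in dependence_chains(4, 3):
    print(chain)
```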


Journal of Parallel and Distributed Computing | 2008

Enhancing self-scheduling algorithms via synchronization and weighting

Florina M. Ciorba; Ioannis Riakiotakis; Theodore Andronikos; George K. Papakonstantinou; Anthony T. Chronopoulos

Existing dynamic self-scheduling algorithms, used to schedule independent tasks on heterogeneous clusters, cannot handle tasks with dependencies because they lack the support for internode communication. To compensate for this deficiency we introduce a synchronization mechanism that provides inter-processor communication, thus enabling self-scheduling algorithms to efficiently handle nested loops with dependencies. We also present a weighting mechanism that significantly improves the performance of dynamic self-scheduling algorithms. These algorithms divide the total number of tasks into chunks and assign them to processors. The weighting mechanism adapts the chunk sizes to the computing power and current run-queue state of the processors. The synchronization and weighting mechanisms are orthogonal, in the sense that they can be applied simultaneously to loops with dependencies. Thus, they broaden the application spectrum of dynamic self-scheduling algorithms and improve their performance. Extensive testing confirms the efficiency of the synchronization and weighting mechanisms and the significant improvement of the synchronized-weighted versions of the algorithms over the synchronized-only versions.
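A hedged sketch of what a weighting rule of this kind could look like; the exact formula used by the authors may differ. Each worker's chunk is the base chunk of the underlying self-scheduling scheme scaled by its available power, taken here as virtual power divided by run-queue length and normalized by the cluster average (an assumption for illustration).

```python
def weighted_chunks(base_chunk, workers):
    """workers: list of (virtual_power, run_queue_length) pairs.  The chunk
    each worker receives is the scheme's base chunk scaled by the worker's
    available power (virtual power / run-queue length) relative to the
    cluster average (an assumed normalization, for illustration only)."""
    available = [vp / max(1, q) for vp, q in workers]
    mean = sum(available) / len(available)
    return [max(1, round(base_chunk * a / mean)) for a in available]

# a fast idle node gets a larger chunk than an equally fast but loaded one
print(weighted_chunks(100, [(2.0, 1), (1.0, 1), (2.0, 4)]))
```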


International Conference on Cluster Computing | 2006

Self-Adapting Scheduling for Tasks with Dependencies in Stochastic Environments

Ioannis Riakiotakis; Florina M. Ciorba; Theodore Andronikos; George K. Papakonstantinou

This paper addresses dynamic load balancing algorithms for non-dedicated heterogeneous clusters of workstations. We propose an algorithm called self-adapting scheduling (SAS), targeted at nested loops with dependencies in a stochastic environment. This means that the load entering the system, not belonging to the parallel application under execution, follows an unpredictable pattern which can be modeled by a stochastic process. SAS takes into account the history of previous timing results and the load patterns in order to make accurate load balancing predictions. We study the performance of SAS in comparison with DTSS, which we established in previous work to be the most efficient self-scheduling algorithm for loops with dependencies on heterogeneous clusters. We test our algorithm under the assumption that the interarrival times and lifetimes of incoming jobs are exponentially distributed. The experimental results show that SAS significantly outperforms DTSS, especially with rapidly varying loads.
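The SAS predictor itself is not reproduced; as a stand-in, the sketch below shows one simple way to turn a history of timing results into a throughput estimate (an exponentially weighted moving average) and how exponentially distributed interarrival times and lifetimes, as in the experiments, can be drawn. The smoothing factor and the means are arbitrary example values.

```python
import random

def ewma_rate(samples, alpha=0.5):
    """Exponentially weighted moving average of observed iteration rates
    (iterations completed / seconds taken); a self-adapting scheduler could
    size its next chunk in proportion to this estimate."""
    est = None
    for iters, secs in samples:
        rate = iters / secs
        est = rate if est is None else alpha * rate + (1 - alpha) * est
    return est

# background load with exponentially distributed interarrival times and
# lifetimes, as assumed in the experiments (means are example values)
interarrivals = [random.expovariate(1 / 30.0) for _ in range(5)]
lifetimes = [random.expovariate(1 / 60.0) for _ in range(5)]
print(ewma_rate([(500, 2.1), (500, 2.5), (500, 4.0)]))
```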


Euromicro Workshop on Parallel and Distributed Processing | 2000

Optimal scheduling for UET-UCT grids into fixed number of processors

Theodore Andronikos; Nectarios Koziris

The n-dimensional grid is one of the most representative patterns of data flow in parallel computation. Many scientific algorithms, which require nearest neighbor communication in a lattice space, are modeled by a task graph with the properties of a simple or enhanced grid. In this paper we consider an enhanced model of the n-dimensional grid by adding extra diagonal edges and allowing unequal boundaries for each dimension. First, we calculate the optimal makespan for the generalized UET-UCT (Unit Execution Time-Unit Communication Time) grid topology and then we establish the minimum number of processors required to achieve the optimal makespan. We present the optimal time schedule, using unbounded and bounded numbers of processors, without allowing task duplication. This paper proves that UET-UCT scheduling of generalized n-dimensional grids onto a fixed number of processors is low-complexity tractable.
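Not the optimal UET-UCT schedule of the paper: as a simple illustration of the bounded-processor setting, the sketch below performs a naive layer-by-layer UET schedule of a grid task graph onto a fixed number of processors, ignoring communication costs.

```python
import math
from itertools import product

def layered_uet_schedule(dims, n_procs):
    """Greedy UET schedule of an n-dimensional grid onto a fixed number of
    processors: tasks are grouped into layers by their longest-path depth,
    and a layer of k tasks then takes ceil(k / n_procs) time steps.
    Communication delays are ignored, so this only illustrates the
    bounded-processor setting, not the UET-UCT result of the paper."""
    depth = {}
    for node in product(*(range(d) for d in dims)):
        preds = []
        for axis in range(len(dims)):
            if node[axis] > 0:
                p = list(node); p[axis] -= 1
                preds.append(tuple(p))
        depth[node] = 1 + max((depth[p] for p in preds), default=0)
    layers = {}
    for node, d in depth.items():
        layers.setdefault(d, []).append(node)
    return sum(math.ceil(len(layer) / n_procs) for layer in layers.values())

print(layered_uet_schedule((4, 4), n_procs=2))   # 10 steps for this example
```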


Panhellenic Conference on Informatics | 2009

Adding Temporal Dimension to Ontologies via OWL Reification

Theodore Andronikos; Michalis Stefanidakis; Ioannis Papadakis

It has been pointed out recently by many researchers that it is difficult for languages such as OWL to incorporate time in their relations. As a result, it is difficult to reason about the temporal ordering of events or about the temporal properties of relations that vary as a function of time. A number of methods have been proposed in order to circumvent this difficulty. One technique that can be used for this purpose is reification. The advantage of this approach is that it does not require the introduction of additional constructs to the OWL language. In this work we investigate the types of queries that are possible with this mechanism and we conclude that it is in fact possible to express many interesting and useful queries. Moreover, due to the use of standard OWL, existing reasoning tools are capable of handling these queries.
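A minimal rdflib sketch of the standard RDF reification vocabulary used to attach a validity interval to a relation, in the spirit described above. The example individuals and the validFrom/validUntil properties are hypothetical; the paper may use a different modelling pattern.

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/ont#")
g = Graph()

# The base (atemporal) assertion: Alice worksFor ACME.
g.add((EX.Alice, EX.worksFor, EX.ACME))

# Reify the statement so that temporal properties can be attached to it.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, EX.Alice))
g.add((stmt, RDF.predicate, EX.worksFor))
g.add((stmt, RDF.object, EX.ACME))
g.add((stmt, EX.validFrom, Literal("2007-01-01", datatype=XSD.date)))
g.add((stmt, EX.validUntil, Literal("2009-12-31", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```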


International Parallel Processing Symposium | 1997

Optimal scheduling for UET-UCT generalized n-dimensional grid task graphs

Theodore Andronikos; Nectarios Koziris; George K. Papakonstantinou; Panayotis Tsanakas

The n-dimensional grid is one of the most representative patterns of data flow in parallel computation. The most frequently used scheduling model for grids is the unit execution time-unit communication time (UET-UCT) model. We enhance the model of the n-dimensional grid by adding extra diagonal edges. First, we calculate the optimal makespan for the generalized UET-UCT grid topology and then we establish the minimum number of processors required to achieve the optimal makespan. Furthermore, we solve the scheduling problem for generalized n-dimensional grids by proposing an optimal time and space scheduling strategy. We thus prove that UET-UCT scheduling of generalized n-dimensional grids is low-complexity tractable.
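As a tiny illustration of the UET-UCT cost model itself (not of the paper's schedule), the sketch below compares the finish time of a chain of unit tasks when consecutive tasks share a processor, so no communication delay is paid, with the finish time when every edge crosses processors.

```python
def chain_finish_time(k, same_processor, exec_time=1, comm=1):
    """Finish time of a chain of k unit-time tasks under UET-UCT: the unit
    communication delay on an edge is paid only when its two endpoints are
    mapped to different processors."""
    per_step = exec_time + (0 if same_processor else comm)
    return exec_time + (k - 1) * per_step

print(chain_finish_time(5, same_processor=True))    # 5: no delays paid
print(chain_finish_time(5, same_processor=False))   # 9: a delay on every edge
```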


Parallel Computing | 2011

Distributed dynamic load balancing for pipelined computations on heterogeneous systems

Ioannis Riakiotakis; Florina M. Ciorba; Theodore Andronikos; George K. Papakonstantinou

One of the most significant causes for performance degradation of scientific and engineering applications on high performance computing systems is the uneven distribution of the computational work to the resources of the system. This effect, which is known as load imbalance, is even more noticeable in the case of irregular applications and heterogeneous distributed systems. This motivated the parallel and distributed computing research community to focus on methods that provide good load balancing for scientific and engineering applications running on (heterogeneous) distributed systems. Efficient load balancing and scheduling methods are employed for scientific applications from various fields, such as mechanics, materials, physics, chemistry, biology, applied mathematics, etc. Such applications typically employ a large number of computational methods in order to simulate complex phenomena, on very large scales of time and magnitude. These simulations consist of routines that perform repetitive computations (in the form of DO/FOR loops) over very large data sets, which, if not properly implemented and executed, may suffer from poor performance. The number of repetitive computations in the simulation codes is not always constant. Moreover, the computational nature of these simulations may be in fact irregular, leading to the case when one computation takes (unpredictably) more time than others. For successful and timely results, large scale simulations require the use of large scale computing systems, which often are widely distributed and highly heterogeneous. Moreover, large scale computing systems are usually shared among multiple users, which causes the quality and quantity of the available resources to be highly unpredictable.

There are numerous load balancing methods in the literature for different parallel architectures. The most recent of these methods typically follow the master-worker paradigm, where a single coordinator (master) is responsible for making all the scheduling decisions based on information provided by the workers. Depending on the application requirements, the scheduling policy and the computational environment, the benefits of this paradigm may be limited as follows: (1) its efficiency may not scale as the number of processors increases, and (2) it is quite probable that the scheduling decisions are made based on outdated information, especially on systems where the workload changes rapidly.

In an effort to address these limitations, we propose a distributed (master-less) load balancing scheme, in which the scheduling decisions are made by the workers in a distributed fashion. We implemented this method along with two other master-worker schemes (a previously existing one and a recently modified one) for three different scientific computational kernels. In order to validate the usefulness and efficiency of the proposed scheme, we conducted a series of comparative performance tests with the two master-worker schemes for each computational kernel. The target system is an SMP cluster, on which we simulated three different patterns of system load fluctuation. The experiments strongly support the belief that the distributed approach offers greater performance and better scalability on such systems, showing an overall improvement ranging from 13% to 24% over the master-worker approaches.
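A toy shared-memory stand-in for the master-less idea, not the authors' distributed implementation: each worker claims its next chunk by atomically decrementing a shared counter, so no central master participates in the scheduling decision. The chunk size and iteration count are arbitrary example values.

```python
import threading

remaining = 10_000                  # iterations still to be scheduled
lock = threading.Lock()
executed = []

def worker(worker_id, chunk):
    """Master-less self-scheduling: each worker obtains its next chunk by
    atomically decrementing a shared counter, so no central master is
    involved in the scheduling decision (a shared-memory toy stand-in for
    the distributed scheme)."""
    global remaining
    while True:
        with lock:
            if remaining == 0:
                return
            take = min(chunk, remaining)
            remaining -= take
        executed.append((worker_id, take))   # the chunk would be computed here

threads = [threading.Thread(target=worker, args=(i, 512)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(executed), "chunks executed without a master")
```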


Computation | 2015

Dominant Strategies of Quantum Games on Quantum Periodic Automata

Konstantinos Giannakis; Christos Papalitsas; Kalliopi Kastampolidou; Alexandros Singh; Theodore Andronikos

Game theory and its quantum extension apply in numerous fields that affect people's social, political, and economic life. Physical limits imposed by the current technology used in computing architectures (e.g., circuit size) give rise to the need for novel mechanisms, such as quantum-inspired computation. Elements from quantum computation and mechanics combined with game-theoretic aspects of computing could open new pathways towards the future technological era. This paper associates dominant strategies of repeated quantum games with quantum automata that recognize infinite periodic inputs. As a reference, we used the PQ-PENNY quantum game, where the quantum strategy outplays the choice of a pure or mixed strategy with probability 1, and therefore the associated quantum automaton accepts with probability 1. We also propose a novel game played on the evolution of an automaton, where players' actions and strategies are also associated with periodic quantum automata.
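A short numeric check of the PQ penny-flip result referenced above: the quantum player's Hadamard strategy leaves the coin in the initial state with probability 1 whether or not the classical player flips it.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the quantum player's move
X = np.array([[0, 1], [1, 0]])                 # the classical flip
ID = np.eye(2)                                 # "do not flip"

heads = np.array([1.0, 0.0])                   # coin starts at heads, i.e. |0>
for classical_move in (ID, X):
    state = H @ classical_move @ H @ heads     # Q plays H, P plays, Q plays H
    print("probability of heads:", round(abs(state[0]) ** 2, 6))   # 1.0 both times
```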

Collaboration


Dive into Theodore Andronikos's collaborations.

Top Co-Authors

George K. Papakonstantinou
National Technical University of Athens

Nectarios Koziris
National Technical University of Athens

Florina M. Ciorba
National Technical University of Athens

Panayotis Tsanakas
National Technical University of Athens

Ioannis Riakiotakis
National Technical University of Athens

Ioannis Drositis
National and Kapodistrian University of Athens

Anthony T. Chronopoulos
University of Texas at San Antonio

Andrew Koulouris
National Technical University of Athens