
Publication


Featured research published by Chryssis Georgiou.


Principles of Distributed Computing | 2008

On the complexity of asynchronous gossip

Chryssis Georgiou; Seth Gilbert; Rachid Guerraoui; Dariusz R. Kowalski

In this paper, we study the complexity of gossip in an asynchronous, message-passing, fault-prone distributed system. In short, we show that an adaptive adversary can significantly hamper the spreading of a rumor, while an oblivious adversary cannot. The latter fact implies that message-efficient asynchronous (randomized) consensus protocols exist in the context of an oblivious adversary. In more detail, our results are as follows. If the adversary is adaptive, we show that a randomized asynchronous gossip algorithm cannot terminate in fewer than Ω(f(d + δ)) time steps unless Ω(n + f²) messages are exchanged, where n is the total number of processes, f is the number of tolerated crash failures, d is the maximum communication delay for the specific execution in question, and δ is the bound on relative process speeds in that execution. This lower bound is to be contrasted with deterministic synchronous gossip algorithms that, even against an adaptive adversary, require only O(polylog n) time steps and O(n polylog n) messages. In the case of an oblivious adversary, we present three different randomized, asynchronous algorithms that provide different trade-offs between time complexity and message complexity. The first algorithm is based on the epidemic paradigm and completes in O((n/(n−f)) log² n (d + δ)) time steps using O(n log³ n (d + δ)) messages, with high probability. The second algorithm relies on more rapid dissemination of the rumors, yielding a constant-time (w.r.t. n) gossip protocol: for every constant ε < 1 and for f ≤ n/2, there is a variant with time complexity O((1/ε)(d + δ)) and message complexity O((1/ε) n^{1+ε} log n (d + δ)). The third algorithm solves a weaker version of the gossip problem in which each process receives at least a majority of the rumors; it achieves constant O(d + δ) time complexity and message complexity O(n^{7/4} log² n).
As an application of these message-efficient gossip protocols, we present three randomized consensus protocols, obtained by combining each of our gossip protocols with the Canetti–Rabin framework. The resulting protocols have time and message complexity asymptotically equal to those of our gossip protocols. We particularly highlight the third consensus protocol, a result that is interesting in its own right: the first asynchronous randomized consensus algorithm with strictly subquadratic message complexity, i.e., O(n^{7/4} log² n).
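The epidemic paradigm behind the first oblivious-adversary algorithm can be illustrated with a toy simulation. This sketch is synchronous and failure-free (assumptions that sidestep the paper's asynchrony and crashes), and the function name `push_gossip_rounds` is illustrative, not from the paper:

```python
import random

def push_gossip_rounds(n, seed=0):
    """Simulate synchronous push gossip: each round, every process that
    already knows the rumor forwards it to one process chosen uniformly
    at random. Returns (rounds, messages) until all n processes know it."""
    rng = random.Random(seed)
    informed = {0}              # process 0 starts with the rumor
    rounds = messages = 0
    while len(informed) < n:
        rounds += 1
        messages += len(informed)          # one message per informed process
        informed |= {rng.randrange(n) for _ in range(len(informed))}
    return rounds, messages

rounds, msgs = push_gossip_rounds(1000)
```

For a fixed seed the run is deterministic; across seeds the round count concentrates around Θ(log n), which is what makes epidemic dissemination message-efficient.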


Journal of Parallel and Distributed Computing | 2009

Fault-tolerant semifast implementations of atomic read/write registers

Chryssis Georgiou; Nicolas C. Nicolaou; Alexander A. Shvartsman

This paper investigates time-efficient implementations of atomic read/write registers in message-passing systems where the number of readers can be unbounded. In particular, we study the case of a single writer, multiple readers, and S servers, such that the writer, any subset of the readers, and up to t servers may crash. A recent result of Dutta et al. [P. Dutta, R. Guerraoui, R.R. Levy, A. Chakraborty, How fast can a distributed atomic read be? In: Proceedings of the 23rd ACM Symposium on Principles of Distributed Computing, 2004, pp. 236-245] shows how to obtain fast implementations in which both reads and writes complete in one communication round-trip, under the constraint that the number of readers is less than S/t − 2.
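The basic single-writer register structure these protocols build on can be sketched with replicated servers and intersecting majorities. This is a minimal illustrative model (class and method names are hypothetical, and it shows only the one-round-trip quorum idea, not the paper's semifast protocol):

```python
class Server:
    """One replica: stores the latest (timestamp, value) pair it has seen."""
    def __init__(self):
        self.ts, self.val = 0, None

class SWMRRegister:
    """Toy single-writer multi-reader register over S server replicas,
    tolerating t server crashes (2t < S). A write stamps the value and
    stores it at all surviving servers; a read collects replies from the
    surviving servers and returns the value with the highest timestamp.
    Any two majorities intersect, so a read sees the latest write."""
    def __init__(self, S, t):
        assert 2 * t < S, "need a surviving majority of servers"
        self.servers = [Server() for _ in range(S)]
        self.ts = 0

    def write(self, v, crashed=()):
        self.ts += 1
        for i, s in enumerate(self.servers):
            if i not in crashed:
                s.ts, s.val = self.ts, v

    def read(self, crashed=()):
        replies = [(s.ts, s.val) for i, s in enumerate(self.servers)
                   if i not in crashed]
        return max(replies)[1]

reg = SWMRRegister(S=5, t=2)
reg.write("x", crashed={0, 1})       # write misses servers 0 and 1
value = reg.read(crashed={3, 4})     # read misses servers 3 and 4
```

Even though the write and the read each miss t = 2 servers, their reply sets intersect (here at server 2), so the read still returns "x".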


Symposium on Reliable Distributed Systems | 2006

Reliably Executing Tasks in the Presence of Untrusted Entities

Antonio Fernández; Luis Fernandez Lopez; Agustín Santos; Chryssis Georgiou

In this work we consider a distributed system formed by a master processor and a collection of n processors (workers) that can execute tasks; worker processors are untrusted and might act maliciously. The master assigns tasks to the workers for execution. Each task returns a binary value, and we want the master to accept only correct values, with high probability. Furthermore, we assume that the service provided by the workers is not free: for each task a worker is assigned, the master is charged one work-unit. Therefore, considering a single task assigned to several workers, our goal is for the master to accept the correct value of the task with high probability, with the smallest possible amount of work (the number of workers to which the master assigns the task). We explore two ways of bounding the number of faulty processors: (a) a fixed bound f < n/2 on the maximum number of workers that may fail, and (b) a probability p < 1/2 of each processor being faulty (each processor is faulty with probability p, independently of the rest). Our work demonstrates that it is possible to obtain a high probability of correct acceptance with low work. In particular, considering both mechanisms of bounding the number of malicious workers, we first show lower bounds on the minimum amount of (expected) work required so that any algorithm accepts the correct value with probability of success 1 − ε, where ε ≪ 1 (e.g., 1/n). We then develop and analyze two algorithms, each using a different decision strategy, and show that both obtain the same probability of success 1 − ε and, in doing so, require similar upper bounds on the (expected) work. Furthermore, under certain conditions, these upper bounds are asymptotically optimal with respect to our lower bounds.
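One natural decision strategy in this setting is plain majority voting under fault model (b), which can be sized with a quick Monte Carlo estimate. The sketch below is an illustration of the general work-vs-reliability trade-off, not a reconstruction of the paper's algorithms, and `majority_accept` is a hypothetical helper:

```python
import random

def majority_accept(m, p, trials=4000, seed=1):
    """Estimate the probability that the master accepts the correct
    binary value when it assigns the task to m workers (m odd), each of
    which is faulty (returns the wrong bit) independently with
    probability p < 1/2, and the master decides by majority vote."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        wrong = sum(rng.random() < p for _ in range(m))
        ok += wrong <= m // 2       # correct bits form a strict majority
    return ok / trials

few = majority_accept(1, 0.2)       # one worker: success prob around 1 - p
many = majority_accept(15, 0.2)     # redundancy drives the error down fast
```

Since each extra worker costs one work-unit, the interesting question, which the paper answers with matching lower and upper bounds, is how small m can be while keeping the failure probability below a target ε.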


International Symposium on Distributed Computing | 2001

The Complexity of Synchronous Iterative Do-All with Crashes

Chryssis Georgiou; Alexander Russell; Alexander A. Shvartsman

Do-All is the problem of performing N tasks in a distributed system of P failure-prone processors [8]. Many distributed and parallel algorithms have been developed for this problem, and several algorithm simulations have been developed by iterating Do-All algorithms. The efficiency of solutions for Do-All is measured in terms of work complexity, where all processing steps taken by the processors are counted. We present the first non-trivial lower bounds for Do-All that capture the dependence of work on N, P, and f, the number of processor crashes. For the model of computation where processors are able to make perfect load-balancing decisions locally, we also present matching upper bounds. We define the r-iterative Do-All problem, which abstracts the repeated use of Do-All such as found in algorithm simulations. Our f-sensitive analysis enables us to derive a tight bound for r-iterative Do-All work (stronger than the r-fold work complexity of a single Do-All). Our approach of modeling perfect load balancing allows the analysis of specific algorithms to be divided into two parts: (i) the analysis of the cost of tolerating failures while performing work, and (ii) the analysis of the cost of implementing load balancing. We demonstrate the utility and generality of this approach by improving the analysis of two known efficient algorithms. Finally, we present a new upper bound on simulations of synchronous shared-memory algorithms on crash-prone processors.
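The perfect-load-balancing abstraction used in the upper bounds can be made concrete with a small work-counting simulation. This is a simplified illustration of how work is charged in the synchronous crash model (the function `do_all_work` and its crash-schedule encoding are assumptions of this sketch, not the paper's formalism):

```python
def do_all_work(n_tasks, p_procs, crash_schedule):
    """Count work (total processing steps) for synchronous Do-All under
    an idealized perfect-load-balancing rule: in each step the remaining
    tasks are spread as evenly as possible over the surviving processors,
    so min(alive, remaining) distinct tasks complete while every alive
    processor is charged one unit of work. crash_schedule maps a step
    number to how many processors crash just before that step."""
    alive, remaining, work, step = p_procs, n_tasks, 0, 0
    while remaining > 0 and alive > 0:
        alive -= crash_schedule.get(step, 0)
        if alive <= 0:
            break
        work += alive                       # every alive processor works
        remaining -= min(alive, remaining)  # perfectly balanced progress
        step += 1
    return work

no_crashes = do_all_work(100, 10, {})       # 10 steps of 10 processors
early_crash = do_all_work(100, 10, {0: 5})  # 5 survivors take 20 steps
```

With perfect balancing and alive ≤ remaining, no step is wasted, so work stays at N in both runs above; redundant work appears only when the surviving processors outnumber the remaining tasks, which is exactly the regime the lower bounds charge for.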


Archive | 2007

Do-All Computing in Distributed Systems

Chryssis Georgiou; Alexander A. Shvartsman

This book studies algorithmic issues associated with the cooperative execution of multiple independent tasks by distributed computing agents, including in partitionable networks. It presents the most significant algorithmic solutions developed and available today for Do-All computing in distributed systems, and it is the first monograph devoted to the topic. The book is structured to meet the needs of a professional audience of researchers and practitioners in industry, and is also suitable for graduate-level students in computer science.


SIAM Journal on Computing | 2005

Work-Competitive Scheduling for Cooperative Computing with Dynamic Groups

Chryssis Georgiou; Alexander Russell; Alexander A. Shvartsman

The problem of cooperatively performing a set of t tasks in a decentralized computing environment subject to failures is one of the fundamental problems in distributed computing. The setting with partitionable networks is especially challenging, as algorithmic solutions must accommodate the possibility that groups of processors become disconnected (and, perhaps, reconnected) during the computation. The efficiency of task-performing algorithms is often assessed in terms of work: the total number of tasks, counting multiplicities, performed by all of the processors during the computation. In general, the scenario where the processors are partitioned into g disconnected components causes any task-performing algorithm to have work Ω(t·g), even if each group of processors performs no more than the optimal number of Θ(t) tasks. Given that such pessimistic lower bounds apply to any scheduling algorithm, we pursue a competitive analysis. Specifically, this paper studies a simple randomized scheduling algorithm for p asynchronous processors, connected by a dynamically changing communication medium, to complete t known tasks. The performance of this algorithm is compared against that of an omniscient off-line algorithm with full knowledge of the future changes in the communication medium. The paper describes a notion of computation width, which associates a natural number with a history of changes in the communication medium, and shows both upper and lower bounds on work-competitiveness in terms of this quantity.


International Parallel and Distributed Processing Symposium | 2006

Network uncertainty in selfish routing

Chryssis Georgiou; Theophanis Pavlides; Anna Philippou


Journal of Discrete Algorithms | 2003

Cooperative computing with fragmentable and mergeable groups

Chryssis Georgiou; Alexander A. Shvartsman


Synthesis Lectures on Distributed Computing Theory | 2011

Cooperative Task-Oriented Computing: Algorithms and Complexity

Chryssis Georgiou; Alexander A. Shvartsman


International Journal on Software Tools for Technology Transfer | 2009

Automated implementation of complex distributed algorithms specified in the IOA language

Chryssis Georgiou; Nancy A. Lynch; Panayiotis Mavrommatis; Joshua A. Tauber

Collaboration


Top co-authors of Chryssis Georgiou:

Peter M. Musial (University of Connecticut)

Shlomi Dolev (Ben-Gurion University of the Negev)