Publication


Featured research published by Tom Friedetzky.


SIAM Journal on Computing | 2003

The Natural Work-Stealing Algorithm is Stable

Petra Berenbrink; Tom Friedetzky; Leslie Ann Goldberg

In this paper we analyse a very simple dynamic work-stealing algorithm. In the work-generation model, there are n generators which are arbitrarily distributed among a set of n processors. During each time-step, with probability λ, each generator generates a unit-time task which it inserts into the queue of its host processor. After the new tasks are generated, each processor removes one task from its queue and services it. Clearly, the work-generation model allows the load to grow more and more imbalanced, so, even when λ < 1, the system load can be unbounded. The natural work-stealing algorithm that we analyse works as follows. During each time step, each empty processor sends a request to a randomly selected other processor. Any non-empty processor having received at least one such request in turn decides (again randomly) in favour of one of the requests. The number of tasks which are transferred from the non-empty processor to the empty one is determined by the so-called work-stealing function f. We analyse the long-term behaviour of the system as a function of λ and f. We show that the system is stable for any constant generation rate λ < 1 and for a wide class of functions f. We give a quantitative description of the functions f which lead to stable systems. Furthermore, we give upper bounds on the average system load (as a function of f and n).
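One round of the model is easy to simulate, which makes the dynamics concrete. The sketch below is ours, not the paper's code; the function name and the example steal-half rule f(q) = ⌊q/2⌋ (one member of the class of work-stealing functions analysed) are illustrative assumptions.

```python
import random

def work_stealing_round(queues, lam, f, rng):
    """Simulate one round of the natural work-stealing algorithm
    (a sketch; `f` is the work-stealing function from the abstract).

    queues: task counts, one per processor (one generator each).
    lam:    per-generator task-generation probability per round.
    f:      steal function; a processor with q tasks hands f(q)
            tasks to the one request it accepts.
    """
    n = len(queues)
    # 1. Each generator adds a task with probability lam.
    for i in range(n):
        if rng.random() < lam:
            queues[i] += 1
    # 2. Each empty processor asks a uniformly random *other* processor.
    requests = {}
    for i in range(n):
        if queues[i] == 0:
            j = rng.randrange(n - 1)
            if j >= i:
                j += 1
            requests.setdefault(j, []).append(i)
    # 3. Each non-empty processor grants one randomly chosen request.
    for j, askers in requests.items():
        if queues[j] > 0:
            winner = rng.choice(askers)
            give = min(f(queues[j]), queues[j])
            queues[j] -= give
            queues[winner] += give
    # 4. Each non-empty processor services one task.
    for i in range(n):
        if queues[i] > 0:
            queues[i] -= 1
    return queues
```

For example, `work_stealing_round(queues, 0.4, lambda q: q // 2, random.Random())` runs one round with λ = 0.4 and the steal-half rule.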


Algorithmica | 2012

Convergence to Equilibria in Distributed, Selfish Reallocation Processes with Weighted Tasks

Petra Berenbrink; Tom Friedetzky; Iman Hajirasouliha; Zengjian Hu

We consider the problem of dynamically reallocating (or re-routing) m weighted tasks among a set of n uniform resources (one may think of the tasks as selfish players). We assume an arbitrary initial placement of tasks, and we study the performance of distributed, natural reallocation algorithms. We are interested in the time it takes the system to converge to an equilibrium (or get close to an equilibrium). Our main contributions are (i) a modification of a protocol from 2006 that yields faster convergence to equilibrium, together with a matching lower bound, and (ii) a non-trivial extension to weighted tasks.
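A typical protocol in this family has each task independently sample a random resource and migrate with probability proportional to the load difference. The sketch below shows that generic flavour for unit-weight tasks only; it is our illustration, not the paper's exact rule (which also handles weights), and the function name is an assumption.

```python
import random

def reallocation_round(assignment, n_resources, rng):
    """One synchronous round of a natural selfish reallocation protocol
    (generic sketch, unit-weight tasks): each task samples a random
    resource j and moves there with probability 1 - load(j)/load(i),
    where i is its current resource. All tasks decide concurrently
    against a snapshot of the old loads.
    """
    loads = [0] * n_resources
    for r in assignment:
        loads[r] += 1
    new_assignment = list(assignment)
    for t, i in enumerate(assignment):
        j = rng.randrange(n_resources)
        if loads[j] < loads[i] and rng.random() < 1 - loads[j] / loads[i]:
            new_assignment[t] = j        # migrate to the lighter resource
    return new_assignment
```

Deciding against a snapshot of the old loads is what makes the process distributed: no task needs to know what the others do in the same round.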


research in computational molecular biology | 2005

Improved duplication models for proteome network evolution

Gurkan Bebek; Petra Berenbrink; Colin Cooper; Tom Friedetzky; Joseph H. Nadeau; S. Cenk Sahinalp

Protein-protein interaction networks, particularly that of the yeast S. cerevisiae, have recently been studied extensively. These networks seem to satisfy the small world property and their (1-hop) degree distribution seems to form a power law. More recently, a number of duplication based random graph models have been proposed with the aim of emulating the evolution of protein-protein interaction networks and satisfying these two graph theoretical properties. In this paper, we show that the proposed model of Pastor-Satorras et al. does not generate the power law degree distribution with exponential cutoff as claimed, and that the more restrictive model by Chung et al. cannot be interpreted unconditionally. It is possible to slightly modify these models to ensure that they generate a power law degree distribution. However, even after this modification, the more general k-hop degree distributions achieved by these models, for k > 1, are very different from those of the yeast proteome network. We address this problem by introducing a new network growth model that takes into account the sequence similarity between pairs of proteins (as a binary relationship) as well as their interactions. The new model captures not only the k-hop degree distribution of the yeast protein interaction network for all k > 0, but it also captures the 1-hop degree distribution of the sequence similarity network, which again seems to form a power law.
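The basic duplication mechanism these models share is simple to state in code. The sketch below grows a graph by vertex duplication with partial edge retention; it illustrates the generic mechanism only, not the paper's refined similarity-aware model, and all names and the parameters p, q are our assumptions.

```python
import random

def duplication_graph(n, p, rng, q=0.1):
    """Grow an n-vertex graph by duplication (generic sketch).

    Each new vertex copies a uniformly random existing vertex,
    retaining each of its edges independently with probability p,
    and additionally links back to the copied vertex with
    probability q.
    """
    adj = {0: {1}, 1: {0}}              # seed graph: a single edge
    for v in range(2, n):
        u = rng.randrange(v)            # vertex to duplicate
        adj[v] = set()
        for w in adj[u]:
            if rng.random() < p:        # retain a copy of edge (u, w)
                adj[v].add(w)
                adj[w].add(v)
        if rng.random() < q:            # optional link to the original
            adj[v].add(u)
            adj[u].add(v)
    return adj
```

Varying p changes the tail of the degree distribution, which is exactly the behaviour the paper scrutinises in the published models.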


acm symposium on parallel algorithms and architectures | 2003

A proportionate fair scheduling rule with good worst-case performance

Micah Adler; Petra Berenbrink; Tom Friedetzky; Leslie Ann Goldberg; Paul W. Goldberg; Mike Paterson

In this paper we consider the following scenario. A set of n jobs with different threads is being run concurrently. Each job has an associated weight, which gives the proportion of processor time that it should be allocated. In a single time quantum, p threads of (not necessarily distinct) jobs receive one unit of service, and we require a rule that selects those p threads, at each quantum. Proportionate fairness means that over time, each job will have received an amount of service that is proportional to its weight. That aim cannot be achieved exactly due to the discretisation of service provision, but we can still hope to bound the extent to which service allocation deviates from its target. It is important that any scheduling rule be simple since the rule will be used frequently. We consider a variant of the Surplus Fair Scheduling (SFS) algorithm of Chandra, Adler, Goyal, and Shenoy. Our variant, which is appropriate for scenarios where jobs consist of multiple threads, retains the properties that make SFS empirically attractive but allows the first proof of proportionate fairness in a multiprocessor context. We show that when the variant is run, no job lags more than p·H(n) − p + 1 steps below its target number of services, where H(n) is the nth harmonic number. Also, no job is over-supplied by more than O(1) extra services. This analysis is tight and it also extends to an adversarial setting, which models some situations in which the relative weights of jobs change over time.
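The deficit-based selection idea behind SFS-style rules can be sketched as follows: at each quantum, pick the p threads whose jobs lag furthest behind their proportional targets. This is our illustration of the general idea, not the paper's exact variant; the function name and the per-thread lag discount are assumptions.

```python
def schedule_quantum(jobs, p, served, t):
    """Pick p threads for one quantum by largest lag behind the
    proportional target (sketch of an SFS-style deficit rule).

    jobs:   list of (weight, num_threads); weights sum to 1.
    p:      service units available per quantum.
    served: per-job service counts so far (updated in place).
    t:      quanta elapsed so far.
    """
    slots = []
    for j, (w, threads) in enumerate(jobs):
        lag = w * p * (t + 1) - served[j]   # target minus actual service
        for k in range(threads):
            # A job offers at most `threads` slots per quantum; later
            # slots of the same job carry progressively less lag.
            slots.append((lag - k, j))
    slots.sort(reverse=True)                # most-lagging threads first
    chosen = [j for _, j in slots[:p]]
    for j in chosen:
        served[j] += 1
    return chosen
```

Run over many quanta, each job's service count tracks `weight * p * t`, with the per-quantum deviation bounded as the abstract describes for the real algorithm.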


Random Structures and Algorithms | 2015

Random walks which prefer unvisited edges: exploring high girth even degree expanders in linear time

Petra Berenbrink; Colin Cooper; Tom Friedetzky

Let G be a connected graph with n vertices. A simple random walk on the vertex set of G is a process which at each step moves from its current vertex position to a neighbouring vertex chosen uniformly at random. We consider a modified walk which, whenever possible, chooses an unvisited edge for the next transition, and makes a simple random-walk step otherwise. We call such a walk an edge-process (or E-process). The rule used to choose among unvisited edges at any step has no effect on our analysis. One possible method is to choose an unvisited edge uniformly at random, but we impose no such restriction. For the class of connected even-degree graphs of constant maximum degree, we bound the vertex cover time of the E-process in terms of the edge expansion rate of the graph G, as measured by the eigenvalue gap of the transition matrix of a simple random walk on G. A vertex v is l-good if any even-degree subgraph containing all edges incident with v contains at least l vertices. A graph G is l-good if every vertex has the l-good property. Let G be an even-degree l-good expander of bounded maximum degree. We bound the vertex cover time of any E-process on G; this is to be compared with the lower bound on the cover time of any connected graph by a weighted random walk. Our result is independent of the rule used to select the order of the unvisited edges, which could, for example, be chosen on-line by an adversary. As no walk-based process can cover an n-vertex graph in fewer than n − 1 steps, the cover time of the E-process is of optimal order in this setting. With high probability, random r-regular graphs with r even have these properties, so the vertex cover time of the E-process on such graphs is of optimal, linear order.
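The E-process itself is a few lines of code. The sketch below uses the uniform rule for choosing among unvisited edges, which is one admissible choice (the analysis allows any rule); the function name is ours.

```python
import random

def e_process_cover_time(adj, start, rng):
    """Run the E-process on an undirected graph until every vertex is
    visited: from the current vertex, cross a uniformly random
    unvisited incident edge if one exists, otherwise take a simple
    random-walk step. Returns the number of steps taken.

    adj: dict mapping each vertex to a list of its neighbours.
    """
    visited_edges = set()
    visited = {start}
    v, steps = start, 0
    n = len(adj)
    while len(visited) < n:
        fresh = [u for u in adj[v] if frozenset((v, u)) not in visited_edges]
        u = rng.choice(fresh) if fresh else rng.choice(adj[v])
        visited_edges.add(frozenset((v, u)))   # mark the edge as used
        visited.add(u)
        v = u
        steps += 1
    return steps
```

On a cycle (every vertex of even degree 2), the preference for unvisited edges forces the walk straight around the ring, so the cover time is exactly n − 1, matching the general lower bound for walk-based processes.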


international colloquium on automata languages and programming | 2005

Dynamic diffusion load balancing

Petra Berenbrink; Tom Friedetzky; Russell Martin

We consider the problem of dynamic load balancing in arbitrary (connected) networks on n nodes. Our load generation model is such that during each round, n tasks are generated on arbitrary nodes, and then (possibly after some balancing) one task is deleted from every non-empty node. Notice that this model fully saturates the resources of the network in the sense that we generate just as many new tasks per round as the network is able to delete. We show that even in this situation the system is stable, in that the total load remains bounded (as a function of n alone) over time. Our proof only requires that the underlying “communication” graph be connected. (It of course also works if we generate less than n new tasks per round, but the major contribution of this paper is the fully saturated case.) We further show that the upper bound we obtain is asymptotically tight (up to a moderate multiplicative constant) by demonstrating a corresponding lower bound on the system load for the particular example of a linear array (or path). We also show some simple negative results (i.e., instability) for work-stealing based diffusion-type algorithms in this setting.
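A single diffusion step can be sketched as follows. This is a generic token-based diffusion round for illustration, not the paper's exact algorithm; the one-token-per-edge rule and the function name are our assumptions.

```python
def diffusion_round(load, adj):
    """One synchronous diffusion step on an arbitrary connected graph
    (generic sketch): every node sends one token along each incident
    edge towards a strictly lighter neighbour, as long as it still has
    tokens to give, keeping all loads integral and non-negative.

    load: token count per node.
    adj:  adjacency lists, adj[v] = neighbours of v.
    """
    delta = [0] * len(load)
    for v, nbrs in enumerate(adj):
        avail = load[v]                      # tokens v can still give away
        for u in nbrs:
            if avail > 0 and load[v] > load[u] + 1:
                delta[v] -= 1                # one token over edge (v, u)
                delta[u] += 1
                avail -= 1
    return [x + d for x, d in zip(load, delta)]
```

On a path (linear array), the topology the paper uses for its lower bound, load spreads out one edge per round until no adjacent pair differs by more than one token.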


mathematical foundations of computer science | 2012

Observe and remain silent (communication-less agent location discovery)

Tom Friedetzky; Leszek Gąsieniec; Thomas Gorry; Russell Martin

We study a randomised distributed communication-less coordination mechanism for n uniform anonymous agents located on a circle with unit circumference. We assume the agents are located at arbitrary but distinct positions, unknown to other agents. The agents perform actions in synchronised rounds. At the start of each round an agent chooses the direction of its movement (clockwise or anticlockwise), and moves at unit speed during this round. Agents are not allowed to overpass, i.e., when an agent collides with another it instantly starts moving with the same speed in the opposite direction. Agents cannot leave marks on the ring, have zero vision and cannot exchange messages. However, at the conclusion of each round each agent has access to (some, not necessarily all) information regarding its trajectory during this round. This information can be processed and stored by the agent for further analysis. The location discovery task to be performed by each agent is to determine the initial position of every other agent and eventually to stop at its initial position, or proceed to another task, in a fully synchronised manner. Our primary motivation is to study distributed systems where agents collect the minimum amount of information that is necessary to accomplish this location discovery task. Our main result is a fully distributed randomised (Las Vegas type) algorithm, solving the location discovery problem w.h.p. in O(n log² n) rounds (assuming the agents collect sufficient information). Note that our result also holds if initially the agents do not know the value of n and they have no coherent sense of direction.


european symposium on algorithms | 2007

Convergence to equilibria in distributed, selfish reallocation processes with weighted tasks

Petra Berenbrink; Tom Friedetzky; Iman Hajirasouliha; Zengjian Hu

We consider the problem of dynamically reallocating (or rerouting) m weighted tasks among a set of n uniform resources (one may think of the tasks as selfish agents). We assume an arbitrary initial placement of tasks, and we study the performance of distributed, natural reallocation algorithms. We are interested in the time it takes the system to converge to an equilibrium (or get close to an equilibrium). Our main contributions are (i) a modification of the protocol in [2] that yields faster convergence to equilibrium, together with a matching lower bound, and (ii) a non-trivial extension to weighted tasks.


acm symposium on parallel algorithms and architectures | 2010

Balls into bins with related random choices

Petra Berenbrink; André Brinkmann; Tom Friedetzky; Lars Nagel

We consider a variation of classical balls-into-bins games. We randomly allocate m balls into n bins. Following Godfrey's model [6], we assume that each ball i comes with a β-balanced set of clusters of bins B_i = {B_i^1, ..., B_i^{s_i}}. The condition of β-balancedness essentially enforces a uniform-like selection of bins, where the parameter β governs the deviation from uniformity. We use a more relaxed notion of balancedness than [6], and also generalise the concept to deterministic balancedness. Each ball i = 1, ..., m, in turn, runs the following protocol: (i) it chooses i.u.r. (independently and uniformly at random) a cluster of bins B ∈ B_i, and (ii) chooses i.u.r. one of the empty bins in B and allocates itself to it. Should the cluster contain no empty bin, the protocol fails. If the protocol terminates successfully, that is, every ball has indeed been able to find at least one empty bin in its chosen cluster, then this obviously results in a maximum load of one. The main goal is to find a tight bound on the maximum number of balls, m, so that the protocol terminates successfully (with high probability). We improve on Godfrey's result and show m = n − Θ(β). This upper bound holds for all mentioned types of balancedness. It even holds when we generalise the model by allowing runs. In this extended model, motivated by P2P networks, each ball i tosses a coin, and with constant probability p_i (0 < p_i ≤ 1) it runs the protocol as described above, but with the remaining probability it copies the previous ball's choice B_{i−1}, that is, it re-uses the previous cluster of bins.
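The protocol itself is straightforward to sketch. The code below is our illustration: the caller supplies each ball's cluster family (via the hypothetical `clusters_of` callback), and β-balancedness of that family is assumed, not verified.

```python
import random

def allocate_with_clusters(m, n, clusters_of, rng):
    """Run the cluster-based allocation protocol described above.

    Each ball i.u.r. picks one of its clusters, then i.u.r. picks an
    empty bin inside it; returns None if a chosen cluster has no empty
    bin (the protocol fails), else the list of bins used.

    clusters_of(i): ball i's list of bin clusters (lists of bin ids);
                    the paper requires this family to be beta-balanced.
    """
    occupied = [False] * n
    placement = []
    for i in range(m):
        cluster = rng.choice(clusters_of(i))      # step (i)
        empty = [b for b in cluster if not occupied[b]]
        if not empty:
            return None                           # protocol fails
        b = rng.choice(empty)                     # step (ii)
        occupied[b] = True
        placement.append(b)
    return placement
```

A successful run places every ball in its own bin, i.e. achieves maximum load one by construction; the paper's question is how large m can be before failures become likely.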


international parallel and distributed processing symposium | 2006

A new analytical method for parallel, diffusion-type load balancing

Petra Berenbrink; Tom Friedetzky; Zengjian Hu

We propose a new proof technique which can be used to analyze many parallel load balancing algorithms. The technique is designed to handle concurrent load balancing actions, which are often the main obstacle in the analysis. We demonstrate the usefulness of the approach by analyzing various natural diffusion-type protocols. Our results are similar to, or better than, previously existing ones, while our proofs are much easier. The key idea is to first sequentialize the original, concurrent load transfers, analyze this new, sequential system, and then to bound the gap between the two systems.

Collaboration

Top Co-Authors
Zengjian Hu

Simon Fraser University
