
Publication


Featured research published by Michael E. Saks.


SIAM Journal on Computing | 2000

Wait-Free k-Set Agreement is Impossible: The Topology of Public Knowledge

Michael E. Saks; Fotios Zaharoglou

In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of classical consensus proposed by Chaudhuri [Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For n > k ≥ 2, it was not known whether this problem is solvable deterministically in the asynchronous shared-memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the processors' knowledge of the state of the system. It reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.

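The two conditions that define k-set agreement, validity (every decision is some processor's input) and the agreement condition (at most k distinct decisions), can be checked mechanically. A minimal sketch in Python; the function name and encoding are ours for illustration, not anything from the paper:

```python
def is_valid_k_set_agreement(inputs, decisions, k):
    """Check the two correctness conditions of k-set agreement.

    inputs[i] is processor i's private input; decisions[i] its decision.
    Validity: every decision value is one of the original inputs.
    k-agreement: at most k distinct values are decided.
    """
    validity = all(d in inputs for d in decisions)
    agreement = len(set(decisions)) <= k
    return validity and agreement

# Classical consensus is the k = 1 case.
assert is_valid_k_set_agreement([3, 7, 7], [7, 7, 7], k=1)
# Three processors deciding two distinct input values satisfies 2-set agreement...
assert is_valid_k_set_agreement([3, 7, 9], [3, 7, 7], k=2)
# ...but not consensus (k = 1).
assert not is_valid_k_set_agreement([3, 7, 9], [3, 7, 7], k=1)
```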

Symposium on the Theory of Computing | 1989

The cell probe complexity of dynamic data structures

Michael L. Fredman; Michael E. Saks

Dynamic data structure problems involve the representation of data in memory in such a way as to permit certain types of modifications of the data (updates) and certain types of questions about the data (queries). This paradigm encompasses many fundamental problems in computer science. The purpose of this paper is to prove new lower and upper bounds on the time per operation to implement solutions to some familiar dynamic data structure problems including list representation, subset ranking, partial sums, and the set union problem. The main features of our lower bounds are:

- They hold in the cell probe model of computation (A. Yao [18]), in which the time complexity of a sequential computation is defined to be the number of words of memory that are accessed. (The number of bits b in a single word of memory is a parameter of the model.) All other computations are free. This model is at least as powerful as a random access machine and allows for unusual representation of data, indirect addressing, etc. This contrasts with most previous lower bounds, which are proved in models (e.g., algebraic, comparison, pointer manipulation) that require restrictions on the way data is represented and manipulated.
- The lower bound method presented here can be used to derive amortized complexities, worst case per operation complexities, and randomized complexities.
- The results occasionally provide (nearly tight) tradeoffs between the number R of words of memory that are read per operation, the number W of memory words rewritten per operation, and the size b of each word. For the problems considered here there is a parameter n that represents the size of the data set being manipulated, and for these problems b = log n is a natural register size to consider. By letting b vary, our results illustrate the effect of register size on time complexity. For instance, one consequence of the results is that for some of the problems considered here, increasing the register size from log n to polylog(n) only reduces the time complexity by a constant factor. On the other hand, decreasing the register size from log n to 1 increases time complexity by a log n factor for one of the problems we consider and only a loglog n factor for some other problems.

The first two specific data structure problems for which we obtain bounds are:

- List Representation. This problem concerns the representation of an ordered list of at most n (not necessarily distinct) elements from the universe U = {1, 2, ..., n}. The operations to be supported are report(k), which returns the kth element of the list; insert(k, u), which inserts element u into the list between the elements in positions k - 1 and k; and delete(k), which deletes the kth item.
- Subset Rank. This problem concerns the representation of a subset S of U = {1, 2, ..., n}. The operations that must be supported are the updates "insert item j into the set" and "delete item j from the set" and the queries rank(j), which returns the number of elements in S that are less than or equal to j.

The natural word size for these problems is b = log n, which allows an item of U or an index into the list to be stored in one register. One simple solution to the list representation problem is to maintain a vector v whose kth entry contains the kth item of the list. The report operation can be done in constant time, but the insert and delete operations may take time linear in the length of the list. Alternatively, one could store the items of the list with each element having a pointer to its predecessor and successor in the list. This allows for constant time updates (given a pointer to the appropriate location), but requires linear cost for queries. This problem can be solved much more efficiently by use of balanced trees (such as AVL trees). When b = log n, the worst case cost per operation using AVL trees is O(log n). If instead b = 1, so that each bit access costs 1, then the AVL tree solution requires O(log^2 n) per operation. It is not hard to find similar upper bounds for the subset rank problem (the algorithms for this problem are actually simpler than AVL trees). The question is: are these upper bounds best possible? Our results show that the upper bounds for the case of log n bit registers are within a loglog n factor of optimal. On the other hand, somewhat surprisingly, for the case of single bit registers there are implementations for both of these problems that run in time significantly faster than O(log^2 n) per operation. Let CPROBE(b) denote the cell probe computational model with register size b.

Theorem 1. If b ≤ (log n)^t for some t, then any CPROBE(b) implementation of either list representation or subset rank requires Ω(log n / loglog n) amortized time per operation.

Theorem 2. Subset rank and list representation have CPROBE(1) implementations with respective complexities O((log n)(loglog n)) and O((log n)(loglog n)^2) per operation.

Paul Dietz (personal communication) has found an implementation of list representation with log n bit registers that requires only O(log n / loglog n) time per operation, and thus the result of Theorem 1 is best possible. The lower bounds of Theorem 1 are derived from lower bounds for a third problem:

- Partial Sums mod k. An array A[1], ..., A[N] of integers mod k is to be represented. Updates are add(i, δ), which implements A[i] ← A[i] + δ; and queries are sum(j), which returns Σ_{i ≤ j} A[i] (mod k).

This problem is denoted PS(n, k). Our main lower bound theorems provide tradeoffs between the number of register rewrites and register reads as a function of n, k, and b. Two corollaries of these results are:

Theorem 3. Any CPROBE(b) implementation of PS(n, 2) (partial sums mod 2) requires Ω(log n / (loglog n + log b)) amortized time per operation, and for b ≥ log n, there is an implementation that achieves this. In particular, if b = Θ((log n)^c) for some constant c, then the optimal time complexity of PS(n, 2) is Θ(log n / loglog n).

Theorem 4. Any CPROBE(1) implementation of PS(n, n) with single bit registers requires Ω((log n / loglog n)^2) amortized time per operation, and there is an implementation that achieves O(log^2 n) time per operation.

It can be shown that a lower bound for PS(n, 2) is also a lower bound for both list representation and subset rank (the details, which are not difficult, are omitted from this report), and thus Theorem 1 follows from Theorem 3. The results of Theorem 4 make an interesting contrast with those of Theorem 2. For the three problems, list representation, subset rank, and PS(n, k), there are standard algorithms that can be implemented on a CPROBE(log n) that use time O(log n) per operation, and their implementations on CPROBE(1) require O(log^2 n) time. Theorem 4 says that for the problem PS(n, n) this algorithm is essentially best possible, while Theorem 2 says that for list representation and rank, the algorithm can be significantly improved. In fact, the rank problem can be viewed as a special case of PS(n, n) where the variables take on values in {0, 1}, and apparently this specialization is enough to reduce the complexity on a CPROBE(1) by a factor of log n / loglog n, even though on a CPROBE(log n) the complexities of the two problems differ by no more than a loglog n factor. The third problem we consider is the set union problem. This problem concerns the design of a data structure for the on-line manipulation of sets in the following setting. Initially, there are n singleton sets {1}, {2}, ..., {n} with i chosen as the name of the set {i}. Our data structure is required to implement two operations, Find(j) and Union(A, …

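The partial-sums problem PS(n, k) studied in this paper (maintain an array A[1..n] of integers mod k under point updates add(i, δ) and prefix queries sum(j)) admits a classical O(log n) word-operation upper bound via a binary indexed (Fenwick) tree. This is a textbook structure, not the paper's construction; the paper's contribution is the matching lower bound:

```python
class PartialSumsModK:
    """Binary indexed (Fenwick) tree supporting the PS(n, k) operations:
    add(i, delta): A[i] <- A[i] + delta  (mod k)
    total(j):      sum of A[1..j]        (mod k)
    Both run in O(log n) word operations."""

    def __init__(self, n, k):
        self.n, self.k = n, k
        self.tree = [0] * (n + 1)  # 1-indexed

    def add(self, i, delta):
        while i <= self.n:
            self.tree[i] = (self.tree[i] + delta) % self.k
            i += i & (-i)  # jump to the next covering node

    def total(self, j):
        s = 0
        while j > 0:
            s = (s + self.tree[j]) % self.k
            j -= j & (-j)  # strip the lowest set bit
        return s

ps = PartialSumsModK(n=8, k=5)
ps.add(3, 4)
ps.add(5, 3)
assert ps.total(4) == 4
assert ps.total(8) == (4 + 3) % 5
```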

Journal of the ACM | 1992

An optimal on-line algorithm for metrical task systems

Nathan Linial; Michael E. Saks

In practice, almost all dynamic systems require decisions to be made on-line, without full knowledge of their future impact on the system. A general model for the processing of sequences of tasks is introduced, and a general on-line decision algorithm is developed. It is shown that, for an important class of special cases, this algorithm is optimal among all on-line algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T^1, T^2, ..., T^k of tasks is a sequence s_1, s_2, ..., s_k of states where s_i is the state in which T^i is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An on-line scheduling algorithm is one that chooses s_i knowing only T^1, T^2, ..., T^i. Such an algorithm is w-competitive if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The competitive ratio w(S, d) is the infimum of w for which there is a w-competitive on-line scheduling algorithm for (S, d). It is shown that w(S, d) = 2|S| - 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system. Finally, randomized on-line scheduling algorithms are introduced. It is shown that for the uniform task system (in which d(i, j) = 1 for all i ≠ j), the expected competitive ratio w̄(S, d) = O(log |S|).

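The benchmark an on-line algorithm is judged against, the optimal offline schedule cost, can be computed by a straightforward dynamic program over states. This sketch is ours, for illustration, and is not an algorithm from the paper; tasks are given as per-state processing-cost vectors:

```python
def optimal_offline_cost(d, tasks, start):
    """Optimal offline schedule cost for a task system (S, d).

    d[i][j] is the cost of moving from state i to state j (d[i][i] == 0);
    each task is a list of per-state processing costs.  best[s] holds the
    cheapest cost of processing the tasks seen so far and ending in s."""
    n = len(d)
    INF = float('inf')
    best = [0 if s == start else INF for s in range(n)]
    for task in tasks:
        # Move from the best previous state, then process the task.
        best = [min(best[i] + d[i][j] for i in range(n)) + task[j]
                for j in range(n)]
    return min(best)

# Two states with unit switching cost: pay 1 to move, then process free.
d = [[0, 1], [1, 0]]
tasks = [[0, 5], [5, 0]]  # task 1 cheap in state 0, task 2 cheap in state 1
assert optimal_offline_cost(d, tasks, start=0) == 1
```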

Journal of the ACM | 2005

An improved exponential-time algorithm for k-SAT

Ramamohan Paturi; Michael E. Saks; Francis Zane

We propose and analyze a simple new randomized algorithm, called ResolveSat, for finding satisfying assignments of Boolean formulas in conjunctive normal form. The algorithm consists of two stages: a preprocessing stage in which resolution is applied to enlarge the set of clauses of the formula, followed by a search stage that uses a simple randomized greedy procedure to look for a satisfying assignment. Currently, this is the fastest known probabilistic algorithm for k-CNF satisfiability for k ≥ 4 (with a running time of O(2^{0.5625n}) for 4-CNF). In addition, it is the fastest known probabilistic algorithm for k-CNF, k ≥ 3, formulas that have at most one satisfying assignment (unique k-SAT), with a running time of O(2^{(2 ln 2 - 1)n + o(n)}) = O(2^{0.386...n}) in the case of 3-CNF. The analysis of the algorithm also gives an upper bound on the number of codewords of a code defined by a k-CNF. This is applied to prove a lower bound on depth-3 circuits accepting codes with nonconstant distance. In particular, we prove a lower bound of Ω(2^{1.282...√n}) for an explicitly given Boolean function of n variables. This is the first such lower bound that is asymptotically bigger than 2^{√n + o(√n)}.

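A search stage of this kind can be sketched as follows. This is our illustrative simplification of a single randomized greedy assignment pass, with clauses encoded as sets of signed integers (a common convention, not the paper's); it omits the resolution preprocessing and the paper's analysis entirely:

```python
import random

def search_once(clauses, num_vars, rng=random):
    """One pass of a randomized greedy search for a satisfying assignment
    (illustrative sketch).  clauses: iterable of frozensets of nonzero
    ints; literal v means variable v is true, -v means false.  Variables
    are numbered 1..num_vars and visited in random order; if the partial
    assignment leaves some clause as a unit clause on the current
    variable, the forced value is used, otherwise a random bit is chosen.
    Returns a satisfying assignment (dict) or None for a failed run."""
    assignment = {}
    order = list(range(1, num_vars + 1))
    rng.shuffle(order)
    for v in order:
        forced = None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            undecided = [l for l in clause if abs(l) not in assignment]
            if not undecided:
                return None  # clause falsified; give up on this run
            if len(undecided) == 1 and abs(undecided[0]) == v:
                forced = undecided[0] > 0  # unit clause forces v
        assignment[v] = forced if forced is not None else (rng.random() < 0.5)
    if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
        return assignment
    return None

# Repeat independent runs until one succeeds (bounded here for safety).
clauses = [frozenset({1, 2}), frozenset({-1, 3}), frozenset({-2, -3})]
result = None
for _ in range(1000):
    result = search_once(clauses, 3)
    if result is not None:
        break
```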

Symposium on the Theory of Computing | 1987

An optimal online algorithm for metrical task systems

Nathan Linial; Michael E. Saks

In practice, almost all dynamic systems require decisions to be made online, without full knowledge of their future impact on the system. We introduce a general model for the processing of sequences of tasks and develop a general online decision algorithm. We show that, for an important class of special cases, this algorithm is optimal among all online algorithms. Specifically, a task system (S, d) for processing sequences of tasks consists of a set S of states and a cost matrix d where d(i, j) is the cost of changing from state i to state j (we assume that d satisfies the triangle inequality and all diagonal entries are 0). The cost of processing a given task depends on the state of the system. A schedule for a sequence T^1, T^2, ..., T^k of tasks is a sequence s_1, s_2, ..., s_k of states where s_i is the state in which T^i is processed; the cost of a schedule is the sum of all task processing costs and state transition costs incurred. An online scheduling algorithm is one that chooses s_i knowing only T^1, T^2, ..., T^i. Such an algorithm operates within waste factor w if, on any input task sequence, its cost is within an additive constant of w times the optimal offline schedule cost. The online waste factor w(S, d) is the infimum waste factor of any online scheduling algorithm for (S, d). We show that w(S, d) = 2|S| - 1 for every task system in which d is symmetric, and w(S, d) = O(|S|^2) for every task system.


Foundations of Computer Science | 1986

Probabilistic Boolean decision trees and the complexity of evaluating game trees

Michael E. Saks; Avi Wigderson

The Boolean decision tree model is perhaps the simplest model that computes Boolean functions; it charges only for reading an input variable. We study the power of randomness (vs. both determinism and nondeterminism) in this model, and prove separation results between the three complexity measures. These results are obtained via general and efficient methods for computing upper and lower bounds on the probabilistic complexity of evaluating Boolean formulae in which every variable appears exactly once (AND/OR trees with distinct leaves). These bounds are shown to be exactly tight for interesting families of such tree functions. We then apply our results to the complexity of evaluating game trees, which is a central problem in AI. These trees are similar to Boolean tree functions, except that input variables (leaves) may take values from a large set (of valuations of game positions) and the AND/OR nodes are replaced by MIN/MAX nodes. Here the cost is the number of positions (leaves) probed by the algorithm. The best known algorithm for this problem is the alpha-beta pruning method. As a deterministic algorithm, it will in the worst case have to examine all positions. Many papers have studied the expected behavior of alpha-beta pruning (on uniform trees) under the unreasonable assumption that position values are drawn independently from some distribution. We analyze a randomized variant of alpha-beta pruning, show that it is considerably faster than the deterministic one in the worst case, and prove it optimal for uniform trees.

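The key idea behind such a randomized variant, evaluating the children of each node in random order and short-circuiting as soon as the node's value is determined, can be sketched for Boolean AND/OR trees. This is our illustration, not the paper's exact algorithm or analysis:

```python
import random

def eval_tree(node, probes=None):
    """Evaluate an AND/OR tree over Boolean leaves, probing children in
    random order and short-circuiting.  A node is either a bare bool
    (a leaf) or a pair (op, children) with op in {'and', 'or'}.  If
    `probes` is a list, each leaf value read is appended to it, so
    len(probes) counts the leaves probed."""
    if isinstance(node, bool):
        if probes is not None:
            probes.append(node)
        return node
    op, children = node
    children = list(children)
    random.shuffle(children)  # random evaluation order
    for child in children:
        v = eval_tree(child, probes)
        if (op == 'and' and not v) or (op == 'or' and v):
            return v  # value determined: short-circuit remaining children
    return op == 'and'  # all children true (and) / all false (or)

tree = ('and', [('or', [False, True]), ('or', [True, False])])
assert eval_tree(tree) is True
```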

Combinatorica | 1993

Low diameter graph decompositions

Nathan Linial; Michael E. Saks

A decomposition of a graph G = (V, E) is a partition of the vertex set into subsets (called blocks). The diameter of a decomposition is the least d such that any two vertices belonging to the same connected component of a block are at distance ≤ d. In this paper we prove (nearly best possible) statements of the form: Any n-vertex graph has a decomposition into a small number of blocks, each having small diameter. Such decompositions provide a tool for efficiently decentralizing distributed computations. In [4] it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n) for …

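A simple way to produce blocks of bounded diameter, though not the decomposition procedure of this paper, is ball growing: repeatedly carve out a BFS ball of fixed radius around an arbitrary remaining vertex. A sketch:

```python
from collections import deque

def ball_decomposition(adj, r):
    """Partition the vertices of a graph into blocks, each a BFS ball of
    radius r in the remaining graph; every block therefore has weak
    diameter at most 2r.  adj: dict mapping vertex -> neighbour list."""
    remaining = set(adj)
    blocks = []
    while remaining:
        root = next(iter(remaining))
        block, frontier = {root}, deque([(root, 0)])
        while frontier:
            v, dist = frontier.popleft()
            if dist == r:
                continue  # ball boundary reached; do not expand further
            for u in adj[v]:
                if u in remaining and u not in block:
                    block.add(u)
                    frontier.append((u, dist + 1))
        remaining -= block
        blocks.append(block)
    return blocks

# A path on 6 vertices, radius 1: each block spans at most 3 vertices.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
blocks = ball_decomposition(path, 1)
assert {v for b in blocks for v in b} == set(range(6))
```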

Combinatorica | 1984

A topological approach to evasiveness

Jeff Kahn; Michael E. Saks; Dean Sturtevant



Discrete Mathematics | 1989

An on-line graph coloring algorithm with sublinear performance ratio

L. Lovász; Michael E. Saks; William T. Trotter



Journal of the ACM | 1989

The periodic balanced sorting network

Martin Dowd; Yehoshua Perl; Larry Rudolph; Michael E. Saks

Collaboration


Dive into Michael E. Saks's collaborations.

Top Co-Authors

Nathan Linial (Hebrew University of Jerusalem)
Shiyu Zhou (University of Pennsylvania)
Michal Koucký (Charles University in Prague)
C. Seshadhri (University of California)
Anna R. Karlin (University of Washington)
Avi Wigderson (Institute for Advanced Study)