Faith E. Fich
University of Toronto
Publications
Featured research published by Faith E. Fich.
Distributed Computing | 2003
Faith E. Fich; Eric Ruppert
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing. There is a strong emphasis in our presentation on explaining the wide variety of techniques that are used to obtain the results described.
Symposium on the Theory of Computing | 2002
Paul Beame; Faith E. Fich
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed compactly stored set. Our algorithms are for the unit-cost word RAM with multiplication and are extended to give dynamic algorithms. The lower bounds are proved for a large class of problems, including both static and dynamic predecessor problems, in a much stronger communication game model, but they apply to the cell probe and RAM models.
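The predecessor query the abstract refers to can be illustrated with a minimal sketch: plain binary search over a sorted array. This only shows the query semantics; the paper's word-RAM data structures beat the O(log n) comparison bound, and the names here are illustrative, not from the paper.

```python
import bisect

def predecessor(sorted_keys, x):
    # Largest stored key <= x, or None if every stored key exceeds x.
    i = bisect.bisect_right(sorted_keys, x)
    return sorted_keys[i - 1] if i > 0 else None

keys = [2, 7, 11, 19, 23]  # the fixed, compactly stored set
```

For example, `predecessor(keys, 10)` returns 7, and a query below the minimum returns None.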
Symposium on the Theory of Computing | 1983
Faith E. Fich
In this paper, new upper and lower bounds are obtained for the number of gates in parallel prefix circuits with minimum depth when the number of inputs is a power of two. In addition, structural information concerning these circuits is described. Parallel prefix circuits with bounds imposed on the fan-out of the gates are also considered. In both cases, the upper and lower bounds obtained differ by small constant factors.
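For context, a parallel prefix circuit computes all prefixes x1, x1∘x2, ..., x1∘⋯∘xn of an associative operation. A minimal sketch of the recursive-doubling scheme, which achieves logarithmic depth but uses more gates than the gate-count-optimal circuits studied in the paper:

```python
def prefix_scan(xs, op):
    # Recursive-doubling prefix computation: after the round with offset d,
    # position i holds the combination of inputs max(0, i - 2d + 1) .. i.
    # The number of rounds (circuit depth) is ceil(log2 n).
    ys = list(xs)
    d = 1
    while d < len(ys):
        ys = [op(ys[i - d], ys[i]) if i >= d else ys[i]
              for i in range(len(ys))]
        d *= 2
    return ys
```

With `op = +` on eight inputs, this yields the running sums in three rounds; any associative operation (max, Boolean OR, ...) works the same way.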
Symposium on the Theory of Computing | 1999
Paul Beame; Faith E. Fich
We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed efficiently stored set. Our algorithms are for the unit-cost word-level RAM with multiplication and extend to give optimal dynamic algorithms. The lower bounds are proved in a much stronger communication game model, but they apply to the cell probe and RAM models and to both static and dynamic predecessor problems.
International Symposium on Distributed Computing | 2005
Faith E. Fich; Victor Luchangco; Mark Moir; Nir Shavit
The obstruction-free progress condition is weaker than previous nonblocking progress conditions such as lock-freedom and wait-freedom, and admits simpler implementations that are faster in the uncontended case. Pragmatic contention management techniques appear to be effective at facilitating progress in practice, but, as far as we know, none guarantees progress. We present a transformation that converts any obstruction-free algorithm into one that is wait-free when analyzed in the unknown-bound semisynchronous model. Because all practical systems satisfy the assumptions of the unknown-bound model, our result implies that, for all practical purposes, obstruction-free implementations can provide progress guarantees equivalent to wait-freedom. Our transformation preserves the advantages of any pragmatic contention manager, while guaranteeing progress.
Journal of Computer and System Sciences | 1980
Janusz A. Brzozowski; Faith E. Fich
We consider the family of languages whose syntactic monoids are R-trivial. Languages whose syntactic monoids are J-trivial correspond to a congruence which tests the subwords of length n or less that appear in a given word, for some integer n. We show that in the R-trivial case the required congruence also takes into account the order in which these subwords first appear, from left to right. Characterizations of the related finite automata and regular expressions are summarized. Dual results for L-trivial monoids are also discussed.
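A sketch of the subword test underlying the J-trivial congruence mentioned above. Here "subword" means scattered subword, i.e. subsequence; the function names are illustrative, not from the paper:

```python
from itertools import product

def is_subword(u, w):
    # Does u occur in w as a scattered subword (a subsequence)?
    # `c in it` advances the iterator past the first match, so the
    # letters of u must appear in w in order, left to right.
    it = iter(w)
    return all(c in it for c in u)

def subwords_up_to(w, n, alphabet):
    # The set of subwords of length <= n appearing in w; the J-trivial
    # congruence identifies two words exactly when these sets agree.
    return {u for k in range(1, n + 1)
              for u in map(''.join, product(alphabet, repeat=k))
              if is_subword(u, w)}
```

For the R-trivial case the abstract describes a refinement: the congruence also records the order in which these subwords first appear while scanning the word from left to right.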
Symposium on the Theory of Computing | 1985
Faith E. Fich; F. Meyer auf der Heide; Prabhakar Ragde; Avi Wigderson
In this paper we compare the power of the two most commonly used concurrent-write models of parallel computation, the COMMON PRAM and the PRIORITY PRAM. These models differ in the way they resolve write conflicts. If several processors want to write into the same shared memory cell at the same time, in the COMMON model they have to write the same value. In the PRIORITY model, they may attempt to write different values; the processor with smallest index succeeds. We consider PRAMs with n processors, each having arbitrary computational power. We provide the first separation results between these two models in two extreme cases: when the size m of the shared memory is small (m ≤ n^ε, ε < 1), and when it is infinite. In the case of small memory, the PRIORITY model can be faster than the COMMON model by a factor of Θ(log n), and this lower bound holds even if the COMMON model is probabilistic. In the case of infinite memory, the gap between the models can be a factor of Ω(log log log n). We develop new proof techniques to obtain these results. The technique used for the second lower bound is strong enough to establish the first tight time bounds for the PRIORITY model, which is the strongest parallel computation model. We show that finding the maximum of n numbers requires Θ(log log n) steps, generalizing a result of Valiant for parallel computation trees.
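The two write-conflict resolution rules can be sketched as follows. This only models the memory update of a single synchronous step on a sequential machine; names and encoding are illustrative:

```python
def resolve_writes(writes, memory, model):
    # writes: list of (processor_index, cell, value) issued in one PRAM step.
    by_cell = {}
    for pid, cell, val in writes:
        by_cell.setdefault(cell, []).append((pid, val))
    for cell, attempts in by_cell.items():
        values = {v for _, v in attempts}
        if model == "COMMON":
            # COMMON: concurrent writers to a cell must all write the same value.
            assert len(values) == 1, f"illegal write conflict at cell {cell!r}"
            memory[cell] = values.pop()
        elif model == "PRIORITY":
            # PRIORITY: the processor with the smallest index succeeds.
            memory[cell] = min(attempts)[1]
    return memory
```

Under PRIORITY, writes (3, 'x', 9) and (1, 'x', 5) in the same step leave 5 in cell 'x'; under COMMON, the same pair of writes would be illegal.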
Algorithmica | 1988
Faith E. Fich; Prabhakar Ragde; Avi Wigderson
This paper is concerned with the relative power of the two most popular concurrent-write models of parallel computation, the PRIORITY PRAM [G] and the COMMON PRAM [K]. Improving the trivial and seemingly optimal O(log n) simulation, we show that one step of a PRIORITY machine can be simulated by O(log n / log log n) steps of a COMMON machine with the same number of processors (and more memory). We further prove that this is optimal, if processor communication is restricted in a natural way.
Symposium on the Theory of Computing | 1983
Danny Dolev; Faith E. Fich; Wolfgang J. Paul
Branching programs for the computation of Boolean functions were first studied in the Master's thesis of Masek [7]. In a rather straightforward manner they generalize the concept of a decision tree to a decision graph. Let P be a branching program with edges labelled by the Boolean variables x_1, ..., x_n and their complements. Given an input a = (a_1, ..., a_n) ∈ {0,1}^n, program P computes a function value f_P(a) in the following way. The nodes of P play the role of states or configurations. In particular, sinks play the role of final states or stopping configurations. The length of program P is the length of the longest path in P. Following Cobham [2], the capacity of the program is defined to be the logarithm to the base 2 of the number of nodes in P. Length and capacity are lower bounds on time and space requirements for any reasonable model of sequential computation. Clearly, any n-variable Boolean function can be computed by a branching program of length n if the capacity is not constrained. Since space lower bounds in excess of log n remain a fundamental challenge, we consider restricted branching programs in the hope of gaining insight into this problem and the closely related problem of time-space trade-offs.
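The evaluation rule described above, following edges according to the input bits from the source down to a sink, can be sketched as a small decision-graph walk. The encoding and the XOR example are illustrative, not from the paper:

```python
def eval_branching_program(nodes, sinks, root, a):
    # nodes: {node: (var_index, zero_succ, one_succ)} -- an internal node
    # branches on x_{var_index}; sinks: {node: bool} -- the final answers.
    v = root
    while v not in sinks:
        i, lo, hi = nodes[v]
        v = hi if a[i] else lo
    return sinks[v]

# x0 XOR x1 as a branching program: 4 internal-plus-sink nodes sharing sinks,
# so it is a decision graph rather than a tree.  Length (longest path) is 2.
xor_nodes = {'s': (0, 'A', 'B'),   # branch on x0
             'A': (1, 'f', 't'),   # x0 = 0: answer is x1
             'B': (1, 't', 'f')}   # x0 = 1: answer is NOT x1
xor_sinks = {'t': True, 'f': False}
```

Running the program on all four inputs reproduces the XOR truth table.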
Principles of Distributed Computing | 1984
Faith E. Fich; Prabhakar Ragde; Avi Wigderson
Shared-memory models for parallel computation (e.g. parallel RAMs) are very natural and already widely used for parallel algorithm design. The various models differ from each other mainly in the way they restrict simultaneous processor access to a shared memory cell. Understanding the relative power of these models is important for understanding the power of parallel computation. Two recent pioneering works shed some light on this question. Cook and Dwork [CD] (resp. Snir [S]) present problems that, for instances of size n, can be solved in O(1) time on an n-processor PRAM that allows simultaneous write (resp. read) access to shared memory, but require Ω(log n) time on a PRAM that forbids simultaneous write (resp. read) access, regardless of the number of processors. When allowing simultaneous write access, the model must include a write-conflict resolution scheme. Three such schemes were suggested in the literature, and in this paper we study their relative power. Here the situation is more sensitive, as a small increase in the number of processors allows constant-time simulation of the strongest by the weakest. By fixing the number of processors and parametrizing the number of shared memory cells, we obtain tight separation results between the models, thereby partially answering open questions of Vishkin [V]. New lower bound techniques are developed for this purpose.