
Publications


Featured research published by F. Meyer auf der Heide.


Foundations of Computer Science | 1997

Exploiting locality for data management in systems of limited bandwidth

Bruce M. Maggs; F. Meyer auf der Heide; Berthold Vöcking; Matthias Westermann

This paper deals with data management in computer systems in which the computing nodes are connected by a relatively sparse network. We consider the problem of placing and accessing a set of shared objects that are read and written from the nodes in the network. These objects are, e.g., global variables in a parallel program, pages or cache lines in a virtual shared memory system, shared files in a distributed file system, or pages in the World Wide Web. A data management strategy consists of a placement strategy that maps the objects (possibly dynamically and with redundancy) to the nodes, and an access strategy that describes how reads and writes are handled by the system (including the routing). We investigate static and dynamic data management strategies.
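
To make the placement/access split concrete, here is a minimal sketch under assumed details (the function and class names are hypothetical, not from the paper): a static placement strategy hashes each object to a small redundant set of nodes, and the access strategy writes to all replicas and reads from the nearest one.

    import hashlib

    def replica_nodes(obj_id, nodes, redundancy=2):
        """Static placement: hash the object id to pick `redundancy` nodes."""
        h = int(hashlib.sha256(obj_id.encode()).hexdigest(), 16)
        return [nodes[(h + i) % len(nodes)] for i in range(redundancy)]

    class StaticDataManagement:
        def __init__(self, nodes, distance):
            self.nodes = nodes          # node ids of the sparse network
            self.distance = distance    # distance(u, v): e.g. hop count
            self.store = {}             # (node, obj) -> value

        def write(self, source, obj, value):
            # Access strategy for writes: update every replica.
            for node in replica_nodes(obj, self.nodes):
                self.store[(node, obj)] = value

        def read(self, source, obj):
            # Access strategy for reads: fetch from the nearest replica.
            replicas = replica_nodes(obj, self.nodes)
            nearest = min(replicas, key=lambda n: self.distance(source, n))
            return self.store.get((nearest, obj))

A dynamic strategy would additionally migrate or re-replicate objects as the observed read/write pattern changes; the static variant above fixes the mapping once.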


Symposium on the Theory of Computing | 1985

One, two, three . . . infinity: lower bounds for parallel computation

Faith E. Fich; F. Meyer auf der Heide; Prabhakar Ragde; Avi Wigderson

In this paper we compare the power of the two most commonly used concurrent-write models of parallel computation, the COMMON PRAM and the PRIORITY PRAM. These models differ in the way they resolve write conflicts. If several processors want to write into the same shared memory cell at the same time, in the COMMON model they have to write the same value. In the PRIORITY model, they may attempt to write different values; the processor with the smallest index succeeds. We consider PRAMs with n processors, each having arbitrary computational power. We provide the first separation results between these two models in two extreme cases: when the size m of the shared memory is small (m ≤ n^ε, ε < 1), and when it is infinite. In the case of small memory, the PRIORITY model can be faster than the COMMON model by a factor of Θ(log n), and this lower bound holds even if the COMMON model is probabilistic. In the case of infinite memory, the gap between the models can be a factor of Ω(log log log n). We develop new proof techniques to obtain these results. The technique used for the second lower bound is strong enough to establish the first tight time bounds for the PRIORITY model, which is the strongest parallel computation model. We show that finding the maximum of n numbers requires Θ(log log n) steps, generalizing a result of Valiant for parallel computation trees.
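
The two conflict-resolution rules can be stated in a few lines; the sketch below (illustrative, not from the paper) resolves one round of concurrent writes to a single shared memory cell under each model.

    def common_write(requests):
        """COMMON: all processors writing the cell must write the same value."""
        # requests: list of (processor_index, value) pairs for one cell
        values = {v for _, v in requests}
        if len(values) > 1:
            raise ValueError("illegal COMMON step: writers disagree")
        return values.pop() if values else None

    def priority_write(requests):
        """PRIORITY: the processor with the smallest index succeeds."""
        if not requests:
            return None
        _, value = min(requests)   # min by processor index
        return value

    # Processors 3 and 7 both write the same cell:
    print(priority_write([(7, "b"), (3, "a")]))   # -> "a"
    print(common_write([(7, "x"), (3, "x")]))     # -> "x"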


Theoretical Computer Science | 1985

Simulating probabilistic by deterministic algebraic computation trees

F. Meyer auf der Heide

A probabilistic algebraic computation tree (probabilistic ACT) which recognizes L ⊂ R^n in expected time T, and which gives the wrong answer with probability ≤ ε < 1/2, can be simulated by a deterministic ACT in O(T^2 n) steps. The same result holds for linear search algorithms (LSAs). The result for ACTs establishes a weaker version of results previously shown by the author for LSAs, namely that LSAs can only be slightly sped up by their nondeterministic versions. This paper shows that ACTs can only be slightly sped up by their probabilistic versions. The result for LSAs solves a problem posed by Snir (1983). He found an example where probabilistic LSAs are faster than deterministic ones and asked how large this gap can be.
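
Restated in symbols (a paraphrase of the abstract, not the paper's exact wording):

    \textbf{Theorem (restated).} Let $L \subseteq \mathbb{R}^n$ be recognized
    by a probabilistic ACT in expected time $T$ with error probability
    $\le \varepsilon < \tfrac{1}{2}$. Then $L$ is recognized by a
    deterministic ACT within $O(T^2 n)$ steps; the same holds for LSAs.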


Foundations of Computer Science | 1984

On The Limits To Speed Up Parallel Machines By Large Hardware And Unbounded Communication

F. Meyer auf der Heide; Rüdiger Reischuk

Lower bounds for sequential and parallel random access machines (RAMs, WRAMs) and distributed systems of RAMs (DRAMs) are proved. We show that, when p processors instead of one are available, the computation of certain functions cannot be sped up by a factor p but only by a factor O(log p). For DRAMs with a communication graph of degree c, a maximal speedup O(log c) can be achieved for these problems. We apply these results to testing the solvability of linear diophantine equations. This generalizes a lower bound of Yao for parallel computation trees. Improving results of Dobkin/Lipton and Klein/Meyer auf der Heide, we establish large lower bounds for the above problem on RAMs. Finally, we prove that at least log(n) + 1 steps are necessary for computing the sum of n integers by a WRAM, regardless of the number of processors and how write conflicts are resolved.
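
The log(n) + 1 bound for summation is matched, up to constants, by the standard doubling scheme; the following sketch shows only this illustrative upper-bound side (it is not from the paper):

    def parallel_sum(xs):
        """Tree-style summation: ceil(log2 n) rounds; within a round all
        pairwise additions are independent, hence parallelizable."""
        xs, rounds = list(xs), 0
        while len(xs) > 1:
            xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
                  for i in range(0, len(xs), 2)]
            rounds += 1
        return xs[0], rounds

    print(parallel_sum(range(8)))  # -> (28, 3): 8 integers in log2(8) = 3 rounds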


Symposium on the Theory of Computing | 1990

Not all keys can be hashed in constant time

Joseph Gil; F. Meyer auf der Heide; Avi Wigderson

A multitude of models and algorithms for hashing have been suggested and analyzed. However, almost all of them are specific in their assumptions and results. We present a simple new model that captures many natural (sequential and parallel) hashing algorithms. In a game against nature, the algorithm and coin-tosses cause the evolution of a random tree, whose size corresponds to space (hash table size), and two notions of depth correspond respectively to the largest probe sequences for insertion (parallel insertion time) and search of a key.
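
One concrete algorithm the model captures is plain chained hashing, where the longest chain is exactly the worst-case number of search probes; a toy experiment (names hypothetical, not from the paper):

    import random

    def longest_probe(n_keys, table_size, seed=0):
        """Throw n_keys random keys into a chained table and report the
        longest chain, i.e. the worst-case search probe sequence."""
        rng = random.Random(seed)
        chains = [0] * table_size
        for _ in range(n_keys):
            chains[rng.randrange(table_size)] += 1
        return max(chains)

    # With n keys and n slots, the longest chain is Theta(log n / log log n)
    # with high probability, a standard balls-into-bins fact.
    print(longest_probe(1024, 1024))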


Theoretical Computer Science | 1988

A tradeoff between search and update time for the implicit dictionary problem

Faith E. Fich; F. Meyer auf der Heide; Eli Upfal; Avi Wigderson

This paper proves a tradeoff between the time it takes to search for elements in an implicit dictionary and the time it takes to update the value of elements in specified locations of the dictionary. It essentially shows that if the update time is constant, then the search time is Ω(n^ε) for some constant ε > 0.
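
Written out (restating the abstract):

    \textbf{Tradeoff (restated).} For an implicit dictionary on $n$ elements,
    if each update takes $O(1)$ time, then a search requires
    $\Omega(n^{\varepsilon})$ time for some constant $\varepsilon > 0$.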


Symposium on the Theory of Computing | 1985

Fast algorithms for n-dimensional restrictions of hard problems

F. Meyer auf der Heide

Let M be a parallel RAM with p processors and arithmetic operations addition and subtraction recognizing L ⊂ N^n in t steps. Then L can be recognized by a (sequential!) linear search algorithm (LSA) in O(n^4(log(n) + t + log(p))) steps. Thus many n-dimensional restrictions of NP-complete problems (binary programming, traveling salesman problem, etc.) and even that of the uniquely optimum traveling salesman problem, which is Δ^P_2-complete, can be solved in polynomial time by an LSA. This result generalizes the construction of a polynomial LSA for the n-dimensional restriction of the knapsack problem previously shown by the author, and destroys the hope of proving nonpolynomial lower bounds for any problem which can be recognized by a PRAM as above with 2^poly(n) processors in poly(n) time.
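
In formula form, the simulation behind these consequences reads (restated from the abstract):

    \textbf{Theorem (restated).} If a PRAM with $p$ processors and operations
    $\{+,-\}$ recognizes $L \subseteq \mathbb{N}^n$ in $t$ steps, then a
    sequential LSA recognizes $L$ in $O\bigl(n^4(\log n + t + \log p)\bigr)$
    steps. Taking $p = 2^{\mathrm{poly}(n)}$ and $t = \mathrm{poly}(n)$ yields
    a polynomial-time LSA.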


Symposium on Theoretical Aspects of Computer Science | 1986

Speeding up random access machines by few processors

F. Meyer auf der Heide

Sequential and parallel random access machines (RAMs, PRAMs) with arithmetic operations + and − are considered. PRAMs may also multiply by constants. These machines work on integer inputs. It is shown that, in contrast to bit-oriented models such as Turing machines or log-cost RAMs, one can in many cases speed up RAMs by PRAMs with few processors. More specifically, a RAM without indirect addressing can be uniformly sped up by a PRAM with q processors by a factor of (log log q)^2 / log q. A similar result holds for nonuniform speedups of RAMs with indirect addressing. Furthermore, certain networks of RAMs (such as k-dimensional grids) with q processors can be sped up significantly with only q^(1+τ) processors. Nonuniformly, the above speedup can even be achieved for arbitrary bounded-degree networks (including powerful networks such as permutation networks or Cube-Connected Cycles), if only few input variables are allowed. The author previously showed that these speedups for RAMs are almost best possible.
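
As a bound, the uniform speedup says (restated from the abstract, with T denoting the RAM's running time):

    \textbf{Speedup (restated).} A RAM without indirect addressing running in
    time $T$ can be uniformly simulated by a $q$-processor PRAM in time
    $O\Bigl(T \cdot \frac{(\log\log q)^2}{\log q}\Bigr)$.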


Scandinavian Workshop on Algorithm Theory | 1988

Upper and lower bounds for the dictionary problem

Martin Dietzfelbinger; Kurt Mehlhorn; F. Meyer auf der Heide; Hans Rohnert

We give a randomized algorithm for the dictionary problem with O(1) worst case time for lookup and O(1) expected amortized time for insertion and deletion. We also prove an Ω(log n) lower bound on the amortized worst case time complexity of any deterministic algorithm based on hashing. Furthermore, if the worst case lookup time is restricted to k, then the lower bound becomes Ω(k · n^(1/k)).
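
The flavor of the randomized upper bound can be conveyed by a much-simplified sketch: hashing with a random linear function, re-drawn whenever a chain overflows. This is illustrative only; it gives expected O(1) lookups, whereas the paper's construction achieves O(1) worst-case lookups.

    import random

    class RehashingDict:
        """Toy randomized dictionary: chained hashing with a random linear
        hash function, rehashed into a larger table on chain overflow."""

        PRIME = (1 << 61) - 1  # modulus for the linear hash family

        def __init__(self, capacity=8):
            self._reset(capacity)

        def _reset(self, capacity):
            self.capacity = capacity
            self.a = random.randrange(1, self.PRIME)
            self.b = random.randrange(self.PRIME)
            self.buckets = [[] for _ in range(capacity)]

        def _slot(self, key):
            return ((self.a * hash(key) + self.b) % self.PRIME) % self.capacity

        def insert(self, key, value):
            bucket = self.buckets[self._slot(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))
            if len(bucket) > 4:  # overflow: new hash function, bigger table
                items = [kv for b in self.buckets for kv in b]
                self._reset(2 * self.capacity)
                for k, v in items:
                    self.buckets[self._slot(k)].append((k, v))

        def lookup(self, key):
            for k, v in self.buckets[self._slot(key)]:
                if k == key:
                    return v
            return None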


Symposium on Theoretical Aspects of Computer Science | 1986

A time-space tradeoff for element distinctness

Faith E. Fich; F. Meyer auf der Heide; Eli Upfal; Avi Wigderson

Collaboration


Dive into F. Meyer auf der Heide's collaborations.

Top Co-Authors

Avi Wigderson (Institute for Advanced Study)

Martin Dietzfelbinger (Technische Universität Ilmenau)

Joseph Gil (Technion – Israel Institute of Technology)

A. Karlin (Technical University of Dortmund)

H. Rohnert (Technical University of Dortmund)