Jesper Larsson Träff
Max Planck Society
Publication
Featured research published by Jesper Larsson Träff.
Journal of Parallel and Distributed Computing | 1998
Gerth Stølting Brodal; Jesper Larsson Träff; Christos D. Zaroliagis
We present a parallel priority queue that supports the following operations in constant time: parallel insertion of a sequence of elements ordered according to key, parallel decrease key for a sequence of elements ordered according to key, deletion of the minimum key element, and deletion of an arbitrary element. Our data structure is the first to support multi-insertion and multi-decrease key in constant time. The priority queue can be implemented on the EREW PRAM and can perform any sequence of n operations in O(n) time and O(m log n) work, m being the total number of keys inserted and/or updated. A main application is a parallel implementation of Dijkstra's algorithm for the single-source shortest path problem, which runs in O(n) time and O(m log n) work on a CREW PRAM on graphs with n vertices and m edges. This is a logarithmic factor improvement in the running time compared with previous approaches.
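The abstract above centers on the priority-queue operations that drive Dijkstra's algorithm (insert, decrease key, delete-min). As a point of reference for what the parallel data structure accelerates, here is a minimal sequential sketch using Python's `heapq`, where decrease-key is simulated by lazy re-insertion; the graph representation and function name are illustrative, not from the paper.

```python
import heapq

def dijkstra(adj, source):
    """Sequential Dijkstra's algorithm with a binary heap.

    adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest-path distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]          # entries: (tentative distance, vertex)
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:             # stale entry; a cheaper copy was settled
            continue
        done.add(u)
        for v, w in adj.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd      # "decrease key" via re-insertion
                heapq.heappush(heap, (nd, v))
    return dist
```

Each heap operation here costs O(log n); the paper's contribution is a PRAM data structure in which the operations needed per iteration take constant time.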
parallel computing | 1995
Jesper Larsson Träff
Abstract We experimentally compare two distributed algorithms based on Dijkstra's algorithm for the single-source shortest path problem. The algorithms are intended for asynchronous distributed systems with a small number of processors without shared memory, and specifically address the situation where communication is costly. Variations of the algorithms have been implemented in occam, and results from experiments on a 16-processor transputer system with randomly generated graphs of different types and densities with up to 20000 vertices and 250000 edges are reported. The distributed algorithms exploit the two obvious sources of parallelism in Dijkstra's algorithm. In the update-driven algorithm several vertices may be selected and scanned simultaneously. Since not all selected vertices can be guaranteed to be correct, the achievable speed-up depends on the problem instance, and no worst-case guarantee better than the sequential algorithm can be given. In contrast, the minimum-driven algorithm performs the scanning in parallel but from only one selected vertex at a time. Ideally this algorithm has linear speed-up for dense graphs. Implemented naively, both algorithms perform poorly, but they can be improved by relaxations and approximations which reduce communication volume. In all cases the theoretically less attractive update-driven algorithm turns out to perform much better than the apparently preferable minimum-driven algorithm.
european symposium on algorithms | 2003
Irit Katriel; Peter Sanders; Jesper Larsson Träff
We present a simple new (randomized) algorithm for computing minimum spanning trees that is more than two times faster than the best previously known algorithms (for dense, “difficult” inputs). It is of conceptual interest that the algorithm uses the property that the heaviest edge in a cycle can be discarded. Previously this has only been exploited in asymptotically optimal algorithms that are considered impractical. An additional advantage is that the algorithm can greatly profit from pipelined memory access. Hence, an implementation on a vector machine is up to 10 times faster than previous algorithms. We outline additional refinements for MSTs of implicitly defined graphs and the use of the central data structure for querying the heaviest edge between two nodes in the MST. The latter result is also interesting for sparse graphs.
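The cycle property mentioned above (the heaviest edge on any cycle belongs to no minimum spanning tree) is also what justifies the classical Kruskal filter: processing edges in increasing weight order, every edge that closes a cycle is the heaviest on that cycle and may be discarded. A minimal sketch, with an illustrative function name and union-find representation not taken from the paper:

```python
def mst_weight(n, edges):
    """Total MST weight via Kruskal's algorithm.

    Each edge rejected below closes a cycle in which it is the
    heaviest edge, so discarding it is safe by the cycle property.
    n: number of vertices (0..n-1); edges: list of (weight, u, v).
    """
    parent = list(range(n))

    def find(x):                  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total
```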
Journal of Parallel and Distributed Computing | 2000
Jesper Larsson Träff; Christos D. Zaroliagis
We present a simple parallel algorithm for the single-source shortest path problem in planar digraphs with nonnegative real edge weights. The algorithm runs on the EREW PRAM model of parallel computation in O((n^{2ε} + n^{1−ε}) log n) time, performing O(n^{1+ε} log n) work, for any 0 < ε < 1/2. The strength of the algorithm is its simplicity, making it easy to implement and presumably quite efficient in practice. The algorithm improves upon the work of all previous parallel algorithms. Our algorithm is based on a region decomposition of the input graph and uses a well-known parallel implementation of Dijkstra's algorithm. The logarithmic factor in both the work and the time can be eliminated by plugging in a less practical, sequential planar shortest path algorithm together with an improved parallel implementation of Dijkstra's algorithm.
international parallel processing symposium | 1997
Gerth Stølting Brodal; Jesper Larsson Träff; Christos D. Zaroliagis
Presents a parallel priority data structure that improves the running time of certain algorithms for problems that lack a fast and work-efficient parallel solution. As a main application, we give a parallel implementation of Dijkstra's (1959) algorithm which runs in O(n) time while performing O(m log n) work on a CREW PRAM. This is a logarithmic factor improvement for the running time compared with previous approaches. The main feature of our data structure is that the operations needed in each iteration of Dijkstra's algorithm can be supported in O(1) time.
international conference on algorithms and complexity | 1997
Jyrki Katajainen; Jesper Larsson Träff
The efficiency of mergesort programs is analysed under a simple unit-cost model. In our analysis the time performance of the sorting programs includes the costs of key comparisons, element moves and address calculations. The goal is to establish the best possible time-bound relative to the model when sorting n integers. By the well-known information-theoretic argument, n log₂ n − O(n) is a lower bound for the integer-sorting problem in our framework. New implementations for two-way and four-way bottom-up mergesort are given, the worst-case complexities of which are shown to be bounded by 5.5 n log₂ n + O(n) and 3.25 n log₂ n + O(n), respectively. The theoretical findings are backed up with a series of experiments which show the practical relevance of our analysis when implementing library routines for internal-memory computations.
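Two-way bottom-up mergesort, the baseline analysed above, repeatedly merges adjacent runs of width 1, 2, 4, … until one run remains. A minimal sketch for illustration (the paper's actual implementations are carefully tuned for moves and address calculations, which this plain Python version does not model):

```python
def bottom_up_mergesort(a):
    """Two-way bottom-up mergesort; returns a new sorted list.

    Passes merge adjacent runs of width 1, 2, 4, ... so no
    recursion is needed: O(n log n) comparisons overall.
    """
    src = list(a)
    n = len(src)
    width = 1
    while width < n:
        dst = []
        for lo in range(0, n, 2 * width):
            left = src[lo:lo + width]
            right = src[lo + width:lo + 2 * width]
            i = j = 0
            while i < len(left) and j < len(right):  # standard merge step
                if left[i] <= right[j]:
                    dst.append(left[i]); i += 1
                else:
                    dst.append(right[j]); j += 1
            dst.extend(left[i:])
            dst.extend(right[j:])
        src = dst
        width *= 2
    return src
```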
European Journal of Operational Research | 1996
Jesper Larsson Träff
Abstract This note presents a simple heuristic to speed up algorithms for the maximum flow problem that work by repeatedly finding blocking flows in layered (acyclic) networks. The heuristic assigns a capacity to each vertex of the layered network, which will be an upper bound on the amount of flow that can be transported through that vertex to the sink. This information can be utilized when constructing a blocking flow, since no vertex can ever accommodate more flow than its capacity. The static heuristic computes capacities in a layered network once, while a dynamic variant readjusts capacities during construction of the blocking flow. The effects of both static and dynamic heuristics are evaluated by a series of experiments with the wave algorithm of Tarjan. Although neither gives a theoretical improvement to the efficiency of the algorithm, the practical effects are in most cases worthwhile, and for certain types of networks quite dramatic.
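One plausible form of the static heuristic described above is a single backward pass over the layered network: bound the flow that can leave a vertex toward the sink by summing, over its outgoing edges, the smaller of the edge capacity and the head vertex's own bound. The function name, data layout, and exact recurrence below are assumptions for illustration, not taken from the note.

```python
import math

def vertex_capacities(layers, cap, sink):
    """Static vertex-capacity heuristic on a layered (acyclic) network.

    For each vertex v, compute an upper bound on the flow that can be
    pushed from v to the sink:
        vcap(v) = sum over edges (v, w) of min(cap[v][w], vcap(w)),
    evaluated layer by layer backwards from the sink (vcap(sink) = inf).

    layers: list of vertex lists, layers[0] = [source], layers[-1] = [sink].
    cap: dict v -> dict w -> capacity of edge (v, w).
    """
    vcap = {sink: math.inf}
    for layer in reversed(layers[:-1]):      # sink layer already done
        for v in layer:
            vcap[v] = sum(min(c, vcap.get(w, 0))
                          for w, c in cap.get(v, {}).items())
    return vcap
```

A blocking-flow routine can then refuse to route more than `vcap[v]` through any vertex v, pruning useless augmentation work.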
acm symposium on parallel algorithms and architectures | 1996
Christoph W. Kessler; Jesper Larsson Träff
A library, called PAD, of basic parallel algorithms and data structures for the PRAM is currently being implemented using the PRAM programming language Fork95. The main motivations of the PAD project are to study the PRAM as a practical programming model, and to provide an organized collection of basic PRAM algorithms for the SB-PRAM under completion at the University of Saarbrücken. We give a brief survey of Fork95 and describe the main components of PAD. Finally, we report on the status of the language and library and discuss further developments.
Untitled Event | 1997
Artur Czumaj; Paolo Ferragina; Leszek Gasieniec; S. Muthukrishnan; Jesper Larsson Träff
european conference on parallel processing | 2002
Peter Sanders; Jesper Larsson Träff