Thomas Mølhave
Duke University
Publications
Featured research published by Thomas Mølhave.
workshop on algorithms and data structures | 2007
Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
In the faulty-memory RAM model, the content of memory cells can get corrupted at any time during the execution of an algorithm, and a constant number of uncorruptible registers are available. A resilient data structure in this model works correctly on the set of uncorrupted values. In this paper we introduce a resilient priority queue. The deletemin operation of a resilient priority queue returns either the minimum uncorrupted element or some corrupted element. Our resilient priority queue uses O(n) space to store n elements. Both insert and deletemin operations are performed in O(log n + d) amortized time, where d is the maximum number of corruptions tolerated. Our priority queue matches the performance of classical optimal priority queues in the RAM model when the number of corruptions tolerated is O(log n). We prove matching worst-case lower bounds for resilient priority queues storing only structural information in the uncorruptible registers between operations.
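The faulty-memory RAM model underlying this and the related papers below can be illustrated with a toy simulation (a hypothetical sketch, not the paper's data structure): a large unreliable array, an adversary with a corruption budget, and a handful of uncorruptible registers.

```python
class FaultyMemory:
    """Toy simulation of the faulty-memory RAM: a large unreliable
    array plus a constant number of uncorruptible registers."""

    def __init__(self, size, max_corruptions):
        self.cells = [0] * size          # unreliable memory
        self.registers = {}              # O(1) reliable cells
        self.budget = max_corruptions    # adversary's budget (delta)

    def write(self, i, value):
        self.cells[i] = value

    def read(self, i):
        return self.cells[i]             # may return a corrupted value

    def corrupt(self, i, value):
        # Adversary overwrites a cell; to the algorithm, corrupted
        # cells are indistinguishable from uncorrupted ones.
        if self.budget > 0:
            self.cells[i] = value
            self.budget -= 1

mem = FaultyMemory(8, max_corruptions=2)
for i, v in enumerate([5, 3, 9, 7, 1, 8, 6, 4]):
    mem.write(i, v)
mem.corrupt(2, -100)   # adversary plants a small fake key
smallest = min(mem.read(i) for i in range(8))
```

Here a naive scan for the minimum returns the planted value -100, which is why the paper's deletemin is only required to return either the minimum uncorrupted element or some corrupted element.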
advances in geographic information systems | 2010
Alex Beutel; Thomas Mølhave; Pankaj K. Agarwal
With modern LiDAR technology the amount of topographic data, in the form of massive point clouds, has increased dramatically. One of the most fundamental GIS tasks is to construct a grid digital elevation model (DEM) from these 3D point clouds. In this paper we present a simple yet very fast algorithm for constructing a grid DEM from massive point clouds using natural neighbor interpolation (NNI). We use a graphics processing unit (GPU) to significantly speed up the computation. To handle the large data sets and to deal with graphics hardware limitations clever blocking schemes are used to partition the point cloud. For example, using standard desktop computers and graphics hardware, we construct a high-resolution grid with 150 million cells from two billion points in less than thirty-seven minutes. This is about one-tenth of the time required for the same computer to perform a standard linear interpolation, which produces a much less smooth surface.
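The paper's GPU pipeline and natural neighbor interpolation are not reproduced here, but the blocking idea can be sketched in plain Python (a hypothetical illustration: nearest-neighbor interpolation stands in for NNI, and the tile size stands in for the graphics-hardware limit):

```python
def build_dem(points, width, height, block):
    """points: list of (x, y, z); returns a width x height grid.
    The domain is split into block x block tiles so each tile's points
    fit in (GPU) memory; each tile is then interpolated independently."""
    # Bucket points by tile. A real pipeline also adds a halo of nearby
    # points so cells near tile borders see their true neighbors.
    tiles = {}
    for x, y, z in points:
        tiles.setdefault((int(x) // block, int(y) // block), []).append((x, y, z))

    grid = [[None] * width for _ in range(height)]
    for gy in range(height):
        for gx in range(width):
            cand = tiles.get((gx // block, gy // block), [])
            if cand:  # nearest input point within this cell's tile
                x, y, z = min(cand, key=lambda p: (p[0] - gx) ** 2 + (p[1] - gy) ** 2)
                grid[gy][gx] = z
    return grid

pts = [(0.0, 0.0, 10.0), (3.0, 0.0, 20.0), (0.0, 3.0, 30.0), (3.0, 3.0, 40.0)]
dem = build_dem(pts, width=4, height=4, block=2)
```

Because every tile is processed independently, the per-tile work maps naturally onto a GPU kernel and the point cloud never has to reside in memory all at once.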
european symposium on algorithms | 2007
Gerth Stølting Brodal; Rolf Fagerberg; Irene Finocchi; Fabrizio Grandoni; Giuseppe F. Italiano; Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + δ) expected amortized cost per operation, and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n + δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n + δ + t) worst-case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.
advances in geographic information systems | 2013
Swaminathan Sankararaman; Pankaj K. Agarwal; Thomas Mølhave; Jiangwei Pan; Arnold P. Boedihardjo
A fundamental problem in analyzing trajectory data is to identify common patterns between pairs or among groups of trajectories. In this paper, we consider the problem of matching similar portions between a pair of trajectories, each observed as a sequence of points sampled from it. We present new measures of trajectory similarity --- both local and global --- between a pair of trajectories to distinguish between similar and dissimilar portions. We then use this model to perform segmentation of a set of trajectories into fragments, contiguous portions of trajectories shared by many of them. Our model for similarity is robust under noise and sampling rate variations. The model also yields a score which can be used to rank multiple pairs of trajectories according to similarity, e.g. in clustering applications. We present quadratic time algorithms to compute the similarity between trajectory pairs under our measures together with algorithms to identify fragments in a large set of trajectories efficiently using the similarity model. Finally, we present an extensive experimental study evaluating the effectiveness of our approach on real datasets, comparing it with earlier approaches. Our experiments show that our model for similarity is highly accurate in distinguishing similar and dissimilar portions as compared to earlier methods even with sparse sampling. Further, our segmentation algorithm is able to identify a small set of fragments capturing the common parts of trajectories in the dataset.
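The paper's local and global similarity measures are not specified in this abstract; as an illustration of a quadratic-time alignment between point-sampled trajectories, here is a standard dynamic-time-warping sketch (a stand-in, not the measure defined in the paper):

```python
import math

def dtw(a, b):
    """Quadratic-time dynamic-time-warping distance between two
    trajectories given as sequences of 2D points."""
    n, m = len(a), len(b)
    INF = float("inf")
    # d[i][j] = cost of best alignment of a[:i] with b[:j]
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a point of a
                                 d[i][j - 1],      # skip a point of b
                                 d[i - 1][j - 1])  # match the two points
    return d[n][m]

t1 = [(0, 0), (1, 0), (2, 0)]
t2 = [(0, 1), (1, 1), (2, 1)]
score = dtw(t1, t2)  # each point matched to the point directly above it
```

Like the measures in the paper, such an alignment score can rank trajectory pairs by similarity; unlike them, plain DTW is sensitive to sampling-rate variation, which is precisely what the paper's model is designed to be robust against.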
workshop on algorithms and data structures | 2009
Gerth Stølting Brodal; Allan Grønlund Jørgensen; Thomas Mølhave
Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where only cells in memory or only cells on disk are corruptible and separate randomized and deterministic dictionaries in the latter.
international symposium on algorithms and computation | 2009
Gerth Stølting Brodal; Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
The faulty memory RAM presented by Finocchi and Italiano [1] is a variant of the RAM model where the content of any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted cells. An upper bound, δ, on the number of corruptions and O(1) reliable memory cells are provided. In this paper we investigate the fundamental problem of counting in faulty memory. Keeping many reliable counters in the faulty memory is easily done by replicating the value of each counter Θ(δ) times and paying Θ(δ) time every time a counter is queried or incremented. In this paper we decrease the expensive increment cost to o(δ) and present upper and lower bound tradeoffs decreasing the increment time at the cost of the accuracy of the counters.
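The Θ(δ) replication baseline mentioned in the abstract can be sketched directly (this is the trivial scheme the paper improves on, not its o(δ) construction): keep 2δ + 1 copies of each counter, update them all, and take a majority vote on reads.

```python
from collections import Counter

DELTA = 2                    # upper bound on the number of corruptions
COPIES = 2 * DELTA + 1       # 2*delta + 1 copies guarantee a majority

def make_counter():
    return [0] * COPIES

def increment(c):
    for i in range(COPIES):  # Theta(delta) work per increment
        c[i] += 1

def query(c):
    # With at most DELTA corruptions, the true value still occupies a
    # majority of the copies, so the most common copy is correct.
    return Counter(c).most_common(1)[0][0]

c = make_counter()
for _ in range(10):
    increment(c)
c[0] = 999   # adversary corrupts one copy ...
c[3] = -5    # ... and another (DELTA = 2 corruptions in total)
value = query(c)
```

Since every increment touches all 2δ + 1 copies, increments cost Θ(δ); the paper's contribution is trading some counter accuracy for increments cheaper than that.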
international conference and exhibition on computing for geospatial research application | 2010
Thomas Mølhave; Pankaj K. Agarwal; Lars Arge; Morten Revsbæk
In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, some of the computations we present have never before been carried out by standard desktop computers on data sets of comparable size.
european symposium on algorithms | 2008
Lars Arge; Thomas Mølhave; Norbert Zeh
We present an optimal cache-oblivious algorithm for finding all intersections between a set of non-intersecting red segments and a set of non-intersecting blue segments in the plane. Our algorithm uses O((N/B) log_{M/B}(N/B) + T/B) memory transfers, where N is the total number of segments, M and B are the memory and block transfer sizes of any two consecutive levels of any multilevel memory hierarchy, and T is the number of intersections.
advances in geographic information systems | 2013
Niel Lebeck; Thomas Mølhave; Pankaj K. Agarwal
advances in geographic information systems | 2010
Lars Arge; Kasper Green Larsen; Thomas Mølhave; Freek van Walderveen