Gabriel Moruz
Goethe University Frankfurt
Publications
Featured research published by Gabriel Moruz.
Workshop on Algorithms and Data Structures | 2007
Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
In the faulty-memory RAM model, the content of memory cells can get corrupted at any time during the execution of an algorithm, and a constant number of uncorruptible registers are available. A resilient data structure in this model works correctly on the set of uncorrupted values. In this paper we introduce a resilient priority queue. The deletemin operation of a resilient priority queue returns either the minimum uncorrupted element or some corrupted element. Our resilient priority queue uses O(n) space to store n elements. Both insert and deletemin operations are performed in O(log n + d) amortized time, where d is the maximum number of corruptions tolerated. Our priority queue matches the performance of classical optimal priority queues in the RAM model when the number of corruptions tolerated is O(log n). We prove matching worst-case lower bounds for resilient priority queues storing only structural information in the uncorruptible registers between operations.
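A minimal toy sketch of the model and of this relaxed deletemin guarantee (hypothetical C++ illustration, not the paper's O(log n + d) data structure): element cells are treated as unreliable and may be overwritten by an adversary, only the element count is kept in a reliable register, and deletemin simply scans the stored values.

```cpp
// Toy illustration of the faulty-memory RAM and the relaxed deletemin
// semantics.  Hypothetical baseline with O(n) deletemin, not the paper's
// O(log n + d) priority queue.
#include <cstddef>
#include <iostream>
#include <vector>

class NaiveResilientPQ {
    std::vector<int> cells_;   // unreliable memory: any cell may be corrupted
    std::size_t count_ = 0;    // reliable register (only O(1) such words allowed)

public:
    void insert(int x) {
        cells_.push_back(x);   // the stored copy may later be corrupted
        ++count_;
    }

    // Returns either the minimum uncorrupted element or some corrupted
    // element -- exactly the guarantee a resilient priority queue provides.
    // Assumes the queue is non-empty.
    int deletemin() {
        std::size_t best = 0;
        for (std::size_t i = 1; i < count_; ++i)
            if (cells_[i] < cells_[best]) best = i;
        int result = cells_[best];
        cells_[best] = cells_[count_ - 1];
        cells_.pop_back();
        --count_;
        return result;
    }

    // Adversary hook: corrupt an arbitrary unreliable cell.
    void corrupt(std::size_t i, int garbage) {
        if (i < count_) cells_[i] = garbage;
    }
};

int main() {
    NaiveResilientPQ pq;
    for (int x : {5, 3, 8, 1}) pq.insert(x);
    pq.corrupt(3, -100);                  // the value 1 is overwritten
    std::cout << pq.deletemin() << "\n";  // may return a corrupted value (-100)
    std::cout << pq.deletemin() << "\n";  // then 3, the minimum uncorrupted survivor
    return 0;
}
```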
ACM Journal of Experimental Algorithms | 2008
Gerth Stølting Brodal; Rolf Fagerberg; Gabriel Moruz
Quicksort was first introduced in 1961 by Hoare. Many variants have been developed, the best of which are among the fastest generic sorting algorithms available, as testified by the choice of Quicksort as the default sorting algorithm in most programming libraries. Some sorting algorithms are adaptive, i.e., they have a complexity analysis that is better for inputs that are nearly sorted according to some specified measure of presortedness. Quicksort is not among these, as it uses Ω(n log n) comparisons even for sorted inputs. However, in this paper we demonstrate empirically that the actual running time of Quicksort is adaptive with respect to the presortedness measure Inv. Differences close to a factor of two are observed between instances with low and high Inv value. We then show that for the randomized version of Quicksort, the number of element swaps performed is provably adaptive with respect to the measure Inv. More precisely, we prove that randomized Quicksort performs expected O(n(1 + log(1 + Inv/n))) element swaps, where Inv denotes the number of inversions in the input sequence. This result provides a theoretical explanation for the observed behavior and gives new insights into the behavior of Quicksort. We also give some empirical results on the adaptive behavior of Heapsort and Mergesort.
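A small experiment in this spirit (hypothetical sketch, not the paper's benchmark code) counts the element swaps of randomized Quicksort with the classical two-pointer partition, which swaps only misplaced pairs, and compares the count with the number of inversions Inv of the input; as Inv grows, the swap count should grow roughly like n(1 + log(1 + Inv/n)).

```cpp
// Count the element swaps of randomized Quicksort (Hoare-style partition)
// and compare them with the number of inversions Inv of the input.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

static std::uint64_t g_swaps = 0;

static int partition(std::vector<int>& a, int lo, int hi, std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(lo, hi);
    std::swap(a[lo], a[dist(rng)]);      // move a random pivot to the front
    // (the pivot-placement swap, one per call, is not counted; O(n) overall)
    const int pivot = a[lo];
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { ++i; } while (a[i] < pivot);
        do { --j; } while (a[j] > pivot);
        if (i >= j) return j;
        std::swap(a[i], a[j]);           // only misplaced pairs are swapped
        ++g_swaps;
    }
}

static void quicksort(std::vector<int>& a, int lo, int hi, std::mt19937& rng) {
    if (lo >= hi) return;
    const int p = partition(a, lo, hi, rng);
    quicksort(a, lo, p, rng);
    quicksort(a, p + 1, hi, rng);
}

// Simple O(n^2) inversion count; adequate for a small demonstration.
static std::uint64_t inversions(const std::vector<int>& a) {
    std::uint64_t inv = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[i] > a[j]) ++inv;
    return inv;
}

int main() {
    std::mt19937 rng(42);
    const int n = 2000;
    for (int transpositions : {0, 10, 100, 1000}) {
        std::vector<int> a(n);
        for (int i = 0; i < n; ++i) a[i] = i;          // sorted input, Inv = 0
        std::uniform_int_distribution<int> pos(0, n - 1);
        for (int k = 0; k < transpositions; ++k)       // perturb: raises Inv
            std::swap(a[pos(rng)], a[pos(rng)]);
        const std::uint64_t inv = inversions(a);
        g_swaps = 0;
        quicksort(a, 0, n - 1, rng);
        std::cout << "Inv = " << inv << "  swaps = " << g_swaps << "\n";
    }
    return 0;
}
```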
European Symposium on Algorithms | 2007
Gerth Stølting Brodal; Rolf Fagerberg; Irene Finocchi; Fabrizio Grandoni; Giuseppe F. Italiano; Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
We investigate the problem of computing in the presence of faults that may arbitrarily (i.e., adversarially) corrupt memory locations. In the faulty-memory model, any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted ones. An upper bound δ on the number of corruptions and O(1) reliable memory cells are provided. In this model, we focus on the design of resilient dictionaries, i.e., dictionaries which are able to operate correctly (at least) on the set of uncorrupted keys. We first present a simple resilient dynamic search tree, based on random sampling, with O(log n + δ) expected amortized cost per operation and O(n) space complexity. We then propose an optimal deterministic static dictionary supporting searches in Θ(log n + δ) time in the worst case, and we show how to use it in a dynamic setting in order to support updates in O(log n + δ) amortized time. Our dynamic dictionary also supports range queries in O(log n + δ + t) worst-case time, where t is the size of the output. Finally, we show that every resilient search tree (with some reasonable properties) must take Ω(log n + δ) worst-case time per search.
International Colloquium on Automata, Languages and Programming | 2005
Gerth Stølting Brodal; Rolf Fagerberg; Gabriel Moruz
Two new adaptive sorting algorithms are introduced which perform an optimal number of comparisons with respect to the number of inversions in the input. The first algorithm is based on a new linear time reduction to (non-adaptive) sorting. The second algorithm is based on a new division protocol for the GenericSort algorithm by Estivill-Castro and Wood. From both algorithms we derive I/O-optimal cache-aware and cache-oblivious adaptive sorting algorithms. These are the first I/O-optimal adaptive sorting algorithms.
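For comparison with the optimal O(n(1 + log(1 + Inv/n))) comparison bound, the textbook example of an Inv-adaptive sort is plain insertion sort, which uses O(n + Inv) comparisons: cheap on nearly sorted inputs, but far from optimal when Inv is large. A minimal sketch (illustrative baseline only, not one of the algorithms from the paper):

```cpp
// Baseline illustration of Inv-adaptivity: straight insertion sort uses
// O(n + Inv) comparisons, so its cost degrades gracefully as the number of
// inversions Inv grows, but it is not comparison-optimal for large Inv.
#include <cstdint>
#include <iostream>
#include <vector>

static std::uint64_t insertion_sort(std::vector<int>& a) {
    std::uint64_t comparisons = 0;
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Every comparison that moves an element repairs one inversion,
        // hence the total number of comparisons is O(n + Inv).
        while (j > 0 && (++comparisons, a[j - 1] > key)) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
    return comparisons;
}

int main() {
    std::vector<int> nearly_sorted = {1, 2, 4, 3, 5, 6, 8, 7}; // Inv = 2
    std::vector<int> reversed      = {8, 7, 6, 5, 4, 3, 2, 1}; // Inv = 28
    std::cout << "nearly sorted: " << insertion_sort(nearly_sorted)
              << " comparisons\n";
    std::cout << "reversed:      " << insertion_sort(reversed)
              << " comparisons\n";
    return 0;
}
```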
Symposium on Experimental and Efficient Algorithms | 2009
Deepak Ajwani; Andreas Beckmann; Riko Jacob; Ulrich Meyer; Gabriel Moruz
Flash memory-based solid-state disks are fast becoming the dominant form of end-user storage devices, partly even replacing traditional hard disks. Existing two-level memory-hierarchy models fail to realize the full potential of flash-based storage devices. We propose two new computation models, the general flash model and the unit-cost model, for memory hierarchies involving these devices. Our models are simple enough for meaningful algorithm design and analysis. In particular, we show that a broad range of existing external-memory algorithms and data structures based on the merging paradigm can be adapted efficiently to the unit-cost model. Our experiments show that the theoretical analysis of algorithms in our models corresponds to the empirical behavior of algorithms when using solid-state disks as external memory.
Theoretical Computer Science | 2010
Camil Demetrescu; Bruno Escoffier; Gabriel Moruz; Andrea Ribichini
In this paper we show how parallel algorithms can be turned into efficient streaming algorithms for several classical combinatorial problems in the W-Stream model. In this model, at each pass one input stream is read, one output stream is written, and data items have to be processed using limited space; streams are pipelined in such a way that the output stream produced at pass i is given as the input stream at pass i+1. We first introduce a simulation technique that allows turning efficient PRAM algorithms into optimal W-Stream ones for many classical combinatorial problems, including list ranking and Euler tour of a tree. For other problems, most notably graph problems, however, this technique leads to suboptimal algorithms. To overcome this difficulty we introduce the RelaxedPRAM (RPRAM) computational model as an intermediate model between PRAM and W-Stream. RPRAM allows every processor to access a non-constant number of memory cells per parallel round, albeit with some restrictions. The RPRAM model, while being more powerful than the PRAM model, can be simulated in W-Stream within the same asymptotic bounds. The extra power provided by RPRAM allows us in many cases to substantially reduce the number of processors while maintaining the same number of parallel rounds, leading to more efficient W-Stream simulations of parallel algorithms. Our RPRAM technique gives new insights into developing streaming algorithms and yields efficient algorithms for several classical problems in this model, including sorting, connectivity, minimum spanning tree, biconnected components, and maximal independent set. In addition to allowing smooth space-passes tradeoffs, our algorithms are shown to be optimal up to polylogarithmic factors by means of almost-tight lower bounds in W-Stream based on communication complexity.
Lecture Notes in Computer Science | 2002
Gabriel Ciobanu; Daniel Dumitriu; Dorin Huzum; Gabriel Moruz; Bogdan Tanas
We present a new version of P systems called Client-Server P Systems (CSPS). The client membranes are characterized by their states; the server membrane stores the states of the clients and triggers the corresponding interaction rules. We show that CSPS have the same expressive power as Turing machines. CSPS is used to model various molecular processes in which interaction and state transitions are causally linked. Signaling pathways and T cell activation are described using a CSPS software environment called MOlNET (MOlecular NETworks). MOlNET can describe the dynamics of molecular interactions, covering both qualitative and quantitative aspects, and can simulate the signaling pathways that tune the activation thresholds for T cells.
International Symposium on Algorithms and Computation | 2009
Gerth Stølting Brodal; Allan Grønlund Jørgensen; Gabriel Moruz; Thomas Mølhave
The faulty-memory RAM presented by Finocchi and Italiano [1] is a variant of the RAM model where the content of any memory cell can get corrupted at any time, and corrupted cells cannot be distinguished from uncorrupted cells. An upper bound, δ, on the number of corruptions and O(1) reliable memory cells are provided. In this paper we investigate the fundamental problem of counting in faulty memory. Keeping many reliable counters in the faulty memory is easily done by replicating the value of each counter Θ(δ) times and paying Θ(δ) time every time a counter is queried or incremented. In this paper we decrease the expensive increment cost to o(δ) and present upper and lower bound tradeoffs decreasing the increment time at the cost of the accuracy of the counters.
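The Θ(δ) replication baseline described above is easy to make concrete: keep 2δ + 1 copies of the counter in faulty memory, rewrite every copy on an increment, and answer a query with the median copy, which is guaranteed correct as long as at most δ copies are corrupted. A minimal sketch (hypothetical code; the paper's contribution is beating this baseline's increment cost):

```cpp
// Θ(δ)-per-operation replicated counter: the baseline the paper improves on.
// With 2δ+1 copies and at most δ corruptions, the median copy is always an
// uncorrupted (hence correct) value.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

class ReplicatedCounter {
    std::vector<long long> copies_;   // 2δ+1 copies, all in unreliable memory
public:
    explicit ReplicatedCounter(std::size_t delta)
        : copies_(2 * delta + 1, 0) {}

    // Θ(δ) time: every copy is rewritten.
    void increment() {
        for (long long& c : copies_) ++c;
    }

    // Θ(δ) time: return the median of the copies.  At most δ copies can be
    // corrupted, so at least δ+1 copies hold the true value and the median
    // is always one of them.
    long long query() const {
        std::vector<long long> tmp = copies_;
        std::nth_element(tmp.begin(), tmp.begin() + tmp.size() / 2, tmp.end());
        return tmp[tmp.size() / 2];
    }

    // Adversary hook for experiments: corrupt one copy.
    void corrupt(std::size_t i, long long garbage) {
        if (i < copies_.size()) copies_[i] = garbage;
    }
};

int main() {
    const std::size_t delta = 2;      // assumed bound on corruptions
    ReplicatedCounter c(delta);
    for (int i = 0; i < 10; ++i) c.increment();
    c.corrupt(0, 9999);               // two corruptions, within the bound
    c.corrupt(3, -5);
    std::cout << c.query() << "\n";   // prints 10 despite the corruptions
    return 0;
}
```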
Frontiers of Computer Science in China | 2012
Fabian Gieseke; Gabriel Moruz; Jan Vahrenhold
We propose a k-d tree variant that is resilient to a prescribed number of memory corruptions while still using only linear space. While the data structure is of independent interest, we demonstrate its use in the context of high-radiation environments. Our experimental evaluation demonstrates that the resulting approach leads to a significantly higher resiliency rate compared to previous results. This is especially the case for large-scale multi-spectral satellite data, which renders the proposed approach well-suited to operate aboard today’s satellites.
Workshop on Algorithms and Data Structures | 2005
Gerth Stølting Brodal; Gabriel Moruz
Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dn log n) comparisons performs Ω(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1 + log(1 + Inv/n))) comparisons must perform Ω(n log_d(1 + Inv/n)) branch mispredictions, where Inv is the number of inversions in the input. This tradeoff can be achieved by GenericSort by Estivill-Castro and Wood by adopting a multiway division protocol and a multiway merging algorithm with a low number of branch mispredictions.
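The flavor of such low-misprediction mergers can be seen in a two-way merge step written without a data-dependent branch: the comparison result is used as an array index and pointer increment instead of steering a branch the predictor has to guess. This is a simplified illustration only; the paper relies on a multiway merger and division protocol.

```cpp
// Simplified illustration of a branch-avoiding merge step: the outcome of the
// comparison advances the input indices arithmetically instead of steering a
// hard-to-predict branch.  (Two-way only; the paper uses a multiway merger.)
#include <cstddef>
#include <iostream>
#include <vector>

// Merge two sorted runs a and b; inside the loop, no branch direction depends
// on the comparison a[i] < b[j].
std::vector<int> branchless_merge(const std::vector<int>& a,
                                  const std::vector<int>& b) {
    std::vector<int> out;
    out.reserve(a.size() + b.size());
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        const bool take_a = a[i] < b[j];            // 0 or 1
        const int candidates[2] = { b[j], a[i] };   // index by the comparison
        out.push_back(candidates[take_a]);
        i += take_a;                                // advance exactly one run
        j += !take_a;
    }
    out.insert(out.end(), a.begin() + i, a.end());  // append the leftovers
    out.insert(out.end(), b.begin() + j, b.end());
    return out;
}

int main() {
    std::vector<int> merged = branchless_merge({1, 4, 5, 9}, {2, 3, 7, 8});
    for (int x : merged) std::cout << x << ' ';     // 1 2 3 4 5 7 8 9
    std::cout << '\n';
    return 0;
}
```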