Sergio De Agostino
Sapienza University of Rome
Publications
Featured research published by Sergio De Agostino.
Parallel Computing | 1995
Sergio De Agostino
The LZ2 compression method is hardly parallelizable, since it is known to be P-complete. Despite this negative result, we show in this paper that the decoding process can be parallelized efficiently on an EREW PRAM model of computation with O(n/log n) processors and O(log^2 n) time, where n is the length of the output string.
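The sequential decoding process that the paper parallelizes can be illustrated with a minimal LZ2 (LZ78-style) decoder; the (index, character) pair format below is a common textbook convention, assumed here for illustration rather than taken from the paper.

```python
def lz78_decode(pairs):
    """Decode a sequence of (index, char) LZ78 pairs.

    Entry 0 is the empty phrase. Each pair emits dictionary
    entry `index` followed by `char`, and the concatenation
    becomes the next dictionary entry.
    """
    dictionary = [""]          # entry 0: empty phrase
    out = []
    for index, char in pairs:
        phrase = dictionary[index] + char
        dictionary.append(phrase)
        out.append(phrase)
    return "".join(out)


# "abab" parses into the phrases a, b, ab:
print(lz78_decode([(0, "a"), (0, "b"), (1, "b")]))  # → abab
```

The parallel algorithm exploits the fact that, once the pairs are known, the dictionary phrases form a tree that can be processed with pointer-doubling techniques rather than this left-to-right scan.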
International Journal of Foundations of Computer Science | 2006
Sergio De Agostino
Summary form only given. The unbounded version of the LZ2 compression method is P-complete; therefore, sublinear work space is unlikely when LZ2 compression is implemented, unless a deletion heuristic is applied to bound the dictionary. Several LZ2 compression heuristics have been designed and several deletion heuristics have been applied. In this work, we show experimental results on compression effectiveness for 2 ≤ p ≤ 6, using the AP compression heuristic. The relaxed LRU (RLRU) deletion heuristic turns out to be as good as LRU even when p is equal to 2, which shows that there should always be an improvement when the two values of p differ substantially. FREEZE, RESTART and SWAP are simpler heuristics that do not delete elements from the dictionary at each step. SWAP is the best among these simpler approaches, though its compression efficiency is worse than that of RLRU and LRU.
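The difference between deletion heuristics can be sketched with a bounded-dictionary LZ78-style encoder. The two strategies shown (FREEZE and RESTART) are simplified versions of the heuristics named in the abstract; the function name, pair format, and details are illustrative assumptions, not the paper's implementation.

```python
def lz78_encode_bounded(text, max_size, heuristic="FREEZE"):
    """LZ78-style encoder with a bounded dictionary (sketch).

    FREEZE: once the dictionary is full, stop adding entries.
    RESTART: once full, clear the dictionary and start over.
    Real LZ2 deletion heuristics (LRU, RLRU, SWAP) are more
    involved; this only illustrates the bounded-dictionary idea.
    """
    dictionary = {"": 0}
    pairs = []
    current = ""
    for char in text:
        if current + char in dictionary:
            current += char
            continue
        pairs.append((dictionary[current], char))
        if len(dictionary) < max_size:
            dictionary[current + char] = len(dictionary)
        elif heuristic == "RESTART":
            dictionary = {"": 0}   # drop all learned phrases
        current = ""
    if current:                    # flush a trailing match
        pairs.append((dictionary[current], None))
    return pairs
```

With a generous bound the parse is the ordinary LZ78 one; with a tight bound under FREEZE, later text is parsed against a frozen, possibly stale dictionary, which is exactly the compression-effectiveness loss the experiments measure.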
Parallel Processing Letters | 2004
Sergio De Agostino
We show nearly work-optimal parallel decoding algorithms that run on the EREW PRAM in O(log n) time with O(n/(log n)^{1/2}) processors for text compressed with the LZ1 and LZ2 methods, where n is the length of the output string. We also present pseudo work-optimal EREW PRAM decoders for finite window compression and LZ2 compression requiring logarithmic time with O(dn) work, where d is the window size and the alphabet size, respectively. Finally, we observe that EREW PRAM decoders requiring O(log n) time and O(n/log n) processors are possible under the non-conservative assumption that the computer word length is O(log^2 n) bits.
International Journal of Foundations of Computer Science | 2005
Luigi Cinque; Sergio De Agostino; Franco Liberati; Bart J. Westgeest
In this paper, we show a simple lossless compression heuristic for gray-scale images. The main advantage of this approach is that it provides a highly parallelizable compressor and decompressor. In fact, it can be applied independently to each block of 8×8 pixels, achieving 80 percent of the compression obtained with LOCO-I (JPEG-LS), the current lossless standard in low-complexity applications. The compressed form of each block employs a header and a fixed-length code, and the sequential implementations of the encoder and decoder are 50 to 60 percent faster than LOCO-I.
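The block-independent structure is what makes the scheme parallelizable: each 8×8 block can be handed to a different processor. A minimal sketch of the partitioning step, assuming image dimensions divisible by the block size (the function name and representation are illustrative, not the paper's code):

```python
def split_blocks(image, size=8):
    """Split a 2-D image (list of rows) into size×size blocks.

    Each block can then be encoded and decoded independently,
    which is what makes the compressor highly parallelizable.
    Assumes both dimensions are multiples of `size`.
    """
    rows, cols = len(image), len(image[0])
    blocks = []
    for r in range(0, rows, size):
        for c in range(0, cols, size):
            blocks.append([row[c:c + size] for row in image[r:r + size]])
    return blocks


# A 16×16 image yields four independent 8×8 blocks:
print(len(split_blocks([[0] * 16 for _ in range(16)])))  # → 4
```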
Data Compression, Communications and Processing | 2011
Sergio De Agostino
We present a survey of results concerning Lempel-Ziv data compression on parallel and distributed systems, starting from the theoretical approach to parallel time complexity and concluding with the practical goal of designing distributed algorithms with low communication cost. An extension by Storer to image compression is also discussed.
Information & Computation | 1997
Sergio De Agostino; Riccardo Silvestri
Sheinwald, Lempel, and Ziv (1995, Inform. and Comput. 116, 128–133) proved that the power of off-line coding is not useful if we want on-line decodable files, as far as asymptotical results are concerned. In this paper, we are concerned with the finite case and consider the notion of on-line decodable optimal parsing based on the parsing defined by the Ziv–Lempel (LZ2) compression algorithm. De Agostino and Storer (1996, Inform. Process. Lett. 59, 169–174) proved the NP-completeness of computing the optimal parsing and that a sublogarithmic factor approximation algorithm cannot be realized on-line. We show that the Ziv–Lempel algorithm and two widely used practical implementations produce an O(n^{1/4}) approximation of the optimal parsing, where n is the length of the string. By working with de Bruijn sequences, we show also infinite families of binary strings on which the approximation factor is Θ(n^{1/4}).
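The quantity being approximated is the number of phrases in the parse. A minimal greedy LZ2 (LZ78-style) parser, whose phrase count is the value compared against the optimal parsing (a sketch under the usual textbook formulation, not the paper's exact implementations):

```python
def greedy_lz2_parse(text):
    """Greedy LZ2 (LZ78) parsing: repeatedly extend the current
    match while it is still a known phrase; on the first miss,
    close the phrase and add it to the dictionary. The length of
    the returned list is the size of the greedy parse.
    """
    dictionary = {""}
    phrases = []
    current = ""
    for char in text:
        if current + char in dictionary:
            current += char
        else:
            dictionary.add(current + char)
            phrases.append(current + char)
            current = ""
    if current:                    # trailing partial phrase
        phrases.append(current)
    return phrases


print(greedy_lz2_parse("abab"))  # → ['a', 'b', 'ab']
```

The Θ(n^{1/4}) lower-bound families are strings on which this greedy phrase count is provably far from the smallest on-line decodable parse.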
Journal of Discrete Algorithms | 2015
Sergio De Agostino
The greedy approach to dictionary-based static text compression can be executed by a finite-state machine. When it is applied in parallel to different blocks of data independently, there is no lack of robustness even on standard large scale distributed systems with input files of arbitrary size. Beyond standard large scale, a negative effect on the compression effectiveness is caused by the very small size of the data blocks. A robust approach for extreme distributed systems is presented in this paper, where this problem is fixed by overlapping adjacent blocks and preprocessing the neighborhoods of the boundaries.
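The boundary fix can be sketched as block partitioning with overlap, so that each worker sees the neighborhood of its boundaries and can preprocess it independently. Parameter names and the slicing scheme below are illustrative assumptions, not the paper's algorithm.

```python
def split_with_overlap(data, block_size, overlap):
    """Partition `data` into blocks, each overlapping its right
    neighbor by `overlap` symbols. The overlap gives every worker
    the boundary neighborhood it needs, so blocks can still be
    compressed independently on an extreme-scale system.
    """
    blocks = []
    step = block_size - overlap
    for start in range(0, len(data), step):
        blocks.append(data[start:start + block_size])
        if start + block_size >= len(data):
            break
    return blocks


# Ten symbols, blocks of 4 overlapping by 1:
print(split_with_overlap("abcdefghij", 4, 1))  # → ['abcd', 'defg', 'ghij']
```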
Mathematics in Computer Science | 2013
Luigi Cinque; Sergio De Agostino; Luca Lombardi
We present a method for compressing binary images via monochromatic pattern substitution. Such method has no relevant loss of compression effectiveness if the image is partitioned into up to a thousand blocks, approximately, and each block is compressed independently. Therefore, it can be implemented on a distributed system with no interprocessor communication. In the theoretical context of unbounded parallelism, interprocessor communication is needed. Compression effectiveness has a bell-shaped behaviour which is again competitive with the sequential performance when the highest degree of parallelism is reached. Finally, the method has a speed-up if applied sequentially to an image partitioned into up to 256 blocks. It follows that such speed-up can be applied to a parallel implementation on a small scale system.
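A one-dimensional sketch of the monochromatic idea: encode a binary sequence as runs of identical bits. The paper substitutes two-dimensional monochromatic patterns within image blocks; this row-major run encoder only illustrates the principle and is not the paper's method.

```python
def mono_runs(bits):
    """Encode a binary sequence as (value, length) monochromatic
    runs. Applied independently per block, such an encoding needs
    no interprocessor communication, mirroring the distributed
    scheme described above.
    """
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                 # extend the current run
        runs.append((bits[i], j - i))
        i = j
    return runs


print(mono_runs([0, 0, 1, 1, 1, 0]))  # → [(0, 2), (1, 3), (0, 1)]
```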
Symposium on Theoretical Aspects of Computer Science | 1998
Sergio De Agostino; Riccardo Silvestri
We study the parallel complexity of a bounded size dictionary version (LRU deletion heuristic) of the LZ2 compression algorithm. The unbounded version was shown to be P-complete. When the size of the dictionary is O(log^k n), the algorithm is shown to be hard for the class of problems solvable simultaneously in polynomial time and O(log^k n) space (that is, SC^k). We also introduce a variation of this heuristic that turns out to be the first natural SC^k-complete problem (the original heuristic belongs to SC^{k+1}). By virtue of these results, we argue that there are no practical parallel algorithms for LZ2 compression with the LRU deletion heuristic or any other heuristic that deletes dictionary elements in a continuous way. For simpler heuristics (SWAP, RESTART, FREEZE), practical parallel algorithms are given.
Parallel Processing Letters | 1995
Giancarlo Bongiovanni; Pierluigi Crescenzi; Sergio De Agostino
We prove that the sequential approximation algorithms for the problems