Publication


Featured research published by Giovanni Manzini.


Foundations of Computer Science | 2000

Opportunistic data structures with applications

Paolo Ferragina; Giovanni Manzini

We address the issue of compressing and indexing data. We devise a data structure whose space occupancy is a function of the entropy of the underlying data set. We call the data structure opportunistic since its space occupancy decreases when the input is compressible, and this space reduction is achieved at no significant slowdown in query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T[1,u] is stored using O(H_k(T)) + o(1) bits per input symbol in the worst case, where H_k(T) is the kth order empirical entropy of T (the bound holds for any fixed k). Given an arbitrary string P[1,p], the opportunistic data structure allows searching for the occurrences of P in T in O(p + occ log^ε u) time (for any fixed ε > 0). If the data are incompressible we achieve the best space bound currently known (Grossi and Vitter, 2000); on compressible data our solution improves the succinct suffix array of Grossi and Vitter (2000) and the classical suffix tree and suffix array data structures in space, in query time, or both. We also study our opportunistic data structure in a dynamic setting and devise a variant achieving effective search and update time bounds. Finally, we show how to plug our opportunistic data structure into the Glimpse tool (Manber and Wu, 1994). The result is an indexing tool that achieves sublinear space and sublinear query time complexity.
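
The core query mechanism of this opportunistic data structure is backward search over the Burrows-Wheeler Transform (the structure later became known as the FM-index). Below is a minimal Python sketch of that mechanism under simplifying assumptions: the BWT is built by naively sorting rotations and the rank counts are computed by scanning, so this version takes O(p·u) time per query, whereas the paper replaces the scans with compressed rank directories to reach the stated bounds.

```python
def bwt(text, sentinel="\0"):
    """Burrows-Wheeler Transform via sorting all rotations (illustrative only)."""
    t = text + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(pattern, text):
    """Count occurrences of pattern in text by backward search on bwt(text)."""
    L = bwt(text)
    # C[c] = number of characters in L strictly smaller than c.
    sorted_L = sorted(L)
    C = {c: sorted_L.index(c) for c in set(L)}
    lo, hi = 0, len(L)               # current suffix-array interval [lo, hi)
    for c in reversed(pattern):      # extend the match one symbol at a time
        if c not in C:
            return 0
        lo = C[c] + L[:lo].count(c)  # rank of c before position lo
        hi = C[c] + L[:hi].count(c)  # rank of c before position hi
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("ss", "mississippi"))  # -> 2
```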


Journal of the ACM | 2005

Indexing compressed text

Paolo Ferragina; Giovanni Manzini

We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form. Our first compressed data structure retrieves the occ occurrences of a pattern P[1,p] within a text T[1,n] in O(p + occ log^(1+ε) n) time for any chosen ε, 0 < ε < 1. This data structure uses at most 5nH_k(T) + o(n) bits of storage, where H_k(T) is the kth order empirical entropy of T. The space usage is Θ(n) bits in the worst case and o(n) bits for compressible texts. This data structure exploits the relationship between suffix arrays and the Burrows-Wheeler Transform, and can be regarded as a compressed suffix array. Our second compressed data structure achieves O(p + occ) query time using O(nH_k(T) log^ε n) + o(n) bits of storage for any chosen ε, 0 < ε < 1. Therefore, it provides optimal output-sensitive query time using o(n log n) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows-Wheeler Transform and the LZ78 algorithm.
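
The "relationship between suffix arrays and the Burrows-Wheeler Transform" mentioned above is simply that the BWT lists, in suffix-array order, the character preceding each suffix. A small illustrative Python sketch follows (the suffix array is built naively by sorting; the paper's structures store neither T nor its suffix array explicitly):

```python
def suffix_array(t):
    """Starting positions of the suffixes of t in sorted order (naive)."""
    return sorted(range(len(t)), key=lambda i: t[i:])

def bwt_from_sa(t):
    t = t + "\0"                          # unique terminator, smallest symbol
    sa = suffix_array(t)
    return "".join(t[i - 1] for i in sa)  # t[-1] wraps around to the terminator

print(bwt_from_sa("mississippi"))  # -> "ipssm\0pissii"
```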


ACM Transactions on Algorithms | 2007

Compressed representations of sequences and full-text indexes

Paolo Ferragina; Giovanni Manzini; Veli Mäkinen; Gonzalo Navarro

Given a sequence S = s_1 s_2 … s_n of integers smaller than r = O(polylog(n)), we show how S can be represented using nH_0(S) + o(n) bits, so that we can retrieve any s_q, as well as answer rank and select queries on S, in constant time. H_0(S) is the zero-order empirical entropy of S, and nH_0(S) provides an information-theoretic lower bound on the bit storage of any sequence S via a fixed encoding of its symbols. This extends previous results on binary sequences, and improves previous results on general sequences where those queries are answered in O(log r) time. For larger r, we can still represent S in nH_0(S) + o(n log r) bits and answer queries in O(log r / log log n) time. Another contribution of this article is to show how to combine our compressed representation of integer sequences with a compression boosting technique to design compressed full-text indexes that scale well with the size of the input alphabet Σ. Specifically, we design a variant of the FM-index that indexes a string T[1,n] within nH_k(T) + o(n) bits of storage, where H_k(T) is the kth-order empirical entropy of T. This space bound holds simultaneously for all k ≤ α log_|Σ| n, constant 0 < α < 1, and |Σ| = O(polylog(n)). This index counts the occurrences of an arbitrary pattern P[1,p] as a substring of T in O(p) time; it locates each pattern occurrence in O(log^(1+ε) n) time for any constant 0 < ε < 1; and reports a text substring of length ℓ in O(ℓ + log^(1+ε) n) time. Compared to all previous work, our index is the first that removes the alphabet-size dependence from all query times; in particular, counting time is linear in the pattern length. Still, our index uses essentially the same space as the kth-order entropy of the text T, which is the best space bound obtained in previous work. We can also handle larger alphabets of size |Σ| = O(n^β), for any 0 < β < 1, by paying o(n log |Σ|) extra space and multiplying all query times by O(log |Σ| / log log n).
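
As an illustration of the queries and the space benchmark involved, here is a naive Python sketch of rank, select, and the zero-order empirical entropy H_0(S). The sketch answers each query in O(n) time on an uncompressed sequence, while the paper's representation answers both in constant time within nH_0(S) + o(n) bits.

```python
from math import log2

def h0(S):
    """Zero-order empirical entropy: H_0(S) = sum_c (n_c/n) * log2(n/n_c)."""
    n = len(S)
    return sum((S.count(c) / n) * log2(n / S.count(c)) for c in set(S))

def rank(S, c, i):
    """Number of occurrences of symbol c in S[0:i]."""
    return S[:i].count(c)

def select(S, c, j):
    """Position of the j-th occurrence (1-based) of c in S, or -1."""
    seen = 0
    for pos, sym in enumerate(S):
        if sym == c:
            seen += 1
            if seen == j:
                return pos
    return -1

S = [1, 0, 2, 1, 1, 0, 2]
print(h0(S))            # entropy in bits per symbol
print(rank(S, 1, 5))    # -> 3 (three 1s among the first five symbols)
print(select(S, 2, 2))  # -> 6 (second occurrence of symbol 2)
```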


Journal of the ACM | 2001

An analysis of the Burrows-Wheeler transform

Giovanni Manzini

The Burrows-Wheeler Transform (also known as Block-Sorting) is at the base of compression algorithms that are the state of the art in lossless data compression. In this paper, we analyze two algorithms that use this technique. The first one is the original algorithm described by Burrows and Wheeler, which, despite its simplicity, outperforms the Gzip compressor. The second one uses an additional run-length encoding step to improve compression. We prove that the compression ratio of both algorithms can be bounded in terms of the kth order empirical entropy of the input string for any k ≥ 0. We make no assumptions on the input, and we obtain bounds which hold in the worst case, that is, for every possible input string. All previous results for Block-Sorting algorithms were concerned with the average compression ratio and were established assuming that the input comes from a finite-order Markov source.
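
For concreteness, here is a toy Python sketch of the front end of the analyzed pipeline: the BWT (computed naively by sorting rotations) followed by move-to-front coding. The full schemes then entropy-code the MTF output, and the second analyzed variant adds the run-length encoding step mentioned above; those stages are omitted here.

```python
def bwt(text):
    t = text + "\0"
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s):
    """Encode each symbol by its position in a self-adjusting symbol list."""
    alphabet = sorted(set(s))
    out = []
    for c in s:
        i = alphabet.index(c)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))  # move c to the front
    return out

transformed = bwt("mississippi")
print(move_to_front(transformed))  # long runs in the BWT become runs of 0s
```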


Algorithmica | 2004

Engineering a Lightweight Suffix Array Construction Algorithm

Giovanni Manzini; Paolo Ferragina

In this paper we describe a new algorithm for building the suffix array of a string. This task is equivalent to the problem of lexicographically sorting all the suffixes of the input string. Our algorithm is based on a new approach called deep-shallow sorting: we use a “shallow” sorter for the suffixes with a short common prefix, and a “deep” sorter for the suffixes with a long common prefix. All the known algorithms for building the suffix array either require a large amount of space or are inefficient when the input string contains many repeated substrings. Our algorithm has been designed to overcome this dichotomy. Our algorithm is “lightweight” in the sense that it uses very small space in addition to the space required by the suffix array itself. At the same time our algorithm is fast even when the input contains many repetitions: this has been shown by extensive experiments with inputs of size up to 110 Mb. The source code of our algorithm, as well as a C library providing a simple API, is available under the GNU GPL.
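
As a baseline for the task being solved, the following Python sketch builds a suffix array by directly sorting the suffixes. It can take O(n^2 log n) time on repetitive inputs and materializes suffix copies during comparison, which is exactly the kind of inefficiency the deep-shallow algorithm is engineered to avoid.

```python
def suffix_array(s):
    """Return the suffix array: starting positions of suffixes in sorted order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

print(suffix_array("banana"))  # -> [5, 3, 1, 0, 4, 2]
```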


Foundations of Computer Science | 2005

Structuring labeled trees for optimal succinctness, and beyond

Paolo Ferragina; Fabrizio Luccio; Giovanni Manzini; S. Muthukrishnan

Consider an ordered, static tree T on t nodes where each node has a label from an alphabet Σ. The tree T may be of arbitrary degree and of arbitrary shape. Say we wish to support basic navigational operations such as finding the parent of a node u, the ith child of u, and any child of u with label α. In a seminal work over fifteen years ago, Jacobson (1989) observed that pointer-based tree representations are wasteful in space and introduced the notion of succinct data structures. He studied the special case of unlabeled trees and presented a succinct data structure of 2t + o(t) bits supporting navigational operations in O(1) time. The space used is asymptotically optimal with respect to the information-theoretic lower bound averaged over all trees. This led to a slew of results on succinct data structures for arrays, trees, strings and multisets. Still, for the fundamental problem of structuring labeled trees succinctly, few results, if any, exist, even though labeled trees arise frequently in practice, e.g., in markup text (XML) or in augmented data structures. We present a novel approach to the problem of succinct manipulation of labeled trees by designing what we call the xbw transform of the tree, in the spirit of the well-known Burrows-Wheeler transform for strings. The xbw transform uses path-sorting and grouping to linearize the labeled tree T into two coordinated arrays, one capturing the structure and the other the labels. Using the properties of the xbw transform, we (i) derive the first known (near-)optimal results for succinct representation of labeled trees with O(1) time for navigation operations, (ii) optimally support the powerful subpath search operation for the first time, and (iii) introduce a notion of tree entropy and present linear time algorithms for compressing a given labeled tree up to its entropy, beyond the information-theoretic lower bound averaged over all tree inputs. Our xbw transform is simple and likely to spur new results in the theory of tree compression and indexing, and may have some practical impact in XML data processing.
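
A toy Python sketch of the xbw transform may help fix ideas: each node is keyed by its upward path (the labels from its parent up to the root), nodes are stably sorted by these keys, and two coordinated arrays are emitted. The example tree and labels below are hypothetical, and real constructions run in linear time without ever materializing the path strings.

```python
def xbw(tree, root, labels):
    """tree: node -> ordered list of children; labels: node -> label string."""
    pi, parent, order = {}, {}, []

    def visit(u, path):
        pi[u] = path                 # upward path: parent's label first, root's last
        order.append(u)              # pre-order keeps siblings adjacent
        for v in tree.get(u, []):
            parent[v] = u
            visit(v, labels[u] + path)

    visit(root, "")
    nodes = sorted(order, key=lambda u: pi[u])       # stable path-sorting
    s_alpha = [labels[u] for u in nodes]             # the labels array
    s_last = [u == root or u == tree[parent[u]][-1]  # the structure array:
              for u in nodes]                        # "last child of its parent?"
    return s_last, s_alpha, [pi[u] for u in nodes]   # pi returned only for clarity

tree = {"r": ["a", "b"], "a": ["c"], "b": []}
labels = {"r": "r", "a": "a", "b": "b", "c": "c"}
print(xbw(tree, "r", labels))
# -> ([True, True, False, True], ['r', 'c', 'a', 'b'], ['', 'ar', 'r', 'r'])
```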


Journal of the ACM | 2005

Boosting textual compression in optimal linear time

Paolo Ferragina; Raffaele Giancarlo; Giovanni Manzini; Marinella Sciortino

We provide a general boosting technique for textual data compression. Qualitatively, it takes a good compression algorithm and turns it into an algorithm with a better compression performance guarantee. It displays the following remarkable properties: (a) it can turn any memoryless compressor into a compression algorithm that uses the “best possible” contexts; (b) it is very simple and optimal in terms of time; and (c) it admits a decompression algorithm again optimal in time. To the best of our knowledge, this is the first boosting technique displaying these properties. Technically, our boosting technique builds upon three main ingredients: the Burrows-Wheeler Transform, the suffix tree data structure, and a greedy algorithm to process them. Specifically, we show that there exists a proper partition of the Burrows-Wheeler Transform of a string s that exhibits a deep combinatorial relation with the kth order entropy of s. That partition can be identified via a greedy processing of the suffix tree of s with the aim of minimizing a proper objective function over its nodes. The final compressed string is then obtained by compressing individually each substring of the partition by means of the base compressor we wish to boost. Our boosting technique is inherently combinatorial because it does not need to assume any prior probabilistic model about the source emitting s, and it does not deploy any training, parameter estimation or learning. Various corollaries are derived from this main achievement. Among others, we show analytically that using our booster we get better compression algorithms than some of the best existing ones, namely LZ77, LZ78, PPMC and those derived from the Burrows-Wheeler Transform. Further, we settle analytically some long-standing open problems about the algorithmic structure and the performance of BWT-based compressors. Namely, we provide the first family of BWT algorithms that do not use Move-To-Front or Symbol Ranking as a part of the compression process.
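
The "deep combinatorial relation" with the kth order entropy can be made concrete: grouping the symbols of s by their length-k context (equivalently, partitioning the BWT of s by context) and charging each group its zero-order entropy sums to nH_k(s). The Python sketch below merely evaluates this quantity, grouping each symbol by its following context to match the BWT's suffix ordering and ignoring the few boundary symbols without a full context; the booster's actual contribution, finding the best partition greedily over the suffix tree, is not reproduced here.

```python
from collections import defaultdict
from math import log2

def h0_bits(chunk):
    """Zero-order entropy of a chunk, in total bits."""
    n = len(chunk)
    return sum(chunk.count(c) * log2(n / chunk.count(c)) for c in set(chunk))

def hk_bits(s, k):
    """n*H_k(s): sum of zero-order entropies of symbols grouped by k-context."""
    groups = defaultdict(list)
    for i in range(len(s) - k):
        groups[s[i + 1:i + 1 + k]].append(s[i])  # symbol with its following context
    return sum(h0_bits(g) for g in groups.values())

s = "mississippi"
print(hk_bits(s, 0), hk_bits(s, 1), hk_bits(s, 2))  # total bits shrink as k grows
```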


BMC Bioinformatics | 2007

Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment

Paolo Ferragina; Raffaele Giancarlo; Valentina Greco; Giovanni Manzini; Gabriel Valiente

Background: Similarity of sequences is a key mathematical notion for classification and phylogenetic studies in biology. It is currently handled primarily via alignments. However, alignment methods seem inadequate for post-genomic studies since they do not scale well with data set size and they seem to be confined to genomic and proteomic sequences. Therefore, alignment-free similarity measures are actively pursued. Among those, USM (Universal Similarity Metric) has gained prominence. It is based on the deep theory of Kolmogorov complexity, and universality is its most striking novel feature. Since it can only be approximated via data compression, USM is a methodology rather than a formula quantifying the similarity of two strings. Three approximations of USM are available, namely UCD (Universal Compression Dissimilarity), NCD (Normalized Compression Dissimilarity) and CD (Compression Dissimilarity). Their applicability and robustness have been tested on various data sets, yielding a first massive quantitative estimate that the USM methodology and its approximations are of value. Despite the rich theory developed around USM, its experimental assessment has limitations: only a few data compressors have been tested in conjunction with USM, and mostly at a qualitative level; no comparison among UCD, NCD and CD is available; and no comparison of USM with existing methods, alignment-based or not, seems to be available.

Results: We experimentally test the USM methodology by using 25 compressors, all three of its known approximations, and six data sets of relevance to molecular biology. This offers the first systematic and quantitative experimental assessment of this methodology, naturally complementing the many theoretical and preliminary experimental results available. Moreover, we compare the USM methodology with methods both based on alignments and not. We may group our experiments into two sets. The first, performed via ROC (Receiver Operating Characteristic) analysis, aims at assessing the intrinsic ability of the methodology to discriminate and classify biological sequences and structures. The second set aims at assessing how well two commonly available classification algorithms, UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and NJ (Neighbor Joining), can use the methodology to perform their task, their performance being evaluated against gold standards with the use of well-known statistical indexes, i.e., the F-measure and the partition distance. Based on the experiments, several conclusions can be drawn and, from them, novel valuable guidelines for the use of USM on biological data. The main ones are reported next.

Conclusion: UCD and NCD are indistinguishable, i.e., they yield nearly the same values of the statistical indexes we have used, across experiments and data sets, while CD is almost always worse than both. UPGMA seems to yield better classification results than NJ, i.e., better values of the statistical indexes (10% difference or above), on a substantial fraction of experiments, compressors and USM approximation choices. The compression program PPMd, based on PPM (Prediction by Partial Matching), for generic data, and GenCompress for DNA, are the best performers among the compression algorithms we have used, although the difference in performance between them and the other algorithms, as measured by the statistical indexes, depends critically on the data set and may not be as large as expected. PPMd used with UCD or NCD and UPGMA on sequence data is very close in performance to the alignment methods, although slightly worse (less than 2% difference on the F-measure). Yet, it scales well with data set size and it can work on data other than sequences. In summary, our quantitative analysis naturally complements the rich theory behind USM and supports the conclusion that the methodology is worth using because of its robustness, flexibility, scalability, and competitiveness with existing techniques. In particular, the methodology applies to all biological data in textual format. The software and data sets are available under the GNU GPL at the supplementary material web page.
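
Of the three USM approximations, NCD is the easiest to sketch. The few lines of Python below compute it with zlib standing in as the compressor, an arbitrary substitution made here only for illustration; the study itself evaluates 25 compressors, including PPMd and GenCompress.

```python
import zlib

def c(x: bytes) -> int:
    """Compressed size of x in bytes."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Dissimilarity: lower values mean more similar."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGTACGT" * 8
b = b"ACGTACGTACGTTCGT" * 8
print(ncd(a, a))  # near 0: a string is maximally similar to itself
print(ncd(a, b))  # small but larger: one substitution per repeat unit
```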


Scandinavian Workshop on Algorithm Theory | 2004

Two Space Saving Tricks for Linear Time LCP Array Computation

Giovanni Manzini

In this paper we consider the linear time algorithm of Kasai et al. [6] for the computation of the Longest Common Prefix (LCP) array given the text and the suffix array. We show that this algorithm can be implemented without any auxiliary array in addition to the ones required for the input (the text and the suffix array) and the output (the LCP array). Thus, for a text of length n, we reduce the space occupancy of this algorithm from 13n bytes to 9n bytes.
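
For reference, here is a straightforward Python rendition of the Kasai et al. algorithm in its textbook form, which uses the auxiliary inverse-suffix-array (rank) array; the paper's two tricks eliminate exactly this kind of auxiliary storage to reach the 9n-byte figure.

```python
def lcp_array(text, sa):
    n = len(text)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i                 # inverse suffix array
    lcp = [0] * n                   # lcp[i] = LCP(suffix sa[i-1], suffix sa[i])
    h = 0
    for i in range(n):              # process suffixes in text order
        if rank[i] > 0:
            j = sa[rank[i] - 1]     # suffix preceding suffix i in sorted order
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1              # key invariant: next LCP drops by at most 1
        else:
            h = 0
    return lcp

text = "banana"
sa = sorted(range(len(text)), key=lambda i: text[i:])  # [5, 3, 1, 0, 4, 2]
print(lcp_array(text, sa))                             # -> [0, 1, 3, 0, 0, 2]
```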


Journal of the ACM | 2009

Compressing and indexing labeled trees, with applications

Paolo Ferragina; Fabrizio Luccio; Giovanni Manzini; S. Muthukrishnan

Consider an ordered, static tree T where each node has a label from alphabet Σ. Tree T may be of arbitrary degree and shape. Our goal is to design a compressed storage scheme for T that supports basic navigational operations among the immediate neighbors of a node (i.e., parent, ith child, or any child with some label, …) as well as more sophisticated path-based search operations over its labeled structure. We present a novel approach to this problem by designing what we call the XBW-transform of the tree, in the spirit of the well-known Burrows-Wheeler transform for strings [1994]. The XBW-transform uses path-sorting to linearize the labeled tree T into two coordinated arrays, one capturing the structure and the other the labels. For the first time, by using the properties of the XBW-transform, our compressed indexes go beyond the information-theoretic lower bound, and support navigational and path-search operations over labeled trees within (near-)optimal time bounds and entropy-bounded space. Our XBW-transform is simple and likely to spur new results in the theory of tree compression and indexing, as well as in interesting application contexts. As an example, we use the XBW-transform to design and implement a compressed index for XML documents whose compression ratio is significantly better than that achievable by state-of-the-art tools, and whose query time performance is orders of magnitude faster.
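
As a toy illustration of the path-search operation the XBW-transform supports, note that once nodes are keyed by their bottom-up label strings (own label, then parent's, up to the root) and sorted, the end points of all occurrences of a downward path Π form one contiguous interval. The hypothetical miniature below finds that interval by binary search over explicitly stored strings; the compressed index computes the same interval with a constant number of rank queries per symbol of Π and stores no strings at all.

```python
from bisect import bisect_left, bisect_right

def subpath_search(sorted_paths, pi):
    """Return [lo, hi) of nodes whose bottom-up string starts with reversed pi."""
    key = pi[::-1]                   # bottom-up strings read the path in reverse
    lo = bisect_left(sorted_paths, key)
    hi = bisect_right(sorted_paths, key + "\U0010ffff")  # end of the prefix range
    return lo, hi

# Hypothetical toy tree r -> a -> c and r -> b, as bottom-up label strings:
paths = sorted(["r", "ar", "car", "br"])
lo, hi = subpath_search(paths, "ra")  # downward path r -> a
print(paths[lo:hi])                   # -> ['ar']: the single node labeled a
```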

Collaboration


Dive into Giovanni Manzini's collaboration network.

Top Co-Authors

Travis Gagie

Diego Portales University

Lavinia Egidi

University of Eastern Piedmont
