Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paolo Ferragina is active.

Publication


Featured research published by Paolo Ferragina.


foundations of computer science | 2000

Opportunistic data structures with applications

Paolo Ferragina; Giovanni Manzini

We address the issue of compressing and indexing data. We devise a data structure whose space occupancy is a function of the entropy of the underlying data set. We call the data structure opportunistic since its space occupancy is decreased when the input is compressible, and this space reduction is achieved at no significant slowdown in the query performance. More precisely, its space occupancy is optimal in an information-content sense because a text T[1,u] is stored using O(H_k(T)) + o(1) bits per input symbol in the worst case, where H_k(T) is the kth order empirical entropy of T (the bound holds for any fixed k). Given an arbitrary string P[1,p], the opportunistic data structure allows searching for the occurrences of P in T in O(p + occ log^ε u) time (for any fixed ε > 0). If the data are incompressible we achieve the best space bound currently known (Grossi and Vitter, 2000); on compressible data our solution improves the succinct suffix array of (Grossi and Vitter, 2000) and the classical suffix tree and suffix array data structures in space, in query time, or both. We also study our opportunistic data structure in a dynamic setting and devise a variant achieving effective search and update time bounds. Finally, we show how to plug our opportunistic data structure into the Glimpse tool (Manber and Wu, 1994). The result is an indexing tool that achieves sublinear space and sublinear query time complexity.
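The compressibility this structure exploits comes from the Burrows-Wheeler transform (BWT), on which it is built. As a minimal illustration only (not the paper's construction, which never materializes the rotation matrix), the transform can be sketched in Python:

```python
def bwt(text):
    # Append a sentinel that sorts before every other symbol.
    s = text + "\0"
    # Sort all cyclic rotations lexicographically. This naive version is
    # O(n^2 log n); practical implementations derive the BWT from a
    # suffix array instead.
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    # The BWT is the last column of the sorted rotation matrix.
    return "".join(r[-1] for r in rotations)
```

For example, `bwt("banana")` yields `"annb\0aa"`: equal symbols tend to cluster in the output (like the run `"aa"` at the end), which is what makes the transformed string easy to compress.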


Journal of the ACM | 2005

Indexing compressed text

Paolo Ferragina; Giovanni Manzini

We design two compressed data structures for the full-text indexing problem that support efficient substring searches using roughly the space required for storing the text in compressed form. Our first compressed data structure retrieves the occ occurrences of a pattern P[1,p] within a text T[1,n] in O(p + occ log^{1+ε} n) time for any chosen ε, 0 < ε < 1. This data structure uses at most 5nH_k(T) + o(n) bits of storage, where H_k(T) is the kth order empirical entropy of T. The space usage is Θ(n) bits in the worst case and o(n) bits for compressible texts. This data structure exploits the relationship between suffix arrays and the Burrows-Wheeler Transform, and can be regarded as a compressed suffix array. Our second compressed data structure achieves O(p + occ) query time using O(nH_k(T) log^ε n) + o(n) bits of storage for any chosen ε, 0 < ε < 1. Therefore, it provides optimal output-sensitive query time using o(n log n) bits in the worst case. This second data structure builds upon the first one and exploits the interplay between two compressors: the Burrows-Wheeler Transform and the LZ78 algorithm.
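The substring search these indexes support is BWT backward search: the pattern is consumed right to left while a range over the sorted suffixes is narrowed. A naive Python sketch follows; the `rank` helper here is a linear scan, whereas the actual data structures answer it in constant time within compressed space.

```python
def backward_search(bwt_str, pattern):
    # C[c] = number of symbols in the BWT strictly smaller than c.
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]

    # rank(c, i) = occurrences of c in bwt_str[:i]. Naive O(n) scan here;
    # the compressed index answers this in O(1) with o(n) extra bits.
    def rank(c, i):
        return bwt_str[:i].count(c)

    # Narrow the suffix range [lo, hi) by one pattern symbol at a time,
    # right to left.
    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo  # number of occurrences of pattern in the text
```

For instance, on the BWT of "banana\0" (the string "annb\0aa"), searching for "ana" reports 2 occurrences, without ever decompressing the text.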


conference on information and knowledge management | 2010

TAGME: on-the-fly annotation of short text fragments (by wikipedia entities)

Paolo Ferragina; Ugo Scaiella

We designed and implemented TAGME, a system that is able to efficiently and judiciously augment a plain text with pertinent hyperlinks to Wikipedia pages. The specialty of TAGME with respect to known systems [5,8] is that it may annotate texts which are short and poorly composed, such as snippets of search-engine results, tweets, news, etc. This annotation is extremely informative, so any task that is currently addressed using the bag-of-words paradigm could benefit from using this annotation to draw upon (the millions of) Wikipedia pages and their inter-relations.


Journal of the ACM | 1999

The string B-tree: a new data structure for string search in external memory and its applications

Paolo Ferragina; Roberto Grossi

We introduce a new text-indexing data structure, the String B-Tree, that can be seen as a link between some traditional external-memory and string-matching data structures. In short, it is a combination of B-trees and Patricia tries for internal-node indices that is made more effective by adding extra pointers to speed up search and update operations. Consequently, the String B-Tree overcomes the theoretical limitations of inverted files, B-trees, prefix B-trees, suffix arrays, compacted tries and suffix trees. String B-trees have the same worst-case performance as B-trees but they manage unbounded-length strings and perform much more powerful search operations, such as the ones supported by suffix trees. String B-trees are also effective in main memory (RAM model) because they improve the online suffix tree search on a dynamic set of strings. They can also be successfully applied to database indexing and software duplication.


ACM Transactions on Algorithms | 2007

Compressed representations of sequences and full-text indexes

Paolo Ferragina; Giovanni Manzini; Veli Mäkinen; Gonzalo Navarro

Given a sequence S = s_1 s_2 … s_n of integers smaller than r = O(polylog(n)), we show how S can be represented using nH_0(S) + o(n) bits, so that we can retrieve any s_q, as well as answer rank and select queries on S, in constant time. H_0(S) is the zero-order empirical entropy of S, and nH_0(S) provides an information-theoretic lower bound on the bit storage of any sequence S via a fixed encoding of its symbols. This extends previous results on binary sequences, and improves previous results on general sequences where those queries are answered in O(log r) time. For larger r, we can still represent S in nH_0(S) + o(n log r) bits and answer queries in O(log r / log log n) time. Another contribution of this article is to show how to combine our compressed representation of integer sequences with a compression boosting technique to design compressed full-text indexes that scale well with the size of the input alphabet Σ. Specifically, we design a variant of the FM-index that indexes a string T[1,n] within nH_k(T) + o(n) bits of storage, where H_k(T) is the kth-order empirical entropy of T. This space bound holds simultaneously for all k ≤ α log_|Σ| n, constant 0 < α < 1, and |Σ| = O(polylog(n)).
This index counts the occurrences of an arbitrary pattern P[1,p] as a substring of T in O(p) time; it locates each pattern occurrence in O(log^{1+ε} n) time for any constant 0 < ε < 1; and reports a text substring of length ℓ in O(ℓ + log^{1+ε} n) time. Compared to all previous work, our index is the first that removes the alphabet-size dependence from all query times; in particular, counting time is linear in the pattern length. Still, our index uses essentially the same space as the kth-order entropy of the text T, which is the best space obtained in previous work. We can also handle larger alphabets of size |Σ| = O(n^β), for any 0 < β < 1, by paying o(n log |Σ|) extra space and multiplying all query times by O(log |Σ| / log log n).


international world wide web conferences | 2005

A personalized search engine based on web-snippet hierarchical clustering

Paolo Ferragina; Antonio Gulli

In this paper we propose a hierarchical clustering engine, called SnakeT, that is able to organize on the fly the search results drawn from 16 commodity search engines into a hierarchy of labeled folders. The hierarchy offers a complementary view to the flat ranked list of results returned by current search engines. Users can navigate through the hierarchy driven by their search needs. This is especially useful for informative, polysemous and poor queries. SnakeT is the first complete and open-source system in the literature that offers both hierarchical clustering and folder labeling with variable-length sentences. We extensively test SnakeT against all available web-snippet clustering engines, and show that it achieves efficiency and efficacy close to the best known engine, Vivisimo.com. Recently, personalized search engines have been introduced with the aim of improving search results by focusing on the users, rather than on their submitted queries. We show how to plug SnakeT on top of any (unpersonalized) search engine in order to obtain a form of personalization that is fully adaptive, privacy-preserving, scalable, and non-intrusive for the underlying search engines.


vehicular technology conference | 1995

Optical recognition of motor vehicle license plates

Paolo Comelli; Paolo Ferragina; Mario Notturno Granieri; Flavio Stabile

A system for the recognition of car license plates is presented. The aim of the system is to automatically read the Italian license number of a car passing through a tollgate. A CCTV camera and a frame grabber card are used to acquire a rear-view image of the vehicle. The recognition process consists of three main phases. First, a segmentation phase locates the license plate within the image. Then, a procedure based upon feature projection estimates some image parameters needed to normalize the license plate characters. Finally, the character recognizer extracts some feature points and uses template matching operators to get a robust solution under multiple acquisition conditions. A test has been done on more than three thousand real images acquired under different weather and illumination conditions, obtaining a recognition rate close to 91%.


Algorithmica | 2004

Engineering a Lightweight Suffix Array Construction Algorithm

Giovanni Manzini; Paolo Ferragina

In this paper we describe a new algorithm for building the suffix array of a string. This task is equivalent to the problem of lexicographically sorting all the suffixes of the input string. Our algorithm is based on a new approach called deep-shallow sorting: we use a "shallow" sorter for the suffixes with a short common prefix, and a "deep" sorter for the suffixes with a long common prefix. All the known algorithms for building the suffix array either require a large amount of space or are inefficient when the input string contains many repeated substrings. Our algorithm has been designed to overcome this dichotomy. Our algorithm is "lightweight" in the sense that it uses very little space in addition to the space required by the suffix array itself. At the same time our algorithm is fast even when the input contains many repetitions: this has been shown by extensive experiments with inputs of size up to 110 MB. The source code of our algorithm, as well as a C library providing a simple API, is available under the GNU GPL.
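As a baseline for what the algorithm computes, here is a naive suffix array construction in Python. It is not the paper's method: each comparison may scan a long common prefix, so it degrades to roughly O(n^2 log n) on highly repetitive inputs, which is exactly the blow-up that deep-shallow sorting is engineered to avoid.

```python
def suffix_array(text):
    # Sort the suffix start positions by the suffix each one begins.
    # Comparing two suffixes can cost up to O(n) on repetitive inputs.
    return sorted(range(len(text)), key=lambda i: text[i:])

print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]
```

The output lists suffix start positions in lexicographic order of the suffixes: "a" (5), "ana" (3), "anana" (1), "banana" (0), "na" (4), "nana" (2).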


Journal of the ACM | 2000

On the sorting-complexity of suffix tree construction

Paolo Ferragina; S. Muthukrishnan

The suffix tree of a string is the fundamental data structure of combinatorial pattern matching. We present a recursive technique for building suffix trees that yields optimal algorithms in different computational models. Sorting is an inherent bottleneck in building suffix trees, and our algorithms match the sorting lower bound. Specifically, we present the following results. (1) Weiner [1973], who introduced the data structure, gave an optimal O(n)-time algorithm for building the suffix tree of an n-character string drawn from a constant-size alphabet. In the comparison model, there is a trivial Ω(n log n)-time lower bound based on sorting, and Weiner's algorithm matches this bound. For integer alphabets, the fastest known algorithm is the O(n log n)-time comparison-based algorithm, but no super-linear lower bound is known. Closing this gap is the main open question in stringology. We settle this open problem by giving a linear-time reduction to sorting for building suffix trees. Since sorting is a lower bound for building suffix trees, this algorithm is time-optimal for every alphabet model. In particular, for an alphabet consisting of integers in a polynomial range we get the first known linear-time algorithm. (2) All previously known algorithms for building suffix trees exhibit a marked absence of locality of reference, and thus they tend to elicit many page faults (I/Os) when indexing very long strings. They are therefore unsuitable for building suffix trees in secondary storage devices, where I/Os dominate the overall computational cost. We give a linear-I/O reduction to sorting for suffix tree construction. Since sorting is a trivial I/O lower bound for building suffix trees, our algorithm is I/O-optimal.


ACM Journal of Experimental Algorithms | 2009

Compressed text indexes: From theory to practice

Paolo Ferragina; Rodrigo González; Gonzalo Navarro; Rossano Venturini

A compressed full-text self-index represents a text in a compressed form and still answers queries efficiently. This represents a significant advancement over the (full-)text indexing techniques of the previous decade, whose indexes required several times the size of the text. Although it is relatively new, this algorithmic technology has matured to a point where theoretical research is giving way to practical developments. Nonetheless this requires significant programming skills, a deep engineering effort, and a strong algorithmic background to dig into the research results. To date only isolated implementations and focused comparisons of compressed indexes have been reported, and they lacked a common API, which prevented their reuse or deployment within other applications. The goal of this article is to fill this gap. First, we present the existing implementations of compressed indexes from a practitioner's point of view. Second, we introduce the Pizza&Chili site, which offers tuned implementations and a standardized API for the most successful compressed full-text self-indexes, together with effective test-beds and scripts for their automatic validation and test. Third, we show the results of our extensive experiments on these codes with the aim of demonstrating the practical relevance of this novel algorithmic technology.

Collaboration


Dive into Paolo Ferragina's collaborations.

Top Co-Authors


Giovanni Manzini

University of Eastern Piedmont
