
Publications


Featured research published by Joaquín Adiego.


Data Compression Conference | 2004

Lempel-Ziv compression of structured text

Joaquín Adiego; Gonzalo Navarro; P. de la Fuente

We describe a novel Lempel-Ziv approach suitable for compressing structured documents, called LZCS, which takes advantage of redundant information that can appear in the structure. The main idea is that frequently repeated subtrees may exist, and these can be replaced by a backward reference to their first occurrence. The main advantage is that compressed documents generated by LZCS are easy to display, access at random, and navigate. In a second stage, the processed documents can be further compressed using a semiadaptive technique, so that random access and navigability remain possible. LZCS is especially efficient at compressing collections of highly structured data, such as XML forms, invoices, e-commerce and web-service exchange documents. The comparison against structure-based and standard compressors shows that LZCS is a competitive choice for this type of document, while the others are not well suited to support navigation or random access.
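
As a concrete illustration of the backward-reference idea, here is a minimal sketch that deduplicates repeated XML subtrees in the spirit of LZCS. It is not the authors' implementation: the <ref> element and the lzcs-id attribute are invented names, and the real LZCS defines its own reference encoding and a second compression stage.

```python
# Sketch of the LZCS idea: repeated subtrees are replaced by a backward
# reference to their first occurrence.  Hypothetical illustration only;
# the <ref> element and "lzcs-id" attribute are invented names.
import xml.etree.ElementTree as ET

def lzcs_transform(root):
    seen = {}       # canonical subtree serialization -> reference id
    counter = 0

    def walk(elem):
        nonlocal counter
        for i, child in enumerate(list(elem)):
            key = ET.tostring(child)            # canonical form of the subtree
            if key in seen:                     # repeated subtree: replace it
                elem.remove(child)
                elem.insert(i, ET.Element("ref", id=str(seen[key])))
            else:                               # first occurrence: remember it
                seen[key] = counter
                child.set("lzcs-id", str(counter))
                counter += 1
                walk(child)                     # recurse into first occurrences only

    walk(root)
    return root

doc = ET.fromstring(
    "<invoices><item><qty>1</qty></item><item><qty>1</qty></item></invoices>")
print(ET.tostring(lzcs_transform(doc)).decode())
# <invoices><item lzcs-id="0"><qty lzcs-id="1">1</qty></item><ref id="0" /></invoices>
```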


Information Processing and Management | 2007

Using structural contexts to compress semistructured text collections

Joaquín Adiego; Gonzalo Navarro; Pablo de la Fuente

We describe a compression model for semistructured documents, called the Structural Contexts Model (SCM), which takes advantage of the context information usually implicit in the structure of the text. The idea is to use a separate model to compress the text that lies inside each different structure type (e.g., each different XML tag). The intuition behind SCM is that the distribution of all the texts that belong to a given structure type should be similar, and different from that of other structure types. We mainly focus on semistatic models and test our idea using a word-based Huffman method, the standard for compressing large natural language text databases because random access, partial decompression, and direct search of the compressed collection are possible. This variant, dubbed SCMHuff, retains those features and improves Huffman's compression ratios. We consider the possibility that storing separate models may not pay off if the distributions of different structure types are not different enough, and present a heuristic to merge models with the aim of minimizing the total size of the compressed database. This gives an additional improvement over the plain technique. The comparison against existing prototypes shows that, among the methods that permit random access to the collection, SCMHuff achieves the best compression ratios, 2-4% better than the closest alternative. From a purely compression-oriented perspective, we combine SCM with PPM modeling, using a separate PPM model to compress the text that lies inside each different structure type. The result, SCMPPM, permits neither random access nor direct search in the compressed text, but it gives 2-5% better compression ratios than other techniques for texts longer than 5 MB.
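
A minimal sketch of the core SCM idea, under simplifying assumptions: one word-frequency model per structure type, each driving its own Huffman code. The routing and tokenization are toy stand-ins, and only Huffman code lengths are computed rather than a full encoder.

```python
# Sketch of the Structural Contexts Model: a separate semistatic word model
# per structure type (e.g., per XML tag), each with its own Huffman code.
# Toy illustration; tokenization and tag routing are simplified assumptions.
import heapq, itertools, re
from collections import Counter

def huffman_code_lengths(freqs):
    """Classic Huffman: return {word: code length in bits}."""
    if len(freqs) == 1:
        return {next(iter(freqs)): 1}
    tie = itertools.count()             # tie-breaker so the heap never compares lists
    heap = [(f, next(tie), [w]) for w, f in freqs.items()]
    heapq.heapify(heap)
    depth = Counter()
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for w in s1 + s2:
            depth[w] += 1               # each merge deepens the merged subtree
        heapq.heappush(heap, (f1 + f2, next(tie), s1 + s2))
    return dict(depth)

# One model per structure type, as SCM proposes (invented toy texts).
texts = {"title": "data compression in text databases",
         "body":  "compression of structured text text text"}
for tag, text in texts.items():
    model = Counter(re.findall(r"\w+", text))
    print(tag, huffman_code_lengths(model))
```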


Data Compression Conference | 2004

Merging prediction by partial matching with structural contexts model

Joaquín Adiego; Pablo de la Fuente; Gonzalo Navarro

This paper considers the use of text structure in the compression of structured documents. It proposes a compression technique, called SCMPPM, which combines prediction by partial matching (PPM) with the structural contexts model, taking advantage of the context information usually implicit in the structure of the text. The experimental results show significant gains over methods that are insensitive to the structure and over current methods that consider it; SCMPPM also improves compression ratios with respect to the basic SCM technique.
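
The toy sketch below illustrates only the routing idea behind SCMPPM: a separate adaptive context model per structure type. A real PPM coder blends several context orders with escape symbols; the order-1 model with Laplace smoothing used here is a simplified stand-in, and the tag/text pairs are invented.

```python
# Toy stand-in for SCMPPM routing: one adaptive context model per structure
# type.  Real PPM mixes several orders with escapes; this order-1 model with
# Laplace smoothing only shows why per-structure statistics can help.
import math
from collections import defaultdict

class Order1Model:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def cost_and_update(self, text):
        """Return the estimated code length in bits, updating counts adaptively."""
        bits, prev = 0.0, ""
        for ch in text:
            ctx = self.counts[prev]
            total = sum(ctx.values())
            p = (ctx[ch] + 1) / (total + 256)   # Laplace smoothing over a byte alphabet
            bits -= math.log2(p)
            ctx[ch] += 1                        # adaptive update after coding
            prev = ch
        return bits

models = defaultdict(Order1Model)               # one model per structure type
for tag, text in [("title", "compression"), ("body", "structured text"),
                  ("title", "compressing")]:    # invented toy input
    print(tag, round(models[tag].cost_and_update(text), 1), "bits")
```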


Journal of the Association for Information Science and Technology | 2007

Lempel-Ziv compression of highly structured documents

Joaquín Adiego; Gonzalo Navarro; Pablo de la Fuente

The authors describe Lempel-Ziv to Compress Structure (LZCS), a novel Lempel-Ziv approach suitable for compressing structured documents. LZCS takes advantage of repeated substructures that may appear in the documents by replacing them with a backward reference to their previous occurrence. The result of the LZCS transformation is still a valid structured document, which is human-readable and can be transmitted over ASCII channels. Moreover, LZCS-transformed documents are easy to search, display, access at random, and navigate. In a second stage, the transformed documents can be further compressed using any semistatic technique, so that all those operations can still be carried out efficiently, or using any adaptive technique to boost compression. LZCS is especially efficient in the compression of collections of highly structured data, such as extensible markup language (XML) forms, invoices, e-commerce, and Web-service exchange documents. The comparison with other structure-aware and standard compressors shows that LZCS is a competitive choice for this type of document, whereas the others are not well suited to support navigation or random access. When combined with an adaptive compressor, LZCS obtains by far the best compression ratios.


String Processing and Information Retrieval | 2003

SCM: Structural Contexts model for improving compression in semistructured text databases

Joaquín Adiego; Gonzalo Navarro; Pablo de la Fuente

We describe a compression model for semistructured documents, called the Structural Contexts Model, which takes advantage of the context information usually implicit in the structure of the text. The idea is to use a separate semiadaptive model to compress the text that lies inside each different structure type (e.g., each different XML tag). The intuition behind the idea is that the distribution of all the texts that belong to a given structure type should be similar, and different from that of other structure types. We test our idea using word-based Huffman coding, the standard for compressing large natural language text databases, and show that our compression method obtains significant improvements in compression ratios. We also analyze the possibility that storing separate models may not pay off if the distributions of different structure types are not different enough, and present a heuristic to merge models with the aim of minimizing the total size of the compressed database. This gives an additional improvement over the plain technique. The comparison against existing prototypes shows that our method is a competitive choice for compressed text databases. Finally, we show how to apply SCM over text chunks, which allows one to adjust the different word frequencies as they change across the text collection.


Journal of Artificial Intelligence Research | 2012

Generalized biwords for bitext compression and translation spotting

Felipe Sánchez-Martínez; Rafael C. Carrasco; Miguel A. Martínez-Prieto; Joaquín Adiego

Large bilingual parallel texts (also known as bitexts) are usually stored in a compressed form, and previous work has shown that they can be compressed more efficiently if the fact that the two texts are mutual translations is exploited. For example, a bitext can be seen as a sequence of biwords (pairs of parallel words with a high probability of co-occurrence) that can be used as an intermediate representation in the compression process. However, the simple biword approach described in the literature can only exploit one-to-one word alignments and cannot tackle the reordering of words. We therefore introduce a generalization of biwords which can describe multi-word expressions and reorderings. We also describe some methods for the binary compression of generalized biword sequences, and compare their performance when different schemes are applied to the extraction of the biword sequence. In addition, we show that this generalization of biwords allows for the implementation of an efficient algorithm to search the compressed bitext for words or text segments in one of the texts and retrieve their counterpart translations in the other text (an application usually referred to as translation spotting) with only some minor modifications to the compression algorithm.
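
For contrast with the generalization, here is a minimal sketch of the simple one-to-one biword construction described in the literature: each aligned word pair becomes a single symbol of an intermediate vocabulary, and unaligned words are paired with a placeholder. Sentences and alignment links are invented toy data.

```python
# Sketch of the simple (one-to-one) biword representation: each pair of
# aligned words becomes one symbol to be encoded.  Toy data; the paper's
# generalization additionally handles multi-word expressions and reordering.
def to_biwords(src_words, tgt_words, alignment):
    """alignment: one-to-one links given as (src_index, tgt_index) pairs."""
    link = dict(alignment)
    return [(w, tgt_words[link[i]]) if i in link else (w, None)
            for i, w in enumerate(src_words)]

src = "the compressed text".split()
tgt = "el texto comprimido".split()
links = [(0, 0), (1, 2), (2, 1)]        # 'compressed' <-> 'comprimido', etc.
print(to_biwords(src, tgt, links))
# [('the', 'el'), ('compressed', 'comprimido'), ('text', 'texto')]
# Note: the target word order is lost here, which is exactly the
# limitation the paper's generalized biwords address.
```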


String Processing and Information Retrieval | 2009

A Two-Level Structure for Compressing Aligned Bitexts

Joaquín Adiego; Nieves R. Brisaboa; Miguel A. Martínez-Prieto; Felipe Sánchez-Martínez

A bitext, or bilingual parallel corpus, consists of two texts, each one in a different language, that are mutual translations. Bitexts are very useful in linguistic engineering because they are used as a source of knowledge for different purposes. In this paper we propose a strategy to efficiently compress and use bitexts, saving not only space but also processing time when exploiting them. Our strategy is based on a two-level structure for the vocabularies, and on the use of biwords, pairs of associated words, one from each language, as the basic symbols to be encoded with an ETDC [2] compressor. The resulting compressed bitext needs around 20% of the original space and allows more efficient implementations of the different types of searches and operations that linguistic engineering applications need to perform on it. In this paper we discuss and provide results for compression, decompression, different types of searches, and bilingual snippet extraction.
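
For reference, below is a sketch of one common formulation of End-Tagged Dense Code (ETDC), the byte-oriented encoder cited as [2]: symbols are ranked by frequency, codes are assigned densely, and the high bit of the final byte marks the end of each codeword, which is what enables direct search on the compressed text. Treat the exact bit convention as an assumption of this sketch.

```python
# Sketch of End-Tagged Dense Code (ETDC): a symbol's frequency rank is
# encoded as a dense base-128 byte sequence whose last byte has the high
# bit set.  The end flag delimits codewords, so pattern matching can run
# directly on the compressed text.
def etdc_encode(rank):
    """Encode a frequency rank (0 = most frequent symbol) as bytes."""
    out = [0x80 | (rank % 128)]         # end byte: high bit set
    rank //= 128
    while rank > 0:
        rank -= 1                       # dense code: every byte value is used
        out.insert(0, rank % 128)       # continuation bytes: high bit clear
        rank //= 128
    return bytes(out)

def etdc_decode(code):
    rank = 0
    for b in code[:-1]:                 # continuation bytes
        rank = rank * 128 + b + 1
    return rank * 128 + (code[-1] & 0x7F)

for r in (0, 127, 128, 16511, 16512):   # 1-, 2- and 3-byte boundary cases
    c = etdc_encode(r)
    assert etdc_decode(c) == r
    print(r, "->", c.hex())
```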


Data Compression Conference | 2009

On the Use of Word Alignments to Enhance Bitext Compression

Miguel A. Martínez-Prieto; Joaquín Adiego; Felipe Sánchez-Martínez; Pablo de la Fuente; Rafael C. Carrasco

This paper describes a novel approach to the compression of bilingual parallel corpora (bitexts). The approach takes advantage of the fact that the two texts that form a bitext are mutual translations. First, the two texts are aligned at both the sentence and the word level. Then, word alignments are used to define biwords, that is, pairs of words, one from each text, that are mutual translations. Finally, a biword-based PPM compressor is applied. The results obtained by compressing the two texts of the bitext together improve on the compression ratios achieved when both texts are independently compressed with a word-based PPM compressor, thus saving storage and transmission costs.


String Processing and Information Retrieval | 2006

Mapping words into codewords on PPM

Joaquín Adiego; Pablo de la Fuente

We describe a simple and efficient scheme which allows words to be managed in PPM modelling when a natural language text file is being compressed. The main idea for managing words is to assign them codes to make them easier to manipulate. A general technique is used to achieve this: a dictionary mapping on PPM modelling. To test our idea, we implemented three prototypes: one with the basic dictionary mapping on PPM, another with the dictionary mapping plus the separate-alphabets model, and a third with the dictionary mapping plus the spaceless words model. This technique can be applied directly or combined with a word compression model. The results for files of 1 MB and over are better than those achieved by the character-based PPM taken as the baseline. The comparison between the prototypes shows that the best option is a word-based PPM in conjunction with the spaceless words concept.
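
A small sketch of the spaceless words model mentioned above: a single space between two words is taken as the default separator and never encoded, while any other separator is kept as an explicit token. The tokenizer is a simplified assumption, not the paper's.

```python
# Sketch of the spaceless words model: a lone space between words is
# implicit (never encoded); other separators become explicit tokens.
# Simplified tokenizer for illustration only.
import re

def spaceless_tokens(text):
    tokens = []
    for word, sep in re.findall(r"(\w+)(\W*)", text):
        tokens.append(word)
        if sep not in ("", " "):        # the single space is the default: skip it
            tokens.append(sep)
    return tokens                       # each token then gets a codeword for PPM

def spaceless_detokenize(tokens):
    out = []
    for tok in tokens:
        if out and re.match(r"\w", tok) and re.match(r"\w", out[-1][-1:]):
            out.append(" ")             # restore the implicit single space
        out.append(tok)
    return "".join(out)

text = "words are coded, separators too.  Double spaces survive."
toks = spaceless_tokens(text)
print(toks)
assert spaceless_detokenize(toks) == text
```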


Data Compression Conference | 2009

High Performance Word-Codeword Mapping Algorithm on PPM

Joaquín Adiego; Miguel A. Martínez-Prieto; Pablo de la Fuente

The word-codeword mapping technique allows words to be managed in PPM modelling when a natural language text file is being compressed. The main idea for managing words is to assign them codes in order to improve compression. Previous work focused on proposing and evaluating several adaptive mapping algorithms. In this paper, we propose a semi-static word-codeword mapping method that takes advantage of prior knowledge of some statistical properties of the vocabulary. We test our idea in a basic prototype, dubbed mppm2, which also retains all the desirable features of a word-codeword mapping technique. The comparison with other techniques and compressors shows that our proposal is a very competitive choice for compressing natural language texts. In fact, empirical results show that our prototype achieves very good compression for this type of document.
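
A rough sketch of the pipeline, under stated assumptions: the semi-static part ranks the vocabulary by frequency in a first pass and fixes the word-codeword mapping before coding; zlib stands in for the PPM coder purely to keep the example runnable, since mppm2 itself uses PPM modelling.

```python
# Sketch of a semi-static word-codeword mapping: words are ranked by
# frequency once, replaced by fixed 2-byte codewords, and the mapped byte
# stream is handed to a general-purpose coder (zlib stands in for PPM here).
import re, zlib
from collections import Counter

def map_words(text):
    """First pass fixes the mapping; second pass rewrites the text."""
    words = re.findall(r"\w+", text)
    ranked = [w for w, _ in Counter(words).most_common()]   # semi-static vocabulary
    code = {w: bytes((i // 256, i % 256)) for i, w in enumerate(ranked)}
    return ranked, b"".join(code[w] for w in words)         # up to 65,536 words

text = "to be or not to be that is the question " * 200
vocab, mapped = map_words(text)
print("plain:", len(zlib.compress(text.encode())),
      "bytes; mapped:", len(zlib.compress(mapped)), "bytes")
```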

Collaboration


Dive into Joaquín Adiego's collaborations.

Top Co-Authors

Jesús Vegas

University of Valladolid

Alberto Pedrero

Pontifical University of Salamanca
