Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Edleno Silva de Moura is active.

Publication


Featured research published by Edleno Silva de Moura.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2001

Rank-preserving two-level caching for scalable search engines

Patricia Correia Saraiva; Edleno Silva de Moura; Nivio Ziviani; Wagner Meira; Rodrigo Fonseca; Berthier A. Ribeiro-Neto

We present an effective caching scheme that reduces the computing and I/O requirements of a Web search engine without altering its ranking characteristics. The novelty is a two-level caching scheme that simultaneously combines cached query results and cached inverted lists on a real case search engine. A set of log queries are used to measure and compare the performance and the scalability of the search engine with no cache, with the cache for query results, with the cache for inverted lists, and with the two-level cache. Experimental results show that the two-level cache is superior, and that it allows increasing the maximum number of queries processed per second by a factor of three, while preserving the response time. These results are new, have not been reported before, and demonstrate the importance of advanced caching schemes for real case search engines.
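A minimal sketch of the two-level idea described in the abstract, assuming hypothetical run_ranking(terms, lists) and fetch_inverted_list(term) callbacks standing in for the engine's ranking code and its on-disk index: the first level caches whole query results, the second caches individual inverted lists, and the ranking function still sees the full lists, so ranking behavior is left untouched.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache used for both levels."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

class TwoLevelSearchCache:
    """Level 1 caches whole query results, level 2 caches inverted lists.
    run_ranking and fetch_inverted_list are hypothetical stand-ins for the
    engine's ranking code and its index on disk."""
    def __init__(self, run_ranking, fetch_inverted_list,
                 result_capacity=10000, list_capacity=100000):
        self.result_cache = LRUCache(result_capacity)
        self.list_cache = LRUCache(list_capacity)
        self.run_ranking = run_ranking
        self.fetch_inverted_list = fetch_inverted_list

    def search(self, query_terms):
        key = tuple(sorted(query_terms))
        cached = self.result_cache.get(key)        # level 1: query results
        if cached is not None:
            return cached
        lists = {}
        for term in query_terms:                   # level 2: inverted lists
            postings = self.list_cache.get(term)
            if postings is None:
                postings = self.fetch_inverted_list(term)  # disk access
                self.list_cache.put(term, postings)
            lists[term] = postings
        results = self.run_ranking(query_terms, lists)     # ranking unchanged
        self.result_cache.put(key, results)
        return results
```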


Information Retrieval | 2000

Adding Compression to Block Addressing Inverted Indexes

Gonzalo Navarro; Edleno Silva de Moura; Marden S. Neubert; Nivio Ziviani; Ricardo A. Baeza-Yates

Inverted index compression, block addressing and sequential search on compressed text are three techniques that have been separately developed for efficient, low-overhead text retrieval. Modern text compression techniques can reduce the text to less than 30% of its size and allow searching it directly and faster than the uncompressed text. Inverted index compression obtains significant reduction of its original size at the same processing speed. Block addressing makes the inverted lists point to text blocks instead of exact positions and pays for the reduction in space with some sequential text scanning. In this work we combine the three ideas in a single scheme. We present a compressed inverted file that indexes compressed text and uses block addressing. We consider different techniques to compress the index and study their performance with respect to the block size. We compare the index against three separate techniques for varying block sizes, showing that our index is superior to each isolated approach. For instance, with just 4% of extra space overhead the index has to scan less than 12% of the text for exact searches and about 20% allowing one error in the matches.
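As an illustration only (ignoring the compression layer the paper adds), the sketch below shows block addressing: inverted lists record block ids rather than exact positions, and a query scans only the candidate blocks sequentially to recover exact occurrences. All function names here are assumptions, and matches that straddle a block boundary are ignored for brevity.

```python
import re
from collections import defaultdict

BLOCK_SIZE = 1000  # characters per block; larger blocks mean a smaller index

def build_block_index(text, block_size=BLOCK_SIZE):
    """The inverted lists store block ids, not exact word positions."""
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    index = defaultdict(set)
    for block_id, block in enumerate(blocks):
        for term in re.findall(r"\w+", block.lower()):
            index[term].add(block_id)
    return blocks, index

def search(blocks, index, term, block_size=BLOCK_SIZE):
    """Exact search: the index narrows the candidates to a few blocks, and only
    those blocks are scanned sequentially to recover exact positions."""
    term = term.lower()
    positions = []
    for block_id in sorted(index.get(term, ())):
        for match in re.finditer(r"\b" + re.escape(term) + r"\b",
                                 blocks[block_id].lower()):
            positions.append(block_id * block_size + match.start())
    return positions
```

Larger blocks shrink the index at the cost of more sequential scanning, which is the space/time tradeoff the abstract quantifies.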


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2000

Link-based and content-based evidential information in a belief network model

Ilmério Silva; Berthier A. Ribeiro-Neto; Pável Calado; Edleno Silva de Moura; Nivio Ziviani

This work presents an information retrieval model developed to deal with hyperlinked environments. The model is based on belief networks and provides a framework for combining information extracted from the content of the documents with information derived from cross-references among the documents. The information extracted from the content of the documents is based on statistics regarding the keywords in the collection and is one of the bases for traditional information retrieval (IR) ranking algorithms. The information derived from cross-references among the documents is based on link references in a hyperlinked environment and has received increased attention lately due to the success of the Web. We discuss a set of strategies for combining these two types of sources of evidential information and experiment with them using a reference collection extracted from the Web. The results show that this type of combination can improve the retrieval performance without requiring any extra information from the users at query time. In our experiments, the improvements reach up to 59% in terms of average precision figures.
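The paper models the combination inside a belief network; the sketch below is not that model but only a stand-in illustrating the general idea of fusing a normalized content score (e.g., TF-IDF) with a normalized link score (e.g., in-link count) into a single ranking score. All names and the linear mixture are illustrative assumptions.

```python
def rank_with_combined_evidence(candidates, alpha=0.5):
    """candidates: list of (doc_id, content_score, link_score) triples, e.g. a
    TF-IDF score and an in-link count per candidate document."""
    if not candidates:
        return []

    def normalize(values):
        top = max(values)
        return [v / top if top else 0.0 for v in values]

    content = normalize([c for _, c, _ in candidates])
    link = normalize([l for _, _, l in candidates])
    fused = [(doc_id, alpha * c + (1.0 - alpha) * l)
             for (doc_id, _, _), c, l in zip(candidates, content, link)]
    return sorted(fused, key=lambda pair: pair[1], reverse=True)

# Example: content evidence alone ranks d1 first; link evidence promotes d2.
print(rank_with_combined_evidence([("d1", 2.0, 1), ("d2", 1.5, 10)]))
```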


ACM Transactions on Information Systems | 2003

Local versus global link information in the Web

Pável Calado; Berthier A. Ribeiro-Neto; Nivio Ziviani; Edleno Silva de Moura; Ilmério Silva

Information derived from the cross-references among the documents in a hyperlinked environment, usually referred to as link information, is considered important since it can be used to effectively improve document retrieval. Depending on the retrieval strategy, link information can be local or global. Local link information is derived from the set of documents returned as answers to the current user query. Global link information is derived from all the documents in the collection. In this work, we investigate how the use of local link information compares to the use of global link information. For the comparison, we run a series of experiments using a large document collection extracted from the Web. For our reference collection, the results indicate that the use of local link information improves precision by 74%. When global link information is used, precision improves by 35%. However, when only the first 10 documents in the ranking are considered, the average gain in precision obtained with the use of global link information is higher than the gain obtained with the use of local link information. This is an interesting result since it provides insight and justification for the use of global link information in major Web search engines, where users are mostly interested in the first 10 answers. Furthermore, global information can be computed in the background, which allows speeding up query processing.
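A small sketch of the distinction, using in-degree as a stand-in for link evidence (the paper evaluates its own link-based measures): global evidence is computed over the whole collection's link graph, while local evidence is computed only over the documents returned for the current query.

```python
from collections import defaultdict

def global_indegree(links):
    """Link evidence over the whole collection: links is an iterable of
    (source_doc, target_doc) pairs."""
    degree = defaultdict(int)
    for _, target in links:
        degree[target] += 1
    return degree

def local_indegree(links, answer_set):
    """The same evidence restricted to the documents returned for the
    current query (both endpoints must be in the answer set)."""
    answers = set(answer_set)
    degree = defaultdict(int)
    for source, target in links:
        if source in answers and target in answers:
            degree[target] += 1
    return degree
```

The global variant can be precomputed in the background, which matches the abstract's remark about query processing speed.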


International ACM SIGIR Conference on Research and Development in Information Retrieval | 1999

Efficient distributed algorithms to build inverted files

Berthier A. Ribeiro-Neto; Edleno Silva de Moura; Marden S. Neubert; Nivio Ziviani

We present three distributed algorithms to build global inverted files for very large text collections. The distributed environment we use is a high bandwidth network of workstations with a shared-nothing memory organization. The text collection is assumed to be evenly distributed among the disks of the various workstations. Our algorithms consider that the total distributed main memory is considerably smaller than the inverted file to be generated. The inverted file is compressed to save memory and disk space and to reduce the time spent moving data to and from disk and across the network. We analyze our algorithms and discuss the tradeoffs among them. We show that, with 8 processors and 16 megabytes of RAM available in each processor, the advanced variants of our algorithms are able to invert a 100-gigabyte collection (the size of the very large TREC-7 collection) in roughly 8 hours. Using 16 processors this time drops to roughly 4 hours.
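A simplified, single-process sketch of the overall pattern, not the paper's algorithms: each workstation inverts its local partition of the collection, and the partial indexes are then merged into a global inverted file. The compression of postings and the overlap of computation with communication that the paper analyzes are omitted here.

```python
from collections import defaultdict

def invert_partition(docs):
    """Local inversion on one workstation; docs maps doc_id to its text."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        for position, term in enumerate(text.lower().split()):
            index[term].append((doc_id, position))
    return index

def merge_partial_indexes(partial_indexes):
    """Merge the per-workstation indexes into one global inverted file.
    The paper's algorithms overlap this step with network communication and
    keep the postings compressed; here the merge is a single in-memory pass."""
    global_index = defaultdict(list)
    for partial in partial_indexes:
        for term, postings in partial.items():
            global_index[term].extend(postings)
    for postings in global_index.values():
        postings.sort()
    return global_index

# One partition per workstation in the shared-nothing setup.
partitions = [{"d1": "compressed inverted files"},
              {"d2": "merging inverted files"}]
global_index = merge_partial_indexes(invert_partition(p) for p in partitions)
```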


Abakós | 2012

Impact of template removal on Web search. DOI: 10.5752/P.2316-9451.2012v1n1p28

Kaio Wagner; Edleno Silva de Moura; David Fernandes; Marco Cristo; Altigran Soares da Silva

Previous work in the literature has indicated that templates of web pages represent noisy information in web collections, and has advocated that simply removing templates improves the quality of results provided by Web search systems. In this paper, we study the impact of template removal in two distinct scenarios: large-scale web search collections, which consist of several distinct websites, and intrasite web collections, involving searches inside a single web site. Our work is the first in the literature to study the impact of template removal on search systems over large-scale Web collections. The study was carried out using an automatic template detection method previously proposed by us. As contributions, we present statistics about the application of this automatic template detection method to the well-known GOV2 reference collection, a large-scale Web collection. We also present experiments comparing the amount of template detected by our automatic method to the amount obtained when humans select templates. Finally, we present experiments indicating that, in both scenarios, template removal does not improve the quality of results provided by search systems, but can play the role of an effective lossy compression method by reducing the size of their indexes.
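For illustration, a naive template-removal pipeline under the assumption that template content repeats across most pages of a site; the frequency threshold below is a crude stand-in for the authors' automatic detection method, not that method itself.

```python
from collections import Counter

def detect_template_lines(pages, min_fraction=0.8):
    """Mark a line as template if it occurs on at least min_fraction of the
    pages. A simple frequency heuristic used only to illustrate the idea."""
    counts = Counter()
    for page in pages:
        counts.update(set(page.splitlines()))
    threshold = min_fraction * len(pages)
    return {line for line, count in counts.items() if count >= threshold}

def strip_template(page, template_lines):
    """Drop template lines before indexing; per the abstract this mainly
    shrinks the index rather than improving result quality."""
    return "\n".join(line for line in page.splitlines()
                     if line not in template_lines)
```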


Archive | 2013

Multi-Objective Pareto-Efficient Approaches for Recommender Systems

Marco Túlio de Freitas Ribeiro; Anisio Lacerda; Edleno Silva de Moura; Itamar Hata; Adriano Veloso; Nivio Ziviani


SBBD Companion | 2018

Lathe: Light-Weight Keyword Query Processing over Multiple Databases

Pericles de Oliveira; Altigran Soares da Silva; Edleno Silva de Moura; Gilberto Santos


Archive | 2009

Uma Abordagem Flexível para Extração de Metadados em Citações Bibliográficas (A Flexible Approach to Metadata Extraction from Bibliographic Citations)

Altigran Soares da Silva; Edleno Silva de Moura


Archive | 2007

GERINDO: Managing and Retrieving Information in Large Document Collections

Nivio Ziviani; Alberto H. F. Laender; Edleno Silva de Moura; Altigran Soares da Silva; Carlos A. Heuser; Wagner Meira

Collaboration


Dive into Edleno Silva de Moura's collaborations.

Top Co-Authors

Berthier A. Ribeiro-Neto (Universidade Federal de Minas Gerais)
Carlos A. Heuser (Universidade Federal do Rio Grande do Sul)
Ilmério Silva (Universidade Federal de Minas Gerais)
Marden S. Neubert (Universidade Federal de Minas Gerais)
Wagner Meira (Universidade Federal do Rio Grande do Sul)
Pável Calado (Instituto Superior Técnico)
Adriano Veloso (Universidade Federal de Minas Gerais)