Publication


Featured research published by Artur Alves Pessoa.


International Symposium on Algorithms and Computation | 2004

Efficient algorithms for the hotlink assignment problem: the worst case search

Artur Alves Pessoa; Eduardo Sany Laber; Críston de Souza

Let T be a rooted directed tree where nodes represent web pages of a web site and arcs represent hyperlinks. In this case, when a user searches for an information item i, he traverses a directed path in T, from the root node to the node that contains i. In this context, hotlinks are defined as additional hyperlinks added to web pages in order to reduce the number of accessed pages per search. In this paper, we address the problem of inserting at most one hotlink in each web page, so as to minimize the number of accesses in a worst-case search. We present a (14/3)-approximate algorithm that runs in O(n log m) time and requires linear space, where n and m are the number of nodes (internal and external) and the number of leaves in T, respectively. We also introduce an exact dynamic programming algorithm which runs in $O(n(nm)^{2.284})$ time and uses $O(n(nm)^{1.441})$ space. By extending the techniques presented here, a polynomial-time algorithm can also be obtained when $\mathcal{K} = O(1)$ hotlinks may be inserted in each page. The best known result for this problem is a polynomial-time algorithm with constant approximation ratio for trees with bounded degree, presented by Gerstel et al. [1].
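
To make the cost model concrete, here is a minimal Python sketch (not the paper's (14/3)-approximation or its dynamic programming algorithm) that measures the worst-case number of page accesses for a given tree and hotlink assignment, assuming the user always knows which link advances the search; all function and variable names are illustrative.

```python
from collections import deque

def worst_case_search_cost(children, hotlinks, root):
    """Worst-case number of page accesses needed to reach any leaf,
    assuming the user always follows the (hot)link that advances the
    search, i.e. the effective cost of a page is its shortest-path
    distance from the root in the hotlink-enhanced tree.

    children: dict mapping each node to its list of tree children
    hotlinks: dict mapping a node to its single hotlink target
    """
    dist = {root: 1}                    # visiting the root counts as one access
    queue = deque([root])
    while queue:
        u = queue.popleft()
        out = list(children.get(u, []))
        if u in hotlinks:
            out.append(hotlinks[u])     # at most one hotlink per page
        for v in out:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    leaves = [u for u in dist if not children.get(u)]
    return max(dist[u] for u in leaves)

# A path a-b-c-d needs 4 accesses to reach d; a hotlink a->c cuts it to 3.
children = {"a": ["b"], "b": ["c"], "c": ["d"]}
print(worst_case_search_cost(children, {}, "a"))          # 4
print(worst_case_search_cost(children, {"a": "c"}, "a"))  # 3
```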


ACM Transactions on Information Systems | 2007

Reducing human interactions in Web directory searches

Ornan Ori Gerstel; Shay Kutten; Eduardo Sany Laber; Rachel Matichin; David Peleg; Artur Alves Pessoa; Críston de Souza

Consider a website containing a collection of web pages with data, such as Yahoo or the Open Directory Project. Each page is associated with a weight representing the frequency with which that page is accessed by users. In the tree hierarchy representation, accessing each page requires the user to travel along the path leading to it from the root. By enhancing the index tree with additional edges (hotlinks) one may reduce the access cost of the system. In other words, the hotlinks reduce the expected number of steps needed to reach a leaf page from the tree root, assuming that the user knows which hotlinks to take. The hotlink enhancement problem involves finding a set of hotlinks minimizing this cost.

This article proposes the first exact algorithm for the hotlink enhancement problem. This algorithm runs in polynomial time for trees with logarithmic depth. Experiments conducted with real data show that significant improvement in the expected number of accesses per search can be achieved in websites using this algorithm. These experiments also suggest that the simple and much faster heuristic proposed previously by Czyzowicz et al. [2003] creates hotlinks that are nearly optimal in the time savings they provide to the user.

The version of the hotlink enhancement problem in which the weight distribution on the leaves is unknown is discussed as well. We present a polynomial-time algorithm that is optimal for trees of arbitrary depth.
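
As a toy illustration of the objective (a naive heuristic, far simpler than the article's exact algorithm), the sketch below computes the expected number of accesses under a leaf-weight distribution and brute-forces the best single hotlink out of the root; all names are hypothetical.

```python
from collections import deque

def expected_access_cost(children, hotlinks, weights, root):
    """Expected accesses per search: depth of each leaf in the
    hotlink-enhanced tree, weighted by its access frequency."""
    dist, queue = {root: 1}, deque([root])
    while queue:
        u = queue.popleft()
        extra = [hotlinks[u]] if u in hotlinks else []
        for v in list(children.get(u, [])) + extra:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(weights.values())
    return sum(w * dist[leaf] for leaf, w in weights.items()) / total

def best_root_hotlink(children, weights, root):
    """Try every other node as the root's single hotlink and keep the
    one minimizing the expected cost; fine only for tiny trees."""
    candidates = (set(children) | set(weights)) - {root}
    return min(candidates,
               key=lambda v: expected_access_cost(children, {root: v}, weights, root))

children = {"a": ["b", "e"], "b": ["c", "d"]}
weights = {"c": 5, "d": 1, "e": 1}          # leaf access frequencies
print(expected_access_cost(children, {}, weights, "a"))   # ~2.86
print(best_root_hotlink(children, weights, "a"))          # 'c'
```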


Algorithm Engineering and Experimentation | 1999

Efficient Implementation of the WARM-UP Algorithm for the Construction of Length-Restricted Prefix Codes

Ruy Luiz Milidiú; Artur Alves Pessoa; Eduardo Sany Laber

Given an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ with a corresponding list of positive weights $\{w_1, \ldots, w_n\}$ and a length restriction L, the length-restricted prefix code problem is to find a prefix code that minimizes $\sum_{i=1}^{n} w_i l_i$, where $l_i$, the length of the codeword assigned to $a_i$, cannot be greater than L, for $i = 1, \ldots, n$. In this paper, we present an efficient implementation of the WARM-UP algorithm, an approximative method for this problem. The worst-case time complexity of WARM-UP is $O(n \log n + n \log w_n)$, where $w_n$ is the greatest weight. However, experiments with a previous implementation of WARM-UP show that it runs in linear time for several practical cases, if the input weights are already sorted. In addition, it often produces optimal codes. The proposed implementation combines two new enhancements to reduce the space usage of WARM-UP and to improve its execution time. As a result, it is about ten times faster than the previous implementation of WARM-UP and outperforms the LRR Package Method, the fastest known exact method.
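
WARM-UP itself is not reproduced here, but for reference, the sketch below is a compact (unoptimized) Python rendering of the classic package-merge algorithm of Larmore and Hirschberg, which solves the length-restricted problem exactly; it is shown only to make the problem statement concrete, and is not the method of this paper.

```python
def package_merge(weights, L):
    """Exact codeword lengths minimizing sum(w_i * l_i) subject to
    every l_i <= L, via the classic package-merge scheme: starting
    from the leaves at depth L, repeatedly pair ("package") adjacent
    items and merge them with a fresh copy of the leaves; the 2n - 2
    cheapest items of the last level define the code."""
    n = len(weights)
    if n == 1:
        return [1]
    assert n <= 2 ** L, "no prefix code with maximum length L exists"
    leaves = sorted((w, (i,)) for i, w in enumerate(weights))
    level = list(leaves)
    for _ in range(L - 1):
        packages = [(level[j][0] + level[j + 1][0],
                     level[j][1] + level[j + 1][1])
                    for j in range(0, len(level) - 1, 2)]
        level = sorted(packages + leaves)
    lengths = [0] * n
    for _, members in level[:2 * n - 2]:
        for i in members:   # a leaf's length = number of selected items containing it
            lengths[i] += 1
    return lengths

print(package_merge([1, 1, 2, 3], L=2))  # [2, 2, 2, 2]
print(package_merge([1, 1, 2, 3], L=3))  # [3, 3, 2, 1], i.e. plain Huffman
```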


Journal of Algorithms | 1999

Bounding the Compression Loss of the FGK Algorithm

Ruy Luiz Milidiú; Eduardo Sany Laber; Artur Alves Pessoa

An important issue related to coding schemes is their compression loss. A simple measure $\delta$ of the compression loss due to a coding scheme C different from Huffman coding is defined by $\delta = A_C - A_H$, where $A_H$ is the average code length of a static Huffman encoding and $A_C$ is the average code length of an encoding based on the compression scheme C. When the scheme C is the FGK algorithm, Vitter conjectured that $\delta \le K$ for some real constant K. Here, we use an amortized analysis to prove this conjecture. We show that $\delta < 2$. Furthermore, we show through an example that our bound is asymptotically tight. This result explains the good performance of FGK that many authors have observed in practical experiments.
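
For context, the static baseline $A_H$ is easy to compute; the sketch below (illustrative code, not from the paper) obtains it with the standard heap-based Huffman construction, without ever building the tree. The paper's result then bounds the adaptive FGK average by $A_C < A_H + 2$.

```python
import heapq

def huffman_average_length(freqs):
    """Average codeword length A_H (bits per symbol) of a static
    Huffman code: every merge of weights a and b pushes the merged
    symbols one level deeper, so each merge adds a + b to the total
    weighted length sum(w_i * l_i)."""
    heap = list(freqs)
    heapq.heapify(heap)
    weighted_length = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        weighted_length += a + b
        heapq.heappush(heap, a + b)
    return weighted_length / sum(freqs)

a_h = huffman_average_length([45, 13, 12, 16, 9, 5])
print(a_h)  # 2.24 bits/symbol; the paper bounds FGK's average by a_h + 2
```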


String Processing and Information Retrieval | 1998

In-place length-restricted prefix coding

Ruy Luiz Milidiú; Artur Alves Pessoa; Eduardo Sany Laber

Huffman codes, combined with word-based models, are considered efficient compression schemes for full-text retrieval systems. The decoding rate for these schemes can be substantially improved if the maximum length of the codewords is not greater than the machine word size L. However, if the vocabulary is large, simple methods for generating optimal length-restricted codes are either too slow or require a significantly large amount of memory. We present an in-place, simple and fast implementation of the BRCI (Build, Remove, Condense and Insert) algorithm, an approximative method for length-restricted coding. It overwrites a sorted input list of n weights with the corresponding codeword lengths in O(n) time. In addition, the worst-case compression loss introduced by BRCI codes with respect to unrestricted Huffman codes is proved to be negligible for all practical values of both L and n.
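
BRCI itself is not reproduced here; as a flavour of what "in-place" means in this setting, below is a Python transcription of the well-known in-place calculation of (unrestricted) Huffman codeword lengths due to Moffat and Katajainen, which likewise overwrites a sorted weight list with lengths in O(n) time and O(1) extra space.

```python
def inplace_code_lengths(a):
    """Overwrite a non-decreasing list of positive weights with the
    Huffman codeword length of each weight, using O(1) extra space
    (a Python transcription of Moffat and Katajainen's algorithm)."""
    n = len(a)
    if n < 2:
        if n == 1:
            a[0] = 0
        return
    # Phase 1: run the Huffman merges in place; a[i] becomes either a
    # combined weight or the index of the node's parent.
    a[0] += a[1]
    root, leaf = 0, 2
    for nxt in range(1, n - 1):
        if leaf >= n or a[root] < a[leaf]:                    # first child
            a[nxt] = a[root]; a[root] = nxt; root += 1
        else:
            a[nxt] = a[leaf]; leaf += 1
        if leaf >= n or (root < nxt and a[root] < a[leaf]):   # second child
            a[nxt] += a[root]; a[root] = nxt; root += 1
        else:
            a[nxt] += a[leaf]; leaf += 1
    # Phase 2: parent pointers -> internal node depths.
    a[n - 2] = 0
    for j in range(n - 3, -1, -1):
        a[j] = a[a[j]] + 1
    # Phase 3: internal node depths -> leaf depths (codeword lengths).
    avail, used, depth = 1, 0, 0
    root, nxt = n - 2, n - 1
    while avail > 0:
        while root >= 0 and a[root] == depth:
            used += 1; root -= 1
        while avail > used:
            a[nxt] = depth; nxt -= 1; avail -= 1
        avail, used, depth = 2 * used, 0, depth + 1

w = [1, 1, 2, 3]
inplace_code_lengths(w)
print(w)  # [3, 3, 2, 1]
```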


Data Compression Conference | 1999

A work-efficient parallel algorithm for constructing Huffman codes

Ruy Luiz Milidiú; Eduardo Sany Laber; Artur Alves Pessoa

Given an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ and a corresponding list of weights $[w_1, \ldots, w_n]$, a Huffman code for this alphabet is a prefix code that minimizes the weighted length of a code string, defined to be $\sum_{i=1}^{n} w_i l_i$, where $l_i$ is the length of the code assigned to $a_i$. We present ES-ParHuff, a work-efficient PRAM CREW algorithm for constructing Huffman codes. An important feature of the algorithm is its simplicity: it is a direct parallelization of Huffman's algorithm. ES-ParHuff runs in $O(H \log\log(n/H))$ time with O(n) work, where H is the length of the longest generated code.
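
ES-ParHuff is a PRAM algorithm and is not reproduced here. As a sequential point of reference only, the sketch below is the standard two-queue Huffman construction, which already achieves O(n) work on pre-sorted weights, the same work bound the abstract states for the parallel algorithm.

```python
from collections import deque

def huffman_cost_two_queue(sorted_weights):
    """Sum of w_i * l_i for a Huffman code, in O(n) time, given
    weights sorted in non-decreasing order: leaves are consumed from
    one queue, merged (internal) weights from another, and both
    queues stay sorted by construction."""
    leaves = deque(sorted_weights)
    internals = deque()

    def pop_min():
        if internals and (not leaves or internals[0] < leaves[0]):
            return internals.popleft()
        return leaves.popleft()

    cost = 0
    while len(leaves) + len(internals) > 1:
        a, b = pop_min(), pop_min()
        cost += a + b        # each merge deepens the merged symbols by one level
        internals.append(a + b)
    return cost

print(huffman_cost_two_queue([1, 1, 2, 3]))  # 13
```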


Latin American Symposium on Theoretical Informatics | 2002

Pipeline Transportation of Petroleum Products with No Due Dates

Ruy Luiz Milidiú; Artur Alves Pessoa; Eduardo Sany Laber

We introduce a new model for pipeline transportation of petroleum products with no due dates. We use a directed graph G with n nodes, where arcs represent pipes and nodes represent locations. We also define a set L of r transportation orders and a subset $F \subseteq L$ of further orders. A feasible solution to our model is a pumping sequence that delivers the products corresponding to all orders in L − F. We prove that the problem of finding such a solution is NP-hard, even if G is acyclic. For the special case where the products corresponding to orders in F are initially stored at nodes, we propose the BPA algorithm. This algorithm finds a feasible solution in $O(r^2 \log r + s^2(rn + \log s))$ time, where s is the total volume in the arcs of G. We point out that the input size is $\Theta(s)$. If G is acyclic, then BPA finds a minimum-cost solution.


IEEE Transactions on Information Theory | 2001

Three space-economical algorithms for calculating minimum-redundancy prefix codes

Ruy Luiz Milidiú; Artur Alves Pessoa; Eduardo Sany Laber

The minimum-redundancy prefix code problem is to determine, for a given list $W = [w_1, \ldots, w_n]$ of n positive symbol weights, a list $L = [l_1, \ldots, l_n]$ of n corresponding integer codeword lengths such that $\sum_{i=1}^{n} 2^{-l_i} \le 1$ and $\sum_{i=1}^{n} w_i l_i$ is minimized. Let us consider the case where W is already sorted. In this case, the output list L can be represented by a list $M = [m_1, \ldots, m_H]$, where $m_l$, for $l = 1, \ldots, H$, denotes the multiplicity of the codeword length l in L, and H is the length of the longest codeword. Fortunately, H is proved to be $O(\min(\log(1/p_1), n))$, where $p_1$ is the smallest symbol probability, given by $w_1 / \sum_{i=1}^{n} w_i$. We present the Fast LazyHuff (F-LazyHuff), the Economical LazyHuff (E-LazyHuff), and the Best LazyHuff (B-LazyHuff) algorithms. F-LazyHuff runs in O(n) time but requires $O(\min(H^2, n))$ additional space. On the other hand, E-LazyHuff runs in $O(n + n \log(n/H))$ time, requiring only O(H) additional space. Finally, B-LazyHuff asymptotically improves on both previous algorithms, requiring only O(n) time and O(H) additional space. Moreover, our three algorithms have the advantage of not writing over the input buffer during code calculation, a feature that is very useful in some applications.
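
The multiplicity representation M described in the abstract is easy to picture; the helper below (illustrative, not from the paper) converts a list of codeword lengths into M and checks the Kraft inequality on it.

```python
def length_multiplicities(lengths):
    """M = [m_1, ..., m_H], where m_l counts the codewords of length l
    and H is the longest codeword; a far smaller object than the full
    length list L when H << n."""
    H = max(lengths)
    m = [0] * (H + 1)
    for l in lengths:
        m[l] += 1
    return m[1:]

m = length_multiplicities([3, 3, 2, 1])
print(m)                                                    # [1, 1, 2]
print(sum(ml / 2 ** l for l, ml in enumerate(m, start=1)))  # 1.0, Kraft-tight
```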


European Symposium on Algorithms | 1999

Strategies for Searching with Different Access Costs

Eduardo Sany Laber; Ruy Luiz Milidiú; Artur Alves Pessoa



Theoretical Computer Science | 2003

The complexity of makespan minimization for pipeline transportation

Ruy Luiz Milidiú; Artur Alves Pessoa; Eduardo Sany Laber


Collaboration


Dive into Artur Alves Pessoa's collaboration.

Top Co-Authors:

Eduardo Sany Laber, Pontifical Catholic University of Rio de Janeiro
Ruy Luiz Milidiú, Pontifical Catholic University of Rio de Janeiro
Críston de Souza, Pontifical Catholic University of Rio de Janeiro
David Peleg, Weizmann Institute of Science
Rachel Matichin, Weizmann Institute of Science
Shay Kutten, Technion – Israel Institute of Technology