Publication


Featured research published by Lee Breslau.


International Conference on Computer Communications | 1999

Web caching and Zipf-like distributions: evidence and implications

Lee Breslau; Pei Cao; Li Fan; Graham Phillips; Scott Shenker

This paper addresses two unresolved issues about Web caching. The first issue is whether Web requests from a fixed user community are distributed according to Zipf's (1929) law. The second issue relates to a number of studies on the characteristics of Web proxy traces, which have shown that the hit-ratios and temporal locality of the traces exhibit certain asymptotic properties that are uniform across the different sets of traces. In particular, the question is whether these properties are inherent to Web accesses or whether they are simply an artifact of the traces. An answer to these unresolved issues will facilitate both Web cache resource planning and cache hierarchy design. We show that the answers to the two questions are related. We first investigate the page request distribution seen by Web proxy caches using traces from a variety of sources. We find that the distribution does not follow Zipf's law precisely, but instead follows a Zipf-like distribution with the exponent varying from trace to trace. Furthermore, we find that there is only (i) a weak correlation between the access frequency of a Web page and its size and (ii) a weak correlation between access frequency and its rate of change. We then consider a simple model where the Web accesses are independent and the reference probability of the documents follows a Zipf-like distribution. We find that the model yields asymptotic behaviour that is consistent with the experimental observations, suggesting that the various observed properties of hit-ratios and temporal locality are indeed inherent to Web accesses observed by proxies. Finally, we revisit Web cache replacement algorithms and show that the algorithm that is suggested by this simple model performs best on real trace data. The results indicate that while page requests do indeed reveal short-term correlations and other structures, a simple model for an independent request stream following a Zipf-like distribution is sufficient to capture certain asymptotic properties observed at Web proxies.
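
A small sketch of the kind of model the abstract describes, under illustrative assumptions (the page count, the exponent alpha, and a cache that simply pins the most popular pages are all choices made here, not the paper's experimental setup): requests are drawn independently from a Zipf-like distribution, with P(i) proportional to 1/i^alpha, and the resulting hit ratio is measured for several cache sizes.

# Illustrative sketch (not the paper's code): an independent request stream
# drawn from a Zipf-like distribution and the hit ratio of a cache that
# holds the cache_size most popular pages.
import random

def zipf_like(num_pages, alpha):
    weights = [1.0 / (i ** alpha) for i in range(1, num_pages + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def simulate_hit_ratio(num_pages=10000, alpha=0.8, cache_size=100, num_requests=200000):
    probs = zipf_like(num_pages, alpha)
    pages = list(range(num_pages))
    requests = random.choices(pages, weights=probs, k=num_requests)
    hot = set(range(cache_size))            # the cache_size most popular pages
    hits = sum(1 for r in requests if r in hot)
    return hits / num_requests

if __name__ == "__main__":
    for c in (10, 100, 1000):
        print(c, round(simulate_hit_ratio(cache_size=c), 3))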


ACM Special Interest Group on Data Communication | 2003

Making Gnutella-like P2P systems scalable

Yatin Chawathe; Sylvia Ratnasamy; Lee Breslau; Nick Lanham; Scott Shenker

Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to the wide-area file search problem. Contrary to that trend, we advocate retaining Gnutella's simplicity while proposing new mechanisms that greatly improve its scalability. Building upon prior research [1, 12, 22], we propose several modifications to Gnutella's design that dynamically adapt the overlay topology and the search algorithms in order to accommodate the natural heterogeneity present in most peer-to-peer systems. We test our design through simulations and the results show three to five orders of magnitude improvement in total system capacity. We also report on a prototype implementation and its deployment on a testbed.
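
One way to picture the kind of capacity-aware adaptation the abstract refers to is a query walk that prefers high-capacity neighbors. The sketch below is a hedged illustration of that general idea, not the authors' Gnutella modifications; the Node fields and the TTL-bounded walk are assumptions made here for illustration.

# Hedged sketch: capacity-biased query forwarding in an unstructured overlay.
import random

class Node:
    def __init__(self, node_id, capacity):
        self.node_id = node_id
        self.capacity = capacity   # relative bandwidth/CPU budget (assumed known)
        self.neighbors = []        # list of Node
        self.files = set()         # filenames stored at this node

def forward_query(start, filename, ttl=10):
    """Walk the overlay, preferring high-capacity neighbors, until a hit or the TTL expires."""
    current = start
    for _ in range(ttl):
        if filename in current.files:
            return current
        if not current.neighbors:
            return None
        weights = [n.capacity for n in current.neighbors]
        current = random.choices(current.neighbors, weights=weights, k=1)[0]
    return None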


ACM Special Interest Group on Data Communication | 2000

Endpoint admission control: architectural issues and performance

Lee Breslau; Edward W. Knightly; Scott Shenker; Ion Stoica; Hui Zhang

The traditional approach to implementing admission control, as exemplified by the Integrated Services proposal in the IETF, uses a signalling protocol to establish reservations at all routers along the path. While providing excellent quality-of-service, this approach has limited scalability because it requires routers to keep per-flow state and to process per-flow reservation messages. In an attempt to implement admission control without these scalability problems, several recent papers have proposed various forms of endpoint admission control. In these designs, the hosts (the endpoints) probe the network to detect the level of congestion; the host admits the flow only if the detected level of congestion is sufficiently low. This paper is devoted to the study of endpoint admission control. We first consider several architectural issues that guide (and constrain) the design of such systems. We then use simulations to evaluate the performance of endpoint admission control in various settings. The modest performance degradation between traditional router-based admission control and endpoint admission control suggests that a real-time service based on endpoint probing may be viable.
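
A minimal sketch of the endpoint-probing idea described above: the host sends probe packets, measures the fraction that are lost (or congestion-marked), and admits the flow only if that fraction is below a threshold. The send_probe callback, probe count, and threshold here are assumptions for illustration, not the specific schemes evaluated in the paper.

import random

def admit_flow(send_probe, num_probes=100, loss_threshold=0.01):
    """Probe the path; admit the flow only if the measured loss fraction is below the threshold."""
    lost = sum(0 if send_probe() else 1 for _ in range(num_probes))
    return (lost / num_probes) <= loss_threshold

# Example: a hypothetical probe transport that loses about 2% of probes
# would usually be rejected against a 1% loss threshold.
print(admit_flow(lambda: random.random() > 0.02))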


ACM Special Interest Group on Data Communication | 2002

On the characteristics and origins of Internet flow rates

Yin Zhang; Lee Breslau; Vern Paxson; Scott Shenker

This paper considers the distribution of the rates at which flows transmit data, and the causes of these rates. First, using packet level traces from several Internet links, and summary flow statistics from an ISP backbone, we examine Internet flow rates and the relationship between the rate and other flow characteristics such as size and duration. We find, as have others, that while the distribution of flow rates is skewed, it is not as highly skewed as the distribution of flow sizes. We also find that for large flows the size and rate are highly correlated. Second, we attempt to determine the cause of the rates at which flows transmit data by developing a tool, T-RAT, to analyze packet-level TCP dynamics. In our traces, the most frequent causes appear to be network congestion and receiver window limits.
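
As a small illustration of the kind of relationship the abstract examines (this is not the T-RAT tool itself), the sketch below computes per-flow rates from (bytes, duration) records and the correlation between log(size) and log(rate) for large flows. The input format and the size cutoff are assumptions made here.

# Illustrative sketch: correlation between log(flow size) and log(flow rate).
import math

def log_correlation(flows, min_bytes=100 * 1024):
    """flows: iterable of (bytes, duration_seconds) tuples."""
    pairs = [(math.log(b), math.log(b / d)) for b, d in flows if b >= min_bytes and d > 0]
    n = len(pairs)
    if n < 2:
        return None
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy) if vx and vy else None

# Example with made-up flow records (bytes, seconds).
print(log_correlation([(2000000, 4.0), (800000, 2.0), (150000, 0.5)]))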


ACM Special Interest Group on Data Communication | 1999

A scalable Web cache consistency architecture

Haobo Yu; Lee Breslau; Scott Shenker

The rapid increase in web usage has led to dramatically increased loads on the network infrastructure and on individual web servers. To ameliorate these mounting burdens, there has been much recent interest in web caching architectures and algorithms. Web caching reduces network load, server load, and the latency of responses. However, web caching has the disadvantage that the pages returned to clients by caches may be stale, in that they may not be consistent with the version currently on the server. In this paper we describe a scalable web cache consistency architecture that provides fairly tight bounds on the staleness of pages. Our architecture borrows heavily from the literature, and can best be described as an invalidation approach made scalable by using a caching hierarchy and application-level multicast routing to convey the invalidations. We evaluate this design with calculations and simulations, and compare it to several other approaches.
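
A hedged sketch of the invalidation idea described above: when an object changes at the origin, an invalidation is pushed down the caching hierarchy so that each cache drops its stale copy. The class and field names are illustrative, and the sketch omits the application-level multicast routing the paper relies on for scalability.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.children = []          # lower-level caches subscribed to this node
        self.store = {}             # url -> cached object

    def invalidate(self, url):
        self.store.pop(url, None)   # drop the stale copy, if present
        for child in self.children: # propagate the invalidation downward
            child.invalidate(url)

# Example: a two-level hierarchy where the origin invalidates a changed page.
root = CacheNode("parent")
leaf = CacheNode("leaf")
root.children.append(leaf)
root.store["/index.html"] = "v1"
leaf.store["/index.html"] = "v1"
root.invalidate("/index.html")      # both cached copies are dropped
print(root.store, leaf.store)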


ACM Special Interest Group on Data Communication | 2003

Approximate fairness through differential dropping

Rong Pan; Lee Breslau; Balaji Prabhakar; Scott Shenker

Many researchers have argued that the Internet architecture would be more robust and more accommodating of heterogeneity if routers allocated bandwidth fairly. However, most of the mechanisms proposed to accomplish this, such as Fair Queueing [16, 6] and its many variants [2, 23, 15], involve complicated packet scheduling algorithms. These algorithms, while increasingly common in router designs, may not be inexpensively implementable at extremely high speeds; thus, finding more easily implementable variants of such algorithms may be of significant practical value. This paper proposes an algorithm called Approximate Fair Dropping (AFD), which bases its dropping decisions on the recent history of packet arrivals. AFD retains a simple forwarding path and requires an amount of additional state that is small compared to current packet buffers. Simulation results, which we describe here, suggest that the design provides a reasonable degree of fairness in a wide variety of operating conditions. The performance of our approach is aided by the fact that the vast majority of Internet flows are slow but the fast flows send the bulk of the bits. This allows a small sample of recent history to provide accurate rate estimates of the fast flows.
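
A hedged sketch of the differential-dropping idea: keep a small sample of recent packet arrivals, estimate each flow's share from that sample, and drop the flow's packets with probability max(0, 1 - fair_share / estimated_share). The sampling structure and the fixed fair share used here are simplifications made for illustration, not the paper's AFD design.

import random
from collections import Counter, deque

class AFDSketch:
    def __init__(self, history_size=1000, fair_share=50):
        self.history = deque(maxlen=history_size)  # sample of recent arrivals
        self.counts = Counter()
        self.fair_share = fair_share               # target per-flow count within the sample

    def on_arrival(self, flow_id):
        """Return True if this packet should be dropped."""
        if len(self.history) == self.history.maxlen:
            old = self.history.popleft()
            self.counts[old] -= 1
        self.history.append(flow_id)
        self.counts[flow_id] += 1
        estimated = self.counts[flow_id]
        drop_prob = max(0.0, 1.0 - self.fair_share / estimated)
        return random.random() < drop_prob

# Example: a flow that dominates the recent sample sees a high drop probability.
afd = AFDSketch()
print(sum(afd.on_arrival("flow-1") for _ in range(2000)))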


ACM Special Interest Group on Data Communication | 1998

Uniform versus priority dropping for layered video

Sandeep Bajaj; Lee Breslau; Scott Shenker

In this paper, we analyze the relative merits of uniform versus priority dropping for the transmission of layered video. We first present our original intuitions about these two approaches, and then investigate the issue more thoroughly through simulations and analysis in which we explicitly model the performance of layered video applications. We compare both their performance characteristics and incentive properties, and find that the performance benefit of priority dropping is smaller than we expected, while uniform dropping has worse incentive properties than we previously believed.
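
The sketch below is an illustrative rendering of the two policies being compared: when a fraction of packets must be dropped, "uniform" drops without regard to layer, while "priority" drops the highest (least important) layers first. The packet representation and the drop fraction are assumptions made here; this is not the simulation setup used in the paper.

import random

def uniform_drop(packets, drop_fraction):
    return [p for p in packets if random.random() >= drop_fraction]

def priority_drop(packets, drop_fraction):
    # Drop the highest-numbered (least important) layers first, preserving arrival order.
    to_drop = int(len(packets) * drop_fraction)
    ordered = sorted(packets, key=lambda p: p["layer"], reverse=True)
    dropped = set(id(p) for p in ordered[:to_drop])
    return [p for p in packets if id(p) not in dropped]

# Example: 30 packets spread over three layers, with 30% of them dropped.
packets = [{"seq": i, "layer": i % 3} for i in range(30)]   # layer 0 = base layer
print(len(uniform_drop(packets, 0.3)), len(priority_drop(packets, 0.3)))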


ACM Special Interest Group on Data Communication | 1998

Best-effort versus reservations: a simple comparative analysis

Lee Breslau; Scott Shenker

Using a simple analytical model, this paper addresses the following question: Should the Internet retain its best-effort-only architecture, or should it adopt one that is reservation-capable? We characterize the differences between reservation-capable and best-effort-only networks in terms of application performance and total welfare. Our analysis does not yield a definitive answer to the question we pose, since it would necessarily depend on unknowable factors such as the future cost of network bandwidth and the nature of the future traffic load. However, our model does reveal some interesting phenomena. First, in some circumstances, the amount of incremental bandwidth needed to make a best-effort-only network perform as well as a reservation-capable one diverges as capacity increases. Second, in some circumstances reservation-capable networks retain significant advantages over best-effort-only networks, no matter how cheap bandwidth becomes. Lastly, we find bounds on the maximum performance advantage a reservation-capable network can achieve over best-effort architectures.
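
A hedged sketch in the spirit of the comparison described above, not the paper's actual model: n flows share capacity C; a best-effort network gives each flow C/n, while a reservation-capable network admits only as many flows as it can serve at their required bandwidth. The hard, all-or-nothing utility function is an assumption chosen to make the contrast visible.

def utility(bandwidth, required):
    # A crude hard real-time utility: full value only if the requirement is met.
    return 1.0 if bandwidth >= required else 0.0

def best_effort_welfare(n_flows, capacity, required):
    share = capacity / n_flows
    return n_flows * utility(share, required)

def reservation_welfare(n_flows, capacity, required):
    admitted = min(n_flows, int(capacity // required))
    return admitted * utility(required, required)

# Example: 20 flows, capacity 10, each flow needs bandwidth 1.
print(best_effort_welfare(20, 10.0, 1.0), reservation_welfare(20, 10.0, 1.0))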


ACM Special Interest Group on Data Communication | 1995

Two issues in reservation establishment

Scott Shenker; Lee Breslau

This paper addresses two issues related to resource reservation establishment in packet switched networks offering real-time services. The first issue arises out of the natural tension between the local nature of reservations (i.e., they control the service provided on a particular link) and the end-to-end nature of application service requirements. How do reservation establishment protocols enable applications to receive their desired end-to-end service? We review the current one-pass and two-pass approaches, and then propose a new hybrid approach called one-pass-with-advertising. The second issue in reservation establishment we consider arises from the inevitable heterogeneity in network router capabilities. Some routers and subnets in the Internet will support real-time services and others, such as Ethernets, will not. How can a reservation establishment mechanism enable applications to achieve the end-to-end service they desire in the face of this heterogeneity? We propose an approach involving replacement services and advertising to build end-to-end service out of heterogeneous per-link service offerings.
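
A minimal sketch of the one-pass-with-advertising idea as described above: the setup message accumulates a per-link advertisement (here, an additive delay bound) on its way to the receiver, which can then choose a reservation level with a realistic view of the end-to-end service. The field names and the additive-delay assumption are illustrative, not the protocol's actual message format.

def forward_setup(path_links, setup=None):
    setup = setup or {"advertised_delay_ms": 0.0}
    for link in path_links:                       # each link contributes its local bound
        setup["advertised_delay_ms"] += link["local_delay_bound_ms"]
    return setup                                  # delivered to the receiver, which picks a reservation

# Example path of three links with hypothetical local delay bounds.
path = [{"local_delay_bound_ms": 5.0}, {"local_delay_bound_ms": 12.0}, {"local_delay_bound_ms": 3.0}]
print(forward_setup(path))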


Measurement and Modeling of Computer Systems | 2004

Coping with network failures: routing strategies for optimal demand oblivious restoration

David Applegate; Lee Breslau; Edith Cohen

Link and node failures in IP networks pose a challenge for network control algorithms. Routing restoration, which computes new routes that avoid failed links, involves fundamental tradeoffs between efficient use of network resources, complexity of the restoration strategy and disruption to network traffic. In order to achieve a balance between these goals, obtaining routings that provide good performance guarantees under failures is desirable. In this paper, building on previous work that provided performance guarantees under uncertain (and potentially unknown) traffic demands, we develop algorithms for computing optimal restoration paths and a methodology for evaluating the performance guarantees of routing under failures. We then study the performance of route restoration on a diverse collection of ISP networks. Our evaluation uses a competitive-analysis-type framework, where the performance of routing with restoration paths under failures is compared to the best possible performance on the failed network. We conclude that with careful selection of restoration paths one can obtain restoration strategies that retain nearly optimal performance on the failed network while minimizing disruptions to traffic flows that did not traverse the failed parts of the network.
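
A hedged sketch of the evaluation framework described above: for each failure scenario, compare the maximum link utilization achieved by the fixed routing plus its precomputed restoration paths against the best achievable utilization on the failed network, and report the worst-case ratio. The inputs are assumed to be computed elsewhere; this is not the authors' optimization code.

def worst_case_performance_ratio(scenarios):
    """scenarios: iterable of (restored_max_utilization, optimal_max_utilization) pairs."""
    ratios = [restored / optimal for restored, optimal in scenarios if optimal > 0]
    return max(ratios) if ratios else None

# Example with hypothetical utilizations for three failure scenarios.
print(worst_case_performance_ratio([(0.62, 0.55), (0.80, 0.64), (0.45, 0.45)]))  # about 1.25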
