Peter B. Danzig
University of Southern California
Publications
Featured research published by Peter B. Danzig.
International World Wide Web Conference | 1995
C. Mic Bowman; Peter B. Danzig; Darren R. Hardy; Udi Manber; Michael F. Schwartz
It is increasingly difficult to make effective use of Internet information, given the rapid growth in data volume, user base, and data diversity. In this paper we introduce Harvest, a system that provides a scalable, customizable architecture for gathering, indexing, caching, replicating, and accessing Internet information.
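For illustration, here is a minimal sketch that loosely mirrors Harvest's gatherer/broker split: a gatherer extracts compact summaries from documents, and a broker builds a searchable index from them. The data format, class names, and query interface are invented for this sketch and do not reflect Harvest's actual SOIF format or APIs.

```python
from collections import defaultdict

def gather(documents):
    """Gatherer: extract compact summaries (url, keyword set) from raw documents."""
    for url, text in documents.items():
        yield url, set(text.lower().split())

class Broker:
    """Broker: build a searchable inverted index from gatherer summaries."""
    def __init__(self):
        self.index = defaultdict(set)

    def ingest(self, summaries):
        for url, keywords in summaries:
            for word in keywords:
                self.index[word].add(url)

    def query(self, word):
        return self.index.get(word.lower(), set())

docs = {"http://a.example/paper": "scalable internet indexing",
        "http://b.example/note": "caching and replication"}
broker = Broker()
broker.ingest(gather(docs))
print(broker.query("indexing"))   # {'http://a.example/paper'}
```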
IEEE/ACM Transactions on Networking | 1997
Sugih Jamin; Peter B. Danzig; Scott Shenker; Lixia Zhang
Many designs for integrated services networks offer a bounded delay packet delivery service to support real-time applications. To provide a bounded delay service, networks must use admission control to regulate their load. Previous work on admission control mainly focused on algorithms that compute the worst case theoretical queueing delay to guarantee an absolute delay bound for all packets. In this paper, we describe a measurement-based admission control algorithm (ACA) for predictive service, which allows occasional delay violations. We have tested our algorithm through simulations on a wide variety of network topologies and driven with various source models, including some that exhibit long-range dependence, both in themselves and in their aggregation. Our simulation results suggest that the measurement-based approach combined with the relaxed service commitment of predictive service enables us to achieve a high level of network utilization while still reliably meeting the delay bound.
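As a rough illustration of the measurement-based idea (not the paper's exact algorithm, which also tracks measured delays against each class's delay bound), a link could admit a new flow only while the measured aggregate load plus the flow's advertised rate stays under a utilization target:

```python
class MeasuredSumAdmission:
    """Illustrative measurement-based admission test; parameters are assumptions."""
    def __init__(self, link_capacity_bps, utilization_target=0.9):
        self.capacity = link_capacity_bps
        self.target = utilization_target
        self.measured_load = 0.0  # updated from periodic rate measurements

    def update_measurement(self, measured_bps):
        self.measured_load = measured_bps

    def admit(self, advertised_rate_bps):
        return (self.measured_load + advertised_rate_bps
                <= self.target * self.capacity)

link = MeasuredSumAdmission(link_capacity_bps=10e6)
link.update_measurement(7.5e6)
print(link.admit(1e6))   # True: 8.5 Mb/s fits under the 9 Mb/s target
print(link.admit(2e6))   # False: 9.5 Mb/s exceeds the target
```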
International Conference on Computer Communications | 1997
Sugih Jamin; Scott Shenker; Peter B. Danzig
We compare the performance of four admission control algorithms (one parameter-based and three measurement-based) for controlled-load service. The parameter-based admission control ensures that the sum of reserved resources is bounded by the capacity. The three measurement-based algorithms are based on measured bandwidth, acceptance region, and equivalent bandwidth. We use simulation on several network scenarios to evaluate the link utilization and adherence to service commitment achieved by these four algorithms.
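To make the contrast concrete, the following sketch pairs the parameter-based test (sum of reservations bounded by capacity) with an equivalent-bandwidth test using the common Gaussian approximation. The loss target epsilon and the closed form for the multiplier are textbook assumptions, not necessarily the paper's exact formulation.

```python
import math

def parameter_based_admit(reserved_rates, new_rate, capacity):
    """Admit iff the sum of all reserved rates fits within capacity."""
    return sum(reserved_rates) + new_rate <= capacity

def equivalent_bandwidth_admit(mean, stddev, capacity, epsilon=1e-6):
    """Gaussian approximation: admit iff mean + a*stddev <= capacity,
    where a is chosen so the overflow probability is roughly epsilon."""
    a = math.sqrt(-2.0 * math.log(epsilon))
    return mean + a * stddev <= capacity

print(parameter_based_admit([3e6, 4e6], 2e6, capacity=10e6))              # True
print(equivalent_bandwidth_admit(mean=7e6, stddev=0.5e6, capacity=10e6))  # True
```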
Communications of the ACM | 1994
C. Mic Bowman; Peter B. Danzig; Udi Manber; Michael F. Schwartz
Over the past several years, a number of information discovery and access tools have been introduced in the Internet, including Archie, Gopher, Netfind, and WAIS. These tools have become quite popular, and are helping to redefine how people think about wide-area network applications. Yet, they are not well suited to supporting the future information infrastructure, which will be characterized by enormous data volume, rapid growth in the user base, and burgeoning data diversity. In this paper we indicate trends in these three dimensions and survey problems these trends will create for current approaches. We then suggest several promising directions of future resource discovery research, along with some initial results from projects carried out by members of the Internet Research Task Force Research Group on Resource Discovery and Directory Service.
ACM Special Interest Group on Data Communication | 1995
Jong Suk Ahn; Peter B. Danzig; Zhen Liu; Limin Yan
This paper explores the claims that TCP Vegas [2] both uses network bandwidth more efficiently and achieves higher network throughput than TCP Reno [6]. It explores how link bandwidth, network buffer capacity, TCP receiver acknowledgment algorithm, and degree of network congestion affect the relative performance of Vegas and Reno.
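For readers unfamiliar with Vegas, the sketch below shows its widely cited congestion-avoidance rule: compare expected and actual throughput and nudge the window accordingly. The alpha/beta thresholds are the usual textbook values, and slow start, loss recovery, and Vegas' fine-grained timers are omitted.

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
    """Return the new congestion window (in packets) under Vegas' rule."""
    expected = cwnd / base_rtt             # throughput if there were no queuing
    actual = cwnd / current_rtt            # throughput actually measured
    diff = (expected - actual) * base_rtt  # estimate of packets queued in the network
    if diff < alpha:
        return cwnd + 1    # too little data in flight: grow linearly
    elif diff > beta:
        return cwnd - 1    # queues building up: back off
    return cwnd            # operating in the sweet spot

print(vegas_adjust(cwnd=20, base_rtt=0.100, current_rtt=0.105))  # 21
```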
ACM Special Interest Group on Data Communication | 1995
Sugih Jamin; Peter B. Danzig; Scott Shenker; Lixia Zhang
Many designs for integrated service networks offer a bounded delay packet delivery service to support real-time applications. To provide bounded delay service, networks must use admission control to regulate their load. Previous work on admission control mainly focused on algorithms that compute the worst case theoretical queueing delay to guarantee an absolute delay bound for all packets. In this paper we describe a measurement-based admission control algorithm for predictive service, which allows occasional delay violations. We have tested our algorithm through simulations on a wide variety of network topologies and driven with various source models, including some that exhibit long-range dependence, both in themselves and in their aggregation. Our simulation results suggest that, at least for the scenarios studied here, the measurement-based approach combined with the relaxed service commitment of predictive service enables us to achieve a high level of network utilization while still reliably meeting the delay bound.
ACM Special Interest Group on Data Communication | 1993
Peter B. Danzig; Richard S. Hall; Michael F. Schwartz
This paper presents evidence that several, judiciously placed file caches could reduce the volume of FTP traffic by 42%, and hence the volume of all NSFNET backbone traffic by 21%. In addition, if FTP client and server software automatically compressed data, this savings could increase to 27%. We believe that a hierarchical architecture of whole-file caches, modeled after the existing name server caching architecture, could become a valuable part of any internet. We derived these conclusions by performing trace-driven simulations of various file caching architectures, cache sizes, and replacement policies. We collected the traces of file transfer traffic employed in our simulations on a network that connects the NSFNET backbone to a large, regional network. This particular regional network is responsible for about 5 to 7% of NSFNET traffic. While this paper's analysis and discussion focus on caching for FTP file transfer, the proposed caching architecture applies to caching objects from other internetwork services.
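A toy version of the paper's methodology, a trace-driven simulation of a whole-file cache under LRU replacement, might look like the following; the trace format and file sizes are invented for illustration.

```python
from collections import OrderedDict

def simulate_lru(trace, cache_bytes):
    """trace: iterable of (filename, size_bytes). Returns the byte hit rate."""
    cache, used = OrderedDict(), 0
    hit_bytes = total_bytes = 0
    for name, size in trace:
        total_bytes += size
        if name in cache:
            hit_bytes += size
            cache.move_to_end(name)                 # mark as most recently used
            continue
        while used + size > cache_bytes and cache:
            _, evicted_size = cache.popitem(last=False)  # evict the LRU file whole
            used -= evicted_size
        if size <= cache_bytes:
            cache[name] = size
            used += size
    return hit_bytes / total_bytes if total_bytes else 0.0

trace = [("a.tar", 5), ("b.tar", 3), ("a.tar", 5), ("c.tar", 4), ("a.tar", 5)]
print(simulate_lru(trace, cache_bytes=8))  # fraction of bytes served from cache
```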
IEEE Computer | 1993
Katia Obraczka; Peter B. Danzig; Shih-Hao Li
An overview of resource discovery services currently available on the Internet is presented. The authors concentrate on the following discovery tools: the Wide Area Information Servers (WAIS) project, Archie, Prospero, Gopher, the World-Wide Web (WWW), Netfind, the X.500 directory, Indie, the Knowbot Information Service (KIS), Alex, Semantic File Systems, and Nomenclator. The authors summarize the surveyed tools by presenting a taxonomy of their characteristics and design decisions. They also describe where to find and how to access several of the surveyed discovery services. They conclude with a discussion of future directions in the area of resource discovery and retrieval.
ACM Special Interest Group on Data Communication | 1992
Peter B. Danzig; Katia Obraczka; Anant Kumar
Over a million computers implement the Internet's Domain Name System (DNS), making it the world's most distributed database and the Internet's most significant source of wide-area RPC-like traffic. Last year, over eight percent of the packets and four percent of the bytes that traversed the NSFnet were due to DNS. We estimate that a third of this wide-area DNS traffic was destined to seven root name servers. This paper explores the performance of DNS based on two 24-hour traces of traffic destined to one of these root name servers. It considers the effectiveness of name caching and retransmission timeout calculation, shows how algorithms to increase DNS's resiliency lead to disastrous behavior when servers fail or when certain implementation faults are triggered, explains the paradoxically high fraction of wide-area DNS packets, and evaluates the impact of flaws in various implementations of DNS. It shows that negative caching would improve DNS performance only marginally in an internet of correctly implemented name servers. It concludes by calling for a fundamental change in the way we specify and implement future name servers and distributed applications.
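The negative-caching idea evaluated above can be sketched as a resolver cache that stores NXDOMAIN answers with their own TTL, so repeated lookups of bad names do not re-query the servers. The names, TTL values, and query interface here are assumptions for this sketch, not any real resolver's API.

```python
import time

class ResolverCache:
    """Toy resolver cache with negative caching of NXDOMAIN answers."""
    def __init__(self, negative_ttl=300):
        self.entries = {}              # name -> (answer_or_None, expires_at)
        self.negative_ttl = negative_ttl

    def lookup(self, name, query_fn):
        entry = self.entries.get(name)
        if entry and entry[1] > time.time():
            return entry[0]            # positive OR negative cache hit
        answer, ttl = query_fn(name)   # (None, _) stands in for NXDOMAIN
        ttl = ttl if answer is not None else self.negative_ttl
        self.entries[name] = (answer, time.time() + ttl)
        return answer

cache = ResolverCache()
fake_dns = lambda name: ("192.0.2.1", 3600) if name == "a.example" else (None, 0)
print(cache.lookup("missing.example", fake_dns))  # None: queried, then cached
print(cache.lookup("missing.example", fake_dns))  # None: served from negative cache
```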
IEEE/ACM Transactions on Networking | 1996
Jong Suk Ahn; Peter B. Danzig
This paper describes a new technique that can speed up simulation of high-speed, wide-area packet networks by one to two orders of magnitude. Speedup is achieved by coarsening the representation of network traffic from packet-by-packet to train-by-train, where a train represents a cluster of closely spaced packets. Coarsening the timing granularity creates longer trains and makes the simulation proceed more quickly, since the cost of processing trains is independent of train size. It also introduces, of course, a degree of approximation. This paper presents experiments that evaluate our coarse time-grain simulation technique for first-in/first-out (FIFO) switched, TCP/IP, and asynchronous transfer mode (ATM) networks carrying a mix of data and streaming traffic. We show that delay, throughput, and loss rate can frequently be estimated within a few percent via coarse time-grain simulation. This paper also describes how to apply coarse time-grain simulation to other switch disciplines. Finally, this paper introduces three more simulation techniques which together can double the performance of well-written packet simulators without sacrificing simulation accuracy. These techniques reduce the number of outstanding simulation events and reduce the cost of manipulating the event list.
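The train abstraction can be sketched as follows: packets whose inter-arrival spacing falls below a gap threshold merge into one train, so the simulator handles one event per train rather than one per packet. The threshold value below is illustrative, not taken from the paper.

```python
def packets_to_trains(arrivals, gap_threshold=0.001):
    """arrivals: sorted packet arrival times in seconds.
    Returns trains as (start_time, end_time, packet_count) tuples."""
    trains = []
    start = prev = arrivals[0]
    count = 1
    for t in arrivals[1:]:
        if t - prev <= gap_threshold:
            count += 1                   # packet joins the current train
        else:
            trains.append((start, prev, count))
            start, count = t, 1          # gap too large: start a new train
        prev = t
    trains.append((start, prev, count))
    return trains

arrivals = [0.0000, 0.0004, 0.0009, 0.0100, 0.0104]
print(packets_to_trains(arrivals))  # two trains: 3 packets, then 2
```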