Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sally Floyd is active.

Publication


Featured research published by Sally Floyd.


IEEE/ACM Transactions on Networking | 1993

Random early detection gateways for congestion avoidance

Sally Floyd; Van Jacobson

The authors present random early detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size. RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their windows at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.
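The marking rule summarized above can be sketched in a few lines. The parameter names (queue weight, min/max thresholds, maximum marking probability) follow the paper's terminology, but the values and class structure here are illustrative assumptions, and the paper's refinement that spaces marks evenly between drops is omitted.

```python
import random

class REDGateway:
    """Minimal sketch of Random Early Detection (illustrative parameters)."""

    def __init__(self, w_q=0.002, min_th=5.0, max_th=15.0, max_p=0.02):
        self.w_q = w_q          # EWMA weight for the average queue size
        self.min_th = min_th    # below this average, never drop/mark
        self.max_th = max_th    # at or above this average, always drop/mark
        self.max_p = max_p      # marking probability reached at max_th
        self.avg = 0.0          # exponentially weighted average queue size

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped or marked."""
        # Track the *average* queue size, so short bursts are tolerated
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # Probability rises linearly from 0 to max_p between the thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

Because the marking probability depends on the average rather than the instantaneous queue, a burst that briefly fills the queue is admitted, while sustained congestion is signaled early.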


ACM SIGCOMM | 2000

Equation-based congestion control for unicast applications

Sally Floyd; Mark Handley; Jitendra Padhye; Jörg Widmer

This paper proposes a mechanism for equation-based congestion control for unicast traffic. Most best-effort traffic in the current Internet is well served by the dominant transport protocol, TCP. However, traffic such as best-effort unicast streaming multimedia could benefit from a TCP-friendly congestion control mechanism that refrains from halving the sending rate in response to a single packet drop. With our mechanism, the sender explicitly adjusts its sending rate as a function of the measured rate of loss events, where a loss event consists of one or more packets dropped within a single round-trip time. We use both simulations and experiments over the Internet to explore performance.
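The abstract does not reproduce the control equation itself; a commonly used form of the TCP response function (due to Padhye et al., on which this line of work builds) can be sketched as follows. The retransmit-timeout approximation t_RTO = 4·RTT and the default arguments are assumptions for illustration, not taken from the abstract.

```python
from math import sqrt

def tcp_friendly_rate(s, rtt, p, t_rto=None):
    """Allowed sending rate in bytes/sec for packet size s (bytes),
    round-trip time rtt (sec), and loss event rate p, per the TCP
    response function. t_rto defaults to the common 4*rtt estimate."""
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * sqrt(2 * p / 3)
             + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom
```

The sender measures p over recent loss events and moves its rate toward this value, so a single drop nudges the rate down smoothly instead of halving it.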


IEEE/ACM Transactions on Networking | 1995

Link-sharing and resource management models for packet networks

Sally Floyd; Van Jacobson

Discusses the use of link-sharing mechanisms in packet networks and presents algorithms for hierarchical link-sharing. Hierarchical link-sharing allows multiple agencies, protocol families, or traffic types to share the bandwidth on a link in a controlled fashion. Link-sharing and real-time services both require resource management mechanisms at the gateway. Rather than requiring a gateway to implement separate mechanisms for link-sharing and real-time services, the approach in the paper is to view link-sharing and real-time service requirements as simultaneous, and in some respects complementary, constraints at a gateway that can be implemented with a unified set of mechanisms. While it is not possible to completely predict the requirements that might evolve in the Internet over the next decade, the authors argue that controlled link-sharing is an essential component that can provide gateways with the flexibility to accommodate emerging applications and network protocols.
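A much-simplified sketch of the hierarchy, assuming a toy model in which each class tracks its measured rate against its allocated share and may keep sending while some ancestor is under its allocation; the class names, the borrowing rule, and the numbers are our assumptions, and the paper's actual link-sharing guidelines are considerably more involved.

```python
class ShareClass:
    """Node in a hierarchical link-sharing tree (illustrative toy model)."""

    def __init__(self, name, share, parent=None):
        self.name = name
        self.share = share      # fraction of the link allocated to this class
        self.parent = parent    # enclosing agency/traffic-type class, or None
        self.rate = 0.0         # measured fraction of the link in use

    def over_limit(self):
        """True only if this class and every ancestor exceed their shares;
        otherwise the class may borrow unused bandwidth from an ancestor."""
        node = self
        while node is not None:
            if node.rate <= node.share:
                return False    # some enclosing class has spare allocation
            node = node.parent
        return True
```

For example, an agency class over its 30% share is still unregulated while the link as a whole is underutilized, which captures the controlled-sharing intuition of the paper.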


ACM SIGCOMM | 1996

Simulation-based comparisons of Tahoe, Reno and SACK TCP

Kevin R. Fall; Sally Floyd

This paper uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. We compare Tahoe and Reno TCP, the two most common reference implementations for TCP, with two modified versions of Reno TCP. The first version is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second version is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option being proposed in the Internet Engineering Task Force (IETF). We describe the congestion control algorithms in our simulated implementation of SACK TCP and show that while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits to TCP's ultimate performance. In particular, we show that without selective acknowledgments, TCP implementations are constrained to either retransmit at most one dropped packet per round-trip time, or to retransmit packets that might have already been successfully delivered.
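The limit in the last sentence can be illustrated with a small sketch: from the SACK blocks reported by the receiver, a sender can compute every hole in the outstanding window at once and retransmit several lost packets per round trip, whereas without SACK it can infer at most one hole per round-trip time. The function name and the (start, end-exclusive) byte-range format are assumptions for illustration, not the paper's code.

```python
def sack_holes(sacked_blocks, snd_una, snd_nxt):
    """Return the missing byte ranges between the highest cumulative ACK
    (snd_una) and the highest sequence sent (snd_nxt), given SACKed
    blocks as (start, end) pairs with end exclusive."""
    holes = []
    cursor = snd_una
    for start, end in sorted(sacked_blocks):
        if start > cursor:
            holes.append((cursor, start))   # gap before this SACKed block
        cursor = max(cursor, end)
    if cursor < snd_nxt:
        holes.append((cursor, snd_nxt))     # tail of the window still missing
    return holes
```

A SACK sender keeps such a "scoreboard" up to date from each incoming ACK and retransmits only the holes, never data the receiver already holds.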


ACM SIGCOMM | 1994

TCP and explicit congestion notification

Sally Floyd

This paper discusses the use of Explicit Congestion Notification (ECN) mechanisms in the TCP/IP protocol. The first part proposes new guidelines for TCP's response to ECN mechanisms (e.g., Source Quench packets, ECN fields in packet headers). Next, using simulations, we explore the benefits and drawbacks of ECN in TCP/IP networks. Our simulations use RED gateways modified to set an ECN bit in the IP packet header as an indication of congestion, with Reno-style TCP modified to respond to ECN as well as to packet drops as indications of congestion. The simulations show that one advantage of ECN mechanisms is in avoiding unnecessary packet drops, and therefore avoiding unnecessary delay for packets from low-bandwidth delay-sensitive TCP connections. A second advantage of ECN mechanisms is in networks (generally LANs) where the effectiveness of TCP retransmit timers is limited by the coarse granularity of the TCP clock. The paper also discusses some implementation issues concerning specific ECN mechanisms in TCP/IP networks.
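The core of the response guideline, treating an ECN mark like a packet drop for congestion-window purposes while skipping the retransmission (the marked packet was delivered), can be sketched as follows; the function, signal names, and the minimum-window constant are illustrative assumptions rather than the paper's code.

```python
def handle_congestion_signal(cwnd, ssthresh, signal):
    """Reno-style response: both a drop and an ECN mark halve the
    congestion window, but only a drop also implies a retransmission
    (handled elsewhere). Windows are in segments; floor of 2 assumed."""
    if signal in ("drop", "ecn"):
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh
    return cwnd, ssthresh
```

Because the sender reacts identically in window terms, ECN keeps TCP's congestion behavior while sparing the flow the loss and the retransmit-timer delay.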


ACM SIGCOMM | 1994

Wide-area traffic: the failure of Poisson modeling

Vern Paxson; Sally Floyd

Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. We evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. We find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into “connection bursts”, the largest of which are so large that they completely dominate FTPDATA traffic. Finally, we offer some preliminary results regarding how our findings relate to the possible self-similarity of wide-area traffic.


IEEE Computer | 2000

Advances in network simulation

Lee Breslau; Deborah Estrin; Kevin R. Fall; Sally Floyd; John S. Heidemann; Ahmed Helmy; Polly Huang; Steven McCanne; Kannan Varadhan; Ya Xu; Haobo Yu

Network researchers must test Internet protocols under varied conditions to determine whether they are robust and reliable. The paper discusses the Virtual Inter Network Testbed (VINT) project which has enhanced its network simulator and related software to provide several practical innovations that broaden the conditions under which researchers can evaluate network protocols.


ACM SIGCOMM | 2002

Controlling high bandwidth aggregates in the network

Ratul Mahajan; Steven Michael Bellovin; Sally Floyd; John Ioannidis; Vern Paxson; Scott Shenker

The current Internet infrastructure has very few built-in protection mechanisms, and is therefore vulnerable to attacks and failures. In particular, recent events have illustrated the Internet's vulnerability to both denial of service (DoS) attacks and flash crowds in which one or more links in the network (or servers at the edge of the network) become severely congested. In both DoS attacks and flash crowds the congestion is due neither to a single flow, nor to a general increase in traffic, but to a well-defined subset of the traffic --- an aggregate. This paper proposes mechanisms for detecting and controlling such high bandwidth aggregates. Our design involves both a local mechanism for detecting and controlling an aggregate at a single router, and a cooperative pushback mechanism in which a router can ask upstream routers to control an aggregate. While certainly not a panacea, these mechanisms could provide some needed relief from flash crowds and flooding-style DoS attacks. The presentation in this paper is a first step towards a more rigorous evaluation of these mechanisms.
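The notion of an aggregate as a well-defined traffic subset can be illustrated with a toy detector that groups bytes by destination /24 prefix and flags any prefix exceeding a share threshold. The grouping key, threshold, and packet format are assumptions for illustration; the paper's mechanism instead identifies aggregates by clustering the drop history at a congested router.

```python
from collections import Counter

def find_aggregates(packets, threshold=0.5, prefix_bits=24):
    """Flag destination prefixes carrying more than `threshold` of the
    observed bytes. `packets` is a list of (dst_ip_as_int, size_bytes)."""
    mask = (0xFFFFFFFF << (32 - prefix_bits)) & 0xFFFFFFFF
    by_prefix = Counter()
    total = 0
    for dst_ip, size in packets:
        by_prefix[dst_ip & mask] += size
        total += size
    return sorted(p for p, b in by_prefix.items() if b > threshold * total)
```

A flagged prefix would then be rate-limited locally, and with pushback the router could ask upstream neighbors to rate-limit traffic toward that prefix before it converges on the congested link.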


ACM SIGCOMM | 1991

Connections with multiple congested gateways in packet-switched networks part 1: one-way traffic

Sally Floyd

In this paper we explore the bias in TCP/IP networks against connections with multiple congested gateways. We consider the interaction between the bias against connections with multiple congested gateways, the bias of the TCP window modification algorithm against connections with longer roundtrip times, and the bias of Drop Tail and Random Drop gateways against bursty traffic. Using simulations and a heuristic analysis, we show that in a network with the window modification algorithm in 4.3-Tahoe BSD TCP and with Random Drop or Drop Tail gateways, a longer connection with multiple congested gateways can receive unacceptably low throughput. We show that in a network with no bias against connections with longer roundtrip times and with no bias against bursty traffic, a connection with multiple congested gateways can receive an acceptable level of throughput. We discuss the application of several current measures of fairness to networks with multiple congested gateways, and show that different measures of fairness have quite different implications. One view is that each connection should receive the same throughput in bytes/second, regardless of roundtrip times or numbers of congested gateways. Another view is that each connection should receive the same share of the network's scarce congested resources. In general, we believe that the fairness criteria for connections with multiple congested gateways require further consideration.
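The two fairness views in the last paragraph can be made concrete with a toy allocation: under equal throughput every connection gets the same rate, while under an equal share of congested resources a connection's rate is inversely proportional to the number of congested gateways it crosses (since each byte it sends consumes capacity at every one of them). The model below is our illustrative assumption, not the paper's analysis.

```python
def fair_rates(num_gateways, budget=1.0):
    """Given, for each connection, the number of congested gateways it
    traverses, return two allocations of a throughput budget:
      equal_tput  - same rate for every connection;
      equal_share - rate * gateways_crossed is equal across connections.
    """
    n = len(num_gateways)
    equal_tput = [budget / n] * n
    total_inv = sum(1 / g for g in num_gateways)
    equal_share = [budget * (1 / g) / total_inv for g in num_gateways]
    return equal_tput, equal_share
```

For one single-hop and one two-hop connection, the resource-share view gives the two-hop connection half the rate of the one-hop connection, which is exactly where the two criteria diverge.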


Winter Simulation Conference | 1997

Why we don't know how to simulate the Internet

Vern Paxson; Sally Floyd

Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the “mix” of different applications used at a site and the levels of congestion (load) seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a look at a collaborative effort to build a common simulation environment for conducting Internet studies.

Collaboration


Dive into Sally Floyd's collaborations.

Top Co-Authors

Mark Handley, University College London
Mark Allman, International Computer Science Institute
Vern Paxson, University of California
Kevin R. Fall, University of California
Scott Shenker, University of California