
Publication


Featured research published by Mark Allman.


ACM Special Interest Group on Data Communication | 2001

On estimating end-to-end network path properties

Mark Allman; Vern Paxson

The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
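The family of estimators the paper evaluates can be sketched as a minimal EWMA-based RTO computation in the style of RFC 6298. The gains 1/8 and 1/4, the 4x variance multiplier, and the minimum clamp below are the conventional values, not the specific parameter settings studied in the paper:

```python
class RtoEstimator:
    """Minimal EWMA RTO estimator in the RFC 6298 (Jacobson/Karels) style."""

    def __init__(self, alpha=1/8, beta=1/4, min_rto=1.0, granularity=0.0):
        self.alpha = alpha              # gain for the smoothed RTT (SRTT)
        self.beta = beta                # gain for the RTT variance (RTTVAR)
        self.min_rto = min_rto          # minimum RTO; the paper finds this dominates performance
        self.granularity = granularity  # timer granularity G
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        """Fold one round-trip time measurement into the estimator."""
        if self.srtt is None:
            # First measurement initializes SRTT and RTTVAR.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return self.rto()

    def rto(self):
        return max(self.min_rto, self.srtt + max(self.granularity, 4 * self.rttvar))
```

The paper's finding that the minimum value dominates shows up directly here: whenever `srtt + 4 * rttvar` falls below `min_rto`, the clamp in `rto()` decides the timeout, and the EWMA gains become irrelevant.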


Internet Measurement Conference | 2009

On dominant characteristics of residential broadband internet traffic

Gregor Maier; Anja Feldmann; Vern Paxson; Mark Allman

While residential broadband Internet access is popular in many parts of the world, only a few studies have examined the characteristics of such traffic. In this paper we describe observations from monitoring the network activity for more than 20,000 residential DSL customers in an urban area. To ensure privacy, all data is immediately anonymized. We augment the anonymized packet traces with information about DSL-level sessions, IP (re-)assignments, and DSL link bandwidth. Our analysis reveals a number of surprises in terms of the mental models we developed from the measurement literature. For example, we find that HTTP - not peer-to-peer - traffic dominates by a significant margin; that more often than not the home users' immediate ISP connectivity contributes more to the round-trip times the user experiences than the WAN portion of the path; and that the DSL lines are frequently not the bottleneck in bulk-transfer performance.


ACM Special Interest Group on Data Communication | 2006

The devil and packet trace anonymization

Ruoming Pang; Mark Allman; Vern Paxson; Jason Lee

Releasing network measurement data---including packet traces---to the research community is a virtuous activity that promotes solid research. However, in practice, releasing anonymized packet traces for public use entails many more vexing considerations than just the usual notion of how to scramble IP addresses to preserve privacy. Publishing traces requires carefully balancing the security needs of the organization providing the trace with the research usefulness of the anonymized trace. In this paper we recount our experiences in (i) securing permission from a large site to release packet header traces of the site's internal traffic, (ii) implementing the corresponding anonymization policy, and (iii) validating its correctness. We present a general tool, tcpmkpub, for anonymizing traces, discuss the process used to determine the particular anonymization policy, and describe the use of metadata accompanying the traces to provide insight into features that have been obfuscated by anonymization.
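As a rough illustration of "the usual notion of how to scramble IP addresses", here is a generic keyed-hash address anonymization sketch. This is not the policy implemented by tcpmkpub; the function name, the prefix-retention choice, and the use of HMAC are all illustrative assumptions:

```python
import hmac
import hashlib
import ipaddress

def anonymize_ip(addr, key, keep_prefix=8):
    """Keyed, deterministic IPv4 anonymization sketch (hypothetical helper).

    Keeps the first `keep_prefix` bits (e.g. to preserve an internal/external
    distinction) and replaces the remaining bits with an HMAC-derived value.
    Determinism matters: the same input always maps to the same output, so
    flows remain correlatable within the anonymized trace.
    """
    ip = int(ipaddress.IPv4Address(addr))
    host_bits = 32 - keep_prefix
    prefix = ip >> host_bits << host_bits                  # retained high bits
    digest = hmac.new(key, addr.encode(), hashlib.sha256).digest()
    scrambled = int.from_bytes(digest[:4], "big") & ((1 << host_bits) - 1)
    return str(ipaddress.IPv4Address(prefix | scrambled))
```

Even this tiny sketch hints at the paper's point: choices such as how many prefix bits to keep already trade research utility (topology structure) against the site's security needs.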


Internet Measurement Conference | 2005

A first look at modern enterprise traffic

Ruoming Pang; Mark Allman; Michael Bennett; Jason Lee; Vern Paxson; Brian Tierney

While wide-area Internet traffic has been heavily studied for many years, the characteristics of traffic inside Internet enterprises remain almost wholly unexplored. Nearly all of the studies of enterprise traffic available in the literature are well over a decade old and focus on individual LANs rather than whole sites. In this paper we present a broad overview of internal enterprise traffic recorded at a medium-sized site. The packet traces span more than 100 hours, over which activity from a total of several thousand internal hosts appears. This wealth of data--which we are publicly releasing in anonymized form--spans a wide range of dimensions. While we cannot form general conclusions using data from a single site, and clearly this sort of data merits additional in-depth study in a number of ways, in this work we endeavor to characterize a number of the most salient aspects of the traffic. Our goal is to provide a first sense of ways in which modern enterprise traffic is similar to wide-area Internet traffic, and ways in which it is quite different.


ACM Special Interest Group on Data Communication | 2005

Measuring the evolution of transport protocols in the internet

Alberto Medina; Mark Allman; Sally Floyd

In this paper we explore the evolution of both the Internet's most heavily used transport protocol, TCP, and the current network environment with respect to how the network's evolution ultimately impacts end-to-end protocols. The traditional end-to-end assumptions about the Internet are increasingly challenged by the introduction of intermediary network elements (middleboxes) that intentionally or unintentionally prevent or alter the behavior of end-to-end communications. This paper provides measurement results showing the impact of the current network environment on a number of traditional and proposed protocol mechanisms (e.g., Path MTU Discovery, Explicit Congestion Notification, etc.). In addition, we investigate the prevalence and correctness of implementations using proposed TCP algorithmic and protocol changes (e.g., selective acknowledgment-based loss recovery, congestion window growth based on byte counting, etc.). We present results of measurements taken using an active measurement framework to study web servers and a passive measurement survey of clients accessing information from our web server. We analyze our results to gain further understanding of the differences between the behavior of the Internet in theory versus the behavior we observed through measurements. In addition, these measurements can be used to guide the definition of more realistic Internet modeling scenarios. Finally, we present several lessons that will benefit others taking Internet measurements.
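One of the mechanisms measured, Explicit Congestion Notification, is negotiated via TCP header flags, which is part of why middleboxes can break it. A minimal sketch of the RFC 3168 negotiation check (the flag constants are the standard TCP header bits; this is an illustration, not the paper's measurement tool):

```python
# Standard TCP header flag bits (RFC 3168 / RFC 9293).
SYN = 0x02
ECE = 0x40
CWR = 0x80

def ecn_negotiated(syn_flags, synack_flags):
    """Return True if the handshake successfully negotiated ECN.

    The client requests ECN by setting both ECE and CWR on its SYN;
    the server agrees by setting ECE (but not CWR) on the SYN-ACK.
    A middlebox that clears these bits in transit silently disables ECN.
    """
    client_requested = (syn_flags & (ECE | CWR)) == (ECE | CWR)
    server_agreed = bool(synack_flags & ECE) and not (synack_flags & CWR)
    return client_requested and server_agreed
```

An active prober in the spirit of the paper's framework would send such a SYN and classify the path by what comes back: a clean SYN-ACK with ECE, a SYN-ACK with the bits stripped, or no response at all.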


ACM Special Interest Group on Data Communication | 2000

A web server's view of the transport layer

Mark Allman

This paper presents observations of traffic to and from a particular World-Wide Web server over the course of a year and a half. It offers a longitudinal look at various network path properties, as well as the implementation status of various protocol options and mechanisms. In particular, this paper considers how World-Wide Web clients utilize TCP connections to transfer web data; the deployment of various TCP and HTTP options; the range of round-trip times observed in the network; packet sizes used for WWW transfers; the implications of the measured advertised window sizes; and the impact of using larger initial congestion window sizes. These properties/mechanisms and their implications are explored. An additional goal of this paper is to provide information to help researchers better simulate and emulate realistic networks.


ACM Special Interest Group on Data Communication | 1999

On the effective evaluation of TCP

Mark Allman; Aaron Falk

Understanding the performance of the Internet's Transmission Control Protocol (TCP) is important because it is the dominant protocol used in the Internet today. Various testing methods exist to evaluate TCP performance; however, all have pitfalls that need to be understood prior to obtaining useful results. Simulating TCP is difficult because of the wide range of variables, environments, and implementations available. Testing TCP modifications in the global Internet may not be the answer either: testing new protocols on real networks endangers other people's traffic and, if not done correctly, may also yield inaccurate or misleading results. In order for TCP research to be independently evaluated in the Internet research community there is a set of questions that researchers should try to answer. This paper attempts to list some of those questions and make recommendations as to how TCP testing can be structured to provide useful answers.


Internet Measurement Conference | 2007

A brief history of scanning

Mark Allman; Vern Paxson; Jeff Terrell

Incessant scanning of hosts by attackers looking for vulnerable servers has become a fact of Internet life. In this paper we present an initial study of the scanning activity observed at one site over the past 12.5 years. We study the onset of scanning in the late 1990s and its evolution in terms of characteristics such as the number of scanners, targets and probing patterns. While our study is preliminary in many ways, it provides the first longitudinal examination of a now ubiquitous Internet phenomenon.


Computer Networks | 2004

Explicit transport error notification (ETEN) for error-prone wireless and satellite networks

Rajesh Krishnan; James P. G. Sterbenz; Wesley M. Eddy; Craig Partridge; Mark Allman

Wireless and satellite networks often have non-negligible packet corruption rates that can significantly degrade TCP performance. This is due to TCP's assumption that every packet loss is an indication of network congestion (causing TCP to reduce the transmission rate). This problem has received much attention in the literature. In this paper, we take a broad look at the problem of enhancing TCP performance under corruption losses, and include a discussion of the key issues. The main contributions of this paper are: (i) a confirmation of previous studies that show the reduction of TCP performance in the face of corruption loss, and in addition a plausible upper bound achievable with perfect knowledge of the cause of loss, (ii) a classification of the potential mitigation space, and (iii) the introduction of a promising new mitigation that employs rich cumulative information from intermediate nodes in a path to form a better congestion response. We first illustrate the performance implications of corruption-based loss for a variety of networks via simulation. In addition, we show a rough upper bound on the performance gains a TCP could get if it could perfectly determine the cause of each segment loss--independent of any specific mechanism for TCP to learn the root cause of packet loss. Next, we provide a taxonomy of potential practical classes of mitigations that TCP end-points and intermediate network elements can cooperatively use to decrease the performance impact of corruption-based loss. Finally, we briefly consider a potential mitigation, called cumulative explicit transport error notification (CETEN), which covers a portion of the solution space previously unexplored. CETEN is shown to be a promising mitigation strategy, but a strategy with numerous formidable practical hurdles still to overcome.
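The gap between ordinary TCP and the "upper bound achievable with perfect knowledge of the cause of loss" can be illustrated with a toy round-based model. This is a deliberately crude sketch, not the paper's simulation setup, and all constants are arbitrary:

```python
def run(events, oracle=False, cwnd=1.0, max_cwnd=64.0):
    """Toy congestion-avoidance model.

    `events` is one entry per round: None, "congestion", or "corruption".
    Each round delivers `cwnd` notional segments; cwnd grows by 1 per round
    and halves on loss. A standard sender halves on every loss; an oracle
    sender with perfect knowledge halves only on congestion losses
    (corruption would trigger retransmission, but no back-off).
    Returns total segments notionally delivered.
    """
    delivered = 0.0
    for event in events:
        delivered += cwnd
        if event == "congestion" or (event == "corruption" and not oracle):
            cwnd = max(1.0, cwnd / 2)        # multiplicative decrease
        else:
            cwnd = min(max_cwnd, cwnd + 1)   # additive increase
    return delivered
```

With, say, one corruption loss every ten rounds and no congestion at all, the standard sender keeps cutting its rate for losses that carry no congestion signal, while the oracle sender's throughput is untouched, which is the performance gap the paper quantifies.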


ACM Special Interest Group on Data Communication | 1998

On the generation and use of TCP acknowledgments

Mark Allman

This paper presents a simulation study of various TCP acknowledgment generation and utilization techniques. We investigate the standard version of TCP and the two standard acknowledgment strategies employed by receivers: those that acknowledge each incoming segment and those that implement delayed acknowledgments. We show the delayed acknowledgment mechanism hurts TCP performance, especially during slow start. Next we examine three alternate mechanisms for generating and using acknowledgments designed to mitigate the negative impact of delayed acknowledgments. The first method is to generate delayed ACKs only when the sender is not using the slow start algorithm. The second mechanism, called byte counting, allows TCP senders to increase the amount of data being injected into the network based on the amount of data acknowledged rather than on the number of acknowledgments received. The last mechanism is a limited form of byte counting. Each of these mechanisms is evaluated in a simulated network with no competing traffic, as well as a dynamic environment with a varying amount of competing traffic. We study the costs and benefits of the alternate mechanisms when compared to the standard algorithm with delayed ACKs.
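The sender-side variants compared (byte counting and its limited form) can be contrasted with standard ACK counting as per-ACK slow-start growth rules. A simplified sketch; the two-segment cap mirrors the limit later standardized in RFC 3465 and may not match the exact limited byte counting variant studied in the paper:

```python
MSS = 1460  # assumed maximum segment size in bytes

def slow_start_increase(cwnd, bytes_acked, mode="ack_counting", limit=2 * MSS):
    """Per-ACK congestion window growth during slow start (sketch).

    ack_counting:  grow by one MSS per ACK, regardless of how much data the
                   ACK covers -- with delayed ACKs (one ACK per two segments)
                   this halves the growth rate, the problem the paper studies.
    byte_counting: grow by the number of bytes newly acknowledged, so delayed
                   ACKs no longer slow the sender down.
    limited_bc:    byte counting, but capped at `limit` bytes per ACK to
                   bound the burstiness that raw byte counting can cause.
    """
    if mode == "ack_counting":
        return cwnd + MSS
    if mode == "byte_counting":
        return cwnd + bytes_acked
    if mode == "limited_bc":
        return cwnd + min(bytes_acked, limit)
    raise ValueError(f"unknown mode: {mode}")
```

A delayed ACK covering two segments grows the window by one MSS under ACK counting but by two under byte counting, which is exactly the slow-start gap the abstract describes.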

Collaboration


Dive into Mark Allman's collaborations.

Top Co-Authors

Vern Paxson | University of California

Michael Rabinovich | Case Western Reserve University

Sally Floyd | Lawrence Berkeley National Laboratory