
Publication


Featured research published by Kimberly C. Claffy.


ACM Special Interest Group on Data Communication | 2002

Code-Red: a case study on the spread and victims of an internet worm

David Moore; Colleen Shannon; Kimberly C. Claffy

On July 19, 2001, more than 359,000 computers connected to the Internet were infected with the Code-Red (CRv2) worm in less than 14 hours. The cost of this epidemic, including subsequent strains of Code-Red, is estimated to be in excess of $2.6 billion. Despite the global damage caused by this attack, there have been few serious attempts to characterize the spread of the worm, partly due to the challenge of collecting global information about worms. Using a technique that enables global detection of worm spread, we collected and analyzed data over a period of 45 days beginning July 2nd, 2001 to determine the characteristics of the spread of Code-Red throughout the Internet. In this paper, we describe the methodology we use to trace the spread of Code-Red, and then describe the results of our trace analyses. We first detail the spread of the Code-Red and CodeRedII worms in terms of infection and deactivation rates. Even without being optimized for spread of infection, Code-Red infection rates peaked at over 2,000 hosts per minute. We then examine the properties of the infected host population, including geographic location, weekly and diurnal time effects, top-level domains, and ISPs. We demonstrate that the worm was an international event, that infection activity exhibited time-of-day effects, and that, although most attention focused on large corporations, the Code-Red worm primarily preyed upon home and small business users. We also quantified the effects of DHCP on measurements of infected hosts and determined that IP addresses are not an accurate measure of the spread of a worm on timescales longer than 24 hours. Finally, the experience of the Code-Red worm demonstrates that widespread vulnerabilities in Internet hosts can be exploited quickly and dramatically, and that techniques other than host patching are required to mitigate Internet worms.
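
Spread dynamics like those measured here are often approximated by the classic "random constant spread" logistic model of scanning worms. Below is a minimal sketch of that model, not code from the paper; the rate constant, centering, and window are hypothetical values chosen only to echo the scale of the outbreak.

```python
# A minimal sketch (not from the paper) of the random-constant-spread
# logistic model often used for scanning worms like Code-Red: each
# infected host scans random addresses at a fixed rate, so the infected
# fraction a(t) follows da/dt = K * a * (1 - a).

import math

def infected_fraction(t, K=1.8, T=0.0):
    """Logistic solution a(t) = e^{K(t-T)} / (1 + e^{K(t-T)}).
    K is the initial compromise rate (per hour); T centers the curve."""
    x = math.exp(K * (t - T))
    return x / (1.0 + x)

# Hypothetical numbers for illustration only: 359,000 vulnerable hosts,
# curve centered at hour 0; print infections across a 14-hour window.
VULNERABLE = 359_000
for hour in range(-7, 8):
    print(f"t={hour:+3d}h  infected~{VULNERABLE * infected_fraction(hour):>9,.0f}")
```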


ACM Special Interest Group on Data Communication | 2014

Named data networking

Lixia Zhang; Alexander Afanasyev; Jeffrey A. Burke; Van Jacobson; Kimberly C. Claffy; Patrick Crowley; Christos Papadopoulos; Lan Wang; Beichuan Zhang

Named Data Networking (NDN) is one of five projects funded by the U.S. National Science Foundation under its Future Internet Architecture Program. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications for how we design, develop, deploy, and use networks and applications. We describe the motivation and vision of this new architecture, and its basic components and operations. We also provide a snapshot of its current design, development status, and research challenges. More information about the project, including prototype implementations, publications, and annual reports, is available on named-data.net.
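
To make the host-centric versus data-centric contrast concrete, here is a toy sketch of NDN's Interest/Data exchange with in-network caching. The class and its single-path forwarding are my simplifications for illustration, not the NDN project's implementation.

```python
# Toy sketch of NDN's core exchange: consumers request data by *name*
# with an Interest, and any node holding the named Data -- the producer
# or an in-network cache -- can answer. All structures are assumptions.

class NdnNode:
    def __init__(self, name):
        self.name = name
        self.content_store = {}          # name -> data (in-network cache)
        self.next_hop = None             # toy single-path "FIB"

    def publish(self, data_name, payload):
        self.content_store[data_name] = payload

    def on_interest(self, data_name):
        if data_name in self.content_store:      # cache or producer hit
            return self.content_store[data_name]
        if self.next_hop is None:
            return None
        data = self.next_hop.on_interest(data_name)
        if data is not None:
            self.content_store[data_name] = data  # cache on the way back
        return data

# consumer -> router -> producer; the second fetch would be served from
# the router's content store without ever reaching the producer.
producer, router = NdnNode("producer"), NdnNode("router")
router.next_hop = producer
producer.publish("/ndn/example/video/seg0", b"segment bytes")
print(router.on_interest("/ndn/example/video/seg0"))
print("/ndn/example/video/seg0" in router.content_store)  # True: cached
```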


Conference on Emerging Networking Experiments and Technologies | 2008

Internet traffic classification demystified: myths, caveats, and the best practices

Hyunchul Kim; Kimberly C. Claffy; Marina Fomenkov; Dhiman Barman; Michalis Faloutsos; Ki-Young Lee

Recent research on Internet traffic classification algorithms has yielded a flurry of proposed approaches for distinguishing types of traffic, but no systematic comparison of the various algorithms. This fragmented approach to traffic classification research leaves the operational community with no basis for consensus on what approach to use when, and how to interpret results. In this work we critically revisit traffic classification by conducting a thorough evaluation of three classification approaches, based on transport layer ports, host behavior, and flow features. A strength of our work is the broad range of data against which we test the three classification approaches: seven traces with payload collected in Japan, Korea, and the US. The diverse geographic locations, link characteristics and application traffic mix in these data allowed us to evaluate the approaches under a wide variety of conditions. We analyze the advantages and limitations of each approach, evaluate methods to overcome the limitations, and extract insights and recommendations for both the study and practical application of traffic classification. We make our software, classifiers, and data available for researchers interested in validating or extending this work.
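
As a concrete illustration of two of the three approaches compared (ports and flow features), here is a hedged sketch; the feature set, synthetic flows, and labels are invented for illustration and are not the paper's traces or classifiers.

```python
# A minimal sketch of a port-based lookup versus a flow-feature
# classifier. The tiny synthetic training set is an assumption made
# purely for illustration.

from sklearn.tree import DecisionTreeClassifier

WELL_KNOWN_PORTS = {80: "web", 443: "web", 25: "mail", 53: "dns"}

def classify_by_port(dst_port):
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

# Flow features: [mean packet size (bytes), duration (s), packet count].
flows = [[1400, 30.0, 2000], [1300, 45.0, 5000],   # bulk-transfer-like
         [120, 0.2, 2],      [90, 0.1, 2],         # dns-like
         [600, 10.0, 300],   [550, 8.0, 250]]      # web-like
labels = ["p2p", "p2p", "dns", "dns", "web", "web"]

model = DecisionTreeClassifier().fit(flows, labels)
print(classify_by_port(6881))               # 'unknown': port lookup fails
print(model.predict([[1350, 40.0, 3000]]))  # flow features still classify
```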


ACM Special Interest Group on Data Communication | 2006

The internet AS-level topology: three data sources and one definitive metric

Priya Mahadevan; Dmitri V. Krioukov; Marina Fomenkov; Xenofontas A. Dimitropoulos; Kimberly C. Claffy; Amin Vahdat

We calculate an extensive set of characteristics for Internet AS topologies extracted from the three data sources most frequently used by the research community: traceroutes, BGP, and WHOIS. We discover that traceroute and BGP topologies are similar to one another but differ substantially from the WHOIS topology. Among the widely considered metrics, we find that the joint degree distribution appears to fundamentally characterize Internet AS topologies as well as narrowly define values for other important metrics. We discuss the interplay between the specifics of the three data collection mechanisms and the resulting topology views. In particular, we show how the data collection peculiarities explain differences in the resulting joint degree distributions of the respective topologies. Finally, we release to the community the input topology datasets, along with the scripts and output of our calculations. This supplement should enable researchers to validate their models against real data and to make a more informed selection of topology data sources for their specific needs.
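
The joint degree distribution singled out above can be computed directly from an edge list. A short sketch follows, using my own formulation rather than the scripts the paper released, with a synthetic graph standing in for the real topologies.

```python
# Joint degree distribution (JDD): for a randomly chosen edge, the
# probability that its endpoints have degrees (k1, k2).

from collections import Counter
import networkx as nx

def joint_degree_distribution(g):
    deg = dict(g.degree())
    counts = Counter(tuple(sorted((deg[u], deg[v]))) for u, v in g.edges())
    m = g.number_of_edges()
    return {pair: n / m for pair, n in counts.items()}

# Toy stand-in for an AS graph; real inputs would be the traceroute,
# BGP, or WHOIS topologies described in the abstract.
g = nx.barabasi_albert_graph(n=1000, m=2, seed=1)
jdd = joint_degree_distribution(g)
for (k1, k2), p in sorted(jdd.items(), key=lambda kv: -kv[1])[:5]:
    print(f"P(edge joins degrees {k1},{k2}) = {p:.3f}")
```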


IEEE Transactions on Dependable and Secure Computing | 2005

Remote physical device fingerprinting

Tadayoshi Kohno; Andre Broido; Kimberly C. Claffy

We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device's known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall, and also when the device's system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.
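
The core of the technique is that a remote clock's offset from the measurer's clock drifts linearly, with a per-device slope (the skew). The paper fits this with linear programming; the sketch below substitutes a plain least-squares fit (Python 3.10+) on synthetic observations.

```python
# Clock-skew sketch: the offset between a device's clock (e.g. from TCP
# timestamps) and the measurer's clock drifts linearly; the slope, in
# parts per million, is a stable per-device fingerprint.

import random
import statistics

def estimate_skew_ppm(measurer_times, offsets):
    """Slope of offsets vs. time via least squares, in ppm (Python 3.10+)."""
    slope = statistics.linear_regression(measurer_times, offsets).slope
    return slope * 1e6

# Synthetic observations: a device whose clock runs 37 ppm fast, with
# Gaussian jitter standing in for queueing delay (numbers assumed).
TRUE_SKEW = 37e-6
times = [t * 10.0 for t in range(500)]                      # seconds
offsets = [TRUE_SKEW * t + random.gauss(0, 0.0005) for t in times]
print(f"estimated skew: {estimate_skew_ppm(times, offsets):.1f} ppm")
```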


ACM Special Interest Group on Data Communication | 1993

Application of sampling methodologies to network traffic characterization

Kimberly C. Claffy; George C. Polyzos; Hans-Werner Braun

The relative performance of different data collection methods in the assessment of various traffic parameters is significant when the amount of data generated by a complete trace of a traffic interval is computationally overwhelming, and even capturing summary statistics for all traffic is impractical. This paper presents a study of the performance of various methods of sampling in answering questions related to wide area network traffic characterization. Using a packet trace from a network environment that aggregates traffic from a large number of sources, we simulate various sampling approaches, including time-driven and event-driven methods, with both random and deterministic selection patterns, at a variety of granularities. Using several metrics which indicate the similarity between two distributions, we then compare the sampled traces to the parent population. Our results revealed that the time-triggered techniques did not perform as well as the packet-triggered ones. Furthermore, the performance differences within each class (packet-based or time-based techniques) are small.
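
The distinction between the two sampling classes is easy to state in code. The sketch below contrasts event-driven (every Nth packet) and time-driven (one packet per interval) sampling on a synthetic trace; the trace and the mean-size comparison are illustrative, not the paper's data or metrics.

```python
# Event-driven vs. time-driven sampling of a synthetic packet trace.

import random

random.seed(0)
# Synthetic trace: (arrival time, packet size in bytes).
trace, t = [], 0.0
for _ in range(10_000):
    t += random.expovariate(1000.0)                  # ~1000 pkts/sec
    trace.append((t, random.choice([40, 40, 40, 576, 1500])))

def packet_driven(trace, n):                         # every Nth packet
    return [size for i, (_, size) in enumerate(trace) if i % n == 0]

def time_driven(trace, interval):                    # first packet per bucket
    out, next_t = [], 0.0
    for when, size in trace:
        if when >= next_t:
            out.append(size)
            next_t = when + interval
    return out

mean = lambda xs: sum(xs) / len(xs)
print(f"parent mean size:        {mean([s for _, s in trace]):7.1f}")
print(f"packet-driven (1-in-10): {mean(packet_driven(trace, 10)):7.1f}")
print(f"time-driven (10 ms):     {mean(time_driven(trace, 0.01)):7.1f}")
```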


IEEE Network | 2012

Issues and future directions in traffic classification

Alberto Dainotti; Antonio Pescapé; Kimberly C. Claffy

Traffic classification technology has increased in relevance this decade, as it is now used in the definition and implementation of mechanisms for service differentiation, network design and engineering, security, accounting, advertising, and research. Over the past 10 years the research community and the networking industry have investigated, proposed and developed several classification approaches. While traffic classification techniques are improving in accuracy and efficiency, the continued proliferation of different Internet application behaviors, in addition to growing incentives to disguise some applications to avoid filtering or blocking, are among the reasons that traffic classification remains one of many open problems in Internet research. In this article we review recent achievements and discuss future directions in traffic classification, along with their trade-offs in applicability, reliability, and privacy. We outline the persistently unsolved challenges in the field over the last decade, and suggest several strategies for tackling these challenges to promote progress in the science of Internet traffic classification.


IEEE Journal on Selected Areas in Communications | 1998

ICP and the Squid web cache

Duane Wessels; Kimberly C. Claffy

We describe the structure and functionality of the Internet cache protocol (ICP) and its implementation in the Squid web caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, efficiency, security, and interaction with other aspects of Web traffic behavior. We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.
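
For concreteness, here is a sketch of an ICP query as laid out in RFC 2186 (a 20-byte header followed by a requester address and a null-terminated URL); the field values below are illustrative assumptions.

```python
# Building an ICP_OP_QUERY message per RFC 2186, as I recall the layout:
# opcode(1) version(1) length(2) request-number(4) options(4)
# option-data(4) sender-host-address(4), then the payload.

import struct

ICP_OP_QUERY = 1
ICP_VERSION = 2

def build_icp_query(request_number, url):
    payload = struct.pack("!I", 0) + url.encode() + b"\0"  # requester addr + URL
    length = 20 + len(payload)
    header = struct.pack(
        "!BBHIIII",
        ICP_OP_QUERY,     # opcode
        ICP_VERSION,      # version
        length,           # total message length in bytes
        request_number,   # lets the cache match replies to queries
        0,                # options
        0,                # option data
        0,                # sender host address (often zero in practice)
    )
    return header + payload

msg = build_icp_query(42, "http://example.com/")
print(len(msg), msg[:4].hex())  # a sibling would answer with HIT or MISS
```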


Passive and Active Network Measurement | 2005

Comparison of public end-to-end bandwidth estimation tools on high-speed links

Alok Shriram; Margaret Murray; Young Hyun; Nevil Brownlee; Andre Broido; Marina Fomenkov; Kimberly C. Claffy

In this paper we present results of a series of bandwidth estimation experiments conducted on a high-speed testbed at the San Diego Supercomputer Center and on OC-48 and GigE paths in real world networks. We test and compare publicly available bandwidth estimation tools: abing, pathchirp, pathload, and Spruce. We also test Iperf, which measures achievable TCP throughput. In the lab we used two different sources of known and reproducible cross-traffic in a fully controlled environment. In real world networks we had complete knowledge of link capacities and had access to SNMP counters for independent cross-traffic verification. We compare the accuracy and other operational characteristics of the tools and analyze factors impacting their performance.
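
Several of the tools compared above probe with packet pairs. The sketch below shows, from memory, the gap model behind Spruce-style available-bandwidth estimation; the capacity and measured gaps are hypothetical numbers, and this illustrates the principle rather than any specific tool.

```python
# Gap-model sketch: send packet pairs whose input gap equals the
# bottleneck transmission time L/C; cross traffic inflates the output
# gap, and available bandwidth is A = C * (1 - (gap_out - gap_in)/gap_in).

import statistics

def available_bw(capacity_bps, packet_size_bits, out_gaps_s):
    gap_in = packet_size_bits / capacity_bps           # seconds
    samples = [capacity_bps * (1 - (g - gap_in) / gap_in) for g in out_gaps_s]
    return statistics.mean(samples)

C = 100e6                                              # 100 Mbps bottleneck
L = 1500 * 8                                           # probe size in bits
out_gaps = [0.00012, 0.00018, 0.00015, 0.00012, 0.00021]  # measured (assumed)
print(f"~{available_bw(C, L, out_gaps) / 1e6:.0f} Mbps available")
```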


Cybersecurity Applications & Technology Conference for Homeland Security | 2009

Internet Mapping: From Art to Science

Kimberly C. Claffy; Young Hyun; Ken Keys; Marina Fomenkov; Dmitri V. Krioukov


Collaboration


Dive into Kimberly C. Claffy's collaboration.

Top Co-Authors

Andre Broido

University of California

Duane Wessels

University of California

David Moore

University of California

Young Hyun

University of California
