Publications

Featured research published by Brian Trammell.


IEEE Communications Surveys and Tutorials | 2014

Flow monitoring explained: From packet capture to data analysis with NetFlow and IPFIX

Rick Hofstede; Pavel Čeleda; Brian Trammell; Idilio Drago; Ramin Sadre; Anna Sperotto; Aiko Pras

Flow monitoring has become a prevalent method for monitoring traffic in high-speed networks. By focusing on the analysis of flows, rather than individual packets, it is often said to be more scalable than traditional packet-based traffic analysis. Flow monitoring embraces the complete chain of packet observation, flow export using protocols such as NetFlow and IPFIX, data collection, and data analysis. In contrast to what is often assumed, all stages of flow monitoring are closely intertwined. Each of these stages therefore has to be thoroughly understood before sound flow measurements can be performed; otherwise, flow data artifacts and data loss can result, potentially without being noticed. This paper is the first of its kind to provide an integrated tutorial on all stages of a flow monitoring setup. As shown throughout the paper, flow monitoring has evolved from the early 1990s into a powerful tool, and additional functionality will certainly be added in the future. We show, for example, how the previously opposing approaches of deep packet inspection and flow monitoring have been united into novel monitoring approaches.
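
As a rough illustration of the first two stages described above, the following sketch aggregates packets into flow records keyed by the usual five-tuple, the basic operation a NetFlow/IPFIX metering process performs before export. It is a toy under assumed packet fields and an assumed idle timeout, not code from the paper.

    # Toy flow metering sketch: group packets into flow records keyed by the
    # IP five-tuple, roughly what a NetFlow/IPFIX metering process does before
    # export. Packet fields and the idle timeout are illustrative assumptions.
    from collections import namedtuple

    Packet = namedtuple("Packet", "ts src dst sport dport proto length")
    FlowRecord = namedtuple("FlowRecord", "key start end packets octets")

    IDLE_TIMEOUT = 15.0  # seconds; real exporters make this configurable

    def meter(packets):
        """Yield expired flow records from a time-ordered packet stream."""
        active = {}  # five-tuple -> [start, end, packet count, octet count]
        for pkt in packets:
            # Expire flows that have been idle longer than the timeout.
            for key, (start, end, cnt, octets) in list(active.items()):
                if pkt.ts - end > IDLE_TIMEOUT:
                    yield FlowRecord(key, start, end, cnt, octets)
                    del active[key]
            key = (pkt.src, pkt.dst, pkt.sport, pkt.dport, pkt.proto)
            if key in active:
                entry = active[key]
                entry[1] = pkt.ts          # flow end time
                entry[2] += 1              # packet count
                entry[3] += pkt.length     # octet count
            else:
                active[key] = [pkt.ts, pkt.ts, 1, pkt.length]
        # Flush whatever remains when the capture ends.
        for key, (start, end, cnt, octets) in active.items():
            yield FlowRecord(key, start, end, cnt, octets)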


ACM Special Interest Group on Data Communication | 2010

The role of network trace anonymization under attack

Martin Burkhart; Dominik Schatzmann; Brian Trammell; Elisa Boschi; Bernhard Plattner

In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization, or field generalization, are not particularly effective. We then discuss the trade-offs between attacker and defender in the so-called injection attack space. An asymmetry in this space means that lengthening the injected traffic pattern significantly increases the chance of successful de-anonymization. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing that uses anonymization as one part of a technical, legal, and social approach to data protection in the research and operations communities.
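
The core of the injection attack described above can be sketched in a few lines: the attacker sends flows toward the monitored network with a distinctive pattern of (destination port, size) pairs, then searches the published anonymized trace for an anonymized source whose flows reproduce that pattern. The record layout and the fingerprint below are illustrative assumptions, not the attack from the paper.

    # Sketch of the traffic injection idea: inject flows with a distinctive
    # pattern, then look for an anonymized source address whose flows match it.
    # Record fields ("src", "dport", "octets") are assumed for illustration;
    # a real attack would also tolerate partial matches and background traffic.

    def fingerprint(flows):
        """Reduce a list of flow records to a comparable signature."""
        return tuple(sorted((f["dport"], f["octets"]) for f in flows))

    def find_injected(anonymized_flows, injected_pattern):
        """Return anonymized source addresses whose flows match the pattern."""
        by_src = {}
        for f in anonymized_flows:
            by_src.setdefault(f["src"], []).append(f)
        return [src for src, flows in by_src.items()
                if fingerprint(flows) == injected_pattern]

    # The attacker knows exactly what it injected ...
    pattern = fingerprint([{"dport": 40001, "octets": 1337},
                           {"dport": 40002, "octets": 4242}])
    # ... so any anonymized source matching the pattern is de-anonymized.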


WEIS | 2010

Modeling the Security Ecosystem - The Dynamics of (In)Security

Stefan Frei; Dominik Schatzmann; Bernhard Plattner; Brian Trammell

The security of information technology and computer networks is affected by a wide variety of actors and processes which together make up a security ecosystem; here we examine this ecosystem, consolidating many aspects of security that have hitherto been discussed only separately. First, we analyze the roles of the major actors within this ecosystem, the processes they participate in, the paths vulnerability data take through the ecosystem, and the impact of each of these on security risk. Then, based on a quantitative examination of 27,000 vulnerabilities disclosed over the past decade and taken from publicly available data sources, we quantify the systematic gap between exploit and patch availability. We provide the first examination of the impact and the risks associated with this gap on the ecosystem as a whole. Our analysis provides a metric for the success of the “responsible disclosure” process. We measure the prevalence of commercial markets for vulnerability information and highlight the role of security information providers (SIPs), which function as the “free press” of the ecosystem.
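
For concreteness, the gap the abstract refers to can be expressed as the number of days between exploit availability and patch availability for each vulnerability; a positive gap means the exploit came first. The snippet below shows this arithmetic on made-up records, not on the paper's dataset.

    # Exploit-versus-patch gap on made-up example records (not data from the
    # paper): gap = patch date - exploit date, so a positive value means an
    # exploit was available before a patch.
    from datetime import date

    vulns = [
        {"id": "CVE-XXXX-0001", "exploit": date(2009, 3, 1), "patch": date(2009, 3, 10)},
        {"id": "CVE-XXXX-0002", "exploit": date(2009, 5, 20), "patch": date(2009, 5, 5)},
    ]

    gaps = [(v["patch"] - v["exploit"]).days for v in vulns]
    exposed = sum(1 for g in gaps if g > 0)
    print(f"{exposed}/{len(gaps)} vulnerabilities had an exploit before a patch;"
          f" gaps in days: {gaps}")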


Passive and Active Network Measurement | 2013

On the state of ECN and TCP options on the Internet

Mirja Kühlewind; Sebastian Neuner; Brian Trammell

Explicit Congestion Notification (ECN) is a TCP/IP extension that can avoid packet loss and thus improve network performance. Though standardized in 2001, it is barely used in today's Internet. This study, following on previous active measurement studies over the past decade, shows a marked and continued increase in the deployment of ECN-capable servers, and usability of ECN on the majority of paths to such servers. We additionally present new measurements of ECN on IPv6, passive observation of actual ECN usage from flow data, and observations on other congestion-relevant TCP options (SACK, Timestamps, and Window Scaling). We further present initial work on burst loss metrics for loss-based congestion control following from our findings.
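
The basic active test behind such studies can be reproduced with a packet-crafting library: send an ECN-setup SYN (SYN with ECE and CWR set, per RFC 3168) and check whether the SYN-ACK echoes ECE, which indicates the server negotiated ECN on that path. The sketch below uses scapy and a placeholder target; it needs raw-socket privileges and illustrates the technique, it is not the study's measurement tool.

    # Minimal ECN negotiation probe: send a SYN with ECE+CWR and check whether
    # the SYN-ACK carries ECE. Requires scapy and root privileges; the target
    # host and port below are placeholders.
    from scapy.all import IP, TCP, sr1

    def server_negotiates_ecn(host, port=80, timeout=3):
        syn = IP(dst=host) / TCP(dport=port, flags="SEC")  # SYN + ECE + CWR
        synack = sr1(syn, timeout=timeout, verbose=False)
        if synack is None or TCP not in synack:
            return None  # no answer: cannot tell (possibly an ECN-related blackhole)
        f = int(synack[TCP].flags)
        is_synack = (f & 0x12) == 0x12       # SYN and ACK both set
        return is_synack and bool(f & 0x40)  # ECE echoed back means ECN negotiated

    print(server_negotiates_ecn("example.org"))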


IEEE Communications Magazine | 2014

mPlane: an intelligent measurement plane for the Internet

Brian Trammell; Pedro Casas; Dario Rossi; Arian Bär; Zied Ben Houidi; Ilias Leontiadis; Tivadar Szemethy; Marco Mellia

The Internet's universality is based on its decentralization and diversity. However, its distributed nature leads to operational brittleness and difficulty in identifying the root causes of performance and availability issues, especially when the involved systems span multiple administrative domains. The first step to address this fragmentation is coordinated measurement: we propose to complement the current Internet's data and control planes with a measurement plane, or mPlane for short. mPlane's distributed measurement infrastructure collects and analyzes traffic measurements at a wide variety of scales to monitor the network status. Its architecture is centered on a flexible control interface, allowing the incorporation of existing measurement tools through lightweight mPlane proxy components, and offering dynamic support for new capabilities. A focus on automated, iterative measurement makes the platform well suited to troubleshooting support. This is supported by a reasoning system, which applies machine learning algorithms to learn from success and failure in drilling down to the root cause of a problem. This article describes the mPlane architecture and shows its applicability to several distributed measurement problems involving content delivery networks and Internet service providers. A first case study presents the tracking and iterative analysis of cache selection policies in Akamai, while a second example focuses on cooperation between Internet service providers and content delivery networks to better orchestrate their traffic engineering decisions and jointly improve their performance.
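
To make the control interface concrete, the sketch below illustrates the capability/specification pattern it is built around: a proxy wrapping an existing tool advertises what it can measure, and a client binds parameters to turn that capability into a runnable specification. The field names and result labels here are assumptions chosen for illustration, not the actual mPlane schema.

    # Illustrative capability/specification exchange in the style of a
    # measurement control interface. Field names ("kind", "label",
    # "parameters", "results") and the result labels are assumptions, not
    # the real mPlane protocol schema.
    import json

    capability = {
        "kind": "capability",
        "label": "ping-delay",
        "parameters": {"destination": "ip4", "count": "integer"},
        "results": ["delay.twoway.min", "delay.twoway.mean", "delay.twoway.max"],
    }

    def specify(cap, **params):
        """Bind concrete parameters to an advertised capability."""
        missing = set(cap["parameters"]) - set(params)
        if missing:
            raise ValueError(f"unbound parameters: {missing}")
        return {"kind": "specification", "label": cap["label"], "parameters": params}

    print(json.dumps(specify(capability, destination="192.0.2.1", count=10), indent=2))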


IEEE Communications Magazine | 2011

An introduction to IP flow information export (IPFIX)

Brian Trammell; Elisa Boschi

The IP Flow Information Export protocol (IPFIX) is an IETF Proposed Standard for the export of information about network flows in IP networks. It is the logical successor to Cisco NetFlow version 9, upon which it is based. The key innovations of IPFIX are the flexible definition of a network flow and the runtime description of record formats through templates based on a well-defined, extensible information model. These allow IPFIX to export flow information from present as well as future networks, and make it applicable to network management beyond the network and transport layers. In this article, we describe the protocol, from its motivation and history through its design and implementation, and explore its deployment within a network monitoring research project.
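
To illustrate the template mechanism, the sketch below packs a minimal IPFIX message containing a template set (set ID 2) that defines a three-field record, followed by a data set that uses it. The information element numbers come from the IANA IPFIX registry (sourceIPv4Address = 8, destinationIPv4Address = 12, octetDeltaCount = 1); the addresses and counters are made up, and a real exporter would of course do far more.

    # Minimal IPFIX message: 16-byte message header, a template set defining
    # template 256 with three fields, and a data set encoded per that template.
    # IE numbers are from the IANA IPFIX registry; the flow values are made up.
    import socket, struct, time

    TEMPLATE_ID = 256  # first non-reserved template ID

    def template_set():
        fields = [(8, 4), (12, 4), (1, 8)]  # (IE id, length): srcIP, dstIP, octetDeltaCount
        body = struct.pack("!HH", TEMPLATE_ID, len(fields))
        body += b"".join(struct.pack("!HH", ie, ln) for ie, ln in fields)
        return struct.pack("!HH", 2, 4 + len(body)) + body  # set ID 2 = template set

    def data_set(src, dst, octets):
        rec = socket.inet_aton(src) + socket.inet_aton(dst) + struct.pack("!Q", octets)
        return struct.pack("!HH", TEMPLATE_ID, 4 + len(rec)) + rec

    def ipfix_message(sets, seq=0, odid=1):
        body = b"".join(sets)
        header = struct.pack("!HHIII", 10, 16 + len(body), int(time.time()), seq, odid)
        return header + body

    msg = ipfix_message([template_set(), data_set("192.0.2.1", "198.51.100.2", 4096)])
    print(len(msg), msg[:2].hex())  # total length and the version field (0x000a)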


Passive and Active Network Measurement | 2011

Peeling away timing error in NetFlow data

Brian Trammell; Bernhard Tellenbach; Dominik Schatzmann; Martin Burkhart

In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.
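
A hedged sketch of this kind of correction, not the paper's exact algorithm: NetFlow v9 headers carry the router uptime in milliseconds but the absolute export time only in whole seconds, so the naive per-packet basetime (unix_secs * 1000 - sysUptime) wobbles by up to a second. Taking the maximum of that quantity over many export packets removes most of the cyclic component; clock skew and export delay are ignored here.

    # Toy correction of the cyclic sub-second error: estimate the router's
    # basetime (epoch time at sysUptime 0) as the maximum of the per-packet
    # estimate, then place flow timestamps relative to it. This ignores clock
    # skew and export delay and is not the paper's full method.

    def estimate_basetime(headers):
        """headers: iterable of (unix_secs, sys_uptime_ms) from v9 export packets."""
        return max(unix_secs * 1000 - uptime_ms for unix_secs, uptime_ms in headers)

    def flow_end_epoch_ms(basetime_ms, last_switched_ms):
        """Absolute flow end time from the LAST_SWITCHED uptime field."""
        return basetime_ms + last_switched_ms

    # Synthetic headers from a router that booted at epoch 1_300_000_000_000 ms:
    boot = 1_300_000_000_000
    headers = [((boot + up) // 1000, up) for up in range(500, 60_000, 700)]
    base = estimate_basetime(headers)
    print(base == boot, flow_end_epoch_ms(base, 42_000))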


Passive and Active Network Measurement | 2015

Enabling Internet-Wide Deployment of Explicit Congestion Notification

Brian Trammell; Mirja Kühlewind; Damiano Boppart; Iain R. Learmonth; Gorry Fairhurst; Richard Scheffenegger

Explicit Congestion Notification (ECN) is a TCP/IP extension to signal network congestion without packet loss, which has barely seen deployment even though it was standardized and implemented more than a decade ago. Ongoing activities in research and standardization aim to make the usage of ECN more beneficial. This measurement study provides an update on deployment status and newly assesses the marginal risk of enabling ECN negotiation by default on client end-systems. Additionally, we dig deeper into causes of connectivity and negotiation issues linked to ECN. We find that about five websites per thousand suffer additional connection setup latency when the fallback specified in RFC 3168 is correctly implemented; we provide a patch for Linux to properly perform this fallback. Moreover, we detect and explore a number of cases in which ECN brokenness is clearly path-dependent, i.e., caused by middleboxes beyond the access or content provider network. Further analysis of these cases can guide their elimination, further reducing the risk of enabling ECN by default.
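
On Linux, the behavior this study targets is controlled by two sysctls: net.ipv4.tcp_ecn (whether to request and/or accept ECN) and net.ipv4.tcp_ecn_fallback (whether to retry without ECN when an ECN-setup SYN appears to be black-holed). The snippet below simply reads them; it is a convenience check, not part of the study.

    # Read the Linux knobs relevant to the fallback behavior discussed above:
    # net.ipv4.tcp_ecn (0 = off, 1 = request ECN on outgoing connections,
    # 2 = only accept ECN when the peer requests it) and
    # net.ipv4.tcp_ecn_fallback (retry without ECN if the ECN-setup SYN seems
    # to be black-holed, per RFC 3168 section 6.1.1.1).
    from pathlib import Path

    for knob in ("tcp_ecn", "tcp_ecn_fallback"):
        path = Path("/proc/sys/net/ipv4") / knob
        value = path.read_text().strip() if path.exists() else "not available"
        print(f"net.ipv4.{knob} = {value}")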


ACM Special Interest Group on Data Communication | 2013

Standardizing large-scale measurement platforms

Marcelo Bagnulo; Philip Eardley; Trevor Burbridge; Brian Trammell; Rolf Winter

Over the last few years, we have witnessed the deployment of large measurement platforms that enable measurements from many vantage points. Examples of these platforms include SamKnows and RIPE Atlas. All told, there are tens of thousands of measurement agents. Most of these measurement agents are located in end-user premises; these can run measurements against other agents located at strategic locations, according to the measurements to be performed. Thanks to the large number of measurement agents, these platforms can provide data about key network performance indicators from the end-user perspective. This data is useful to network operators to improve their operations, as well as to regulators and to end users themselves. Currently deployed platforms use proprietary protocols to exchange information between their different parts. As these platforms grow to become an important tool for understanding network performance, it is important to standardize the protocols between the different elements of the platform. In this paper, we present ongoing standardization efforts in this area, as well as the main challenges these efforts are facing.


International Conference on Computer Communications | 2010

Measurement Data Reduction through Variation Rate Metering

Giuseppe Bianchi; Elisa Boschi; Simone Teofili; Brian Trammell

We present an efficient network measurement primitive that measures the rate of variations, i.e., unique values, for a given characteristic of a traffic flow. The primitive is widely applicable to a variety of data reduction and pre-analysis tasks at the measurement interface, and we show it to be particularly useful for building data-reducing pre-analysis stages for scan detection within a multistage network analysis architecture. The presented approach is based upon data structures derived from Bloom filters, and as such yields high performance with probabilistic accuracy and controllable worst-case time and memory complexity. This predictability makes it suitable for hardware implementation in dedicated network measurement devices. One key innovation of the present work is that it is self-tuning, adapting to the characteristics of the measured traffic.
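
A minimal sketch of the primitive, under assumed sizes and hashing and without the paper's self-tuning or hardware-oriented design: a Bloom filter answers "has this (flow key, value) pair been seen before?", and the variation counter for the flow key is incremented only when the answer is no.

    # Variation rate metering sketch: per flow key, count values not seen
    # before, using a Bloom filter for approximate membership so memory stays
    # fixed. Filter size and hashing are illustrative; self-tuning is omitted.
    import hashlib
    from collections import defaultdict

    class BloomFilter:
        def __init__(self, bits=1 << 16, hashes=4):
            self.bits, self.hashes = bits, hashes
            self.array = bytearray(bits // 8)

        def _positions(self, item):
            digest = hashlib.sha256(repr(item).encode()).digest()
            for i in range(self.hashes):
                yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.bits

        def add_if_new(self, item):
            """Insert item; return True if it was (probably) not present before."""
            new = False
            for pos in self._positions(item):
                byte, bit = divmod(pos, 8)
                if not self.array[byte] & (1 << bit):
                    new = True
                    self.array[byte] |= 1 << bit
            return new

    bloom = BloomFilter()
    variation = defaultdict(int)  # flow key -> approximate count of unique values

    def observe(flow_key, value):
        if bloom.add_if_new((flow_key, value)):
            variation[flow_key] += 1

    # A host scanning many destination ports drives its variation count up fast:
    for port in range(1000, 1100):
        observe(("10.0.0.1", "192.0.2.7"), port)
    print(variation[("10.0.0.1", "192.0.2.7")])  # ~100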

Collaboration


Top co-authors of Brian Trammell.

Giuseppe Bianchi

University of Rome Tor Vergata

Marcelo Bagnulo

Universidad Carlos III de Madrid
