Matthew Roughan
University of Adelaide
Publication
Featured research published by Matthew Roughan.
measurement and modeling of computer systems | 2003
Yin Zhang; Matthew Roughan; Nick G. Duffield; Albert G. Greenberg
A matrix giving the traffic volumes between origin and destination in a network has tremendous potential utility for network capacity planning and management. Unfortunately, traffic matrices are generally unavailable in large operational IP networks. On the other hand, link load measurements are readily available in IP networks. In this paper, we propose a new method for practical and rapid inference of traffic matrices in IP networks from link load measurements, augmented by readily available network and routing configuration information. We apply and validate the method by computing backbone-router to backbone-router traffic matrices on a large operational tier-1 IP network -- a problem an order of magnitude larger than any other comparable method has tackled. The results show that the method is remarkably fast and accurate, delivering the traffic matrix in under five seconds.
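As a rough sketch of the inference problem (not the paper's algorithm): the link loads y and routing matrix A under-determine the traffic matrix x, so a prior estimate is corrected to be consistent with the measurements. Assuming numpy; the toy topology, loads, and gravity-style prior are illustrative.

```python
import numpy as np

# Toy setting: 3 links, a 2x2 traffic matrix flattened to 4 OD flows.
# A[l, k] = 1 if OD flow k traverses link l (from routing configuration).
A = np.array([[1, 1, 0, 0],   # link 0 carries flows 0 and 1
              [0, 0, 1, 1],   # link 1 carries flows 2 and 3
              [1, 0, 1, 0]],  # link 2 carries flows 0 and 2
             dtype=float)
y = np.array([8.0, 6.0, 9.0])  # measured link loads (e.g., from SNMP)

# Prior estimate of the flows (a gravity-style guess); it matches links
# 0 and 1 but not link 2, so a correction must be spread across flows.
g = np.array([4.0, 4.0, 3.0, 3.0])

# Minimum-norm correction toward consistency with the measurements:
# the system y = A x is under-determined, so stay close to the prior.
x = g + np.linalg.pinv(A) @ (y - A @ g)
print(x)          # estimated OD flows
print(A @ x, y)   # the estimate reproduces the link loads
```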
internet measurement conference | 2004
Matthew Roughan; Subhabrata Sen; Oliver Spatscheck; Nick G. Duffield
The ability to provide different Quality of Service (QoS) guarantees to traffic from different applications is a highly desired feature for many IP network operators, particularly for enterprise networks. Although various mechanisms exist for providing QoS in the network, QoS is yet to be widely deployed. We believe that a key factor holding back widespread QoS adoption is the absence of suitable methodologies/processes for appropriately mapping the traffic from different applications to different QoS classes. This is a challenging task, because many enterprise network operators who are interested in QoS do not know all the applications running on their network, and furthermore, over recent years port-based application classification has become problematic. We argue that measurement based automated Class of Service (CoS) mapping is an important practical problem that needs to be studied. In this paper we describe the requirements and associated challenges, and outline a solution framework for measurement based classification of traffic for QoS based on statistical application signatures. In our approach the signatures are chosen in such a way as to make them insensitive to the particular application-layer protocol, capturing instead the way in which an application is used -- for instance, is it used interactively, or for bulk-data transport? The resulting application signature can then be used to derive the network-layer signatures required to determine the CoS class for individual IP datagrams. Our evaluations using traffic traces from a variety of network locations demonstrate the feasibility and potential of the approach.
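A toy illustration of the idea of a statistical application signature, with made-up features and thresholds (the paper derives its signatures and class mappings from traces):

```python
import numpy as np

def application_signature(pkt_sizes, pkt_times):
    """Toy signature of how an application is *used*, independent of port
    numbers: interactive use tends toward small packets with irregular
    timing; bulk-data transport toward large packets sent back-to-back."""
    sizes = np.asarray(pkt_sizes, dtype=float)
    gaps = np.diff(np.asarray(pkt_times, dtype=float))
    return {"mean_size": sizes.mean(),
            "gap_cv": gaps.std() / gaps.mean()}  # coefficient of variation

def map_to_cos(sig, size_thresh=400.0, cv_thresh=0.5):
    # Illustrative thresholds only; the paper learns classes from traces.
    if sig["mean_size"] < size_thresh and sig["gap_cv"] > cv_thresh:
        return "interactive"
    return "bulk-data"

sig = application_signature([80, 120, 64, 90], [0.0, 0.4, 0.45, 1.3])
print(sig, "->", map_to_cos(sig))  # small, irregular packets -> interactive
```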
acm special interest group on data communication | 2009
Yin Zhang; Matthew Roughan; Walter Willinger; Lili Qiu
Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions. To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.
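A minimal sketch of the regularized matrix-factorization core, assuming numpy (the paper's SRMF adds spatio-temporal regularization terms beyond the plain ridge penalties used here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy traffic matrix: rows = OD pairs, columns = time; rank-2 plus noise.
X = rng.random((20, 2)) @ rng.random((2, 50)) \
    + 0.01 * rng.standard_normal((20, 50))
M = rng.random(X.shape) < 0.3           # observe only 30% of entries
lam, r = 0.1, 2                         # regularization weight, target rank

L = rng.standard_normal((20, r))
R = rng.standard_normal((50, r))

# Alternating ridge regressions on the observed entries: the skeleton of
# a regularized factorization X ~ L R^T fitted despite missing values.
for _ in range(50):
    for i in range(20):
        Ri = R[M[i]]                    # time slots observed for row i
        L[i] = np.linalg.solve(Ri.T @ Ri + lam * np.eye(r),
                               Ri.T @ X[i, M[i]])
    for j in range(50):
        Lj = L[M[:, j]]                 # rows observed in column j
        R[j] = np.linalg.solve(Lj.T @ Lj + lam * np.eye(r),
                               Lj.T @ X[M[:, j], j])

Xhat = L @ R.T                          # interpolates the missing values
print(np.abs(Xhat - X)[~M].mean())      # error on the unobserved entries
```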
acm special interest group on data communication | 2006
Wolfgang Mühlbauer; Anja Feldmann; Olaf Maennel; Matthew Roughan; Steve Uhlig
An understanding of the topological structure of the Internet is needed for quite a number of networking tasks, e.g., making decisions about peering relationships, choice of upstream providers, and inter-domain traffic engineering. One essential component of these tasks is the ability to predict routes in the Internet. However, the Internet is composed of a large number of independent autonomous systems (ASes) resulting in complex interactions, and until now no model of the Internet has succeeded in producing predictions of acceptable accuracy. We demonstrate that there are two limitations of prior models: (i) they have all assumed that an Autonomous System (AS) is an atomic structure -- it is not, and (ii) models have tended to oversimplify the relationships between ASes. Our approach uses multiple quasi-routers to capture route diversity within the ASes, and is deliberately agnostic regarding the types of relationships between ASes. The resulting model ensures that its routing is consistent with the observed routes. Exploiting a large number of observation points, we show that our model provides accurate predictions for unobserved routes, a first step towards developing structural models of the Internet that enable real applications.
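A toy rendering of the quasi-router idea, using the standard BGP preference order (local-pref, then AS-path length); the routes, local-pref values, and names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    as_path: tuple
    local_pref: int = 100

@dataclass
class QuasiRouter:
    name: str
    candidates: list = field(default_factory=list)

    def best_route(self):
        # Higher local-pref wins; ties broken by shorter AS path.
        return max(self.candidates,
                   key=lambda r: (r.local_pref, -len(r.as_path)))

# Two quasi-routers inside the same AS can legitimately pick different
# routes, capturing the route diversity a single atomic-AS model misses.
qr1 = QuasiRouter("AS1.a", [Route(("2", "5"), 120), Route(("3", "5"), 100)])
qr2 = QuasiRouter("AS1.b", [Route(("3", "5"), 130)])
print(qr1.best_route().as_path, qr2.best_route().as_path)
```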
Proceedings of the IEEE | 2002
Ashok Erramilli; Matthew Roughan; Darryl Veitch; Walter Willinger
One of the most significant findings of traffic measurement studies over the last decade has been the observed self-similarity in packet network traffic. Subsequent research has focused on the origins of this self-similarity, and the network engineering significance of this phenomenon. This paper reviews what is currently known about network traffic self-similarity and its significance. We then consider a matter of current research, namely, the manner in which network dynamics (specifically, the dynamics of transmission control protocol (TCP), the predominant transport protocol used in today's Internet) can affect the observed self-similarity. To this end, we first discuss some of the pitfalls associated with applying traditional performance evaluation techniques to highly-interacting, large-scale networks such as the Internet. We then present one promising approach based on chaotic maps to capture and model the dynamics of TCP-type feedback control in such networks. Not only can appropriately chosen chaotic map models capture a range of realistic source characteristics, but by coupling these to network state equations, one can study the effects of network dynamics on the observed scaling behavior. We consider several aspects of TCP feedback, and illustrate by examples that while TCP-type feedback can modify the self-similar scaling behavior of network traffic, it neither generates it nor eliminates it.
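For concreteness, a sketch of one chaotic-map source model of the kind discussed here: a double intermittency map whose orbit lingers near the ends of the unit interval, yielding heavy-tailed on/off periods. Parameter values are illustrative.

```python
import numpy as np

def intermittency_map(x, d=0.7, m1=1.8, m2=1.8):
    """One step of a double intermittency map. The constants are chosen
    so each branch maps its interval onto [0, 1]; orbits linger near 0
    and 1, giving heavy-tailed "off" and "on" sojourn times -- the kind
    of source behavior that aggregates to self-similar traffic."""
    if x <= d:
        return x + ((1.0 - d) / d**m1) * x**m1        # "off" region
    return x - (d / (1.0 - d)**m2) * (1.0 - x)**m2    # "on" region

# A packet is emitted whenever the state is in the "on" region (x > d).
x, emissions = 0.1, []
for _ in range(10000):
    x = intermittency_map(x)
    emissions.append(x > 0.7)
print("fraction of on-slots:", np.mean(emissions))
```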
internet measurement conference | 2003
Z. Morley Mao; Randy Bush; Timothy G. Griffin; Matthew Roughan
The desire to better understand global BGP dynamics has motivated several studies using active measurement techniques, which inject announcements and withdrawals of prefixes from the global routing domain. From these one can measure quantities such as the BGP convergence time. Previously, the route injection infrastructure of such experiments has either been temporary in nature, or its use has been restricted to the experimenters. The routing research community would benefit from a permanent and public infrastructure for such active probes. We use the term BGP Beacon to refer to a publicly documented prefix having global visibility and a published schedule for announcements and withdrawals. A BGP Beacon is to be used for the ongoing study of BGP dynamics, and so should be supported with a long-term commitment. We describe several BGP Beacons that have been set up at various points in the Internet. We then describe techniques for processing BGP updates when a BGP Beacon is observed from a BGP monitoring point such as Oregon's Route Views. Finally, we illustrate the use of BGP Beacons in the analysis of convergence delays, route flap damping, and update inter-arrival times.
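A small sketch of beacon-anchored update processing, with a hypothetical schedule and update stream (real beacons publish their own schedules; real monitors supply the updates):

```python
from datetime import datetime, timedelta

# Each observed update is anchored to the most recent scheduled beacon
# event; the gap between the event and the last update it triggers gives
# a crude convergence-delay estimate for that event.
def delay_per_event(update_times, schedule):
    events = {t: [] for t in schedule}
    for u in sorted(update_times):
        anchors = [t for t in schedule if t <= u]
        if anchors:
            events[max(anchors)].append(u)
    return {t: max(us) - t for t, us in events.items() if us}

base = datetime(2003, 8, 1)
schedule = [base + timedelta(hours=h) for h in (0, 2, 4)]   # hypothetical
updates = ([base + timedelta(seconds=s) for s in (5, 40, 95)]
           + [base + timedelta(hours=2, seconds=s) for s in (10, 70)])
print(delay_per_event(updates, schedule))   # convergence delay per event
```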
passive and active network measurement | 2005
Renata Teixeira; Nick G. Duffield; Jennifer Rexford; Matthew Roughan
A traffic matrix represents the load from each ingress point to each egress point in an IP network. Although networks are engineered to tolerate some variation in the traffic matrix, large changes can lead to congested links and poor performance. The variations in the traffic matrix are caused by statistical fluctuations in the traffic entering the network and shifts in where the traffic leaves the network. For an accurate view of how the traffic matrix evolves over time, we combine fine-grained traffic measurements with a continuous view of routing, including changes in the egress points. Our approach is in sharp contrast to previous work that either inferred the traffic matrix from link-load statistics or computed it using periodic snapshots of routing tables. Analyzing seven months of data from eight vantage points in a large Internet Service Provider (ISP) network, we show that routing changes are responsible for the majority of the large traffic variations. In addition, we identify the shifts caused by internal routing changes and show that these events are responsible for the largest traffic shifts. We discuss the implications of our findings on the accuracy of previous work on traffic matrix estimation and analysis.
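In miniature, the approach amounts to joining fine-grained flow records with a time-varying prefix-to-egress mapping from routing; the ingress points, prefixes, and egress maps below are hypothetical:

```python
from collections import defaultdict

# Each flow record carries (ingress, destination prefix, bytes); routing
# supplies the current prefix -> egress map. Recomputing the matrix with
# the *current* map captures traffic shifts caused by routing changes,
# which periodic routing snapshots or link loads alone would miss.
def traffic_matrix(flows, egress_of):
    tm = defaultdict(float)
    for ingress, prefix, nbytes in flows:
        tm[(ingress, egress_of[prefix])] += nbytes
    return dict(tm)

flows = [("NY", "10.0.0.0/8", 500.0), ("NY", "10.1.0.0/16", 300.0)]
print(traffic_matrix(flows, {"10.0.0.0/8": "LA", "10.1.0.0/16": "LA"}))
# After an egress change for 10.1.0.0/16, the same traffic shifts cells:
print(traffic_matrix(flows, {"10.0.0.0/8": "LA", "10.1.0.0/16": "CHI"}))
```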
acm special interest group on data communication | 2005
Matthew Roughan
A recent paper [8] presented methods for several steps along the road to synthesis of realistic traffic matrices. Such synthesis is needed because traffic matrices are a crucial input for testing many new networking algorithms, but traffic matrices themselves are generally kept secret by providers. Furthermore, even given traffic matrices from a real network, it is difficult to realistically adjust these to generate a range of scenarios (for instance, for different network sizes). This note is concerned with the first step presented in [8]: generation of a matrix with similar statistics to those of a real traffic matrix. The method applied in [8] is based on fitting a large number of distributions, and finding that the log-normal distribution appears to fit most consistently. Best fits (without some intuitive explanation for the fit) are fraught with problems. How general are the results? How do the distribution parameters relate? This note presents a simpler approach based on a gravity model. Its simplicity provides us with a better understanding of the origins of the results of [8], and this insight is useful, particularly because it allows one to adapt the synthesis process to different scenarios in a more intuitive manner. Additionally, [8] measures the quality of its fit to the distribution's body. This note shows that the tails of the distributions are less heavy than those of the log-normal distribution (a counterintuitive result for Internet traffic), and that the gravity model replicates these tails more accurately.
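A minimal sketch of gravity-model synthesis under one simple assumption (independent exponential row and column totals), assuming numpy; the sizes and totals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gravity model: draw independent ingress and egress activity levels and
# set each entry proportional to their product. Exponential activities
# are one simple choice; the resulting entries have a log-normal-looking
# body while the tails stay lighter than a fitted log-normal's.
n, total_traffic = 10, 1000.0
t_in = rng.exponential(1.0, n)       # ingress activity per node
t_out = rng.exponential(1.0, n)      # egress activity per node

TM = total_traffic * np.outer(t_in, t_out) / (t_in.sum() * t_out.sum())
print(TM.sum())                      # matches the prescribed total
print(TM.round(1))                   # one synthetic traffic matrix
```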
IEEE ACM Transactions on Networking | 2012
Matthew Roughan; Yin Zhang; Walter Willinger; Lili Qiu
Despite advances in measurement technology, it is still challenging to reliably compile large-scale network datasets. For example, because of flaws in the measurement systems or difficulties posed by the measurement problem itself, missing, ambiguous, or indirect data are common. In the case where such data have spatio-temporal structure, it is natural to try to leverage this structure to deal with the challenges posed by the problematic nature of the data. Our work involving network datasets draws on ideas from the area of compressive sensing and matrix completion, where sparsity is exploited in estimating quantities of interest. However, the standard results on compressive sensing are: 1) reliant on conditions that generally do not hold for network datasets; and 2) do not allow us to exploit all we know about their spatio-temporal structure. In this paper, we overcome these limitations with an algorithm that has at its heart the same ideas espoused in compressive sensing, but adapted to the problem of network datasets. We show how this algorithm can be used in a variety of ways, in particular on traffic data, to solve problems such as simple interpolation of missing values, traffic matrix inference from link data, prediction, and anomaly detection. The elegance of the approach lies in the fact that it unifies all of these tasks and allows them to be performed even when as much as 98% of the data is missing.
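One of the unified tasks, anomaly detection, reduces in the complete-data case to flagging deviations from low-rank structure; the sketch below (assuming numpy, with synthetic rank-2 "normal" data) illustrates that reduction, not the paper's full algorithm, which also copes with missing and indirect data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal traffic lives in a low-dimensional spatial subspace; a volume
# anomaly sticks out of it as a large residual.
L = rng.random((15, 2))                    # spatial structure (OD pairs)
X_train = L @ rng.random((2, 200))         # anomaly-free history
X_test = L @ rng.random((2, 40))
X_test[7, 20] += 5.0                       # injected volume anomaly

U, _, _ = np.linalg.svd(X_train, full_matrices=False)
P = U[:, :2] @ U[:, :2].T                  # projector onto normal subspace
residual = np.abs(X_test - P @ X_test)
print(np.unravel_index(residual.argmax(), X_test.shape))   # -> (7, 20)
```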
acm special interest group on data communication | 2008
Haakon Ringberg; Matthew Roughan; Jennifer Rexford
Anomalous events that affect the performance of networks are a fact of life. It is therefore not surprising that recent years have seen an explosion in research on network anomaly detection. What is quite surprising, however, is the lack of controlled evaluation of these detectors. In this paper we argue that there are numerous important questions regarding the effectiveness of anomaly detectors that cannot be answered by the evaluation techniques employed today. We present four central requirements of a rigorous evaluation that can only be met by simulating both the anomaly and its surrounding environment. While simulation is necessary, it is not sufficient. We therefore present an outline of an evaluation methodology that leverages both simulation and traces from operational networks.
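The proposed methodology in miniature: inject a controlled anomaly with known ground truth into background traffic and sweep the detector's threshold. The background, anomaly, and detector below are all synthetic stand-ins (the paper argues for backgrounds drawn from simulation and operational traces):

```python
import numpy as np

rng = np.random.default_rng(3)

# Background traffic plus one injected anomaly of known location/size,
# so detections and false alarms can be counted exactly.
trace = 100 + 10 * rng.standard_normal(1000)
truth = np.zeros(1000, dtype=bool)
trace[500:510] += 40.0                  # injected anomaly
truth[500:510] = True

def detector(x, k):                     # simple k-sigma threshold detector
    return np.abs(x - x.mean()) > k * x.std()

# Sweeping the threshold traces out the detection/false-alarm trade-off.
for k in (2.0, 3.0, 4.0):
    alarms = detector(trace, k)
    tp = (alarms & truth).sum()
    fp = (alarms & ~truth).sum()
    print(f"k={k}: detected {tp}/10 anomalous points, {fp} false alarms")
```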