Arno Wagner
ETH Zurich
Publications
Featured research published by Arno Wagner.
workshops on enabling technologies: infrastructure for collaborative enterprises | 2005
Arno Wagner; Bernhard Plattner
Detecting massive network events like worm outbreaks in fast IP networks, such as Internet backbones, is hard. One problem is that the amount of traffic data does not allow real-time analysis of details. Another problem is that the specific characteristics of these events are not known in advance. There is a need for analysis methods that are real-time capable and can handle large amounts of traffic data. We have developed an entropy-based approach that determines and reports the entropy content of traffic parameters such as IP addresses. Changes in the entropy content indicate a massive network event. We give analyses of two Internet worms as a proof of concept. While our primary focus is the detection of fast worms, our approach should also be able to detect other network events. We discuss implementation alternatives and give benchmark results. We also show that our approach scales very well.
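As an illustration of the core idea (not the authors' implementation), a minimal Python sketch of computing the Shannon entropy of a traffic parameter, here source IP addresses, per time window; the window contents and any alerting threshold are assumptions.

```python
# Hedged sketch: entropy of source-IP occurrence counts in one time window.
# A sudden change between consecutive windows would hint at a massive event.
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of observed values."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical window of flow records, reduced to their source IPs.
window_src_ips = ["10.0.0.1", "10.0.0.1", "10.0.0.2", "192.168.1.5"]
print(f"source-IP entropy: {shannon_entropy(window_src_ips):.3f} bits")
```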
internet measurement conference | 2006
Daniela Brauckhoff; Bernhard Tellenbach; Arno Wagner; Martin May; Anukool Lakhina
Packet sampling methods such as Cisco's NetFlow are widely employed by large networks to reduce the amount of traffic data measured. A key problem with packet sampling is that it is inherently a lossy process, discarding (potentially useful) information. In this paper, we empirically evaluate the impact of sampling on anomaly detection metrics. Starting with unsampled flow records collected during the Blaster worm outbreak, we reconstruct the underlying packet trace and simulate packet sampling at increasing rates. We then use our knowledge of the Blaster anomaly to build a baseline of normal traffic (without Blaster), against which we can measure the anomaly size at various sampling rates. This approach allows us to evaluate the impact of packet sampling on anomaly detection without being restricted to (or biased by) a particular anomaly detection method. We find that packet sampling does not disturb the anomaly size when measured in volume metrics such as the number of bytes and number of packets, but grossly biases the number of flows. However, we find that recently proposed entropy-based summarizations of packet and flow counts are affected less by sampling, and expose the Blaster worm outbreak even at higher sampling rates. Our findings suggest that entropy summarizations are more resilient to sampling than volume metrics. Thus, while not perfect, sampling still preserves sufficient distributional structure, which, when harnessed by tools like entropy, can expose hard-to-detect scanning anomalies.
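A hedged sketch, not the paper's pipeline: simulating random packet sampling at rate 1/N on a toy trace and comparing a volume metric (packet count) with an entropy metric over the sampled packets. The trace contents and sampling rates are assumptions for illustration only.

```python
# Hedged sketch: random packet sampling vs. volume and entropy metrics.
import math
import random
from collections import Counter

def sample_packets(packets, n):
    """Keep each packet independently with probability 1/n (random sampling)."""
    return [p for p in packets if random.random() < 1.0 / n]

def entropy_of(values):
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical packet trace of (src_ip, dst_ip) pairs: one heavy source plus scanners.
trace = [("10.0.0.1", "10.0.0.9")] * 900 + \
        [(f"10.0.1.{i}", "10.0.0.9") for i in range(100)]

for rate in (1, 10, 100):
    sampled = sample_packets(trace, rate)
    if sampled:
        print(rate, len(sampled), round(entropy_of([s for s, _ in sampled]), 3))
```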
workshop on rapid malcode | 2003
Arno Wagner; Thomas Dübendorfer; Bernhard Plattner; Roman Hiestand
Fast Internet worms are a relatively new threat to Internet infrastructure and hosts. We discuss the motivation and possibilities for studying the behaviour of such worms, and the degrees of freedom that worm writers have. To facilitate the study of fast worms we have designed a simulator. We describe the design of this simulator, discuss practical experience we have gained with it, and compare observations of past worms with simulated behaviour. One specific feature of the simulator is that the Internet model used can represent network bandwidth and latency constraints.
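For orientation, a minimal sketch of a random-scanning worm spread model in Python; the simulator described in the paper additionally models bandwidth and latency constraints, which are omitted here, and all parameters below are assumptions.

```python
# Hedged sketch: discrete-time random-scanning worm model (no network constraints).
import random

def simulate_worm(address_space=1_000_000, vulnerable=10_000,
                  scans_per_step=5, steps=50):
    """Each infected host probes random addresses; hits on vulnerable hosts infect."""
    vulnerable_hosts = set(random.sample(range(address_space), vulnerable))
    infected = {next(iter(vulnerable_hosts))}          # patient zero
    history = []
    for _ in range(steps):
        new_infections = set()
        for _ in range(len(infected) * scans_per_step):
            target = random.randrange(address_space)
            if target in vulnerable_hosts and target not in infected:
                new_infections.add(target)
        infected |= new_infections
        history.append(len(infected))
    return history

print(simulate_worm())   # number of infected hosts per simulated time step
```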
IEEE ACM Transactions on Networking | 2012
Daniela Brauckhoff; Xenofontas A. Dimitropoulos; Arno Wagner; Kavé Salamatian
Anomaly extraction refers to automatically finding, in a large set of flows observed during an anomalous time interval, the flows associated with the anomalous event(s). It is important for root-cause analysis, network forensics, attack mitigation, and anomaly modeling. In this paper, we use meta-data provided by several histogram-based detectors to identify suspicious flows, and then apply association rule mining to find and summarize anomalous flows. Using rich traffic data from a backbone network, we show that our technique effectively finds the flows associated with the anomalous event(s) in all studied cases. In addition, it triggers a very small number of false positives, on average between 2 and 8.5, which exhibit specific patterns and can be trivially sorted out by an administrator. Our anomaly extraction method significantly reduces the work-hours needed for analyzing alarms, making anomaly detection systems more practical.
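To illustrate the summarization step only (not the authors' code), a small Python sketch of frequent item-set mining over flow meta-data flagged by detectors; the flow features, support threshold, and item-set sizes are assumptions.

```python
# Hedged sketch: summarize suspicious flows via simple frequent item-set mining.
from collections import Counter
from itertools import combinations

def frequent_itemsets(flows, min_support):
    """Return feature-value combinations appearing in at least min_support flows."""
    results = {}
    for size in (1, 2):
        counts = Counter()
        for flow in flows:
            items = sorted(flow.items())
            for combo in combinations(items, size):
                counts[combo] += 1
        results.update({c: n for c, n in counts.items() if n >= min_support})
    return results

# Hypothetical suspicious flows from an anomalous time interval.
suspicious = [
    {"dst_port": 135, "proto": "TCP"},
    {"dst_port": 135, "proto": "TCP"},
    {"dst_port": 80,  "proto": "TCP"},
]
for itemset, support in frequent_itemsets(suspicious, min_support=2).items():
    print(itemset, support)
```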
workshops on enabling technologies: infrastructure for collaborative enterprises | 2004
Thomas Dübendorfer; Arno Wagner; Bernhard Plattner
Companies that rely on the Internet for their daily business are challenged by uncontrolled massive worm spreading and the lurking threat of large-scale distributed denial of service attacks. We present a new model and methodology, which allows a company to qualitatively and quantitatively estimate possible financial losses due to partial or complete interruption of Internet connectivity. Our systems engineering approach is based on an in-depth analysis of the Internet dependence of different types of enterprises and on interviews with Swiss telcos, backbone and Internet service providers. A discussion of sample scenarios illustrates the flexibility and applicability of our model.
First IEEE International Workshop on Critical Infrastructure Protection (IWCIP'05) | 2005
Thomas Dübendorfer; Arno Wagner; Bernhard Plattner
We developed an open source Internet backbone monitoring and traffic analysis framework named UPFrame. It captures UDP NetFlow packets, buffers them in shared memory, and feeds them to customised plug-ins. UPFrame is highly tolerant of misbehaving plug-ins and provides a watchdog mechanism for restarting crashed plug-ins. This makes UPFrame an ideal platform for experiments. It also features a traffic shaper for smoothing incoming traffic bursts. Using this framework, we have investigated IDS-like anomaly detection possibilities for high-speed Internet backbone networks. We have implemented several plug-ins for host behaviour classification, traffic activity pattern recognition, and traffic monitoring. We successfully detected the recent Blaster, Nachi and Witty worm outbreaks in a medium-sized Swiss Internet backbone (AS559) using border router NetFlow data captured in the DDoSVax project. The framework is efficient and robust and can complement traditional intrusion detection systems.
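As a rough analogue of the capture-and-dispatch idea (UPFrame itself is a native framework with shared-memory buffering and a watchdog, not shown here), a minimal Python sketch of receiving UDP NetFlow export packets and handing each datagram to plug-in callbacks; the port number and plug-in are assumptions.

```python
# Hedged sketch: UDP NetFlow collector feeding raw datagrams to plug-in callbacks.
import socket

def run_collector(host="0.0.0.0", port=9995, plugins=()):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        datagram, addr = sock.recvfrom(65535)
        for plugin in plugins:
            try:
                plugin(datagram, addr)
            except Exception as exc:          # tolerate misbehaving plug-ins
                print(f"plug-in {plugin.__name__} failed: {exc}")

def count_bytes(datagram, addr):
    """Trivial example plug-in: print exporter address and datagram size."""
    print(addr[0], len(datagram))

# run_collector(plugins=[count_bytes])  # would block; port 9995 is an assumption
```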
international conference on image and signal processing | 2006
Arno Wagner; Thomas Dübendorfer; Lukas Hämmerle; Bernhard Plattner
One major new and often unwelcome source of Internet traffic is P2P file-sharing traffic. Banning P2P usage is not always possible or enforceable, especially in a university environment. A more restrained approach allows P2P usage but limits the available bandwidth. This approach fails when users start to use non-default ports for the client software. We have developed the PeerTracker algorithm, which allows the detection of running P2P clients from NetFlow data in near real time. The algorithm is especially suitable for identifying clients that generate large amounts of traffic and can easily be used to find P2P heavy hitters. A prototype system based on the PeerTracker algorithm is currently used by the network operations staff at the Swiss Federal Institute of Technology Zurich. We present measurements done on a medium-sized Internet backbone and discuss accuracy issues, as well as possibilities and results from validating the detection algorithm by direct polling in real time.
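A hedged sketch in the spirit of such detection, not the actual PeerTracker algorithm: flag a host as a likely P2P client if it contacts many distinct peers, mostly on high, non-standard ports. The thresholds and flow fields are assumptions.

```python
# Hedged sketch: heuristic P2P-client detection from NetFlow-style records.
from collections import defaultdict

def likely_p2p_hosts(flows, min_peers=20, min_high_port_ratio=0.8):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    peers = defaultdict(set)
    high_port = defaultdict(int)
    total = defaultdict(int)
    for src, dst, dport in flows:
        peers[src].add(dst)
        total[src] += 1
        if dport >= 1024:
            high_port[src] += 1
    return [h for h in peers
            if len(peers[h]) >= min_peers
            and high_port[h] / total[h] >= min_high_port_ratio]

# Hypothetical usage: one host talking to 30 distinct peers on high ports.
flows = [("10.0.0.5", f"192.0.2.{i}", 20000 + i) for i in range(30)]
print(likely_p2p_hosts(flows))   # ['10.0.0.5'] under these toy thresholds
```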
international conference on detection of intrusions and malware and vulnerability assessment | 2005
Thomas Dübendorfer; Arno Wagner; Theus Hossmann; Bernhard Plattner
We present an extensive flow-level traffic analysis of the network worm Blaster.A and of the e-mail worm Sobig.F. Based on packet-level measurements with these worms in a testbed, we defined flow-level filters. We then extracted the flows that carried malicious worm traffic from AS559 (SWITCH) border router backbone traffic that we had captured in the DDoSVax project. We discuss characteristics and anomalies detected during the outbreak phases, and present an in-depth analysis of partially and completely successful Blaster infections. Detailed flow-level traffic plots of the outbreaks are given. We found a short network test of a Blaster pre-release, significant changes of various traffic parameters, backscatter effects due to non-existent hosts, ineffectiveness of certain temporary port blocking countermeasures, and a surprisingly low frequency of successful worm code transmissions due to Blaster's multi-stage nature. Finally, we detected many TCP packet retransmissions due to Sobig.F's far too greedy spreading algorithm.
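As a hedged illustration of what a flow-level filter might look like (the paper's actual filters were derived from testbed measurements and are not reproduced here): Blaster.A is known to have used TCP port 135 for scanning, TCP port 4444 for its remote shell, and TFTP on UDP port 69 for code download, so a first-cut filter selects flows towards those ports.

```python
# Hedged sketch: illustrative flow-level filter for Blaster-like multi-stage traffic.
def looks_like_blaster(flow):
    """flow: dict with 'proto' and 'dst_port' fields from a NetFlow record."""
    scan_stage  = flow["proto"] == "TCP" and flow["dst_port"] == 135
    shell_stage = flow["proto"] == "TCP" and flow["dst_port"] == 4444
    tftp_stage  = flow["proto"] == "UDP" and flow["dst_port"] == 69
    return scan_stage or shell_stage or tftp_stage

print(looks_like_blaster({"proto": "TCP", "dst_port": 135}))  # True
```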
IWAN'04 Proceedings of the 6th IFIP TC6 international working conference on Active networks | 2004
Lukas Ruf; Arno Wagner; Károly Farkas; Bernhard Plattner
Distributed denial of service (DDoS) attacks in the Internet pose huge problems for today's communication infrastructure. Attacks either destroy information or impede access to a service. Since the significance of the Internet to business and the economy is growing rapidly, efficient protection mechanisms are urgently required to protect hosts from being infected and, more importantly, sites from being attacked. Detection of DDoS attacks requires deep packet inspection at link speed and context-dependent packet handling for countermeasures. This functionality is not achievable with today's commercial high-performance routers. In this paper, we therefore present our problem-space exploration of DDoS attacks and propose a flexible service architecture for detection and filter mechanisms to counteract DDoS attacks. To achieve the performance required for backbone routers together with the flexibility needed for services counteracting DDoS attacks, we base the proposal on our PromethOS NP router platform, which manages and controls hierarchical network nodes built of network and host processors.
international conference on internet monitoring and protection | 2009
Laurent Zimmerli; Bernhard Tellenbach; Arno Wagner; Bernhard Plattner
Rating Autonomous Systems helps in establishing and maintaining mission-critical Internet communication paths. We elaborate performance metrics, tools, and quality indicators for rating Autonomous Systems. An initial rating approach based on traceroute measurements led to the discovery of a frequent effect: non-increasing round-trip times in traceroute measurements. Our improved outlier-based rating approach addresses this issue and allows the real-time detection of Autonomous Systems causing poor Internet connection performance, as well as the comparison of Autonomous Systems against each other over an extended period of time.
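To make the observed effect concrete (this is not the paper's metric), a small Python sketch that inspects per-hop round-trip times from a traceroute, flags hops where the RTT does not increase, and marks simple outlier hops; the outlier factor and example values are assumptions.

```python
# Hedged sketch: detect non-increasing RTTs and outlier hops in a traceroute.
from statistics import median

def rtt_anomalies(hop_rtts_ms, outlier_factor=3.0):
    """hop_rtts_ms: round-trip time per hop in milliseconds, in hop order."""
    increments = [b - a for a, b in zip(hop_rtts_ms, hop_rtts_ms[1:])]
    non_increasing = [i + 2 for i, d in enumerate(increments) if d <= 0]
    med = median(abs(d) for d in increments) or 1.0
    outliers = [i + 2 for i, d in enumerate(increments) if d > outlier_factor * med]
    return non_increasing, outliers

# Hypothetical traceroute: hop 3 shows a non-increasing RTT, hop 4 a large jump.
print(rtt_anomalies([1.2, 5.4, 5.1, 30.0, 31.5]))   # ([3], [4])
```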