Bernhard Tellenbach
ETH Zurich
Publications
Featured research published by Bernhard Tellenbach.
Internet Measurement Conference | 2006
Daniela Brauckhoff; Bernhard Tellenbach; Arno Wagner; Martin May; Anukool Lakhina
Packet sampling methods such as Cisco's NetFlow are widely employed by large networks to reduce the amount of traffic data measured. A key problem with packet sampling is that it is inherently a lossy process, discarding (potentially useful) information. In this paper, we empirically evaluate the impact of sampling on anomaly detection metrics. Starting with unsampled flow records collected during the Blaster worm outbreak, we reconstruct the underlying packet trace and simulate packet sampling at increasing rates. We then use our knowledge of the Blaster anomaly to build a baseline of normal traffic (without Blaster), against which we can measure the anomaly size at various sampling rates. This approach allows us to evaluate the impact of packet sampling on anomaly detection without being restricted to (or biased by) a particular anomaly detection method. We find that packet sampling does not disturb the anomaly size when measured in volume metrics such as the number of bytes and number of packets, but grossly biases the number of flows. However, we find that recently proposed entropy-based summarizations of packet and flow counts are less affected by sampling, and expose the Blaster worm outbreak even at higher sampling rates. Our findings suggest that entropy summarizations are more resilient to sampling than volume metrics. Thus, while not perfect, sampling still preserves sufficient distributional structure, which, when harnessed by tools like entropy, can expose hard-to-detect scanning anomalies.
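A minimal sketch of the kind of comparison the paper performs, run on synthetic data rather than the Blaster trace: simulate uniform packet sampling at rate 1/N and observe that the packet count shrinks proportionally while the entropy of the destination-port distribution degrades far more gracefully. All names, addresses, and numbers below are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative sketch (not the paper's pipeline): compare a volume metric
# and an entropy metric on a synthetic trace under packet sampling.
import random
from collections import Counter
from math import log2

def shannon_entropy(counts):
    """Shannon entropy (bits) of an empirical frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def sample_packets(packets, n):
    """Simulate uniform random packet sampling at rate 1/n."""
    return [p for p in packets if random.random() < 1.0 / n]

random.seed(0)
# Synthetic trace of (src_ip, dst_port) packets; a scanner hitting many
# distinct ports stands in for a worm-like anomaly.
normal = [(f"10.0.0.{random.randint(1, 50)}", random.choice([80, 443, 53]))
          for _ in range(100_000)]
scan = [("10.0.9.9", port) for port in range(1024, 6024)]

for rate in (1, 10, 100, 1000):
    view = sample_packets(normal + scan, rate)
    ports = Counter(dst for _, dst in view)
    print(f"rate 1/{rate:<4}  packets={len(view):>6}  "
          f"H(dst_port)={shannon_entropy(ports):.3f}")
```

The packet count drops by roughly the sampling factor at each step, while the port entropy stays comparatively stable, which is the qualitative behavior the paper reports for distributional metrics.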
Passive and Active Network Measurement | 2009
Bernhard Tellenbach; Martin Burkhart; Didier Sornette; Thomas Maillart
Tracking changes in feature distributions is very important in the domain of network anomaly detection. Unfortunately, these distributions consist of thousands or even millions of data points. This makes tracking, storing and visualizing changes over time a difficult task. A standard technique for capturing and describing distributions in a compact form is Shannon entropy analysis. Its use for detecting network anomalies has been studied in depth, and several anomaly detection approaches have applied it with considerable success. However, reducing the information about a distribution to a single number discards important information, such as the nature of the change, and might lead to a large number of anomalies being overlooked entirely. In this paper, we show that a generalized form of entropy is better suited to capture changes in traffic features, by exploring different moments. We introduce the Traffic Entropy Spectrum (TES) to analyze changes in traffic feature distributions and demonstrate its ability to characterize the structure of anomalies using traffic traces from a large ISP.
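A minimal sketch of the generalized-entropy idea behind the TES, using the Tsallis entropy of order q: large positive q emphasizes heavy hitters, while q around or below zero emphasizes rare elements. The q values and the toy port distribution are our assumptions, not the paper's parameters.

```python
# Sketch: Tsallis entropy evaluated over a range of orders q gives a
# "spectrum" that reacts differently to changes in the heavy-hitter and
# rare-element regions of a distribution. Parameters are illustrative.
from collections import Counter
from math import log

def tsallis_entropy(counts, q):
    """Tsallis entropy S_q; the q -> 1 limit recovers Shannon entropy."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    if q == 1:
        return -sum(p * log(p) for p in probs)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

def entropy_spectrum(counts, qs=(-2, -1, 0, 1, 2, 4, 8)):
    """Evaluate the entropy at several orders q."""
    return {q: tsallis_entropy(counts, q) for q in qs}

# Toy destination-port distribution, before and during a scan that adds
# 500 rarely-seen ports with one packet each.
baseline = Counter({80: 9000, 443: 800, 53: 200})
during = baseline + Counter({p: 1 for p in range(2000, 2500)})

before, after = entropy_spectrum(baseline), entropy_spectrum(during)
for q in sorted(before):
    print(f"q={q:>2}: baseline={before[q]:12.3f}  during scan={after[q]:12.3f}")
```

Negative orders react dramatically when many rare elements appear, precisely the region where a single Shannon value (q = 1) changes only marginally.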
Computer Networks | 2011
Bernhard Tellenbach; Martin Burkhart; Dominik Schatzmann; David Gugelmann; Didier Sornette
The accurate detection and classification of network anomalies based on traffic feature distributions is still a major challenge. Together with volume metrics, traffic feature distributions are the primary source of information for approaches scalable to high-speed and large-scale networks. In previous work, we proposed to use the Tsallis entropy based Traffic Entropy Spectrum (TES) to capture changes in specific activity regions, such as the region of heavy hitters or rare elements. Our preliminary results suggested that the TES not only provides more detail about an anomaly but might also be better suited for detecting anomalies than traditional approaches based on Shannon entropy. We refine the TES and propose a comprehensive anomaly detection and classification system called the entropy telescope. We analyze the importance of different entropy features, refute findings of previous work reporting a supposedly strong correlation between different feature entropies, and provide an extensive evaluation of our entropy telescope. Our evaluation with three different detection methods (Kalman filter, PCA, KLE), one classification method (SVM), and a rich set of anomaly models and real backbone traffic demonstrates the superiority of the refined TES approach over the original TES and classical Shannon-only approaches. For instance, we found that when switching from Shannon entropy to the refined TES approach, the PCA method detects small to medium-sized anomalies up to 20% more accurately. Classification accuracy is improved by up to 19% when switching from Shannon-only to TES, and by another 8% when switching from TES to the refined TES approach. To complement our evaluation, we ran the entropy telescope on one month of backbone traffic, finding that the most prevalent anomalies are different types of scanning (69-84%) and reflector DDoS attacks (15-29%).
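A minimal sketch of spectrum-based detection in this spirit: fit per-q baseline statistics from anomaly-free windows and flag orders whose entropy deviates by more than k standard deviations. The normalization, threshold, and synthetic numbers are our simplifications; the paper's actual detectors are Kalman-filter, PCA, and KLE based, with an SVM for classification.

```python
# Sketch: flag anomalous windows by comparing a current entropy spectrum
# against per-q baseline statistics. A simplification of spectrum-based
# detection, not the paper's Kalman/PCA/KLE detectors.
import random
from statistics import mean, stdev

def fit_baseline(spectra):
    """Per-q (mean, std) over {q: entropy} dicts from anomaly-free windows."""
    qs = spectra[0].keys()
    return {q: (mean(s[q] for s in spectra), stdev(s[q] for s in spectra))
            for q in qs}

def flag_anomalous(spectrum, baseline, k=3.0):
    """Return the orders q whose entropy deviates by more than k sigma."""
    return [q for q, (mu, sigma) in baseline.items()
            if sigma > 0 and abs(spectrum[q] - mu) > k * sigma]

# Synthetic history: entropy values at q=1 (overall) and q=8 (heavy hitters).
random.seed(1)
history = [{1: random.gauss(5.0, 0.10), 8: random.gauss(0.4, 0.02)}
           for _ in range(50)]
base = fit_baseline(history)
# A shift confined to the heavy-hitter region is flagged only at q=8.
print(flag_anomalous({1: 5.05, 8: 0.9}, base))
```

Reporting which orders q fire, rather than a single yes/no, is what lets a spectrum-based detector also say something about the structure of the anomaly.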
Passive and Active Network Measurement | 2011
Brian Trammell; Bernhard Tellenbach; Dominik Schatzmann; Martin Burkhart
In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.
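A minimal sketch of why a sub-second cyclic error arises and one way to correct it under simplifying assumptions (no clock skew, no export delay; the paper additionally handles both): the v9 export header carries sysUptime in milliseconds but a UNIX timestamp truncated to whole seconds, so the naive boot-time estimate unix_secs - sysuptime/1000 jitters by up to one second. Function names and numbers are ours; this is a reconstruction of the effect, not the paper's full correction method.

```python
# Sketch: the truncation of the export header's UNIX timestamp is one-sided,
# so unix_secs - sysuptime/1000 always underestimates the router's boot time
# by the truncated fraction. Absent skew and export delay, the maximum over
# many export packets therefore converges to the true boot time from below.

def estimate_boot_time(export_headers):
    """export_headers: iterable of (unix_secs, sysuptime_ms) pairs."""
    return max(secs - up_ms / 1000.0 for secs, up_ms in export_headers)

def flow_abs_time(flow_uptime_ms, boot_time):
    """Absolute timestamp of a flow record given the estimated boot time."""
    return boot_time + flow_uptime_ms / 1000.0

# Synthetic check: a router booted at t=1000.25, exporting every 0.7 s.
true_boot = 1000.25
headers = []
for i in range(100):
    t = true_boot + 5.0 + 0.7 * i              # true export time
    headers.append((int(t), (t - true_boot) * 1000.0))

est = estimate_boot_time(headers)
print(f"true boot {true_boot:.3f}, estimate {est:.3f}")  # off by ~0.05 s here
```

The residual error shrinks as the export times' fractional parts cover more of the one-second cycle; real deployments must additionally model skew and delay, which is where the paper's method goes further.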
International Conference on Internet Monitoring and Protection | 2008
Bernhard Tellenbach; Daniela Brauckhoff; Martin May
Detection of network traffic anomalies is a key requirement for the provisioning of a reliable networking infrastructure. In this paper, we examine how anomaly metrics are affected by different environmental settings. To evaluate the effect of the traffic mix on anomaly visibility, we use traces collected at the different border routers of a medium-sized national ISP. Since these traces consist of unsampled NetFlow records, we can further examine the impact of sampling on the selected metrics. For our analysis, we use our knowledge of the Blaster and Witty worms to establish a baseline of normal traffic against which we measure the size of the anomaly at various sampling rates. To evaluate the impact of the traffic mix, we compare the visibility of the anomaly for the four different routers and discuss the results. Among other results, we find that traffic mix characteristics sometimes compensate for or even boost anomaly visibility in sampled views. We further show that, depending on the anomaly and the traffic mix, some anomaly metrics in sampled views outperform those in unsampled data views even at sampling rates of up to 1 out of 10,000 packets.
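A minimal sketch of the evaluation quantity used here, under our own naming: "anomaly size" as the deviation of a metric during the anomaly, expressed in standard deviations of an anomaly-free baseline, so that visibility can be compared across routers and sampling rates. The numbers are illustrative, not from the ISP traces.

```python
# Sketch: quantify anomaly visibility as the deviation of a metric from an
# anomaly-free baseline, in units of the baseline's standard deviation.
from statistics import mean, stdev

def anomaly_size(baseline_values, observed):
    """Deviation of `observed` from the baseline, in baseline std devs."""
    mu, sigma = mean(baseline_values), stdev(baseline_values)
    return abs(observed - mu) / sigma if sigma > 0 else float("inf")

# Flow counts per 5-minute bin on one border router at sampling rate 1/1000.
baseline_flows = [4210, 4180, 4250, 4195, 4230, 4205, 4240]
print(f"anomaly size: {anomaly_size(baseline_flows, 5100):.1f} sigma")
```

Computing the same quantity per router and per sampling rate is what allows statements like "the traffic mix at router X preserves visibility at 1 out of 10,000" to be made on a common scale.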
TIK-Schriftenreihe | 2012
Bernhard Tellenbach
Today, the Internet allows virtually anytime, anywhere access to a seemingly unlimited supply of information and services. Statistics such as the six-fold increase of U.S. online retail sales since 2000 illustrate its growing importance to the global economy, and fuel our demand for rapid, round-the-clock Internet provision. This growth has created a need for systems of control and management to regulate an increasingly complex infrastructure. Unfortunately, the prospect of making fast money from this burgeoning industry has also started to attract criminals. This has driven an increase in, and professionalization of, cyber-crime. As a result, a variety of methods have been designed with the intention of better protecting the Internet, its users and its underlying infrastructure from both accidental and malicious threats. Firewalls, which restrict network access, intrusion detection systems, which locate and prevent unauthorized access, and network monitors, which oversee the correct functioning of network infrastructures, have all been developed in order to detect and avert potential problems. These systems can be broadly defined as either reactive or proactive. The reactive approach seeks to identify specific problem patterns. It uses models learnt from theory or practice to locate common dangers as they develop. The number of patterns applied grows as each new problem is encountered. Proactive methods work differently. They start by defining an idealized model of the normal behavior of a given system. Any significant deviation from this model is assumed to be an aberrance caused by an external danger. However, this assumption may turn out to be incorrect, as the deviation may actually not have arisen from a disruption or a malicious act. Despite considerable improvements, the development of accurate proactive detection and classification methods is still an area of intense research. This is particularly true of methods fit for high-speed networks. To cope with the huge amounts of data at hand, these methods utilize highly aggregated forms of data. Volume
International Conference on Internet Monitoring and Protection | 2009
Laurent Zimmerli; Bernhard Tellenbach; Arno Wagner; Bernhard Plattner
Rating Autonomous Systems helps in establishing and maintaining mission-critical Internet communication paths. We develop performance metrics, tools, and quality indicators for rating Autonomous Systems. An initial rating approach based on traceroute measurements led to the discovery that non-increasing round-trip times frequently occur in traceroute measurements. Our improved outlier-based rating approach addresses this issue and allows both the real-time detection of Autonomous Systems causing poor Internet connection performance and the comparison of Autonomous Systems against each other over an extended period of time.
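A minimal sketch of the two ingredients mentioned: spotting non-increasing round-trip times along a traceroute path, and an outlier-based view of per-AS RTT samples. The Tukey IQR rule is our stand-in outlier test, not necessarily the authors' criterion, and all measurements below are synthetic.

```python
# Sketch: (1) per-hop RTTs along a traceroute path often fail to increase
# monotonically, so raw hop-to-hop differences are unreliable; (2) an
# outlier-based summary of per-AS RTT samples is more robust for rating.
from statistics import quantiles

def non_increasing_hops(rtts_ms):
    """Indices of hops whose RTT is not larger than the previous hop's."""
    return [i for i in range(1, len(rtts_ms)) if rtts_ms[i] <= rtts_ms[i - 1]]

def outlier_fraction(samples_ms):
    """Fraction of RTT samples above Q3 + 1.5*IQR (Tukey's rule)."""
    q1, _, q3 = quantiles(samples_ms, n=4)
    upper = q3 + 1.5 * (q3 - q1)
    return sum(s > upper for s in samples_ms) / len(samples_ms)

path_rtts = [1.2, 5.8, 5.1, 14.3, 13.9, 22.0]   # one traceroute, ms per hop
print("non-increasing at hops:", non_increasing_hops(path_rtts))

as_samples = [21.0, 22.3, 20.8, 21.5, 95.0, 22.1, 21.7, 20.9]  # RTTs via one AS
print(f"outlier fraction: {outlier_fraction(as_samples):.2f}")
```

Aggregating an outlier fraction per Autonomous System over time yields a rating signal that is insensitive to the occasional non-monotonic path measurement.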
International Conference on Internet Monitoring and Protection | 2016
Simon Aebersold; Krzysztof Kryszczuk; Sergio Paganoni; Bernhard Tellenbach; Timothy Trowbridge
Archive | 2010
Bernhard Tellenbach; Didier Sornette; Thomas Maillart; Martin Burkhart
Archive | 2006
Janneth Malibago; Daniela Brauckhoff; Bernhard Tellenbach