Publication


Featured research published by Dominik Schatzmann.


ACM Workshop on Challenged Networks (CHANTS) | 2011

WiFi-Opp: ad-hoc-less opportunistic networking

Sacha Trifunovic; Bernhard Distl; Dominik Schatzmann; Franck Legendre

Opportunistic networking offers many appealing application perspectives, from local social-networking applications to supporting communications in remote areas or in disaster and emergency situations. Yet, despite the increasing penetration of smartphones, opportunistic networking is not feasible with most popular mobile devices. There is still no support for WiFi Ad-Hoc, and protocols such as Bluetooth have severe limitations (short range, pairing). We believe that WiFi Ad-Hoc communication will not be supported by the most popular mobile OSes (i.e., iOS and Android) and that WiFi Direct will not bring the desired features. Instead, we propose WiFi-Opp, a realistic opportunistic setup relying on (i) open stationary APs and (ii) spontaneous mobile APs (i.e., smartphones in AP or tethering mode, a feature used to share Internet access), which we use to enable opportunistic communications. We compare WiFi-Opp to WiFi Ad-Hoc by replaying real-world contact traces and evaluate their performance in terms of capacity for content dissemination as well as energy consumption. While achieving comparable throughput, WiFi-Opp is up to 10 times more energy efficient than its Ad-Hoc counterpart. Finally, a proof of concept demonstrates the feasibility of WiFi-Opp, opening new perspectives for opportunistic networking.
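The duty-cycling idea behind WiFi-Opp can be made concrete with a short sketch: a device alternates between looking for an AP to join (station mode) and becoming a spontaneous AP itself when none is found. The Python below is a minimal sketch of that loop, assuming hypothetical platform hooks (scan_for_open_aps, connect, disconnect, start_ap_mode, stop_ap_mode) and illustrative dwell times; it is not code from the paper.

import random
import time

CONNECTED_DWELL_S = 60   # illustrative: time to exchange content while associated
AP_DWELL_S = 120         # illustrative: time to stay in AP mode so others can join

def wifi_opp_loop(scan_for_open_aps, connect, disconnect, start_ap_mode, stop_ap_mode):
    while True:
        aps = scan_for_open_aps()            # open stationary or spontaneous mobile APs
        if aps:
            connect(random.choice(aps))      # station mode: exchange content
            time.sleep(CONNECTED_DWELL_S)
            disconnect()
        else:
            start_ap_mode()                  # become a spontaneous AP (tethering mode)
            time.sleep(AP_DWELL_S)           # let neighbours discover and join
            stop_ap_mode()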


ACM Special Interest Group on Data Communication (SIGCOMM) | 2010

The role of network trace anonymization under attack

Martin Burkhart; Dominik Schatzmann; Brian Trammell; Elisa Boschi; Bernhard Plattner

In recent years, academic literature has analyzed many attacks on network trace anonymization techniques. These attacks usually correlate external information with anonymized data and successfully de-anonymize objects with distinctive signatures. However, analyses of these attacks still underestimate the real risk of publishing anonymized data, as the most powerful attack against anonymization is traffic injection. We demonstrate that performing live traffic injection attacks against anonymization on a backbone network is not difficult, and that potential countermeasures against these attacks, such as traffic aggregation, randomization, or field generalization, are not particularly effective. We then discuss the trade-offs between attacker and defender in the so-called injection attack space. An asymmetry in the attack space significantly increases the chance of successful de-anonymization as the injected traffic pattern is lengthened. This leads us to re-examine the role of network data anonymization. We recommend a unified approach to data sharing, which uses anonymization as part of a technical, legal, and social approach to data protection in the research and operations communities.
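The core of an injection attack can be illustrated in a few lines: the attacker sends flows whose byte counts form a distinctive sequence, then searches the published trace for that sequence, since anonymization hides addresses but typically leaves byte counts intact. The Python sketch below illustrates this idea under those assumptions; it is not the paper's exact procedure, and the fingerprint values and flow-tuple layout are hypothetical.

FINGERPRINT = [1337, 4242, 9001, 2718]  # hypothetical distinctive byte counts

def find_fingerprint(anonymized_flows, fingerprint=FINGERPRINT):
    # anonymized_flows: chronological list of (anon_src, anon_dst, byte_count)
    sizes = [f[2] for f in anonymized_flows]
    for i in range(len(sizes) - len(fingerprint) + 1):
        if sizes[i:i + len(fingerprint)] == fingerprint:
            # the anonymized address seen here maps to the attacker's real
            # injection target, de-anonymizing it
            return anonymized_flows[i][1]
    return None

A longer fingerprint matches fewer innocent flow sequences, which is the asymmetry noted above: lengthening the injected pattern cheaply increases the attacker's chance of an unambiguous match.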


WEIS | 2010

Modeling the Security Ecosystem - The Dynamics of (In)Security

Stefan Frei; Dominik Schatzmann; Bernhard Plattner; Brian Trammell

The security of information technology and computer networks is affected by a wide variety of actors and processes which together make up a security ecosystem; here we examine this ecosystem, consolidating many aspects of security that have hitherto been discussed only separately. First, we analyze the roles of the major actors within this ecosystem, the processes they participate in, the paths vulnerability data take through the ecosystem, and the impact of each of these on security risk. Then, based on a quantitative examination of 27,000 vulnerabilities disclosed over the past decade and taken from publicly available data sources, we quantify the systematic gap between exploit and patch availability. We provide the first examination of the impact of this gap, and the risks associated with it, on the ecosystem as a whole. Our analysis provides a metric for the success of the “responsible disclosure” process. We measure the prevalence of the commercial markets for vulnerability information and highlight the role of security information providers (SIP), which function as the “free press” of the ecosystem.
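The gap metric described above can be sketched with a small calculation: for each vulnerability, compare the date an exploit became available with the date a patch became available. The Python below runs on hypothetical records; the paper itself draws on roughly 27,000 vulnerabilities from public databases.

from datetime import date

# hypothetical vulnerability records (exploit/patch availability dates)
vulns = [
    {"exploit": date(2009, 1, 6), "patch": date(2009, 2, 1)},
    {"exploit": date(2009, 3, 1), "patch": date(2009, 2, 20)},
    {"exploit": date(2009, 5, 2), "patch": date(2009, 5, 30)},
]

gaps = sorted((v["patch"] - v["exploit"]).days for v in vulns)
exposed = sum(1 for g in gaps if g > 0) / len(gaps)
print(f"median exploit-to-patch gap: {gaps[len(gaps) // 2]} days")
print(f"fraction with exploit available before patch: {exposed:.0%}")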


Computer Networks | 2011

Accurate network anomaly classification with generalized entropy metrics

Bernhard Tellenbach; Martin Burkhart; Dominik Schatzmann; David Gugelmann; Didier Sornette

The accurate detection and classification of network anomalies based on traffic feature distributions is still a major challenge. Together with volume metrics, traffic feature distributions are the primary source of information for approaches that scale to high-speed and large-scale networks. In previous work, we proposed to use the Tsallis entropy based traffic entropy spectrum (TES) to capture changes in specific activity regions, such as the region of heavy hitters or rare elements. Our preliminary results suggested that the TES not only provides more detail about an anomaly but might also be better suited for detecting anomalies than traditional approaches based on Shannon entropy. We refine the TES and propose a comprehensive anomaly detection and classification system called the entropy telescope. We analyze the importance of different entropy features, refute findings of previous work reporting a supposedly strong correlation between different feature entropies, and provide an extensive evaluation of our entropy telescope. Our evaluation with three different detection methods (Kalman filter, PCA, KLE), one classification method (SVM), and a rich set of anomaly models and real backbone traffic demonstrates the superiority of the refined TES approach over TES and the classical Shannon-only approaches. For instance, we found that when switching from Shannon to the refined TES approach, the PCA method detects small- to medium-sized anomalies up to 20% more accurately. Classification accuracy is improved by up to 19% when switching from Shannon-only to TES and by another 8% when switching from TES to the refined TES approach. To complement our evaluation, we ran the entropy telescope on one month of backbone traffic, finding that the most prevalent anomalies are different types of scanning (69-84%) and reflector DDoS attacks (15-29%).
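The Tsallis entropy underlying the TES is S_q = (1 - sum_i p_i^q) / (q - 1), which recovers Shannon entropy in the limit q -> 1; negative q emphasizes rare elements while large positive q emphasizes heavy hitters, which is what lets the spectrum localize an anomaly in a specific activity region. A minimal Python sketch, with an illustrative q grid rather than the paper's exact parameters:

import numpy as np

def tsallis_entropy(counts, q):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                     # empirical feature distribution
    if np.isclose(q, 1.0):
        return float(-(p * np.log(p)).sum())   # Shannon limit
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

def entropy_spectrum(counts, qs=(-2, -1, 0.5, 1, 2, 4)):
    # one spectrum per measurement interval; anomalies shift different
    # regions of the spectrum depending on whether they involve rare
    # elements (negative q) or heavy hitters (large positive q)
    return {q: tsallis_entropy(counts, q) for q in qs}

print(entropy_spectrum([1000, 500, 3, 2, 1, 1, 1]))  # e.g. per-source flow counts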


Passive and Active Network Measurement (PAM) | 2011

Peeling away timing error in NetFlow data

Brian Trammell; Bernhard Tellenbach; Dominik Schatzmann; Martin Burkhart

In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.
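The cyclic error has a simple origin: a NetFlow v9 export packet header carries the router's sysUpTime in milliseconds but the export time as whole UNIX seconds, so the derived base time (real time minus uptime) is underestimated by the truncated fraction of a second, which cycles between 0 and 1 s. Below is a hedged Python sketch of one way to exploit this, consistent with the description above but ignoring clock skew (which the paper also corrects for); the field names are illustrative, not a parser API.

def corrected_flow_times(export_packets):
    # observed base time is always <= true base time, short by the truncated
    # fraction of a second, so the running maximum converges on the true base
    base = float("-inf")
    packets = list(export_packets)
    for pkt in packets:
        base = max(base, pkt.export_secs - pkt.uptime_ms / 1000.0)
    for pkt in packets:
        for rec in pkt.records:              # flow start/end in sysUpTime ms
            yield (base + rec.start_ms / 1000.0,
                   base + rec.end_ms / 1000.0)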


Passive and Active Network Measurement (PAM) | 2011

FACT: flow-based approach for connectivity tracking

Dominik Schatzmann; Simon Leinen; Jochen Kögel; Wolfgang Mühlbauer

More than 20 years after the launch of the public Internet, operator forums are still full of reports about the temporary unreachability of entire networks. We propose FACT, a system that helps network operators track connectivity problems with remote autonomous systems, networks, and hosts. In contrast to existing solutions, our approach relies solely on flow-level information about observed traffic, is capable of online data processing, and is highly efficient in alerting only on those events that actually affect the studied network or its users. We evaluate FACT based on flow-level traces from a medium-sized ISP. Studying a time period of one week in September 2010, we explain the key principles behind our approach. Ultimately, these can be leveraged to detect connectivity problems and to summarize suspicious events for manual inspection by the network operator. In addition, when replaying archived traces from the past, FACT reliably recognizes reported connectivity problems that were relevant for the studied network.
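The flow-only principle can be illustrated with a toy version of the bookkeeping: within a time bin, count outgoing flows per remote network and check whether any traffic came back. The Python below is a sketch in the spirit of FACT, not the paper's exact algorithm; the flow representation and threshold are illustrative.

from collections import defaultdict

def find_unanswered(flows, min_attempts=10):
    # flows: iterable of (direction, remote_network) seen in one time bin,
    # with direction 'out' (from our network) or 'in' (towards it)
    out_count = defaultdict(int)
    answered = set()
    for direction, remote in flows:
        if direction == "in":
            answered.add(remote)
        else:
            out_count[remote] += 1
    # remote networks our hosts keep contacting without ever hearing back
    return {r for r, n in out_count.items()
            if n >= min_attempts and r not in answered}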


IEEE International Conference on Communications (ICC) | 2014

Network anomaly detection in the cloud: The challenges of virtual service migration

Kirila Adamova; Dominik Schatzmann; Bernhard Plattner; Paul Smith

The use of virtualisation technology in the cloud enables services to migrate within and across geographically diverse data centres, e.g., to enable load balancing and fault tolerance. An important part of securing cloud services is being able to detect anomalous behaviour, caused by attacks, that is evident in network traffic. However, it is not clear whether virtual service migration adversely affects the performance of contemporary network-based anomaly detection approaches. In this paper, we explore this issue, and show that wide-area virtual service migration can adversely affect state-of-the-art network flow-based anomaly detection techniques, potentially rendering them unusable.


IEEE Communications Magazine | 2013

Collaborative network outage troubleshooting with secure multiparty computation

Mentari Djatmiko; Dominik Schatzmann; Xenofontas A. Dimitropoulos; Arik Friedman; Roksana Boreli

Troubleshooting network outages is a complex and time-consuming process. Network administrators are typically overwhelmed with large volumes of monitoring data, such as SNMP and NetFlow measurements, from which it is very hard to separate actionable from non-actionable events. In addition, they can only debug network problems using very basic tools, like ping and traceroute. In this context, intelligent correlation of measurements from different Internet locations is essential for analyzing the root cause of outages. However, correlating measurements across domains raises privacy concerns and hence is largely avoided. A possible solution to the privacy barrier is secure multi-party computation (MPC), that is, a set of cryptographic methods that enable a number of parties to aggregate private data without revealing sensitive information. In this article, we propose a distributed mechanism based on MPC for privacy-preserving correlation of NetFlow measurements from multiple ISPs, which helps in the diagnosis of network outages. We first outline an MPC protocol that can be used to analyze the scope (local, global, or semi-global) and severity of network outages across multiple ISPs. Then we use NetFlow data from a medium-sized ISP to evaluate the performance of our protocol. Our findings indicate that correlating data from several dozen ISPs is feasible in near real time, with a delay of just a few seconds. This demonstrates the scalability and potential for real-world deployment of MPC-based schemes. Finally, as a case study we demonstrate how our scheme helped analyze, across multiple domains, the impact that Hurricane Sandy had on Internet connectivity in terms of scope and severity.
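Once the per-ISP outage indicators have been aggregated under MPC, classifying the scope reduces to comparing how many ISPs observe the event. A Python sketch of that final step, with the MPC layer elided and the category boundaries stated as illustrative assumptions:

def outage_scope(reporting_isps, total_isps):
    if reporting_isps == 0:
        return "none"
    if reporting_isps == 1:
        return "local"         # visible from a single ISP only
    if reporting_isps == total_isps:
        return "global"        # every participating ISP is affected
    return "semi-global"       # a strict subset of ISPs is affected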


Traffic Monitoring and Analysis (TMA) | 2011

Identifying Skype traffic in a large-scale flow data repository

Brian Trammell; Elisa Boschi; Christian Callegari; Peter Dorfinger; Dominik Schatzmann

We present a novel method for identifying Skype clients and supernodes on a network using only flow data, based upon the detection of certain Skype control traffic. Flow-level identification allows long-term retrospective studies of Skype traffic as well as studies of Skype traffic on much larger scale networks than existing packet-based approaches. We use this method to identify Skype hosts and connection events to the network in a historical flow data set containing 182 full days of data over the six years from 2004 to 2009, in order to explore the evolution of the Skype network in general and a large observed portion thereof in particular. This represents, to the best of our knowledge, the first long-term retrospective analysis of the behavior of the Skype network based solely on flow data, and the first successful application of a Skype detection algorithm to flow data collected from a production network.


Conference on Emerging Networking Experiments and Technologies (CoNEXT) | 2013

Federated flow-based approach for privacy preserving connectivity tracking

Mentari Djatmiko; Dominik Schatzmann; Xenofontas A. Dimitropoulos; Arik Friedman; Roksana Boreli

Network outages are an important issue for Internet Service Providers (ISPs) and, more generally, online service providers, as they can result in major financial losses and negatively impact relationships with their customers. Troubleshooting network outages is a complex and time-consuming process. Network administrators are overwhelmed with large volumes of monitoring data and are limited to using very basic tools for debugging, e.g., ping and traceroute. Intelligent correlation of measurements from different Internet locations is very useful for analyzing the root cause of outages. However, correlating measurements of user traffic across domains is largely avoided as it raises privacy concerns. A possible solution is secure multi-party computation (MPC), a set of cryptographic methods that enable a number of parties to aggregate data in a privacy-preserving manner. In this work, we describe a novel system that helps diagnose network outages by correlating passive measurements from multiple ISPs in a privacy-preserving manner. We first show how MPC can be used to compute the scope (local, global, or semi-global) and severity (number of affected hosts) of network outages. To meet near-real-time monitoring guarantees, we then present an efficient protocol for MPC multiset union that uses counting Bloom filters (CBF) to drastically accelerate MPC comparison operations. Finally, we demonstrate the utility of our scheme using real-world traffic measurements from a national ISP and we discuss the trade-offs of the CBF-based computation.
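The reason counting Bloom filters help is that a multiset union becomes element-wise addition of counter arrays, which is cheap inside MPC, whereas naive per-element set operations require many expensive secure comparisons. Below is a self-contained Python sketch of the CBF building block (outside MPC); the array size, hash construction, and addresses are illustrative, not the paper's parameters.

import hashlib

M, K = 1024, 3  # counter-array size and number of hash functions (illustrative)

def cbf_insert(cbf, item):
    for k in range(K):
        digest = hashlib.sha256(f"{k}:{item}".encode()).digest()
        cbf[int.from_bytes(digest[:4], "big") % M] += 1

def cbf_union(a, b):
    # multiset union is plain counter addition: no per-element comparisons
    return [x + y for x, y in zip(a, b)]

isp_a, isp_b = [0] * M, [0] * M
cbf_insert(isp_a, "10.0.0.1")
cbf_insert(isp_b, "10.0.0.1")
cbf_insert(isp_b, "10.0.0.2")
union = cbf_union(isp_a, isp_b)  # affected-host multiplicities estimated from 'union'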
