Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andreas Haeberlen is active.

Publication


Featured research published by Andreas Haeberlen.


ACM/IEEE International Conference on Mobile Computing and Networking | 2004

Practical robust localization over large-scale 802.11 wireless networks

Andreas Haeberlen; Eliot Flannery; Andrew M. Ladd; Algis Rudys; Dan S. Wallach; Lydia E. Kavraki

We demonstrate a system built using probabilistic techniques that allows for remarkably accurate localization across our entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. While prior systems have required significant investments of human labor to build a detailed signal map, we can train our system by spending less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of our building's unmodified base stations. We actually collected over two minutes of data per office or region, about 28 man-hours of effort. Using less than half of this data to train the localizer, we can localize a user to the precise, correct location in over 95% of our attempts, across the entire building. Even in the most pathological cases, we almost never localize a user any more distant than to the neighboring office. A user can obtain this level of accuracy with only two or three signal intensity measurements, allowing for a high frame rate of localization results. Furthermore, with a brief calibration period, our system can be adapted to work with previously unknown user hardware. We present results demonstrating the robustness of our system against a variety of untrained time-varying phenomena, including the presence or absence of people in the building across the day. Our system is sufficiently robust to enable a variety of location-aware applications without requiring special-purpose hardware or complicated training and calibration procedures.
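
The core technique is signal-strength fingerprinting with a probabilistic model. The sketch below illustrates the general idea under simple assumptions (one Gaussian per access point per location, maximum-likelihood selection over locations); it is not the paper's exact algorithm, and all function names are illustrative.

```python
# Minimal sketch of signal-strength fingerprint localization (illustrative only).
# Each location is modeled by a Gaussian over the RSSI observed from each access
# point; localization picks the location with the highest likelihood.
import math
from collections import defaultdict

def train(fingerprints):
    """fingerprints: {location: [{ap_mac: rssi, ...}, ...]} -> per-AP Gaussians per location."""
    model = {}
    for loc, samples in fingerprints.items():
        readings = defaultdict(list)
        for sample in samples:
            for ap, rssi in sample.items():
                readings[ap].append(rssi)
        stats = {}
        for ap, values in readings.items():
            mean = sum(values) / len(values)
            var = sum((x - mean) ** 2 for x in values) / len(values)
            stats[ap] = (mean, max(1.0, var ** 0.5))   # floor sigma at 1 dBm
        model[loc] = stats
    return model

def localize(model, observation):
    """Return the most likely location for a single {ap_mac: rssi} observation."""
    best_loc, best_logp = None, float("-inf")
    for loc, ap_stats in model.items():
        logp = 0.0
        for ap, rssi in observation.items():
            if ap not in ap_stats:
                logp += math.log(1e-6)                 # AP never seen at this location
                continue
            mu, sigma = ap_stats[ap]
            logp += -0.5 * ((rssi - mu) / sigma) ** 2 - math.log(sigma)
        if logp > best_logp:
            best_loc, best_logp = loc, logp
    return best_loc
```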


Internet Measurement Conference | 2007

Characterizing residential broadband networks

Andreas Haeberlen; Krishna P. Gummadi; Stefan Saroiu

A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today's Internet. Their characteristics critically affect Internet applications, including voice-over-IP, online games, and peer-to-peer content sharing/delivery systems. However, to date, few studies have investigated commercial broadband deployments, and rigorous measurement data that characterize these networks at scale are lacking. In this paper, we present the first large-scale measurement study of major cable and DSL providers in North America and Europe. We describe and evaluate the measurement tools we developed for this purpose. Our study characterizes several properties of broadband networks, including link capacities, packet round-trip times and jitter, packet loss rates, queue lengths, and queue drop policies. Our analysis reveals important ways in which residential networks differ from how the Internet is conventionally thought to operate. We also discuss the implications of our findings for many emerging protocols and systems, including delay-based congestion control (e.g., PCP) and network coordinate systems (e.g., Vivaldi).
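
As a flavor of the simplest last-mile measurements involved, the sketch below estimates round-trip time and jitter by timing repeated TCP connection setups from an end host to a cooperating server. It is only an illustration; the paper's tools use more careful techniques (e.g., packet trains for capacity estimation), and the host name is a placeholder.

```python
# Illustrative end-host probe: time repeated TCP connection setups to estimate
# RTT, jitter (mean absolute difference of consecutive RTTs), and loss.
import socket
import time

def probe_rtts(host, port=80, count=20, timeout=2.0):
    """Return (list of RTTs in ms, number of probes sent); failed probes are dropped."""
    rtts = []
    for _ in range(count):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            pass
        time.sleep(0.2)
    return rtts, count

def summarize(rtts, sent):
    if len(rtts) < 2:
        return None
    jitter = sum(abs(a - b) for a, b in zip(rtts, rtts[1:])) / (len(rtts) - 1)
    return {"min_ms": min(rtts),
            "median_ms": sorted(rtts)[len(rtts) // 2],
            "jitter_ms": jitter,
            "loss_rate": 1.0 - len(rtts) / sent}

# Example: rtts, sent = probe_rtts("example.com"); print(summarize(rtts, sent))
```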


Symposium on Operating Systems Principles | 2007

PeerReview: practical accountability for distributed systems

Andreas Haeberlen; Petr Kouznetsov; Peter Druschel

We describe PeerReview, a system that provides accountability in distributed systems. PeerReview ensures that Byzantine faults whose effects are observed by a correct node are eventually detected and irrefutably linked to a faulty node. At the same time, PeerReview ensures that a correct node can always defend itself against false accusations. These guarantees are particularly important for systems that span multiple administrative domains, which may not trust each other. PeerReview works by maintaining a secure record of the messages sent and received by each node. The record is used to automatically detect when a node's behavior deviates from that of a given reference implementation, thus exposing faulty nodes. PeerReview is widely applicable: it only requires that a correct node's actions are deterministic, that nodes can sign messages, and that each node is periodically checked by a correct node. We demonstrate that PeerReview is practical by applying it to three different types of distributed systems: a network filesystem, a peer-to-peer system, and an overlay multicast system.
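
The key building block is a tamper-evident log of each node's messages. The sketch below shows a hash-chained log in that spirit; it uses HMAC as a stand-in for the per-node signatures of the real protocol and omits the authenticator exchange and auditing between nodes, so it is an illustration rather than PeerReview itself.

```python
# Hash-chained message log (illustration only): each entry is chained to its
# predecessor, so a node cannot later rewrite history without breaking the chain.
import hashlib
import hmac
import json

class TamperEvidentLog:
    def __init__(self, node_key: bytes):
        self.key = node_key
        self.entries = []
        self.top_hash = b"\x00" * 32            # hash of the empty log

    def append(self, direction: str, message: dict) -> dict:
        """Record one sent or received message and extend the hash chain."""
        payload = json.dumps({"dir": direction, "msg": message}, sort_keys=True).encode()
        new_hash = hashlib.sha256(self.top_hash + payload).digest()
        auth = hmac.new(self.key, new_hash, hashlib.sha256).hexdigest()
        entry = {"payload": payload, "hash": new_hash, "auth": auth}
        self.entries.append(entry)
        self.top_hash = new_hash
        return entry

    def verify(self) -> bool:
        """Replay the chain from the start; any retroactive edit changes the hashes."""
        h = b"\x00" * 32
        for e in self.entries:
            h = hashlib.sha256(h + e["payload"]).digest()
            expected = hmac.new(self.key, h, hashlib.sha256).hexdigest()
            if h != e["hash"] or not hmac.compare_digest(expected, e["auth"]):
                return False
        return h == self.top_hash
```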


Operating Systems Review | 2010

A case for the accountable cloud

Andreas Haeberlen

For many companies, clouds are becoming an interesting alternative to a dedicated IT infrastructure. However, cloud computing also carries certain risks for both the customer and the cloud provider. The customer places his computation and data on machines he cannot directly control; the provider agrees to run a service whose details he does not know. If something goes wrong - for example, data leaks to a competitor, or the computation returns incorrect results - it can be difficult for customer and provider to determine which of them has caused the problem, and, in the absence of solid evidence, it is nearly impossible for them to hold each other responsible for the problem if a dispute arises. In this paper, we propose that the cloud should be made accountable to both the customer and the provider. Both parties should be able to check whether the cloud is running the service as agreed. If a problem appears, they should be able to determine which of them is responsible, and to prove the presence of the problem to a third party, such as an arbitrator or a judge. We outline the technical requirements for an accountable cloud, and we describe several challenges that are not yet met by current accountability techniques.


Internet Measurement Conference | 2008

Detecting BitTorrent blocking

Alan Mislove; Andreas Haeberlen; Krishna P. Gummadi

Recently, it has been reported that certain access ISPs are surreptitiously blocking their customers from uploading data using the popular BitTorrent file-sharing protocol. The reports have sparked an intense and wide-ranging policy debate on network neutrality and ISP traffic management practices. However, to date, end users lack access to measurement tools that can detect whether their access ISPs are blocking their BitTorrent traffic. And since ISPs do not voluntarily disclose their traffic management policies, no one knows how widely BitTorrent traffic blocking is deployed in the current Internet. In this paper, we address this problem by designing an easy-to-use tool to detect BitTorrent blocking and by presenting results from a widely used public deployment of the tool.
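
One way to build such a tool, sketched below under simple assumptions, is to run two otherwise identical uploads to a cooperating measurement server, one starting with a real BitTorrent handshake and one carrying only random bytes, and flag blocking only when the BitTorrent-looking flow is the one that dies. The server address and port are placeholders, and the deployed tool is considerably more careful about repetition and measurement noise.

```python
# Illustrative blocking check: compare a BitTorrent-looking upload with a
# random-bytes control upload to the same measurement server.
import os
import socket

# Standard BitTorrent handshake framing: length-prefixed protocol string,
# 8 reserved bytes, random info hash and peer id (fine for an emulated flow).
BT_HANDSHAKE = b"\x13BitTorrent protocol" + b"\x00" * 8 + os.urandom(20) + os.urandom(20)

def upload_survives(server, port, first_bytes, total=2 * 1024 * 1024) -> bool:
    """Return True if we can push `total` bytes without the connection dying."""
    try:
        with socket.create_connection((server, port), timeout=10) as s:
            s.sendall(first_bytes)
            sent = len(first_bytes)
            chunk = os.urandom(16 * 1024)
            while sent < total:
                s.sendall(chunk)
                sent += len(chunk)
        return True
    except (ConnectionResetError, BrokenPipeError, socket.timeout):
        return False

def looks_like_bittorrent_blocking(server="measurement.example.org", port=6881) -> bool:
    bt_ok = upload_survives(server, port, BT_HANDSHAKE)
    control_ok = upload_survives(server, port, os.urandom(len(BT_HANDSHAKE)))
    return control_ok and not bt_ok   # only the BitTorrent-looking flow failed
```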


Symposium on Operating Systems Principles | 2011

Secure network provenance

Wenchao Zhou; Qiong Fei; Arjun Narayan; Andreas Haeberlen; Boon Thau Loo; Micah Sherr

This paper introduces secure network provenance (SNP), a novel technique that enables networked systems to explain to their operators why they are in a certain state -- e.g., why a suspicious routing table entry is present on a certain router, or where a given cache entry originated. SNP provides network forensics capabilities by permitting operators to track down faulty or misbehaving nodes, and to assess the damage such nodes may have caused to the rest of the system. SNP is designed for adversarial settings and is robust to manipulation; its tamper-evident properties ensure that operators can detect when compromised nodes lie or falsely implicate correct nodes. We also present the design of SNooPy, a general-purpose SNP system. To demonstrate that SNooPy is practical, we apply it to three example applications: the Quagga BGP daemon, a declarative implementation of Chord, and Hadoop MapReduce. Our results indicate that SNooPy can efficiently explain state in an adversarial setting, that it can be applied with minimal effort, and that its costs are low enough to be practical.
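
At its core, a provenance system keeps, for every piece of derived state, the rule and inputs that produced it, so operators can walk the resulting graph to answer "why" queries. The sketch below illustrates only that graph and query; the tamper-evident, adversary-resistant machinery that distinguishes SNP and SNooPy is omitted, and the example tuples are made up.

```python
# Minimal provenance graph with a recursive "why" query (illustration only).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Vertex:
    node: str          # which network node holds this state
    tuple_: str        # e.g. "route(10.0.0.0/8, via=r2)"

@dataclass
class ProvenanceGraph:
    edges: dict = field(default_factory=dict)   # derived vertex -> (rule, [input vertices])

    def record(self, derived: Vertex, rule: str, inputs: list):
        self.edges[derived] = (rule, list(inputs))

    def why(self, v: Vertex, depth=0):
        """Recursively explain how `v` was derived; base facts have no entry."""
        indent = "  " * depth
        if v not in self.edges:
            print(f"{indent}{v.node}: {v.tuple_}   [base fact]")
            return
        rule, inputs = self.edges[v]
        print(f"{indent}{v.node}: {v.tuple_}   [derived by {rule}]")
        for i in inputs:
            self.why(i, depth + 1)

# Example: a suspicious route derived from an advertisement received from r2.
g = ProvenanceGraph()
adv = Vertex("r2", "advertise(10.0.0.0/8)")
route = Vertex("r1", "route(10.0.0.0/8, via=r2)")
g.record(route, "bgp_import", [adv])
g.why(route)
```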


Symposium on Principles of Programming Languages | 2013

Linear dependent types for differential privacy

Marco Gaboardi; Andreas Haeberlen; Justin Hsu; Arjun Narayan; Benjamin C. Pierce

Differential privacy offers a way to answer queries about sensitive information while providing strong, provable privacy guarantees, ensuring that the presence or absence of a single individual in the database has a negligible statistical effect on the query's result. Proving that a given query has this property involves establishing a bound on the query's sensitivity---how much its result can change when a single record is added or removed. A variety of tools have been developed for certifying that a given query is differentially private. In one approach, Reed and Pierce [34] proposed a functional programming language, Fuzz, for writing differentially private queries. Fuzz uses linear types to track sensitivity and a probability monad to express randomized computation; it guarantees that any program with a certain type is differentially private. Fuzz can successfully verify many useful queries. However, it fails when the sensitivity analysis depends on values that are not known statically. We present DFuzz, an extension of Fuzz with a combination of linear indexed types and lightweight dependent types. This combination allows a richer sensitivity analysis that is able to certify a larger class of queries as differentially private, including ones whose sensitivity depends on runtime information. As in Fuzz, the differential privacy guarantee follows directly from the soundness theorem of the type system. We demonstrate the enhanced expressivity of DFuzz by certifying differential privacy for a broad class of iterative algorithms that could not be typed previously.
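
Fuzz and DFuzz certify sensitivity statically through the type system; the sketch below only shows the runtime fact that analysis protects: a query of sensitivity s answered with Laplace noise of scale s/ε is ε-differentially private. Nothing here models the type system itself, and the function names are illustrative.

```python
# Laplace mechanism scaled by query sensitivity (the property Fuzz/DFuzz types certify).
import random

def laplace_noise(scale: float) -> float:
    """The difference of two exponentials with mean `scale` is Laplace(0, scale)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Counting queries have sensitivity 1: one record changes the count by at most 1."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

def private_sum(values, clamp: float, epsilon: float) -> float:
    """Clamping each value to [-clamp, clamp] bounds the sensitivity by clamp."""
    clamped = [max(-clamp, min(clamp, v)) for v in values]
    return sum(clamped) + laplace_noise(clamp / epsilon)
```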


IEEE Computer Security Foundations Symposium | 2014

Differential Privacy: An Economic Method for Choosing Epsilon

Justin Hsu; Marco Gaboardi; Andreas Haeberlen; Sanjeev Khanna; Arjun Narayan; Benjamin C. Pierce; Aaron Roth

Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
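
A deliberately simplified stand-in for this kind of accounting, not the paper's actual model: assume a participant's compensation must grow with e^ε (larger ε means more potential harm), so a fixed study budget buys fewer participants as ε rises, while the Laplace noise in the result shrinks with ε·N and the sampling error shrinks with √N. Balancing the two suggests a concrete ε. All parameter names below are invented for illustration.

```python
# Toy cost/accuracy tradeoff for choosing epsilon (illustrative model only).
import math

def study_error(epsilon, budget, baseline_cost, sampling_coeff=0.5):
    price = (math.exp(epsilon) - 1.0) * baseline_cost   # per-participant payment
    n = budget / price                                   # affordable cohort size
    if n < 1:
        return float("inf")
    noise_error = 1.0 / (epsilon * n)                    # Laplace noise for a proportion query
    sampling_error = sampling_coeff / math.sqrt(n)       # finite-sample error
    return noise_error + sampling_error

def pick_epsilon(budget, baseline_cost, candidates=(0.05, 0.1, 0.2, 0.5, 1.0, 2.0)):
    """Choose the candidate epsilon that minimizes total expected error."""
    return min(candidates, key=lambda e: study_error(e, budget, baseline_cost))

# Example: pick_epsilon(budget=10_000.0, baseline_cost=5.0)
```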


European Conference on Computer Systems | 2006

Experiences in building and operating ePOST, a reliable peer-to-peer application

Alan Mislove; Ansley Post; Andreas Haeberlen; Peter Druschel

Peer-to-peer (p2p) technology can potentially be used to build highly reliable applications without a single point of failure. However, most of the existing applications, such as file sharing or web caching, have only moderate reliability demands. Without a challenging proving ground, it remains unclear whether the full potential of p2p systems can be realized. To provide such a proving ground, we have designed, deployed and operated a p2p-based email system. We chose email because users depend on it for their daily work and therefore place high demands on the availability and reliability of the service, as well as the durability, integrity, authenticity and privacy of their email. Our system, ePOST, has been actively used by a small group of participants for over two years. In this paper, we report the problems and pitfalls we encountered in this process. We were able to address some of them by applying known principles of system design, while others turned out to be novel and fundamental, requiring us to devise new solutions. Our findings can be used to guide the design of future reliable p2p systems and provide interesting new directions for future research.


ACM Special Interest Group on Data Communication | 2008

SatelliteLab: adding heterogeneity to planetary-scale network testbeds

Andreas Haeberlen; Ivan Beschastnikh; Krishna P. Gummadi; Stefan Saroiu

Planetary-scale network testbeds like PlanetLab and RON have become indispensable for evaluating prototypes of distributed systems under realistic Internet conditions. However, current testbeds lack the heterogeneity that characterizes the commercial Internet. For example, most testbed nodes are connected to well-provisioned research networks, whereas most Internet nodes are in edge networks. In this paper, we present the design, implementation, and evaluation of SatelliteLab, a testbed that includes nodes from a diverse set of Internet edge networks. SatelliteLab has a two-tier architecture, in which well-provisioned nodes called planets form the core, and lightweight nodes called satellites connect to the planets from the periphery. The application code of an experiment runs on the planets, whereas the satellites only forward network traffic. Thus, the traffic is subjected to the network conditions of the satellites, which greatly improves the testbed's network heterogeneity. The separation of code execution and traffic forwarding enables satellites to remain lightweight, which lowers the barrier to entry for Internet edge nodes. Our prototype of SatelliteLab uses PlanetLab nodes as planets and a set of 32 volunteered satellites with diverse network characteristics. These satellites consist of desktops, laptops, and handhelds connected to the Internet via cable, DSL, ISDN, Wi-Fi, Bluetooth, and cellular links. We evaluate SatelliteLab's design, and we demonstrate the benefits of evaluating applications on SatelliteLab.
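
The division of labor is the interesting part: satellites only forward traffic, so they can be as simple as the sketch below, while experiment code runs on the planets. This is an illustration of the satellite role under simple assumptions (UDP echo framing, placeholder port), not SatelliteLab's actual forwarder.

```python
# Lightweight "satellite": bounce every UDP datagram back to the planet that
# sent it, so the packet experiences this host's access link both ways.
import socket

def run_satellite(listen_port=9000, bufsize=2048):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", listen_port))
    while True:
        data, planet_addr = sock.recvfrom(bufsize)
        # The detour subjects the packet to our last-mile link (DSL, cable,
        # cellular, ...) before it returns to the well-provisioned planet.
        sock.sendto(data, planet_addr)

# A planet can measure the detour by timestamping each datagram it sends and
# computing rtt = time.monotonic() - send_time when the echo returns.
```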

Collaboration


Dive into Andreas Haeberlen's collaborations.

Top Co-Authors

Boon Thau Loo, University of Pennsylvania
Ang Chen, University of Pennsylvania
Arjun Narayan, University of Pennsylvania
Yang Wu, University of Pennsylvania
Zachary G. Ives, University of Pennsylvania