

Publication


Featured research published by John Jannotti.


Workshop on Privacy in the Electronic Society | 2007

Making p2p accountable without losing privacy

Mira Belenkiy; Melissa Chase; C. Christopher Erway; John Jannotti; Alptekin Küpçü; Anna Lysyanskaya; Eric Rachlin

Peer-to-peer systems have been proposed for a wide variety of applications, including file-sharing, web caching, distributed computation, cooperative backup, and onion routing. An important motivation for such systems is self-scaling. That is, increased participation increases the capacity of the system. Unfortunately, this property is at risk from selfish participants. The decentralized nature of peer-to-peer systems makes accounting difficult. We show that e-cash can be a practical solution to the desire for accountability in peer-to-peer systems while maintaining their ability to self-scale. No less important, e-cash is a natural fit for peer-to-peer systems that attempt to provide (or preserve) privacy for their participants. We show that e-cash can be used to provide accountability without compromising the existing privacy goals of a peer-to-peer system. We show how e-cash can be practically applied to a file sharing application. Our approach includes a set of novel cryptographic protocols that mitigate the computational and communication costs of anonymous e-cash transactions, and system design choices that further reduce overhead and distribute load. We conclude that provably secure, anonymous, and scalable peer-to-peer systems are within reach.


ACM Special Interest Group on Data Communication | 2008

Incentivizing outsourced computation

Mira Belenkiy; Melissa Chase; C. Christopher Erway; John Jannotti; Alptekin Küpçü; Anna Lysyanskaya

We describe different strategies a central authority, the boss, can use to distribute computation to untrusted contractors. Our problem is inspired by volunteer distributed computing projects such as SETI@home, which outsource computation to large numbers of participants. For many tasks, verifying a task's output requires as much work as computing it again; additionally, some tasks may produce certain outputs with greater probability than others. A selfish contractor may try to exploit these factors by submitting potentially incorrect results and claiming a reward. Further, malicious contractors may respond incorrectly, to cause direct harm or to create additional overhead for result-checking. We consider the scenario where there is a credit system whereby users can be rewarded for good work and fined for cheating. We show how to set rewards and fines that incentivize proper behavior from rational contractors and mitigate the damage caused by malicious contractors. We analyze two strategies: random double-checking by the boss, and hiring multiple contractors to perform the same job. We also present a bounty mechanism when multiple contractors are employed; the key insight is to give a reward to a contractor who catches another worker cheating. Furthermore, if we can assume that at least a small fraction h of the contractors are honest (1% - 10%), then we can provide graceful degradation for the accuracy of the system and the work the boss has to perform. This is much better than the Byzantine approach, which typically assumes h > 60%.
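The reward-and-fine reasoning above can be made concrete with a small back-of-the-envelope sketch. This is illustrative only; the function, parameter names, and numbers are assumptions, not the paper's actual model. Under random double-checking with audit probability p, a rational contractor is deterred from cheating exactly when the expected fine outweighs the saved computation cost.

```python
def min_fine(reward: float, cost: float, check_prob: float) -> float:
    """Smallest fine deterring a rational contractor (illustrative model).

    Honest payoff:   reward - cost
    Cheating payoff: (1 - check_prob) * reward - check_prob * fine
    Cheating becomes unprofitable when fine > cost / check_prob - reward.
    """
    return max(0.0, cost / check_prob - reward)

# With a 10% audit rate, a unit reward, and computation cost 0.5,
# the fine must exceed 0.5 / 0.1 - 1.0 = 4.0 units.
print(min_fine(reward=1.0, cost=0.5, check_prob=0.1))  # 4.0
```

Note how the required fine grows as the audit rate falls; one intuition for the bounty mechanism described above is that paying contractors who catch cheaters raises the effective detection probability without additional work for the boss.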


European Conference on Computer Systems | 2008

BorderPatrol: isolating events for black-box tracing

Eric Koskinen; John Jannotti

Causal request traces are valuable to developers of large concurrent and distributed applications, yet difficult to obtain. Traces show how a request is processed, and can be analyzed by tools to detect performance or correctness errors and anomalous behavior. We present BorderPatrol, which obtains precise request traces through systems built from a litany of unmodified modules. Traced components include Apache, thttpd, PostgreSQL, TurboGears, BIND and, notably, Zeus, a closed-source event-driven web server. BorderPatrol obtains traces using active observation, which carefully modifies the event stream observed by modules, simplifying precise observation. Protocol processors leverage knowledge about standard protocols, avoiding application-specific instrumentation. BorderPatrol obtains precise traces for black-box systems that cannot be traced by any other technique. We confirm the accuracy of BorderPatrol's traces by comparing to manual instrumentation, and compare the developer effort required for each kind of trace. BorderPatrol imposes limited overhead on real systems (approximately 10-15%) and may be enabled or disabled at run-time, making it a viable option for deployment in production environments.


International Conference on Data Engineering | 2005

Locality Aware Networked Join Evaluation

Yanif Ahmad; Ugur Çetintemel; John Jannotti; Alexander Zgolinski

We pose the question: how do we efficiently evaluate a join operator distributed over a heterogeneous network? Our objective here is to optimize the delay of output tuples. We discuss the key challenges involved in the distribution, namely how to partition the join operator, how to place the resulting partitions on the network, and how to route input values from sources to our operators. Our model revolves around one simple concept: exploiting locality. We consider data locality in the distributions of input data values, and network locality in the distribution of network distances between sites. We sketch strategies to partition the input data space and instantiate a structured topology, consisting of operator replicas to which tuples are routed for processing. Finally, we briefly discuss implementation issues that must be addressed to enable the networked join proposed here.
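The locality idea can be sketched as follows. This is a hedged illustration, not the paper's algorithm; the range-partition scheme, the replica table, and the distance map are all assumptions. Each input tuple is routed to the closest replica responsible for its join-key value.

```python
# Illustrative sketch: partition the join-key space into ranges, replicate
# each partition's operator at several sites, and route each input tuple
# to the replica that is nearest in network distance.

def partition(value: int, boundaries: list[int]) -> int:
    """Map a join-key value to a partition index via range boundaries."""
    for i, bound in enumerate(boundaries):
        if value < bound:
            return i
    return len(boundaries)

def route(value: int, source: str, boundaries, replicas, dist) -> str:
    """Among the replicas hosting this value's partition, pick the one
    with the smallest network distance from the source site."""
    p = partition(value, boundaries)
    return min(replicas[p], key=lambda site: dist[(source, site)])

boundaries = [100, 200]                       # three key ranges
replicas = {0: ["a", "b"], 1: ["c"], 2: ["a", "c"]}
dist = {("s", "a"): 5, ("s", "b"): 1, ("s", "c"): 9}
print(route(150, "s", boundaries, replicas, dist))  # partition 1 -> "c"
```

The same structure captures both forms of locality named in the abstract: data locality lives in the partition boundaries, and network locality in the distance-based replica choice.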


Distributed Event-Based Systems | 2008

Event-based constraints for sensornet programming

Jie Mao; John Jannotti; Mert Akdere; Ugur Çetintemel

We propose a sensornet programming model based on declarative spatio-temporal constraints on events only, not sensors. Where previous approaches conflate events and sensors because they are often colocated, a focus on events allows programmers to specify their intent more directly, and better supports remote sensing devices such as cameras, microphones, and rangefinders. In our model, complex events are specified as aggregations of events in time or space, without regard to sensor locations or communication paths. New techniques are required to aggregate events based on these constraints without knowledge of nearby nodes. We present a decentralized, scalable event detection framework that allows for efficient in-network aggregation without coupling events and sensors. First, we describe a SQL-style declarative language with spatio-temporal constraints between events that can be used to express complex events. Next, we show how these complex events can be assembled efficiently. The distributed event detection mechanism scales to very large networks, load balances work across sensors, and tolerates network partitions and node failures.
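A minimal sketch of the event-centric idea follows. It is illustrative only; the event schema, kinds, and thresholds are assumptions, not the paper's language. A complex event is declared purely as a spatio-temporal constraint between simple events, with no mention of which sensors produced them.

```python
import math

def matches(e1, e2, max_dt=5.0, max_dist=10.0):
    """Events are (kind, time, x, y) tuples. True if the pair satisfies
    the spatio-temporal constraint, regardless of originating sensor."""
    dt = abs(e1[1] - e2[1])
    dist = math.hypot(e1[2] - e2[2], e1[3] - e2[3])
    return dt <= max_dt and dist <= max_dist

def detect(events, kinds=("motion", "sound")):
    """Return event pairs forming the complex event: one event of each
    kind, within the time window and radius."""
    first = [e for e in events if e[0] == kinds[0]]
    second = [e for e in events if e[0] == kinds[1]]
    return [(a, b) for a in first for b in second if matches(a, b)]

events = [("motion", 0.0, 0, 0), ("sound", 2.0, 3, 4), ("sound", 30.0, 0, 0)]
print(detect(events))  # only the sound at t=2.0 is close enough in time
```

In the actual system this matching would run decentralized and in-network; the sketch only shows how the constraint itself never references sensor identity.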


International Conference on Data Engineering | 2009

Supporting Generic Cost Models for Wide-Area Stream Processing

Olga Papaemmanouil; Ugur Çetintemel; John Jannotti

Existing stream processing systems are optimized for a specific metric, which may limit their applicability to diverse applications and environments. This paper presents XFlow, a generic data stream collection, processing, and dissemination system that addresses this limitation efficiently. XFlow can express and optimize a variety of optimization metrics and constraints by distributing stream processing queries across a wide-area network. It uses metric-independent decentralized algorithms that work on localized, aggregated statistics, while avoiding local optima. To facilitate light-weight dynamic changes to the query deployment, XFlow relies on a loosely-coupled, flexible architecture consisting of multiple publish-subscribe overlay trees that can gracefully scale and adapt to changes in network and workload conditions. Based on the desired performance goals, the system progressively refines the query deployment, the structure of the overlay trees, and the statistics collection process. We provide an overview of XFlow's architecture and discuss its decentralized optimization model. We demonstrate its flexibility and effectiveness using real-world streams and experimental results obtained from XFlow's deployment on PlanetLab. The experiments reveal that XFlow can effectively optimize various performance metrics in the presence of varying network and workload conditions.


Geosensor Networks | 2008

Data-Centric Visual Sensor Networks for 3D Sensing

Mert Akdere; Ugur Çetintemel; Daniel E. Crispell; John Jannotti; Jie Mao; Gabriel Taubin

Visual Sensor Networks (VSNs) represent a qualitative leap in functionality over existing sensornets. With high data rates and precise calibration requirements, VSNs present challenges not faced by today's sensornets. The power and bandwidth required to transmit video data from hundreds or thousands of cameras to a central location for processing would be enormous. A network of smart cameras should process video data in real time, extracting features and three-dimensional geometry from the raw images of cooperating cameras. These results should be stored and processed in the network, near their origin. New content-routing techniques can allow cameras to find common features--critical for calibration, search, and tracking. We describe a novel query mechanism to mediate access to this distributed datastore, allowing high-level features to be described as compositions in space-time of simpler features.


Workshop on Program Analysis for Software Tools and Engineering | 2008

Elyze: enabling safe parallelism in event-driven servers

Kiran Pamnany; John Jannotti

It is increasingly necessary for applications to take advantage of concurrency in order to increase performance. Unfortunately, it is notoriously difficult to write correct concurrent applications as they are subject to a variety of subtle bugs that can be difficult to reproduce. We advocate an approach to developing these applications, particularly servers, in which a conservative static analysis determines when code segments may safely run in parallel, and a runtime scheduler respects these constraints, an approach that is safe by default. We have built an analyzer for event-driven servers written in C that detects unsafe data sharing between event handlers. When the analysis determines that two event handlers might access the same data, whether through a global variable or through the request-specific data structure passed to the handlers, it produces a constraint on the concurrent execution of those handlers. We are building a complementary runtime system that will use these constraints to safely run event handlers concurrently. We have analyzed thttpd, an event-driven web server, and show that our analyzer finds safe parallelism in this server, enabling increased performance without the hazards of threads and locks.
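The runtime side of this approach can be sketched as follows. This is a minimal illustration assuming hypothetical handler names and a lock-per-constraint scheme; it is not Elyze's actual implementation. The analyzer's pairwise constraints become locks, so only the handler pairs the analysis found unsafe to overlap are serialized.

```python
import threading

# Hypothetical constraint emitted by the analysis: these two handler
# kinds may touch the same data, so they must not run concurrently.
CONSTRAINTS = {frozenset({"read_log", "rotate_log"})}

locks = {c: threading.Lock() for c in CONSTRAINTS}

def run_handler(kind: str, body) -> None:
    """Run an event handler while holding the lock for every constraint
    that names this handler kind; unconstrained handlers run freely."""
    held = sorted((locks[c] for c in CONSTRAINTS if kind in c), key=id)
    for lock in held:          # fixed acquisition order avoids deadlock
        lock.acquire()
    try:
        body()
    finally:
        for lock in reversed(held):
            lock.release()

done = []
run_handler("read_log", lambda: done.append("log read"))   # constrained kind
run_handler("accept", lambda: done.append("connection"))   # no constraints
print(done)
```

Acquiring the locks in a fixed global order matters once a handler participates in several constraints; the safe-by-default property comes from the scheduler refusing to overlap any pair the conservative analysis could not prove independent.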


International Conference on Management of Data | 2006

Extensible optimization in overlay dissemination trees

Olga Papaemmanouil; Yanif Ahmad; Ugur Çetintemel; John Jannotti; Yenel Yildirim


IEEE Data(base) Engineering Bulletin | 2005

Network Awareness in Internet-Scale Stream Processing

Yanif Ahmad; Ugur Çetintemel; John Jannotti; Alexander Zgolinski; Stanley B. Zdonik

Collaboration


Dive into John Jannotti's collaborations.

Top Co-Authors

Yanif Ahmad

Johns Hopkins University
