
Publication


Featured research published by M. Frans Kaashoek.


IEEE/ACM Transactions on Networking | 2003

Chord: a scalable peer-to-peer lookup protocol for Internet applications

Ion Stoica; Robert Tappan Morris; David Liben-Nowell; David R. Karger; M. Frans Kaashoek; Frank Dabek; Hari Balakrishnan

A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
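The key-to-node mapping the abstract describes can be sketched in a few lines of Python: hash nodes and keys onto one identifier circle with consistent hashing (the paper uses a base hash such as SHA-1) and assign each key to its successor node. This is only a sketch; the 16-bit identifier space and the node/key names are illustrative assumptions, and the real protocol adds finger tables and stabilization so a node can find a successor in O(log n) messages without global knowledge.

```python
import hashlib
from bisect import bisect_left

M = 16  # identifier bits; a toy value for illustration

def chord_id(name: str) -> int:
    """Hash a node address or key name onto the m-bit identifier circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def successor(node_ids: list[int], key_id: int) -> int:
    """Return the node responsible for key_id: the first node at or after
    key_id on the circle, wrapping around if necessary."""
    i = bisect_left(node_ids, key_id)
    return node_ids[i % len(node_ids)]

nodes = sorted(chord_id(f"node-{i}") for i in range(8))
key = chord_id("my-data-item")
print(f"key {key} is stored at node {successor(nodes, key)}")
```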


international workshop on peer to peer systems | 2003

Koorde: A simple degree-optimal distributed hash table

M. Frans Kaashoek; David R. Karger

Koorde is a new distributed hash table (DHT) based on Chord [15] and de Bruijn graphs [2]. While inheriting the simplicity of Chord, Koorde meets various lower bounds, such as O(log n) hops per lookup request with only 2 neighbors per node (where n is the number of nodes in the DHT), and O(log n/log log n) hops per lookup request with O(log n) neighbors per node.
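The de Bruijn routing Koorde builds on can be sketched briefly: with b-bit identifiers, node m has edges to 2m mod 2^b and 2m+1 mod 2^b, and a lookup reaches its target by shifting the target's bits in one per hop. The sketch below assumes a complete de Bruijn graph where every identifier is a live node; Koorde's actual contribution is emulating this graph when only n of the 2^b identifiers are present.

```python
B = 8  # identifier bits; an illustrative toy value (2^B virtual nodes)

def debruijn_route(src: int, dst: int) -> list[int]:
    """Route from src to dst by following de Bruijn edges m -> 2m or 2m+1
    (mod 2^B), shifting in one bit of dst per hop, high bit first."""
    path, cur = [src], src
    for i in range(B - 1, -1, -1):
        bit = (dst >> i) & 1
        cur = ((cur << 1) | bit) % (2 ** B)
        path.append(cur)
    return path

# Every lookup finishes in at most B hops with only two outgoing edges per node.
print(debruijn_route(0b00000001, 0b10110010))
```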


Archive | 2003

Peer-to-Peer Systems II

M. Frans Kaashoek; Ion Stoica

Structured peer-to-peer overlay networks provide a useful substrate for building distributed applications, but there are general concerns over the cost of maintaining these overlays. The current approach is to configure the overlays statically and conservatively to achieve the desired reliability even under uncommon adverse conditions. This results in high cost in the common case, or poor reliability in worse-than-expected conditions. We analyze the cost of overlay maintenance in realistic dynamic environments and design novel techniques to reduce this cost by adapting to the operating conditions. With our techniques, the concerns over overlay maintenance cost are no longer warranted. Simulations using real traces show that these techniques enable high reliability and performance even in very adverse conditions, with low maintenance cost.


symposium on principles of programming languages | 1996

`C: a language for high-level, efficient, and machine-independent dynamic code generation

Dawson R. Engler; Wilson C. Hsieh; M. Frans Kaashoek

Dynamic code generation allows specialized code sequences to be created using runtime information. Since this information is by definition not available statically, the use of dynamic code generation can achieve performance inherently beyond that of static code generation. Previous attempts to support dynamic code generation have been low-level, expensive, or machine-dependent. Despite the growing use of dynamic code generation, no mainstream language provides flexible, portable, and efficient support for it. We describe `C (Tick C), a superset of ANSI C that allows flexible, high-level, efficient, and machine-independent specification of dynamically generated code. `C provides many of the performance benefits of pure partial evaluation, but in the context of a complex, statically typed, but widely used language. `C examples illustrate the ease of specifying dynamically generated code and how it can be put to use. Experiments with a prototype compiler show that `C enables excellent performance improvement (in some cases, more than an order of magnitude).
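`C itself is a C-level mechanism, so the fragment below is only an analogy in Python rather than `C syntax: it illustrates the underlying idea of dynamic code generation by emitting and compiling a routine specialized to a value that becomes known only at run time. The power-function example and generated names are assumptions for illustration, not taken from the paper.

```python
def specialize_power(n: int):
    """Generate a power function specialized to a fixed exponent n,
    unrolling the multiplications a generic pow(x, n) loop would perform."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace: dict = {}
    exec(compile(src, f"<generated power_{n}>", "exec"), namespace)
    return namespace[f"power_{n}"]

power5 = specialize_power(5)   # code generated at run time
print(power5(2))               # 32, computed by the specialized routine
```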


conference on high performance computing (supercomputing) | 1996

Dynamic Computation Migration in DSM Systems

Wilson C. Hsieh; M. Frans Kaashoek; William E. Weihl

Dynamic computation migration is the runtime choice between computation and data migration. Dynamic computation migration speeds up access to concurrent data structures with unpredictable read/write patterns. This paper describes the design, implementation, and evaluation of dynamic computation migration in a multithreaded distributed shared-memory system, MCRL. Two policies are studied, STATIC and REPEAT. Both migrate computation for writes. STATIC migrates data for reads, while REPEAT maintains a limited history of accesses and sometimes migrates computation for reads. On a concurrent, distributed B-tree with 50% lookups and 50% inserts, STATIC improves performance by about 17% on both Alewife and the CM-5. REPEAT generally performs better than STATIC. With 80% lookups and 20% inserts, REPEAT improves performance by 23% on Alewife, and by 46% on the CM-5.
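The policy split described above can be sketched as follows: writes always migrate the computation, STATIC migrates data for reads, and REPEAT consults a short per-object access history before deciding. The history length and the "object looks write-heavy" rule below are illustrative assumptions, not MCRL's actual heuristics.

```python
from collections import deque

class RepeatPolicy:
    """Decide, per remote access, whether to ship the data or the computation."""

    def __init__(self, history_len: int = 4):
        self.history: dict[int, deque] = {}
        self.history_len = history_len

    def decide(self, obj: int, is_write: bool) -> str:
        hist = self.history.setdefault(obj, deque(maxlen=self.history_len))
        if is_write:
            choice = "migrate computation"   # both STATIC and REPEAT do this
        elif hist.count(True) * 2 > len(hist):
            choice = "migrate computation"   # REPEAT: recent accesses were mostly writes
        else:
            choice = "migrate data"          # STATIC's answer for every read
        hist.append(is_write)
        return choice

policy = RepeatPolicy()
for name, is_write in [("insert", True), ("lookup", False),
                       ("insert", True), ("lookup", False)]:
    print(name, "->", policy.decide(obj=42, is_write=is_write))
```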


computer and communications security | 2014

VerSum: Verifiable Computations over Large Public Logs

Jelle van den Hooff; M. Frans Kaashoek; Nickolai Zeldovich

VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output. VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.
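A hedged sketch of the client-side flow described above: query several servers, and when their results disagree, compare their evaluation traces to find the first step on which they diverge, then re-execute just that one step locally to decide which server is wrong. SeqHash and the real conflict-resolution protocol are not reproduced here; the plain lists of per-step hashes are a simplified stand-in assumption.

```python
def first_divergence(trace_a: list[str], trace_b: list[str]) -> int:
    """Index of the first evaluation step on which two servers disagree."""
    for i, (a, b) in enumerate(zip(trace_a, trace_b)):
        if a != b:
            return i
    return min(len(trace_a), len(trace_b))

def resolve(result_a, result_b, trace_a, trace_b, reexecute_step):
    """If the servers' results differ, re-execute only the first divergent
    step locally and keep the answer from the server that got it right."""
    if result_a == result_b:
        return result_a                       # the common case: just compare outputs
    i = first_divergence(trace_a, trace_b)
    correct = reexecute_step(i)               # cheap: one step, not the whole log
    return result_a if trace_a[i] == correct else result_b
```

The point of the sketch is the asymmetry: the client never recomputes the whole log, only the single step the servers cannot agree on.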


european conference on computer systems | 2015

Hare: a file system for non-cache-coherent multicores

Charles Gruenwald; Filippo Sironi; M. Frans Kaashoek; Nickolai Zeldovich

Hare is a new file system that provides a POSIX-like interface on multicore processors without cache coherence. Hare allows applications on different cores to share files, directories, and file descriptors. The challenge in designing Hare is to support the shared abstractions faithfully enough to run applications that run on traditional shared-memory operating systems, with few modifications, and to do so while scaling with an increasing number of cores. To achieve this goal, Hare must support features (such as shared file descriptors) that traditional network file systems don't support, as well as implement them in a way that scales (e.g., shard a directory across servers to allow concurrent operations in that directory). Hare achieves this goal through a combination of new protocols (including a 3-phase commit protocol to implement directory operations correctly and scalably) and leveraging properties of non-cache-coherent multiprocessors (e.g., atomic low-latency message delivery and shared DRAM). An evaluation on a 40-core machine demonstrates that Hare can run many challenging Linux applications (including a mail server and a Linux kernel build) with minimal or no modifications. The results also show these applications achieve good scalability on Hare, and that Hare's techniques are important to achieving scalability.
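One of the scaling ideas mentioned above, sharding a directory across servers, can be sketched as a hash from (directory, entry name) to a server, so that different cores creating different entries in the same directory usually talk to different servers. The server count, hash function, and path names below are illustrative assumptions, not Hare's actual placement scheme.

```python
import hashlib

NUM_SERVERS = 4  # assumed number of file servers, one per group of cores

def server_for(directory: str, name: str) -> int:
    """Pick the server responsible for a given entry of a directory."""
    key = f"{directory}/{name}".encode()
    return int.from_bytes(hashlib.sha1(key).digest(), "big") % NUM_SERVERS

# Two cores creating different files in the same directory usually hit
# different servers, so the creates can proceed in parallel.
print(server_for("/home/alice", "a.txt"), server_for("/home/alice", "b.txt"))
```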


Communications of The ACM | 2017

Certifying a file system using Crash Hoare logic: correctness in the presence of crashes

Tej Chajed; Haogang Chen; Adam Chlipala; M. Frans Kaashoek; Nickolai Zeldovich; Daniel Ziegler

FSCQ is the first file system with a machine-checkable proof that its implementation meets a specification, even in the presence of fail-stop crashes. FSCQ provably avoids bugs that have plagued previous file systems, such as performing disk writes without sufficient barriers or forgetting to zero out directory blocks. If a crash happens at an inopportune time, these bugs can lead to data loss. FSCQ's theorems prove that, under any sequence of crashes followed by reboots, FSCQ will recover its state correctly without losing data. To state FSCQ's theorems, this paper introduces the Crash Hoare logic (CHL), which extends traditional Hoare logic with a crash condition, a recovery procedure, and logical address spaces for specifying disk states at different abstraction levels. CHL also reduces the proof effort for developers through proof automation. Using CHL, we developed, specified, and proved the correctness of the FSCQ file system. Although FSCQ's design is relatively simple, experiments with FSCQ as a user-level file system show that it is sufficient to run Unix applications with usable performance. FSCQ's specifications and proofs required significantly more work than the implementation, but the work was manageable even for a small team of a few researchers.
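The crash condition in CHL describes every intermediate disk state a crash can expose. The brute-force check below conveys what that means for a toy write-ahead log: after a crash at any prefix of the operations, followed by recovery, a two-block write must look all-or-nothing. This is only an illustrative Python test under assumed names; FSCQ proves the corresponding property once and for all in Coq.

```python
def apply_ops(disk: dict, ops: list) -> dict:
    """Apply a prefix of the disk operations to a copy of the disk."""
    d = dict(disk)
    for addr, val in ops:
        d[addr] = val
    return d

def logged_write(writes: list) -> list:
    """Write-ahead log: journal the intent, apply the writes, clear the log."""
    return [("log", tuple(writes))] + writes + [("log", ())]

def recover(disk: dict) -> dict:
    """Replay any pending journal so the logged writes are all-or-nothing."""
    d = dict(disk)
    for addr, val in d["log"]:
        d[addr] = val
    d["log"] = ()
    return d

disk = {"a": 0, "b": 0, "log": ()}
ops = logged_write([("a", 1), ("b", 1)])
for crash_point in range(len(ops) + 1):          # crash after every prefix
    state = recover(apply_ops(disk, ops[:crash_point]))
    assert (state["a"], state["b"]) in {(0, 0), (1, 1)}, state
print("every crash point recovers to an all-or-nothing state")
```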


asia pacific workshop on systems | 2011

Retroactive auditing

Xi Wang; Nickolai Zeldovich; M. Frans Kaashoek

Retroactive auditing is a new approach for detecting past intrusions and vulnerability exploits based on security patches. It works by spawning two copies of the code that was patched, one with and one without the patch, and running both of them on the same inputs observed during the system's original execution. If the resulting outputs differ, an alarm is raised, since the input may have triggered the patched vulnerability. Unlike prior tools, retroactive auditing does not require developers to write predicates for each vulnerability.
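A minimal sketch of the check described above, assuming recorded inputs can simply be replayed as function arguments: run the unpatched and patched versions on each recorded input and flag any input on which they disagree. The toy bounds-check patch is an assumption for illustration only.

```python
def audit(recorded_inputs, unpatched, patched) -> list:
    """Return the recorded inputs whose outputs differ across the two versions."""
    suspicious = []
    for inp in recorded_inputs:
        if unpatched(inp) != patched(inp):
            suspicious.append(inp)   # this input may have exercised the patched bug
    return suspicious

# Toy example: the patch adds a missing length check.
unpatched = lambda s: s[:16]                                 # silently truncates
patched   = lambda s: s[:16] if len(s) <= 16 else "REJECTED"
print(audit(["hello", "x" * 100], unpatched, patched))       # flags the oversized input
```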


workshop on computer architecture education | 2006

A systems approach to teaching computer systems

Jerome H. Saltzer; M. Frans Kaashoek

At M.I.T. we teach a class titled Computer System Engineering, a required class that provides an introduction to computer systems. It provides a broad and in-depth introduction to the main principles and abstractions of engineering computer systems, be it an operating system, a client/server application, a database application, a secure Web site, or a fault-tolerant disk cluster. These principles and abstractions are timeless and are of value to any computer science or computer engineering student, whether specializing in computer systems or not.

Collaboration


Dive into M. Frans Kaashoek's collaborations.

Top Co-Authors

Nickolai Zeldovich (Massachusetts Institute of Technology)
Jerome H. Saltzer (Massachusetts Institute of Technology)
David R. Karger (Massachusetts Institute of Technology)
Hari Balakrishnan (Massachusetts Institute of Technology)
Robert Tappan Morris (Massachusetts Institute of Technology)
Xi Wang (Massachusetts Institute of Technology)
Haogang Chen (Massachusetts Institute of Technology)
Ion Stoica (University of California)