
Publication


Featured research published by Mark David Lillibridge.


Programming Language Design and Implementation | 2002

Extended static checking for Java

Cormac Flanagan; K. Rustan M. Leino; Mark David Lillibridge; Greg Nelson; James B. Saxe; Raymie Stata

Software development and maintenance are costly endeavors. The cost can be reduced if more software defects are detected earlier in the development cycle. This paper introduces the Extended Static Checker for Java (ESC/Java), an experimental compile-time program checker that finds common programming errors. The checker is powered by verification-condition generation and automatic theorem-proving techniques. It provides programmers with a simple annotation language with which programmer design decisions can be expressed formally. ESC/Java examines the annotated software and warns of inconsistencies between the design decisions recorded in the annotations and the actual code, and also warns of potential runtime errors in the code. This paper gives an overview of the checker architecture and annotation language and describes our experience applying the checker to tens of thousands of lines of Java programs.
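ESC/Java's annotation language consists of Java pragmas such as `//@ requires` and `//@ ensures`, which the tool discharges statically with a theorem prover. Purely as a loose illustration of that requires/ensures style — checked at run time here, with hypothetical names, and not ESC/Java itself — a Python sketch:

```python
def contract(requires=None, ensures=None):
    """Attach a precondition and a postcondition to a function."""
    def wrap(fn):
        def checked(*args):
            if requires is not None:
                assert requires(*args), "precondition violated"
            result = fn(*args)
            if ensures is not None:
                assert ensures(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

# Dynamic analogue of an ESC/Java annotation such as:
#   //@ requires 0 <= i && i < a.length;
@contract(requires=lambda a, i: 0 <= i < len(a),
          ensures=lambda result, a, i: result == a[i])
def element_at(a, i):
    return a[i]
```

Where ESC/Java would warn about a possible out-of-bounds index at compile time, the sketch above merely raises an assertion failure when the precondition is actually violated.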


Modeling, Analysis, and Simulation on Computer and Telecommunication Systems | 2009

Extreme Binning: Scalable, parallel deduplication for chunk-based file backup

Deepavali Bhagwat; Kave Eshghi; Darrell D. E. Long; Mark David Lillibridge

Data deduplication is an essential and critical component of backup systems: essential because it reduces storage space requirements, and critical because the performance of the entire backup operation depends on its throughput. Traditional backup workloads consist of large data streams with high locality, which existing deduplication techniques rely on to provide reasonable throughput. We present Extreme Binning, a scalable deduplication technique for non-traditional backup workloads made up of individual files with no locality among consecutive files in a given window of time. Because these workloads lack locality, existing techniques perform poorly on them. Extreme Binning exploits file similarity instead of locality and makes only one disk access for chunk lookup per file, which yields reasonable throughput. Multi-node backup systems built with Extreme Binning scale gracefully with the amount of input data; more backup nodes can be added to boost throughput. Each file is allocated to exactly one node by a stateless routing algorithm, allowing maximum parallelization, and each backup node is autonomous, with no dependencies across nodes, making data management tasks robust and low-overhead.
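A minimal sketch of the similarity-based routing idea, under stated assumptions: fixed-size chunking stands in for the content-defined chunking a real deduplicating backup system would use, and all function names are hypothetical:

```python
import hashlib

def chunk_ids(data, size=4):
    # Fixed-size chunks stand in for content-defined chunks here.
    return [hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def representative_id(data):
    # Extreme Binning's key idea: the minimum chunk ID serves as the
    # file's representative; similar files are likely to share it and
    # therefore land in the same bin.
    return min(chunk_ids(data))

def route_to_node(data, num_nodes):
    # Stateless routing: the representative ID alone picks the backup
    # node, so no node needs knowledge of any other node's state.
    return int(representative_id(data), 16) % num_nodes
```

Because routing depends only on the file's own contents, two similar files sent at different times still reach the same node's bin, which is what lets deduplication work without cross-node coordination.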


ACM Transactions on Computer Systems | 2017

Reliability Analysis of SSDs Under Power Fault

Mai Zheng; Joseph Tucek; Feng Qin; Mark David Lillibridge; Bill W. Zhao; Elizabeth S. Yang

Modern storage technology (solid-state disks (SSDs), NoSQL databases, commoditized RAID hardware, etc.) brings new reliability challenges to the already-complicated storage stack. Among other things, the behavior of these new components during power faults—which happen relatively frequently in data centers—is an important yet mostly ignored issue in this dependability-critical area. Understanding how new storage components behave under power fault is the first step towards designing new robust storage systems. In this article, we propose a new methodology to expose reliability issues in block devices under power faults. Our framework includes specially designed hardware to inject power faults directly to devices, workloads to stress storage components, and techniques to detect various types of failures. Applying our testing framework, we test 17 commodity SSDs from six different vendors using more than three thousand fault injection cycles in total. Our experimental results reveal that 14 of the 17 tested SSD devices exhibit surprising failure behaviors under power faults, including bit corruption, shorn writes, unserializable writes, metadata corruption, and total device failure.
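The framework itself relies on custom fault-injection hardware. Purely as an illustration of how some of the failure types above (shorn writes, bit corruption) can be recognized in data read back after a fault, here is a sketch using a hypothetical record format — a sequence number plus a CRC32 of the payload — not the paper's actual workload:

```python
import struct
import zlib

HEADER = struct.Struct("<II")  # (sequence number, CRC32 of payload)

def make_record(seq, payload):
    """Format one test record as the stress workload would write it."""
    return HEADER.pack(seq, zlib.crc32(payload)) + payload

def classify(raw, payload_len):
    """Classify a record read back after a simulated power fault."""
    if len(raw) < HEADER.size + payload_len:
        return "shorn write"      # only a prefix of the write persisted
    seq, crc = HEADER.unpack_from(raw)
    payload = raw[HEADER.size:HEADER.size + payload_len]
    if zlib.crc32(payload) != crc:
        return "bit corruption"   # contents differ from what was written
    return "intact"
```

Failure types such as unserializable writes would additionally require comparing sequence numbers across records; this sketch checks only one record at a time.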


SIGPLAN Notices | 2013

PLDI 2002: Extended static checking for Java

Cormac Flanagan; K. Rustan M. Leino; Mark David Lillibridge; Greg Nelson; James B. Saxe; Raymie Stata



International Conference on Peer-to-Peer Computing | 2008

Transaction Rate Limiters for Peer-to-Peer Systems

Marcos Kawazoe Aguilera; Mark David Lillibridge; Xiaozhou Li

We introduce transaction rate limiters, new mechanisms that limit (probabilistically) the maximum number of transactions a user of a peer-to-peer system can do in any given period. They can be used to limit the consumption of selfish users and the damage done by malicious users. They complement reputation systems, solving the traitor problem. We give simple distributed algorithms that work over time frames as short as seconds and are very robust: they use no trusted servers and continue to work even when attacked by a large fraction of users colluding. Our algorithms are based on a new primitive we have devised, probably-anonymous queries, which guarantees anonymity with a specified probability.
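The paper's mechanisms are distributed, probabilistic, and trust-free. As a simplified, centralized stand-in that illustrates only the per-user, per-period limit itself — a sliding-window limiter with hypothetical names, not the paper's algorithm:

```python
from collections import deque

class TransactionRateLimiter:
    """Allow at most `limit` transactions per `window` seconds per user.
    A centralized stand-in for the paper's distributed scheme."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.history = {}  # user -> deque of recent transaction times

    def allow(self, user, now):
        q = self.history.setdefault(user, deque())
        while q and now - q[0] >= self.window:
            q.popleft()        # drop transactions outside the window
        if len(q) >= self.limit:
            return False       # over the per-period limit
        q.append(now)
        return True
```

The hard part the paper solves is enforcing such a limit without any trusted server, even against colluding users; the sketch above assumes a single honest enforcement point.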


Symposium on Cloud Computing | 2018

Memory-Oriented Distributed Computing at Rack Scale

Haris Volos; Kimberly Keeton; Yupu Zhang; Milind Chabbi; Se Kwon Lee; Mark David Lillibridge; Yuvraj Patel; Wei Zhang

Introduction: Recent advances provide the building blocks for constructing rack-scale architectures with a large pool of disaggregated non-volatile memory (NVM) that can be shared across a high-performance system interconnect by decentralized compute resources. NVDIMMs and new NVM technologies provide byte-addressable persistent storage accessible through loads and stores, rather than the block I/O path used today. High-performance system interconnects, such as Gen-Z, OmniPath, and RDMA over InfiniBand, provide low-latency access from compute nodes to fabric-attached memory (e.g., microsecond-scale remote memory latencies are already possible with RDMA). Disaggregated memory architectures share several characteristics: 1) a high-capacity pool of memory that can be shared by heterogeneous computing resources at low latency; 2) a partially disaggregated architecture that treats node-local memory as private and disaggregated memory as shared; 3) a heterogeneous memory system containing both volatile DRAM and NVM; 4) unmediated access from a compute node to disaggregated memory provided by one-sided loads/stores or gets/puts and facilitated through atomic operations (e.g., compare-and-swap as in RDMA or Gen-Z); 5) hardware-enforced cache coherence domains limited to a single compute node; and 6) a separation of fault domains between processing and disaggregated memory. MODC: Our goal is to investigate how to program this emerging class of system architectures. We propose memory-oriented distributed computing (MODC), an approach for building system runtimes that exploits disaggregated memory to facilitate work distribution, coordination, and fault tolerance. Global state is maintained as shared data structures in disaggregated memory that are visible to all participating processes, rather than being physically partitioned. Because processes on all nodes have direct access to global data structures, data can be shared efficiently, without message overheads. Processes are equally able to analyze and service requests for any part of the dataset, which provides better load balancing and more robust performance for skewed workloads. Shared access to global data also …
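A toy sketch of the coordination style described above — global state touched only through loads and compare-and-swap, with workers self-assigning work rather than receiving messages. All names are hypothetical, and a lock emulates the fabric's atomic CAS:

```python
import threading

class FabricMemory:
    """Toy stand-in for disaggregated, fabric-attached memory: processes
    access it only via loads and compare-and-swap, never messages."""

    def __init__(self, nwords):
        self.words = [0] * nwords
        self._lock = threading.Lock()  # emulates the fabric's atomic CAS

    def load(self, i):
        return self.words[i]

    def cas(self, i, expected, new):
        # Atomically install `new` at word i iff it still holds `expected`.
        with self._lock:
            if self.words[i] == expected:
                self.words[i] = new
                return True
            return False

def claim_slots(mem, worker_id):
    """Workers self-assign task slots by CAS-ing their ID into free (0)
    slots; no coordinator and no data partitioning is needed."""
    return [i for i in range(len(mem.words)) if mem.cas(i, 0, worker_id)]
```

Because a CAS either wins or observes another worker's claim, two workers racing over the same slot pool end up with disjoint assignments — the same property MODC relies on for coordination through shared memory.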


File and Storage Technologies | 2009

Sparse indexing: large scale, inline deduplication using sampling and locality

Mark David Lillibridge; Kave Eshghi; Deepavali Bhagwat; Vinay Deolalikar; Greg Trezise; Peter Thomas Camble


USENIX Annual Technical Conference | 2003

A cooperative internet backup scheme

Mark David Lillibridge; Sameh Elnikety; Andrew Birrell; Michael Burrows; Michael Isard


File and Storage Technologies | 2013

Improving restore speed for backup systems that use inline chunk-based deduplication

Mark David Lillibridge; Kave Eshghi; Deepavali Bhagwat


File and Storage Technologies | 2003

Block-level security for network-attached disks

Marcos Kawazoe Aguilera; Minwen Ji; Mark David Lillibridge; John MacCormick; Erwin Oertli; David G. Andersen; Michael Burrows; Timothy Mann; Chandramohan A. Thekkath
