Publication


Featured research published by Ronnie Chaiken.


Operating Systems Design and Implementation | 2002

Farsite: federated, available, and reliable storage for an incompletely trusted environment

Atul Adya; William J. Bolosky; Miguel Castro; Gerald Cermak; Ronnie Chaiken; John R. Douceur; Jon Howell; Jacob R. Lorch; Marvin M. Theimer; Roger Wattenhofer

Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.
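The abstract describes randomized replicated storage: each file's replicas are spread over a random set of machines so no single failure loses all copies. A minimal Python sketch of that idea (illustrative only, not Farsite's actual placement algorithm; the function name and replication degree are assumptions):

```python
import hashlib
import random

def place_replicas(pathname: str, machines: list[str], degree: int = 3) -> list[str]:
    # Derive a deterministic seed from the pathname so placement is
    # repeatable by any node that knows the path.
    seed = int.from_bytes(hashlib.sha256(pathname.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    # Sample `degree` distinct machines to hold the file's replicas.
    return rng.sample(machines, min(degree, len(machines)))

machines = [f"host{i}" for i in range(10)]
replicas = place_replicas("/users/alice/report.doc", machines)
```

Seeding from the pathname keeps placement deterministic without a central directory, in the spirit of the distributed hint mechanism the paper mentions.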


Very Large Data Bases | 2008

SCOPE: easy and efficient parallel processing of massive data sets

Ronnie Chaiken; Bob Jenkins; Per-Ake Larson; Bill Ramsey; Darren A. Shakib; Simon Weaver; Jingren Zhou

Companies providing cloud-scale services have an increasing need to store and analyze massive data sets such as search logs and click streams. For cost and performance reasons, processing is typically done on large clusters of shared-nothing commodity machines. It is imperative to develop a programming model that hides the complexity of the underlying system but provides flexibility by allowing users to extend functionality to meet a variety of requirements. In this paper, we present a new declarative and extensible scripting language, SCOPE (Structured Computations Optimized for Parallel Execution), targeted for this type of massive data analysis. The language is designed for ease of use with no explicit parallelism, while being amenable to efficient parallel execution on large clusters. SCOPE borrows several features from SQL. Data is modeled as sets of rows composed of typed columns. The select statement is retained with inner joins, outer joins, and aggregation allowed. Users can easily define their own functions and implement their own versions of operators: extractors (parsing and constructing rows from a file), processors (row-wise processing), reducers (group-wise processing), and combiners (combining rows from two inputs). SCOPE supports nesting of expressions but also allows a computation to be specified as a series of steps, in a manner often preferred by programmers. We also describe how scripts are compiled into efficient, parallel execution plans and executed on large clusters.
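The abstract names SCOPE's user-definable operators: extractors (parse rows from a file), processors (row-wise), and reducers (group-wise). A hedged Python analogue of that pipeline, not SCOPE syntax, with hypothetical column names:

```python
from collections import defaultdict

def extractor(raw_lines):
    # Extractor: parse tab-separated "query<TAB>clicks" text into typed rows.
    for line in raw_lines:
        query, clicks = line.split("\t")
        yield {"query": query, "clicks": int(clicks)}

def processor(rows):
    # Processor: row-wise transformation, here normalizing the query text.
    for row in rows:
        yield {**row, "query": row["query"].strip().lower()}

def reducer(rows, key="query"):
    # Reducer: group-wise aggregation, here summing clicks per query.
    groups = defaultdict(int)
    for row in rows:
        groups[row[key]] += row["clicks"]
    return dict(groups)

log = ["Foo\t3", "foo \t2", "bar\t1"]
totals = reducer(processor(extractor(log)))  # {"foo": 5, "bar": 1}
```

Each stage consumes and produces rows, which is what lets the runtime split the same script across many machines without explicit parallelism in the user's code.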


Internet Measurement Conference | 2009

The nature of data center traffic: measurements & analysis

Srikanth Kandula; Sudipta Sengupta; Albert G. Greenberg; Parveen Patel; Ronnie Chaiken

We explore the nature of traffic in data centers designed to support the mining of massive data sets. We instrument the servers to collect socket-level logs with negligible performance impact. In an operational cluster of 1,500 servers, we amass roughly a petabyte of measurements over two months, from which we obtain and report detailed views of traffic and congestion conditions and patterns. We further consider whether traffic matrices in the cluster might instead be obtained via tomographic inference from coarser-grained counter data.
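The socket-level logs described above can be aggregated directly into a server-to-server traffic matrix, the detailed view that coarse per-interface counters cannot give without inference. A minimal sketch under assumed flow-record fields (src, dst, bytes):

```python
from collections import Counter

def traffic_matrix(flows):
    # Sum bytes per (source server, destination server) pair.
    matrix = Counter()
    for src, dst, nbytes in flows:
        matrix[(src, dst)] += nbytes
    return matrix

# Hypothetical flow records from socket-level logging.
flows = [("s1", "s2", 100), ("s1", "s2", 50), ("s2", "s3", 10)]
tm = traffic_matrix(flows)
```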


Very Large Data Bases | 2012

SCOPE: parallel databases meet MapReduce

Jingren Zhou; Nicolas Bruno; Ming-Chuan Wu; Per-Ake Larson; Ronnie Chaiken; Darren A. Shakib

Companies providing cloud-scale data services have increasing needs to store and analyze massive data sets, such as search logs, click streams, and web graph data. For cost and performance reasons, processing is typically done on large clusters of tens of thousands of commodity machines. Such massive data analysis on large clusters presents new opportunities and challenges for developing a highly scalable and efficient distributed computation system that is easy to program and supports complex system optimization to maximize performance and reliability. In this paper, we describe a distributed computation system, Structured Computations Optimized for Parallel Execution (Scope), targeted for this type of massive data analysis. Scope combines benefits from both traditional parallel databases and MapReduce execution engines to allow easy programmability and deliver massive scalability and high performance through advanced optimization. Similar to parallel databases, the system has a SQL-like declarative scripting language with no explicit parallelism, while being amenable to efficient parallel execution on large clusters. An optimizer is responsible for converting scripts into efficient execution plans for the distributed computation engine. A physical execution plan consists of a directed acyclic graph of vertices. Execution of the plan is orchestrated by a job manager that schedules execution on available machines and provides fault tolerance and recovery, much like MapReduce systems. Scope is being used daily for a variety of data analysis and data mining applications over tens of thousands of machines at Microsoft, powering Bing and other online services.


European Conference on Computer Systems | 2006

The SMART way to migrate replicated stateful services

Jacob R. Lorch; Atul Adya; William J. Bolosky; Ronnie Chaiken; John R. Douceur; Jon Howell

Many stateful services use the replicated state machine approach for high availability. In this approach, a service runs on multiple machines to survive machine failures. This paper describes SMART, a new technique for changing the set of machines where such a service runs, i.e., migrating the service. SMART improves upon existing techniques in three important ways. First, SMART allows migrations that replace non-failed machines. Thus, SMART enables load balancing and lets an automated system replace failed machines. Such autonomic migration is an important step toward full autonomic operation, in which administrators play a minor role and need not be available twenty-four hours a day, seven days a week. Second, SMART can pipeline concurrent requests, a useful performance optimization. Third, prior published migration techniques are described in insufficient detail to admit implementation, whereas our description of SMART is complete. In addition to describing SMART, we also demonstrate its practicality by implementing it, evaluating our implementation's performance, and using it to build a consistent, replicated, migratable file system. Our experiments demonstrate the performance advantage of pipelining concurrent requests, and show that migration has only a minor and temporary effect on performance.
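The central idea above, migration as replacing a (possibly non-failed) member of a replicated service without losing its state, can be caricatured in a few lines of Python. This is a hypothetical sketch of the concept only, not the SMART protocol, which handles consensus, pipelining, and fault tolerance that this omits:

```python
class ReplicatedService:
    def __init__(self, members):
        self.members = set(members)  # machines currently running the service
        self.state = 0               # replicated application state

    def apply(self, delta):
        # Normal operation: every member applies the same command in order.
        self.state += delta

    def migrate(self, remove, add):
        # Migration: swap one machine for another; the replicated state
        # is transferred to the incoming member, not rebuilt from scratch.
        self.members.discard(remove)
        self.members.add(add)

svc = ReplicatedService(["m1", "m2", "m3"])
svc.apply(5)
svc.migrate(remove="m2", add="m4")
```

The point the paper stresses is that `migrate` may target a healthy machine, which is what enables load balancing and autonomic replacement.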


International Conference on Data Engineering | 2010

Incorporating partitioning and parallel plans into the SCOPE optimizer

Jingren Zhou; Per-Ake Larson; Ronnie Chaiken

Massive data analysis on large clusters presents new opportunities and challenges for query optimization. Data partitioning is crucial to performance in this environment. However, data repartitioning is a very expensive operation so minimizing the number of such operations can yield very significant performance improvements. A query optimizer for this environment must therefore be able to reason about data partitioning including its interaction with sorting and grouping. SCOPE is a SQL-like scripting language used at Microsoft for massive data analysis. A transformation-based optimizer is responsible for converting scripts into efficient execution plans for the Cosmos distributed computing platform. In this paper, we describe how reasoning about data partitioning is incorporated into the SCOPE optimizer. We show how relational operators affect partitioning, sorting and grouping properties and describe how the optimizer reasons about and exploits such properties to avoid unnecessary operations. In most optimizers, consideration of parallel plans is an afterthought done in a postprocessing step. Reasoning about partitioning enables the SCOPE optimizer to fully integrate consideration of parallel, serial and mixed plans into the cost-based optimization. The benefits are illustrated by showing the variety of plans enabled by our approach.
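The key optimization described above is reasoning about whether an input's existing partitioning already satisfies what an operator requires, so that the expensive repartition is inserted only when necessary. A hedged Python sketch of that style of reasoning (the rule and function names are illustrative assumptions, not the SCOPE optimizer's actual code):

```python
def satisfies(current_keys, required_keys):
    # Hash partitioning on a non-empty subset of the required keys already
    # co-locates all rows that agree on the full key set.
    cur = set(current_keys)
    return bool(cur) and cur <= set(required_keys)

def plan_join(left_keys, right_keys, join_keys):
    # Insert a repartition only for inputs whose current layout
    # does not satisfy the join's partitioning requirement.
    steps = []
    if not satisfies(left_keys, join_keys):
        steps.append("repartition left on " + ",".join(sorted(join_keys)))
    if not satisfies(right_keys, join_keys):
        steps.append("repartition right on " + ",".join(sorted(join_keys)))
    steps.append("partitioned hash join")
    return steps

# Left input is already hash-partitioned on "a"; right input is unpartitioned.
plan = plan_join(["a"], [], ["a", "b"])
```

Skipping the left-side repartition in cases like this is exactly the kind of saving the abstract attributes to property-based reasoning.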


Archive | 2000

Mojo: A Dynamic Optimization System

Wen-Ke Chen; Sorin Lerner; Ronnie Chaiken; David Mitford Gillies


Archive | 2004

Lossless recovery for computer systems with map assisted state transfer

Atul Adya; Jacob R. Lorch; Ronnie Chaiken; William J. Bolosky


Archive | 1999

Instrumentation and optimization tools for heterogeneous programs

Ronnie Chaiken; Andrew James Edwards; John A. Lefor; Jiyang Liu; Ken B. Pierce; Amitabh Srivastava; Hoi H. Vo
