Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ramakrishna Kotla is active.

Publication


Featured research published by Ramakrishna Kotla.


Symposium on Operating Systems Principles | 2007

Zyzzyva: speculative Byzantine fault tolerance

Ramakrishna Kotla; Lorenzo Alvisi; Michael Dahlin; Allen Clement; Edmund L. Wong

We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima.
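
As an illustration of the client-side logic described above, here is a minimal sketch of Zyzzyva's completion rule, assuming n = 3f + 1 replicas. The response format and transport are hypothetical; only the counting thresholds follow the protocol.

```python
from collections import Counter

def completion_decision(responses, f):
    """Decide how a client may complete a request from speculative responses.

    `responses` is a list of (replica_id, digest) pairs, where `digest`
    summarizes a replica's claimed execution history and result; matching
    digests mean those replicas adopted the same order for the request.
    """
    n = 3 * f + 1
    tally = Counter(digest for _, digest in responses)
    if not tally:
        return "wait"
    digest, count = tally.most_common(1)[0]

    if count == n:
        # Fast path: all 3f + 1 replicas agree, so the request is complete
        # and its position in the total order can never be rolled back.
        return "complete (fast path)"
    if count >= 2 * f + 1:
        # Slow path: gather the 2f + 1 matching responses into a commit
        # certificate, relay it to the replicas, and complete once 2f + 1
        # of them acknowledge it.
        return "assemble commit certificate"
    # Too few matching responses: retransmit and, eventually, suspect the primary.
    return "retransmit"

# With f = 1 (four replicas), unanimous matching responses take the fast path.
print(completion_decision([(i, "h42") for i in range(4)], f=1))
```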


Symposium on Operating Systems Principles | 2013

Consistency-based service level agreements for cloud storage

Douglas B. Terry; Vijayan Prabhakaran; Ramakrishna Kotla; Mahesh Balakrishnan; Marcos Kawazoe Aguilera; Hussam Abu-Libdeh

Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.
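
A small sketch of how such an SLA might drive server selection, in the spirit of the system described above. The SLA layout, server records, and the satisfaction rule are illustrative assumptions, not the system's actual API.

```python
# An SLA is an ordered list of subSLAs: (consistency, latency bound in ms, utility).
sla = [
    ("strong",         150, 1.0),
    ("read_my_writes", 150, 0.7),
    ("eventual",       300, 0.3),
]

# Hypothetical view of the available replicas: measured round-trip time and how
# far each replica's state lags the primary (0 = fully up to date).
servers = [
    {"name": "us-primary", "rtt_ms": 180, "staleness": 0},
    {"name": "eu-replica", "rtt_ms": 40,  "staleness": 2},
]

def can_satisfy(server, consistency):
    # Simplified rule: only an up-to-date replica can serve strong reads; any
    # replica can serve eventual reads. Read-my-writes would also need the
    # client's own write timestamps, which are omitted here.
    if consistency == "strong":
        return server["staleness"] == 0
    return True

def choose_server(sla, servers):
    """Scan subSLAs in priority order and pick the closest server that can
    meet the first satisfiable one; fall back to the last subSLA otherwise."""
    for consistency, latency_bound, utility in sla:
        candidates = [s for s in servers
                      if can_satisfy(s, consistency) and s["rtt_ms"] <= latency_bound]
        if candidates:
            return min(candidates, key=lambda s: s["rtt_ms"]), (consistency, utility)
    return min(servers, key=lambda s: s["rtt_ms"]), (sla[-1][0], sla[-1][2])

# Here the strong subSLA is unreachable within 150 ms, so the nearby replica is
# chosen to serve the read-my-writes subSLA instead.
print(choose_server(sla, servers))
```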


Dependable Systems and Networks | 2004

High throughput Byzantine fault tolerance

Ramakrishna Kotla; Michael Dahlin

This paper argues for a simple change to Byzantine fault tolerant (BFT) state machine replication libraries. Traditional BFT state machine replication techniques provide high availability and security but fail to provide high throughput. This limitation stems from the fundamental assumption of generalized state machine replication techniques that all replicas execute requests sequentially in the same total order to ensure consistency across replicas. We propose a high throughput Byzantine fault tolerant architecture that uses application-specific information to identify and concurrently execute independent requests. Our architecture thus provides a general way to exploit application parallelism in order to provide high throughput without compromising correctness. Although this approach is extremely simple, it yields dramatic practical benefits. When sufficient application concurrency and hardware resources exist, CBASE, our system prototype, provides orders of magnitude improvements in throughput over BASE, a traditional BFT architecture. CBASE-FS, a Byzantine fault tolerant file system that uses CBASE, achieves twice the throughput of BASE-FS for the IOZone micro-benchmarks even in a configuration with modest available hardware parallelism.
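
A minimal sketch of the parallelizer idea described above, assuming the application supplies read and write sets for each request. Requests arrive in the agreed total order; non-conflicting neighbors are batched for concurrent execution, and conflicting ones are kept in order by starting a new batch.

```python
def conflicts(a, b):
    """Two requests conflict if either one writes an object the other touches."""
    return bool(a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & (a["reads"] | a["writes"]))

def schedule(ordered_requests):
    """Greedily partition the totally ordered request stream into batches whose
    members can run concurrently; batches themselves execute sequentially, so
    the order between conflicting requests is preserved."""
    batches = []
    for req in ordered_requests:
        if batches and not any(conflicts(req, other) for other in batches[-1]):
            batches[-1].append(req)
        else:
            batches.append([req])
    return batches

reqs = [
    {"id": 1, "reads": {"a"}, "writes": {"a"}},
    {"id": 2, "reads": {"b"}, "writes": {"b"}},   # independent of 1: same batch
    {"id": 3, "reads": {"a"}, "writes": set()},   # reads what 1 wrote: new batch
]
print([[r["id"] for r in batch] for batch in schedule(reqs)])  # [[1, 2], [3]]
```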


ACM Transactions on Computer Systems | 2009

Zyzzyva: Speculative Byzantine fault tolerance

Ramakrishna Kotla; Lorenzo Alvisi; Michael Dahlin; Allen Clement; Edmund L. Wong

A longstanding vision in distributed systems is to build reliable systems from unreliable components. An enticing formulation of this vision is Byzantine Fault-Tolerant (BFT) state machine replication, in which a group of servers collectively act as a correct server even if some of the servers misbehave or malfunction in arbitrary (“Byzantine”) ways. Despite this promise, practitioners hesitate to deploy BFT systems, at least partly because of the perception that BFT must impose high overheads. In this article, we present Zyzzyva, a protocol that uses speculation to reduce the cost of BFT replication. In Zyzzyva, replicas reply to a client's request without first running an expensive three-phase commit protocol to agree on the order to process requests. Instead, they optimistically adopt the order proposed by a primary server, process the request, and reply immediately to the client. If the primary is faulty, replicas can become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minima and to achieve throughputs of tens of thousands of requests per second, making BFT replication practical for a broad range of demanding services.
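
Complementing the client-side check sketched earlier, here is a minimal sketch of the replica side of speculation under an assumed message format: a replica adopts the primary's proposed sequence number, extends a running history digest, executes, and replies immediately, which is what lets clients spot divergence when the primary equivocates.

```python
import hashlib

class SpeculativeReplica:
    """Illustrative replica that executes in the primary's proposed order
    without first agreeing with the other replicas."""

    def __init__(self, app_execute):
        self.next_seq = 1
        self.history = b""           # running digest of the adopted order
        self.app_execute = app_execute

    def on_order_request(self, seq, request):
        if seq != self.next_seq:
            return None              # gap or duplicate: fill holes out of band
        # Optimistically adopt the primary's order: extend the history digest,
        # execute, and reply immediately -- no inter-replica communication.
        self.history = hashlib.sha256(self.history + request).digest()
        result = self.app_execute(request)
        self.next_seq += 1
        # Binding the result to the history digest is what lets a client
        # comparing responses detect that replicas have diverged.
        return {"seq": seq, "history": self.history.hex(), "result": result}

r1 = SpeculativeReplica(lambda req: req.upper())
r2 = SpeculativeReplica(lambda req: req.upper())
# Same order at both replicas: matching histories.
print(r1.on_order_request(1, b"put x=1")["history"] ==
      r2.on_order_request(1, b"put x=1")["history"])   # True
# An equivocating primary sends different requests at the same sequence number:
# the histories no longer match, so clients can see the inconsistency.
print(r1.on_order_request(2, b"put y=2")["history"] ==
      r2.on_order_request(2, b"put y=3")["history"])   # False
```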


European Conference on Computer Systems | 2009

Effective and efficient compromise recovery for weakly consistent replication

Prince Mahajan; Ramakrishna Kotla; Catherine C. Marshall; Venugopalan Ramasubramanian; Thomas L. Rodeheffer; Douglas B. Terry; Ted Wobber

Weakly consistent replication of data has become increasingly important both for loosely-coupled collections of personal devices and for large-scale infrastructure services. Unfortunately, automatic replication mechanisms are agnostic about the quality of the data they replicate. Inappropriate updates, whether malicious or simply the result of misuse, propagate automatically and quickly. The consequences may not be noticed until days later, when the corrupted data has been fully replicated, thereby deleting or overwriting all traces of the valid data. In this sort of situation, it can be hard or impossible to restore an entire distributed system to a clean state without losing data and disrupting users. Polygraph is a software layer that extends the functionality of weakly consistent replication systems to support compromise recovery. Its goal is to undo the direct and indirect effects of updates due to a source known after the fact to have been compromised. In restoring a clean replicated state, Polygraph expunges all data due to a compromise or derived from such data, retains as much uncompromised data as possible, and revives valid versions of subsequently compromised data. Our evaluation demonstrates that Polygraph is both effective, retaining uncompromised data, and efficient, re-replicating data only when necessary.
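
A minimal sketch of the recovery policy described above, under an assumed log format: every update carries its source and the updates it was derived from, so taint can be propagated transitively, tainted versions expunged, and the latest clean version of each affected item revived. The record layout and field names are illustrative, not Polygraph's actual data model.

```python
def recover(updates, compromised_source):
    """`updates` is a list, in causal/derivation order, of dicts of the form
    {"id": ..., "item": ..., "source": ..., "derived_from": set of update ids}."""
    tainted = set()
    for u in updates:
        # Direct effect: issued by the compromised source.
        # Indirect effect: derived (transitively) from a tainted update.
        if u["source"] == compromised_source or (u["derived_from"] & tainted):
            tainted.add(u["id"])

    # The latest surviving clean version of each item is what gets revived.
    revived = {}
    for u in updates:
        if u["id"] not in tainted:
            revived[u["item"]] = u["id"]
    return tainted, revived

updates = [
    {"id": "u1", "item": "doc", "source": "alice",   "derived_from": set()},
    {"id": "u2", "item": "doc", "source": "mallory", "derived_from": {"u1"}},
    {"id": "u3", "item": "doc", "source": "bob",     "derived_from": {"u2"}},
]
# u2 and u3 are expunged (u3 only because it was built on compromised data);
# "doc" is revived to its last clean version, u1.
print(recover(updates, "mallory"))
```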


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2003

A high-performance architecture and BDD-based synthesis methodology for packet classification

Amit Prakash; Ramakrishna Kotla; Tanmoy Mandal; Adnan Aziz

Packet classification is a computationally intensive task that routers need to perform in order to implement basic functions such as next-hop lookup, as well as advanced features such as quality of service and security. Formally, a classifier examines each incoming packet, and determines which rules to apply to it. Semantically, the classifier is characterized by a function mapping the packet header to an integer encoding the action to be taken for that packet. The function itself is syntactically presented as a chain of if-then-else statements. Since the header consists of a fixed number of bits, it is natural to use logic synthesis to implement fast small classifiers in hardware. When doing this, there are two key issues that must be kept in mind: 1) these functions change over time, so the target architecture needs to be reconfigurable and 2) classification functions have a structure which should be exploited. We show that Internet Protocol forwarding, which is a special case of classification, can be performed by provably small circuits at very high speed by mapping the binary decision diagram (BDD) representation of the classification function to a cascaded array of lookup tables. This approach does not immediately carry over to general packet classification; the BDD for the classification function grows very large. We develop a solution based on partitioning to overcome this problem. We prove NP-completeness of optimal partitioning. We describe a heuristic for partitioning. The latency introduced by pipelining can be reduced by partially collapsing the BDD. We present an efficient algorithm based on dynamic programming to obtain an optimum grouping of variables that minimizes the total amount of memory required for a given number of levels.
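
The dynamic program for grouping variables can be sketched as follows; the cost model used here (table entries proportional to the number of distinct subfunctions at a cut times two to the group width) is an illustrative stand-in for the BDD-derived costs in the paper, and the node counts in the demo are made up.

```python
import functools

def optimal_grouping(nodes_at_cut, W, L):
    """Split W decision variables (header bits) into at most L contiguous
    groups, one lookup table per group, minimizing total table memory.
    nodes_at_cut[i] = number of distinct subfunctions remaining after the
    first i bits have been decided (nodes_at_cut[0] == 1, the root)."""

    @functools.lru_cache(maxsize=None)
    def best(start, levels_left):
        if start == W:
            return 0, ()
        if levels_left == 0:
            return float("inf"), ()          # cannot cover the remaining bits
        choices = []
        for end in range(start + 1, W + 1):
            table_cost = nodes_at_cut[start] * (2 ** (end - start))
            rest_cost, rest_groups = best(end, levels_left - 1)
            choices.append((table_cost + rest_cost, ((start, end),) + rest_groups))
        return min(choices)                  # (total memory, chosen bit groups)

    return best(0, L)

# 8 header bits, at most 3 pipeline stages; node counts are invented for the demo.
nodes = (1, 2, 3, 4, 4, 3, 2, 2, 1)
print(optimal_grouping(nodes, W=8, L=3))
```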


USENIX Annual Technical Conference | 2007

SafeStore: a durable and practical storage system

Ramakrishna Kotla; Lorenzo Alvisi; Michael Dahlin


International Conference on Distributed Computing and Networking | 2015

Yesquel: Scalable SQL storage for Web applications

Marcos Kawazoe Aguilera; Joshua B. Leners; Ramakrishna Kotla; Michael Walfish


Communications of the ACM | 2008

Zyzzyva: speculative Byzantine fault tolerance

Ramakrishna Kotla; Allen Clement; Edmund L. Wong; Lorenzo Alvisi; Michael Dahlin


Operating Systems Design and Implementation | 2012

Pasture: secure offline data access using commodity trusted hardware

Ramakrishna Kotla; Tom Rodeheffer; Indrajit Roy; Patrick Stuedi; Benjamin Wester

Collaboration


Dive into Ramakrishna Kotla's collaborations.

Top Co-Authors

Michael Dahlin, University of Texas at Austin
Lorenzo Alvisi, University of Texas at Austin
Allen Clement, University of Texas at Austin
Edmund L. Wong, University of Texas at Austin