
Publication


Featured research published by Srivatsan Ravi.


international conference on principles of distributed systems | 2011

On the cost of concurrency in transactional memory

Srivatsan Ravi

Traditional techniques for synchronization are based on locking, which provides threads with exclusive access to shared data. Coarse-grained locking typically forces threads to access large amounts of data sequentially and thus does not fully exploit hardware concurrency. Program-specific fine-grained locking or non-blocking (i.e., lock-free) synchronization, on the other hand, is a dark art to most programmers and is trusted to the wisdom of a few computing experts. Thus, it is appealing to seek a middle ground between these two extremes: a synchronization mechanism that relieves the programmer of the overhead of reasoning about data conflicts that may arise from concurrent operations, without severely limiting the program's performance. The Transactional Memory (TM) abstraction is proposed as such a mechanism: it intends to combine an easy-to-use programming interface with an efficient utilization of the concurrent-computing abilities provided by multicore architectures. TM allows the programmer to speculatively execute sequences of shared-memory operations as atomic transactions with all-or-nothing semantics: a transaction can either commit, in which case it appears to have executed sequentially, or abort, in which case its update operations do not take effect. Thus, the programmer can design software with only sequential semantics in mind and let TM take care, at run time, of resolving conflicts in concurrent executions. Intuitively, we want TMs to allow for as much concurrency as possible: in the absence of severe data conflicts, transactions should be able to progress in parallel. But what are the inherent costs associated with providing high degrees of concurrency in TMs? This is the central question of the thesis.
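
The commit/abort semantics described above can be made concrete with a small sketch. The following is a minimal, illustrative software-TM skeleton (a global commit-time lock with version-based read validation), not one of the constructions analyzed in the thesis; the names TMVar, Transaction and atomic are invented for the example.

```python
# Minimal software-TM sketch (illustrative only): transactions buffer writes
# locally, validate their reads at commit, and abort/retry on conflict.
import threading

class TMVar:
    """A shared location with a version counter used for validation."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Abort(Exception):
    pass

_commit_lock = threading.Lock()   # single global lock taken only at commit

class Transaction:
    def __init__(self):
        self.reads = {}    # TMVar -> version observed
        self.writes = {}   # TMVar -> new value (deferred update)

    def read(self, var):
        if var in self.writes:           # read-your-own-write
            return self.writes[var]
        self.reads[var] = var.version
        return var.value

    def write(self, var, value):
        self.writes[var] = value         # buffered until commit

    def commit(self):
        with _commit_lock:
            # validate: every location read is still at the observed version
            for var, ver in self.reads.items():
                if var.version != ver:
                    raise Abort()
            # apply deferred updates atomically (all-or-nothing)
            for var, value in self.writes.items():
                var.value = value
                var.version += 1

def atomic(fn, *args):
    """Run fn(tx, *args) speculatively, retrying on abort."""
    while True:
        tx = Transaction()
        try:
            result = fn(tx, *args)
            tx.commit()
            return result
        except Abort:
            continue   # conflict detected: discard the transaction and retry

# usage: transfer between two accounts with all-or-nothing semantics
a, b = TMVar(100), TMVar(0)
def transfer(tx, amount):
    tx.write(a, tx.read(a) - amount)
    tx.write(b, tx.read(b) + amount)
atomic(transfer, 10)
print(a.value, b.value)   # 90 10
```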


international conference on distributed computing systems | 2013

Safety of Deferred Update in Transactional Memory

Hagit Attiya; Sandeep Hans; Srivatsan Ravi

Transactional memory allows the user to declare sequences of instructions as speculative transactions that can either commit or abort. If a transaction commits, it appears to be executed sequentially, so that the committed transactions constitute a correct sequential execution. If a transaction aborts, none of its instructions can affect other transactions. The popular criterion of opacity requires that the views of aborted transactions must also be consistent with the global sequential order constituted by the committed ones. This is believed to be important, since inconsistencies observed by an aborted transaction may cause a fatal irrecoverable error or leave the system wasting work in an infinite loop. Intuitively, an opaque implementation must ensure that no intermediate view a transaction obtains before it commits or aborts can be affected by a transaction that has not yet started committing, the so-called deferred-update semantics. In this paper, we capture this intuition formally. We propose a variant of opacity that explicitly requires the sequential order to respect the deferred-update semantics. Unlike opacity, our property also ensures that a serialization of a history implies serializations of its prefixes. Finally, we show that our property is equivalent to opacity if we assume that no two transactions commit identical values to the same variable, and we present a counter-example for scenarios in which this “unique-write” assumption does not hold.
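
The "fatal error or infinite loop" hazard is easy to reproduce with a hand-scripted interleaving. The toy below assumes an invariant x + y == 0 maintained by committed transactions and shows a doomed reader crashing on a mixed snapshot; it is an illustration of the hazard opacity rules out, not the paper's formalism.

```python
# Sketch (illustrative, not a real TM) of the hazard that opacity and
# deferred-update semantics rule out: a transaction that will eventually
# abort still observes an inconsistent snapshot and crashes.
# Committed transactions preserve the invariant x + y == 0.

x, y = 5, -5                      # consistent state: x + y == 0

# --- interleaving scripted by hand ---
seen_x = x                        # T1 (doomed reader) reads x == 5

x, y = 6, -6                      # T2 commits, updating both variables
                                  # (the new state is again consistent)

seen_y = y                        # T1 reads y == -6: its snapshot is now mixed

# T1's code was written assuming the sequential invariant, so the
# denominator below "cannot" be zero -- yet with the mixed snapshot it is:
try:
    result = 1 // (seen_x + seen_y + 1)   # 1 // (5 - 6 + 1) == 1 // 0
except ZeroDivisionError:
    print("inconsistent view crashed the doomed transaction")
```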


computer and communications security | 2017

Concurrency and Privacy with Payment-Channel Networks

Giulio Malavolta; Pedro Moreno-Sanchez; Aniket Kate; Matteo Maffei; Srivatsan Ravi

Permissionless blockchain protocols such as Bitcoin are inherently limited in transaction throughput and latency. Current efforts to address this key issue focus on off-chain payment channels that can be combined into a Payment-Channel Network (PCN) to enable an unlimited number of payments without requiring access to the blockchain other than to register the initial and final capacity of each channel. While this approach paves the way for low-latency, high-throughput payments, its deployment in practice raises several privacy concerns, as well as technical challenges related to the inherently concurrent nature of payments, that have not been sufficiently studied so far. In this work, we lay the foundations for privacy and concurrency in PCNs, presenting a formal definition in the Universal Composability framework as well as practical and provably secure solutions. In particular, we present Fulgor and Rayo. Fulgor is the first payment protocol for PCNs that provides provable privacy guarantees and is fully compatible with the Bitcoin scripting system. However, Fulgor is a blocking protocol and is therefore prone to deadlocks of concurrent payments, as in currently available PCNs. Rayo, instead, is the first protocol for PCNs that enforces non-blocking progress (i.e., at least one of the concurrent payments terminates). We show through a new impossibility result that non-blocking progress necessarily comes at the cost of weaker privacy. At the core of Fulgor and Rayo is Multi-Hop HTLC, a new smart contract, compatible with the Bitcoin scripting system, that provides conditional payments while reducing running time and communication overhead with respect to previous approaches. Our performance evaluation of Fulgor and Rayo shows that a payment with 10 intermediate users takes as little as 5 seconds, demonstrating their feasibility for deployment in practice.
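
Conditional payments in PCNs are built from hash time-locked contracts. The sketch below models a plain single-hop HTLC (claim with the hash preimage before a timeout, refund afterwards); it is only a toy stand-in for the Multi-Hop HTLC contract introduced in the paper, and all class and field names are illustrative.

```python
# Sketch of a plain hash time-locked contract (HTLC), the conditional-payment
# building block of payment channels: pay if a preimage of the hash is
# revealed before the timeout, refund the sender afterwards.
import hashlib, time

class HTLC:
    def __init__(self, sender, receiver, amount, digest, timeout):
        self.sender, self.receiver = sender, receiver
        self.amount = amount
        self.digest = digest          # H(x), chosen by the payment's recipient
        self.timeout = timeout        # absolute deadline (unix time)
        self.settled = False

    def claim(self, preimage, now=None):
        """Receiver redeems the payment by revealing x with H(x) == digest."""
        now = now or time.time()
        if self.settled or now >= self.timeout:
            return False
        if hashlib.sha256(preimage).hexdigest() != self.digest:
            return False
        self.settled = True
        print(f"{self.receiver} receives {self.amount} from {self.sender}")
        return True

    def refund(self, now=None):
        """Sender reclaims the funds once the timeout has passed."""
        now = now or time.time()
        if self.settled or now < self.timeout:
            return False
        self.settled = True
        print(f"{self.amount} refunded to {self.sender}")
        return True

# usage: Alice pays Bob 10 units, conditioned on Bob revealing the preimage
secret = b"payment-secret"
h = hashlib.sha256(secret).hexdigest()
htlc = HTLC("Alice", "Bob", 10, h, timeout=time.time() + 3600)
assert htlc.claim(b"wrong-secret") is False   # wrong preimage is rejected
assert htlc.claim(secret) is True             # correct preimage settles the payment
```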


parallel computing technologies | 2015

Progressive Transactional Memory in Time and Space

Srivatsan Ravi

Transactional memory (TM) allows concurrent processes to organize sequences of operations on shared data items into atomic transactions. A transaction may commit, in which case it appears to have executed sequentially, or it may abort, in which case no data item is updated. The TM programming paradigm emerged as an alternative to conventional fine-grained locking techniques, offering ease of programming and compositionality. Though typically themselves implemented using locks, TMs hide the inherent issues of lock-based synchronization behind a clean transactional programming interface. In this paper, we explore the inherent time and space complexity of lock-based TMs, with a focus on the most popular class, progressive lock-based TMs. We derive that a progressive TM may force a read-only transaction to perform a number of steps quadratic in the number of data items it reads and to access a linear number of distinct memory locations, closing the question of the inherent cost of read validation in TMs. We then show that the total number of remote memory references (RMRs) incurred by an execution of a progressive TM in which n concurrent processes perform transactions on a single data item might reach Ω(n log n), which appears to be the first RMR complexity lower bound for transactional memory.
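
One intuition behind the quadratic read cost is incremental validation: a simple way for a transaction to keep its view consistent is to revalidate its entire read set on every new read, so k reads cost on the order of k² validation steps. The sketch below illustrates that counting argument only; it is not the paper's lower-bound construction, and the class names are invented for the example.

```python
# Illustrative sketch of incremental read validation in a lock-based TM:
# each new read revalidates every location read so far, so k reads take
# 0 + 1 + ... + (k-1) = O(k^2) validation steps in total.

class Loc:
    def __init__(self, value):
        self.value = value
        self.version = 0

class ReadOnlyTx:
    def __init__(self):
        self.read_set = {}          # Loc -> version observed
        self.validation_steps = 0

    def read(self, loc):
        # revalidate everything read so far (linear in the current read set)
        for seen, ver in self.read_set.items():
            self.validation_steps += 1
            if seen.version != ver:
                raise RuntimeError("abort: inconsistent read set")
        self.read_set[loc] = loc.version
        return loc.value

locs = [Loc(i) for i in range(100)]
tx = ReadOnlyTx()
for loc in locs:
    tx.read(loc)
print(tx.validation_steps)   # 0 + 1 + ... + 99 = 4950, quadratic in the reads
```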


principles of distributed computing | 2012

Brief announcement: From sequential to concurrent: correctness and relative efficiency

Vincent Gramoli; Srivatsan Ravi


international middleware conference | 2016

Programming Scalable Cloud Services with AEON

Bo Sang; Gustavo Petri; Masoud Saeida Ardekani; Srivatsan Ravi; Patrick Eugster

Designing low-latency cloud-based applications that are adaptable to unpredictable workloads and efficiently utilize modern cloud computing platforms is hard. The actor model is a popular paradigm that can be used to develop distributed applications: actors encapsulate state and communicate with each other by sending events. Consistency is guaranteed if each event accesses only a single actor, thus eliminating potential data races and deadlocks. However, it is nontrivial to provide consistency for concurrent events spanning multiple actors. This paper addresses this problem by introducing AEON: a framework that provides the following properties: (i) Programmability: programmers only need to reason about sequential semantics when reasoning about concurrency resulting from multi-actor events; (ii) Scalability: the AEON runtime protocol guarantees serializable and starvation-free execution of multi-actor events, while maximizing parallel execution; (iii) Elasticity: AEON supports fine-grained elasticity, enabling the programmer to transparently migrate individual actors without violating consistency or incurring significant performance overheads. Our empirical results show that it is possible to combine the best of all three worlds without compromising application performance.
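
The multi-actor consistency problem can be pictured with a toy example. Below, a transfer event spanning two actors is made atomic by acquiring per-actor locks in a fixed global order; this is a simple illustrative stand-in, not AEON's runtime protocol, and the Actor/transfer names are invented for the example.

```python
# Toy actor sketch (illustrative; not AEON's protocol).  Each actor owns its
# state; single-actor events are trivially race-free, while a multi-actor
# event is made atomic here by locking the involved actors in a fixed
# global order, which avoids deadlock between concurrent transfers.
import threading

class Actor:
    _next_id = 0
    def __init__(self, balance=0):
        self.id = Actor._next_id; Actor._next_id += 1
        self.lock = threading.Lock()
        self.balance = balance

    def deposit(self, amount):          # single-actor event
        with self.lock:
            self.balance += amount

def transfer(src, dst, amount):         # multi-actor event
    first, second = sorted((src, dst), key=lambda actor: actor.id)
    with first.lock, second.lock:       # fixed acquisition order avoids deadlock
        src.balance -= amount
        dst.balance += amount

a, b = Actor(100), Actor(0)
threads = [threading.Thread(target=transfer, args=(a, b, 1)) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
print(a.balance, b.balance)   # 50 50: the two-actor events executed atomically
```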


international symposium on stabilization safety and security of distributed systems | 2017

Generalized Paxos Made Byzantine (and Less Complex)

Miguel Pires; Srivatsan Ravi; Rodrigo Rodrigues


european conference on parallel processing | 2017

A Concurrency-Optimal Binary Search Tree

Vitalii Aksenov; Vincent Gramoli; Anna Malova; Srivatsan Ravi


Journal of Parallel and Distributed Computing | 2017

Grasping the gap between blocking and non-blocking transactional memories

Srivatsan Ravi

How do you turn a sequential implementation of a data structure (a hash table, a list, a tree, etc.) into a correct and, preferably, efficient concurrent one? What if we could provide an environment in which a user can locally run the sequential code so that the resulting execution is globally correct? One way to do this is to use locks to make sure that critical parts of a sequential program can only be accessed in an exclusive mode. An implementation that grabs a lock on the whole data structure before executing a sequential operation imposes a serial order but ignores all the benefits provided by the multiprocessing power of modern machines. Efficient fine-grained locking requires considerable expertise, since it must be based on a good understanding of which parts of the sequential code to protect at what time. A more automated approach is to use transactional memory (TM) and treat each (sequential) operation as a transaction. If the transaction commits, the corresponding operation returns the response computed from the values read in the course of the transaction. Otherwise, if the transaction aborts, the operation does not take effect. This approach promises to make use of hardware concurrency at low intellectual cost. But does this simplicity bring a considerable efficiency degradation with respect to fine-grained locking? To tackle this question, we first define the meaning of a correct transformation of a sequential program into a concurrent one. More precisely, we model an execution of a
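
The coarse-grained baseline mentioned above is straightforward to sketch: wrap every operation of the sequential structure in one global lock. The class below is an illustrative example of that transformation (correct but fully serialized), assuming a Python dict as the sequential data structure.

```python
# Sketch of the coarse-grained-locking transformation described above:
# every operation of the sequential structure runs under one global lock.
# Correctness is immediate, but no two operations ever run in parallel.
import threading

class CoarseLockedDict:
    """A sequential dict made thread-safe with a single global lock."""
    def __init__(self):
        self._data = {}                  # the unmodified sequential structure
        self._lock = threading.Lock()    # one lock guards every operation

    def insert(self, key, value):
        with self._lock:
            self._data[key] = value

    def lookup(self, key):
        with self._lock:
            return self._data.get(key)

    def delete(self, key):
        with self._lock:
            return self._data.pop(key, None)

# usage: safe under concurrent access, but all operations execute serially
d = CoarseLockedDict()
threads = [threading.Thread(target=d.insert, args=(i, i * i)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(d.lookup(3))   # 9
```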


international symposium on distributed computing | 2015

Inherent Limitations of Hybrid Transactional Memory

Dan Alistarh; Justin Kopinsky; Srivatsan Ravi; Nir Shavit


Collaboration


Dive into Srivatsan Ravi's collaborations.

Top Co-Authors

Justin Kopinsky

Massachusetts Institute of Technology

Nir Shavit

Massachusetts Institute of Technology

Dan Alistarh

Institute of Science and Technology Austria
