Publication


Featured research published by Mandana Vaziri.


international conference on software engineering | 2008

Dynamic detection of atomic-set-serializability violations

Christian Hammer; Julian Dolby; Mandana Vaziri; Frank Tip

Previously, we presented atomic sets, memory locations that share some consistency property, and units of work, code fragments that preserve the consistency of the atomic sets on which they are declared. We also proposed atomic-set serializability as a correctness criterion for concurrent programs, stating that units of work must be serializable for each atomic set. We showed that a set of problematic data access patterns characterizes executions that are not atomic-set serializable. Our criterion subsumes data races (single-location atomic sets) and serializability (all locations in one set). In this paper, we present a dynamic analysis for detecting violations of atomic-set serializability. The analysis can be implemented efficiently and does not depend on any specific synchronization mechanism. We implemented the analysis and evaluated it on a suite of real programs and benchmarks. We found a number of known errors as well as several problems not previously reported.
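
A minimal Java sketch (not taken from the paper; all names are hypothetical) of the kind of violation this criterion captures: every individual field access below is lock-protected, so there is no classical data race, yet an interleaving can still break the invariant over the two fields that conceptually form one atomic set.

    // Hypothetical example: lo and hi conceptually form one atomic set with the
    // invariant lo <= hi. Every access is synchronized, so there is no data race,
    // but shift() is not atomic with respect to the set: another thread calling
    // setLo or setHi between the two updates can observe or create lo > hi.
    class Bounds {
        private int lo = 0, hi = 10;

        synchronized int getLo()       { return lo; }
        synchronized int getHi()       { return hi; }
        synchronized void setLo(int v) { lo = v; }
        synchronized void setHi(int v) { hi = v; }

        void shift(int delta) {        // intended unit of work for {lo, hi}
            setLo(getLo() + delta);
            setHi(getHi() + delta);
        }
    }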


programming language design and implementation | 2010

MemSAT: checking axiomatic specifications of memory models

Emina Torlak; Mandana Vaziri; Julian Dolby

Memory models are hard to reason about due to their complexity, which stems from the need to strike a balance between ease-of-programming and allowing compiler and hardware optimizations. In this paper, we present an automated tool, MemSAT, that helps in debugging and reasoning about memory models. Given an axiomatic specification of a memory model and a multi-threaded test program containing assertions, MemSAT outputs a trace of the program in which both the assertions and the memory model axioms are satisfied, if one can be found. The tool is fully automatic and is based on a SAT solver. If it cannot find a trace, it outputs a minimal subset of the memory model and program constraints that are unsatisfiable. We used MemSAT to check several existing memory models against their published test cases, including the current Java Memory Model by Manson et al. and a revised version of it by Sevcik and Aspinall. We found subtle discrepancies between what was expected and the actual results of test programs.
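
As an illustration of the kind of input such a tool works on, here is a hedged sketch of a classic store-buffering litmus test written as plain Java (the class name is hypothetical and the test is not from the paper). The question posed to a MemSAT-style checker is whether the memory model's axioms admit an execution in which the final assertion fails; under the Java Memory Model they do, since the fields are not volatile.

    // Hypothetical litmus test: two threads each write one shared field and then
    // read the other. Because x and y are not volatile, the Java Memory Model
    // allows the outcome r1 == 0 && r2 == 0, so the assertion can fail.
    class StoreBufferingLitmus {
        static int x = 0, y = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            assert !(r1 == 0 && r2 == 0) : "both reads observed the initial values";
        }
    }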


foundations of software engineering | 2007

Finding bugs efficiently with a SAT solver

Julian Dolby; Mandana Vaziri; Frank Tip

We present an approach for checking code against rich specifications, based on existing work that consists of encoding the program in a relational logic and using a constraint solver to find specification violations. We improve the efficiency of this approach with a new encoding of the program that effectively slices it at the logical level with respect to the specification. We also present new encodings for integer values and arrays, enabling the verification of realistic fragments of code that manipulate both. Our technique can handle integers of much larger ranges than previously possible, and permits large sparse arrays to be handled efficiently. We present a soundness proof for our slicing algorithm and a general condition under which relational formulae may be sliced. We implemented our technique and evaluated it by checking data structure invariants of several classes taken from the Java Collections Framework. We also checked for violations of Java's equality contract in a variety of open-source programs, and found several bugs.
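
A small, hypothetical example (not from the paper) of the sort of equality-contract violation such a check can expose: equals() is overridden but hashCode() is not, so two equal objects may report different hash codes.

    // Hypothetical buggy class: equals() is overridden without hashCode(),
    // violating the contract that equal objects must have equal hash codes,
    // which silently breaks hash-based collections.
    class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }
        // BUG: hashCode() inherited from Object is identity-based.
    }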


european conference on object oriented programming | 2007

Declarative object identity using relation types

Mandana Vaziri; Frank Tip; Stephen J. Fink; Julian Dolby

Object-oriented languages define the identity of an object to be an address-based object identifier. The programmer may customize the notion of object identity by overriding the equals() and hashCode() methods following a specified contract. This customization often introduces latent errors, since the contract is unenforced and at times impossible to satisfy, and its implementation requires tedious and error-prone boilerplate code. Relation types are a programming model in which object identity is defined declaratively, obviating the need for equals() and hashCode() methods. This entails a stricter contract: identity never changes during an execution. We formalize the model as an adaptation of Featherweight Java, and implement it by extending Java with relation types. Experiments on a set of Java programs show that the majority of classes that override equals() can be refactored into relation types, and that most of the remainder are buggy or fragile.
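
To make the fragility concrete, here is a hedged plain-Java illustration (not the paper's relation-type syntax; all names are hypothetical) of an identity that silently changes during execution because equals() and hashCode() depend on a mutable field; relation types rule this out by making identity declarative and fixed for the execution.

    // Hypothetical example: identity depends on the mutable field 'name', so the
    // object effectively changes identity after insertion into a HashSet and can
    // no longer be found there.
    import java.util.HashSet;
    import java.util.Set;

    class Label {
        String name;
        Label(String name) { this.name = name; }

        @Override public boolean equals(Object o) {
            return o instanceof Label && ((Label) o).name.equals(name);
        }
        @Override public int hashCode() { return name.hashCode(); }

        public static void main(String[] args) {
            Set<Label> set = new HashSet<>();
            Label l = new Label("a");
            set.add(l);
            l.name = "b";                        // identity silently changes
            System.out.println(set.contains(l)); // prints false: the element is lost
        }
    }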


european conference on object oriented programming | 2010

A type system for data-centric synchronization

Mandana Vaziri; Frank Tip; Julian Dolby; Christian Hammer; Jan Vitek

Data-centric synchronization groups fields of objects into atomic sets to indicate they must be updated atomically. Each atomic set has associated units of work, code fragments that preserve the consistency of that atomic set. We present a type system for data-centric synchronization that enables separate compilation and supports atomic sets that span multiple objects, thus allowing recursive data structures to be updated atomically. The type system supports full encapsulation for more efficient code generation. We evaluate our proposal using AJ, which extends the Java programming language with data-centric synchronization. We report on the implementation of a compiler and on refactoring classes from standard libraries and a multi-threaded benchmark to use atomic sets. Our results suggest that data-centric synchronization enjoys low annotation overhead while preventing high-level data races.
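
A hedged sketch of the data-centric style described here; the AJ-specific declarations are shown only as comments because the exact surface syntax is not reproduced, and the class is hypothetical. The point is that the programmer declares which fields belong together, and the compiler inserts the synchronization.

    // Hypothetical class in the data-centric style. The AJ constructs appear as
    // comments; in AJ, methods act as units of work for the receiver's atomic
    // sets, and the compiler, not the programmer, adds the locking.
    class Counter {
        /* atomicset a; */               // declare an atomic set 'a' (AJ construct)
        /* atomic(a) */ int value;       // 'value' belongs to atomic set 'a'
        /* atomic(a) */ int numUpdates;  // must stay consistent with 'value'

        void increment() {               // unit of work for 'a'
            value++;
            numUpdates++;
        }
    }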


verification model checking and abstract interpretation | 2008

Finding Concurrency-Related Bugs Using Random Isolation

Nicholas Kidd; Thomas W. Reps; Julian Dolby; Mandana Vaziri

This paper describes the methods used in Empire, a tool to detect concurrency-related bugs, namely atomic-set serializability violations in Java programs. The correctness criterion is based on atomic sets of memory locations, which share a consistency property, and units of work, which preserve consistency when executed sequentially. Empire checks that, for each atomic set, its units of work are serializable. This notion subsumes data races (single-location atomic sets) and serializability (all locations in one atomic set). To obtain a sound, finite model of locking behavior for use in Empire, we devised a new abstraction principle, random isolation, which allows strong updates to be performed on the abstract counterpart of each randomly-isolated object. This permits Empire to track the status of a Java lock, even for programs that use an unbounded number of locks. The advantage of random isolation is that properties proved about a randomly-isolated object can be generalized to all objects allocated at the same site. We ran Empire on eight programs from the ConTest benchmark suite, for which Empire detected numerous violations.
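
For orientation, a hypothetical plain-Java fragment (not taken from the benchmarks) of the situation random isolation targets: one allocation site creates a lock per object, so a conventional abstraction folds an unbounded family of locks into a single summary object and cannot conclude that any particular lock is definitely held.

    // Hypothetical example: each Account gets its own lock from a single
    // allocation site. An analysis that summarizes all of these locks into one
    // abstract object can only make weak claims about which lock is held;
    // random isolation singles out one concrete object per site so strong
    // updates (definitely held / definitely not held) remain possible.
    import java.util.concurrent.locks.ReentrantLock;

    class Account {
        private final ReentrantLock lock = new ReentrantLock(); // one site, unboundedly many locks
        private int balance;
        private int numDeposits; // balance and numDeposits form one conceptual atomic set

        void deposit(int amount) {
            lock.lock();
            try {
                balance += amount;
                numDeposits++;   // must stay consistent with balance
            } finally {
                lock.unlock();
            }
        }
    }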


european conference on object oriented programming | 2014

Stream Processing with a Spreadsheet

Mandana Vaziri; Olivier Tardieu; Rodric M. Rabbah; Philippe Suter; Martin Hirzel

Continuous data streams are ubiquitous and represent such a high volume of data that they cannot be stored to disk, yet it is often crucial for them to be analyzed in real time. Stream processing is a programming paradigm that processes such streams as the data arrives, enabling continuous analytics. Our objective is to make it easier for analysts with little programming experience to develop continuous analytics applications directly. We propose enhancing a spreadsheet, a pervasive tool, to obtain a programming platform for stream processing. We present the design and implementation of an enhanced spreadsheet that enables visualizing live streams, live programming to compute new streams, and exporting computations to be run on a server, where they can be shared with other users and persisted beyond the life of the spreadsheet. We formalize our core language and present case studies that cover a range of stream processing applications.
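
As a rough analogue in code (the paper's platform is a spreadsheet, not Java, and the names here are hypothetical), the kind of continuous computation a live cell might express is a windowed aggregate that is updated for every arriving tuple rather than computed over stored data.

    // Hypothetical sketch: a moving average over the last N values of an
    // unbounded stream, updated incrementally as each value arrives.
    import java.util.ArrayDeque;
    import java.util.Deque;

    class MovingAverage {
        private final int windowSize;
        private final Deque<Double> window = new ArrayDeque<>();
        private double sum = 0.0;

        MovingAverage(int windowSize) { this.windowSize = windowSize; }

        double onNext(double value) {    // called once per arriving tuple
            window.addLast(value);
            sum += value;
            if (window.size() > windowSize) sum -= window.removeFirst();
            return sum / window.size();
        }
    }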


international conference on software engineering | 2013

Detecting deadlock in programs with data-centric synchronization

Daniel Marino; Christian Hammer; Julian Dolby; Mandana Vaziri; Frank Tip; Jan Vitek

Previously, we developed a data-centric approach to concurrency control in which programmers specify synchronization constraints declaratively, by grouping shared locations into atomic sets. We implemented our ideas in a Java extension called AJ, using Java locks to implement synchronization. We proved that atomicity violations are prevented by construction, and demonstrated that realistic Java programs can be refactored into AJ without significant loss of performance. This paper presents an algorithm for detecting possible deadlock in AJ programs by ordering the locks associated with atomic sets. In our approach, a type-based static analysis is extended to handle recursive data structures by considering programmer-supplied, compiler-verified lock ordering annotations. In an evaluation of the algorithm, all 10 AJ programs under consideration were shown to be deadlock-free. One program needed 4 ordering annotations and 2 others required minor refactorings. For the remaining 7 programs, no programmer intervention of any kind was required.
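
A hedged plain-Java illustration of the deadlock pattern such an ordering rules out (this is not AJ syntax and not the paper's annotation mechanism; names are hypothetical): two locks acquired in opposite orders by two threads can deadlock, while committing to one global acquisition order removes the cycle.

    // Hypothetical fix: always acquire the two monitors in a canonical order
    // (here, by identity hash code, ignoring the rare collision case for
    // brevity), so no two threads can each hold one lock and wait for the other.
    class OrderedLocking {
        static void withBoth(Object a, Object b, Runnable action) {
            Object first  = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
            Object second = (first == a) ? b : a;
            synchronized (first) {
                synchronized (second) {
                    action.run();
                }
            }
        }
    }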


ACM Transactions on Programming Languages and Systems | 2012

A data-centric approach to synchronization

Julian Dolby; Christian Hammer; Daniel Marino; Frank Tip; Mandana Vaziri; Jan Vitek

Concurrency-related errors, such as data races, are frustratingly difficult to track down and eliminate in large object-oriented programs. Traditional approaches to preventing data races rely on protecting instruction sequences with synchronization operations. Such control-centric approaches are inherently brittle, as the burden is on the programmer to ensure that all concurrently accessed memory locations are consistently protected. Data-centric synchronization is an alternative approach that offloads some of this work onto the language implementation. Data-centric synchronization groups fields of objects into atomic sets to indicate that these fields must always be updated atomically. Each atomic set has associated units of work, that is, code fragments that preserve the consistency of that atomic set. Synchronization operations are added automatically by the compiler. We present an extension to the Java programming language that integrates annotations for data-centric concurrency control. The resulting language, called AJ, relies on a type system that enables separate compilation, supports atomic sets that span multiple objects, and provides full encapsulation for more efficient code generation. We evaluate our proposal by refactoring classes from standard libraries, as well as a number of multithreaded benchmarks, to use atomic sets. Our results suggest that data-centric synchronization is easy to use and enjoys low annotation overhead, while successfully preventing data races. Moreover, experiments on the SPECjbb benchmark suggest that acceptable performance can be achieved with a modest amount of tuning.
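
A hedged, plain-Java sketch of roughly what such a compiler arranges for a class whose two fields form one atomic set: a single lock guards the set and every unit of work acquires it. This illustrates the idea only; it is not AJ's actual generated code, and the names are hypothetical.

    // Hypothetical illustration: one lock per atomic set {balance, log},
    // acquired around each unit of work; in AJ this locking is synthesized by
    // the compiler rather than written by hand.
    import java.util.concurrent.locks.ReentrantLock;

    class AuditedBalance {
        private final ReentrantLock setLock = new ReentrantLock(); // guards {balance, log}
        private int balance;
        private final StringBuilder log = new StringBuilder();

        void deposit(int amount) {      // unit of work for the atomic set
            setLock.lock();
            try {
                balance += amount;
                log.append("deposit ").append(amount).append('\n');
            } finally {
                setLock.unlock();
            }
        }
    }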


International Journal on Software Tools for Technology Transfer | 2011

Finding concurrency-related bugs using random isolation

Nicholas Kidd; Thomas W. Reps; Julian Dolby; Mandana Vaziri

This paper concerns automatically verifying safety properties of concurrent programs. In our work, the safety property of interest is to check for multi-location data races in concurrent Java programs, where a multi-location data race arises when a program is supposed to maintain an invariant over multiple data locations, but accesses/updates are not protected correctly by locks. The main technical challenge that we address is how to generate a program model that retains (at least some of) the synchronization operations of the concrete program, when the concrete program uses dynamic memory allocation. Static analysis of programs typically begins with an abstraction step that generates an abstract program that operates on a finite set of abstract objects. In the presence of dynamic memory allocation, the finite number of abstract objects of the abstract program must represent the unbounded number of concrete objects that the concrete program may allocate, and thus by the pigeon-hole principle some of the abstract objects must be summary objects, which represent more than one concrete object. Because abstract summary objects represent multiple concrete objects, the program analyzer typically must perform weak updates on the abstract state of a summary object, where a weak update accumulates information. Because weak updates accumulate rather than overwrite, the analyzer is only able to determine weak judgements on the abstract state, i.e., that some property possibly holds, and not that it definitely holds. The problem with weak judgements is that determining whether an interleaved execution respects program synchronization requires the ability to determine strong judgements, i.e., that some lock is definitely held, and thus the analyzer needs to be able to perform strong updates (overwrites of the abstract state) to enable strong judgements. We present the random-isolation abstraction as a new principle for enabling strong updates of special abstract objects. The idea is to associate with each program allocation site two abstract objects: an ordinary summary object, and a non-summary object that represents a single, randomly chosen concrete object allocated at that site. Because the non-summary object represents exactly one concrete object, the analyzer can perform strong updates on it; because the choice is random, properties proved about it can be generalized to all objects allocated at the same site.
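
To make the weak-versus-strong distinction concrete, here is a small hypothetical Java fragment (not from the paper): both locks below come from the same allocation site, so an abstraction that merges them into one summary object can only record that the summary lock is possibly held, which is too weak to validate the synchronization; isolating one randomly chosen object from the site restores the ability to perform strong updates on it.

    // Hypothetical fragment: 'a' and 'b' are allocated at the same site inside
    // makeLock(), so a summary-based abstraction cannot distinguish them. At the
    // marked point only 'a' is held concretely, but the summary object is merely
    // "possibly held".
    import java.util.concurrent.locks.ReentrantLock;

    class SameSiteLocks {
        static ReentrantLock makeLock() { return new ReentrantLock(); } // single allocation site

        public static void main(String[] args) {
            ReentrantLock a = makeLock();
            ReentrantLock b = makeLock();
            a.lock();
            try {
                // Only 'a' is definitely held here; 'b' is definitely not.
            } finally {
                a.unlock();
            }
            b.lock();
            b.unlock();
        }
    }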
