Sasa Misailovic
Massachusetts Institute of Technology
Publications
Featured research published by Sasa Misailovic.
Foundations of Software Engineering | 2011
Stelios Sidiroglou-Douskos; Sasa Misailovic; Henry Hoffmann; Martin C. Rinard
Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
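To make the transformation concrete, here is a minimal, hypothetical Python sketch of loop perforation (the paper's implementation is a compiler transformation; the Monte Carlo example and function names here are invented for illustration). The perforated loop executes only one in every `rate` iterations and extrapolates from the executed subset.

```python
# Hypothetical sketch of loop perforation: skip all but every `rate`-th
# iteration and extrapolate the result from the executed subset.
import random

def monte_carlo_pi(samples):
    """Fully accurate loop: estimate pi from `samples` random points."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        hits += x * x + y * y <= 1.0
    return 4.0 * hits / samples

def monte_carlo_pi_perforated(samples, rate):
    """Perforated loop: execute only one in every `rate` iterations."""
    hits = executed = 0
    for i in range(samples):
        if i % rate:              # perforated iterations are skipped
            continue
        executed += 1
        x, y = random.random(), random.random()
        hits += x * x + y * y <= 1.0
    return 4.0 * hits / executed  # extrapolate from executed iterations

if __name__ == "__main__":
    random.seed(0)
    exact = monte_carlo_pi(1_000_000)
    fast = monte_carlo_pi_perforated(1_000_000, rate=4)   # ~4x less work
    print(f"full: {exact:.4f}  perforated: {fast:.4f}  "
          f"relative difference: {abs(exact - fast) / exact:.2%}")
```

In the paper's terminology, criticality testing would classify this loop as tunable: perforating it changes the estimate only slightly while cutting its work roughly by the perforation rate.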
International Conference on Software Engineering | 2010
Sasa Misailovic; Stylianos Sidiroglou; Henry Hoffmann; Martin C. Rinard
Many computations exhibit a trade-off between execution time and quality of service. A video encoder, for example, can often encode frames more quickly if it is given the freedom to produce slightly lower quality video. A developer attempting to optimize such computations must navigate a complex trade-off space to find optimizations that appropriately balance quality of service and performance. We present a new quality of service profiler that is designed to help developers identify promising optimization opportunities in such computations. In contrast to standard profilers, which simply identify time-consuming parts of the computation, a quality of service profiler is designed to identify subcomputations that can be replaced with new (and potentially less accurate) subcomputations that deliver significantly increased performance in return for acceptably small quality of service losses. Our quality of service profiler uses loop perforation (which transforms loops to perform fewer iterations than the original loop) to obtain implementations that occupy different points in the performance/quality of service trade-off space. The rationale is that optimizable computations often contain loops that perform extra iterations, and that removing iterations, then observing the resulting effect on the quality of service, is an effective way to identify such optimizable subcomputations. Our experimental results from applying our implemented quality of service profiler to a challenging set of benchmark applications show that it can enable developers to identify promising optimization opportunities and deliver successful optimizations that substantially increase the performance with only small quality of service losses.
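As a rough illustration of the profiling idea, the hypothetical sketch below perforates one candidate loop at a time in a toy two-loop computation and reports the resulting quality-of-service loss (all names and numbers are invented; the actual profiler works on real applications and also measures the performance gain):

```python
# Hypothetical quality-of-service profiling sketch: perforate one candidate
# loop at a time and observe the distortion of the final output.

def run_pipeline(perforate=None, rate=4):
    """Toy computation with two loops; optionally perforate one of them."""
    # Loop A: averaging loop whose extra iterations only refine the mean.
    total = n = 0
    for i in range(10_000):
        if perforate == "A" and i % rate:
            continue
        total += i % 100
        n += 1
    # Loop B: accumulation in which every iteration contributes to the result.
    checksum = 0
    for i in range(10_000):
        if perforate == "B" and i % rate:
            continue
        checksum += i
    return total / n + checksum * 1e-4

baseline = run_pipeline()
for loop in ("A", "B"):
    loss = abs(run_pipeline(perforate=loop) - baseline) / abs(baseline)
    print(f"perforating loop {loop}: {loss:.2%} QoS loss for ~4x fewer iterations")
```

Such a profile would report loop A as a promising optimization opportunity (large work reduction, negligible distortion) and loop B as critical.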
Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2014
Sasa Misailovic; Michael Carbin; Sara Achour; Zichao Qi; Martin C. Rinard
The accuracy of an approximate computation is the distance between the result that the computation produces and the corresponding fully accurate result. The reliability of the computation is the probability that it will produce an acceptably accurate result. Emerging approximate hardware platforms provide approximate operations that, in return for reduced energy consumption and/or increased performance, exhibit reduced reliability and/or accuracy. We present Chisel, a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms. Given a combined reliability and/or accuracy specification, Chisel automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption while satisfying its reliability and accuracy specification. We evaluate Chisel on five applications from the image processing, scientific computing, and financial analysis domains. The experimental results show that our implemented optimization algorithm enables Chisel to optimize our set of benchmark kernels to obtain energy savings from 8.7% to 19.8% compared to the fully reliable kernel implementations while preserving important reliability guarantees.
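The underlying selection problem can be pictured with the toy sketch below (the operation costs and reliabilities are invented; Chisel itself formulates the selection as an integer linear program over a formal kernel semantics rather than enumerating configurations):

```python
# Toy sketch of reliability-aware operation selection: choose which kernel
# operations run approximately to minimize energy while the kernel's overall
# reliability stays above the specification.
from itertools import product

# (name, energy if exact, energy if approximate, prob. the approx op is correct)
OPS = [("mul1", 10.0, 6.0, 0.999),
       ("add1",  4.0, 2.5, 0.9999),
       ("mul2", 10.0, 6.0, 0.999),
       ("add2",  4.0, 2.5, 0.9999)]
RELIABILITY_SPEC = 0.995     # kernel must be correct with at least this prob.

best = None
for choice in product([False, True], repeat=len(OPS)):   # True = approximate
    energy = sum(e_apx if apx else e_exact
                 for apx, (_, e_exact, e_apx, _) in zip(choice, OPS))
    reliability = 1.0
    for apx, (_, _, _, r) in zip(choice, OPS):
        if apx:
            reliability *= r          # exact ops assumed fully reliable
    if reliability >= RELIABILITY_SPEC and (best is None or energy < best[0]):
        best = (energy, reliability, choice)

energy, reliability, choice = best
print("run approximately:", [op[0] for op, apx in zip(OPS, choice) if apx])
print(f"energy {energy:.1f}, reliability {reliability:.4f}")
```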
Static Analysis Symposium | 2011
Sasa Misailovic; Daniel M. Roy; Martin C. Rinard
The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form P(|D| ≥ B) ≤ e, e ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on the absolute value of D, and e is the maximum acceptable probability of observing large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
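For intuition, the mean pattern admits a guarantee of exactly this shape through a Hoeffding-style concentration bound. The sketch below (notation assumed, not quoted from the paper) treats perforation as sampling m of the n loop iterations, each contributing a value in [a, b]:

```latex
% Hoeffding-style instantiation of P(|D| >= B) <= e for the mean pattern
% (a sketch; the bound also holds when sampling without replacement).
\[
  D \;=\; \frac{1}{m}\sum_{i \in S} x_i \;-\; \frac{1}{n}\sum_{i=1}^{n} x_i,
  \qquad x_i \in [a, b],\quad |S| = m,
\]
\[
  P\big(|D| \ge B\big) \;\le\; 2\exp\!\Big(-\frac{2 m B^2}{(b-a)^2}\Big)
  \;\le\; e
  \quad\text{whenever}\quad
  m \;\ge\; \frac{(b-a)^2}{2B^2}\,\ln\frac{2}{e}.
\]
```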
Programming Language Design and Implementation | 2012
Michael Carbin; Deokhwan Kim; Sasa Misailovic; Martin C. Rinard
Approximate program transformations such as skipping tasks [29, 30], loop perforation [21, 22, 35], reduction sampling [38], multiple selectable implementations [3, 4, 16, 38], dynamic knobs [16], synchronization elimination [20, 32], approximate function memoization [11], and approximate data types [34] produce programs that can execute at a variety of points in an underlying performance versus accuracy tradeoff space. These transformed programs have the ability to trade accuracy of their results for increased performance by dynamically and nondeterministically modifying variables that control their execution. We call such transformed programs relaxed programs because they have been extended with additional nondeterminism to relax their semantics and enable greater flexibility in their execution. We present language constructs for developing and specifying relaxed programs. We also present proof rules for reasoning about acceptability properties [28], which the program must satisfy to be acceptable. Our proof rules work with two kinds of acceptability properties: relational acceptability properties [28], which characterize desired relationships between the values of variables in the original and relaxed programs, and unary acceptability properties, which involve values only from a single (original or relaxed) program. The proof rules support a staged reasoning approach in which the majority of the reasoning effort works with the original program. Exploiting the common structure that the original and relaxed programs share, relational reasoning transfers reasoning effort from the original program to prove properties of the relaxed program. We have formalized the dynamic semantics of our target programming language and the proof rules in Coq and verified that the proof rules are sound with respect to the dynamic semantics. Our Coq implementation enables developers to obtain fully machine-checked verifications of their relaxed programs.
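A hypothetical executable rendering of the idea (invented names; the paper's constructs and Coq formalization are considerably richer): the relaxed program nondeterministically lowers an iteration-count knob that the original program fixes, and a relational acceptability property compares the two results.

```python
# Hypothetical sketch of a relaxed program and a relational acceptability
# property; `relaxed` adds nondeterminism over a knob that `original` fixes.
import random

def compute(n_iters):
    """Shared core computation: refine a series estimate for n_iters steps."""
    acc = 0.0
    for i in range(1, n_iters + 1):
        acc += 1.0 / (i * i)
    return acc

def original():
    return compute(10_000)

def relaxed():
    # relax the knob nondeterministically (modeled here by a random choice)
    return compute(random.choice([2_500, 5_000, 10_000]))

def acceptable(orig_out, relaxed_out, bound=1e-3):
    """Relational acceptability property over both executions."""
    return abs(relaxed_out - orig_out) <= bound

o, r = original(), relaxed()
print(f"original {o:.6f}, relaxed {r:.6f}, acceptable: {acceptable(o, r)}")
```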
European Conference on Object-Oriented Programming | 2011
Michael Carbin; Sasa Misailovic; Michael W. Kling; Martin C. Rinard
Infinite loops can make applications unresponsive. Potential problems include lost work or output, denied access to application functionality, and a lack of responses to urgent events. We present Jolt, a novel system for dynamically detecting and escaping infinite loops. At the user's request, Jolt attaches to an application to monitor its progress. Specifically, Jolt records the program state at the start of each loop iteration. If two consecutive loop iterations produce the same state, Jolt reports to the user that the application is in an infinite loop. At the user's option, Jolt can then transfer control to a statement following the loop, thereby allowing the application to escape the infinite loop and ideally continue its productive execution. The immediate goal is to enable the application to execute long enough to save any pending work, finish any in-progress computations, or respond to any urgent events. We evaluated Jolt by applying it to detect and escape eight infinite loops in five benchmark applications. Jolt was able to detect seven of the eight infinite loops (the eighth changes the state on every iteration). We also evaluated the effect of escaping an infinite loop as an alternative to terminating the application. In all of our benchmark applications, escaping an infinite loop produced a more useful output than terminating the application. Finally, we evaluated how well escaping from an infinite loop approximated the correction that the developers later made to the application. For two out of our eight loops, escaping the infinite loop produced the same output as the corrected version of the application.
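The core detection idea fits in a few lines. The hypothetical sketch below (not Jolt itself, which monitors real applications) snapshots the live state at each loop head and escapes when two consecutive snapshots coincide:

```python
# Hypothetical sketch of Jolt-style infinite-loop escape: if two consecutive
# iterations start from identical state, the loop cannot make progress.

def run_with_escape(state):
    prev_snapshot = None
    while state["i"] < 10:                 # buggy guard: i may stop changing
        snapshot = tuple(sorted(state.items()))
        if snapshot == prev_snapshot:
            print("infinite loop detected; escaping past the loop")
            break                          # transfer control after the loop
        prev_snapshot = snapshot
        if state["i"] < 5:                 # bug: no progress once i reaches 5
            state["i"] += 1
    return state

print(run_with_escape({"i": 0}))
```

As in the paper's evaluation, a loop that changes some state on every iteration would evade this detector.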
Symposium on Principles of Programming Languages | 2012
Zeyuan Allen Zhu; Sasa Misailovic; Jonathan A. Kelner; Martin C. Rinard
Although approximate computations have come to dominate many areas of computer science, the field of program transformations has focused almost exclusively on traditional semantics-preserving transformations. Such transformations do not attempt to exploit the opportunity, available in many computations, to acceptably trade off accuracy for benefits such as increased performance and reduced resource consumption. We present a model of computation for approximate computations and an algorithm for optimizing these computations. The algorithm works with two classes of transformations: substitution transformations (which select one of a number of available implementations for a given function, with each implementation offering a different combination of accuracy and resource consumption) and sampling transformations (which randomly discard some of the inputs to a given reduction). The algorithm produces a (1+ε) randomized approximation to the optimal randomized computation (which minimizes resource consumption subject to a probabilistic accuracy specification in the form of a maximum expected error or maximum error variance).
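The two transformation classes compose into a small optimization space. The toy sketch below (invented costs and error model) simply enumerates it, whereas the paper's algorithm computes a (1+ε)-approximation to the optimal randomized computation:

```python
# Toy sketch of the optimization problem: pick a function implementation
# (substitution) and a sampling rate for a reduction so that cost is
# minimized subject to an expected-error bound.

IMPLS = [("precise", 100.0, 0.000),   # (name, cost, expected error)
         ("medium",   60.0, 0.004),
         ("coarse",   30.0, 0.012)]
RATES = [1.0, 0.5, 0.25]              # fraction of reduction inputs kept
ERROR_BOUND = 0.01                    # max acceptable expected error

def sampling_error(rate):
    # Hypothetical error model: error grows as more inputs are discarded.
    return 0.002 * (1.0 / rate - 1.0)

best = min(((impl_cost * rate, name, rate)
            for name, impl_cost, impl_err in IMPLS
            for rate in RATES
            if impl_err + sampling_error(rate) <= ERROR_BOUND),
           key=lambda t: t[0])
print(f"impl={best[1]}, sampling rate={best[2]}, expected cost={best[0]:.1f}")
```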
ACM Transactions on Embedded Computing Systems | 2013
Sasa Misailovic; Deokhwan Kim; Martin C. Rinard
We present QuickStep, a novel system for parallelizing sequential programs. Unlike standard parallelizing compilers (which are designed to preserve the semantics of the original sequential computation), QuickStep is instead designed to generate (potentially nondeterministic) parallel programs that produce acceptably accurate results acceptably often. The freedom to generate parallel programs whose output may differ (within statistical accuracy bounds) from the output of the sequential program enables a dramatic simplification of the compiler, a dramatic increase in the range of applications that it can parallelize, and a significant expansion in the range of parallel programs that it can legally generate. Results from our benchmark set of applications show that QuickStep can automatically generate acceptably accurate and efficient parallel programs: the automatically generated parallel versions of five of our six benchmark applications run between 5.0 and 7.8 times faster on eight cores than the original sequential versions. These applications and parallelizations contain features (such as the use of modern object-oriented programming constructs or desirable parallelizations with infrequent but acceptable data races) that place them inherently beyond the reach of standard approaches.
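The philosophy can be illustrated with a deliberately racy toy parallelization (hypothetical code, not QuickStep, which parallelizes real applications): the unsynchronized parallel loop may lose an occasional update, and the result is accepted as long as its distortion stays within a statistical accuracy bound.

```python
# Toy illustration of tolerating infrequent data races: an unsynchronized
# parallel accumulation may lose updates, which is accepted as long as the
# result stays within an accuracy bound of the sequential computation.
import threading

N, WORKERS = 400_000, 4
shared = {"sum": 0.0}

def worker(lo, hi):
    for i in range(lo, hi):
        shared["sum"] += 1.0 / (1 + i % 1000)   # racy read-modify-write

sequential = sum(1.0 / (1 + i % 1000) for i in range(N))

threads = [threading.Thread(target=worker,
                            args=(w * N // WORKERS, (w + 1) * N // WORKERS))
           for w in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

error = abs(shared["sum"] - sequential) / sequential
print(f"relative error introduced by races: {error:.4%}")
```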
Software Engineering for Adaptive and Self-Managing Systems | 2015
Antonio Filieri; Martina Maggio; Konstantinos Angelopoulos; Nicolás D'Ippolito; Ilias Gerostathopoulos; Andreas B. Hempel; Henry Hoffmann; Pooyan Jamshidi; Evangelia Kalyvianaki; Cristian Klein; Filip Krikava; Sasa Misailovic; Alessandro Vittorio Papadopoulos; Suprio Ray; Amir Molzam Sharifloo; Stepan Shevtsov; Mateusz Ujma; Thomas Vogel
The software engineering community has proposed numerous approaches for making software self-adaptive. These approaches take inspiration from machine learning and control theory, constructing software that monitors and modifies its own behavior to meet goals. Control theory, in particular, has received considerable attention as it represents a general methodology for creating adaptive systems. Control-theoretical software implementations, however, tend to be ad hoc. While such solutions often work in practice, it is difficult to understand and reason about the desired properties and behavior of the resulting adaptive software and its controller. This paper discusses a control design process for software systems which enables automatic analysis and synthesis of a controller that is guaranteed to have the desired properties and behavior. The paper documents the process and illustrates its use in an example that walks through all necessary steps for self-adaptive controller synthesis.
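As a flavor of the approach, the hypothetical sketch below closes a simple feedback loop: a PI controller adjusts an application quality knob so that measured latency tracks a setpoint despite a load disturbance (the plant model and gains are invented; the paper's design process additionally analyzes and synthesizes the controller with formal guarantees).

```python
# Hypothetical PI control loop for self-adaptive software: the controller
# turns a quality knob so measured latency tracks a setpoint.

SETPOINT_MS = 50.0
KP, KI = 0.2, 0.05            # illustrative controller gains

def plant(knob, load):
    """Toy latency model: a higher quality knob yields higher latency."""
    return load * (20.0 + 2.0 * knob)

knob, integral, load = 20.0, 0.0, 1.0
for step in range(40):
    if step == 20:
        load = 1.5            # disturbance: request load rises by 50%
    error = plant(knob, load) - SETPOINT_MS
    integral += error
    knob = max(0.0, knob - (KP * error + KI * integral))  # actuate the knob
print(f"final knob {knob:.2f}, latency {plant(knob, load):.1f} ms "
      f"(setpoint {SETPOINT_MS} ms)")
```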
Foundations of Software Engineering | 2007
Sasa Misailovic; Aleksandar Milicevic; Nemanja Petrovic; Sarfraz Khurshid; Darko Marinov
We present novel algorithms for parallel testing of code that takes structurally complex test inputs. The algorithms build on the Korat algorithm for constraint-based generation of structurally complex test inputs. Given an imperative predicate that specifies the desired structural constraints and a finitization that bounds the desired input size, Korat performs a systematic search to generate all test inputs (within the bounds) that satisfy the constraints. We show how to generate test inputs with a parallel search in Korat and how to execute test inputs in parallel, both off-line (when the inputs are saved on disk) and on-line (when execution immediately follows generation). The inputs that Korat generates enable bounded-exhaustive testing that checks the code under test exhaustively for all inputs within the given bounds. We also describe a novel methodology for reducing the number of equivalent inputs that Korat can generate. Our development of parallel Korat and the methodology for reducing equivalent inputs are motivated by testing an application developed at Google. The experimental results on running parallel Korat across up to 1024 machines on Google's infrastructure show that parallel test generation and execution can achieve significant speedup, up to 543.55 times.
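The hypothetical sketch below captures the shape of such a pipeline (not Korat's actual pruning-based search; the predicate, bounds, and partitioning scheme are invented): candidates within a finitization bound are partitioned across workers, filtered by an imperative predicate, and then immediately run against the code under test, in the style of on-line testing.

```python
# Schematic sketch of parallel bounded-exhaustive test generation: partition
# the candidate space across workers, keep candidates satisfying the
# imperative predicate, then execute the code under test on each input.
from itertools import product
from multiprocessing import Pool

BOUND = 4          # finitization: element values in 0..BOUND, length <= BOUND

def rep_ok(candidate):
    """Imperative predicate: keep strictly increasing sequences."""
    return all(a < b for a, b in zip(candidate, candidate[1:]))

def explore(length):
    """One worker's share of the search: all candidates of a given length."""
    return [c for c in product(range(BOUND + 1), repeat=length) if rep_ok(c)]

if __name__ == "__main__":
    with Pool() as pool:                  # partition the search by length
        inputs = [c for part in pool.map(explore, range(BOUND + 1))
                  for c in part]
    print(f"{len(inputs)} valid test inputs within bounds")
    # on-line execution: immediately run the code under test on each input
    for inp in inputs:
        assert sorted(set(inp)) == list(inp)   # toy code under test
```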