Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ben Liblit is active.

Publication


Featured research published by Ben Liblit.


Programming Language Design and Implementation | 2005

Scalable statistical bug isolation

Ben Liblit; Mayur Naik; Alice X. Zheng; Alex Aiken; Michael I. Jordan

We present a statistical debugging algorithm that isolates bugs in programs containing multiple undiagnosed bugs. Earlier statistical algorithms that focus solely on identifying predictors that correlate with program failure perform poorly when there are multiple bugs. Our new technique separates the effects of different bugs and identifies predictors that are associated with individual bugs. These predictors reveal both the circumstances under which bugs occur as well as the frequencies of failure modes, making it easier to prioritize debugging efforts. Our algorithm is validated using several case studies, including examples in which the algorithm identified previously unknown, significant crashing bugs in widely used systems.
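The core ranking idea can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's exact estimator: for each instrumented predicate P it computes Failure(P), the chance a run fails given P was observed true, and Context(P), the chance a run fails given P was observed at all, and ranks predicates by the difference. All predicate names and counts are hypothetical.

```python
# Simplified sketch of predicate ranking for statistical bug isolation.
# Per-predicate counts: (true in failing runs, true in successful runs,
# observed in failing runs, observed in successful runs).

def increase(f_true, s_true, f_obs, s_obs):
    """Increase(P) = Failure(P) - Context(P)."""
    failure = f_true / (f_true + s_true)   # P(run fails | P observed true)
    context = f_obs / (f_obs + s_obs)      # P(run fails | P observed at all)
    return failure - context

# Hypothetical counts for three predicates in a program with two bugs.
counts = {
    "ptr == NULL": (30, 2, 40, 60),    # strongly predicts one bug
    "i > len":     (25, 5, 35, 65),    # predicts a second bug
    "x > 0":       (20, 20, 50, 50),   # no signal: fails as often as not
}

ranked = sorted(counts, key=lambda p: increase(*counts[p]), reverse=True)
```

A predicate that is simply reached on every failing path scores near zero, because its Context already reflects the failure rate; only predicates whose truth adds predictive power rise to the top.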


Programming Language Design and Implementation | 2003

Bug isolation via remote program sampling

Ben Liblit; Alex Aiken; Alice X. Zheng; Michael I. Jordan

We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors, and a process of elimination can be used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.
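The low overhead comes from amortized sampling: rather than flipping a coin at every instrumentation site, the instrumented program counts down a geometrically distributed gap to the next sample. The sketch below illustrates that idea in plain Python; the paper's implementation is a compiler transformation, and the class and parameter names here are illustrative.

```python
import random

class GeometricSampler:
    """Sample instrumentation sites with probability 1/rate each, using a
    geometrically distributed countdown instead of a per-site coin flip.
    Illustrative sketch, not the paper's compiler-based implementation."""

    def __init__(self, rate, seed=None):
        self.rate = rate
        self.rng = random.Random(seed)
        self.countdown = self._next_gap()

    def _next_gap(self):
        # Number of sites until the next sample: geometric with p = 1/rate.
        gap = 1
        while self.rng.random() >= 1.0 / self.rate:
            gap += 1
        return gap

    def should_sample(self):
        self.countdown -= 1
        if self.countdown == 0:
            self.countdown = self._next_gap()
            return True
        return False

sampler = GeometricSampler(rate=100, seed=1)
hits = sum(sampler.should_sample() for _ in range(1_000_000))
# hits should land near 10,000, i.e. a fair 1-in-100 sampling rate
```

On the fast path the instrumented code only decrements a counter, which is what keeps the per-user overhead low enough to deploy.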


Concurrency and Computation: Practice and Experience | 1998

Titanium: a high-performance Java dialect

Katherine A. Yelick; Luigi Semenzato; Geoff Pike; Carleton Miyamoto; Ben Liblit; Arvind Krishnamurthy; Paul N. Hilfinger; Susan L. Graham; Phillip Colella; Alex Aiken

Titanium is a language and system for high-performance parallel scientific computing. Titanium uses Java as its base, thereby leveraging the advantages of that language and allowing us to focus attention on parallel computing issues. The main additions to Java are immutable classes, multidimensional arrays, an explicitly parallel SPMD model of computation with a global address space, and zone-based memory management. We discuss these features and our design approach, and report progress on the development of Titanium, including our current driving application: a three-dimensional adaptive mesh refinement parallel Poisson solver.


International Conference on Software Engineering | 2009

HOLMES: Effective statistical debugging via efficient path profiling

Trishul M. Chilimbi; Ben Liblit; Krishna Kumar Mehra; Aditya V. Nori; Kapil Vaswani

Statistical debugging aims to automate the process of isolating bugs by profiling several runs of the program and using statistical analysis to pinpoint the likely causes of failure. In this paper, we investigate the impact of using richer program profiles such as path profiles on the effectiveness of bug isolation. We describe a statistical debugging tool called HOLMES that isolates bugs by finding paths that correlate with failure. We also present an adaptive version of HOLMES that uses iterative, bug-directed profiling to lower execution time and space overheads. We evaluate HOLMES using programs from the SIR benchmark suite and some large, real-world applications. Our results indicate that path profiles can help isolate bugs more precisely by providing more information about the context in which bugs occur. Moreover, bug-directed profiling can efficiently isolate bugs with low overheads, providing a scalable and accurate alternative to sparse random sampling.
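The gain from path profiles over simple predicates is that a path carries context. A minimal sketch of the correlation step, with hypothetical path IDs and run data rather than HOLMES's actual profiles:

```python
from collections import defaultdict

# Illustrative sketch: scoring path profiles against run outcomes.
# Each run records which acyclic paths it executed; paths that occur
# disproportionately in failing runs become candidate bug predictors.

def path_scores(runs):
    """runs: list of (set of paths executed, failed: bool).
    Returns, per path, the fraction of runs containing it that failed."""
    fail_count = defaultdict(int)
    total_count = defaultdict(int)
    for paths, failed in runs:
        for p in paths:
            total_count[p] += 1
            fail_count[p] += failed
    return {p: fail_count[p] / total_count[p] for p in total_count}

runs = [
    ({"A->B->D", "B->D->E"}, True),
    ({"A->B->D"}, True),
    ({"A->C->D", "B->D->E"}, False),
    ({"A->C->D"}, False),
]
scores = path_scores(runs)
```

Here the branch at B->D alone is ambiguous (it appears in both outcomes via "B->D->E"), but the full path "A->B->D" separates failing from successful runs cleanly, which is the paper's point about richer profiles.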


Architectural Support for Programming Languages and Operating Systems | 2006

Supporting nested transactional memory in logTM

Michelle J. Moravan; Jayaram Bobba; Kevin E. Moore; Luke Yen; Mark D. Hill; Ben Liblit; Michael M. Swift; David A. Wood

Nested transactional memory (TM) facilitates software composition by letting one module invoke another without either knowing whether the other uses transactions. Closed nested transactions extend isolation of an inner transaction until the top-level transaction commits. Implementations may flatten nested transactions into the top-level one, resulting in a complete abort on conflict, or allow partial abort of inner transactions. Open nested transactions allow a committing inner transaction to immediately release isolation, which increases parallelism and expressiveness at the cost of both software and hardware complexity. This paper extends the recently proposed flat Log-based Transactional Memory (LogTM) with nested transactions. Flat LogTM saves pre-transaction values in a log, detects conflicts with read (R) and write (W) bits per cache block, and, on abort, invokes a software handler to unroll the log. Nested LogTM supports nesting by segmenting the log into a stack of activation records and modestly replicating R/W bits. To facilitate composition with non-transactional code, such as language runtime and operating system services, we propose escape actions that allow trusted code to run outside the confines of the transactional memory system.
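The segmented log can be pictured as a stack of undo records, one segment per open transaction. The Python sketch below models only that bookkeeping (the paper describes a hardware design with per-cache-block R/W bits); class and method names are illustrative.

```python
# Sketch of closed nesting with a segmented undo log, loosely modeled
# on nested LogTM's stack of activation records. Illustration only.

class NestedTM:
    def __init__(self):
        self.memory = {}
        self.log_stack = []          # one undo segment per open transaction

    def begin(self):
        self.log_stack.append([])    # push a new activation record

    def write(self, addr, value):
        old = self.memory.get(addr)
        self.log_stack[-1].append((addr, old))   # save pre-transaction value
        self.memory[addr] = value

    def commit(self):
        segment = self.log_stack.pop()
        if self.log_stack:           # closed nesting: isolation extends to
            self.log_stack[-1].extend(segment)   # the top-level commit

    def abort(self):
        # Partial abort: unroll only the innermost transaction's segment.
        for addr, old in reversed(self.log_stack.pop()):
            if old is None:
                del self.memory[addr]
            else:
                self.memory[addr] = old

tm = NestedTM()
tm.begin()
tm.write("x", 1)
tm.begin()
tm.write("x", 2)
tm.abort()           # inner aborts: x rolls back to 1, outer keeps going
tm.commit()          # outer commits
```

A flattened implementation would have to discard the outer write too; keeping one segment per nesting level is what makes the partial abort possible.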


Programming Language Design and Implementation | 2011

Automated atomicity-violation fixing

Guoliang Jin; Linhai Song; Wei Zhang; Shan Lu; Ben Liblit

Fixing software bugs has always been an important and time-consuming process in software development. Fixing concurrency bugs has become especially critical in the multicore era. However, fixing concurrency bugs is challenging, in part due to non-deterministic failures and tricky parallel reasoning. Beyond correctly fixing the original problem in the software, a good patch should also avoid introducing new bugs, degrading performance unnecessarily, or damaging software readability. Existing tools cannot automate the whole fixing process and provide good-quality patches. We present AFix, a tool that automates the whole process of fixing one common type of concurrency bug: single-variable atomicity violations. AFix starts from the bug reports of existing bug-detection tools. It augments these with static analysis to construct a suitable patch for each bug report. It further tries to combine the patches of multiple bugs for better performance and code readability. Finally, AFix's run-time component provides testing customized for each patch. Our evaluation shows that patches automatically generated by AFix correctly eliminate six out of eight real-world bugs and significantly decrease the failure probability in the other two cases. AFix patches never introduce new bugs and usually have similar performance to manually-designed patches.
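The shape of a single-variable atomicity violation, and of the kind of patch AFix synthesizes, can be shown in a few lines. This is an illustrative Python analogue (AFix targets C/C++ and inserts locks via static analysis); the function and lock names are hypothetical.

```python
import threading

counter = 0
counter_lock = threading.Lock()   # the inserted "patch"

def buggy_increment():
    global counter
    tmp = counter        # another thread may interleave here...
    counter = tmp + 1    # ...and one of the two updates is lost

def patched_increment():
    global counter
    with counter_lock:   # read and write now form one atomic region
        tmp = counter
        counter = tmp + 1

threads = [threading.Thread(target=patched_increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The hard parts the paper addresses are exactly what this sketch glosses over: choosing the region boundaries so the patch neither misses the violation nor deadlocks, and merging locks across multiple reports.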


International Conference on Machine Learning | 2006

Statistical debugging: simultaneous identification of multiple bugs

Alice X. Zheng; Michael I. Jordan; Ben Liblit; Mayur Naik; Alex Aiken

We describe a statistical approach to software debugging in the presence of multiple bugs. Due to sparse sampling issues and complex interaction between program predicates, many generic off-the-shelf algorithms fail to select useful bug predictors. Taking inspiration from bi-clustering algorithms, we propose an iterative collective voting scheme for the program runs and predicates. We demonstrate successful debugging results on several real world programs and a large debugging benchmark suite.
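The flavor of collective voting between runs and predicates can be sketched as a mutually reinforcing iteration, in the spirit of the paper though not its exact algorithm: failing runs vote for the predicates they exhibit, predicates vote back for runs, and scores are renormalized until they stabilize. Data and predicate names are hypothetical.

```python
# Loose HITS-like sketch of collective voting between failing runs and
# predicates. Illustrative only; not the paper's algorithm.

def collective_vote(run_preds, iters=20):
    """run_preds: one set of observed-true predicates per failing run."""
    preds = {p for ps in run_preds for p in ps}
    pred_score = {p: 1.0 for p in preds}
    for _ in range(iters):
        # A run's weight is the combined strength of its predicates.
        run_score = [sum(pred_score[p] for p in ps) for ps in run_preds]
        # A predicate accumulates the weight of runs that vote for it.
        pred_score = {p: sum(r for ps, r in zip(run_preds, run_score)
                             if p in ps)
                      for p in preds}
        norm = max(pred_score.values())
        pred_score = {p: s / norm for p, s in pred_score.items()}
    return pred_score

failing_runs = [{"p1", "p2"}, {"p1"}, {"p1", "p3"}, {"p3"}]
scores = collective_vote(failing_runs)
```

A plain off-the-shelf ranker would score each predicate in isolation; the iteration lets strong predicates lend weight to the runs they explain, which is what helps separate predictors of different bugs.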


International Symposium on Software Testing and Analysis | 2007

Statistical debugging using compound boolean predicates

Piramanayagam Arumuga Nainar; Ting Chen; Jake Rosin; Ben Liblit

Statistical debugging uses dynamic instrumentation and machine learning to identify predicates on program state that are strongly predictive of program failure. Prior approaches have only considered simple, atomic predicates such as the directions of branches or the return values of function calls. We enrich the predicate vocabulary by adding complex Boolean formulae derived from these simple predicates. We draw upon three-valued logic, static program structure, and statistical estimation techniques to efficiently sift through large numbers of candidate Boolean predicate formulae. We present qualitative and quantitative evidence that complex predicates are practical, precise, and informative. Furthermore, we demonstrate that our approach is robust in the face of incomplete data provided by the sparse random sampling that typifies postdeployment statistical debugging.
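The three-valued logic matters because, under sparse sampling, a simple predicate in a given run may be True, False, or simply unobserved; a compound formula gets a truth value only when the observed parts determine it. A minimal sketch of that evaluation rule (Kleene connectives, with None standing for "not observed"):

```python
# Kleene three-valued connectives; None means 'not observed in this run'.
# Illustrative sketch of the evaluation rule, not the paper's tool.

def and3(a, b):
    if a is False or b is False:
        return False      # one observed False decides the conjunction
    if a is True and b is True:
        return True
    return None           # otherwise the compound value is unknown

def or3(a, b):
    if a is True or b is True:
        return True       # one observed True decides the disjunction
    if a is False and b is False:
        return False
    return None

# Compound predicates can often be scored from partial observations:
decided   = and3(False, None)   # False, without ever observing b
decided2  = or3(None, True)     # True, likewise
undecided = and3(True, None)    # unknown: must stay out of the counts
```

Runs where the compound value stays None are excluded from that formula's counts, which is how incomplete data is handled without biasing the statistics.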


Static Analysis Symposium | 2001

Estimating the Impact of Scalable Pointer Analysis on Optimization

Manuvir Das; Ben Liblit; Manuel Fähndrich; Jakob Rehof

This paper addresses the following question: Do scalable control-flow-insensitive pointer analyses provide the level of precision required to make them useful in compiler optimizations? We first describe alias frequency, a metric that measures the ability of a pointer analysis to determine that pairs of memory accesses in C programs cannot be aliases. We believe that this kind of information is useful for a variety of optimizations, while remaining independent of a particular optimization. We show that control-flow and context insensitive analyses provide the same answer as the best possible pointer analysis on at least 95% of all statically generated alias queries. In order to understand the potential run-time impact of the remaining 5% queries, we weight the alias queries by dynamic execution counts obtained from profile data. Flow-insensitive pointer analyses are accurate on at least 95% of the weighted alias queries as well. We then examine whether scalable pointer analyses are inaccurate on the remaining 5% alias queries because they are context-insensitive. To this end, we have developed a new context-sensitive pointer analysis that also serves as a general engine for tracing the flow of values in C programs. To our knowledge, it is the first technique for performing context-sensitive analysis with subtyping that scales to millions of lines of code. We find that the new algorithm does not identify fewer aliases than the context-insensitive analysis.
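The alias frequency metric itself is simple to state: over a set of access-pair queries, it is the fraction the analysis must answer "may alias" (so lower is more precise). The sketch below computes it from hypothetical points-to sets standing in for analysis output; it is not a pointer analysis itself.

```python
# Sketch of the alias frequency metric. The points-to sets below are
# hypothetical analysis output, not computed by a real analysis.

def may_alias(pts, x, y):
    """Two accesses may alias if their points-to sets intersect."""
    return bool(pts[x] & pts[y])

def alias_frequency(pts, queries):
    """Fraction of queries answered 'may alias' (lower is more precise)."""
    hits = sum(may_alias(pts, x, y) for x, y in queries)
    return hits / len(queries)

points_to = {
    "p": {"a"},
    "q": {"b"},
    "r": {"a", "b"},   # an imprecise analysis merged both targets into r
}
queries = [("p", "q"), ("p", "r"), ("q", "r"), ("p", "p")]
freq = alias_frequency(points_to, queries)   # 3 of 4 queries may alias
```

Weighting each query by its dynamic execution count, as the paper does, is a one-line change to the sum and is what connects the static metric to likely run-time impact.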


European Conference on Machine Learning | 2007

Statistical Debugging Using Latent Topic Models

David Andrzejewski; Anne Mulhern; Ben Liblit; Xiaojin Zhu

Statistical debugging uses machine learning to model program failures and help identify root causes of bugs. We approach this task using a novel Delta-Latent-Dirichlet-Allocation model. We model execution traces attributed to failed runs of a program as being generated by two types of latent topics: normal usage topics and bug topics. Execution traces attributed to successful runs of the same program, however, are modeled by usage topics only. Joint modeling of both kinds of traces allows us to identify weak bug topics that would otherwise remain undetected. We perform model inference with collapsed Gibbs sampling. In quantitative evaluations on four real programs, our model produces bug topics highly correlated to the true bugs, as measured by the Rand index. Qualitative evaluation by domain experts suggests that our model outperforms existing statistical methods for bug cause identification, and may help support other software tasks not addressed by earlier models.

Collaboration


Dive into Ben Liblit's collaborations.

Top Co-Authors

Katherine A. Yelick (Lawrence Berkeley National Laboratory)
Alice X. Zheng (University of California)
Cindy Rubio-González (University of Wisconsin-Madison)
Geoff Pike (University of California)
Peter Ohmann (University of Wisconsin-Madison)
Shan Lu (University of Chicago)
Thomas W. Reps (University of Wisconsin-Madison)
Barton P. Miller (University of Wisconsin-Madison)