Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrick Lam is active.

Publication


Featured research published by Patrick Lam.


Foundations of Software Engineering | 2008

Finding programming errors earlier by evaluating runtime monitors ahead-of-time

Eric Bodden; Patrick Lam; Laurie J. Hendren

Runtime monitoring allows programmers to validate, for instance, the proper use of application interfaces. Given a property specification, a runtime monitor tracks appropriate runtime events to detect violations and possibly execute recovery code. Although powerful, runtime monitoring inspects only one program run at a time and so may require many program runs to find errors. Therefore, in this paper, we present ahead-of-time techniques that can (1) prove the absence of property violations on all program runs, or (2) flag locations where violations are likely to occur. Our work focuses on tracematches, an expressive runtime monitoring notation for reasoning about groups of correlated objects. We describe a novel flow-sensitive static analysis for analyzing monitor states. Our abstraction captures both positive information (a set of objects could be in a particular monitor state) and negative information (the set is known not to be in a state). The analysis resolves heap references by combining the results of three points-to and alias analyses. We also propose a machine learning phase to filter out likely false positives. We applied a set of 13 tracematches to the DaCapo benchmark suite and SciMark2. Our static analysis rules out all potential points of failure in 50% of the cases, and 75% of false positives on average. Our machine learning algorithm correctly classifies the remaining potential points of failure in all but three of 461 cases. The approach revealed defects and suspicious code in three benchmark programs.
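As a rough illustration of the kind of finite-state property such monitors check, the following is a minimal hand-written Java monitor for the classic "call hasNext() before next()" iterator property. The class and method names are mine; tracematches express such properties declaratively rather than as wrapper classes.

```java
import java.util.Iterator;
import java.util.List;

/**
 * Minimal sketch of a finite-state runtime monitor for the property
 * "next() may only be called after hasNext() returned true".
 * Tracematches state such properties declaratively; this hand-written
 * wrapper only illustrates the idea and is not the paper's notation.
 */
public final class HasNextMonitor<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private boolean hasNextCalled = false;   // monitor state: was hasNext() observed?

    public HasNextMonitor(Iterator<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext() {
        hasNextCalled = true;                // transition: hasNext event seen
        return delegate.hasNext();
    }

    @Override
    public T next() {
        if (!hasNextCalled) {                // violation: next without preceding hasNext
            throw new IllegalStateException("property violation: next() without hasNext()");
        }
        hasNextCalled = false;               // consume the hasNext evidence
        return delegate.next();
    }

    public static void main(String[] args) {
        Iterator<String> it = new HasNextMonitor<>(List.of("a", "b").iterator());
        while (it.hasNext()) {
            System.out.println(it.next());   // conforms to the property
        }
    }
}
```

The ahead-of-time analyses in the paper try to prove, per program point, that such a monitor can never reach its violation state, so the check need not run at all.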


Runtime Verification | 2010

Clara: a framework for partially evaluating finite-state runtime monitors ahead of time

Eric Bodden; Patrick Lam; Laurie J. Hendren

Researchers have developed a number of runtime verification tools that generate runtime monitors in the form of AspectJ aspects. In this work, we present CLARA, a novel framework to statically optimize such monitoring aspects with respect to a given program under test. CLARA uses a sequence of increasingly precise static analyses to automatically convert a monitoring aspect into a residual runtime monitor. The residual monitor only watches events triggered by program locations that the analyses failed to prove safe at compile time. In two-thirds of the cases in our experiments, the static analysis succeeds on all locations, proving that the program fulfills the stated properties, and completely obviating the need for runtime monitoring. In the remaining cases, the residual runtime monitor is usually much more efficient than a full monitor, yet still captures all property violations at runtime.
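A minimal sketch of the residual-monitor idea follows: events from instrumentation points ("shadows") that the static analyses proved safe are simply dropped. The shadow IDs and the hard-coded "proved safe" set are invented for illustration; Clara computes this information automatically from the analyses.

```java
import java.util.Set;

/**
 * Sketch of a residual monitor: events from shadows (instrumentation points)
 * that a static analysis proved safe are ignored, so only the remaining
 * shadows pay the monitoring cost at runtime. Shadow IDs and the PROVED_SAFE
 * set are invented placeholders.
 */
public final class ResidualMonitor {
    // Pretend the analysis proved shadows 1 and 3 can never contribute to a violation.
    private static final Set<Integer> PROVED_SAFE = Set.of(1, 3);

    /** Called by instrumentation at shadow {@code shadowId} with event {@code symbol}. */
    public void onEvent(int shadowId, String symbol) {
        if (PROVED_SAFE.contains(shadowId)) {
            return;                       // residual monitor: skip proven-safe shadows
        }
        // A real monitor would advance a finite-state machine here.
        System.out.println("monitoring shadow " + shadowId + ": " + symbol);
    }

    public static void main(String[] args) {
        ResidualMonitor m = new ResidualMonitor();
        m.onEvent(1, "close");   // dropped: proven safe at compile time
        m.onEvent(2, "write");   // still monitored at runtime
    }
}
```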


IEEE Transactions on Industrial Informatics | 2010

Time-Aware Instrumentation of Embedded Software

Sebastian Fischmeister; Patrick Lam

Software instrumentation is a key technique in many stages of the development process. It is particularly important for debugging embedded systems. Instrumented programs produce data traces which enable the developer to locate the origins of misbehaviors in the system under test. However, producing data traces incurs runtime overhead in the form of additional computation resources for capturing and copying the data. The instrumentation may therefore interfere with the system's timing and perturb its behavior. In this work, we propose an instrumentation technique for applications with temporal constraints, specifically targeting background/foreground or cyclic executive systems. Our framework permits reasoning about space and time and enables the composition of software instrumentations. In particular, we propose a definition for trace reliability, which enables us to instrument real-time applications which aggressively push their time budgets. Using the framework, we present a method with low perturbation by optimizing the number of insertion points and trace buffer size with respect to code size and time budgets. Finally, we apply the theory to two concrete case studies: we instrument the OpenEC firmware for the keyboard controller of the One Laptop Per Child project, as well as an implementation of a flash file system.
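A rough sketch of the time-budget idea, not the paper's actual framework: a trace point records data only while the remaining slack of the current job covers the estimated logging cost. The budget and cost figures below are invented.

```java
/**
 * Sketch of a time-aware trace point: a sample is recorded only if the
 * estimated logging cost still fits in the job's remaining time budget.
 * Budget and cost values are invented; the paper derives them from the
 * task model and optimizes buffer size and insertion points.
 */
public final class TimeAwareTracePoint {
    private final long[] buffer;           // fixed-size trace buffer
    private int next = 0;

    public TimeAwareTracePoint(int capacity) {
        this.buffer = new long[capacity];
    }

    /**
     * Record {@code value} only if {@code remainingBudgetNanos} covers the
     * estimated logging cost; otherwise skip, preserving the job's deadline.
     */
    public void trace(long value, long remainingBudgetNanos, long estimatedCostNanos) {
        if (remainingBudgetNanos < estimatedCostNanos || next >= buffer.length) {
            return;                        // not enough slack (or buffer full): drop the sample
        }
        buffer[next++] = value;
    }

    public static void main(String[] args) {
        TimeAwareTracePoint tp = new TimeAwareTracePoint(64);
        tp.trace(42L, 5_000, 1_000);       // enough slack: recorded
        tp.trace(43L, 500, 1_000);         // too little slack: dropped
        System.out.println("samples recorded: " + tp.count());
    }

    private int count() { return next; }
}
```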


Journal of Logic and Computation | 2010

Collaborative Runtime Verification with Tracematches

Eric Bodden; Laurie J. Hendren; Patrick Lam; Ondrej Lhoták; Nomair A. Naeem

Perfect pre-deployment test coverage is notoriously difficult to achieve for large applications. Given enough end users, however, many more test cases will be encountered during an application's deployment than during testing. The use of runtime verification after deployment would enable developers to detect unexpected situations. Unfortunately, the prohibitive performance cost of runtime monitors prevents their use in deployed code. In this work, we study the feasibility of collaborative runtime verification, a verification approach which can distribute the burden of runtime verification among multiple users and over multiple runs. Each user executes a partially instrumented program and therefore suffers only a fraction of the instrumentation overhead. We focus on runtime verification using tracematches. Tracematches are a specification formalism that allows users to specify runtime verification properties via regular expressions with free variables over the dynamic execution trace. We propose two techniques for soundly partitioning the instrumentation required for tracematches: spatial partitioning, where different copies of a program monitor different program points for violations, and temporal partitioning, where monitoring is switched on and off over time. We evaluate the relative impact of partitioning on a user's runtime overhead by applying each partitioning technique to a collection of benchmarks that would otherwise incur significant instrumentation overhead. Our results show that spatial partitioning almost completely eliminates runtime overhead (for any particular benchmark copy) on many of our test cases, and that temporal partitioning scales well and provides runtime verification on a ‘pay as you go’ basis.
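A minimal sketch of temporal partitioning, assuming an invented 10% duty cycle and window length: monitoring is enabled only during randomly chosen windows, so any single run pays a fraction of the full overhead. The paper additionally handles soundness at window boundaries, which this sketch ignores.

```java
import java.util.concurrent.ThreadLocalRandom;

/**
 * Sketch of temporal partitioning: monitoring is switched on only during
 * randomly chosen windows of events. The 10% duty cycle and window length
 * are invented for illustration.
 */
public final class TemporalPartitioning {
    private static final double DUTY_CYCLE = 0.10;   // fraction of windows monitored
    private static final int WINDOW_EVENTS = 1_000;  // events per window

    private boolean windowEnabled = false;
    private int eventsInWindow = 0;

    /** Returns true if this event should be passed to the runtime monitor. */
    public boolean shouldMonitor() {
        if (eventsInWindow == 0) {        // starting a new window: flip a biased coin
            windowEnabled = ThreadLocalRandom.current().nextDouble() < DUTY_CYCLE;
        }
        eventsInWindow = (eventsInWindow + 1) % WINDOW_EVENTS;
        return windowEnabled;
    }

    public static void main(String[] args) {
        TemporalPartitioning tp = new TemporalPartitioning();
        long monitored = 0;
        for (int i = 0; i < 1_000_000; i++) {
            if (tp.shouldMonitor()) monitored++;
        }
        System.out.printf("monitored %.1f%% of events%n", 100.0 * monitored / 1_000_000);
    }
}
```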


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2015

SATCheck: SAT-directed stateless model checking for SC and TSO

Brian Demsky; Patrick Lam

Writing low-level concurrent code is well known to be challenging and error prone. The widespread deployment of multi-core hardware and the shift towards using low-level concurrent data structures has moved the problem into the mainstream. Finding bugs in such code may require finding a specific bug-revealing thread interleaving out of a huge space of parallel executions. Model-checking is a powerful technique for exhaustively testing code. However, scaling model checking presents a significant challenge. In this paper we present a new and more scalable technique for model checking concurrent code, based on concrete execution. Our technique observes concrete behaviors, builds a model of these behaviors, encodes the model in SAT, and leverages SAT solver technology to find executions that reveal new behaviors. It then runs the new execution, incorporates the newly observed behavior, and repeats the process until it has explored all reachable behaviors. We have implemented a prototype of our approach in the SATCheck tool. Our tool supports both the Total Store Ordering (TSO) and Sequentially Consistent (SC) memory models. We evaluate SATCheck by testing several concurrent data structure implementations and comparing its performance to the original DPOR stateless model checking algorithm implemented in CDSChecker, the source DPOR algorithm implemented in Nidhugg, and CheckFence. Our experiments show that SATCheck scales better than previous approaches while at the same time operating on concrete executions.
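The exploration loop the abstract describes can be summarized as a skeleton in Java. Everything below (Schedule, Execution, Model, Program, and the trivial stand-ins in main) is an invented placeholder, not SATCheck's API; the sketch only shows the run/learn/query/repeat structure.

```java
import java.util.Optional;

/**
 * Skeleton of the loop described in the abstract: run a concrete execution,
 * fold its observed behavior into a model, ask a solver for a schedule that
 * would exhibit a not-yet-seen behavior, and repeat until none exists.
 * All interfaces are invented placeholders, not SATCheck's API.
 */
public final class ExplorationLoop {
    interface Schedule {}
    interface Execution {}
    interface Model {
        void incorporate(Execution e);                 // add newly observed behavior
        Optional<Schedule> findUnexploredSchedule();   // solver query: anything new left?
    }
    interface Program {
        Execution run(Schedule s);                     // one concrete run under a schedule
    }

    /** Explore until the solver query reports no schedule with new behavior. */
    static void explore(Program program, Model model, Schedule initial) {
        Schedule next = initial;
        while (true) {
            Execution e = program.run(next);           // concrete run
            model.incorporate(e);                      // learn its behavior
            Optional<Schedule> candidate = model.findUnexploredSchedule();
            if (candidate.isEmpty()) {
                return;                                // all reachable behaviors explored
            }
            next = candidate.get();
        }
    }

    public static void main(String[] args) {
        // Trivial stand-ins: a program whose single run reveals everything.
        Model model = new Model() {
            private boolean seen = false;
            public void incorporate(Execution e) { seen = true; }
            public Optional<Schedule> findUnexploredSchedule() {
                return seen ? Optional.empty() : Optional.of(new Schedule() {});
            }
        };
        explore(s -> new Execution() {}, model, new Schedule() {});
        System.out.println("exploration finished");
    }
}
```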


Empirical Software Engineering | 2014

Correlations between bugginess and time-based commit characteristics

Jon Eyolfson; Lin Tan; Patrick Lam

Modern software is often developed over many years with hundreds of thousands of commits. Commit metadata is a rich source of time-based characteristics, including the commit’s time of day and the commit frequency and seniority of its author. The “bugginess” of a commit is also a critical property of that commit. In this paper, we investigate the correlation between a commit’s time-based characteristics and its “bugginess”; such results can be useful for software developers and software engineering researchers. For instance, developers or code reviewers might be well-advised to thoroughly verify commits that are more likely to be buggy. In this paper, we study the correlation between a commit’s bugginess and the time of day of the commit, the day of week of the commit, the commit frequency and seniority of the commit authors, and whether or not the developers have marked a commit as a “stable” commit. We survey three widely-used open source projects: the Linux kernel, PostgreSQL, and the Xorg server. Our main findings include: (1) commits between midnight and 4 AM (referred to as late-night commits) are significantly buggier and commits between 8 AM and noon are less buggy, implying that developers may want to double-check their own late-night commits; (2) daily-committing developers produce less-buggy commits, indicating that we may want to promote the practice of daily-committing developers reviewing other developers’ commits; (3) the bugginess of commits versus day-of-week varies for different software projects; and (4) stable commits are significantly less buggy than commits in general.
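The time-of-day correlation the study reports amounts to bucketing commits by the hour they were made and comparing bug rates across buckets. The tiny data set below is fabricated; the study mines real repositories and links bug-fix commits back to the commits that introduced the bugs.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/**
 * Sketch of a time-of-day analysis: group commits by hour and compute the
 * fraction later found buggy in each bucket. The commits below are made up.
 */
public final class BugginessByHour {
    record Commit(int hourOfDay, boolean buggy) {}

    static Map<Integer, Double> bugRateByHour(List<Commit> commits) {
        Map<Integer, int[]> counts = new TreeMap<>();   // hour -> {buggy, total}
        for (Commit c : commits) {
            int[] cell = counts.computeIfAbsent(c.hourOfDay(), h -> new int[2]);
            if (c.buggy()) cell[0]++;
            cell[1]++;
        }
        Map<Integer, Double> rates = new TreeMap<>();
        counts.forEach((hour, cell) -> rates.put(hour, (double) cell[0] / cell[1]));
        return rates;
    }

    public static void main(String[] args) {
        List<Commit> commits = List.of(
                new Commit(2, true), new Commit(2, true), new Commit(2, false),   // late night
                new Commit(10, false), new Commit(10, false), new Commit(10, true));
        bugRateByHour(commits).forEach((hour, rate) ->
                System.out.printf("hour %02d: %.0f%% buggy%n", hour, 100 * rate));
    }
}
```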


Real-Time Technology and Applications Symposium | 2009

On Time-Aware Instrumentation of Programs

Sebastian Fischmeister; Patrick Lam

Software instrumentation is a key technique in many stages of the development process. It is of particular importance for debugging embedded systems. Instrumented programs produce data traces which enable the developer to locate the origins of misbehaviours in the system under test. However, producing data traces incurs runtime overhead in the form of additional computation resources for capturing and copying the data. The instrumentation may therefore interfere with the system's timing and perturb its behavior. In the worst case, this perturbation leads to new system behaviours that prevent the developer from locating the original misbehaviours. In this work, we propose an instrumentation technique for applications with temporal constraints, specifically targeting background/foreground systems. Our framework permits reasoning about space and time for software instrumentations. In particular, we propose a definition for trace reliability, which enables us to instrument real-time applications which aggressively push their time budgets. Using the framework, we present a method with low perturbation by optimizing the number of insertion points and trace buffer size for code size and time budgets. Finally, we apply the theory to a concrete case study and instrument the OpenEC firmware for the keyboard controller of the One Laptop Per Child project.
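One way to picture "optimizing the number of insertion points ... for time budgets" is a greedy selection that adds candidate trace points in order of value per unit of overhead until the per-job budget is exhausted. The candidates, numbers, and the greedy heuristic are mine, not the paper's algorithm.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of choosing instrumentation points under a time budget: candidates
 * are ranked by diagnostic value per unit of overhead and added greedily
 * until the budget runs out. Candidates, costs, and the heuristic are invented.
 */
public final class InsertionPointSelection {
    record Candidate(String location, long costNanos, double value) {}

    static List<Candidate> select(List<Candidate> candidates, long budgetNanos) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingDouble(
                (Candidate c) -> c.value() / c.costNanos()).reversed());
        List<Candidate> chosen = new ArrayList<>();
        long spent = 0;
        for (Candidate c : sorted) {
            if (spent + c.costNanos() <= budgetNanos) {   // fits in the remaining budget
                chosen.add(c);
                spent += c.costNanos();
            }
        }
        return chosen;
    }

    public static void main(String[] args) {
        List<Candidate> candidates = List.of(
                new Candidate("isr_entry", 400, 3.0),
                new Candidate("main_loop", 900, 5.0),
                new Candidate("uart_tx", 700, 1.0));
        select(candidates, 1_500).forEach(c -> System.out.println("instrument " + c.location()));
    }
}
```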


Mining Software Repositories | 2014

Finding patterns in static analysis alerts: improving actionable alert ranking

Quinn Hanam; Lin Tan; Reid Holmes; Patrick Lam

Static analysis (SA) tools that find bugs by inferring programmer beliefs (e.g., FindBugs) are commonplace in today's software industry. While they find a large number of actual defects, they are often plagued by high rates of alerts that a developer would not act on (unactionable alerts) because they are incorrect, do not significantly affect program execution, etc. High rates of unactionable alerts decrease the utility of static analysis tools in practice. We present a method for differentiating actionable and unactionable alerts by finding alerts with similar code patterns. To do so, we create a feature vector based on code characteristics at the site of each SA alert. With these feature vectors, we use machine learning techniques to build an actionable alert prediction model that is able to classify new SA alerts. We evaluate our technique on three subject programs using the FindBugs static analysis tool and the Faultbench benchmark methodology. For a developer inspecting the top 5% of all alerts for three sample projects, our approach is able to identify 57 of 211 actionable alerts, which is 38 more than the FindBugs priority measure. Combined with previous actionable alert identification techniques, our method finds 75 actionable alerts in the top 5%, which is four more actionable alerts (a 6% improvement) than previous actionable alert identification techniques.
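A minimal sketch of the feature-vector idea: describe each alert by a few code characteristics and label a new alert by its nearest previously triaged neighbour. The chosen features and the 1-NN classifier are placeholders; the paper uses richer code patterns and standard machine-learning models.

```java
import java.util.List;

/**
 * Sketch of classifying static-analysis alerts by code-pattern similarity:
 * each alert becomes a small numeric feature vector and a new alert takes the
 * label of its nearest already-triaged neighbour. Features and classifier are
 * placeholders for the paper's richer approach.
 */
public final class AlertClassifier {
    record Alert(double methodLength, double nestingDepth, double callsInStmt, boolean actionable) {}

    static double distance(Alert a, Alert b) {
        double d1 = a.methodLength() - b.methodLength();
        double d2 = a.nestingDepth() - b.nestingDepth();
        double d3 = a.callsInStmt() - b.callsInStmt();
        return Math.sqrt(d1 * d1 + d2 * d2 + d3 * d3);
    }

    /** Predict whether {@code query} is actionable from labelled training alerts. */
    static boolean predictActionable(List<Alert> labelled, Alert query) {
        Alert nearest = labelled.get(0);
        for (Alert a : labelled) {
            if (distance(a, query) < distance(nearest, query)) nearest = a;
        }
        return nearest.actionable();
    }

    public static void main(String[] args) {
        List<Alert> labelled = List.of(
                new Alert(12, 1, 2, true),    // small, simple site: developer fixed it
                new Alert(220, 6, 9, false)); // huge, tangled site: developer ignored it
        Alert query = new Alert(15, 2, 3, false /* label unknown, ignored */);
        System.out.println("actionable? " + predictActionable(labelled, query));
    }
}
```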


International Conference on Software Engineering | 2010

Views: object-inspired concurrency control

Brian Demsky; Patrick Lam

We present views, a new approach to controlling concurrency. Fine-grained locking is often necessary to increase concurrency. Correctly implementing fine-grained locking with today's concurrency primitives can be challenging: race conditions often plague programs with sophisticated locking schemes. Views ease the task of implementing sophisticated locking schemes and provide static checks to automatically detect many data races. Views consist of view declarations that describe which views of an object may be simultaneously held by different threads, which object fields may be accessed through a given view, and which methods can be called through a given view. A set of view annotations specifies which code regions hold a view of an object. Our view compiler performs simple static checks which eliminate many data races. We have ported three benchmark applications to use views: portions of Vuze, a BitTorrent client; Mailpuccino, a graphical e-mail client; and TupleSoup, a database. Our experience indicates that views are easy to use, make implementing sophisticated locking schemes simple, and can help eliminate concurrency bugs.
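The notion of a view, exposing only some fields and methods of an object while held, can be loosely approximated in plain Java by handing code a restricted interface while a lock is held. This is only an analogy under my own naming; it is not the paper's view declarations, annotations, or compiler checks, and it does not capture view compatibility between threads.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * Plain-Java analogy for views: each "view" is a restricted interface over a
 * shared object, and code may only use the operations its view exposes while
 * it holds that view. The paper's actual mechanism is richer; this only
 * illustrates the intent.
 */
public final class ViewsSketch {
    interface ReadView  { int balance(); }             // a view that may only read
    interface WriteView { void deposit(int amount); }   // a view that may only modify

    static final class Account implements ReadView, WriteView {
        private final ReentrantLock lock = new ReentrantLock();
        private int balance = 0;

        /** Run {@code body} while holding the read view. */
        <T> T withReadView(Function<ReadView, T> body) {
            lock.lock();
            try { return body.apply(this); } finally { lock.unlock(); }
        }

        /** Run {@code body} while holding the write view. */
        void withWriteView(Consumer<WriteView> body) {
            lock.lock();
            try { body.accept(this); } finally { lock.unlock(); }
        }

        public int balance() { return balance; }
        public void deposit(int amount) { balance += amount; }
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.withWriteView(w -> w.deposit(100));          // writes only via WriteView
        int seen = account.withReadView(ReadView::balance);  // reads only via ReadView
        System.out.println("balance = " + seen);
    }
}
```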


ACM Transactions on Programming Languages and Systems | 2012

Partially Evaluating Finite-State Runtime Monitors Ahead of Time

Eric Bodden; Patrick Lam; Laurie J. Hendren

Finite-state properties account for an important class of program properties, typically related to the order of operations invoked on objects. Many library implementations therefore include manually written finite-state monitors to detect violations of finite-state properties at runtime. Researchers have recently proposed the explicit specification of finite-state properties and automatic generation of monitors from the specification. However, runtime monitoring only shows the presence of violations, and typically cannot prove their absence. Moreover, inserting a runtime monitor into a program under test can slow down the program by several orders of magnitude. In this work, we therefore present a set of four static whole-program analyses that partially evaluate runtime monitors at compile time, with increasing cost and precision. As we show, ahead-of-time evaluation can often evaluate the monitor completely statically. This may prove that the program cannot violate the property on any execution or may prove that violations do exist. In the remaining cases, the partial evaluation converts the runtime monitor into a residual monitor. This monitor only receives events from program locations that the analyses failed to prove irrelevant. This makes the residual monitor much more efficient than a full monitor, while still capturing all property violations at runtime. We implemented the analyses in Clara, a novel framework for the partial evaluation of AspectJ-based runtime monitors, and validated our approach by applying Clara to finite-state properties over several large-scale Java programs. Clara proved that most of the programs never violate our example properties. Some programs required monitoring, but in those cases Clara could often reduce the monitoring overhead to below 10%. We observed that several programs did violate the stated properties.
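The "increasing cost and precision" structure can be pictured as a pipeline that runs cheaper analyses first and stops as soon as no potentially violating program points remain. The stage stand-ins and shadow IDs below are invented for illustration.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Function;

/**
 * Sketch of a staged partial-evaluation pipeline: each analysis takes the set
 * of shadows (instrumentation points) still suspected of contributing to a
 * violation and returns the subset it could not prove safe; cheaper analyses
 * run first, and the pipeline stops early once nothing is left to monitor.
 * Stage stand-ins and shadow IDs are invented.
 */
public final class StagedPipeline {
    static Set<Integer> partiallyEvaluate(Set<Integer> shadows,
                                          List<Function<Set<Integer>, Set<Integer>>> stages) {
        Set<Integer> remaining = shadows;
        for (Function<Set<Integer>, Set<Integer>> stage : stages) {
            remaining = stage.apply(remaining);       // keep only unproven shadows
            if (remaining.isEmpty()) {
                break;                                // no residual monitor needed
            }
        }
        return remaining;                             // these shadows stay instrumented
    }

    public static void main(String[] args) {
        // Stand-ins for increasingly precise analyses (each returns a fixed smaller set).
        List<Function<Set<Integer>, Set<Integer>>> stages = List.of(
                s -> Set.of(2, 3, 5),                 // cheap stage rules out most shadows
                s -> Set.of(5),                       // more precise stage rules out more
                s -> Set.of(5));                      // most precise stage: one shadow remains
        System.out.println("residual shadows: "
                + partiallyEvaluate(Set.of(1, 2, 3, 4, 5), stages));
    }
}
```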

Collaboration


Dive into Patrick Lam's collaborations.

Top Co-Authors

Eric Bodden, Technische Universität Darmstadt
Lin Tan, University of Waterloo
Brian Demsky, University of California