Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Carlos Pacheco is active.

Publication


Featured research published by Carlos Pacheco.


Science of Computer Programming | 2007

The Daikon system for dynamic detection of likely invariants

Michael D. Ernst; Jeff H. Perkins; Philip J. Guo; Stephen McCamant; Carlos Pacheco; Matthew S. Tschantz; Chen Xiao

Daikon is an implementation of dynamic detection of likely invariants; that is, the Daikon invariant detector reports likely program invariants. An invariant is a property that holds at a certain point or points in a program; these are often used in assert statements, documentation, and formal specifications. Examples include being constant (x = a), non-zero (x ≠ 0), being in a range (a ≤ x ≤ b), linear relationships (y = ax + b), ordering (x ≤ y), functions from a library (x = fn(y)), containment (x ∈ y), sortedness (x is sorted), and many more. Users can extend Daikon to check for additional invariants. Dynamic invariant detection runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions. Dynamic invariant detection is a machine learning technique that can be applied to arbitrary data. Daikon can detect invariants in C, C++, Java, and Perl programs, and in record-structured data sources; it is easy to extend Daikon to other applications. Invariants can be useful in program understanding and a host of other applications. Daikon's output has been used for generating test cases, predicting incompatibilities in component integration, automating theorem proving, repairing inconsistent data structures, and checking the validity of data streams, among other tasks. Daikon is freely available in source and binary form, along with extensive documentation, at http://pag.csail.mit.edu/daikon/.
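
As a rough illustration of the dynamic approach the abstract describes, the following toy sketch (not Daikon's actual implementation; the variable, the observed values, and the tiny invariant grammar are all invented) checks a few candidate invariants against values observed across executions and reports those that survive:

```java
import java.util.Collections;
import java.util.List;

// Toy detector: checks candidate invariants (constant, non-zero, bounded
// range) over observed values of a single integer variable. Daikon itself
// instruments real programs and supports a far richer invariant grammar
// over many variables.
public class ToyInvariantDetector {
    public static void main(String[] args) {
        // Pretend these are values of variable x observed at one program
        // point across several executions of the program under analysis.
        List<Integer> observations = List.of(3, 7, 2, 9, 4);

        int min = Collections.min(observations);
        int max = Collections.max(observations);

        // "x = a" (constant): survives only if every observation is equal.
        if (min == max) {
            System.out.println("x == " + min);
        }
        // "x != 0" (non-zero): survives if no observation was zero.
        if (!observations.contains(0)) {
            System.out.println("x != 0");
        }
        // "a <= x <= b" (range): reported from the observed extremes.
        System.out.println(min + " <= x <= " + max);
    }
}
```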


International Conference on Software Engineering | 2007

Feedback-Directed Random Test Generation

Carlos Pacheco; Shuvendu K. Lahiri; Michael D. Ernst; Thomas Ball

We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contracts) point to potential errors that should be corrected. Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation, in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves higher or equal block and predicate coverage than model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780 KLOC), feedback-directed random test generation finds many previously unknown errors, not found by either model checking or undirected random generation.
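
The sketch below is a deliberately simplified illustration of the feedback loop described above, using integers and arithmetic operations in place of real method-call sequences; the operations, the seed pool, and the "contract" are all invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Simplified feedback-directed generation: build a new input from previously
// constructed ones, execute it, and classify it as redundant,
// contract-violating, or useful before deciding whether to extend the pool.
public class FeedbackDirectedSketch {
    interface Op { int apply(int a, int b); }

    public static void main(String[] args) {
        List<Op> ops = List.<Op>of((a, b) -> a + b, (a, b) -> a * b);
        List<Integer> pool = new ArrayList<>(List.of(0, 1));  // seed inputs
        Set<Integer> seen = new HashSet<>(pool);
        Random rnd = new Random(42);

        for (int i = 0; i < 100; i++) {
            // Randomly select an operation and arguments from the pool.
            Op op = ops.get(rnd.nextInt(ops.size()));
            int a = pool.get(rnd.nextInt(pool.size()));
            int b = pool.get(rnd.nextInt(pool.size()));
            int result = op.apply(a, b);

            // Feedback step: execute and classify the new input.
            if (!seen.add(result)) {
                continue;                       // redundant: already seen
            }
            if (result < 0) {                   // toy "contract": no overflow
                System.out.println("contract-violating input: " + result);
            } else {
                pool.add(result);               // useful: extend the pool
            }
        }
        System.out.println("pool size: " + pool.size());
    }
}
```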


Symposium on Operating Systems Principles | 2009

Automatically patching errors in deployed software

Jeff H. Perkins; Sunghun Kim; Samuel Larsen; Saman P. Amarasinghe; Jonathan Bachrach; Michael Carbin; Carlos Pacheco; Frank Sherwood; Stelios Sidiroglou; Greg Sullivan; Weng-Fai Wong; Yoav Zibin; Michael D. Ernst; Martin C. Rinard

We present ClearView, a system for automatically patching errors in deployed software. ClearView works on stripped Windows x86 binaries without any need for source code, debugging information, or other external information, and without human intervention. ClearView (1) observes normal executions to learn invariants that characterize the application's normal behavior, (2) uses error detectors to distinguish normal executions from erroneous executions, (3) identifies violations of learned invariants that occur during erroneous executions, (4) generates candidate repair patches that enforce selected invariants by changing the state or flow of control to make the invariant true, and (5) observes the continued execution of patched applications to select the most successful patch. ClearView is designed to correct errors in software with high availability requirements. Aspects of ClearView that make it particularly appropriate for this context include its ability to generate patches without human intervention, apply and remove patches to and from running applications without requiring restarts or otherwise perturbing the execution, and identify and discard ineffective or damaging patches by evaluating the continued behavior of patched applications. ClearView was evaluated in a Red Team exercise designed to test its ability to successfully survive attacks that exploit security vulnerabilities. A hostile external Red Team developed ten code injection exploits and used these exploits to repeatedly attack an application protected by ClearView. ClearView detected and blocked all of the attacks. For seven of the ten exploits, ClearView automatically generated patches that corrected the error, enabling the application to survive the attacks and continue on to successfully process subsequent inputs. Finally, the Red Team attempted to make ClearView apply an undesirable patch, but ClearView's patch evaluation mechanism enabled it to identify and discard both ineffective patches and damaging patches.
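
A toy sketch of the patch-evaluation idea in step (5) follows; everything in it (the inputs, the invariant, the candidate patches) is invented, and ClearView itself operates on stripped x86 binaries rather than on source-level functions:

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

// Toy patch evaluation: run each candidate patch against the same inputs
// and prefer the one whose executions stop violating the learned invariant
// (here, hard-coded as "output >= 0").
public class PatchEvaluationSketch {
    public static void main(String[] args) {
        int[] inputs = {5, -3, 8, -1};
        // The "buggy application": passes its input through unchanged,
        // so negative inputs violate the invariant output >= 0.
        IntUnaryOperator buggy = x -> x;
        // Candidate repairs that enforce the invariant in different ways;
        // the last one is an ineffective patch that changes nothing.
        List<IntUnaryOperator> patches = List.<IntUnaryOperator>of(
            x -> Math.max(x, 0), x -> Math.abs(x), x -> x);

        for (int p = 0; p < patches.size(); p++) {
            int violations = 0;
            for (int in : inputs) {
                if (patches.get(p).applyAsInt(buggy.applyAsInt(in)) < 0) {
                    violations++;  // patched execution still violates invariant
                }
            }
            System.out.println("patch " + p + ": " + violations + " violations");
        }
    }
}
```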


European Conference on Object-Oriented Programming | 2005

Eclat: automatic generation and classification of test inputs

Carlos Pacheco; Michael D. Ernst

This paper describes a technique that selects, from a large set of test inputs, a small subset likely to reveal faults in the software under test. The technique takes a program or software component, plus a set of correct executions (say, from observations of the software running properly, or from an existing test suite that a user wishes to enhance). The technique first infers an operational model of the software's operation. Then, inputs whose operational pattern of execution differs from the model in specific ways are suggestive of faults. These inputs are further reduced by selecting only one input per operational pattern. The result is a small portion of the original inputs, deemed by the technique as most likely to reveal faults. Thus, the technique can also be seen as an error-detection technique. The paper describes two additional techniques that complement test input selection. One is a technique for automatically producing an oracle (a set of assertions) for a test input from the operational model, thus transforming the test input into a test case. The other is a classification-guided test input generation technique that also makes use of operational models and patterns. When generating inputs, it filters out code sequences that are unlikely to contribute to legal inputs, improving the efficiency of its search for fault-revealing inputs. We have implemented these techniques in the Eclat tool, which generates unit tests for Java classes. Eclat's input is a set of classes to test and an example program execution (say, a passing test suite). Eclat's output is a set of JUnit test cases, each containing a potentially fault-revealing input and a set of assertions at least one of which fails. In our experiments, Eclat successfully generated inputs that exposed fault-revealing behavior; we have used Eclat to reveal real errors in programs. The inputs it selects as fault-revealing are an order of magnitude more likely to reveal a fault than the generated inputs as a whole.
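
The following minimal sketch illustrates the selection idea only: inputs whose behavior deviates from the (here, assumed) operational model are kept, one per deviation pattern. The method under test and the model are invented for illustration; Eclat infers a full operational model from correct executions:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy Eclat-style selection: conforming inputs are discarded, and only one
// input is kept per distinct way of deviating from the inferred model.
public class EclatSelectionSketch {
    // Method under test: misbehaves (returns negative) for inputs > 100.
    static int underTest(int x) { return x > 100 ? -x : x; }

    public static void main(String[] args) {
        // Model assumed to be inferred from correct runs: result >= 0.
        List<Integer> candidates = List.of(3, 150, 7, 200, 42);
        Set<String> patternsSeen = new HashSet<>();

        for (int input : candidates) {
            int result = underTest(input);
            if (result >= 0) continue;          // conforms to the model: skip
            String pattern = "result<0";        // how the run deviated
            if (patternsSeen.add(pattern)) {    // keep one input per pattern
                System.out.println("selected fault-revealing input: " + input);
            }
        }
    }
}
```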


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2007

Randoop: feedback-directed random testing for Java

Carlos Pacheco; Michael D. Ernst

Randoop for Java generates unit tests for Java code using feedback-directed random test generation. Below we describe Randoop's input, output, and test generation algorithm. We also give an overview of Randoop's annotation-based interface for specifying configuration parameters that affect Randoop's behavior and output.
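
For a sense of what such output looks like, here is a hand-written JUnit test in the general shape of a Randoop-generated test (illustrative only; actual Randoop output and naming vary by version and configuration):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.TreeSet;
import org.junit.Test;

// Illustration of the general shape of a Randoop-style test: a sequence of
// constructor and method calls on the class under test, followed by
// assertions capturing the behavior observed at generation time.
public class RandoopStyleTest {
    @Test
    public void test001() {
        TreeSet<Integer> set = new TreeSet<>();
        boolean b0 = set.add(10);
        boolean b1 = set.add(10);    // duplicate insertion
        assertTrue(b0);
        assertFalse(b1);             // TreeSet rejects duplicates
        assertEquals(1, set.size());
    }
}
```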


International Symposium on Software Testing and Analysis | 2008

Finding errors in .NET with feedback-directed random testing

Carlos Pacheco; Shuvendu K. Lahiri; Thomas Ball

We present a case study in which a team of test engineers at Microsoft applied a feedback-directed random testing tool to a critical component of the .NET architecture. Due to its complexity and high reliability requirements, the component had already been tested by 40 test engineers over five years, using manual testing and many automated testing techniques. Nevertheless, the feedback-directed random testing tool found errors in the component that eluded previous testing, and did so two orders of magnitude faster than a typical test engineer (including time spent inspecting the results of the tool). The tool also led the test team to discover errors in other testing and analysis tools, and deficiencies in previous best-practice guidelines for manual testing. Finally, we identify challenges that random testing faces for continued effectiveness, including an observed decrease in the technique's error detection rate over time.


Automated Software Engineering | 2006

An Empirical Comparison of Automated Generation and Classification Techniques for Object-Oriented Unit Testing

Marcelo d'Amorim; Carlos Pacheco; Tao Xie; Darko Marinov; Michael D. Ernst

Testing involves two major activities: generating test inputs and determining whether they reveal faults. Automated test generation techniques include random generation and symbolic execution. Automated test classification techniques include ones based on uncaught exceptions and violations of operational models inferred from manually provided tests. Previous research on unit testing for object-oriented programs developed three pairs of these techniques: model-based random testing, exception-based random testing, and exception-based symbolic testing. We develop a novel pair, model-based symbolic testing. We also empirically compare all four pairs of these generation and classification techniques. The results show that the pairs are complementary (i.e., they reveal faults differently), each with its own strengths and weaknesses.
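
A toy sketch of the two classification strategies being paired: exception-based classification flags an execution that throws, while model-based classification flags one that violates an inferred operational model (here replaced by a single invented invariant):

```java
// Contrast of the two classification strategies on a toy method under test.
// The method, the inputs, and the "model" (result >= 0) are invented.
public class ClassificationSketch {
    static int underTest(int x) {
        if (x == 0) throw new IllegalArgumentException("x must be nonzero");
        return 100 / x;  // negative x yields a model-violating negative result
    }

    public static void main(String[] args) {
        for (int input : new int[] {4, 0, -5}) {
            try {
                int result = underTest(input);
                if (result < 0) {
                    System.out.println(input + ": model-violating (result < 0)");
                } else {
                    System.out.println(input + ": passes both classifiers");
                }
            } catch (RuntimeException e) {
                System.out.println(input + ": exception-based failure: " + e);
            }
        }
    }
}
```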


Archive | 2006

Finding the needles in the haystack: Generating legal test inputs for object-oriented programs

Shay Artzi; Michael D. Ernst; Adam Kiezun; Carlos Pacheco; Jeff H. Perkins


Archive | 2009

Directed random testing

Daniel Jackson; Carlos Pacheco


Symposium on Operating Systems Principles | 2009

Self-defending software: Automatically patching security vulnerabilities

Jeff H. Perkins; Sunghun Kim; Samuel Larsen; Saman P. Amarasinghe; Jonathan Bachrach; Michael Carbin; Carlos Pacheco; Frank Sherwood; Stelios Sidiroglou; Greg Sullivan; Weng-Fai Wong; Yoav Zibin; Michael D. Ernst; Martin C. Rinard

Collaboration


Dive into Carlos Pacheco's collaborations.

Top Co-Authors

Jeff H. Perkins, Massachusetts Institute of Technology
Greg Sullivan, Massachusetts Institute of Technology
Jonathan Bachrach, Massachusetts Institute of Technology
Martin C. Rinard, Massachusetts Institute of Technology
Michael Carbin, Massachusetts Institute of Technology
Saman P. Amarasinghe, Massachusetts Institute of Technology