Publication


Featured research published by David Saff.


International Symposium on Software Reliability Engineering | 2003

Reducing wasted development time via continuous testing

David Saff; Michael D. Ernst

Testing is often performed frequently during development to ensure software reliability by catching regression errors quickly. However, stopping frequently to test also wastes time by holding up development progress. User studies on real development projects indicate that these two sources of wasted time account for 10-15% of development time. These measurements use a novel technique for computing the wasted extra development time incurred by a delay in discovering a regression error. We present a model of developer behavior that infers developer beliefs from developer behavior, and that predicts developer behavior in new environments - in particular, when changing testing methodologies or tools to reduce wasted time. Changing test ordering or reporting reduces wasted time by 4-41% in our case study. Changing the frequency with which tests are run can reduce wasted time by 31-82% (but developers cannot know the ideal frequency except after the fact). We introduce and evaluate a new technique, continuous testing, that uses spare CPU resources to continuously run tests in the background, providing rapid feedback about test failures as source code is edited. Continuous testing reduced wasted time by 92-98%, a substantial improvement over the other approaches. We have integrated continuous testing into two development environments, and are beginning user studies to evaluate its efficacy. We believe it has the potential to reduce the cost and improve the efficacy of testing and, as a result, to improve the reliability of delivered systems.
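To make the idea concrete, here is a minimal sketch (not the authors' tool) of a continuous-testing loop in Java: it watches a source directory and reruns the test suite in the background whenever a file changes. The directory name and build command are assumptions for illustration.

    import java.nio.file.*;

    public class ContinuousTester {
        public static void main(String[] args) throws Exception {
            Path src = Paths.get("src");  // assumed source directory
            WatchService watcher = FileSystems.getDefault().newWatchService();
            // Note: this watches only the top-level directory, not subtrees.
            src.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
            while (true) {
                WatchKey key = watcher.take();  // block until an edit occurs
                key.pollEvents();               // drain the pending events
                // Rerun the suite; a real continuous tester would do this
                // asynchronously with spare cycles and prioritize the tests
                // most likely to fail.
                new ProcessBuilder("mvn", "test")  // assumed build command
                        .inheritIO().start().waitFor();
                key.reset();
            }
        }
    }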


Electronic Notes in Theoretical Computer Science | 2004

Continuous Testing in Eclipse

David Saff; Michael D. Ernst

Continuous testing uses excess cycles on a developer's workstation to continuously run regression tests in the background, providing rapid feedback about test failures as source code is edited. It is intended to reduce the time and energy required to keep code well-tested, and to prevent regression errors from persisting uncaught for long periods of time. This paper reports on the design and implementation of a continuous testing feature for Java development in the Eclipse development environment. Our challenge was to generate and display a new kind of feedback (asynchronous notification of test failures) in a way that effectively reuses Eclipse's extensible architecture and fits the expectations of Eclipse users without interfering with their current work habits. We present the design principles we pursued in solving this challenge: present and future reuse, consistent experience, minimal distraction, and testability. These principles, and how our plug-in and Eclipse succeeded and failed in accomplishing them, should be of interest to other Eclipse extenders looking to implement new kinds of developer feedback. The continuous testing plug-in is publicly available at http://pag.csail.mit.edu/~saff/continuoustesting.html.
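As a rough plain-Java analogue of the asynchronous-feedback principle (the actual plug-in builds on Eclipse's own extension points; the names below are invented), the structure looks like this: tests run on a background thread and failures reach the UI through a listener, so editing is never blocked.

    import java.util.concurrent.*;

    interface FailureListener {
        void testFailed(String testName, Throwable cause);
    }

    class BackgroundTestRunner {
        private final ExecutorService pool = Executors.newSingleThreadExecutor();

        // Called after each edit: reruns the given tests off the UI thread
        // and pushes failures to the listener asynchronously.
        void runAll(Iterable<Runnable> tests, FailureListener listener) {
            pool.submit(() -> {
                for (Runnable test : tests) {
                    try {
                        test.run();
                    } catch (Throwable t) {
                        listener.testFailed(test.toString(), t);
                    }
                }
            });
        }
    }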


Workshop on Program Analysis for Software Tools and Engineering | 2004

Mock object creation for test factoring

David Saff; Michael D. Ernst

Test factoring creates fast, focused unit tests from slow system-wide tests; each new unit test exercises only a subset of the functionality exercised by the system tests. Augmenting a test suite with factored unit tests, and prioritizing the tests, should catch errors earlier in a test run. One way to factor a test is to introduce mock objects. If a test exercises a component A, which is designed to issue queries against or mutate another component B, the implementation of B can be replaced by a mock. The mock has two purposes: it checks that A's calls to B are as expected, and it simulates B's behavior in response. Given a system test for A and B, and a record of A's and B's behavior when the system test is run, we would like to automatically generate unit tests for A in which B is mocked. The factored tests can isolate bugs in A from bugs in B and, if B is slow or expensive, improve test performance or cost. This paper motivates test factoring with an illustrative example, proposes a simple procedure for automatically generating mock objects for factored tests, and gives examples of how the procedure can be extended to produce more robust factored tests.
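A hand-written sketch of such a mock may help: B's implementation is replaced by an object that checks A's calls against a recorded trace and replays B's recorded answers. The interface and trace format below are invented for illustration; the paper's procedure generates this kind of object automatically.

    import java.util.*;

    interface B {                        // the component being mocked
        int query(String key);
    }

    class MockB implements B {
        private final Iterator<String> expectedCalls;
        private final Iterator<Integer> recordedAnswers;

        MockB(List<String> calls, List<Integer> answers) {
            this.expectedCalls = calls.iterator();
            this.recordedAnswers = answers.iterator();
        }

        @Override
        public int query(String key) {
            // Purpose 1: check that A's call matches the recorded trace.
            if (!expectedCalls.hasNext() || !expectedCalls.next().equals(key)) {
                throw new AssertionError("unexpected call: query(" + key + ")");
            }
            // Purpose 2: simulate B's behavior by replaying its recorded answer.
            return recordedAnswers.next();
        }
    }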


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2007

Theory-infected: or how I learned to stop worrying and love universal quantification

David Saff

Writing developer tests as software is built can provide peace of mind. As the software grows, running the tests can prove that everything still works as the developer envisioned it. But what about the behavior the developer failed to envision? Although verifying a few well-picked scenarios is often enough, experienced developers know bugs can often lurk even in well-tested code, when correct but untested inputs provoke obviously wrong responses. This leads to worry. We suggest writing Theories alongside developer tests, to specify desired universal behaviors. We will demonstrate how writing theories affects test-driven development, how new features in JUnit can verify theories against hand-picked inputs, and how a new tool, Theory Explorer, can search for new inputs, leading to a new, less worrisome approach to development.
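For readers unfamiliar with the JUnit feature being referred to, here is a small theory written against JUnit 4's experimental theories API; the property itself (Math.abs is non-negative) is an invented example, not from the talk.

    import static org.junit.Assert.assertTrue;
    import static org.junit.Assume.assumeTrue;
    import org.junit.experimental.theories.*;
    import org.junit.runner.RunWith;

    @RunWith(Theories.class)
    public class AbsTheoryTest {
        // Hand-picked inputs; the runner tries the theory on each of them.
        @DataPoint public static int ZERO = 0;
        @DataPoint public static int POSITIVE = 42;
        @DataPoint public static int NEGATIVE = -7;
        @DataPoint public static int MIN = Integer.MIN_VALUE;

        // A universal statement: for all ints x (except MIN_VALUE, which
        // has no positive counterpart), Math.abs(x) >= 0.
        @Theory
        public void absIsNonNegative(int x) {
            assumeTrue(x != Integer.MIN_VALUE);  // filter out invalid inputs
            assertTrue(Math.abs(x) >= 0);
        }
    }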


International Conference on Software Engineering | 2005

Continuous testing in Eclipse

David Saff; Michael D. Ernst

Continuous testing uses excess cycles on a developer's workstation to continuously run regression tests in the background, providing rapid feedback about test failures as code is edited. It reduces the time and energy required to keep code well-tested, and it prevents regression errors from persisting uncaught for long periods of time.


International Conference on Software Engineering | 2005

Test factoring: focusing test suites for the task at hand

David Saff; Michael D. Ernst

Frequent execution of a test suite during software maintenance can catch regression errors early, indicate whether progress is being made, and improve productivity. However, if the test suite takes a long time to produce feedback, the developer is slowed down, and the benefit of frequent testing is reduced. After a program is edited, ideally, only changed code would be tested. Any time spent executing previously tested, unchanged parts of the code is wasted. For a large test suite containing many small unit tests, test selection and prioritization can be effective. Test selection runs only those tests that are possibly affected by the most recent change, and test prioritization can run first the tests that are most likely to reveal a recently-introduced error.
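As a sketch of how selection and prioritization might be combined (assuming a precomputed map from each test to the files it covers and a record of recent failures; the data structures are illustrative, not the paper's):

    import java.util.*;
    import java.util.stream.*;

    class TestPicker {
        // Selection: keep only tests whose covered files intersect the edit.
        // Prioritization: among those, run recently failing tests first.
        static List<String> pick(Map<String, Set<String>> coverage,
                                 Set<String> changedFiles,
                                 Set<String> recentlyFailed) {
            return coverage.entrySet().stream()
                    .filter(e -> !Collections.disjoint(e.getValue(), changedFiles))
                    .map(Map.Entry::getKey)
                    .sorted(Comparator.comparing(
                            (String t) -> recentlyFailed.contains(t) ? 0 : 1))
                    .collect(Collectors.toList());
        }
    }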


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2007

From developer's head to developer tests: characterization, theories, and preventing one more bug

David Saff

Unit testing frameworks like JUnit are a popular and effective way to prevent developer bugs. We are investigating two ways of building on these frameworks to prevent more bugs with less effort. First, theories are developer-written statements of correct behavior over a large set of inputs, which can be automatically verified. Second, characterization tools summarize observations over a large number of directed executions, which can be checked by developers, and added to the test suite if they specify intended behavior. We outline a toolset that gives developers the freedom to use either or both of these techniques, and frame further research into their usefulness.
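A minimal characterization-style test, in the spirit described here: an observed behavior on a directed input is recorded, reviewed by the developer, and then asserted as part of the suite. The function under test and its recorded output are invented for illustration.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CharacterizationTest {
        // Recorded observation: slugify("Hello, World!") produced
        // "hello-world" when captured. Once a developer confirms this is
        // intended behavior, the observation joins the test suite.
        @Test
        public void slugifyMatchesRecordedBehavior() {
            assertEquals("hello-world", slugify("Hello, World!"));
        }

        static String slugify(String s) {
            return s.toLowerCase().replaceAll("[^a-z0-9]+", "-")
                    .replaceAll("(^-|-$)", "");
        }
    }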


International Symposium on Software Testing and Analysis | 2004

An experimental evaluation of continuous testing during development

David Saff; Michael D. Ernst


Automated Software Engineering | 2005

Automatic test factoring for Java

David Saff; Shay Artzi; Jeff H. Perkins; Michael D. Ernst


Workshop on Program Analysis for Software Tools and Engineering | 2004

Automatic mock object creation for test factoring

David Saff; Michael D. Ernst

Collaboration


Dive into David Saff's collaborations.

Top Co-Authors

Jeff H. Perkins

Massachusetts Institute of Technology
