Gregory M. Kapfhammer
Allegheny College
Publications
Featured research published by Gregory M. Kapfhammer.
International Symposium on Software Testing and Analysis | 2006
Kristen R. Walcott; Mary Lou Soffa; Gregory M. Kapfhammer; Robert S. Roos
Regression test prioritization is often performed in a time-constrained execution environment in which testing only occurs for a fixed time period. For example, many organizations rely upon nightly building and regression testing of their applications every time source code changes are committed to a version control repository. This paper presents a regression test prioritization technique that uses a genetic algorithm to reorder test suites in light of testing time constraints. Experimental results indicate that our prioritization approach frequently yields higher average percentage of faults detected (APFD) values for two case study applications when basic block coverage is used instead of method-level coverage. The experiments also reveal fundamental trade-offs in the performance of time-aware prioritization. This paper shows that our prioritization technique is appropriate for many regression testing environments and explains how the baseline approach can be extended to operate in additional time-constrained testing circumstances.
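The APFD metric mentioned above has a standard closed form. Below is a minimal sketch of its computation; the input names are illustrative, and it assumes every fault is detected by at least one test in the ordering.

```python
def apfd(ordering, faults_detected_by):
    """Average Percentage of Faults Detected for a test ordering.

    ordering: list of test identifiers, in execution order.
    faults_detected_by: dict mapping each test to the set of faults it reveals.
    Assumes every fault is revealed by at least one test in the ordering.
    """
    all_faults = set().union(*faults_detected_by.values())
    n, m = len(ordering), len(all_faults)
    # TF_i: 1-based position of the first test that reveals fault i.
    first_position = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in faults_detected_by.get(test, ()):
            first_position.setdefault(fault, pos)
    # APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n)
    return 1 - sum(first_position[f] for f in all_faults) / (n * m) + 1 / (2 * n)
```

A genetic algorithm, as in the paper, would use a fitness function of this flavor to score candidate reorderings while also respecting the testing time budget.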
Foundations of Software Engineering | 2003
Gregory M. Kapfhammer; Mary Lou Soffa
Although a software application always executes within a particular environment, current testing methods have largely ignored these environmental factors. Many applications execute in an environment that contains a database. In this paper, we propose a family of test adequacy criteria that can be used to assess the quality of test suites for database-driven applications. Our test adequacy criteria use dataflow information that is associated with the entities in a relational database. Furthermore, we develop a unique representation of a database-driven application that facilitates the enumeration of database interaction associations. These associations can reflect an application's definition and use of database entities at multiple levels of granularity. Applying a tool that calculates intraprocedural database interaction associations to two case study applications indicates that our adequacy criteria can be computed with acceptable time and space overheads.
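As a rough illustration only, not the paper's actual representation: the sketch below enumerates definition-use associations for database entities along a single straight-line execution path, assuming a toy model in which each statement records the entities (relations or attributes, giving the multiple granularity levels) that it defines and uses.

```python
def interaction_associations(statements):
    """Enumerate database interaction associations on one execution path.

    statements: ordered list of (stmt_id, defs, uses) triples, where defs and
    uses are sets of entity names at some granularity, e.g. 'emp' (relation)
    or 'emp.salary' (attribute). Returns (defining stmt, using stmt, entity)
    triples; a full dataflow analysis would consider all paths, not just one.
    """
    associations = []
    last_def = {}  # entity -> stmt_id of its most recent definition
    for stmt_id, defs, uses in statements:
        for entity in uses:
            if entity in last_def:
                associations.append((last_def[entity], stmt_id, entity))
        for entity in defs:
            last_def[entity] = stmt_id
    return associations

# Hypothetical example: an UPDATE defines emp.salary, a later SELECT uses it.
print(interaction_associations([
    (1, {"emp.salary"}, set()),
    (2, set(), {"emp.salary"}),
]))
```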
Automated Software Engineering | 2011
René Just; Franz Schweiggert; Gregory M. Kapfhammer
Mutation analysis is an effective, yet often time-consuming and difficult-to-use method for the evaluation of testing strategies. In response to these and other challenges, this paper presents MAJOR, a fault seeding and mutation analysis tool that is integrated into the Java Standard Edition compiler as a non-invasive enhancement for use in any Java-based development environment. MAJOR reduces the mutant generation time and enables efficient mutation analysis. It has already been successfully applied to large applications with up to 373,000 lines of code and 406,000 mutants. Moreover, MAJOR's domain-specific language for specifying and adapting mutation operators also makes it extensible. Due to its ease of use, efficiency, and extensibility, MAJOR is an ideal platform for the study and application of mutation analysis.
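MAJOR performs its fault seeding inside the Java compiler; as a language-agnostic illustration of the general idea, here is a minimal Python sketch that generates first-order mutants by replacing arithmetic operators in an abstract syntax tree. The operator table is illustrative and is not MAJOR's operator set.

```python
import ast

# Illustrative arithmetic-operator-replacement table (AOR-style).
REPLACEMENTS = {ast.Add: (ast.Sub, ast.Mult), ast.Sub: (ast.Add,), ast.Mult: (ast.Add,)}

def generate_mutants(source):
    """Yield the source of one first-order mutant per operator replacement."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and type(node.op) in REPLACEMENTS:
            original = node.op
            for replacement in REPLACEMENTS[type(original)]:
                node.op = replacement()     # seed a single fault
                yield ast.unparse(tree)     # emit the mutated program
            node.op = original              # restore before moving on

for mutant in generate_mutants("def f(a, b):\n    return a + b * 2\n"):
    print(mutant, end="\n---\n")
```

Generating mutants from the compiler's own AST, as MAJOR does, avoids repeatedly re-parsing and re-compiling the program, which is one source of its reduced mutant generation time.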
ACM Symposium on Applied Computing | 2009
Adam M. Smith; Gregory M. Kapfhammer
Software developers use testing to gain and maintain confidence in the correctness of a software system. Automated reduction and prioritization techniques attempt to decrease the time required to detect faults during test suite execution. This paper uses the Harrold-Gupta-Soffa, delayed greedy, traditional greedy, and 2-optimal greedy algorithms for both test suite reduction and prioritization. Even though reducing and reordering a test suite is primarily done to ensure that testing is cost-effective, these algorithms are normally configured to make greedy choices with coverage information alone. This paper extends these algorithms to greedily reduce and prioritize the tests by using both test cost (e.g., execution time) and the ratio of code coverage to test cost. An empirical study with eight real-world case study applications shows that the ratio-based greedy choice metric helps a test suite reduction method identify a smaller and faster test suite. The results also suggest that incorporating test cost during prioritization yields an average increase of 17% and a maximum improvement of 141% in a time-sensitive evaluation metric called coverage effectiveness.
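A minimal sketch of greedy reduction with the ratio choice metric described above; the test names, costs, and covered requirements are assumed inputs, and test costs are assumed to be nonzero.

```python
def greedy_reduce(tests, requirements):
    """Greedy test suite reduction using a coverage-to-cost ratio.

    tests: dict mapping test name -> (cost, set of covered requirements).
    requirements: iterable of all test requirements to cover.
    At each step, pick the test covering the most new requirements per unit cost.
    """
    uncovered = set(requirements)
    reduced = []
    while uncovered:
        best = max(
            (t for t in tests if t not in reduced),
            key=lambda t: len(tests[t][1] & uncovered) / tests[t][0],
            default=None,
        )
        if best is None or not tests[best][1] & uncovered:
            break  # remaining requirements cannot be covered by any test
        reduced.append(best)
        uncovered -= tests[best][1]
    return reduced

# Hypothetical example: a slow test is passed over for two cheap ones.
suite = {"t1": (10.0, {"r1", "r2"}), "t2": (1.0, {"r1"}), "t3": (1.0, {"r2"})}
print(greedy_reduce(suite, {"r1", "r2"}))  # -> ['t2', 't3']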
International Conference on Software Testing, Verification and Validation | 2012
René Just; Gregory M. Kapfhammer; Franz Schweiggert
Mutation analysis is an unbiased and powerful method for assessing input values and test oracles. However, in comparison to other techniques, such as those that rely on code coverage, it is a computationally expensive and time-consuming method, especially for large software systems. This high cost is due, in part, to the fact that many mutation operators generate redundant mutants that may both misrepresent the mutation score and increase the runtime of the mutation analysis process. After showing how the conditional operator replacement (COR) mutation operator can be defined in a redundancy-free manner, this paper uses four real-world programs, ranging in size from 3,000 to nearly 40,000 lines of code, to show the prevalence of redundant mutants. Focusing on the COR and relational operator replacement (ROR) mutation operators, which create 41% of all mutants in the chosen programs, the case study reveals that the removal of redundant mutants reduces the runtime of mutation analysis by up to 34%. Additional empirical results show that redundant mutants can lead to a mutation score that is misleadingly overestimated by as much as 10%. Overall, this paper convincingly demonstrates that it is possible to improve the effectiveness and efficiency of a mutation analysis system by identifying and removing redundant mutants.
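The redundancy among COR mutants can be checked exhaustively on a truth table. The following sketch does so for the expression `a && b`: at this expression level, a mutant is redundant if some other mutant is killed by a strict subset of its killing inputs, since any test that kills the other mutant also kills it.

```python
from itertools import product

# Original expression and a selection of its COR mutants, as truth functions.
ORIGINAL = lambda a, b: a and b
MUTANTS = {
    "true": lambda a, b: True,
    "false": lambda a, b: False,
    "a": lambda a, b: a,
    "b": lambda a, b: b,
    "a || b": lambda a, b: a or b,
    "a == b": lambda a, b: a == b,
    "a != b": lambda a, b: a != b,
}

inputs = list(product([False, True], repeat=2))
# A mutant is killed by every input on which it differs from the original.
killing = {name: {i for i in inputs if m(*i) != ORIGINAL(*i)}
           for name, m in MUTANTS.items()}
for name, kills in killing.items():
    # Redundant if another mutant's killing set is a proper subset of this one.
    redundant = any(other != name and ks < kills for other, ks in killing.items())
    print(f"{name:7} killed by {sorted(kills)}  redundant: {redundant}")
```

Running this shows that `a`, `b`, `false`, and `a == b` each have a minimal killing set, while mutants such as `true`, `a || b`, and `a != b` are subsumed, which matches the intuition behind a redundancy-free COR definition.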
Automated Software Engineering | 2007
Sara Alspaugh; Kristen R. Walcott; Michael Belanich; Gregory M. Kapfhammer; Mary Lou Soffa
Regression testing is frequently performed in a time-constrained environment. This paper explains how 0/1 knapsack solvers (e.g., greedy, dynamic programming, and the core algorithm) can identify a test suite reordering that rapidly covers the test requirements and always terminates within a specified testing time limit. We conducted experiments that reveal fundamental trade-offs in (i) the time and space costs associated with creating a reordered test suite and (ii) the quality of the resulting prioritization. We find that knapsack-based prioritizers that ignore the overlap in test case coverage incur a low time overhead and a moderate to high space overhead while creating prioritizations that exhibit a minor to modest decrease in effectiveness. We also find that the most sophisticated 0/1 knapsack solvers do not always identify the most effective prioritization, suggesting that overlap-aware prioritizers with a higher time overhead are useful in certain testing contexts.
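A minimal sketch of the dynamic programming 0/1 knapsack formulation for this problem, assuming integer test execution times and a per-test value such as the number of requirements a test covers; coverage overlap between tests is ignored, which is exactly the simplification discussed above.

```python
def knapsack_select(tests, budget):
    """Select tests maximizing total value within a testing time budget.

    tests: list of (name, time, value) triples with integer times.
    budget: total testing time limit (integer).
    Classic 0/1 knapsack dynamic program; overlap in coverage is ignored.
    """
    n = len(tests)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, time, value) in enumerate(tests, start=1):
        for t in range(budget + 1):
            best[i][t] = best[i - 1][t]  # option 1: skip test i
            if time <= t:                # option 2: run test i if it fits
                best[i][t] = max(best[i][t], best[i - 1][t - time] + value)
    # Recover the chosen tests by walking the table backwards.
    chosen, t = [], budget
    for i in range(n, 0, -1):
        if best[i][t] != best[i - 1][t]:
            name, time, _ = tests[i - 1]
            chosen.append(name)
            t -= time
    return list(reversed(chosen))

# Hypothetical example with a 10-second budget.
print(knapsack_select([("t1", 6, 9), ("t2", 5, 7), ("t3", 5, 7)], budget=10))
```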
International Conference on Software Testing, Verification and Validation | 2013
Gregory M. Kapfhammer; Phil McMinn; Chris J. Wright
Much attention has been devoted to testing applications that interact with database management systems and to testing database management systems themselves. However, very little work has addressed arguably the most important artefact involving an application supported by a relational database: the underlying schema. This paper introduces a search-based technique for generating database table data with the intention of exercising the integrity constraints placed on table columns. Like any stage of application development, the development of a schema is a process open to flaws. Because the schema is a cornerstone of an application, its defects need to be found early in order to prevent knock-on effects in other parts of a project and the spiralling bug-fixing costs they may incur. Examples of such flaws include incomplete primary keys, incorrect foreign keys, and omissions of NOT NULL declarations. Using mutation analysis, this paper presents an empirical study that evaluates the effectiveness of our proposed technique and compares it against DBMonster, a popular tool for generating table data. With competitive or faster data generation times, our method outperforms DBMonster in terms of both constraint coverage and mutation score.
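As a toy sketch of the search-based idea, not the paper's actual tool: hill climbing on a candidate column value, guided by a distance-style fitness that reaches zero when the value satisfies a hypothetical CHECK constraint. The constraint, starting point, and step sizes are all illustrative; a full generator would target both satisfying and violating outcomes for every constraint in the schema.

```python
import random

def fitness(value, lower=18, upper=65):
    """Distance to satisfying a hypothetical CHECK (age BETWEEN 18 AND 65)."""
    if lower <= value <= upper:
        return 0
    return lower - value if value < lower else value - upper

def hill_climb(start, steps=1000):
    """Randomized hill climbing toward a constraint-satisfying value."""
    current = start
    for _ in range(steps):
        if fitness(current) == 0:
            return current
        neighbour = current + random.choice([-10, -1, 1, 10])
        if fitness(neighbour) <= fitness(current):
            current = neighbour  # accept any non-worsening move
    return current

print(hill_climb(start=-500))  # climbs into the [18, 65] range
```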
Automated Software Engineering | 2007
Adam M. Smith; Joshua Geiger; Gregory M. Kapfhammer; Mary Lou Soffa
This paper presents a tool that (i) constructs tree-based models of a program's behavior during testing and (ii) employs these trees while reordering and reducing a test suite. Using either a dynamic call tree or a calling context tree, the test reduction component identifies a subset of the original tests that covers the same call tree paths. The prioritization technique reorders a test suite so that it covers the call tree paths more rapidly than the initial test ordering. In support of program and test suite understanding, the tool also visualizes the call trees and the coverage relationships. For a chosen case study application, the experimental results show that call tree construction increases testing time by only 13%. In comparison to the original test suite, the experiments show that (i) a prioritized suite achieves coverage much faster and (ii) a reduced test suite contains 45% fewer tests and consumes 82% less time.
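A minimal sketch of reduction over call tree paths, assuming each test has already been mapped to the set of paths it covers; the paper's tool derives these sets from dynamic call trees or calling context trees, while here they are simply given as input, and the subset is chosen with a plain greedy set cover.

```python
def reduce_by_call_paths(test_paths):
    """Pick a subset of tests that covers the same call tree paths.

    test_paths: dict mapping test name -> set of call tree paths it covers.
    Greedy set cover: repeatedly keep the test adding the most new paths.
    """
    uncovered = set().union(*test_paths.values())
    kept = []
    while uncovered:
        test = max(test_paths, key=lambda t: len(test_paths[t] & uncovered))
        kept.append(test)
        uncovered -= test_paths[test]
    return kept

# Hypothetical example: t1 subsumes t2, so only t1 and t3 are kept.
paths = {"t1": {"main>a", "main>a>b"}, "t2": {"main>a"}, "t3": {"main>c"}}
print(reduce_by_call_paths(paths))
```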
International Symposium on Software Reliability Engineering | 2012
René Just; Gregory M. Kapfhammer; Franz Schweiggert
Mutation analysis is a powerful and unbiased technique for assessing the quality of input values and test oracles. However, its application domain is still limited because it is a time-consuming and computationally expensive method, especially when used with large and complex software systems. Addressing these challenges, this paper makes several contributions that significantly improve the efficiency of mutation analysis. First, it investigates the decrease in generated mutants achieved by applying a reduced, yet sufficient, set of mutants when replacing conditional (COR) and relational (ROR) operators. The analysis of ten real-world applications, with 400,000 lines of code and more than 550,000 generated mutants in total, reveals a reduction in the number of mutants created of up to 37% and more than 25% on average. Yet, since the isolated use of non-redundant mutation operators does not ensure that mutation analysis is efficient and scalable, this paper also presents and experimentally evaluates an optimized workflow that exploits the redundancies and runtime differences of test cases to reorder and split the corresponding test suite. Using the same ten open-source applications, an empirical study convincingly demonstrates that combining non-redundant operators with prioritization that leverages information about the runtime and mutation coverage of tests reduces the total cost of mutation analysis by as much as a further 65%.
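A minimal sketch of the prioritization idea under simplifying assumptions: tests are ordered by mutant coverage per unit runtime, and each mutant is then executed only against the tests that cover it, stopping at the first kill. The `kills` oracle stands in for actually compiling and running each mutant.

```python
def prioritize(tests):
    """tests: dict test -> (runtime, set of covered mutants).
    Order tests so that cheap, high-coverage tests run first."""
    return sorted(tests, key=lambda t: len(tests[t][1]) / tests[t][0], reverse=True)

def analyse(tests, mutants, kills):
    """Run mutation analysis with coverage-based skipping and early exit.

    kills: dict mapping (test, mutant) -> bool, standing in for real runs.
    Returns the mutation score and the number of test executions performed.
    """
    order = prioritize(tests)
    killed, executions = set(), 0
    for mutant in mutants:
        for test in order:
            if mutant not in tests[test][1]:
                continue  # the test never reaches this mutant: skip it
            executions += 1
            if kills.get((test, mutant)):
                killed.add(mutant)
                break     # first kill suffices; remaining tests are skipped
    return len(killed) / len(mutants), executions
```

Ordering by coverage-per-runtime means the tests most likely to kill a mutant quickly are tried first, which is the source of the cost reduction the study reports.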
Reliability and Maintainability Symposium | 2002
Jennifer M. Haddox; Gregory M. Kapfhammer
In this paper we present an approach to mitigating software risk by understanding and testing third-party, or commercial-off-the-shelf (COTS), software components. Our approach, based on the notion of software wrapping, gives system integrators an improved understanding of how a COTS component behaves within a particular system. Our approach to wrapping allows the data flowing into and out of a component at the public interface level to be intercepted. Using our wrapping approach, developers can apply testing techniques such as fault injection, data collection, and assertion checking to components whose source code is unavailable. We have created a methodology for using software wrapping in conjunction with data collection, fault injection, and assertion checking to test the interaction between a component and the rest of the application. The methodology seeks to identify locations in the program where the system's interaction with COTS components could be problematic. Furthermore, we have developed a prototype that implements our methodology for Java applications. The goal of this process is to allow developers to identify scenarios where the interaction between COTS software and the system could result in system failure. We believe that the technology we have developed is an important step towards easing the process of using COTS components in the building and maintenance of software systems.
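A minimal sketch of interface-level wrapping; the paper's prototype targets Java components, so this Python version and all of its names are purely illustrative. The wrapper intercepts each call to collect data, check assertions, and optionally inject faults, without needing the component's source code.

```python
import functools
import math

def wrap(component_method, precondition=None, postcondition=None, inject=None):
    """Wrap a component method whose source is unavailable."""
    @functools.wraps(component_method)
    def wrapper(*args, **kwargs):
        print(f"call: {component_method.__name__}{args}")       # data collection
        if precondition and not precondition(*args, **kwargs):
            raise AssertionError("precondition violated")        # assertion checking
        if inject:
            args = inject(args)                                  # fault injection
        result = component_method(*args, **kwargs)
        print(f"return: {result}")                               # data collection
        if postcondition and not postcondition(result):
            raise AssertionError("postcondition violated")       # assertion checking
        return result
    return wrapper

# Hypothetical example: guard a third-party square-root routine.
guarded_sqrt = wrap(math.sqrt,
                    precondition=lambda x: x >= 0,
                    postcondition=lambda r: r >= 0)
print(guarded_sqrt(9.0))
```

Routing every public-interface call through such a wrapper is what lets the methodology observe, and deliberately perturb, the data flowing between the COTS component and the rest of the system.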