Publications


Featured research published by Amie L. Souter.


International Symposium on Software Testing and Analysis | 1998

All-du-path coverage for parallel programs

Cheer-Sun D. Yang; Amie L. Souter; Lori L. Pollock

One significant challenge in bringing the power of parallel machines to application programmers is providing them with a suite of software tools similar to the tools that sequential programmers currently utilize. In particular, automatic or semi-automatic testing tools for parallel programs are lacking. This paper describes our work in automatic generation of all-du-paths for testing parallel programs. Our goal is to demonstrate that, with some extension, sequential test data adequacy criteria are still applicable to parallel program testing. The concepts and algorithms in this paper have been incorporated as the foundation of our DELaware PArallel Software Testing Aid, della pasta.
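As an informal illustration of the kind of def-use coverage the paper builds on (a toy sequential sketch, not the della pasta implementation; the paper extends such analysis to parallel programs whose paths may also cross synchronization edges between threads), the hypothetical Python fragment below enumerates def-use pairs over a small control-flow graph:

```python
# Hypothetical sketch: def-use pairs over a toy control-flow graph.
# Node names, defs, and uses are invented for illustration only.
succ = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}
defs = {1: {"x"}, 2: set(), 3: {"x"}, 4: set(), 5: set()}
uses = {1: set(), 2: {"x"}, 3: set(), 4: {"x"}, 5: {"x"}}

def du_pairs():
    """Find (def-node, use-node, var) pairs joined by a def-clear path."""
    pairs = set()
    for d in succ:
        for var in defs[d]:
            # Depth-first search along paths with no intervening redefinition.
            stack, seen = [d], set()
            while stack:
                n = stack.pop()
                for s in succ[n]:
                    if var in uses[s]:
                        pairs.add((d, s, var))
                    if var not in defs[s] and s not in seen:
                        seen.add(s)
                        stack.append(s)
    return pairs

print(sorted(du_pairs()))  # (1, 2, 'x'), (1, 4, 'x'), (1, 5, 'x'), (3, 5, 'x')
```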


Automated Software Engineering | 2004

A scalable approach to user-session based testing of Web applications through concept analysis

Sreedevi Sampath; Valentin Mihaylov; Amie L. Souter; Lori L. Pollock

The continuous use of the Web for daily operations by businesses, consumers, and government has created a great demand for reliable Web applications. One promising approach to testing the functionality of Web applications leverages user-session data collected by Web servers. This approach automatically generates test cases based on real user profiles. The key contribution of this work is the application of concept analysis for clustering user sessions for test suite reduction. Existing incremental concept analysis algorithms can be exploited to avoid collecting large user-session data sets and thus provide scalability. We have completely automated the process from user session collection and reduction through replay. Our incremental test suite update algorithm, coupled with our experimental study, indicates that concept analysis provides a promising means for incrementally updating reduced test suites in response to newly captured user sessions, with some loss in fault detection capability and practically no coverage loss.
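As a loose illustration of the reduction idea (a hypothetical sketch, not the paper's incremental concept analysis; session names and URLs are invented), the fragment below drops any user session whose set of requested URLs is subsumed by another session's:

```python
# Simplified, hypothetical sketch of user-session-based suite reduction.
# Each session is reduced to the set of URLs it requests; a session whose
# request set is a proper subset of another session's is dropped. The
# paper instead maintains a concept lattice incrementally, which scales
# to streams of newly captured sessions.
sessions = {
    "s1": {"/login", "/browse"},
    "s2": {"/login", "/browse", "/add-to-cart"},
    "s3": {"/login", "/search"},
    "s4": {"/login"},
}

def reduce_suite(sessions):
    """Keep only sessions not subsumed by another session's URL set."""
    kept = {}
    for name, urls in sessions.items():
        if not any(urls < other for o, other in sessions.items() if o != name):
            kept[name] = urls
    return kept

print(sorted(reduce_suite(sessions)))  # ['s2', 's3']
```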


International Conference on Software Maintenance | 2005

An empirical comparison of test suite reduction techniques for user-session-based testing of Web applications

Sara Sprenkle; Sreedevi Sampath; Emily Gibson; Lori L. Pollock; Amie L. Souter

Automated cost-effective test strategies are needed to provide reliable, secure, and usable Web applications. As a software maintainer updates an application, test cases must accurately reflect usage to expose faults that users are most likely to encounter. User-session-based testing is an automated approach to enhancing an initial test suite with real user data, enabling additional testing during maintenance as well as adding test data that represents usage as operational profiles evolve. Test suite reduction techniques are critical to the cost effectiveness of user-session-based testing because a key issue is the cost of collecting, analyzing, and replaying the large number of test cases generated from user-session data. We performed an empirical study comparing the test suite size, program coverage, fault detection capability, and costs of three requirements-based reduction techniques and three variations of concept analysis reduction applied to two Web applications. The statistical analysis of our results indicates that concept analysis-based reduction is a cost-effective alternative to requirements-based approaches.
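For contrast, one classic requirements-based style of reduction is a greedy additional-coverage heuristic. The sketch below is a hypothetical illustration of that general strategy, not necessarily one of the specific techniques compared in the study; the sessions and URLs are invented:

```python
# Hedged sketch of a greedy requirements-based reduction: repeatedly pick
# the session covering the most still-uncovered requirements (here, URLs).
def greedy_reduce(sessions):
    uncovered = set().union(*sessions.values())
    kept = []
    while uncovered:
        best = max(sessions, key=lambda s: len(sessions[s] & uncovered))
        if not sessions[best] & uncovered:
            break  # remaining requirements are not reachable by any session
        kept.append(best)
        uncovered -= sessions[best]
    return kept

suite = {
    "s1": {"/login", "/browse"},
    "s2": {"/login", "/add-to-cart"},
    "s3": {"/login"},
    "s4": {"/search"},
}
print(greedy_reduce(suite))  # ['s1', 's2', 's4']; s3 is redundant
```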


International Workshop on Dynamic Analysis | 2005

An exploration of statistical models for automated test case generation

Jessica Sant; Amie L. Souter; Lloyd G. Greenwald

In this paper, we develop methods that use logged user data to build models of a web application. Logged user data captures dynamic behavior of an application that can be useful for addressing the challenging problems of testing web applications. Our approach automatically builds statistical models of user sessions and automatically derives test cases from these models. We provide several alternative modeling approaches based on statistical machine learning methods. We investigate the effectiveness of the test suites generated from our methods by performing a preliminary study that evaluates the generated test cases. The results of this study demonstrate that our techniques are able to generate test cases that achieve high coverage and accurately model user behavior. This study provides insights into improving our methods and motivates a larger study with a more diverse set of applications and testing metrics.
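One plausible instance of such a statistical model (a hypothetical sketch; the URLs and helper names are invented, and the paper explores several alternative modeling approaches) is a first-order Markov chain over logged request sequences, sampled to generate synthetic test sessions:

```python
# Hypothetical sketch: learn request-to-request transitions from logged
# sessions, then sample new sessions as test cases.
import random
from collections import defaultdict

logged = [
    ["START", "/login", "/browse", "/add-to-cart", "END"],
    ["START", "/login", "/search", "/browse", "END"],
]

# Estimate transition counts P(next | current) from the logs.
counts = defaultdict(lambda: defaultdict(int))
for session in logged:
    for cur, nxt in zip(session, session[1:]):
        counts[cur][nxt] += 1

def generate(max_len=20):
    """Sample a synthetic user session from the learned transitions."""
    page, session = "START", []
    while page != "END" and len(session) < max_len:
        nxt = counts[page]
        if not nxt:
            break  # no outgoing transitions observed for this page
        page = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if page != "END":
            session.append(page)
    return session

print(generate())  # e.g. ['/login', '/search', '/browse']
```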


International Conference on Software Maintenance | 2004

Composing a framework to automate testing of operational Web-based software

Sreedevi Sampath; Valentin Mihaylov; Amie L. Souter; Lori L. Pollock

Low reliability in Web-based applications can result in detrimental effects for business, government, and consumers as they become increasingly dependent on the Internet for routine operations. A short time to market, large user community, demand for continuous availability, and frequent updates motivate automated, cost-effective testing strategies. To investigate the practical tradeoffs of different automated strategies for key components of the Web-based software testing process, we have designed a framework for Web-based software testing that focuses on scalability and evolving the test suite automatically as the application's operational profile changes. We have developed an initial prototype that not only demonstrates how existing tools can be used together but provides insight into the cost effectiveness of the overall approach. This paper describes the testing framework, discusses the issues in building and reusing tools in an integrated manner, and presents a case study that exemplifies the usability, costs, and scalability of the approach.


IEEE Transactions on Software Engineering | 2003

The construction of contextual def-use associations for object-oriented systems

Amie L. Souter; Lori L. Pollock

This paper describes a program representation and algorithms for realizing a novel structural testing methodology that not only focuses on addressing the complex features of object-oriented languages, but also incorporates the structure of object-oriented software into the approach. The testing methodology is based on the construction of contextual def-use associations, which provide context to each definition and use of an object. Testing based on contextual def-use associations can provide increased test coverage by identifying multiple unique contextual def-use associations for the same context-free association. Such a testing methodology promotes more thorough and focused testing of the manipulation of objects in object-oriented programs. This paper presents a technique for the construction of contextual def-use associations, as well as detailed examples illustrating their construction, an analysis of the cost of constructing contextual def-use associations with this approach, and a description of a prototype testing tool that shows how the theoretical contributions of this work can be useful for structural test coverage.
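To make the notion concrete (an illustrative sketch, not the paper's program representation; the class and method names are invented), a contextual def-use association can be pictured as a def-use pair tagged with the call chains under which the definition and use occur, so a single context-free pair may induce several distinct coverage requirements:

```python
# Illustrative sketch of a contextual def-use association.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextualDU:
    var: str            # field or object being defined/used
    def_site: str       # statement that defines it
    use_site: str       # statement that uses it
    def_context: tuple  # call chain reaching the definition
    use_context: tuple  # call chain reaching the use

# The same def/use statements reached through two different client call
# chains yield two associations, and hence two test requirements.
a = ContextualDU("account.balance", "Account.deposit:3", "Account.report:7",
                 ("ATM.run", "Account.deposit"), ("ATM.run", "Account.report"))
b = ContextualDU("account.balance", "Account.deposit:3", "Account.report:7",
                 ("Batch.process", "Account.deposit"), ("Batch.process", "Account.report"))
assert a != b  # identical context-free pair, different contexts
```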


International Symposium on Software Testing and Analysis | 2000

OMEN: A strategy for testing object-oriented software

Amie L. Souter; Lori L. Pollock

This paper presents a strategy for structural testing of object-oriented software systems with possibly unknown clients and unknown information about invoked methods. By exploiting the combined points-to and escape analysis developed for compiler optimization, our testing paradigm does not require a whole program representation to be in memory simultaneously for testing analysis. Potential effects from outside the component under test are easily identified and reported to the tester. As client and server methods become known, the graph representation of object relationships is easily extended, allowing the computation of test tuples to be performed in a demand-driven manner, without requiring unnecessary computation of test tuples based on predictions of potential clients.


International Conference on Software Maintenance | 2001

Incremental call graph reanalysis for object-oriented software maintenance

Amie L. Souter; Lori L. Pollock

A program's call graph is an essential underlying structure for performing the various interprocedural analyses used in software development tools for object-oriented software systems. For interactive software development tools and software maintenance activities, the call graph needs to remain fairly precise and be updated quickly in response to software changes. The paper presents incremental algorithms for updating a call graph that has been initially constructed using the Cartesian Product Algorithm, which computes a highly precise call graph in the presence of dynamically dispatched message sends. Templates are exploited to reduce unnecessary reanalysis as software component changes occur. The preliminary empirical results from our implementation within a Java environment are encouraging. Significant time savings were observed for the incremental algorithm in comparison with an exhaustive analysis, with no loss in precision.
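The incremental intuition, stripped of types and dynamic dispatch (a toy sketch only; the paper's algorithm operates on the precise call graphs produced by the Cartesian Product Algorithm), is to re-resolve only the changed method's edges and revisit its callers rather than rebuild the whole graph:

```python
# Rough sketch of incremental call graph maintenance. Method names and
# the shape of the graph are invented; dispatch and types are ignored.
call_edges = {
    "main": {"A.run"},
    "A.run": {"B.step", "Log.write"},
    "B.step": {"Log.write"},
    "Log.write": set(),
}
callers = {}
for caller, callees in call_edges.items():
    for callee in callees:
        callers.setdefault(callee, set()).add(caller)

def update(changed_method, new_callees):
    """Re-resolve edges of the changed method and return methods to revisit."""
    old = call_edges[changed_method]
    call_edges[changed_method] = set(new_callees)
    for gone in old - set(new_callees):
        callers[gone].discard(changed_method)
    for added in set(new_callees) - old:
        callers.setdefault(added, set()).add(changed_method)
    # Only the changed method and its direct callers are revisited here;
    # a real analysis would propagate further when resolved types change.
    return {changed_method} | callers.get(changed_method, set())

print(sorted(update("B.step", {"Log.write", "Cache.flush"})))  # ['A.run', 'B.step']
```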


Workshop on Program Analysis for Software Tools and Engineering | 1999

Inter-class def-use analysis with partial class representations

Amie L. Souter; Lori L. Pollock; Dixie Hisley

Object-oriented program design promotes the reuse of code not only through inheritance and polymorphism, but also through building server classes which can be used by many different client classes. Research on static analysis of object-oriented software has focused on addressing the new features of classes, inheritance, polymorphism, and dynamic binding. This paper demonstrates how exploiting the nature of object-oriented design principles can enable development of scalable static analyses. We present an algorithm for computing def-use information for a single class's manipulation of objects of other classes, which requires that only partial representations of server classes be constructed. This information is useful for data flow testing and debugging.


Tools and Algorithms for the Construction and Analysis of Systems | 2001

TATOO: Testing and Analysis Tool for Object-Oriented Software

Amie L. Souter; Tiffany Wong; Stacey A. Shindo; Lori L. Pollock

Testing is a critical component of the software development process and is required to ensure the reliability, robustness and usability of software. Tools that systematically aid in the testing process are crucial to the development of reliable software. This paper describes a code-based testing and analysis tool for object-oriented software. TATOO provides a systematic approach to testing tailored towards object behavior, and particularly for class integration testing. The underlying program analysis subsystem exploits combined points-to and escape analysis developed for compiler optimization to address the software testing issues.

Collaboration


Dive into Amie L. Souter's collaborations.

Top Co-Authors

Lori Pollock

University of Delaware
