Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Emily Gibson is active.

Publication


Featured research published by Emily Gibson.


Automated Software Engineering | 2005

Automated replay and failure detection for web applications

Sara Sprenkle; Emily Gibson; Sreedevi Sampath; Lori L. Pollock

User-session-based testing of web applications gathers user sessions to create and continually update test suites based on real user input in the field. To support this approach during maintenance and beta testing phases, we have built an automated framework for testing web-based software that focuses on scalability and evolving the test suite automatically as the application's operational profile changes. This paper reports on the automation of the replay and oracle components for web applications, which pose issues beyond those in the equivalent testing steps for traditional, stand-alone applications. Concurrency, nondeterminism, dependence on persistent state and previous user sessions, a complex application infrastructure, and a large number of output formats necessitate developing different replay and oracle comparator operators, which have tradeoffs in fault detection effectiveness, precision of analysis, and efficiency. We have designed, implemented, and evaluated a set of automated replay techniques and oracle comparators for user-session-based testing of web applications. This paper describes the issues, algorithms, heuristics, and an experimental case study with user sessions for two web applications. From our results, we conclude that testers performing user-session-based testing should consider their expectations for program coverage and fault detection when choosing a replay and oracle technique.
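The tradeoff the abstract describes between oracle comparators can be illustrated with a minimal sketch (the `exact`/`struct` modes and the toy responses below are illustrative assumptions, not the paper's actual comparator set): a strict full-text comparator flags every difference, including benign nondeterminism such as timestamps, while a structure-only comparator tolerates such noise at some cost in fault-detection power.

```python
import re

def structure_only(html: str) -> str:
    """Reduce an HTML response to its tag sequence, discarding text content."""
    return " ".join(re.findall(r"</?\w+", html))

def oracle_compare(expected: str, actual: str, mode: str = "exact") -> bool:
    """Compare a replayed response against the recorded one.

    'exact'  -- full-text equality: precise, but brittle under
                nondeterministic content (timestamps, session ids).
    'struct' -- compares only the HTML tag structure, trading some
                fault-detection effectiveness for fewer false alarms.
    """
    if mode == "exact":
        return expected == actual
    if mode == "struct":
        return structure_only(expected) == structure_only(actual)
    raise ValueError(f"unknown comparator mode: {mode}")

# Two replays of the same page that differ only in a generated date.
recorded = "<html><body><p>Hello, Alice! 2005-06-01</p></body></html>"
replayed = "<html><body><p>Hello, Alice! 2005-06-02</p></body></html>"

print(oracle_compare(recorded, replayed, "exact"))   # False: text differs
print(oracle_compare(recorded, replayed, "struct"))  # True: same structure
```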


IEEE Transactions on Software Engineering | 2007

Applying Concept Analysis to User-Session-Based Testing of Web Applications

Sreedevi Sampath; Sara Sprenkle; Emily Gibson; Lori L. Pollock; Amie Souter Greenwald

The continuous use of the Web for daily operations by businesses, consumers, and the government has created a great demand for reliable Web applications. One promising approach to testing the functionality of Web applications leverages the user-session data collected by Web servers. User-session-based testing automatically generates test cases based on real user profiles. The key contribution of this paper is the application of concept analysis for clustering user sessions and a set of heuristics for test case selection. Existing incremental concept analysis algorithms are exploited to avoid collecting and maintaining large user-session data sets and to thus provide scalability. We have completely automated the process from user session collection and test suite reduction through test case replay. Our incremental test suite update algorithm, coupled with our experimental study, indicates that concept analysis provides a promising means for incrementally updating reduced test suites in response to newly captured user sessions with little loss in fault detection capability and program coverage.
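Concept analysis operates on a binary relation between objects (here, user sessions) and attributes (here, requested URLs); a concept pairs a maximal set of sessions with the URLs they all share. A minimal sketch of the two derivation operators, on a made-up session log (the session ids and URLs are illustrative, not data from the paper):

```python
def intent(sessions, relation):
    """URLs requested by every session in the given set."""
    sets = [relation[s] for s in sessions]
    return set.intersection(*sets) if sets else set()

def extent(urls, relation):
    """All sessions whose requests include every URL in the given set."""
    return {s for s, reqs in relation.items() if urls <= reqs}

# Toy user-session log: session id -> set of URLs requested.
relation = {
    "s1": {"/login", "/browse", "/buy"},
    "s2": {"/login", "/browse"},
    "s3": {"/login", "/search"},
}

# The concept generated by {"s1", "s2"}: sessions with the same common
# behavior cluster together, so one representative can stand for the group.
shared = intent({"s1", "s2"}, relation)
cluster = extent(shared, relation)
print(sorted(shared), sorted(cluster))
```

Clustering sessions this way lets a reduced suite keep one session per concept rather than every raw session, which is what makes the incremental update scalable.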


International Conference on Software Maintenance | 2005

An empirical comparison of test suite reduction techniques for user-session-based testing of Web applications

Sara Sprenkle; Sreedevi Sampath; Emily Gibson; Lori L. Pollock; Amie L. Souter

Automated cost-effective test strategies are needed to provide reliable, secure, and usable Web applications. As a software maintainer updates an application, test cases must accurately reflect usage to expose faults that users are most likely to encounter. User-session-based testing is an automated approach to enhancing an initial test suite with real user data, enabling additional testing during maintenance as well as adding test data that represents usage as operational profiles evolve. Test suite reduction techniques are critical to the cost effectiveness of user-session-based testing because a key issue is the cost of collecting, analyzing, and replaying the large number of test cases generated from user-session data. We performed an empirical study comparing the test suite size, program coverage, fault detection capability, and costs of three requirements-based reduction techniques and three variations of concept analysis reduction applied to two Web applications. The statistical analysis of our results indicates that concept analysis-based reduction is a cost-effective alternative to requirements-based approaches.
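Requirements-based reduction is, at its core, a set-cover problem: keep a small subset of test cases that still covers all test requirements. A minimal greedy sketch (the toy suite and requirement names are illustrative assumptions; the paper's actual techniques, such as HGS-style reduction, are more refined):

```python
def greedy_reduce(suite):
    """Greedy set-cover reduction: repeatedly keep the test case that
    covers the most still-uncovered requirements."""
    uncovered = set().union(*suite.values())
    reduced = []
    while uncovered:
        best = max(suite, key=lambda t: len(suite[t] & uncovered))
        if not suite[best] & uncovered:
            break  # remaining requirements are not coverable
        reduced.append(best)
        uncovered -= suite[best]
    return reduced

# Toy suite: test case -> requirements (e.g., covered URLs) it exercises.
suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r1"},
}
print(greedy_reduce(suite))  # ['t2', 't1'] covers all four requirements
```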


International Symposium on Software Testing and Analysis | 2006

A case study of automatically creating test suites from web application field data

Sara Sprenkle; Emily Gibson; Sreedevi Sampath; Lori L. Pollock

Creating effective test cases is a difficult problem, especially for web applications. To comprehensively test a web application's functionality, test cases must test complex application state dependencies and concurrent user interactions. Rather than creating test cases manually or from a static model, field data provides an inexpensive alternative for creating such sophisticated test cases. An existing approach to using field data in testing web applications is user-session-based testing. Previous user-session-based testing approaches ignore state dependencies from multi-user interactions. In this paper, we propose strategies for leveraging web application field data to automatically create test cases that test various levels of multi-user interaction and state dependencies. Results from our preliminary case study of a publicly deployed web application show that these test case creation mechanisms are a promising testing strategy for web applications.
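The difference between per-user and multi-user test cases can be sketched from a single access log: splitting by session id yields isolated single-user tests, while preserving the global request order keeps the cross-user state dependencies the abstract highlights. The log entries below are illustrative assumptions, not data from the study:

```python
# Toy access log: (timestamp, session id, request).
log = [
    (1, "A", "/login"),
    (2, "B", "/login"),
    (3, "A", "/bid?item=7"),
    (4, "B", "/bid?item=7"),
]

def single_user_tests(log):
    """One test case per session: replays each user in isolation,
    losing any state dependence between users."""
    tests = {}
    for _, sid, req in sorted(log):
        tests.setdefault(sid, []).append(req)
    return tests

def interleaved_test(log):
    """One test case preserving the original global request order, so
    cross-user state dependences (e.g., competing bids) are exercised."""
    return [(sid, req) for _, sid, req in sorted(log)]

print(single_user_tests(log))
print(interleaved_test(log))
```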


International Symposium on Software Testing and Analysis | 2006

Integrating customized test requirements with traditional requirements in web application testing

Sreedevi Sampath; Sara Sprenkle; Emily Gibson; Lori L. Pollock

Existing test suite reduction techniques employed for testing web applications have either used traditional program coverage-based requirements or usage-based requirements. In this paper, we explore three different strategies to integrate the use of program coverage-based requirements and usage-based requirements in relation to test suite reduction for web applications. We investigate the use of usage-based test requirements for comparison of test suites that have been reduced based on program coverage-based test requirements. We examine the effectiveness of a test suite reduction process based on a combination of both usage-based and program coverage-based requirements. Finally, we modify a popular test suite reduction algorithm to replace part of its test selection process with selection based on usage-based test requirements. Our case study suggests that integrating program coverage-based and usage-based test requirements has a positive impact on the effectiveness of the resulting test suites.
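One way to combine the two kinds of requirements, in the spirit of the paper's third strategy, is to keep coverage as the primary reduction criterion and let usage frequency break ties between coverage-equivalent test cases. This sketch is an illustrative assumption about how such an integration could look, not the paper's exact algorithm:

```python
def combined_reduce(suite, usage):
    """Greedy reduction on program-coverage requirements, with usage
    frequency breaking ties between equally-covering test cases."""
    uncovered = set().union(*suite.values())
    reduced = []
    while uncovered:
        best = max(suite, key=lambda t: (len(suite[t] & uncovered), usage[t]))
        if not suite[best] & uncovered:
            break
        reduced.append(best)
        uncovered -= suite[best]
    return reduced

# t1 and t2 cover the same requirements; t2 reflects far more real usage.
suite = {"t1": {"r1", "r2"}, "t2": {"r1", "r2"}, "t3": {"r3"}}
usage = {"t1": 5, "t2": 40, "t3": 12}   # how often users exercised each path
print(combined_reduce(suite, usage))    # ['t2', 't3']: t2 wins the tie
```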


Software Engineering Research and Practice | 2004

Design and Evaluation of an Automated Aspect Mining Tool

David C. Shepherd; Emily Gibson; Lori L. Pollock


International Symposium on Software Reliability Engineering | 2006

Web Application Testing with Customized Test Requirements - An Experimental Comparison Study

Sreedevi Sampath; Sara Sprenkle; Emily Gibson; Lori L. Pollock


International Workshop on Dynamic Analysis | 2005

Analyzing clusters of web application user sessions

Sreedevi Sampath; Sara Sprenkle; Emily Gibson; Lori L. Pollock; Amie L. Souter


Archive | 2005

Coverage Criteria for Testing Web Applications

Sreedevi Sampath; Emily Gibson; Sara Sprenkle; Lori L. Pollock


SERP | 2004

Automated mining of desirable aspects

David C. Shepherd; Emily Gibson; Lori L. Pollock

Collaboration


Dive into Emily Gibson's collaborations.

Top Co-Authors


Lori Pollock

University of Wisconsin-Madison
