Publication


Featured research published by Anneliese Amschler Andrews.


Software and Systems Modeling | 2005

Testing Web applications by modeling with FSMs

Anneliese Amschler Andrews; A. Jefferson Offutt; Roger T. Alexander

Researchers and practitioners are still trying to find effective ways to model and test Web applications. This paper proposes a system-level testing technique that combines test generation based on finite state machines with constraints. We use a hierarchical approach to model potentially large Web applications. The approach builds hierarchies of Finite State Machines (FSMs) that model subsystems of the Web applications, and then generates test requirements as subsequences of states in the FSMs. These subsequences are then combined and refined to form complete executable tests. The constraints are used to select a reduced set of inputs with the goal of reducing the state space explosion otherwise inherent in using FSMs. The paper illustrates the technique with a running example of a Web-based course student information system and introduces a prototype implementation to support the technique.
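The path-based generation described above can be sketched in miniature. The hierarchical model is simplified here to a single tiny FSM of a hypothetical login subsystem; the states, inputs, and transitions are invented for illustration and are not taken from the paper's course information system example. Test requirements are enumerated as input sequences that drive the FSM from the start state to a goal state:

```python
# Hypothetical FSM: (state, input) -> next state. Names are illustrative.
LOGIN_FSM = {
    ("Start", "enter_credentials"): "CredentialsEntered",
    ("CredentialsEntered", "submit_valid"): "LoggedIn",
    ("CredentialsEntered", "submit_invalid"): "Start",
}

def test_sequences(fsm, state, goal, path=None):
    """Enumerate input sequences driving the FSM from state to goal."""
    path = path or []
    if state == goal:
        yield path
        return
    for (src, inp), dst in fsm.items():
        if src == state and inp not in path:  # never reuse an input -> no cycles
            yield from test_sequences(fsm, dst, goal, path + [inp])

tests = list(test_sequences(LOGIN_FSM, "Start", "LoggedIn"))
```

In the paper, constraints additionally restrict which concrete input values are tried for each logical input, which is what keeps the state space manageable; this sketch only shows the sequence-enumeration half.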


Software Testing, Verification & Reliability | 2003

Test adequacy criteria for UML design models

Anneliese Amschler Andrews; Sudipto Ghosh; Gerald Craig

Systematic design testing, in which executable models of behaviours are tested using inputs that exercise scenarios, can help reveal flaws in designs before they are implemented in code. In this paper a technique for testing executable forms of UML (Unified Modelling Language) models is described and test adequacy criteria based on UML model elements are proposed. The criteria can be used to define test objectives for UML designs. The UML design test criteria are based on the same premise underlying code test criteria: coverage of relevant building blocks of models is highly likely to uncover faults. The test adequacy criteria proposed in this paper are based on building blocks for UML class and interaction diagrams. Class diagram criteria are used to determine the object configurations on which tests are run, while interaction diagram criteria are used to determine the sequences of messages that should be tested.


Empirical Software Engineering | 2002

An Empirical Method for Selecting Software Reliability Growth Models

Catherine Stringfellow; Anneliese Amschler Andrews

Estimating remaining defects (or failures) in software can help test managers make release decisions during testing. Several methods exist to estimate defect content, among them a variety of software reliability growth models (SRGMs). SRGMs have underlying assumptions that are often violated in practice, but empirical evidence has shown that many are quite robust despite these assumption violations. The problem is that, because of assumption violations, it is often difficult to know which models to apply in practice. We present an empirical method for selecting SRGMs to make release decisions. The method provides guidelines on how to select among the SRGMs to decide on the best model to use as failures are reported during the test phase. The method applies various SRGMs iteratively during system test. They are fitted to weekly cumulative failure data and used to estimate the expected remaining number of failures in software after release. If the SRGMs pass proposed criteria, they may then be used to make release decisions. The method is applied in a case study using defect reports from system testing of three releases of a large medical record system to determine how well it predicts the expected total number of failures.
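As a toy illustration of the fitting step, the sketch below fits one common SRGM, the Goel-Okumoto model with mean value function mu(t) = a(1 - e^(-bt)), to invented weekly cumulative failure counts using a coarse least-squares grid search. Real analyses typically use maximum likelihood, and the paper's point is precisely that the fitted model must also pass selection criteria before being trusted for release decisions:

```python
import math

# Hypothetical weekly cumulative failure counts from system test (made up).
weeks = list(range(1, 11))
cum_failures = [12, 22, 30, 36, 41, 44, 46, 48, 49, 50]

def mu(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_goel_okumoto(ts, ys):
    """Coarse least-squares grid search over (a, b); real tools use MLE."""
    best = (float("inf"), None, None)
    for a in range(max(ys), 3 * max(ys)):
        for b100 in range(1, 100):          # b ranges over 0.01 .. 0.99
            b = b100 / 100.0
            sse = sum((mu(t, a, b) - y) ** 2 for t, y in zip(ts, ys))
            if sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

a, b = fit_goel_okumoto(weeks, cum_failures)
remaining = a - cum_failures[-1]  # estimated failures still latent at release
```

Here `a` estimates the total number of failures that would eventually be observed, so `a` minus the failures seen so far is the expected remaining-failure count that feeds the release decision.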


IEEE Software | 2006

What do we know about defect detection methods? [software testing]

Per Runeson; Carina Andersson; Thomas Thelin; Anneliese Amschler Andrews; Tomas Berling

A survey of defect detection studies comparing inspection and testing techniques yields practical recommendations: use inspections for requirements and design defects, and use testing for code. Evidence-based software engineering can help software practitioners decide which methods to use and for what purpose. EBSE involves defining relevant questions, surveying and appraising available empirical evidence, and integrating and evaluating new practices in the target environment. This article helps define questions regarding defect detection techniques and presents a survey of empirical studies on testing and inspection techniques. We then interpret the findings in terms of practical use. The term defect always relates to one or more underlying faults in an artifact such as code. In the context of this article, defects map to single faults.


Workshop on Program Comprehension | 2003

Understanding change-proneness in OO software through visualization

James M. Bieman; Anneliese Amschler Andrews; Helen J. Yang

During software evolution, adaptive and corrective maintenance are common reasons for changes. Often such changes cluster around key components. It is therefore important to analyze the frequency of changes to individual classes, but, more importantly, to also identify and show related changes in multiple classes. Frequent changes in clusters of classes may be due to their importance, due to the underlying architecture, or due to chronic problems. Knowing where those change-prone clusters are can help focus attention, identify targets for re-engineering, and thus provide product-based information to steer maintenance processes. This paper describes a method to identify and visualize classes and class interactions that are the most change-prone. The method was applied to a commercial embedded, real-time software system. It is object-oriented software that was developed using design patterns.
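The underlying measurement can be sketched simply, assuming change records that list the classes each maintenance change touched (the class names below are invented, not from the paper's case study): per-class change frequency identifies change-prone classes, and pairwise co-change counts surface the clusters the visualization would highlight:

```python
from collections import Counter
from itertools import combinations

# Hypothetical change records: the set of classes each change touched.
changes = [
    {"Scheduler", "Dispatcher"},
    {"Scheduler", "Dispatcher", "Logger"},
    {"Scheduler"},
    {"Parser"},
    {"Scheduler", "Dispatcher"},
]

# How often each class changed, and how often each pair changed together.
change_freq = Counter(cls for ch in changes for cls in ch)
co_changes = Counter(
    frozenset(pair) for ch in changes for pair in combinations(sorted(ch), 2)
)

most_change_prone = change_freq.most_common(1)[0]
strongest_cluster = co_changes.most_common(1)[0]
```

A pair with a high co-change count relative to the individual change counts of its members is a candidate change-prone cluster worth re-engineering attention.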


International Conference on the Unified Modeling Language | 2003

Rigorous testing by merging structural and behavioral UML representations

Orest Pilskalns; Anneliese Amschler Andrews; Sudipto Ghosh

Error detection and correction in the design phase can reduce total costs and time to market. Yet, testing of design models usually consists of walk-throughs and inspections both of which lack the rigor of systematic testing. Test adequacy criteria for UML models help define necessary objectives during the process of test creation. These test criteria require coverage of various parts of UML models, such as structural (Class Diagram) and behavioral (Sequence Diagram) views. Test criteria are specific to a particular UML view. Test cases on the other hand should cover parts of multiple views. To understand testing needs better, it is useful to be able to observe the effect of tests on both Class Diagrams and Sequence Diagrams. We propose a new graph that encapsulates the many paths that exist between objects via their method calls as a directed acyclic graph (OMDAG). We also introduce the object method execution table (OMET) that captures both execution sequence and associated attribute values by merging the UML views. The merging process is defined in an algorithm that generates and executes tests.
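A rough sketch of the graph idea, with invented names rather than the paper's exact OMDAG construction: nodes are object method calls taken from a sequence diagram, edges follow call order, and each root-to-leaf path through the acyclic graph is a candidate test sequence:

```python
from collections import defaultdict

# Hypothetical (caller, callee) method-call pairs from a sequence diagram.
calls = [
    ("Client.checkout", "Order.create"),
    ("Order.create", "Inventory.reserve"),
    ("Order.create", "Payment.charge"),
]

graph = defaultdict(list)
for src, dst in calls:
    graph[src].append(dst)

def call_paths(graph, node, path=None):
    """Enumerate root-to-leaf call paths; each path is one test sequence."""
    path = (path or []) + [node]
    if not graph.get(node):
        yield path
        return
    for nxt in graph[node]:
        yield from call_paths(graph, nxt, path)

sequences = list(call_paths(graph, "Client.checkout"))
```

In the paper, the companion table (OMET) additionally records the attribute values observed along each executed sequence, tying the behavioral view back to the structural one.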


International Conference on Software Maintenance | 2006

Regression Testing UML Designs

Orest Pilskalns; Gunay Uyan; Anneliese Amschler Andrews

As model driven architectures (MDAs) gain in popularity, several techniques that test the UML models have been proposed. These techniques aim at early detection and correction of faults to reduce the overall cost of correcting them later in the software life-cycle. Recently, Pilskalns et al. (2003) proposed an approach to test the UML design models to check for inconsistencies. They create an aggregate model which merges information from class diagrams, sequence diagrams and OCL statements, then generate test cases to identify inconsistencies. Since designs change often in the early stages of the software life-cycle, we need a regression testing approach that can be performed on the UML model. By classifying design changes, and then further classifying the test cases, we provide a set of rules about how to reuse part of the existing test cases, and generate new ones to ensure all affected parts of the system are tested adequately. The approach is a safe and efficient selective retest strategy. A case study is reported to demonstrate the benefits.
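The classify-changes-then-classify-tests idea can be sketched as a rule table. The change categories, rules, and test names below are illustrative assumptions, not the paper's exact taxonomy:

```python
# Hypothetical UML design-change categories mapped to retest actions.
RERUN_RULES = {
    "method_signature_changed": "rerun_affected",  # reuse tests covering it
    "class_added": "generate_new",                 # needs new test cases
    "note_updated": "skip",                        # no behavioural impact
}

def select_tests(changes, tests):
    """Split existing design tests into those to rerun and those reusable as-is."""
    rerun = set()
    for change in changes:
        action = RERUN_RULES.get(change["kind"], "rerun_affected")
        if action != "rerun_affected":
            continue
        for name, covered in tests.items():
            if change["element"] in covered:
                rerun.add(name)
    return rerun, set(tests) - rerun

# Each test maps to the design elements (e.g. methods) it covers.
tests = {"t1": {"Order.pay"}, "t2": {"Cart.add"}, "t3": {"Order.pay", "Cart.add"}}
changes = [{"kind": "method_signature_changed", "element": "Order.pay"}]
rerun, reusable = select_tests(changes, tests)
```

Selecting only the tests whose covered elements intersect the changed elements is what makes such a strategy selective, and rerunning every intersecting test is what makes it safe.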


Software Quality Journal | 2001

Assessing Project Success Using Subjective Evaluation Factors

Claes Wohlin; Anneliese Amschler Andrews

Project evaluation is essential to understand and assess the key aspects of a project that make it either a success or a failure. The latter is influenced by a large number of factors, and it is often hard to measure them objectively. This paper addresses this by introducing a new method for identifying and assessing key project characteristics, which are crucial for a project's success. The method consists of a number of well-defined steps, which are described in detail. The method is applied to two case studies from different application domains and continents. It is concluded that patterns are possible to detect from the data sets. Further, the analysis of the two data sets shows that the proposed method using subjective factors is useful, since it provides an increased understanding, insight and assessment of which project factors might affect project success.


International Conference on Engineering of Complex Computer Systems | 2005

A tool-supported approach to testing UML design models

Trung T. Dinh-Trong; Nilesh Kawane; Sudipto Ghosh; Anneliese Amschler Andrews

For model driven development approaches to succeed, there is a need for model validation techniques. This paper presents an approach to testing designs described by UML class diagrams, interaction diagrams, and activity diagrams. A UML design model under test is transformed into an executable form. Test infrastructure is added to the executable form to carry out tests. During testing, object configurations are created, modified and observed. In this paper, we identify the structural and behavioral characteristics that need to be observed during testing. We describe a prototype tool that (1) transforms UML design models into executable forms with test infrastructure, (2) executes tests, and (3) reports failures.


Empirical Software Engineering | 2003

Prioritizing and Assessing Software Project Success Factors and Project Characteristics using Subjective Data

Claes Wohlin; Anneliese Amschler Andrews

This paper presents a method for analyzing the impact software project factors have on project success as defined by project success factors that have been prioritized. It is relatively easy to collect measures of project attributes subjectively (i.e., based on expert judgment). Often Likert scales are used for that purpose. It is much harder to identify whether and how a large number of such ranked project factors influence project success, and to prioritize their influence on project success. At the same time, it is desirable to use the knowledge of project personnel effectively. Given a prioritization of project goals, it is shown how some key project characteristics can be related to project success. The method is applied in a case study consisting of 46 projects. For each project, six success factors and 27 project attributes were measured. Successful projects show common characteristics. Using this knowledge can lead to better control of software project management and to an increased likelihood of project success.

Collaboration


Dive into Anneliese Amschler Andrews's collaborations.

Top Co-Authors

Orest Pilskalns

Washington State University Vancouver


Sudipto Ghosh

Colorado State University


Claes Wohlin

Blekinge Institute of Technology
