Publications


Featured research published by Saul London.


International Symposium on Software Reliability Engineering | 1997

A study of effective regression testing in practice

W.E. Wong; Joseph Robert Horgan; Saul London; Hiralal Agrawal

The purpose of regression testing is to ensure that changes made to software, such as adding new features or modifying existing features, have not adversely affected features of the software that should not change. Regression testing is usually performed by running some, or all, of the test cases created to test modifications in previous versions of the software. Many techniques have been reported on how to select regression tests so that the number of test cases does not grow too large as the software evolves. Our proposed hybrid technique combines modification, minimization and prioritization-based selection using a list of source code changes and the execution traces from test cases run on previous versions. This technique seeks to identify a representative subset of all test cases that may result in different output behavior on the new software version. We report our experience with a tool called ATAC (Automatic Testing Analysis tool in C) which implements this technique.
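The selection idea described above can be illustrated with a minimal sketch: keep only the tests whose execution trace touches a changed source line, since only those tests can exhibit different output behavior on the new version. This is a didactic illustration under assumed data structures, not ATAC's actual interface; all names here are hypothetical.

```python
# Sketch of trace-based regression test selection (illustrative, not ATAC):
# a test is selected iff its execution trace intersects the changed lines.

def select_regression_tests(traces, changed_lines):
    """traces: dict mapping test name -> set of covered source lines.
    changed_lines: set of line numbers modified in the new version.
    Returns the subset of tests that may behave differently."""
    return {test for test, covered in traces.items()
            if covered & changed_lines}

traces = {
    "t1": {10, 11, 12},
    "t2": {20, 21},
    "t3": {11, 30},
}
changed = {11}
print(sorted(select_regression_tests(traces, changed)))  # ['t1', 't3']
```

Minimization and prioritization would then further reduce or order this selected subset, but the trace intersection is the core filter.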


International Conference on Software Maintenance | 1993

Incremental regression testing

Hiralal Agrawal; Joseph Robert Horgan; Edward W. Krauser; Saul London

The purpose of regression testing is to ensure that bug fixes and new functionality introduced in a new version of software do not adversely affect the correct functionality inherited from the previous version. Efficient methods of selecting small subsets of regression test sets that can be used to ensure correct functionality are explored.


International Conference on Software Engineering | 1995

Effect of test set minimization on fault detection effectiveness

W. Eric Wong; Joseph Robert Horgan; Saul London; Aditya P. Mathur

Size and code coverage are important attributes of a set of tests. When a program P is executed on elements of the test set T, we can observe the fault detecting capability of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness of T for P. To address this issue we ask the following question: While keeping coverage constant, what is the effect on fault detection of reducing the size of a test set? We report results from an empirical study using the block and all-uses criteria as the coverage measures.
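The question posed in this abstract, reducing test-set size while holding coverage constant, can be sketched with a simple greedy reduction: repeatedly pick the test that covers the most not-yet-covered blocks until the original set's total coverage is reached. This mirrors the study's setup only in spirit; it is not the authors' exact minimization procedure, and the data structures are assumed for illustration.

```python
# Illustrative coverage-preserving minimization (greedy set cover):
# the reduced set achieves the same block coverage as the full set.

def minimize(coverage):
    """coverage: dict mapping test name -> set of covered blocks.
    Greedily select tests until the full set's coverage is matched."""
    goal = set().union(*coverage.values())   # coverage of the whole set
    remaining, selected, covered = dict(coverage), [], set()
    while covered != goal:
        # pick the test contributing the most new blocks
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

cov = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3}}
print(minimize(cov))  # ['t3'] -- same coverage with one test instead of three
```

Comparing fault detection of the full set against such a reduced set is what lets the study separate the effect of size from the effect of coverage.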


IEEE Computer | 1994

Achieving software quality with testing coverage measures

Joseph Robert Horgan; Saul London; Michael R. Lyu

Coverage testing helps the tester create a thorough set of tests and gives a measure of test completeness. The concepts of coverage testing are well described in the literature. However, there are few tools that actually implement these concepts for standard programming languages, and their realistic use on large-scale projects is rare. In this article, we describe the uses of a dataflow coverage-testing tool for C programs, called ATAC (Automatic Test Analysis for C), in measuring, controlling, and understanding the testing process. We present case studies of two real-world software projects using ATAC. The first study involves 12 program versions developed by a university/industry fault-tolerant software project for a critical automatic-flight-control system. The second study involves a Bellcore project of 33 program modules. These studies indicate that coverage analysis of programs during testing not only gives a clear measure of testing quality but also reveals important aspects of software structure. Understanding the structure of a program, as revealed in coverage testing, can be a significant component in confident assessment of overall software quality.


International Symposium on Software Testing and Analysis | 1991

Data flow coverage and the C language

Joseph Robert Horgan; Saul London

This paper reviews some of the difficulties and decisions in implementing data flow coverage criteria for the C language as realized in ATAC, a data flow coverage testing tool for C. We also address a particular, language independent problem with the concept of the all-du-paths data flow coverage criterion and suggest a solution.
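The data flow criteria discussed here are built on def-use pairs: each definition of a variable is paired with the uses it can reach without an intervening redefinition. The toy sketch below computes def-use pairs for straight-line code represented as a list of events; real analysis works over a control-flow graph, so this is a didactic simplification, not ATAC's algorithm.

```python
# Toy def-use pair computation for straight-line code (illustrative only).
# Each event is ('def'|'use', variable, line number).

def du_pairs(events):
    """Returns the set of (def_line, use_line, var) pairs where the
    definition reaches the use with no redefinition in between."""
    live_defs = {}   # var -> line of the currently reaching definition
    pairs = set()
    for kind, var, line in events:
        if kind == "use" and var in live_defs:
            pairs.add((live_defs[var], line, var))
        elif kind == "def":
            live_defs[var] = line   # a new definition kills the old one
    return pairs

prog = [("def", "x", 1), ("use", "x", 2),
        ("def", "x", 3), ("use", "x", 4)]
print(sorted(du_pairs(prog)))  # [(1, 2, 'x'), (3, 4, 'x')]
```

All-uses coverage asks tests to exercise every such pair; all-du-paths additionally asks for every definition-clear path between them, which is where the language-independent difficulty mentioned in the abstract arises, since loops can make the number of such paths unbounded.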


IEEE Computer | 1998

Mining system tests to aid software maintenance

Hiralal Agrawal; James L. Alberi; Joseph Robert Horgan; J. Jenny Li; Saul London; W.E. Wong; Sudipto Ghosh; N. Wilde

Maintainers can use information from test analysis tools to help them understand, debug, and retest programs. The authors describe techniques and tools in Bellcore's χSuds, a system for understanding and diagnosing software bugs, including Y2K problems in legacy applications.


International Symposium on Software Reliability Engineering | 1994

Effect of test set size and block coverage on the fault detection effectiveness

W.E. Wong; Joseph Robert Horgan; Saul London; Aditya P. Mathur

Size and code coverage are two important attributes that characterize a set of tests. When a program P is executed on elements of a test set T, we can observe the fault-detecting capacity of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness (FDE) of T for P. We found that there is little or no reduction in the FDE of a test set when its size is reduced while the all-uses coverage is kept constant. These data suggest, indirectly, that coverage is more correlated than size with the FDE. To further investigate this suggestion, we report an empirical study to compare the statistical correlation between (1) FDE and coverage, and (2) FDE and size. Results from our experiments indicate that the correlation between FDE and block coverage is higher than that between FDE and size.


International Symposium on Software Reliability Engineering | 1993

A coverage analysis tool for the effectiveness of software testing

Michael R. Lyu; Joseph Robert Horgan; Saul London

We describe a software testing and analysis tool, called ATAC (Automatic Test Analysis for C), which is developed as a research instrument at Bellcore to measure the effectiveness of testing data. The design, functionality, and usage of ATAC are presented in this paper. Furthermore, to demonstrate the capability and applicability of ATAC, we obtain the 12 program versions of a critical industrial application developed in a recent university/industry N-Version Software project, and use the ATAC tool to analyze and compare coverage of the testing conducted in the program versions. Preliminary results from this investigation show that ATAC could be a powerful testing tool to provide testing metrics and quality control guidance for the certification of high quality software components or systems. It can also assist software reliability researchers and practitioners in searching for the missing link between structure-based testing schemes and software reliability.


Archive | 1992

ATAC: a data flow coverage testing tool for C

Joseph Robert Horgan; Saul London


Archive | 1994

Effect of Test Set Minimization on the Fault Detection Effectiveness of the All-Uses Criterion

W. Eric Wong; Joseph Robert Horgan; Saul London

Collaboration


Dive into Saul London's collaborations.

Top Co-Authors

Ashish Jain

Telcordia Technologies

W. Eric Wong

University of Texas at Dallas

Sudipto Ghosh

Colorado State University

Michael R. Lyu

The Chinese University of Hong Kong
