
Publication


Featured research published by Lawrence G. Votta.


IEEE Transactions on Software Engineering | 1995

Comparing detection methods for software requirements inspections: a replicated experiment

Adam A. Porter; Lawrence G. Votta; Victor R. Basili

Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. We evaluated this hypothesis using a 3 × 2^4 partial factorial, randomized experimental design. Forty-eight graduate students in computer science participated in the experiment. They were assembled into sixteen three-person teams. Each team inspected two SRS using some combination of Ad Hoc, Checklist, or Scenario methods. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). The experimental results are that (1) the Scenario method had a higher fault detection rate than either the Ad Hoc or Checklist method, (2) Scenario reviewers were more effective at detecting the faults their scenarios were designed to uncover, and were no less effective at detecting other faults than either Ad Hoc or Checklist reviewers, (3) Checklist reviewers were no more effective than Ad Hoc reviewers, and (4) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses.
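The four measurements above can be made concrete with a small sketch. The function and the sample fault sets below are illustrative assumptions, not data or code from the paper; it simply shows how the gain and loss rates fall out of set differences between individual preparation and the collection meeting.

```python
def inspection_metrics(total_faults, individual_reports, meeting_reports):
    """Compute the four inspection measurements from fault-ID sets.

    individual_reports: one set of fault IDs per reviewer (found in preparation).
    meeting_reports: set of fault IDs recorded at the collection meeting.
    """
    # Everything any reviewer found before the meeting.
    pre_meeting = set().union(*individual_reports)
    # (1) individual fault detection rate, one per reviewer
    individual_rates = [len(r) / total_faults for r in individual_reports]
    # (2) team fault detection rate: faults actually recorded at the meeting
    team_rate = len(meeting_reports) / total_faults
    # (3) meeting gain rate: faults first identified at the meeting itself
    meeting_gain = len(meeting_reports - pre_meeting) / total_faults
    # (4) meeting loss rate: found by an individual but never reported at the meeting
    meeting_loss = len(pre_meeting - meeting_reports) / total_faults
    return individual_rates, team_rate, meeting_gain, meeting_loss


# Hypothetical example: 10 seeded faults, three reviewers,
# fault 5 is lost at the meeting while fault 6 is gained there.
rates, team, gain, loss = inspection_metrics(
    10, [{1, 2, 3}, {3, 4}, {5}], {1, 2, 3, 4, 6}
)
```

With these numbers the gain (0.1) exactly offsets the loss (0.1), which is the "no net improvement" outcome the abstract reports.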


IEEE Software | 1994

People, organizations, and process improvement

Dewayne E. Perry; Nancy A. Staudenmayer; Lawrence G. Votta

In their efforts to determine how technology affects the software development process, researchers often overlook organizational and social issues. The authors report on two experiments to discover how developers spend their time. They describe how noncoding activities can use up development time and how even a reluctance to use e-mail can influence the development process. The first experiment was to see how programmers thought they spent their time by having them fill out a modified time card reporting their activities, which we called a time diary. In the second experiment, we used direct observation to calibrate and validate the use of time diaries, which helped us evaluate how time was actually being used.


Foundations of Software Engineering | 1993

Does every inspection need a meeting?

Lawrence G. Votta

At each step in large software development, reviewers carry out inspections to detect faults. These inspections are usually followed by a meeting to collect the faults that have been discovered. However, we have found that these inspection meetings are not as beneficial as managers and developers think they are. Even worse, they cost much more in terms of product development interval and developers' time than anyone realizes. Analysis of the inspection and collection process leads us to make the following suggestions. First, at the least, the number of participants required at each inspection meeting should be minimized. Second, we propose two alternative fault collection methods, either of which would eliminate the inspection meetings altogether: (a) collect faults by deposition (small face-to-face meetings of two or three persons), or (b) collect faults using verbal or written media (telephone, electronic mail, or notes). We believe that such a change in procedure would increase efficiency by reducing production times without sacrificing product quality.


IEEE Transactions on Software Engineering | 2001

A controlled experiment in maintenance: comparing design patterns to simpler solutions

Lutz Prechelt; Barbara Unger; Walter F. Tichy; Peter Brössler; Lawrence G. Votta

Software design patterns package proven solutions to recurring design problems in a form that simplifies reuse. We are seeking empirical evidence whether using design patterns is beneficial. In particular, one may prefer using a design pattern even if the actual design problem is simpler than that solved by the pattern, i.e., if not all of the functionality offered by the pattern is actually required. Our experiment investigates software maintenance scenarios that employ various design patterns and compares designs with patterns to simpler alternatives. The subjects were professional software engineers. In most of our nine maintenance tasks, we found positive effects from using a design pattern: either its inherent additional flexibility was achieved without requiring more maintenance time or maintenance time was reduced compared to the simpler alternative. In a few cases, we found negative effects: the alternative solution was less error-prone or required less maintenance time. Overall, we conclude that, unless there is a clear reason to prefer the simpler solution, it is probably wise to choose the flexibility provided by the design pattern because unexpected new requirements often appear. We identify several questions for future empirical research.


Empirical Software Engineering | 1998

Comparing Detection Methods For Software Requirements Inspections: A Replication Using Professional Subjects

Adam A. Porter; Lawrence G. Votta

Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate. In previous work we evaluated this hypothesis using 48 graduate students in computer science as subjects. We have now replicated this experiment using 18 professional developers from Lucent Technologies as subjects. Our goals were to (1) extend the external credibility of our results by studying professional developers, and (2) compare the performance of the professionals with that of the graduate students to better understand how generalizable the results of the less expensive student experiments were. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). For both the professionals and the students, the experimental results are that (1) the Scenario method had a higher fault detection rate than either the Ad Hoc or Checklist method, (2) Checklist reviewers were no more effective than Ad Hoc reviewers, and (3) collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses. Finally, although specific measures differed between the professional and student populations, the outcomes of almost all statistical tests were identical. This suggests that the graduate students provided an adequate model of the professional population and that the much greater expense of conducting studies with professionals may not always be required.


ACM Transactions on Software Engineering and Methodology | 1998

Understanding the sources of variation in software inspections

Adam A. Porter; Harvey P. Siy; Audris Mockus; Lawrence G. Votta

In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the "inputs" into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in defect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors that still need to be identified.


International Conference on Software Engineering | 1998

Parallel changes in large scale software development: an observational case study

Dewayne E. Perry; Harvey P. Siy; Lawrence G. Votta

An essential characteristic of large scale software development is parallel development by teams of developers. How this parallel development is structured and supported has a profound effect on both the quality and timeliness of the product. We conducted an observational case study in which we collected and analyzed the change and configuration management history of a legacy system to delineate the boundaries of, and to understand the nature of, the problems encountered in parallel development. The results of our studies are: 1) the degree of parallelism is very high, higher than considered by tool builders; 2) there are multiple levels of parallelism, and the data for some important aspects are uniform and consistent for all levels; and 3) the tails of the distributions are long, indicating that the tail, rather than the mean, must receive serious attention in providing solutions for these problems.


International Conference on Software Engineering | 1994

An experiment to assess different defect detection methods for software requirements inspections

Adam A. Porter; Lawrence G. Votta

Software requirements specifications (SRS) are usually validated by inspections, in which several reviewers read all or part of the specification and search for defects. We hypothesize that different methods for conducting these searches may have significantly different rates of success. Using a controlled experiment, we show that a scenario-based detection method, in which each reviewer executes a specific procedure to discover a particular class of defects, has a higher defect detection rate than either ad hoc or checklist methods. We describe the design, execution, and analysis of the experiment so others may reproduce it and test our results for different kinds of software developments and different populations of software engineers.


IEEE Transactions on Software Engineering | 1993

Assessing software designs using capture-recapture methods

S.A. Vander Wiel; Lawrence G. Votta

The number of faults not discovered by the design review can be estimated by using capture-recapture methods. Since these methods were developed for wildlife population estimation, the assumptions used to derive them do not match design review applications. The authors report on a Monte Carlo simulation to study the effects of broken assumptions on maximum likelihood estimators (MLEs) and jackknife estimators (JEs) of faults remaining. It is found that the MLE performs satisfactorily if faults are classified into a small number of homogeneous groups. Without grouping, the MLE can perform poorly, but it generally does better than the JE.
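To make the capture-recapture idea concrete: in the simplest two-reviewer case, the population estimate reduces to the classic Lincoln-Petersen form borrowed from wildlife studies, where the overlap between two independent "captures" calibrates how much remains unseen. The sketch below is illustrative, with hypothetical fault IDs; it is not the grouped MLE or jackknife machinery studied in the paper.

```python
def lincoln_petersen(found_a, found_b):
    """Two-reviewer capture-recapture estimate of total fault count.

    found_a, found_b: sets of fault IDs found independently by each reviewer.
    Returns (estimated total faults, estimated faults still undiscovered).
    """
    overlap = len(found_a & found_b)
    if overlap == 0:
        # With no recaptures the estimator is undefined (divide by zero).
        raise ValueError("no overlap between reviewers; estimate undefined")
    # N_hat = n1 * n2 / m: large overlap implies few faults were missed.
    n_hat = len(found_a) * len(found_b) / overlap
    discovered = len(found_a | found_b)
    remaining = max(0.0, n_hat - discovered)
    return n_hat, remaining


# Hypothetical review: reviewer A finds 6 faults, reviewer B finds 5,
# and they agree on 3 of them.
n_hat, remaining = lincoln_petersen({1, 2, 3, 4, 5, 6}, {4, 5, 6, 7, 8})
```

Here the estimate is 10 total faults, so roughly 2 remain undiscovered after the review. The paper's point is that assumptions behind this formula (equal detectability of faults, independent reviewers) are routinely broken in design reviews, which is why grouping faults into homogeneous classes matters.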


International Conference on Software Engineering | 1995

Experimental software engineering: a report on the state of the art

Lawrence G. Votta; Adam A. Porter

The goal of this session is to make the software engineering community aware of the opportunities that exist to pursue such an experimental approach. In the remainder of the essay, we describe an emerging model for empirical work and the language for discussing it. We then focus on the current state of experimental software engineering, the roadblocks barring effective progress, and what developers and researchers can do to remove them.

Collaboration


Dive into Lawrence G. Votta's collaborations.

Top Co-Authors

Dewayne E. Perry
University of Texas at Austin

Harvey P. Siy
University of Nebraska Omaha

Carlos Puchol
University of Texas at Austin

Jonathan E. Cook
New Mexico State University