Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Norman F. Schneidewind is active.

Publication


Featured research published by Norman F. Schneidewind.


IEEE Transactions on Software Engineering | 1992

Methodology for validating software metrics

Norman F. Schneidewind

A comprehensive metrics validation methodology is proposed that has six validity criteria, which support the quality functions of assessment, control, and prediction, where quality functions are activities conducted by software organizations for the purpose of achieving project quality goals. Six criteria are defined and illustrated: association, consistency, discriminative power, tracking, predictability, and repeatability. The author shows that nonparametric statistical methods such as contingency tables play an important role in evaluating metrics against the validity criteria. Examples emphasizing the discriminative power validity criterion are presented. A metrics validation process is defined that integrates quality factors, metrics, and quality functions.
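To make the role of contingency tables concrete, here is a minimal sketch of one way a metric's discriminative power might be checked: dichotomize a metric and a quality factor at assumed cutoffs, cross-tabulate them, and compute a chi-square statistic. The module data and cutoffs below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: dichotomize a metric and a quality factor at hypothetical
# thresholds, build a 2x2 contingency table, and compute a chi-square
# statistic as one (simplified) test of discriminative power.

def contingency_2x2(metric, quality, metric_cutoff, quality_cutoff):
    """Cross-tabulate metric-exceeds-cutoff against quality-exceeds-cutoff."""
    table = [[0, 0], [0, 0]]
    for m, q in zip(metric, quality):
        row = 1 if m > metric_cutoff else 0    # high-metric modules
        col = 1 if q > quality_cutoff else 0   # low-quality modules (e.g. many faults)
        table[row][col] += 1
    return table

def chi_square(table):
    """Pearson chi-square statistic for a 2x2 table (no continuity correction)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_tot[i] * col_tot[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

if __name__ == "__main__":
    # Illustrative module data: a complexity metric and a fault count per module.
    metric = [3, 12, 7, 25, 4, 18, 9, 30, 2, 16]
    faults = [0,  4, 1,  7, 0,  5, 2,  9, 0,  3]
    table = contingency_2x2(metric, faults, metric_cutoff=10, quality_cutoff=2)
    print("contingency table:", table, "chi-square:", round(chi_square(table), 2))
```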


Journal of Software Maintenance and Evolution: Research and Practice | 1999

Towards an ontology of software maintenance

Barbara A. Kitchenham; Guilherme Horta Travassos; Anneliese von Mayrhauser; Frank Niessink; Norman F. Schneidewind; Janice Singer; Shingo Takada; Risto Vehvilainen; Hongji Yang

We suggest that empirical studies of maintenance are difficult to understand unless the context of the study is fully defined. We developed a preliminary ontology to identify a number of factors that influence maintenance. The purpose of the ontology is to identify factors that would affect the results of empirical studies. We present the ontology in the form of a UML model. Using the maintenance factors included in the ontology, we define two common maintenance scenarios and consider the industrial issues associated with them.


IEEE Transactions on Software Engineering | 1979

An Experiment in Software Error Data Collection and Analysis

Norman F. Schneidewind; Heinz-Michael Hoffmann

The propensity to make programming errors and the rates of error detection and correction are dependent on program complexity. Knowledge of these relationships can be used to avoid error-prone structures in software design and to devise a testing strategy which is based on anticipated difficulty of error detection and correction. An experiment in software error data collection and analysis was conducted in order to study these relationships under conditions where the error data could be carefully defined and collected. Several complexity measures which can be defined in terms of the directed graph representation of a program, such as cyclomatic number, were analyzed with respect to the following error characteristics: errors found, time between error detections, and error correction time. Significant relationships were found between complexity measures and error characteristics. The meaning of directed graph structural properties in terms of the complexity of the programming and testing tasks was examined.
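One of the directed-graph measures named above, the cyclomatic number, can be sketched directly from a control-flow graph as V(G) = E - N + 2P. The graph below is a hypothetical routine, not data from the experiment.

```python
# Hedged sketch: cyclomatic complexity of a control-flow graph, one of the
# directed-graph complexity measures analyzed in the paper. The example
# graph is invented for illustration.

def cyclomatic_number(cfg, connected_components=1):
    """V(G) = E - N + 2P for a directed control-flow graph.

    cfg: dict mapping each node to the list of its successor nodes.
    connected_components: P, the number of connected components
    (1 for a single procedure).
    """
    nodes = set(cfg) | {succ for succs in cfg.values() for succ in succs}
    edges = sum(len(succs) for succs in cfg.values())
    return edges - len(nodes) + 2 * connected_components

if __name__ == "__main__":
    # Control-flow graph of a routine with one if/else and one loop.
    cfg = {
        "entry": ["decision"],
        "decision": ["then", "else"],
        "then": ["loop_test"],
        "else": ["loop_test"],
        "loop_test": ["loop_body", "exit"],
        "loop_body": ["loop_test"],
        "exit": [],
    }
    print("cyclomatic number:", cyclomatic_number(cfg))  # 8 edges - 7 nodes + 2 = 3
```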


international conference on reliable software technologies | 1997

Reliability modeling for safety-critical software

Norman F. Schneidewind

Software reliability predictions can increase trust in the reliability of safety-critical software such as the NASA Space Shuttle Primary Avionics Software System (Shuttle flight software). This objective was achieved using a novel approach that integrates software-safety criteria, risk analysis, reliability prediction, and stopping rules for testing. This approach applies to other safety-critical software. The authors cover only the safety of the software in a safety-critical system. The hardware and human-operator components of such systems are not explicitly modeled, nor are the hardware- and operator-induced software failures. The concern is with reducing the risk of all failures attributed to software. Thus, safety refers to software safety and not to system safety. By improving the software reliability, where the reliability measurements and predictions are directly related to mission and crew safety, they contribute to system safety. Software reliability models provide one of several tools that software managers of the Shuttle flight software use to assure that the software meets required safety goals. Other tools are inspections, software reviews, testing, change control boards, and, perhaps most important, experience and judgement.
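A stopping rule of the general kind described here might, for example, halt testing once the predicted number of remaining failures falls below a risk threshold. The sketch below assumes a simple exponentially decaying failure intensity and invented parameter values; it illustrates the idea, not the Shuttle methodology.

```python
import math

# Hedged sketch of a test stopping rule: keep testing until the number of
# failures predicted to remain drops below a risk threshold. The decaying
# failure-intensity model and all numbers are illustrative assumptions.

def remaining_failures(alpha, beta, t):
    """Predicted failures after time t for intensity lambda(t) = alpha * exp(-beta * t)."""
    return (alpha / beta) * math.exp(-beta * t)

def stopping_time(alpha, beta, max_remaining):
    """Earliest t at which predicted remaining failures <= max_remaining."""
    # Solve (alpha/beta) * exp(-beta*t) <= max_remaining for t.
    return max(0.0, math.log(alpha / (beta * max_remaining)) / beta)

if __name__ == "__main__":
    alpha, beta = 12.0, 0.05   # assumed fitted model parameters
    threshold = 1.0            # tolerate at most one predicted remaining failure
    t_stop = stopping_time(alpha, beta, threshold)
    print(f"stop testing after about {t_stop:.1f} intervals; "
          f"remaining then: {remaining_failures(alpha, beta, t_stop):.2f}")
```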


IEEE Transactions on Software Engineering | 1993

Software reliability model with optimal selection of failure data

Norman F. Schneidewind

The possibility of obtaining more accurate predictions of future failures by excluding or giving lower weight to the earlier failure counts is suggested. Although data aging techniques such as moving average and exponential smoothing are frequently used in other fields, such as inventory control, the author did not find use of data aging in the various models surveyed. A model that includes the concept of selecting a subset of the failure data is the Schneidewind nonhomogeneous Poisson process (NHPP) software reliability model. In order to use the concept of data aging, there must be a criterion for determining the optimal value of the starting failure count interval. Four criteria for identifying the optimal starting interval for estimating model parameters are evaluated. The first two criteria treat the failure count interval index as a parameter by substituting model functions for data vectors and optimizing on functions obtained from maximum likelihood estimation techniques. The third uses weighted least squares to maintain constant variance in the presence of the decreasing failure rate assumed by the model. The fourth criterion is the familiar mean square error. It is shown that significantly improved reliability predictions can be obtained by using a subset of the failure data. The US Space Shuttle on-board software is used as an example.
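As a rough illustration of the fourth (mean square error) criterion, the sketch below fits a simple exponentially decaying failure-count curve to each candidate suffix of the data and keeps the starting interval with the smallest MSE. It substitutes log-linear least squares for the model's maximum-likelihood machinery and uses invented failure counts.

```python
import math

# Hedged sketch of the MSE criterion for choosing the starting interval s:
# for each s, fit a decaying curve to intervals s..n and keep the s with the
# smallest mean square error. Simplified stand-in, not the Schneidewind model.

def fit_exponential(counts):
    """Least-squares fit of counts[i] ~ a * exp(-b * i) via log-linear regression."""
    xs = list(range(len(counts)))
    ys = [math.log(c) for c in counts]            # requires positive counts
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), -slope            # a, b

def mse(counts, a, b):
    return sum((c - a * math.exp(-b * i)) ** 2 for i, c in enumerate(counts)) / len(counts)

def best_starting_interval(counts, min_tail=4):
    """Return (s, mse) minimizing MSE over suffixes counts[s:] long enough to fit."""
    best = None
    for s in range(len(counts) - min_tail + 1):
        a, b = fit_exponential(counts[s:])
        err = mse(counts[s:], a, b)
        if best is None or err < best[1]:
            best = (s, err)
    return best

if __name__ == "__main__":
    # Illustrative failure counts per test interval: early intervals are noisy,
    # later intervals decay roughly exponentially.
    failures = [2, 9, 3, 8, 7, 5, 4, 3, 2, 2, 1, 1]
    s, err = best_starting_interval(failures)
    print(f"use failure data from interval {s} onward (MSE {err:.2f})")
```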


international conference | 1975

Analysis of error processes in computer software

Norman F. Schneidewind

A non-homogeneous Poisson process is used to model the occurrence of errors detected during functional testing of command and control software. The parameters of the detection process are estimated by using a combination of maximum likelihood and weighted least squares methods. Once parameter estimates are obtained, forecasts can be made of cumulative number of detected errors. Forecasting equations of cumulative corrected errors, errors detected but not corrected, and the time required to detect or correct a specified number of errors, are derived from the detected error function. The various forecasts provide decision aids for managing software testing activities. Naval Tactical Data System software error data are used to evaluate several variations of the forecasting methodology and to test the accuracy of the forecasting equations. Because of changes which take place in the actual detected error process, it was found that recent error observations are more representative of future error occurrences than are early observations. Based on a limited test of the model, acceptable accuracy was obtained when using the preferred forecasting method.
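The forecasting equations described here can be illustrated for a generic NHPP whose mean-value function is m(t) = (a/b)(1 - e^(-bt)): the same function gives the expected cumulative errors by time t and, inverted, the time required to detect a specified number of errors. Parameter values below are assumptions, not the fitted Naval Tactical Data System estimates.

```python
import math

# Hedged sketch of NHPP-based forecasting: expected cumulative detected
# errors by time t, and the time required to detect n errors, for the
# mean-value function m(t) = (a/b) * (1 - exp(-b*t)). Illustrative parameters.

def cumulative_errors(a, b, t):
    """Expected cumulative errors detected by time t."""
    return (a / b) * (1.0 - math.exp(-b * t))

def time_to_detect(a, b, n):
    """Time required to detect n errors (None if n exceeds the asymptote a/b)."""
    if n >= a / b:
        return None
    return -math.log(1.0 - n * b / a) / b

if __name__ == "__main__":
    a, b = 8.0, 0.04   # assumed fitted detection-rate parameters
    for t in (10, 25, 50):
        print(f"expected errors by interval {t}: {cumulative_errors(a, b, t):.1f}")
    print(f"intervals needed to detect 100 errors: {time_to_detect(a, b, 100):.1f}")
```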


IEEE Transactions on Software Engineering | 1999

Measuring and evaluating maintenance process using reliability, risk, and test metrics

Norman F. Schneidewind

In analyzing the stability of a software maintenance process, it is important that the process not be treated in isolation from the reliability and risk of deploying the software that result from applying the process. Furthermore, we need to consider the efficiency of the test effort that is a part of the process and a determinant of reliability and risk of deployment. The relationship between product quality and process capability and maturity has been recognized as a major issue in software engineering, based on the premise that improvements in the process will lead to higher-quality products. To this end, we have been investigating an important facet of process capability, stability, as defined and evaluated by trend, change, and shape metrics, across releases and within a release. Our integration of product and process measurement serves the dual purpose of using metrics to assess and predict reliability and risk and to evaluate process stability. We use the NASA Space Shuttle flight software to illustrate our approach.
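One very simple example of a "trend" metric of the kind mentioned here is the least-squares slope of a quality indicator across releases; a negative slope suggests an improving (stable) process. The indicator and release values below are illustrative.

```python
# Hedged sketch of a simple trend metric: the least-squares slope of a
# quality indicator across successive releases. Data are invented.

def trend_slope(values):
    """Least-squares slope of values indexed by release number 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
           sum((x - mean_x) ** 2 for x in xs)

if __name__ == "__main__":
    failures_per_kloc = [4.1, 3.2, 2.8, 2.1, 1.9, 1.5]   # one value per release
    slope = trend_slope(failures_per_kloc)
    verdict = "improving" if slope < 0 else "degrading or flat"
    print(f"trend slope across releases: {slope:.2f} ({verdict})")
```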


ieee international software metrics symposium | 2001

Investigation of logistic regression as a discriminant of software quality

Norman F. Schneidewind

This paper investigates the possibility that logistic regression functions (LRFs), when used in combination with previously developed Boolean discriminant functions (BDFs), would improve the quality classification ability of BDFs used alone; this was found to be the case. When the union of a BDF and an LRF was used to classify quality, the predictive accuracy of quality and inspection cost was improved over that of using either function alone for the Space Shuttle. Also, the LRFs proved useful for ranking the quality of modules in a build. The significance of these results is that very high quality-classification accuracy (1.25% error) can be obtained while reducing the inspection cost incurred in achieving high quality. This is particularly important for safety-critical systems. Because the methods are general and not particular to the Shuttle, they could be applied to other domains. A key part of the LRF development was a method for identifying the critical value (i.e. threshold) that could discriminate between high and low quality and at the same time constrain the cost of inspection to a reasonable value.
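A hedged sketch of the general idea follows: fit a one-metric logistic regression, choose a probability threshold (critical value) that keeps the fraction of modules flagged for inspection within a budget, and take the union of that decision with a Boolean rule on the raw metric. The data, budget, and cutoff are invented and stand in for the Shuttle analysis.

```python
import math

# Hedged sketch: one-metric logistic regression, a probability threshold that
# respects an inspection budget, and the union of the LRF decision with a
# Boolean rule. All data and cutoffs are illustrative.

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Gradient-descent fit of P(low quality | x) = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

if __name__ == "__main__":
    # Illustrative modules: a complexity metric and a low-quality label (1 = faulty).
    metric = [2, 3, 4, 5, 6, 8, 9, 11, 13, 15, 17, 20]
    faulty = [0, 0, 0, 0, 0, 1, 0,  1,  1,  1,  1,  1]
    w, b = fit_logistic(metric, faulty)

    # Critical value: lowest probability threshold that flags at most 50% of modules.
    budget = 0.5
    probs = sorted(predict(w, b, x) for x in metric)
    threshold = probs[int(len(probs) * (1 - budget))]

    # Union of the LRF decision and a Boolean rule on the raw metric.
    boolean_cutoff = 12    # hypothetical BDF critical value
    flagged = [x for x in metric
               if predict(w, b, x) >= threshold or x > boolean_cutoff]
    print(f"w={w:.2f} b={b:.2f} threshold={threshold:.2f} flag for inspection: {flagged}")
```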


IEEE Software | 1992

Applying reliability models to the space shuttle

Norman F. Schneidewind; Ted Keller

The experience of a team that evaluated many reliability models and tried to validate them for the on-board system software of the National Aeronautics and Space Administration's (NASA's) space shuttle is presented. It is shown that three separate but related functions comprise an integrated reliability program: prediction, control, and assessment. The application of the reliability model and the allocation of test resources as part of a testing strategy are discussed.


international symposium on software reliability engineering | 1997

Software metrics model for integrating quality control and prediction

Norman F. Schneidewind

A model is developed that is used to validate and apply metrics for quality control and quality prediction, with the objective of using metrics as early indicators of software quality problems. Metrics and quality factor data from the Space Shuttle flight software are used as an example. Our approach is to integrate quality control and prediction in a single model and to validate metrics with respect to a quality factor. Boolean discriminant functions (BDFs) were developed for use in the quality control and quality prediction process. BDFs provide good accuracy for classifying low-quality software because they include additional information for discriminating quality: critical values. Critical values are threshold values of metrics that are used to either accept or reject modules when the modules are inspected during the quality control process. A series of nonparametric statistical methods is also used in the approach. It is important to perform a marginal analysis when making a decision about how many metrics to use in the quality control and prediction process. We found that certain metrics are dominant in their effects on classifying quality and that additional metrics are not needed to accurately classify quality. This effect is called dominance. Related to the property of dominance is the property of concordance, which is the degree to which a set of metrics produces the same result in classifying software quality. A high value of concordance implies that additional metrics will not make a significant contribution to accurately classifying quality; hence, these metrics are redundant.
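The sketch below illustrates a Boolean discriminant function of the general form described here (reject a module for inspection if any metric exceeds its critical value) together with a simple concordance measure between two metric sets. Metrics, critical values, and modules are invented, not the validated Shuttle values.

```python
# Hedged sketch: a BDF that rejects a module if any metric exceeds its
# critical value, and a concordance measure between two metric sets.
# All thresholds and module data are illustrative.

def bdf_reject(module, critical_values):
    """True if any metric in critical_values exceeds its threshold for this module."""
    return any(module[name] > cutoff for name, cutoff in critical_values.items())

def concordance(modules, cv_a, cv_b):
    """Fraction of modules given the same accept/reject decision by both metric sets."""
    same = sum(bdf_reject(m, cv_a) == bdf_reject(m, cv_b) for m in modules)
    return same / len(modules)

if __name__ == "__main__":
    modules = [
        {"statements": 120, "nodes": 14, "paths": 6},
        {"statements": 480, "nodes": 55, "paths": 40},
        {"statements": 90,  "nodes": 9,  "paths": 3},
        {"statements": 300, "nodes": 38, "paths": 22},
    ]
    full_set = {"statements": 250, "nodes": 30, "paths": 15}   # three metrics
    dominant = {"statements": 250}                             # single dominant metric
    rejected = [i for i, m in enumerate(modules) if bdf_reject(m, full_set)]
    print("rejected modules:", rejected)
    print("concordance of dominant metric with full set:",
          concordance(modules, full_set, dominant))
```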

Collaboration


Dive into Norman F. Schneidewind's collaborations.
