
Publication


Featured research published by Carol S. Smidts.


IEEE Transactions on Software Engineering | 2003

A ranking of software engineering measures based on expert opinion

Ming Li; Carol S. Smidts

This research proposes a framework based on expert opinion elicitation, developed to select the software engineering measures that are the best software reliability indicators. The current research is based on the top 30 measures identified in an earlier study conducted by Lawrence Livermore National Laboratory. A set of ranking criteria and their levels were identified. The score of each measure for each ranking criterion was elicited through expert opinion and then aggregated into a single score using multi-attribute utility theory. The basic aggregation scheme selected was a linear additive scheme. A comprehensive sensitivity analysis was carried out, including variation of the ranking-criteria levels, of the weights, and of the aggregation schemes. The top-ranked measures were identified. Use of these measures in each software development phase can lead to a more reliable quantitative prediction of software reliability.
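The linear additive aggregation scheme described in the abstract can be sketched as a normalized weighted sum of per-criterion scores. The criterion names, scores, and weights below are purely illustrative, not values from the study:

```python
# Linear additive aggregation of expert-elicited criterion scores
# (the weighted-sum form of multi-attribute utility theory).
# All criterion names, scores, and weights are illustrative only.

def aggregate_score(scores, weights):
    """Combine per-criterion scores into one score: sum_i w_i * s_i,
    with weights normalized so they sum to 1."""
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in scores) / total_weight

# Hypothetical scores (0-1 scale) for a single measure.
scores = {"relevance": 0.9, "ease_of_collection": 0.6, "repeatability": 0.8}
weights = {"relevance": 0.5, "ease_of_collection": 0.2, "repeatability": 0.3}

print(aggregate_score(scores, weights))  # 0.9*0.5 + 0.6*0.2 + 0.8*0.3 = 0.81
```

Ranking a set of measures then amounts to sorting them by this aggregated score; the sensitivity analysis in the paper varies the weights and the aggregation scheme to check that the top ranks are stable.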


IEEE Transactions on Reliability | 1998

Software reliability modeling: an approach to early reliability prediction

Carol S. Smidts; Martin A. Stutzke; R.W. Stoddard

Models for predicting software reliability in the early phases of development are of paramount importance since they provide early identification of cost overruns, software development process issues, optimal development strategies, etc. A few models geared towards early reliability prediction, applicable to well defined domains, have been developed during the 1990s. However, many questions related to early prediction are still open, and more research in this area is needed, particularly for developing a generic approach to early reliability prediction. This paper presents an approach to predicting software reliability based on a systematic identification of software process failure modes and their likelihoods. A direct consequence of the approach and its supporting data collection efforts is the identification of weak areas in the software development process. A Bayes framework for the quantification of software process failure mode probabilities can be useful since it allows use of historical data that are only partially relevant to the software at hand. The key characteristics of the approach should apply to other software-development life-cycles and phases. However, it is unclear how difficult the implementation of the approach would be, and how accurate the predictions would be. Further research will help answer these questions.
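The Bayes framework mentioned above can be sketched under the common Beta-Binomial conjugate-prior assumption: partially relevant historical data set a (weakly weighted) prior on a failure-mode probability, and project observations update it. The prior parameters and counts below are invented for illustration, not values from the paper:

```python
# Bayesian update of a software-process failure-mode probability.
# Beta prior from historical data, Binomial likelihood from project data.
# All numbers are illustrative assumptions.

def beta_posterior(prior_a, prior_b, failures, trials):
    """Beta(a, b) prior + Binomial(failures, trials) likelihood
    -> Beta(a + failures, b + trials - failures) posterior."""
    return prior_a + failures, prior_b + (trials - failures)

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Prior from partially relevant history: roughly 2 failures in 20 opportunities.
a0, b0 = 2.0, 18.0
# Observed on the current project: 1 failure in 30 opportunities.
a1, b1 = beta_posterior(a0, b0, failures=1, trials=30)

print(beta_mean(a0, b0))  # prior mean: 2/20 = 0.1
print(beta_mean(a1, b1))  # posterior mean: 3/50 = 0.06
```

The posterior mean shifts toward the project's observed rate while the prior keeps the estimate stable when project data are sparse, which is the point of reusing partially relevant history.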


Reliability Engineering & System Safety | 1999

The Event Sequence Diagram framework for dynamic Probabilistic Risk Assessment

S. Swaminathan; Carol S. Smidts

Dynamic methodologies have become fairly established in academia. Their superiority over classical methods like Event Tree/Fault Tree techniques has been demonstrated. Despite this, dynamic methodologies have not enjoyed the support of the industry. One of the primary reasons for the lack of acceptance in the industry is that there is no easy way to qualitatively represent dynamic scenarios. This paper proposes to extend current Event Sequence Diagrams (ESDs) to allow modeling of dynamic situations. Under the proposed ESD representation, ESDs can be used in combination with dynamic methodology computational algorithms which will solve the underlying probabilistic dynamics equations. Once engineers are able to translate their knowledge of the system dynamics and accident evolution into simple ESDs, usage of dynamic methodologies will become more popular.


Reliability Engineering & System Safety | 1997

The IDA cognitive model for the analysis of nuclear power plant operator response under accident conditions. Part I: problem solving and decision making model

Carol S. Smidts; Song-Hua Shen; Ali Mosleh

This paper is the first of a series of papers describing IDA, a cognitive model for analysing the behaviour of nuclear power plant operators under accident conditions. The domain of applicability of the model is a relatively constrained environment where behaviour is significantly influenced by high levels of training and an explicit requirement to follow written procedures. IDA consists of a model for individual operator behaviour and a model for the control room operating crew expanded from the individual model. The model and its derivatives, such as an error taxonomy and a data collection approach, have been designed with the ultimate objective of becoming a quantitative method for human reliability analysis (HRA) in probabilistic risk assessment (PRA). The present paper gives a description of the main components of IDA, such as memory structure, goals, and problem solving and decision making strategies. It also identifies factors that are at the origin of transitions between goals or between strategies. These factors cover the effects of external conditions and the psychological state of the operator. The description is generic at first and then made specific to the nuclear power plant environment, and more precisely to abnormal conditions.


Reliability Engineering & System Safety | 1999

The mathematical formulation for the event sequence diagram framework

S. Swaminathan; Carol S. Smidts

The Event Sequence Diagram (ESD) framework allows modeling of dynamic situations of interest to PRA analysts. A qualitative presentation of the framework was given in an earlier article. The mathematical formulation for the components of the ESD framework is described in this article. The formulation was derived from the basic probabilistic dynamics equations. For tackling certain dynamic non-Markovian situations, the probabilistic dynamics framework was extended. The mathematical treatment of dependencies among fault trees in a multi-layered ESD framework is also presented.
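As a toy illustration of the Markovian special case of the probabilistic dynamics equations the ESD formulation builds on, the sketch below propagates state probabilities through a discrete-time transition matrix. The states and transition probabilities are invented for illustration:

```python
# Toy discrete-time state-probability propagation: the Markovian special
# case of the probabilistic dynamics equations underlying the ESD framework.
# States and transition probabilities are illustrative assumptions.

# States: 0 = nominal, 1 = degraded, 2 = failed (absorbing).
P = [
    [0.95, 0.04, 0.01],  # from nominal
    [0.00, 0.90, 0.10],  # from degraded
    [0.00, 0.00, 1.00],  # from failed
]

def step(p, P):
    """One transition: p'_j = sum_i p_i * P[i][j]."""
    return [sum(p[i] * P[i][j] for i in range(len(p)))
            for j in range(len(P[0]))]

p = [1.0, 0.0, 0.0]  # start in the nominal state with certainty
for _ in range(10):
    p = step(p, P)

print(p[2])  # probability of having failed within 10 steps
```

The non-Markovian extensions the paper discusses arise precisely where transition rates depend on sojourn times or process variables, so this constant-matrix recursion no longer applies directly.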


IEEE Transactions on Reliability | 2001

A stochastic model of fault introduction and removal during software development

Martin A. Stutzke; Carol S. Smidts

Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz., as early as possible in the software development life-cycle. The model parameters were derived based on data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults and to calculate the consequent failure intensity value of a small software product developed using a waterfall software development life-cycle.
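A minimal Monte Carlo sketch in the spirit of the model described above: faults are introduced by development errors in each phase, detection removes some of them, and imperfect debugging re-injects a few. All rates and counts are invented for illustration, not the paper's parameters:

```python
import random

# Toy stochastic simulation of fault introduction and imperfect removal
# across development phases. All rates are illustrative assumptions.

random.seed(0)

def simulate(phases=(5, 8, 12), p_detect=0.7, p_bad_fix=0.1):
    """Return the number of faults remaining after all phases.

    phases    -- faults introduced by development errors per phase
    p_detect  -- chance each fault is found by inspection/testing
    p_bad_fix -- chance a fix (a debugging error) introduces a new fault
    """
    remaining = 0
    for introduced in phases:
        remaining += introduced
        detected = sum(1 for _ in range(remaining) if random.random() < p_detect)
        bad_fixes = sum(1 for _ in range(detected) if random.random() < p_bad_fix)
        remaining = remaining - detected + bad_fixes
    return remaining

# Averaging many runs approximates the expected residual fault count,
# which is about 5.8 under these illustrative rates.
runs = [simulate() for _ in range(10_000)]
print(sum(runs) / len(runs))
```

The residual fault count is what drives a failure intensity estimate; the SLIM-based procedure in the paper would adjust the rates themselves for influencing factors such as experience or schedule pressure.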


Nuclear Engineering and Design | 1997

A methodology for collection and analysis of human error data based on a cognitive model: IDA

Song-Hua Shen; Carol S. Smidts; Ali Mosleh

This paper presents a model-based human error taxonomy and data collection methodology. The underlying model, IDA (described in two companion papers), is a cognitive model of behavior developed for analysis of the actions of nuclear power plant operating crews during abnormal situations. The taxonomy is established with reference to three external reference points (i.e. plant status, procedures, and crew) and four reference points internal to the model (i.e. information collected, diagnosis, decision, action). The taxonomy helps the analyst: (1) recognize errors as such; (2) categorize the error in terms of generic characteristics such as ‘error in selection of problem solving strategies’; and (3) identify the root causes of the error. The data collection methodology is summarized in post-event operator interview and analysis summary forms. The root cause analysis methodology is illustrated using a subset of an actual event. Statistics that extract generic characteristics of error-prone behaviors and error-prone situations are presented. Finally, applications of the human error data collection are reviewed. A primary benefit of this methodology is to define better symptom-based and other auxiliary procedures, with associated training, to minimize or preclude certain human errors. It also helps in the design of control rooms and in the assessment of human error probabilities in the probabilistic risk assessment framework.


Archive | 1994

Probabilistic Dynamics: The Mathematical and Computing Problems Ahead

Jacques Devooght; Carol S. Smidts

The methodology of probabilistic dynamics viewed as a continuous event tree theory is reviewed and other existing methods are shown to be particular cases corresponding to definite assumptions. Prospects for improvement of related numerical algorithms are examined.


international symposium on software reliability engineering | 2000

Ranking software engineering measures related to reliability using expert opinion

Ming Li; Carol S. Smidts; R. W. Brill

The field of software engineering measurement appears to the unfamiliar eye as a chaotic environment lacking unifying principles and rigor. The number of software engineering measures developed over the years is stupefying and keeps increasing. Software engineering measures relate to multiple aspects of the software development process and product. Software development organizations typically select a small number of such software engineering measures to manage their development processes and products. The research presented in this paper is an attempt to help software development organizations identify the software engineering measures that are the best predictors of software reliability. The research is based on the top 30 measures identified in an earlier study carried out by Lawrence Livermore National Laboratory (J.D. Lawrence et al., Technical Report UCRL-ID-136035, 1998). The set of ranking criteria was modified to fit the needs of the study. The score of each measure for each ranking criterion was elicited through expert opinion and then aggregated into a single score using multi-attribute utility theory. The basic aggregation scheme selected was a linear additive scheme. A comprehensive sensitivity analysis was carried out, including variation of the criterion levels, of the weights, and of the aggregation schemes.


international symposium on software reliability engineering | 1996

Software reliability models: an approach to early reliability prediction

Carol S. Smidts; Robert Stoddard; Martin A. Stutzke

Software reliability prediction models are of paramount importance since they provide early identification of cost overruns, software development process issues, optimal development strategies, etc. Existing prediction models were mostly developed 5 to 10 years ago and, hence, have become obsolete. Furthermore, they are not based on a deep knowledge and understanding of the software development process. This limits their predictive power. This paper presents an approach to the prediction of software reliability based on a systematic identification of software process failure modes and their likelihoods. A direct consequence of the approach and its supporting data collection efforts is the identification of weak areas in the software development process. A Bayesian framework for the quantification of software process failure mode probabilities is recommended since it allows usage of historical data that are only partially relevant to the software at hand. The approach is applied to the requirements analysis phase.

Collaboration


Dive into Carol S. Smidts's collaborations.

Top Co-Authors

Ali Mosleh

University of California

Boyuan Li

Ohio State University
