Martin L. Shooman
New York University
Publications
Featured research published by Martin L. Shooman.
IEEE Transactions on Reliability | 1976
Martin L. Shooman; M.I. Bolsky
This paper reports the results of an experiment whose objectives were to: 1. Develop and utilize a set of terms for describing possible types of errors, their nature, and their frequency; 2. Perform a pilot study to determine if data of the type reported in this paper could be collected; 3. Investigate the error density and its correspondence to predictions from previous data reported; and 4. Develop data on how resources are expended in debugging. A program of approximately 4k machine instructions was chosen. Programmers were asked to fill out for each error, in addition to the regular Trouble Report (TR)/Correction Report (CR) form, a special Supplementary TR/CR form for the purposes of this experiment. Both the regular and the Supplementary forms were divided into two sections. Each form consisted of a single sheet; the upper half of each form was the Trouble Report (TR), and the lower half of each form was the Correction Report (CR). Sixty-three regular and Supplementary TR/CR forms were completed during the Test and Integration phase of the program; these forms represented a little over 1.5% of the total number of machine instructions of the program (in good agreement with the 1% to 2% range noted in previous studies). A large fraction of the errors was found by hand processing (without the aid of a computer), which was much cheaper than machine testing.
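As a quick sanity check on the error-density figure quoted above, the calculation below simply divides the reported error count by the approximate program size; it is illustrative and not part of the paper.

```python
# Illustrative check of the error-density figure quoted in the abstract.
errors_reported = 63      # TR/CR forms completed during Test and Integration
program_size = 4_000      # approximately 4k machine instructions

print(f"error density ~= {errors_reported / program_size:.1%} of instructions")
# ~1.6%, consistent with the 1% to 2% range noted in previous studies
```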
IEEE Transactions on Reliability | 1970
Martin Messinger; Martin L. Shooman
In many modern complex systems the problem of achieving high reliability leads to the use of interchangeable modular components accompanied by a stock of spare parts. This paper examines, compares, and assesses several of the techniques presented in the literature for allocating the numbers of spares of each part type to be stocked in order to maximize the system reliability subject to constraints on resources (i.e., weight, volume, cost, etc.). The problem of optimum spares allocation is complicated since resources are consumed in a discrete fashion and the expression for the system reliability is a nonlinear transcendental function. The classical dynamic programming algorithm produces all optimal spares allocations; however, the solution can become computationally intractable even with the aid of a modern high-speed digital computer. In the case of multiple constraints the time problem is vastly exacerbated. In such a case one must turn to a procedure that yields a near-optimal solution in a reasonable amount of computer time. Two approximate methods discussed in this paper are the incremental reliability per pound algorithm and the Lagrange multiplier algorithm. These algorithms are readily adaptable to handle multiple constraints. Computer programs are developed for each of the three optimization algorithms and are utilized to obtain the spares allocation for a few systems. The optimization theory presented is directly applicable to series or parallel systems. A concluding example illustrates how this can be extended to certain series-parallel systems.
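A minimal sketch of the "incremental reliability per pound" greedy heuristic mentioned above, assuming Poisson-distributed spare demand for each part type and a series system. The part data, function names, and single weight constraint are hypothetical, and this is not the paper's program.

```python
"""Sketch of the incremental-reliability-per-pound heuristic for spares
allocation. Assumes each part type's demand for spares over the mission is
Poisson, and the system succeeds if no part type runs out of spares."""
import math

def part_reliability(mean_demand, spares):
    # P(Poisson demand <= number of spares stocked)
    return sum(math.exp(-mean_demand) * mean_demand**k / math.factorial(k)
               for k in range(spares + 1))

def greedy_allocation(mean_demands, weights, weight_budget):
    spares = [0] * len(mean_demands)
    used = 0.0
    while True:
        best, best_gain = None, 0.0
        for i, (m, w) in enumerate(zip(mean_demands, weights)):
            if used + w > weight_budget:
                continue
            # Gain in log system reliability per pound of one more spare of type i
            gain = (math.log(part_reliability(m, spares[i] + 1))
                    - math.log(part_reliability(m, spares[i]))) / w
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return spares
        spares[best] += 1
        used += weights[best]

# Hypothetical part types: expected spare demands and weights (lb)
print(greedy_allocation(mean_demands=[0.5, 1.2, 0.3],
                        weights=[2.0, 5.0, 1.0],
                        weight_budget=20.0))
```

The same loop extends to multiple constraints by checking every resource budget before a candidate spare is accepted, which is the property that makes such heuristics attractive when multi-constraint dynamic programming becomes intractable.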
IEEE Transactions on Reliability | 1984
Martin L. Shooman
It is noted that modern complex computer-controlled systems frequently fail due to latent software design errors encountered as the software processes various input combinations during operation. Probabilistic models for such errors and their frequency of occurrence lead to software reliability functions and mean-time-between-software-error metrics. The author reviews the progress made in this field since 1970 and focuses on the successes which have been achieved with existing models. Future progress is seen as depending heavily on the establishment of a database of software reliability information. This is necessary so that early, more accurate, and widespread use can be made of the proven prediction models which now exist. A number of requirements for creating an adequate database are delineated and discussed.
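For concreteness, here is a hedged sketch of the kind of error-count reliability model this literature uses, in which the hazard is proportional to the residual error content per instruction. The proportionality constant and the numbers are hypothetical, and the formulation is generic rather than taken from the paper.

```python
"""Sketch of a generic error-count software reliability model: the failure
rate (hazard) is proportional to the number of residual errors per
instruction, and reliability over an operating interval follows from a
constant hazard. All parameter values are hypothetical."""
import math

def residual_hazard(total_errors, corrected_errors, instructions, k):
    # Failure rate proportional to remaining error content per instruction
    return k * (total_errors - corrected_errors) / instructions

def reliability(t, hazard):
    # Constant-hazard reliability over an operating interval of length t
    return math.exp(-hazard * t)

# Hypothetical values: 100 initial errors, 60 corrected, 50,000 instructions
z = residual_hazard(total_errors=100, corrected_errors=60,
                    instructions=50_000, k=5.0)
print(f"hazard = {z:.2e} failures/hour, MTBF = {1/z:.0f} hours, "
      f"R(100 h) = {reliability(100, z):.3f}")
```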
SIGPLAN Notices | 1975
Martin L. Shooman; M. I. Bolsky
In order to develop some basic information on software errors, an experiment in collecting data on the types and frequencies of such errors was conducted at Bell Laboratories. The paper reports the results of this experiment, whose objectives were to: (1) Develop and utilize a set of terms for describing possible types of errors, their nature, and their frequency; (2) Perform a pilot study to determine if data of the type reported in this paper could be collected; (3) Investigate the error density and its correspondence to predictions from previous data reported; (4) Develop data on how resources are expended in debugging. A program of approximately 4K machine instructions (final size) was chosen. Programmers were asked to fill out for each error, in addition to the regular Trouble Report/Correction Report (TR/CR) form, a special Supplementary TR/CR form for the purposes of this experiment. Sixty-three TR/CR and Supplementary forms were completed during the Test and Integration phase of the program. In general, the data collected were felt to be accurate enough for the purposes of the analyses presented. The 63 forms represented a little over 1.5% of the total number of machine instructions of the program, in good agreement with the 1% to 2% range noted in previous studies. It was discovered that a large percentage of the errors was found by hand processing (without the aid of a computer). This method was found to be much cheaper than techniques involving machine testing.
IEEE Transactions on Reliability | 1976
Martin L. Shooman; Ashok K. Trivedi
A many-state Markov model has been developed for the purpose of providing performance criteria for computer software. The model provides estimates and closed-form predictions of the availability and of the most probable number of errors that will have been corrected at a given time in the operation of a large software package. The model is based on constant rates for error occurrence λ and error correction μ. An interesting application case is when λ and μ are functions of the state of debugging achieved. This case is discussed and solved numerically. Extensions and modifications of the basic model are briefly discussed.
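As a point of reference for the constant-rate case, a single up/down pair of states with constant λ and μ has the familiar closed-form availability. This is the textbook two-state special case, sketched here with hypothetical rates; it is not the paper's full many-state solution.

```python
"""Sketch: closed-form availability for a single up/down Markov pair with
constant error-occurrence rate lam and correction rate mu. This is the
standard two-state special case, not the paper's many-state model."""
import math

def availability(t, lam, mu):
    # A(t) = steady-state term + transient term decaying at rate (lam + mu)
    return mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

lam, mu = 0.01, 0.5          # hypothetical rates (per hour)
for t in (0, 1, 10, 100):
    print(f"A({t:>3}) = {availability(t, lam, mu):.4f}")
# A(0) = 1 and A(t) -> mu/(lam+mu) ~= 0.980 as t grows
```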
the international conference | 1975
Ashok K. Trivedi; Martin L. Shooman
A many-state Markov model has been developed for the purpose of providing various performance criteria for computer software. The software system under consideration is assumed to be fairly large, of the order of 10^5 words of code, so that statistical deductions become meaningful, and is assumed to initially contain an unknown number of unknown bugs. The model provides estimates and predictions of the most probable number of errors that will have been corrected at a given time t in the operation of this software package based on preliminary modeling of the error occurrence rate λ as well as the error correction policy μ. The model also provides predictions for the availability A(t) and for the reliability R(t) of the system. The differential equations corresponding to the Markov model are solved for the case when λ and μ are constant using an exact (closed-form) solution. The numerical solution is also obtained for this case for verification and demonstrative purposes. The more interesting and important case, from an applications point of view, is that when λ and μ are not constant, but rather functions of the state of debugging achieved. This case is solved numerically only, since the exact solution is cumbersome. It is also demonstrated that the numerical solution is superior to the so-called exact solution. Finally, some extensions and modifications of the basic Markov model are briefly discussed.
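A rough numerical sketch of a many-state debugging chain of this general kind with state-dependent rates, integrated with a simple Euler scheme. The rate functions, the initial error count, and the step size are all hypothetical; the code only illustrates the structure of such a model and does not reproduce the paper's results.

```python
"""Euler-integration sketch of a many-state debugging Markov chain: in
up-state U_k (k errors already corrected) a failure occurs at rate lam(k);
in down-state D_k the error is corrected at rate mu(k), moving the system
to U_{k+1}. Availability is the total up-state probability. All numbers
are hypothetical."""

N = 20                                # hypothetical initial number of errors
lam = lambda k: 0.05 * (N - k)        # occurrence rate falls as errors are removed
mu = lambda k: 0.5                    # constant correction rate (per hour)

up = [0.0] * (N + 1)                  # probabilities of U_0 .. U_N
up[0] = 1.0                           # start in U_0: system up, no errors corrected
down = [0.0] * N                      # probabilities of D_0 .. D_{N-1}
dt, t_end = 0.01, 200.0

t = 0.0
while t < t_end:
    new_up, new_down = up[:], down[:]
    for k in range(N):
        flow_fail = lam(k) * up[k] * dt     # U_k -> D_k
        flow_fix = mu(k) * down[k] * dt     # D_k -> U_{k+1}
        new_up[k] -= flow_fail
        new_down[k] += flow_fail - flow_fix
        new_up[k + 1] += flow_fix
    up, down = new_up, new_down
    t += dt

availability = sum(up)
mean_corrected = (sum(k * p for k, p in enumerate(up))
                  + sum(k * p for k, p in enumerate(down)))
print(f"A({t_end:.0f} h) ~= {availability:.4f}, "
      f"mean errors corrected ~= {mean_corrected:.2f}")
```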
Technical Symposium on Computer Science Education | 1983
Martin L. Shooman
It has become abundantly clear to all that during the last two decades of the twentieth century and long into the twenty-first, software will be both the heart and the binding force of all our large technological developments. Large software systems began to appear two decades ago. Within the last decade, leaders in industry, government, and the universities have realized that software can represent up to 90% of the cost of large computer projects. During this time period, the term Software Engineering has emerged, which can be defined as: Software Engineering: The collection of analysis, design, test, documentation, and management techniques needed to produce timely software within budgeted cost. One of the major challenges facing computer science departments is how to teach software engineering to the large number of B.S. and M.S. students who are now studying Computer Science.
International Symposium on Software Reliability Engineering | 1991
Martin L. Shooman
A discussion is given of a new micro model which allows reliability estimation to begin at the module test phase, continue during integration testing, and carry over to field deployment. The model first decomposes the structure of the software into a set of execution paths. The failure rate of the software system is related to the frequency and time of path traversal, and the probability of encountering an error during traversal. A second stage of decomposition is necessary to relate the path reliability to the module reliabilities. In the second decomposition the failure probabilities are expressed by combinatorial expressions involving the probabilities of failure of the individual modules. Since the basic model decomposes the structure into execution paths, it can be used to apportion reliabilities and test efforts among the various execution paths. The optimum allocation is computed for a particular effort model and applied to a numerical example.
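A hedged sketch of the two-stage decomposition described above, under simplifying assumptions (independent module failures, known path frequencies and traversal times). The module probabilities and path mix are hypothetical, and this is not the paper's model code.

```python
"""Sketch of a path-decomposition failure-rate estimate: a path fails on a
traversal if any module on it fails (second-stage decomposition), and the
system failure rate is expected failures per unit of execution time.
All numbers are hypothetical."""
from math import prod

# Hypothetical per-module probability of failure on a single execution
module_fail = {"A": 1e-4, "B": 5e-5, "C": 2e-4, "D": 1e-5}

# Hypothetical execution paths: (modules traversed, relative frequency, traversal time in s)
paths = [
    (["A", "B"],      0.6, 0.020),
    (["A", "C", "D"], 0.3, 0.050),
    (["A", "B", "C"], 0.1, 0.035),
]

def path_fail_probability(modules):
    # Path fails if any module on it fails, assuming independent module failures
    return 1.0 - prod(1.0 - module_fail[m] for m in modules)

expected_failures_per_run = sum(f * path_fail_probability(mods) for mods, f, _ in paths)
expected_time_per_run = sum(f * t for _, f, t in paths)
failure_rate = expected_failures_per_run / expected_time_per_run   # failures per second

print(f"system failure rate ~= {failure_rate:.3e} failures/s, "
      f"MTTF ~= {1/failure_rate:.0f} s")
```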
IEEE Transactions on Reliability | 1968
Martin L. Shooman
The generalized stress-strength model, which is prevalent in the current literature, is perhaps the closest that analysts have come to a general physical model. To obtain a failure density function and associated hazard function, one must assume a certain probability distribution for the part strength and a particular amplitude distribution and frequency-of-occurrence distribution for the part stress. If one assumes a normal strength distribution and Poisson-distributed stress occurrence times with normally distributed amplitudes, this leads to an exponential failure density function and a constant hazard. Such a model is probably best suited for situations in which the part generally lasts a long time and fails only when, on occasion, a large stress occurs. In many situations the failure of parts seems to fit a different pattern. The part is operated at a nearly constant stress level; however, the part strength gradually deteriorates with time. As time goes on, the rate of deterioration increases sharply as wear-out is reached, causing an increase in hazard. A probabilistic model which fits this hypothesis is a constantly applied stress and a Rayleigh-distributed part strength. The parameter of the Rayleigh distribution is allowed to increase in an exponential fashion with time, which produces the strength-deterioration effect. Basically, the failure rate turns out to depend on the square of the applied stress; however, if the strength-deterioration rate is allowed to be a function of the input stress, other behaviors are predicted.
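The constant-hazard case described above reduces to the standard normal-normal interference calculation: the hazard is the stress occurrence rate times the probability that a stress amplitude exceeds the strength. A sketch with hypothetical numbers (not the paper's data):

```python
"""Sketch of the constant-hazard stress-strength case: Poisson-occurring
normally distributed stresses against a normally distributed strength.
The formulas are the standard interference result; values are hypothetical."""
import math

def p_stress_exceeds_strength(mu_stress, sd_stress, mu_strength, sd_strength):
    # P(X > Y) for independent normal stress X and normal strength Y
    z = (mu_stress - mu_strength) / math.hypot(sd_stress, sd_strength)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

f = 2.0                 # hypothetical stress occurrences per year (Poisson rate)
p = p_stress_exceeds_strength(mu_stress=50.0, sd_stress=10.0,
                              mu_strength=90.0, sd_strength=8.0)
hazard = f * p          # constant hazard -> exponential failure density
print(f"p = {p:.2e}, hazard = {hazard:.2e}/yr, R(10 yr) = {math.exp(-hazard*10):.4f}")
```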
IEEE Transactions on Reliability | 1970
Martin L. Shooman
Many practicing engineers model their systems using reliability diagrams, while others use fault-tree analysis. The theoretical equivalence of the two techniques is described. System reliability can be expressed in two ways, as a probability of success or as a probability of failure, in terms of the tie-sets (forward paths) of a reliability diagram. Similarly, one can write two other expressions in terms of the cut-sets of the system reliability diagram. If one uses the fault-tree analysis approach, the probability of failure is written in terms of element failures by applying the rules of symbolic logic (union and intersection). This equation is identical to the tie-set probability-of-failure equation. Also, by applying DeMorgan's theorem to the fault-tree probability-of-failure equation, one obtains the tie-set probability-of-success equation. Thus the two techniques are shown to be identical. The choice between the techniques is a matter of convenience and familiarity.
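A small illustration of the equivalence on a hypothetical three-component system (components 1 and 2 in parallel, in series with component 3), enumerating all component states and evaluating both the tie-set success condition and the cut-set failure condition. The probabilities are made up and the code is not from the paper.

```python
"""Sketch: tie-set vs. cut-set formulations on a hypothetical system where
components 1 and 2 are in parallel, in series with component 3.
Tie-sets: {1,3}, {2,3}; minimal cut-sets: {1,2}, {3}."""
from itertools import product

p = {1: 0.9, 2: 0.8, 3: 0.95}            # hypothetical success probabilities
tie_sets = [{1, 3}, {2, 3}]
cut_sets = [{1, 2}, {3}]

p_success = p_failure = 0.0
for states in product([0, 1], repeat=3):  # enumerate all up/down combinations
    up = {i + 1 for i, s in enumerate(states) if s}
    prob = 1.0
    for i in p:
        prob *= p[i] if i in up else (1 - p[i])
    if any(t <= up for t in tie_sets):            # some tie-set fully up -> success
        p_success += prob
    if any(c.isdisjoint(up) for c in cut_sets):   # some cut-set fully down -> failure
        p_failure += prob

print(f"P(success) = {p_success:.4f}, P(failure) = {p_failure:.4f}, "
      f"sum = {p_success + p_failure:.4f}")       # the two formulations are complementary
```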