
Publication


Featured research published by Bev Littlewood.


Applied Statistics | 1973

A Bayesian Reliability Growth Model for Computer Software

Bev Littlewood; J. L. Verrall

A Bayesian reliability growth model is presented which includes special features designed to reproduce special properties of the growth in reliability of an item of computer software (program). The model treats the situation where the program is sufficiently complete to work for continuous time periods between failures, and gives a repair rule for the action of the programmer at such failures. Analysis is based entirely upon the length of the periods of working between repairs and failures, and does not attempt to take account of the internal structure of the program. Methods of inference about the parameters of the model are discussed.
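
The abstract does not reproduce the model's equations. As a rough illustration only, the sketch below simulates the form in which the Littlewood-Verrall model is usually stated: the i-th inter-failure time is exponential with rate lambda_i, and lambda_i has a Gamma prior with shape alpha and rate parameter psi(i), where psi grows with i. The linear psi(i) = beta0 + beta1*i used here is a conventional choice, not something taken from this page.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_lv(n_failures, alpha, beta0, beta1):
        # t_i | lambda_i ~ Exponential(rate = lambda_i)
        # lambda_i ~ Gamma(shape = alpha, rate = psi(i)), psi(i) = beta0 + beta1*i
        # As i grows, psi(i) grows, lambda_i shrinks stochastically, and the
        # expected gap between failures rises: reliability growth.
        gaps = []
        for i in range(1, n_failures + 1):
            psi = beta0 + beta1 * i
            lam = rng.gamma(shape=alpha, scale=1.0 / psi)
            gaps.append(rng.exponential(1.0 / lam))
        return np.array(gaps)

    t = simulate_lv(n_failures=200, alpha=3.0, beta0=10.0, beta1=5.0)
    # E[t_i] = psi(i) / (alpha - 1), so mean gaps grow roughly linearly in i.
    print("mean of first 50 gaps:", t[:50].mean())
    print("mean of last 50 gaps :", t[-50:].mean())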


IEEE Transactions on Software Engineering | 1986

Evaluation of competing software reliability predictions

Abdallah A. Abdel-Ghaly; P. Y. Chan; Bev Littlewood

Different software reliability models can produce very different answers when called on to predict future reliability in a reliability growth context. Users need to know which, if any, of the competing predictions are trustworthy. Some techniques are presented which form the basis of a partial solution to this problem. Rather than attempting to decide which model is generally best, the approach adopted allows a user to decide on the most appropriate model for each application.
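
One of the paper's techniques is the u-plot: each one-step-ahead predictive distribution is evaluated at the inter-failure time it was meant to predict, and if the predictions are accurate the resulting u-values are uniform on (0, 1). A minimal sketch of that idea follows; the two competing "models" and the data are invented for illustration.

    import numpy as np

    def u_values(pred_cdfs, observed):
        # Probability-integral transform: u_i = F_i(t_i) for each one-step-ahead
        # predictive CDF F_i and the inter-failure time t_i it tried to predict.
        return np.array([F(t) for F, t in zip(pred_cdfs, observed)])

    def kolmogorov_distance(u):
        # Max vertical distance between the empirical CDF of the u-values and
        # the CDF of a uniform; small distance suggests trustworthy predictions.
        u = np.sort(u)
        n = len(u)
        hi = np.abs(np.arange(1, n + 1) / n - u)
        lo = np.abs(u - np.arange(0, n) / n)
        return max(hi.max(), lo.max())

    # Toy comparison: a predictor with the right exponential rate versus one
    # that is systematically optimistic (expects longer gaps than occur).
    rng = np.random.default_rng(1)
    true_rate = 0.5
    t = rng.exponential(1.0 / true_rate, size=100)
    good = [lambda x, r=true_rate: 1 - np.exp(-r * x)] * 100
    optimistic = [lambda x, r=true_rate / 4: 1 - np.exp(-r * x)] * 100

    print("KS distance, good model      :", kolmogorov_distance(u_values(good, t)))
    print("KS distance, optimistic model:", kolmogorov_distance(u_values(optimistic, t)))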


Communications of the ACM | 1993

Validation of ultrahigh dependability for software-based systems

Bev Littlewood; Lorenzo Strigini

Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
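
The testing limit referred to here can be made concrete with a standard back-of-the-envelope calculation (the numerical target below is illustrative, not quoted from the paper): after t hours of failure-free operation, the classical 99% upper confidence bound on a constant failure rate is -ln(0.01)/t, roughly 4.6/t, so demonstrating a rate of 10^-9 per hour by testing alone would require several billion failure-free hours.

    import math

    def upper_bound_rate(test_hours, confidence=0.99):
        # Upper confidence bound on a constant failure rate lambda after
        # `test_hours` of failure-free operation: solve exp(-lambda*t) = 1 - c.
        return -math.log(1.0 - confidence) / test_hours

    def hours_needed(target_rate, confidence=0.99):
        # Failure-free test time needed before rate <= target_rate can be claimed.
        return -math.log(1.0 - confidence) / target_rate

    print("bound after 10,000 failure-free hours:", upper_bound_rate(1e4))
    print("hours needed for 1e-9 per hour       :", hours_needed(1e-9))
    print("which is about %.0f years of continuous testing"
          % (hours_needed(1e-9) / 8766.0))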


IEEE Transactions on Reliability | 1981

Stochastic Reliability-Growth: A Model for Fault-Removal in Computer-Programs and Hardware-Designs

Bev Littlewood

An assumption commonly made in early models of software reliability is that the failure rate of a program is a constant multiple of the (unknown) number of faults remaining. This implies that all faults contribute the same amount to the failure rate of the program. The assumption is challenged and an alternative proposed. The suggested model results in earlier fault-fixes having a greater effect than later ones (the faults which make the greatest contribution to the overall failure rate tend to show themselves earlier, and so are fixed earlier), and the DFR property between fault fixes (assurance about programs increases during periods of failure-free operation, as well as at fault fixes). The model is tractable and allows a variety of reliability measures to be calculated. Predictions of total execution time to achieve a target reliability, and total number of fault fixes to target reliability, are obtained. The model might also apply to hardware reliability growth resulting from the elimination of design errors.
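
The mechanism, that the faults contributing most to the failure rate tend to show themselves (and get fixed) first, can be illustrated with a small simulation. The Gamma distribution of per-fault rates below follows the spirit of the model; the specific parameter values are invented.

    import numpy as np

    rng = np.random.default_rng(2)

    def mean_rate_removed(n_faults=50, shape=1.5, scale=0.02, n_runs=2000):
        # Each fault has its own occurrence rate, drawn from a Gamma distribution
        # (the paper's alternative to "all faults are equal"). Faults reveal
        # themselves via independent exponential clocks, so high-rate faults
        # tend to be seen, and fixed, first.
        total = np.zeros(n_faults)
        for _ in range(n_runs):
            rates = rng.gamma(shape, scale, size=n_faults)
            order = np.argsort(rng.exponential(1.0 / rates))  # manifestation order
            total += rates[order]
        return total / n_runs  # mean rate of the k-th fault to be fixed

    avg = mean_rate_removed()
    print("mean rate removed by 1st fix :", avg[0])
    print("mean rate removed by 25th fix:", avg[24])
    print("mean rate removed by last fix:", avg[-1])
    # Each successive fix removes less failure rate on average: reliability
    # growth with diminishing returns, as the model predicts.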


IEEE Transactions on Software Engineering | 1989

Conceptual modeling of coincident failures in multiversion software

Bev Littlewood; Douglas R. Miller

Work by D.E. Eckhardt and L.D. Lee (1985) shows that independently developed program versions fail dependently. The authors show that there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better than independent failure behavior. The authors formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results which do not depend on constraints which cannot be verified in practice.
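
The effect of forced diversity can be shown numerically. In the Eckhardt-Lee setup, each input x has a "difficulty" theta(x), the probability that a randomly developed version fails on x; two versions from the same methodology fail together with probability E[theta^2] >= (E[theta])^2, so their failures are positively correlated. In the Littlewood-Miller generalization, versions from methodologies A and B fail together with probability E[theta_A * theta_B], which can drop below E[theta_A] * E[theta_B], i.e. better than independence, when the methodologies find different inputs hard. A toy demand profile (all numbers invented):

    import numpy as np

    # Three input classes, used with equal probability.
    p = np.array([1/3, 1/3, 1/3])

    # "Difficulty": probability that a randomly developed version fails on the
    # input. Methodology A struggles on input 0, methodology B on input 1.
    theta_A = np.array([0.10, 0.01, 0.01])
    theta_B = np.array([0.01, 0.10, 0.01])

    pA = p @ theta_A                    # marginal failure prob. of an A-version
    pB = p @ theta_B
    both_AA = p @ (theta_A * theta_A)   # two independently built A-versions
    both_AB = p @ (theta_A * theta_B)   # one A-version plus one B-version

    print("independence would give (A,A):", pA * pA)
    print("actual P(both fail)    (A,A):", both_AA)   # worse than independent
    print("independence would give (A,B):", pA * pB)
    print("actual P(both fail)    (A,B):", both_AB)   # better than independent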


IEEE Transactions on Reliability | 1979

Software Reliability Model for Modular Program Structure

Bev Littlewood

The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.
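
A sketch of the limiting failure-rate calculation, under the usual reading of this model: the overall rate is the time-weighted sum of the module failure rates plus the frequency-weighted sum of the interface failure probabilities. The three-module configuration and all numeric values below are invented for illustration.

    import numpy as np

    # Embedded Markov chain over 3 modules: P[i, j] is the probability that
    # control transfers from module i to module j (toy values).
    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.4, 0.6, 0.0]])
    m = np.array([2.0, 5.0, 1.0])        # mean sojourn time in each module
    lam = np.array([1e-4, 2e-5, 5e-4])   # failure rate while executing module i
    nu = np.full((3, 3), 1e-6)           # prob. that an i->j transfer fails
    np.fill_diagonal(nu, 0.0)

    # Stationary distribution of the embedded chain: pi = pi @ P, sum(pi) = 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()

    cycle = pi @ m                       # mean time per control transfer
    a = pi * m / cycle                   # long-run fraction of time per module
    b = pi[:, None] * P / cycle          # i->j transfers per unit time

    rate = a @ lam + np.sum(b * nu)      # time-weighted module rates plus
                                         # frequency-weighted interface failures
    print("long-run time fractions:", a)
    print("overall failure rate   :", rate)
    print("approximate MTBF       :", 1.0 / rate)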


IEEE Transactions on Software Engineering | 1980

Theories of Software Reliability: How Good Are They and How Can They Be Improved?

Bev Littlewood

An examination of the assumptions used in early bug-counting models of software reliability shows them to be deficient. Suggestions are made to improve modeling assumptions and examples are given of mathematical implementations. Model verification via real-life data is discussed and minimum requirements are presented. An example shows how these requirements may be satisfied in practice. It is suggested that current theories are only the first step along what threatens to be a long road.


ACM Computing Surveys | 2001

Modeling software design diversity: a review

Bev Littlewood; Peter Popov; Lorenzo Strigini

Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. While there is clear evidence that the approach can be expected to deliver some increase in reliability compared to a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown to be untenable: assessment of the actual level of dependence present is therefore needed, and this is difficult. In this tutorial, we survey the modeling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is one of designers, assessors, and project managers with only a basic knowledge of probabilities, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.


IEEE Transactions on Software Engineering | 1990

Recalibrating software reliability models

Sarah Brocklehurst; P. Y. Chan; Bev Littlewood; John Snell

There is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. A technique of analyzing predictive accuracy called the u-plot allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a very general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and their accuracy in a particular application can be judged using the earlier methods. The generality of this approach suggests its use whenever a software reliability model is used. Indeed, although this work arose from the need to address the poor performance of software reliability models, it is likely to have applicability in other areas such as reliability growth modeling for hardware.
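
A minimal sketch of the recalibration idea under one simple choice of estimator: form the u-values of earlier raw predictions, let G be their empirical distribution function (the paper itself uses a smoothed, joined-up u-plot), and issue F*(t) = G(F_raw(t)) as the recalibrated prediction. The biased raw model and the data below are invented for illustration.

    import numpy as np

    def recalibrate(raw_cdf, past_u):
        # F*(t) = G(F_raw(t)), where G approximates the distribution of the
        # u-values produced by earlier raw predictions. A piecewise-linear
        # empirical G is used here as a simple stand-in.
        grid = np.concatenate([[0.0], np.sort(past_u), [1.0]])
        heights = np.linspace(0.0, 1.0, len(grid))
        return lambda t: np.interp(raw_cdf(t), grid, heights)

    # Toy setting: the raw model predicts Exponential(rate=2) gaps, but the
    # true gaps are Exponential(rate=0.5), so raw predictions are badly biased.
    rng = np.random.default_rng(3)
    raw = lambda t: 1 - np.exp(-2.0 * t)
    gaps = rng.exponential(1.0 / 0.5, size=200)

    star = recalibrate(raw, raw(gaps[:150]))   # learn G from the first 150 gaps

    # On fresh data, recalibrated u-values should sit near the ideal mean 0.5.
    print("raw u mean          (ideal 0.5):", raw(gaps[150:]).mean())
    print("recalibrated u mean (ideal 0.5):", star(gaps[150:]).mean())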


International Conference on Software Engineering | 2000

Software reliability and dependability: a roadmap

Bev Littlewood; Lorenzo Strigini

Software's increasing role creates both requirements for being able to trust it more than before, and for more people to know how much they can trust their software. A sound engineering approach requires both techniques for producing reliability and sound assessment of the achieved results. Different parts of industry and society face different challenges: the need for education and cultural changes in some areas, the adaptation of known scientific results to practical use in others, and in others still the need to confront inherently hard problems of prediction and decision-making, both to clarify the limits of current understanding and to push them back. We outline the specific difficulties in applying a sound engineering approach to software reliability engineering, some of the current trends and problems, and a set of issues that we therefore see as important in an agenda for research in software dependability.

Collaboration


Dive into Bev Littlewood's collaborations.

Top Co-Authors

Peter Popov, City University London
Norman E. Fenton, Queen Mary University of London
Martin Neil, Queen Mary University of London
Xingyu Zhao, City University London
Erland Jonsson, Chalmers University of Technology