
Publications


Featured research published by Norman E. Fenton.


IEEE Transactions on Software Engineering | 2000

Quantitative analysis of faults and failures in a complex software system

Norman E. Fenton; Niclas Ohlsson

We describe a number of results from a quantitative study of faults and failures in two releases of a major commercial software system. We tested a range of basic software engineering hypotheses relating to: the Pareto principle of distribution of faults and failures; the use of early fault data to predict later fault and failure data; metrics for fault prediction; and benchmarking fault data. For example, we found strong evidence that a small number of modules contain most of the faults discovered in prerelease testing and that a very small number of modules contain most of the faults discovered in operation. We found no evidence to support previous claims relating module size to fault density, nor did we find evidence that popular complexity metrics are good predictors of either fault-prone or failure-prone modules. We confirmed that the number of faults discovered in prerelease testing is an order of magnitude greater than the number discovered in 12 months of operational use. The most important result was strong evidence of a counter-intuitive relationship between pre- and postrelease faults: those modules which are the most fault-prone prerelease are among the least fault-prone postrelease, while conversely, the modules which are most fault-prone postrelease are among the least fault-prone prerelease. This observation has serious ramifications for the commonly used fault density measure. Our results provide data points in building up an empirical picture of the software development process.
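
The fault density and Pareto-style concentration checks discussed in this abstract are straightforward to reproduce on any per-module fault data set. The sketch below uses invented module names and counts purely for illustration; it is not data from the study.

```python
# Purely illustrative numbers (not data from the study): per-module fault
# density and a simple Pareto-style concentration check.
modules = {
    # name: (lines_of_code, prerelease_faults, postrelease_faults)
    "mod_a": (1200, 35, 1),
    "mod_b": (800, 2, 7),
    "mod_c": (300, 1, 0),
    "mod_d": (2500, 50, 2),
    "mod_e": (450, 3, 0),
}

# Fault density = faults per KLOC, the commonly used measure the paper questions.
for name, (loc, pre, post) in modules.items():
    print(f"{name}: {1000 * pre / loc:.1f} prerelease faults/KLOC, "
          f"{1000 * post / loc:.1f} postrelease faults/KLOC")

# Pareto-style check: what share of prerelease faults sits in the top 20% of modules?
by_prerelease = sorted(modules.values(), key=lambda m: m[1], reverse=True)
top = by_prerelease[: max(1, len(by_prerelease) // 5)]
total = sum(m[1] for m in modules.values())
print(f"Top 20% of modules hold {sum(m[1] for m in top) / total:.0%} of prerelease faults")
```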


IEEE Transactions on Software Engineering | 1995

Towards a framework for software measurement validation

Barbara A. Kitchenham; Shari Lawrence Pfleeger; Norman E. Fenton

In this paper we propose a framework for validating software measurement. We start by defining a measurement structure model that identifies the elementary component of measures and the measurement process, and then consider five other models involved in measurement: unit definition models, instrumentation models, attribute relationship models, measurement protocols and entity population models. We consider a number of measures from the viewpoint of our measurement validation framework and identify a number of shortcomings; in particular we identify a number of problems with the construction of function points. We also compare our view of measurement validation with ideas presented by other researchers and identify a number of areas of disagreement. Finally, we suggest several rules that practitioners and researchers can use to avoid measurement problems, including the use of measurement vectors rather than artificially contrived scalars.
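
The closing recommendation, measurement vectors rather than artificially contrived scalars, can be illustrated with a toy sketch. The attribute names and weights below are invented; the weighted sum merely mimics the style of construction the framework criticises in function points.

```python
from dataclasses import dataclass

@dataclass
class SizeVector:
    # Each component measures a distinct attribute on its own scale.
    statements: int       # a length measure
    inputs: int           # functionality-related counts
    outputs: int
    files_accessed: int

def contrived_scalar(v: SizeVector) -> float:
    # A function-point-style weighted sum; the weights are arbitrary here,
    # which is exactly the kind of construction the framework warns against.
    return 0.1 * v.statements + 4 * v.inputs + 5 * v.outputs + 10 * v.files_accessed

module = SizeVector(statements=1200, inputs=8, outputs=5, files_accessed=2)
print(module)                    # the vector keeps the attributes distinct
print(contrived_scalar(module))  # the scalar mixes scales and hides information
```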


IEEE Transactions on Software Engineering | 1994

Software measurement: a necessary scientific basis

Norman E. Fenton

Software measurement, like measurement in any other discipline, must adhere to the science of measurement if it is to gain widespread acceptance and validity. The observation of some very simple, but fundamental, principles of measurement can have an extremely beneficial effect on the subject. Measurement theory is used to highlight both weaknesses and strengths of software metrics work, including work on metrics validation. We identify a problem with the well-known Weyuker properties (E.J. Weyuker, 1988), but also show that a criticism of these properties by J.C. Cherniavsky and C.H. Smith (1991) is invalid. We show that the search for general software complexity measures is doomed to failure. However, the theory does help us to define and validate measures of specific complexity attributes. Above all, we are able to view software measurement in a very wide perspective, rationalising and relating its many diverse activities.
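
The scale-type argument behind this paper can be made concrete with a standard measurement-theory example (not taken from the paper itself): on an ordinal scale, only statements preserved by every strictly monotonic rescaling are meaningful, which is why comparing means of ordinal "complexity ratings" is suspect while comparing medians is not. The data below are hypothetical.

```python
import statistics

# Hypothetical ordinal "complexity ratings" for modules from two teams.
team_a = [1, 2, 2, 5]
team_b = [3, 3, 3, 3]

def rescale(x):
    # Any strictly monotonic transformation is admissible for an ordinal scale.
    return x ** 3

# The mean comparison flips under an admissible rescaling...
print(statistics.mean(team_a) < statistics.mean(team_b))              # True
print(statistics.mean(rescale(x) for x in team_a) <
      statistics.mean(rescale(x) for x in team_b))                    # False
# ...while the median comparison is preserved, i.e. it is meaningful on this scale.
print(statistics.median(team_a) < statistics.median(team_b))          # True
print(statistics.median(rescale(x) for x in team_a) <
      statistics.median(rescale(x) for x in team_b))                  # True
```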


International Conference on Software Engineering | 2000

Software metrics: roadmap

Norman E. Fenton; Martin Neil

Software metrics as a subject area is over 30 years old, but it has barely penetrated into mainstream software engineering. A key reason for this is that most software metrics activities have not addressed their most important requirement: to provide information to support quantitative managerial decision-making during the software lifecycle. Good support for decision-making implies support for risk assessment and reduction. Yet traditional metrics approaches, often driven by regression-based models for cost estimation and defects prediction, provide little support for managers wishing to use measurement to analyse and minimise risk. The future for software metrics lies in using relatively simple existing metrics to build management decision-support tools that combine different aspects of software development and testing and enable managers to make many kinds of predictions, assessments and trade-offs during the software life-cycle. Our recommended approach is to handle the key factors largely missing from the usual metrics approaches, namely: causality, uncertainty, and combining different (often subjective) evidence. Thus the way forward for software metrics research lies in causal modelling (we propose using Bayesian nets), empirical software engineering, and multi-criteria decision aids.
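
As an illustration of the kind of causal, uncertainty-aware reasoning advocated here, even a three-node Bayesian net can distinguish "few defects found because few were introduced" from "few defects found because testing was weak", which a regression on defect counts alone cannot do. The structure and probabilities below are hypothetical, not a model from the paper.

```python
from itertools import product

# Nodes (all Boolean for brevity): D = many defects introduced,
# T = testing quality is good, F = many defects found in testing.
# F depends causally on D and T. All probabilities are invented.
p_D = {True: 0.4, False: 0.6}
p_T = {True: 0.5, False: 0.5}
p_F_many = {(True, True): 0.9, (True, False): 0.3,
            (False, True): 0.2, (False, False): 0.05}

def posterior_D(evidence):
    """P(D | evidence) by brute-force enumeration of the joint distribution."""
    weights = {True: 0.0, False: 0.0}
    for d, t, f in product([True, False], repeat=3):
        assignment = {"D": d, "T": t, "F": f}
        if any(k in evidence and evidence[k] != v for k, v in assignment.items()):
            continue
        pf = p_F_many[(d, t)] if f else 1 - p_F_many[(d, t)]
        weights[d] += p_D[d] * p_T[t] * pf
    z = sum(weights.values())
    return {d: w / z for d, w in weights.items()}

# Few defects found in testing: only weak evidence that few were introduced,
# because weak testing is an alternative explanation.
print(posterior_D({"F": False}))             # P(D=True) drops from 0.40 to about 0.23
# Few defects found AND testing known to be good: much stronger evidence.
print(posterior_D({"F": False, "T": True}))  # P(D=True) drops to about 0.08
```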


IEEE Software | 1994

Science and substance: a challenge to software engineers

Norman E. Fenton; Shari Lawrence Pfleeger; Robert L. Glass

For 25 years, software researchers have proposed improving software development and maintenance with new practices whose effectiveness is rarely, if ever, backed up by hard evidence. We suggest several ways to address the problem, and we challenge the community to invest in being more scientific.


Journal of Systems and Software | 1999

Software metrics: success, failures and new directions

Norman E. Fenton; Martin Neil

The history of software metrics is almost as old as the history of software engineering. Yet, the extensive research and literature on the subject has had little impact on industrial practice. This is worrying given that the major rationale for using metrics is to improve the software engineering decision-making process from a managerial and technical perspective. Industrial metrics activity is invariably based around metrics that have been around for nearly 30 years (notably Lines of Code or similar size counts, and defect counts). While such metrics can be considered massively successful given their popularity, their limitations are well known, and misapplications are still common. The major problem is in using such metrics in isolation. We argue that it is possible to provide genuinely improved management decision support systems based on such simplistic metrics, but only by adopting a less isolationist approach. Specifically, we feel it is important to explicitly model: (a) cause and effect relationships and (b) uncertainty and combination of evidence. Our approach uses Bayesian Belief nets, which are increasingly seen as the best means of handling decision-making under uncertainty. The approach is already having an impact in Europe.


Knowledge Engineering Review | 2000

Building large-scale Bayesian networks

Martin Neil; Norman E. Fenton; Lars Nielson

Bayesian networks (BNs) model problems that involve uncertainty. A BN is a directed graph, whose nodes are the uncertain variables and whose edges are the causal or influential links between the variables. Associated with each node is a set of conditional probability functions that model the uncertain relationship between the node and its parents. The benefits of using BNs to model uncertain domains are well known, especially since the recent breakthroughs in algorithms and tools to implement them. However, there have been serious problems for practitioners trying to use BNs to solve realistic problems. This is because, although the tools make it possible to execute large-scale BNs efficiently, there have been no guidelines on building BNs. Specifically, practitioners face two significant barriers. The first barrier is that of specifying the graph structure such that it is a sensible model of the types of reasoning being applied. The second barrier is that of eliciting the conditional probability values. In this paper we concentrate on the first problem. Our solution is based on the notion of generally applicable “building blocks”, called idioms, which serve as solution patterns. These can in turn be combined into larger BNs, using simple combination rules and by exploiting recent ideas on modular and object-oriented BNs (OOBNs). This approach, which has been implemented in a BN tool, can be applied in many problem domains. We use examples to illustrate how it has been applied to build large-scale BNs for predicting software safety. In the paper we review related research from the knowledge and software engineering literature. This provides some context to the work and supports our argument that BN knowledge engineers require the same types of processes, methods and strategies enjoyed by systems and software engineers if they are to succeed in producing timely, quality and cost-effective BN decision support solutions.
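
A structural sketch of the idiom idea follows. The encoding (edge sets as Python sets, composition by gluing shared node names) is a minimal illustration of the concept, not the representation used in the paper or its tool, and the node names are invented.

```python
# An idiom is treated here as a reusable graph fragment (a set of directed edges);
# larger BN structures are assembled by instantiating idioms and gluing shared nodes.

def cause_consequence_idiom(cause, consequence):
    """Cause-consequence idiom: an edge from a causal factor to its effect."""
    return {(cause, consequence)}

def measurement_idiom(true_value, accuracy, observed):
    """Measurement idiom: an observation depends on the true value and on
    the accuracy of the measurement process."""
    return {(true_value, observed), (accuracy, observed)}

def compose(*fragments):
    """Union of fragments; nodes with the same name are identified (glued)."""
    edges = set().union(*fragments)
    nodes = {n for edge in edges for n in edge}
    return nodes, edges

# Assemble a small defect model from idiom instances (node names are illustrative).
nodes, edges = compose(
    cause_consequence_idiom("problem complexity", "defects introduced"),
    cause_consequence_idiom("testing effort", "defects found"),
    cause_consequence_idiom("defects introduced", "defects found"),
    measurement_idiom("defects introduced", "inspection accuracy", "defects reported"),
)
for parent, child in sorted(edges):
    print(f"{parent} -> {child}")
```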


Information & Software Technology | 2007

Predicting software defects in varying development lifecycles using Bayesian nets

Norman E. Fenton; Martin Neil; William Marsh; Peter Hearty; David Marquez; Paul Krause; Rajat Mishra

An important decision in software projects is when to stop testing. Decision support tools for this have been built using causal models represented by Bayesian Networks (BNs), incorporating empirical data and expert judgement. Previously, this required a custom BN for each development lifecycle. We describe a more general approach that allows causal models to be applied to any lifecycle. The approach evolved through collaborative projects and captures significant commercial input. For projects within the range of the models, defect predictions are very accurate. This approach enables decision-makers to reason in a way that is not possible with regression-based models.


IEEE Software | 1997

Implementing effective software metrics programs

Tracy Hall; Norman E. Fenton

Increasingly, organisations are forgoing an ad hoc approach to metrics in favor of complete metrics programs. The authors identify consensus requirements for metrics program success and examine how programs in two organisations measured up.


IEEE Transactions on Knowledge and Data Engineering | 2007

Using Ranked Nodes to Model Qualitative Judgments in Bayesian Networks

Norman E. Fenton; Martin Neil; Jose Galan Caballero

Although Bayesian Nets (BNs) are increasingly being used to solve real world risk problems, their use is still constrained by the difficulty of constructing the node probability tables (NPTs). A key challenge is to construct relevant NPTs using the minimal amount of expert elicitation, recognising that it is rarely cost-effective to elicit complete sets of probability values. We describe a simple approach to defining NPTs for a large class of commonly occurring nodes (called ranked nodes). The approach is based on the doubly truncated Normal distribution with a central tendency that is invariably a type of weighted function of the parent nodes. In extensive real-world case studies we have found that this approach is sufficient for generating the NPTs of a very large class of nodes. We describe one such case study for validation purposes. The approach has been fully automated in a commercial tool, called AgenaRisk, and is thus accessible to all types of domain experts. We believe this work represents a useful contribution to BN research and technology since its application makes the difference between being able to build realistic BN models and not.
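
A minimal sketch of the ranked-node construction is given below, assuming three ordered states per node mapped onto equal subintervals of [0, 1], a weighted mean as the central-tendency function (one simple choice among the weighted functions the paper describes), and an invented variance. The node names and weights are hypothetical, and this is not the AgenaRisk implementation.

```python
from itertools import product
from math import erf, sqrt

# Ordered states of every ranked node, mapped to midpoints of equal subintervals of [0, 1].
STATES = ["low", "medium", "high"]
MIDPOINTS = {s: (i + 0.5) / len(STATES) for i, s in enumerate(STATES)}

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def truncated_cdf(x, mu, sigma, lo=0.0, hi=1.0):
    # CDF of a Normal(mu, sigma) doubly truncated to [lo, hi].
    z = normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
    return (normal_cdf(x, mu, sigma) - normal_cdf(lo, mu, sigma)) / z

def npt_column(parent_states, weights, sigma=0.15):
    """P(child state | one combination of parent states):
    a truncated Normal whose mean is the weighted mean of the parent values,
    discretised over the child's state subintervals. sigma is an assumed variance parameter."""
    mu = sum(w * MIDPOINTS[s] for w, s in zip(weights, parent_states)) / sum(weights)
    probs = []
    for i in range(len(STATES)):
        lo, hi = i / len(STATES), (i + 1) / len(STATES)
        probs.append(truncated_cdf(hi, mu, sigma) - truncated_cdf(lo, mu, sigma))
    return probs

# Child "process quality" with parents "staff skill" and "tool support",
# where skill is weighted twice as heavily as tooling (illustrative weights).
weights = [2.0, 1.0]
for combo in product(STATES, repeat=2):
    print(combo, [round(p, 3) for p in npt_column(combo, weights)])
```

Each printed row is one NPT column, so the full table for a child with two three-state parents needs only two weights and a variance rather than 27 elicited probabilities, which is the elicitation saving the abstract describes.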

Collaboration


Dive into Norman E. Fenton's collaborations.

Top Co-Authors

Martin Neil (London South Bank University)
Anthony Constantinou (Queen Mary University of London)
William Marsh (Queen Mary University of London)
Mark Freestone (Queen Mary University of London)
Artemis Igoumenou (Queen Mary University of London)
Constantinos Kallis (Queen Mary University of London)
Jenny Shaw (University of Manchester)
Laura Bui (St Bartholomew's Hospital)
Mary Davoren (Queen Mary University of London)