Network


Niclas Ohlsson's latest external collaborations at the country level.

Hotspot


The research topics where Niclas Ohlsson is active.

Publication


Featured research published by Niclas Ohlsson.


IEEE Transactions on Software Engineering | 2000

Quantitative analysis of faults and failures in a complex software system

Norman E. Fenton; Niclas Ohlsson

We describe a number of results from a quantitative study of faults and failures in two releases of a major commercial software system. We tested a range of basic software engineering hypotheses relating to: the Pareto principle of distribution of faults and failures; the use of early fault data to predict later fault and failure data; metrics for fault prediction; and benchmarking fault data. For example, we found strong evidence that a small number of modules contain most of the faults discovered in prerelease testing, and that a very small number of modules contain most of the faults discovered in operation. We found no evidence to support previous claims relating module size to fault density, nor did we find evidence that popular complexity metrics are good predictors of either fault-prone or failure-prone modules. We confirmed that the number of faults discovered in prerelease testing is an order of magnitude greater than the number discovered in 12 months of operational use. The most important result was strong evidence of a counter-intuitive relationship between pre- and postrelease faults: those modules which are the most fault-prone prerelease are among the least fault-prone postrelease, while conversely, the modules which are most fault-prone postrelease are among the least fault-prone prerelease. This observation has serious ramifications for the commonly used fault density measure. Our results provide data points in building up an empirical picture of the software development process.
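The Pareto-style concentration the abstract describes can be sketched as a simple check: rank modules by fault count and measure what share of all faults falls in the top 20 percent. This is a minimal illustration with made-up fault counts, not the paper's data.

```python
# Minimal sketch of a Pareto concentration check on per-module fault counts.
# All numbers below are hypothetical illustrative data.

def pareto_share(fault_counts, module_fraction=0.2):
    """Return the share of total faults found in the top `module_fraction`
    of modules when modules are ranked by fault count (descending)."""
    ranked = sorted(fault_counts, reverse=True)
    top_n = max(1, round(len(ranked) * module_fraction))
    total = sum(ranked)
    return sum(ranked[:top_n]) / total if total else 0.0

faults = [30, 25, 12, 5, 4, 3, 2, 1, 1, 0]  # faults per module (hypothetical)
print(f"Top 20% of modules hold {pareto_share(faults):.0%} of all faults")
```

With these made-up counts, the two most fault-prone modules (out of ten) hold roughly two thirds of the faults, the kind of skew the study reports.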


IEEE Transactions on Software Engineering | 1996

Predicting fault-prone software modules in telephone switches

Niclas Ohlsson; Hans Alberg

An empirical study was carried out at Ericsson Telecom AB to investigate the relationship between several design metrics and the number of function test failure reports associated with software modules. A tool, ERIMET, was developed to analyze the design documents automatically. Preliminary results from the study of 130 modules showed that, based on fault and design data, one can satisfactorily build a prediction model for identifying the most fault-prone modules before coding has started. The data analyzed show that 20 percent of the most fault-prone modules account for 60 percent of all faults. The prediction model built in this paper would have identified 20 percent of the modules accounting for 47 percent of all faults. At least four design measures can alternatively be used as predictors with equivalent performance. Size (in number of lines of code), used in a previous prediction model, was not significantly better than these four measures. The Alberg diagram introduced in this paper offers a way of assessing a predictor based on historical data, which is a valuable complement to linear regression when prediction data is ordinal. Applying the method described in this paper makes it possible to use measures from the design phase to predict the most fault-prone modules.
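The assessment idea behind an Alberg-style diagram can be sketched as follows: rank modules in descending order of a candidate predictor, then accumulate the percentage of faults covered as modules are taken in that order. Metric and fault values here are hypothetical illustrative data, not the Ericsson data set.

```python
# Minimal sketch of an Alberg-style curve: cumulative fault percentage when
# modules are taken in descending order of a predictor metric.
# All numbers below are hypothetical illustrative data.

def alberg_curve(metric, faults):
    """Return the cumulative fraction of faults covered as modules are
    visited in descending order of the predictor metric."""
    order = sorted(range(len(metric)), key=lambda i: metric[i], reverse=True)
    total = sum(faults)
    cum, curve = 0, []
    for i in order:
        cum += faults[i]
        curve.append(cum / total)
    return curve

metric = [40, 10, 55, 5, 20]   # predictor value per module (hypothetical)
faults = [8, 1, 10, 0, 3]      # observed faults per module (hypothetical)
print([round(p, 2) for p in alberg_curve(metric, faults)])
```

A good predictor produces a curve that rises steeply at first; comparing such curves for different metrics against the curve obtained by ranking on the actual fault counts is the assessment the diagram supports.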


Software Quality Journal | 1998

Application of multivariate analysis for software fault prediction

Niclas Ohlsson; M. Zhao; Mary E. Helander

Prediction of fault-prone modules provides one way to support software quality engineering through improved scheduling and project control. The primary goal of our research was to develop and refine techniques for early prediction of fault-prone modules. The objective of this paper is to review and improve an approach previously examined in the literature for building prediction models, i.e. principal component analysis (PCA) and discriminant analysis (DA). We present findings of an empirical study at Ericsson Telecom AB for which the previous approach was found inadequate for predicting the most fault-prone modules using software design metrics. Instead of dividing modules into fault-prone and not-fault-prone, modules are categorized into several groups according to the ordered number of faults. It is shown that the first discriminant coordinates (DC) statistically increase with the ordering of modules, thus improving prediction and prioritization efforts. The authors also experienced problems with the smoothing parameter as used previously for DA. To correct this problem and further improve predictability, separate estimation of the smoothing parameter is shown to be required.
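The grouping idea in the abstract — categorizing modules into ordered groups by fault count and checking that a derived score increases with the ordering — can be sketched with plain Python. The score below simply stands in for the first discriminant coordinate, and all boundaries, scores, and fault counts are hypothetical illustrative choices.

```python
# Minimal sketch of the ordered-group check: assign modules to ordered
# groups by fault count, then test whether a candidate score's group means
# are strictly increasing. All numbers are hypothetical illustrative data.

def group_means_increase(scores, faults, boundaries=(0, 2, 5)):
    """Place each module in an ordered group by fault count, then return
    the per-group mean scores and whether they strictly increase."""
    groups = {}
    for s, f in zip(scores, faults):
        g = sum(f > b for b in boundaries)  # group index 0..len(boundaries)
        groups.setdefault(g, []).append(s)
    means = [sum(v) / len(v) for _, v in sorted(groups.items())]
    return means, all(a < b for a, b in zip(means, means[1:]))

scores = [0.1, 0.3, 1.2, 1.0, 2.5, 2.1, 0.4]  # stand-in for first DC
faults = [0, 0, 3, 4, 8, 7, 1]                # faults per module
means, increasing = group_means_increase(scores, faults)
print(means, increasing)
```

When the means increase with the group ordering, ranking modules by the score prioritizes the groups with the most faults, which is the property the paper exploits for prediction and prioritization.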


IEEE Transactions on Software Engineering | 1998

Planning models for software reliability and cost

Mary E. Helander; M. Zhao; Niclas Ohlsson

This paper presents modeling frameworks for distributing development effort among software components to facilitate cost-effective progress toward a system reliability goal. Emphasis on components means that the frameworks can be used, for example, in cleanroom processes and to set certification criteria. The approach, based on reliability allocation, uses the operational profile to quantify the usage environment and a utilization matrix to link usage with system structure. Two approaches for reliability and cost planning are introduced: Reliability-Constrained Cost-Minimization (RCCM) and Budget-Constrained Reliability-Maximization (BCRM). Efficient solutions are presented corresponding to three general functions for measuring cost-to-attain failure intensity. One of the functions is shown to be a generalization of the basic COCOMO form. Planning within budget, adaptation for other cost functions and validation issues are also discussed. Analysis capabilities are illustrated using a software system consisting of 26 developed modules and one procured module. The example also illustrates how to specify a reliability certification level, and minimum purchase price, for the procured module.
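The usage-weighting step described above can be sketched numerically: the operational profile gives the probability of each operation, the utilization matrix links operations to components, and together they weight the component failure intensities into a system-level figure. All numbers below are hypothetical, not the 27-module example from the paper.

```python
# Minimal sketch of usage-weighted system failure intensity:
# lambda_sys = sum_j p_j * sum_i u[j][i] * lambda_i
# All numbers below are hypothetical illustrative data.

def system_failure_intensity(profile, utilization, component_intensity):
    """profile[j]: probability of operation j; utilization[j][i]: expected
    use of component i in operation j; component_intensity[i]: failure
    intensity of component i."""
    return sum(
        p * sum(u * lam for u, lam in zip(row, component_intensity))
        for p, row in zip(profile, utilization)
    )

profile = [0.7, 0.3]                      # two operations (hypothetical)
utilization = [[1.0, 0.5, 0.0],           # operation 1 uses components 1, 2
               [0.2, 1.0, 1.0]]           # operation 2 touches all three
component_intensity = [0.01, 0.02, 0.05]  # failures per unit usage (made up)
print(system_failure_intensity(profile, utilization, component_intensity))
```

Planning approaches like RCCM then treat the component intensities as decision variables, minimizing cost subject to this weighted sum staying below a reliability target.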


Empirical Software Engineering | 1998

An Extended Replication of an Experiment for Assessing Methods for Software Requirements Inspections

Kristian Sandahl; Ola Blomkvist; Joachim Karlsson; Christian Krysander; Mikael Lindvall; Niclas Ohlsson

We have performed an extended replication of the Porter-Votta-Basili experiment comparing the Scenario method and the Checklist method for inspecting requirements specifications, using identical instruments. The experiment was conducted in our educational context, with a more general definition of a defect than the original defect list. Our study, involving 24 undergraduate students, manipulated three independent variables: detection method, requirements specification, and the order of the inspections. The dependent variable measured is the defect detection rate. We found the requirements specification inspected, and not the detection method, to be the most probable explanation for the variance in defect detection rate. This suggests that it is important to gather knowledge of how a requirements specification can convey an understandable view of the product and to adapt inspection methods accordingly. Contrary to the original experiment, we cannot significantly support the superiority of the Scenario method. This is in accordance with a replication conducted by Fusaro, Lanubile, and Visaggio, and might be explained by the lack of individual defect detection skill of our less experienced subjects.


Information & Software Technology | 1998

A comparison between software design and code metrics for the prediction of software fault content

M. Zhao; Claes Wohlin; Niclas Ohlsson; Min Xie

Software metrics play an important role in measuring the quality of software. It is desirable to predict the quality of software as early as possible, and hence metrics have to be collected early as well. This raises a number of questions that have not been fully answered. In this paper we discuss prediction of fault content and try to answer what types of metrics should be collected, to what extent design metrics can be used for prediction, and to what degree prediction accuracy can be improved if code metrics are included. Based on a data set collected from a real project, we found that both design and code metrics are correlated with the number of faults. When the metrics are used to build prediction models of the number of faults, the design metrics are as good as the code metrics; little improvement can be achieved if both design metrics and code metrics are used to model the relationship between the number of faults and the software metrics. The empirical results from this study indicate that the structural properties of the software influencing the fault content are established before the coding phase.
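The first step of such a comparison — checking how strongly a design metric and a code metric each correlate with fault counts — can be sketched with a plain Pearson correlation. The metric names and all values below are hypothetical illustrative data.

```python
# Minimal sketch: Pearson correlation of a design metric and a code metric
# against per-module fault counts. All numbers are hypothetical data.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

design_metric = [3, 8, 2, 12, 7, 5]          # e.g. signal count (made up)
code_metric = [120, 400, 90, 700, 310, 200]  # e.g. lines of code (made up)
faults = [1, 5, 0, 9, 4, 2]                  # faults per module (made up)

print(pearson(design_metric, faults), pearson(code_metric, faults))
```

If both correlations are comparably strong, as in the paper's data, a prediction model built from design metrics alone gains little from adding code metrics.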


Empirical Software Engineering | 1997

Early Risk-Management by Identification of Fault-prone Modules

Niclas Ohlsson; Ann Christin Eriksson; Mary E. Helander



Failure and Lessons Learned in Information Technology Management | 1998

Experiences of Fault Data in a Large Software System

Niclas Ohlsson; Claes Wohlin

Early identification of fault-prone modules is desirable from both developer and customer perspectives, since it supports planning and scheduling activities that facilitate cost avoidance and improved time to market. Large-scale software systems are rarely built from scratch, and usually involve modification and enhancement of existing systems. This suggests that development planning and software quality could be greatly enhanced, since knowledge about product complexity and quality of previous releases can be taken into account when making improvements in subsequent projects. In this paper we present results from empirical studies at Ericsson Telecom AB which examine the use of metrics to predict fault-prone modules in successive product releases. The results show that such prediction appears to be possible and has the potential to enhance project maintenance.


SOQUA | 1996

Quality Improvement by Identification of Fault-prone Modules Using Software Design Metrics

Niclas Ohlsson; Mary E. Helander; Claes Wohlin



Collaboration


Niclas Ohlsson's top co-authors:

- Claes Wohlin (Blekinge Institute of Technology)
- M. Zhao (Linköping University)
- Min Xie (City University of Hong Kong)
- Norman E. Fenton (Queen Mary University of London)