
Publication


Featured research published by M. Zhao.


Software Quality Journal | 1998

Application of multivariate analysis for software fault prediction

Niclas Ohlsson; M. Zhao; Mary E. Helander

Prediction of fault-prone modules provides one way to support software quality engineering through improved scheduling and project control. The primary goal of our research was to develop and refine techniques for early prediction of fault-prone modules. The objective of this paper is to review and improve an approach previously examined in the literature for building prediction models, i.e. principal component analysis (PCA) and discriminant analysis (DA). We present findings of an empirical study at Ericsson Telecom AB for which the previous approach was found inadequate for predicting the most fault-prone modules using software design metrics. Instead of dividing modules into fault-prone and not-fault-prone, modules are categorized into several groups according to the ordered number of faults. It is shown that the first discriminant coordinates (DC) statistically increase with the ordering of modules, thus improving prediction and prioritization efforts. The authors also experienced problems with the smoothing parameter as used previously for DA. To correct this problem and further improve predictability, separate estimation of the smoothing parameter is shown to be required.
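A minimal sketch of a PCA-plus-discriminant-analysis pipeline of the kind described above is shown below. The metrics, data, and group boundaries are synthetic placeholders; the Ericsson dataset and the paper's treatment of the smoothing parameter are not reproduced.

```python
# Illustrative sketch only: PCA on correlated design metrics, then discriminant
# analysis on ordered fault-count groups. Data are synthetic, not the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(200, 6))   # 6 design metrics for 200 modules
faults = rng.poisson(lam=X[:, 0] / 10)                  # synthetic fault counts
groups = np.digitize(faults, bins=[1, 3])               # ordered fault-count groups, not a binary split

pcs = PCA(n_components=3).fit_transform(X)              # reduce correlated metrics to principal components
lda = LinearDiscriminantAnalysis().fit(pcs, groups)     # discriminant analysis on the component scores

dc1 = lda.transform(pcs)[:, 0]                          # first discriminant coordinate per module
print("mean DC1 per ordered group:",
      [float(dc1[groups == g].mean()) for g in np.unique(groups)])
```

If the first discriminant coordinate increases with the ordered groups, as the paper reports for its data, the modules can be ranked for inspection effort rather than only classified as fault-prone or not.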


IEEE Transactions on Software Engineering | 1998

Planning models for software reliability and cost

Mary E. Helander; M. Zhao; Niclas Ohlsson

This paper presents modeling frameworks for distributing development effort among software components to facilitate cost-effective progress toward a system reliability goal. Emphasis on components means that the frameworks can be used, for example, in cleanroom processes and to set certification criteria. The approach, based on reliability allocation, uses the operational profile to quantify the usage environment and a utilization matrix to link usage with system structure. Two approaches for reliability and cost planning are introduced: Reliability-Constrained Cost-Minimization (RCCM) and Budget-Constrained Reliability-Maximization (BCRM). Efficient solutions are presented corresponding to three general functions for measuring cost-to-attain failure intensity. One of the functions is shown to be a generalization of the basic COCOMO form. Planning within budget, adaptation for other cost functions and validation issues are also discussed. Analysis capabilities are illustrated using a software system consisting of 26 developed modules and one procured module. The example also illustrates how to specify a reliability certification level, and minimum purchase price, for the procured module.
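As a rough sketch of how such an allocation problem can be written down (the notation is illustrative, not taken verbatim from the paper): with components j = 1, ..., n, cost-to-attain-failure-intensity functions c_j, component failure intensities λ_j, and expected utilizations u_j derived from the operational profile and utilization matrix, the two formulations can be stated as

\[
\text{RCCM:}\quad \min_{\lambda_1,\dots,\lambda_n} \sum_{j=1}^{n} c_j(\lambda_j) \ \ \text{s.t.}\ \ \sum_{j=1}^{n} u_j \lambda_j \le \lambda_{\text{goal}},
\qquad
\text{BCRM:}\quad \min_{\lambda_1,\dots,\lambda_n} \sum_{j=1}^{n} u_j \lambda_j \ \ \text{s.t.}\ \ \sum_{j=1}^{n} c_j(\lambda_j) \le B,
\]

where \(\lambda_{\text{goal}}\) is the system failure intensity target and \(B\) is the development budget.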


Information & Software Technology | 1998

A comparison between software design and code metrics for the prediction of software fault content

M. Zhao; Claes Wohlin; Niclas Ohlsson; Min Xie

Software metrics play an important role in measuring the quality of software. It is desirable to predict the quality of software as early as possible, and hence metrics have to be collected early as well. This raises a number of questions that have not been fully answered. In this paper we discuss prediction of fault content and try to answer what type of metrics should be collected, to what extent design metrics can be used for prediction, and to what degree prediction accuracy can be improved if code metrics are included. Based on a data set collected from a real project, we found that both design and code metrics are correlated with the number of faults. When the metrics are used to build prediction models of the number of faults, the design metrics are as good as the code metrics, and little improvement is achieved if both design and code metrics are used to model the relationship between the number of faults and the software metrics. The empirical results from this study indicate that the structural properties of the software that influence the fault content are established before the coding phase.
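The comparison can be illustrated with a small regression sketch; the metrics and data below are synthetic stand-ins, not the project data used in the paper.

```python
# Illustration only: regress fault counts on design metrics alone and on
# design + code metrics, then compare fit quality. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
design = rng.lognormal(size=(150, 3))                    # design-document metrics
code = design + rng.normal(scale=0.3, size=(150, 3))     # code metrics, strongly correlated with design
faults = rng.poisson(lam=design[:, 0])

r2_design = LinearRegression().fit(design, faults).score(design, faults)
both = np.hstack([design, code])
r2_both = LinearRegression().fit(both, faults).score(both, faults)
print(f"R^2 design only: {r2_design:.2f}, design + code: {r2_both:.2f}")  # typically similar
```

When the code metrics are largely determined by the design metrics, adding them gives little extra explanatory power, which is the pattern the paper reports.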


Microelectronics Reliability | 1993

On some reliability growth models with simple graphical interpretations

Min Xie; M. Zhao

Some useful reliability growth models, which have simple graphical interpretations, are studied in this paper. The proposed models are inspired by the Duane model. For each of the models, the plot of the cumulative number of failures against the running time, when a suitable scale is used, will tend to fall on a straight line if the model is valid; otherwise the model should be rejected. The slope of the fitted line and its intercept on the vertical axis give estimates of the parameters. Hence, the plot provides a simple graphical tool for model validation and parameter estimation. In particular, we propose a "first-model-validation-then-parameter-estimation" approach which simplifies the model validation and parameter estimation problem in software reliability analysis. Numerical analysis of several sets of software failure data is also provided to illustrate the ideas.
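For the Duane model itself, which the proposed models take as their starting point, the graphical idea can be stated in its standard textbook form (shown here only for illustration):

\[
N(t) = a\,t^{b} \quad\Longrightarrow\quad \log N(t) = \log a + b \log t ,
\]

so plotting the cumulative number of failures against running time on a log-log scale should give an approximately straight line when the model holds; the slope estimates \(b\) and the intercept estimates \(\log a\). The models proposed in the paper follow the same pattern under their own scale transformations.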


Microelectronics Reliability | 1996

Reliability growth plot — An underutilized tool in reliability analysis

Min Xie; M. Zhao

System reliability and performance are improved by continuous improvement effort, and the study of the increase in reliability as a function of time is the subject of reliability growth. Although the most well-known reliability growth model, the Duane model, was proposed more than thirty years ago, reliability growth analysis has attracted increasing interest only recently, because testing time is scarce and the high reliability of improved products leads to very few failures. In this paper we study a practical approach to reliability growth analysis. Based on graphical plotting of failure data for some selected models, reliability can easily be estimated and predicted. This approach, which is the original idea behind the Duane model, overcomes the usually complicated problems of parameter estimation and model validation. It is especially useful when model validation has to be done in order to select a suitable model. The approach, called the First-Model-Validation-Then-Parameter-Estimation approach, is simple and practical for the analysis of reliability growth data. We further develop some models and discuss their applicability in reliability engineering.


Reliability Engineering & System Safety | 1994

A model of storage reliability with possible initial failures

M. Zhao; Min Xie

Recently, storage reliability has attracted attention because of the increasing demand for high reliability of products in storage in both military and commercial industries. In this paper we study a general storage reliability model for the analysis of storage failure data. It is indicated that the initial failures, which are usually neglected, should be incorporated in the estimation of the storage failure probability. Data from reliability testing before and during storage should be combined to give more accurate estimates of both the initial failure probability and the probability of storage failures. The results are also useful for decision-making concerning the amount of testing to be carried out before storage. A numerical example is given to illustrate the idea.
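One common way to write a model of this type (the notation is mine; the paper's exact formulation may differ): if \(p_0\) denotes the probability that a unit has already failed when storage begins and \(F_s(t)\) is the failure distribution during storage for initially good units, then the probability that a unit is failed after storage time \(t\) is

\[ F(t) = p_0 + (1 - p_0)\,F_s(t), \]

so pre-storage test data mainly inform the estimate of \(p_0\), while in-storage data inform \(F_s(t)\), and neglecting \(p_0\) biases the estimated storage failure probability.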


Journal of Systems and Software | 2016

Modeling and analysis of reliability of multi-release open source software incorporating both fault detection and correction processes

Jianfeng Yang; Yu Liu; Min Xie; M. Zhao

The failure process in testing multi-release software is analyzed with consideration of fault correction delay. Two kinds of multi-release software reliability model are proposed. The model is validated on real test datasets from open source software. A comprehensive analysis of optimal release times based on cost-efficiency is provided. Large software systems require regular upgrading that tries to correct the reported faults in previous versions and add functions to meet new requirements. It is thus necessary to investigate changes in reliability in the face of ongoing releases. However, current modeling frameworks mostly rely on the idealized assumption that all faults are removed instantaneously and perfectly. In this paper, the failure processes in testing multi-release software are investigated by taking into consideration the delays in fault repair time, based on a proposed time delay model. The model is validated on real test datasets from software that has been released three times with new features. A comprehensive analysis of optimal release times based on cost-efficiency is also provided, which could help project managers determine the best time to release the software.
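A common way to formalize detection with delayed correction, shown here only as an illustrative sketch and not necessarily the exact model proposed in the paper, starts from a Goel-Okumoto-type detection process:

\[
m_d(t) = a\left(1 - e^{-b t}\right), \qquad m_c(t) = \mathbb{E}\!\left[\, m_d(t - \Delta) \,\right],
\]

where \(m_d(t)\) and \(m_c(t)\) are the expected numbers of detected and corrected faults by time \(t\) and \(\Delta \ge 0\) is the (possibly random) correction delay; in a multi-release setting each release \(k\) gets its own parameters and inherits the faults left uncorrected from release \(k-1\).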


Microelectronics Reliability | 1994

EM algorithms for estimating software reliability based on masked data

M. Zhao; Min Xie

In this paper, software reliability estimation from masked data is considered based on superposition of nonhomogeneous Poisson process models. The masked data are system failure data for which the exact causes of the failures, i.e., the components that caused the system failures, may be unknown. The components of a software system may be its modules, testing strategies, or types of errors, according to the practical situation. In general, the maximum likelihood estimates of the parameters are difficult to find when masked data exist, because the superposition process cannot be decomposed into the ordinary processes. In this study, the EM algorithm is investigated to solve the maximum likelihood estimation problem. It is shown that the EM algorithm is a powerful way to deal with masked data. By applying the EM algorithm, the masked data problem is simplified and reduced to the common estimation problem without masked data, which makes it easy to obtain maximum likelihood estimates of the parameters.
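Below is a minimal sketch of an EM iteration for masked failure times from a superposition of NHPPs. To keep it short, each component is given a Goel-Okumoto mean value function with a known shape parameter, so only the scale parameters are estimated; the paper addresses the general estimation problem.

```python
# Hedged sketch: EM for masked failure data from superposed NHPPs with mean value
# functions a_j * (1 - exp(-b_j * t)), where the shapes b_j are treated as known.
import numpy as np

def em_masked(times, T, b, n_iter=50):
    """times: failure times with unknown (masked) component; T: end of observation;
    b: known shape parameter per component; returns estimated a_j per component."""
    times = np.asarray(times, dtype=float)
    a = np.full(len(b), len(times) / len(b))            # crude initial split of the failures
    for _ in range(n_iter):
        # E-step: posterior probability that each failure came from component j,
        # proportional to the component intensity a_j * b_j * exp(-b_j * t).
        lam = a[None, :] * b[None, :] * np.exp(-np.outer(times, b))
        resp = lam / lam.sum(axis=1, keepdims=True)
        # M-step: closed-form update of a_j given the expected failure counts.
        a = resp.sum(axis=0) / (1.0 - np.exp(-b * T))
    return a

b = np.array([0.05, 0.2])                               # assumed known shapes for two components
print(em_masked(times=[1, 3, 4, 7, 9, 15, 20], T=25.0, b=b))
```

The E-step splits each masked failure across components in proportion to their intensities, after which the M-step is the same closed-form update that would be used without masking, which is the simplification the abstract refers to.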


2015 IEEE International Conference on Software Quality, Reliability and Security | 2015

A New Framework and Application of Software Reliability Estimation Based on Fault Detection and Correction Processes

Yu Liu; Min Xie; Jianfeng Yang; M. Zhao

Software reliability growth modeling plays an important role in software reliability evaluation. To incorporate more information and provide more accurate analysis, modeling software fault detection and correction processes has attracted widespread research attention recently. However, the assumption of a stochastic fault correction time delay brings more difficulty to modeling and parameter estimation. In practice, beyond grouped fault data, software test records often include more detailed information, such as the rough time when a fault is detected or corrected. Such a semi-grouped dataset contains more information about the fault removal process than the commonly used grouped dataset, and using semi-grouped datasets can improve the accuracy of time-delay models. In this paper, a fault removal modeling framework for software reliability with semi-grouped data is studied and extended to multi-release software. The corresponding parameter estimation is carried out with the maximum likelihood method. The proposed framework is applied to a test dataset with three releases from a practical software project and shows satisfactory performance.
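For context, the grouped-data likelihood that such frameworks build on is the standard NHPP form (notation mine, not the paper's): with \(n_k\) faults observed in the interval \((t_{k-1}, t_k]\) and mean value function \(m(t)\),

\[
\ell(\theta) = \sum_{k} \Big( n_k \log\big[m(t_k) - m(t_{k-1})\big] - \big[m(t_k) - m(t_{k-1})\big] - \log n_k! \Big),
\]

and semi-grouped records refine this by placing some individual faults in narrower sub-intervals, replacing the corresponding interval terms with finer counts; the paper extends this to coupled detection and correction processes across multiple releases.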


Scandinavian Journal of Statistics | 1996

On maximum likelihood estimation for a general non-homogeneous Poisson process

M. Zhao; Min Xie
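For context, maximum likelihood estimation for an NHPP is built on the standard log-likelihood (standard form, not taken from the paper): with intensity \(\lambda(t; \theta)\) observed over \([0, T]\) and failure times \(t_1, \dots, t_n\),

\[
\ell(\theta) = \sum_{i=1}^{n} \log \lambda(t_i; \theta) \;-\; \int_0^{T} \lambda(t; \theta)\, dt ,
\]

which is the object whose maximization the paper studies in a general setting; the paper's specific conditions and results are not reproduced here.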

Collaboration


M. Zhao's most frequent co-authors:

Min Xie (City University of Hong Kong)

Yu Liu (City University of Hong Kong)

Claes Wohlin (Blekinge Institute of Technology)