Publication


Featured research published by Szu Hui Ng.


Reliability Engineering & System Safety | 2007

Robust recurrent neural network modeling for software fault detection and correction prediction

Qingpei Hu; Min Xie; Szu Hui Ng; Gregory Levitin

Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to the fault detection process and to treat fault correction as a delayed process. Artificial neural network models, as a data-driven approach, instead attempt to model the two processes together without such assumptions. In particular, feedforward backpropagation networks have shown advantages over analytical models in fault number prediction. In this paper, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed using a genetic algorithm guided by prediction performance. To provide robust predictions, an extra factor characterizing the dispersion across prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are carried out on a real data set.
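
As a rough, hypothetical illustration of the recurrent structure described above (not the paper's architecture, training procedure, or GA-based configuration), the sketch below runs an Elman-style cell over a short sequence of normalized (detected, corrected) fault counts; all sizes, weights, and data are made up.

```python
import numpy as np

# Minimal Elman-style recurrent cell mapping a sequence of normalized
# (detected, corrected) fault counts to one-step-ahead predictions.
# Hidden size and weights are illustrative, not the paper's setup.

rng = np.random.default_rng(0)
H = 4  # hidden units (hypothetical choice)

W_in  = rng.normal(scale=0.1, size=(H, 2))   # input -> hidden
W_h   = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden (recurrence)
W_out = rng.normal(scale=0.1, size=(2, H))   # hidden -> (detected, corrected)

def rnn_predict(seq):
    """One-step-ahead prediction of cumulative fault counts."""
    h = np.zeros(H)
    preds = []
    for x in seq:
        h = np.tanh(W_in @ x + W_h @ h)   # recurrent state carries history
        preds.append(W_out @ h)
    return np.array(preds)

# Weekly cumulative (detected, corrected) counts, normalized (toy data).
seq = np.array([[0.1, 0.05], [0.2, 0.12], [0.35, 0.25], [0.5, 0.4]])
print(rnn_predict(seq))
```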


Applied Soft Computing | 2010

A systematic comparison of metamodeling techniques for simulation optimization in Decision Support Systems

Yan-Fu Li; Szu Hui Ng; Min Xie; T. N. Goh

Simulation is a widely applied tool for studying and evaluating complex systems. Due to the stochastic and complex nature of real-world systems, simulation models for these systems are often difficult to build and time consuming to run. Metamodels are mathematical approximations of simulation models and have frequently been used to reduce the computational burden of running such simulations. In this paper, we propose incorporating metamodels into Decision Support Systems to improve their efficiency and to enable larger, more complex models to be analyzed effectively. To evaluate the different metamodel types, a systematic comparison is first conducted of the strengths and weaknesses of five popular metamodeling techniques (Artificial Neural Network, Radial Basis Function, Support Vector Regression, Kriging, and Multivariate Adaptive Regression Splines) for stochastic simulation problems. The results show that Support Vector Regression achieves the best performance in terms of accuracy and robustness. We further propose a general optimization framework, GA-META, which integrates metamodels into the Genetic Algorithm to improve the efficiency and reliability of the decision-making process. The approach is illustrated with a job shop design problem. The results indicate that GA-Support Vector Regression achieves the best solution among the metamodels.
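
A minimal sketch of the surrogate-assisted idea behind GA-META, assuming a toy one-dimensional "simulator" and scikit-learn's SVR; the GA settings, bounds, and fitness model are illustrative stand-ins for the framework in the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def simulate(x):  # stand-in for an expensive stochastic simulation
    return (x - 2.0) ** 2 + rng.normal(scale=0.1)

# 1. Train the metamodel on a small design of experiments.
X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.array([simulate(x[0]) for x in X])
meta = SVR(C=10.0).fit(X, y)

# 2. The GA minimizes the cheap surrogate instead of the simulator.
pop = rng.uniform(0, 5, size=30)
for _ in range(40):
    fit = meta.predict(pop.reshape(-1, 1))            # surrogate fitness
    parents = pop[np.argsort(fit)[:10]]               # selection
    children = rng.choice(parents, 30) + rng.normal(scale=0.2, size=30)
    pop = np.clip(children, 0, 5)                     # mutation + bounds

best = pop[np.argmin(meta.predict(pop.reshape(-1, 1)))]
print("surrogate optimum near x =", best)  # true optimum is x = 2
```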


Journal of Systems and Software | 2005

Software failure prediction based on a Markov Bayesian network model

Chenggang Bai; Qingpei Hu; Min Xie; Szu Hui Ng

Due to the complexity of software products and development processes, software reliability models need to handle multiple parameters. To adapt to continually refreshed data, they should also provide flexibility in model construction in terms of information updating. Existing software reliability models are not flexible in this respect, mainly because of the many static assumptions built into them. Bayesian networks are a powerful tool for solving this problem, as they adapt well to problems involving complex, varying factors. In this paper, a software failure prediction model based on Markov Bayesian networks is developed, and a method to solve the network model is proposed. The use of the model is illustrated with an example.
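
The paper's Markov Bayesian network is far richer than this, but the sequential-updating idea can be hinted at with a conjugate Gamma-Poisson sketch (an assumption made purely for illustration): each interval's posterior for the failure rate becomes the prior for the next interval, so the model absorbs continually refreshed data.

```python
# Much-simplified sketch of Markov-style Bayesian information updating.
# Gamma prior on the failure rate, Poisson counts per test interval;
# the posterior after each week serves as the prior for the next week.

alpha, beta = 1.0, 1.0          # Gamma prior parameters (hypothetical)
weekly_failures = [5, 3, 2, 2, 1, 0]   # toy failure counts

for week, n in enumerate(weekly_failures, 1):
    alpha, beta = alpha + n, beta + 1.0   # conjugate update; posterior -> prior
    print(f"week {week}: posterior mean failure rate = {alpha / beta:.2f}")
```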


Journal of Quality Technology | 2010

Nonparametric CUSUM and EWMA Control Charts for Detecting Mean Shifts

Su-yi Li; Loon Ching Tang; Szu Hui Ng

Nonparametric control charts are useful when the underlying process distribution is unlikely to be normal or is unknown. In this paper, we propose two nonparametric analogs of the CUSUM and EWMA control charts, based on the Wilcoxon rank-sum test, for detecting process mean shifts. We first derive the run-length distributions of the proposed charts and then compare their performance to (1) CUSUM and EWMA control charts on subgroup means and (2) the median chart and the Shewhart-type nonparametric control chart based on the Mann–Whitney test. We show that the proposed charts perform well in detecting step mean shifts: they perform almost the same as their parametric counterparts when the underlying process output follows a normal distribution, and better when the output is nonnormal. We also study the effect of the reference sample size and the subgroup size on the performance of the proposed charts. A numerical example illustrates the design and implementation of the proposed charts.
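
A minimal sketch of a one-sided rank-sum CUSUM in the spirit of the proposed charts: each subgroup is ranked against a fixed reference sample, the Wilcoxon rank-sum statistic is standardized, and the standardized score drives the CUSUM recursion. The allowance k, the sample sizes, and the data are illustrative choices; the paper's design details (control limits, run-length analysis) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.normal(0, 1, 100)   # Phase-I (in-control) reference sample
m, n = len(reference), 5            # reference size, subgroup size

def standardized_ranksum(subgroup):
    combined = np.concatenate([reference, subgroup])
    ranks = combined.argsort().argsort() + 1          # ranks 1..m+n (no ties)
    w = ranks[m:].sum()                               # rank sum of subgroup
    mu = n * (m + n + 1) / 2
    sigma = np.sqrt(m * n * (m + n + 1) / 12)
    return (w - mu) / sigma

cusum, k = 0.0, 0.5   # allowance k (design parameter, illustrative)
for t in range(1, 21):
    shift = 0.0 if t <= 10 else 1.0                   # mean shift at t = 11
    z = standardized_ranksum(rng.normal(shift, 1, n))
    cusum = max(0.0, cusum + z - k)                   # one-sided CUSUM
    print(f"t={t:2d}  C+={cusum:5.2f}")
```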


IEEE Transactions on Reliability | 2007

Modeling and Analysis of Software Fault Detection and Correction Process by Considering Time Dependency

Y. P. Wu; Qingpei Hu; Min Xie; Szu Hui Ng

Software reliability modeling and estimation play a critical role in software development, particularly during the software testing stage. Although there are many research papers on this subject, few address the realistic time delays between the fault detection and fault correction processes. This paper investigates an approach to incorporate the time dependencies between these two processes, focusing on parameter estimation of the combined model. Maximum likelihood estimates of the combined model are derived from an explicit likelihood formula under various time-delay assumptions. Characteristics of the combined model, such as its predictive capability, are also analyzed and compared with the traditional least squares estimation method. Furthermore, we present a direct, useful application of the proposed model and estimation method to the classical optimal release time problem faced by software decision makers. The results illustrate the effect of time delay on the optimal release policy and the overall software development cost.
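
To make the joint-likelihood idea concrete, here is a hedged sketch assuming a Goel-Okumoto mean value function for detection and a constant detection-to-correction delay (the paper treats several delay forms); the counts and starting values are made up.

```python
import numpy as np
from scipy.optimize import minimize

t = np.arange(1, 11, dtype=float)                        # test weeks
detected  = np.array([12, 10, 9, 7, 6, 5, 4, 3, 3, 2])   # toy fault counts
corrected = np.array([ 5,  9, 9, 8, 7, 5, 4, 4, 3, 2])

def mvf(t, a, b):                         # Goel-Okumoto mean value function
    return a * (1.0 - np.exp(-b * np.maximum(t, 0.0)))

def neg_loglik(theta):
    a, b, delay = theta
    md = mvf(t, a, b)                     # cumulative detections
    mc = mvf(t - delay, a, b)             # corrections = delayed detections
    lam_d = np.maximum(np.diff(md, prepend=0.0), 1e-9)   # per-interval means
    lam_c = np.maximum(np.diff(mc, prepend=0.0), 1e-9)
    # Poisson log-likelihood (constant terms dropped)
    return -(detected * np.log(lam_d) - lam_d
             + corrected * np.log(lam_c) - lam_c).sum()

res = minimize(neg_loglik, x0=[80.0, 0.2, 1.0],
               bounds=[(1, None), (1e-4, 5), (0, 5)], method="L-BFGS-B")
print("MLE of (a, b, delay) =", res.x)
```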


Information & Software Technology | 2011

Reliability analysis and optimal version-updating for open source software

Xiang Li; Yan-Fu Li; Min Xie; Szu Hui Ng

Context: Although reliability is a major concern of most open source projects, research on this problem has attracted attention only recently. In addition, optimal version-updating for open source software, considering its special properties, has not yet been discussed. Objective: In this paper, reliability analysis and optimal version-updating for open source software are studied. Method: A modified non-homogeneous Poisson process model is developed for open source software reliability modeling and analysis. Based on this model, optimal version-updating is investigated as well. In the decision process, the rapid release strategy and the level of reliability are the two most important factors; however, they essentially conflict with each other. To consider these two factors simultaneously, a new decision model based on multi-attribute utility theory is proposed. Results: Our models are tested on real-world data sets from two well-known open source projects: Apache and GNOME. It is found that traditional software reliability models overestimate the reliability of open source software. In addition, the proposed decision model can help management make a rational decision on optimal version-updating. Conclusion: Empirical results reveal that the proposed reliability model describes the failure process of open source software more accurately. Furthermore, the proposed decision model can assist management in appropriately determining the optimal version-update time.
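
A toy rendering of the decision trade-off (not the paper's fitted NHPP or utility functions): earlier version updates score higher on the rapid-release attribute, later updates score higher on reliability, and a weighted multi-attribute utility picks the compromise. All parameters below are assumptions.

```python
import numpy as np

a, b = 100.0, 0.05            # toy Goel-Okumoto NHPP parameters
w_speed, w_rel = 0.4, 0.6     # attribute weights (hypothetical)

T = np.linspace(1, 120, 500)  # candidate version-update times (days)
# Approx. probability of no failure in the day after release at time T:
# m(T+1) - m(T) is roughly b * a * exp(-b * T) for small b.
reliability = np.exp(-a * np.exp(-b * T) * b)

u_speed = 1.0 - T / T.max()                                  # earlier is better
u_rel = (reliability - reliability.min()) / np.ptp(reliability)  # later is better

utility = w_speed * u_speed + w_rel * u_rel
print("optimal version-update time ~", T[np.argmax(utility)], "days")
```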


IIE Transactions | 2004

A model for correlated failures in N-version programming

Yuan-Shun Dai; Min Xie; Kim-Leng Poh; Szu Hui Ng

The multi-version programming technique is a method to increase the reliability of safety-critical software. In this technique, a number of versions are developed and a voting scheme is used before a final result is provided. In the analysis of this type of system, a common assumption is independence of the different versions. However, the versions are usually interdependent, and failures are correlated due to the nature of the product design and development: one version may fail simultaneously with another because of a common cause. In this paper, a model for these dependent failures is developed and studied. Using the developed model, a reliability function can be easily computed. A method is also proposed to estimate the parameters of the model. Finally, as an application of the developed model, an optimal testing resource allocation problem is formulated and a genetic algorithm is presented to solve it.
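
The following sketch shows one simple way to inject a common cause into a 2-out-of-3 voting scheme, in the spirit of the dependence the paper models; the beta-factor-style structure and all probabilities are illustrative, not the paper's model.

```python
from itertools import product

# 3-version programming with majority (2-of-3) voting: each version can
# fail independently, and a shared-cause event fails every version at once.

p = [0.05, 0.08, 0.06]   # independent failure probabilities (hypothetical)
q = 0.02                 # probability of a common-cause failure (hypothetical)

def system_reliability():
    ok = 0.0
    for fails in product([0, 1], repeat=3):    # independent failure patterns
        prob = 1.0
        for f, pi in zip(fails, p):
            prob *= pi if f else (1 - pi)
        if sum(fails) <= 1:                    # majority still correct
            ok += prob
    return (1 - q) * ok                        # and no common-cause event

print(f"2-of-3 reliability with common cause: {system_reliability():.4f}")
print(f"ignoring the common cause           : {system_reliability() / (1 - q):.4f}")
```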


ACM Transactions on Modeling and Computer Simulation | 2006

Reducing parameter uncertainty for stochastic systems

Szu Hui Ng; Stephen E. Chick

The design of many production and service systems is informed by stochastic model analysis. But the parameters of the statistical distributions of stochastic models are rarely known with certainty and are often estimated from field data. Even if the mean system performance is a known function of the model's parameters, there may still be uncertainty about the mean performance because the parameters are not known precisely. Several methods have been proposed to quantify this uncertainty, but data sampling plans have not yet been provided to reduce parameter uncertainty in a way that effectively reduces uncertainty about mean performance. The optimal solution is challenging, so we use asymptotic approximations to obtain closed-form results for sampling plans. The results apply to a wide class of stochastic models, including situations where the mean performance is unknown but estimated with simulation. Analytical and empirical results for the M/M/1 queue, a quadratic response-surface model, and a simulated critical care facility illustrate the ideas.
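
A small Monte Carlo sketch of the underlying issue for the M/M/1 example: arrival and service rates estimated from finite samples make the plug-in estimate of mean waiting time uncertain, and more data shrinks that uncertainty. The sample sizes and replication counts are arbitrary, and the paper derives asymptotic, closed-form sampling plans rather than simulating them as done here.

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true, mu_true = 0.8, 1.0   # true arrival and service rates

def perf_uncertainty(n_arrival, n_service, reps=5000):
    """Std. dev. of the plug-in estimate of mean time in system, 1/(mu-lam)."""
    w = []
    for _ in range(reps):
        # Sum of n exponentials is Gamma(n); MLE of a rate is n / (sum of times).
        lam_hat = n_arrival / rng.gamma(n_arrival, 1 / lam_true)
        mu_hat = n_service / rng.gamma(n_service, 1 / mu_true)
        if mu_hat > lam_hat:                  # keep only stable estimates
            w.append(1.0 / (mu_hat - lam_hat))
    return np.std(w)

for n in (50, 200, 800):
    print(f"n = {n:4d} per source: sd of estimated mean wait = "
          f"{perf_uncertainty(n, n):.3f}")
```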


IIE Transactions | 2012

Element maintenance and allocation for linear consecutively connected systems

Rui Peng; Min Xie; Szu Hui Ng; Gregory Levitin

This article considers optimal maintenance and allocation of elements in a Linear Multi-state Consecutively Connected System (LMCCS), which is important in signal transmission and other network systems. The system consists of N+1 linearly ordered positions (nodes) and fails if the first node (source) is not connected with the final node (sink). The reliability of an LMCCS has been studied in the past, but only for the case where each system element has constant reliability. In practice, system elements usually fail with increasing probability due to aging effects. Furthermore, resources can be put into the maintenance of each element to increase its availability and hence the availability of the system. In this article, a framework is proposed to find the cost-optimal maintenance and allocation strategy for this type of system subject to an availability requirement. A universal generating function is used to estimate the availability of the system, and a genetic algorithm is adopted for optimization. Illustrative examples are presented.
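
For intuition, here is an exact brute-force availability evaluation for a tiny LMCCS, used in place of the paper's universal generating function approach (which scales to realistic sizes); the availabilities and connection ranges are invented for the example.

```python
from itertools import product

# The element at node i, when up, links node i to the next reach[i] nodes;
# the system works if node 0 (source) can reach node N (sink).

avail = [0.9, 0.8, 0.95, 0.85]   # element availabilities at nodes 0..3
reach = [2, 1, 2, 1]             # nodes ahead each element can connect
N = len(avail)                   # sink is node N

def system_availability():
    total = 0.0
    for state in product([0, 1], repeat=N):    # all up/down combinations
        prob = 1.0
        for up, a in zip(state, avail):
            prob *= a if up else (1 - a)
        far = 0                                # farthest reachable node so far
        for i in range(N):
            if i > far:
                break                          # node i itself is unreachable
            if state[i]:
                far = max(far, i + reach[i])
        if far >= N:                           # sink reached
            total += prob
    return total

print(f"system availability = {system_availability():.4f}")
```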


Computers & Industrial Engineering | 2011

Kriging metamodel with modified nugget-effect: The heteroscedastic variance case

J. Yin; Szu Hui Ng; Kien Ming Ng

Metamodels are commonly used to approximate and analyze simulation models. However, when the simulation output variances are non-zero and not constant, many current metamodels, which assume homogeneous variance, fail to provide satisfactory estimation. In this paper, we present a kriging model with a modified nugget-effect adapted for simulations with heterogeneous variances. The new model improves the estimation of the sensitivity parameters by explicitly accounting for location-dependent, non-constant variances, and smooths the kriging predictor's output accordingly. We examine the effects of stochastic noise on parameter estimation for the classic kriging model, which assumes deterministic outputs, and note that stochastic noise increases the variability of the classic parameter estimates. The nugget-effect, and the proposed modified nugget-effect, stabilize the estimated parameters and decrease the erratic behavior of the predictor by penalizing the likelihood function affected by stochastic noise. Several numerical examples suggest that the kriging model with modified nugget-effect outperforms both the kriging model with nugget-effect and the classic kriging model in heteroscedastic cases.
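
A bare-bones sketch of the modified nugget-effect idea, assuming a Gaussian kernel and known location-dependent noise variances (the paper estimates these quantities rather than assuming them): the heterogeneous noise enters the covariance diagonal point by point instead of as a single constant.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(0, 10, 12)
noise_var = 0.01 + 0.05 * X / 10          # heteroscedastic simulation noise
y = np.sin(X) + rng.normal(scale=np.sqrt(noise_var))   # noisy observations

def kernel(a, b, theta=1.0):
    """Gaussian (squared-exponential) correlation between point sets."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

# Modified nugget: a location-dependent variance on the diagonal.
K = kernel(X, X) + np.diag(noise_var)

x_new = np.linspace(0, 10, 5)
k_star = kernel(x_new, X)
pred = k_star @ np.linalg.solve(K, y)     # kriging predictor (zero prior mean)
print(np.round(pred, 3))
```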

Collaboration


Dive into Szu Hui Ng's collaborations.

Top Co-Authors

Min Xie | City University of Hong Kong
Rui Peng | University of Science and Technology Beijing
Qingpei Hu | Chinese Academy of Sciences
Gregory Levitin | Israel Electric Corporation
Jun Yuan | National University of Singapore
Xiang Li | National University of Singapore
Chengjie Xiong | National University of Singapore
Jun Yin | National University of Singapore