A. K. Qin
RMIT University
Publications
Featured research published by A. K. Qin.
congress on evolutionary computation | 2014
Borhan Kazimipour; Xiaodong Li; A. K. Qin
Although various population initialization techniques have been employed in evolutionary algorithms (EAs), a comprehensive survey of this research topic is still lacking. To fill this gap and attract more attention from EA researchers to this crucial yet less explored area, we conduct a systematic review of existing population initialization techniques. Specifically, we categorize initialization techniques from three exclusive perspectives, i.e., randomness, compositionality and generality. Characteristics of the techniques belonging to each category are carefully analysed, further leading to several sub-categories. We also discuss several open issues related to this research topic, which demand further in-depth investigation.
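As a toy illustration of the randomness perspective mentioned above (this is a generic sketch, not code from the paper; all names and parameter choices are illustrative), the snippet below contrasts a plain pseudo-random initializer with a stratified Latin-hypercube one, in which each dimension is divided into equal strata that each receive exactly one sample:

```python
import random

def uniform_init(pop_size, dim, low, high, seed=0):
    """Pseudo-random initializer: every coordinate drawn i.i.d. uniformly."""
    rng = random.Random(seed)
    return [[rng.uniform(low, high) for _ in range(dim)]
            for _ in range(pop_size)]

def latin_hypercube_init(pop_size, dim, low, high, seed=0):
    """Stratified initializer: per dimension, split [low, high] into
    pop_size equal strata and place exactly one sample in each."""
    rng = random.Random(seed)
    pop = [[0.0] * dim for _ in range(pop_size)]
    width = (high - low) / pop_size
    for d in range(dim):
        strata = list(range(pop_size))
        rng.shuffle(strata)  # random stratum assignment per dimension
        for i, s in enumerate(strata):
            pop[i][d] = low + (s + rng.random()) * width
    return pop
```

The stratified variant guarantees coverage of every stratum in every dimension, whereas the basic generator offers no such guarantee.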
congress on evolutionary computation | 2013
A. K. Qin; Xiaodong Li
Differential evolution (DE) is one of the most powerful continuous optimizers in the field of evolutionary computation. This work systematically benchmarks a classic DE algorithm (DE/rand/1/bin) on the CEC-2013 single-objective continuous optimization testbed. We report, for each test function at each tested problem dimensionality, the best performance achieved among a wide range of potentially effective parameter settings. This reflects the intrinsic optimization capability of DE/rand/1/bin on this testbed and can serve as a baseline for performance comparison in future research using this testbed. Furthermore, we conduct a parameter sensitivity analysis using advanced non-parametric statistical tests to discover statistically significantly superior parameter settings. This analysis provides a statistically reliable rule of thumb for choosing the parameters of DE/rand/1/bin when solving unseen problems. Moreover, we report the performance of DE/rand/1/bin using one superior parameter setting advocated by the parameter sensitivity analysis.
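The benchmarked algorithm itself is standard; a minimal, self-contained sketch of DE/rand/1/bin follows (the default parameter values here are common textbook choices, not the settings advocated in the paper):

```python
import random

def de_rand_1_bin(f, bounds, pop_size=20, F=0.5, CR=0.9,
                  max_gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutation x_r1 + F*(x_r2 - x_r3),
    binomial crossover with rate CR, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one gene comes from the mutant
            trial = [
                min(max(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]),
                        bounds[j][0]), bounds[j][1])
                if (rng.random() < CR or j == jrand) else pop[i][j]
                for j in range(dim)
            ]
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

On a smooth unimodal function such as the sphere, even this bare-bones version converges quickly, which is why parameter settings rather than algorithmic structure are the focus of the benchmarking above.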
congress on evolutionary computation | 2013
Borhan Kazimipour; Xiaodong Li; A. K. Qin
Several population initialization methods for evolutionary algorithms (EAs) have been proposed previously. This paper categorizes the most well-known initialization methods and studies their effect on large-scale global optimization problems. Experimental results indicate that optimizing large-scale problems using EAs is more sensitive to the initial population than optimizing lower-dimensional problems. Statistical analysis of the results shows that basic random number generators, the most commonly used method for population initialization in EAs, lead to inferior performance. Furthermore, our study shows that, regardless of the size of the initial population, choosing a proper initialization method is vital for solving large-scale problems.
congress on evolutionary computation | 2014
Borhan Kazimipour; Xiaodong Li; A. K. Qin
This work provides an in-depth investigation of the effects of population initialization on Differential Evolution (DE) for large-scale optimization problems. Firstly, we conduct a statistical parameter sensitivity analysis to study the effects of DE's control parameters on its performance when solving large-scale problems. This study reveals the optimal parameter configurations that lead to statistically superior performance over the CEC-2013 large-scale test problems. Interestingly, the identified optimal configurations favour much larger population sizes than the most commonly employed parameter configuration while agreeing with its other parameter settings. Based on one of the identified optimal configurations and the most commonly used configuration, which differ only in population size, we investigate the influence of various population initialization techniques on DE's performance. This study indicates that initialization plays a more crucial role in DE with a smaller population size. However, this observation might result from insufficient convergence caused by using a large population size under a limited computational budget, which deserves further investigation.
IEEE Transactions on Neural Networks | 2017
Chong Zhang; Pin Lim; A. K. Qin; Kay Chen Tan
In numerous industrial applications where safety, efficiency, and reliability are among primary concerns, condition-based maintenance (CBM) is often the most effective and reliable maintenance policy. Prognostics, as one of the key enablers of CBM, involves the core task of estimating the remaining useful life (RUL) of the system. Neural-network-based approaches have produced promising results on RUL estimation, although their performance is influenced by handcrafted features and manually specified parameters. In this paper, we propose a multiobjective deep belief networks ensemble (MODBNE) method. MODBNE employs a multiobjective evolutionary algorithm integrated with the traditional DBN training technique to evolve multiple DBNs simultaneously subject to accuracy and diversity as two conflicting objectives. The eventually evolved DBNs are combined to establish an ensemble model used for RUL estimation, where combination weights are optimized via a single-objective differential evolution algorithm using a task-oriented objective function. We evaluate the proposed method on several prognostic benchmarking data sets and also compare it with some existing approaches. Experimental results demonstrate the superiority of our proposed method.
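The final combination step described above can be illustrated with a generic sketch (not the paper's implementation): base model outputs are blended with normalized non-negative weights, and the weights are tuned against a task-oriented error. For brevity the optimizer here is plain random search standing in for the single-objective differential evolution the paper uses; all function names are illustrative.

```python
import random

def ensemble_predict(weights, base_preds):
    """Blend base model outputs with normalized non-negative weights."""
    s = sum(weights)
    w = [wi / s for wi in weights]
    n = len(base_preds[0])
    return [sum(w[m] * base_preds[m][i] for m in range(len(w)))
            for i in range(n)]

def rmse(pred, target):
    return (sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)) ** 0.5

def fit_weights(base_preds, target, iters=2000, seed=0):
    """Stand-in for the paper's DE weight optimizer: random search
    over non-negative weight vectors, keeping the lowest-RMSE one."""
    rng = random.Random(seed)
    best_w, best_e = None, float("inf")
    for _ in range(iters):
        w = [rng.random() for _ in base_preds]
        e = rmse(ensemble_predict(w, base_preds), target)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e
```

Any population-based optimizer, including DE, can be dropped in place of the random search without changing the surrounding interface.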
Neurocomputing | 2016
Bo-Yang Qu; B.F. Lang; Jing J. Liang; A. K. Qin; O.D. Crisalle
As a single-hidden-layer feedforward neural network, an extreme learning machine (ELM) randomizes the weights between the input layer and the hidden layer as well as the bias of hidden neurons, and analytically determines the weights between the hidden layer and the output layer using the least-squares method. This paper proposes a two-hidden-layer ELM (denoted TELM) by introducing a novel method for obtaining the parameters of the second hidden layer (connection weights between the first and second hidden layer and the bias of the second hidden layer), hence bringing the actual hidden layer output closer to the expected hidden layer output in the two-hidden-layer feedforward network. Simultaneously, the TELM method inherits the randomness of the ELM technique for the first hidden layer (connection weights between the input layer and the first hidden layer and the bias of the first hidden layer). Experiments on several regression problems and some popular classification datasets demonstrate that the proposed TELM can consistently outperform the original ELM, as well as some existing multilayer ELM variants, in terms of average accuracy and the number of hidden neurons.
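The baseline ELM mechanism the paper builds on is compact enough to show directly. The following is a generic single-hidden-layer sketch (not the paper's TELM): input-to-hidden weights and biases are drawn at random, and only the output weights are solved analytically by least squares.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, seed=0):
    """Basic ELM: randomize input-side parameters, solve output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))  # input->hidden weights
    b = rng.uniform(-1, 1, size=n_hidden)                # hidden biases
    H = np.tanh(X @ W + b)                               # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)         # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

TELM's contribution is the extra step between two hidden layers: it computes the second layer's weights and biases so that the realized hidden output matches the output expected from the least-squares solution, rather than randomizing that layer as well.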
congress on evolutionary computation | 2014
Borhan Kazimipour; Mohammad Nabi Omidvar; Xiaodong Li; A. K. Qin
Opposition-based learning (OBL) and cooperative co-evolution (CC) have demonstrated promising performance when dealing with large-scale global optimization (LSGO) problems. In this work, we propose a novel framework for hybridizing these two techniques, and investigate the performance of simple implementations of this new framework using the most recent LSGO benchmarking test suite. The obtained results verify the effectiveness of our proposed OBL-CC framework. Moreover, advanced statistical analyses reveal that the proposed hybridization significantly outperforms its component methods in terms of the quality of the finally obtained solutions.
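The core OBL idea is simple and worth a sketch (a generic illustration, not the paper's framework): for each candidate, form its opposite by reflecting every coordinate through the midpoint of the search interval, then keep whichever half of the pooled points evaluates better.

```python
def opposite(x, low, high):
    """Opposite point: reflect each coordinate through the interval midpoint."""
    return [low + high - xi for xi in x]

def obl_init(pop, low, high, f):
    """Opposition-based selection: pool each point with its opposite
    and keep the better-evaluated half (assuming f is minimized)."""
    pool = pop + [opposite(x, low, high) for x in pop]
    pool.sort(key=f)
    return pool[:len(pop)]
```

In a CC setting, the same reflection can be applied per subcomponent, which is the kind of hybridization the framework above explores.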
international conference on web services | 2015
Sajib Mistry; Athman Bouguettaya; Hai Dong; A. K. Qin
We propose a novel composition framework for an Infrastructure-as-a-Service (IaaS) provider that selects the optimal set of long-term service requests to maximize its profit. Existing solutions consider an IaaS provider's economic benefits at the time of service composition and ignore the dynamic nature of consumer requests over a long-term period. The proposed framework deploys a new multivariate HMM and ARIMA model to predict different patterns of resource utilization and Quality of Service fluctuation tolerance levels of existing service consumers. The dynamic nature of new consumer requests with no history is modelled using a new community-based heuristic approach. The predicted long-term service requests are optimized using Integer Linear Programming to find a proper configuration that maximizes the profit of an IaaS provider. Experimental results prove the feasibility of the proposed approach.
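The selection step can be illustrated with a deliberately simplified stand-in for the paper's ILP (which in reality handles multiple resources and QoS constraints): with a single integer capacity and per-request (profit, demand) pairs, the problem reduces to a 0/1 knapsack solvable by dynamic programming. All names below are illustrative.

```python
def select_requests(requests, capacity):
    """Toy stand-in for the ILP: pick a subset of (profit, demand)
    requests maximizing total profit under one capacity constraint."""
    n = len(requests)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (profit, demand) in enumerate(requests, 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                       # skip request i-1
            if demand <= c:                                # or accept it
                dp[i][c] = max(dp[i][c], dp[i - 1][c - demand] + profit)
    # Backtrack to recover which requests were accepted.
    selected, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            selected.append(i - 1)
            c -= requests[i - 1][1]
    return dp[n][capacity], sorted(selected)
```

A real deployment would hand the multi-constraint version to an ILP solver; the DP above only conveys the shape of the objective.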
4th International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design | 2015
Allan Campbell; Vic Ciesielksi; A. K. Qin
We investigated the ability of a Deep Belief Network with logistic nodes, trained unsupervised by Contrastive Divergence, to discover features of evolved abstract art images. Two Restricted Boltzmann Machine models were trained independently on low and high aesthetic class images. The receptive fields (filters) of both models were compared by visual inspection. Roughly 10% of the filters in the high aesthetic model approximated the form of the high aesthetic training images. The remaining 90% of filters in the high aesthetic model and all filters in the low aesthetic model appeared noise-like. The form of the discovered filters was not consistent with the Gabor-filter-like forms discovered for MNIST training data, possibly revealing an interesting property of the evolved abstract training images. We joined the datasets and trained a Restricted Boltzmann Machine, finding that roughly 30% of the filters approximate the form of the high aesthetic input images. We trained a 10-layer Deep Belief Network on the joint dataset and used the output activities at each layer as training data for traditional classifiers (decision tree and random forest). The highest classification accuracy from learned features (84%) was achieved at the second hidden layer, indicating that the features discovered by our Deep Learning approach have discriminative power. Above the second hidden layer, classification accuracy decreases.
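The training procedure used throughout is standard CD-1 for binary RBMs; a minimal sketch (generic, not the paper's code) of one update is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, rng, lr=0.1):
    """One CD-1 step for a binary RBM: v0 -> h0 -> v1 -> h1, then update
    parameters from the gap between data and reconstruction statistics.
    W: visible-hidden weights; a: visible biases; b: hidden biases."""
    ph0 = sigmoid(v0 @ W + b)                   # hidden probs given data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0    # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + a)                 # reconstruction probs
    ph1 = sigmoid(pv1 @ W + b)                  # hidden probs given recon
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b
```

The filters inspected in the study are simply the columns of `W` reshaped back to image dimensions after many such updates.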
congress on evolutionary computation | 2014
A. K. Qin; Ke Tang; Hong Pan; Siyu Xia
Differential evolution (DE), as a very powerful population-based stochastic optimizer, is one of the most active research topics in the field of evolutionary computation. Self-adaptive differential evolution (SaDE) is a well-known DE variant, which aims to relieve the practical difficulty faced by DE in selecting, among many candidates, the most effective search strategy and its associated parameters. SaDE operates with multiple candidate strategies and gradually adapts the employed strategy and its accompanying parameter setting by learning from the preceding behavior of already applied strategies and their associated parameter settings. Although highly effective, SaDE concentrates more on exploration than exploitation. To enhance SaDE's exploitation capability while maintaining its exploration power, we incorporate local search chains into SaDE following two different paradigms (Lamarckian and Baldwinian) that differ in how local search results are utilized in SaDE. Our experiments are conducted on the CEC-2014 real-parameter single-objective optimization testbed. The statistical comparison results demonstrate that SaDE with Baldwinian local search chains, armed with suitable parameter settings, can significantly outperform original SaDE as well as classic DE at every tested problem dimensionality.
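The Lamarckian/Baldwinian distinction above is easy to pin down in code. The sketch below is a generic illustration (hill climbing stands in for whatever local search the chain uses; function names are illustrative): Lamarckian updating writes the refined genotype back into the population, while Baldwinian updating credits only the refined fitness and keeps the genotype unchanged.

```python
import random

def local_search(x, f, step=0.1, iters=20, rng=None):
    """Simple hill climbing used as the local-search step of the chain."""
    rng = rng or random.Random(0)
    best, fbest = list(x), f(x)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def apply_local_search(individual, fitness, f, mode):
    """Lamarckian: refined genotype AND fitness replace the individual.
    Baldwinian: only the refined fitness is credited; genotype is kept."""
    refined, frefined = local_search(individual, f)
    if frefined >= fitness:
        return individual, fitness
    if mode == "lamarckian":
        return refined, frefined
    return individual, frefined  # baldwinian
```

Under Baldwinian updating the population retains its diversity (genotypes are untouched), which is one intuition for why it preserved SaDE's exploration power in the experiments above.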