Shuxiang Xu
University of Tasmania
Publication
Featured research published by Shuxiang Xu.
Expert Systems With Applications | 2015
Zongyuan Zhao; Shuxiang Xu; Byeong Ho Kang; Mir Md. Jahangir Kabir; Yunling Liu; Rainer Wasinger
Highlights: we present an Average Random Choosing method that increases classification accuracy by 0.04; we investigate different MLP models and obtain a best model with an accuracy of 87%; accuracy increases as the model gains more hidden neurons. Multi-Layer Perceptron (MLP) neural networks are widely used in automatic credit scoring systems with high accuracy and efficiency. This paper presents a higher-accuracy credit scoring model based on MLP neural networks trained with the back-propagation algorithm. Our work focuses on enhancing credit scoring models in three aspects: (i) optimising the data distribution in datasets using a new method called Average Random Choosing; (ii) comparing the effects of different training-validation-test instance numbers; and (iii) finding the most suitable number of hidden units. We trained 34 models 20 times each with different initial weights and training instances. Each model has one hidden layer with 6 to 39 hidden units. Using the well-known German credit dataset we provide test results and a comparison between models, and we obtain a model with a classification accuracy of 87%, which is 5% higher than the best result reported in the relevant literature of recent years. We have also demonstrated that our optimisation of the dataset structure can increase a model's accuracy significantly in comparison with traditional methods. Finally, we summarise the tendency of models' scoring accuracy as the number of hidden units increases. The results of this work can be applied not only to credit scoring but also to other MLP neural network applications, especially when the distribution of instances in a dataset is imbalanced.
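A minimal sketch of the kind of experiment described above, varying the hidden-unit count of a one-hidden-layer MLP on the German credit dataset. The Average Random Choosing step is not reproduced; a plain stratified split and the OpenML copy of the dataset are assumptions made for illustration.

```python
# Illustrative only: vary the hidden-unit count of a one-hidden-layer MLP on
# the German credit dataset. A stratified split stands in for the paper's
# Average Random Choosing method, which is not reproduced here.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = fetch_openml("credit-g", version=1, as_frame=True)
X = pd.get_dummies(data.data)                  # one-hot encode categorical attributes
y = (data.target == "good").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for hidden in range(6, 40, 3):                 # the paper explores 6 to 39 hidden units
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000, random_state=0)
    clf.fit(X_tr, y_tr)
    print(hidden, round(clf.score(X_te, y_te), 3))
```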
International Conference on Information Technology and Applications | 2005
Shuxiang Xu; Ming Zhang
Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Artificial neural networks, one of the most commonly used techniques in data mining, provide nonlinear predictive models that learn through training and resemble biological neural networks in structure. This paper deals with a new adaptive neural network model: a feed-forward neural network with a new activation function, called the neuron-adaptive activation function. Experiments with function approximation and stock market movement analysis have been conducted to justify the new adaptive neural network model. Experimental results reveal that the new adaptive neural network model presents several advantages over traditional neuron-fixed feed-forward networks, such as a much reduced network size, faster learning, and more promising financial analysis.
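A rough sketch of the neuron-adaptive idea: hidden units carry trainable activation parameters that are tuned by back-propagation together with the connection weights. The specific functional form below (a sigmoid-plus-sine mixture) is an assumption for illustration, not the exact function used in the paper.

```python
# Minimal PyTorch sketch of a trainable ("neuron-adaptive") activation: each
# hidden unit owns free parameters a and b that are tuned by back-propagation
# together with the connection weights. The sigmoid-plus-sine form is assumed.
import torch
import torch.nn as nn

class AdaptiveActivation(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.a = nn.Parameter(torch.ones(width))   # per-unit activation parameters
        self.b = nn.Parameter(torch.zeros(width))

    def forward(self, x):
        return self.a * torch.sigmoid(x) + self.b * torch.sin(x)

net = nn.Sequential(nn.Linear(8, 16), AdaptiveActivation(16), nn.Linear(16, 1))
out = net(torch.randn(4, 8))       # activation parameters appear in net.parameters()
print(out.shape)
```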
International Symposium on Neural Networks | 2000
Shuxiang Xu; Ming Zhang
This paper presents an empirical justification of a neuron-adaptive activation function for feedforward neural networks. Simulation results reveal that feedforward neural networks with the proposed neuron-adaptive activation function present several advantages over traditional neuron-fixed feedforward networks, such as increased flexibility, a much reduced network size, faster learning, and lessened approximation errors.
International Symposium on Neural Networks | 2001
Shuxiang Xu; Ming Zhang
This paper deals with an experimental justification of a novel adaptive activation function for feedforward neural networks (FNNs). Simulation results reveal that FNNs with the proposed adaptive activation function present several advantages over traditional neuron-fixed feedforward networks, such as a much reduced network size, faster learning, and lessened approximation errors. Following the definition of the neuron-adaptive activation function, we conduct experiments with function approximation and financial data simulation, and present experimental outcomes that exhibit the advantages of FNNs with our neuron-adaptive activation function over traditional FNNs with fixed activation functions.
International Conference on Neural Information Processing | 2015
Mir Md. Jahangir Kabir; Shuxiang Xu; Byeong Ho Kang; Zongyuan Zhao
Data mining techniques involve extracting useful, novel and interesting patterns from large data sets. Traditional association rule mining algorithms generate a huge number of unnecessary rules because they use support and confidence values as the only constraints for measuring the quality of generated rules. Recently, several studies have defined the process of extracting association rules as a multi-objective problem, allowing researchers to optimize different measures that can be present to different degrees depending on the data sets used. Applying evolutionary algorithms to the noisy data of a large data set is especially useful for automatic data processing and for discovering meaningful and significant association rules. Since the beginning of the last decade, multi-objective evolutionary algorithms have gradually become more useful in data mining research. In this paper, we propose a new multi-objective evolutionary algorithm, MBAREA, for mining useful Boolean association rules with low computational cost. To accomplish this, our proposed method extends a recent multi-objective evolutionary algorithm based on a decomposition technique to perform evolutionary learning of a fitness value for each rule, while introducing a best population and a class-based mutation method to store all the best rules obtained at intermediate generations of a population and to improve the diversity of the obtained rules. Moreover, the approach maximizes two objectives, performance and interestingness, to obtain rules that are useful, easy to understand and interesting. The proposed algorithm is applied to different real-world data sets to demonstrate its effectiveness, and the results are compared with those of existing evolutionary-algorithm-based approaches.
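As a small illustration of scoring a Boolean rule on two objectives, the sketch below computes confidence as the "performance" measure and a lift-style value as the "interestingness" measure for a candidate rule; the exact measures optimised by MBAREA and the toy transactions are assumptions.

```python
# Toy scoring of a Boolean rule X -> Y on two objectives: confidence as the
# "performance" measure and a lift-style value as the "interestingness"
# measure. The transactions and the exact measures are illustrative only.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}]

def support(items):
    return sum(1 for t in transactions if items <= t) / len(transactions)

def objectives(antecedent, consequent):
    sup_x, sup_y = support(antecedent), support(consequent)
    sup_xy = support(antecedent | consequent)
    confidence = sup_xy / sup_x if sup_x else 0.0
    lift = confidence / sup_y if sup_y else 0.0
    return confidence, lift        # both are maximised by the evolutionary search

print(objectives({"a"}, {"c"}))
```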
Congress on Evolutionary Computation | 2015
Mir Md. Jahangir Kabir; Shuxiang Xu; Byeong Ho Kang; Zongyuan Zhao
In the data mining research area, discovering frequent item sets is an important issue and a key factor for mining association rules. For large datasets, a huge number of frequent patterns are generated for a low support value, which is a major challenge in frequent pattern mining tasks. Maximal frequent pattern mining helps to resolve this problem, since a maximal frequent pattern contains information about a large number of small frequent sub-patterns. For this study we have developed a genetic-algorithm-based approach to find maximal frequent patterns using a user-defined threshold value as a constraint. For optimizing search problems, a genetic algorithm is one of the best choices: it mimics the natural selection procedure and uses a global search mechanism, which is well suited to finding solutions when the search space is large. Evolutionary algorithms are also effective when solutions cannot be determined in advance. This approach therefore uses a genetic algorithm to find maximal frequent item sets from different sorts of data sets. A low support value generates some large patterns that contain information about a huge number of small frequent sub-patterns, which could be useful for mining association rules. We have applied this genetic-based approach to different real data sets as well as synthetic data sets. The experimental results show that our proposed approach evaluates fewer nodes than the number of candidate item sets considered by the Apriori algorithm, especially when the support value is set low.
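A toy sketch of the general idea: itemsets are encoded as bit vectors and a genetic search favours large itemsets whose support meets a user-defined threshold. The fitness, operators and data below are illustrative, not the paper's exact design.

```python
# Toy genetic search for large frequent itemsets: itemsets are bit vectors
# over the item universe, and fitness rewards size once the user-defined
# support threshold is met. Data, operators and parameters are illustrative.
import random

transactions = [{0, 1, 2}, {0, 2}, {1, 2, 3}, {0, 1, 2, 3}, {2, 3}]
n_items, min_sup = 4, 0.4

def support(items):
    return sum(1 for t in transactions if items <= t) / len(transactions)

def fitness(bits):
    items = {i for i, b in enumerate(bits) if b}
    if not items or support(items) < min_sup:
        return 0
    return len(items)              # favour larger itemsets that stay frequent

def mutate(bits, p=0.2):
    return [b ^ (random.random() < p) for b in bits]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(n_items)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]      # elitist selection, then mutation-only offspring
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = max(population, key=fitness)
items = {i for i, b in enumerate(best) if b}
print(items, support(items))
```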
International Journal of Computers and Applications | 2007
Ming Zhang; Shuxiang Xu; John Fulcher
We propose a new neural network model, the Neuron-Adaptive artificial Neural Network (NAN). A learning algorithm is derived to tune both the free parameters of the neuron activation function and the connection weights between neurons. We proceed to prove that a NAN can approximate any piecewise continuous function to any desired accuracy, and then relate the approximation properties of NAN models to some special mathematical functions. A neuron-adaptive Artificial Neural-network System for Estimating Rainfall (ANSER), which uses the NAN as its basic reasoning network, is described. Empirical results show that the NAN model performs about 1.8% better than artificial neural network groups, and around 16.4% better than classical artificial neural networks, on a rainfall-estimate experimental database. The empirical results also show that by using the NAN model, ANSER plus can (1) automatically compute rainfall amounts ten times faster; and (2) reduce the average error of rainfall estimates for the total precipitation event to less than 10%.
Australian Joint Conference on Artificial Intelligence | 2002
Shuxiang Xu; Ming Zhang
This paper deals with higher order feed-forward neural networks with a new activation function, the neuron-adaptive activation function. Experiments with function approximation and stock market movement simulation have been conducted to justify the new activation function. Experimental results reveal that higher order feed-forward neural networks with the new neuron-adaptive activation function present several advantages over traditional neuron-fixed higher order feed-forward networks, such as a much reduced network size, faster learning, and more accurate financial data simulation.
Expert Systems With Applications | 2017
Mir Md. Jahangir Kabir; Shuxiang Xu; Byeong Ho Kang; Zongyuan Zhao
Highlights: a new multiple-seeds-based genetic algorithm is proposed; the method relies on generating multiple seeds from different domains; the scheme introduces an m-domain model and an m-seeds selection process; multiple seeds are used to generate an effective initial population; experiments were conducted to show the efficiency of the proposed method. Association rule mining algorithms mostly use a randomly generated single seed to initialize a population, without paying attention to the effectiveness of that population in evolutionary learning. Recent research has shown the significant impact of the initial population on the production of good solutions over several generations of a genetic algorithm. Single-seed-based genetic algorithms suffer from two major challenges: (1) the solutions of a genetic algorithm vary, since different seeds generate different initial populations, and (2) it is difficult to define a good seed for a specific application. To avoid these problems, in this paper we propose the MSGA, a new multiple-seeds-based genetic algorithm which generates multiple seeds from different domains of a solution space to discover high-quality rules from a large data set. This scheme introduces an m-domain model and an m-seeds selection process through which the whole solution space is subdivided into m domains of the same size, with a seed selected from each domain. Using these seeds enables the method to generate an effective initial population for evolutionary learning of the fitness value of each rule. As a result, strong searching efficiency is obtained at the beginning of the evolution, achieving fast convergence. The MSGA is tested with different mutation and crossover operators for mining interesting Boolean association rules from four real-world data sets, and the results are compared to those of different single-seed-based genetic algorithms under the same conditions.
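The sketch below illustrates one way the m-domain / m-seeds idea could look: the bit-string solution space is split into m equal integer ranges, one seed is drawn from each range, and the initial population is built by perturbing those seeds. The seed selection rule and perturbation rate are assumptions, not the MSGA's exact procedure.

```python
# Illustrative multiple-seeds initialisation: split the bit-string solution
# space into m equal integer ranges, draw one seed from each range, and build
# the initial population by perturbing the seeds. The perturbation rate and
# seed selection rule are assumptions, not the MSGA's exact procedure.
import random

def init_population(n_bits, m, pop_size, rng=None):
    rng = rng or random.Random(0)
    span = 2 ** n_bits // m
    seeds = []
    for d in range(m):                           # one seed per domain
        v = rng.randrange(d * span, (d + 1) * span)
        seeds.append([int(c) for c in format(v, f"0{n_bits}b")])
    population = []
    while len(population) < pop_size:            # perturb the seeds round-robin
        s = seeds[len(population) % m]
        population.append([b ^ (rng.random() < 0.1) for b in s])
    return seeds, population

seeds, pop = init_population(n_bits=12, m=4, pop_size=40)
print(len(seeds), len(pop))
```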
Computer Science and Software Engineering | 2008
Shuxiang Xu; L Chen
This paper introduces an adaptive Higher Order Neural Network (HONN) model and applies it to data mining tasks such as simulating and forecasting government taxation revenues. The proposed adaptive HONN model offers significant advantages over conventional Artificial Neural Network (ANN) models, such as a much reduced network size, faster training, and much reduced simulation and forecasting errors. The generalization ability of this HONN model is explored and discussed. A new approach for determining the best number of hidden neurons is also proposed.
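A rough sketch of the higher-order flavour of such a model: lagged inputs are augmented with second-order (cross-product) terms before a small network fits a synthetic revenue series. The paper's adaptive activation and hidden-neuron selection rule are not reproduced, and the data below are synthetic.

```python
# Rough sketch of the higher-order flavour: lagged inputs are augmented with
# second-order (cross-product) terms before a small network fits a synthetic
# revenue series. The paper's adaptive activation and hidden-neuron selection
# rule are not reproduced here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
revenue = 50 + 0.8 * t + 5 * np.sin(t / 6) + rng.normal(0, 1, t.size)

X = np.column_stack([revenue[:-2], revenue[1:-1]])       # two lagged values
y = revenue[2:]
X_ho = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_ho[:80], y[:80])
print(round(model.score(X_ho[80:], y[80:]), 3))          # out-of-sample R^2
```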