Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lixiong Xu is active.

Publication


Featured research published by Lixiong Xu.


Computational Intelligence and Neuroscience | 2015

MapReduce based parallel neural networks in enabling large scale machine learning

Yang Liu; Jie Yang; Yuan Huang; Lixiong Xu; Siguang Li; Man Qi

Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation, especially when the size of the data is large. Nowadays, big data has gained momentum in both industry and academia. To fulfill the potential of ANNs for big data applications, the computation process must be sped up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model for data-intensive applications. Three data-intensive scenarios are considered in the parallelization process, in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated on an experimental MapReduce computer cluster in terms of classification accuracy and computational efficiency.
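
The data-parallel scheme the abstract describes can be illustrated with a minimal map/reduce simulation in plain Python: each "mapper" trains an independent sub-classifier on its shard of the training data, and the "reducer" combines the sub-classifiers by majority vote. This is a sketch of the general idea, not the paper's Hadoop implementation; the perceptron model and toy dataset are our own stand-ins.

```python
# Minimal sketch of data-parallel ANN training: each mapper trains a
# sub-network on one shard; the reducer ensembles them by majority vote.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=600):
    """Toy 2-class dataset: label is 1 when x0 + x1 > 0."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y

def map_train(shard):
    """Mapper: train a tiny perceptron on one data shard."""
    X, y = shard
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    for _ in range(20):                        # a few epochs suffice here
        for xi, yi in zip(Xb, y):
            pred = int(xi @ w > 0)
            w += 0.1 * (yi - pred) * xi        # perceptron update rule
    return w

def reduce_vote(weights, X):
    """Reducer: majority vote over the sub-networks' predictions."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    votes = np.stack([(Xb @ w > 0).astype(int) for w in weights])
    return (votes.mean(axis=0) > 0.5).astype(int)

X, y = make_data()
shards = [(X[i::4], y[i::4]) for i in range(4)]   # split phase: 4 shards
weights = [map_train(s) for s in shards]          # map phase
pred = reduce_vote(weights, X)                    # reduce phase
print("ensemble accuracy:", (pred == y).mean())
```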


International Journal of Parallel Programming | 2017

The Parallelization of Back Propagation Neural Network in MapReduce and Spark

Yang Liu; Lixiong Xu; Maozhen Li

Artificial neural networks have proved to be effective for recognition, regression, and classification tasks. A number of neural network implementations have been developed, for example the Hamming, Grossberg, and Hopfield networks. Among these, the back propagation neural network (BPNN) has become the most popular due to its strong function approximation and generalization abilities. However, BPNN is both data intensive and computationally intensive, so its efficiency suffers significantly in big data settings. This paper therefore presents a parallel BPNN algorithm based on data separation in three distributed computing environments: Hadoop, HaLoop, and Spark. Moreover, to improve the algorithm's accuracy, ensemble techniques are employed. The algorithm is first evaluated in a small-scale cluster and then in a commercial cloud computing environment. The experimental results indicate that the proposed algorithm improves the efficiency of BPNN while preserving its accuracy.
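
To make the data-separation strategy concrete, here is a hypothetical PySpark sketch (assuming a local pyspark installation): each partition trains an independent single-neuron "network", and the driver combines the sub-models, here by simple parameter averaging, one of several possible ensemble schemes. All names and the toy model are illustrative, not from the paper.

```python
# Hypothetical PySpark sketch of the data-separation strategy: each partition
# trains its own model; the driver ensembles the results by averaging.
import math
import random
from pyspark import SparkContext

def train_on_partition(rows):
    """Fit a 1-neuron logistic unit to this partition via SGD."""
    rows = list(rows)
    w0 = w1 = b = 0.0
    for _ in range(50):
        for x, y in rows:
            z = max(-30.0, min(30.0, w0 * x[0] + w1 * x[1] + b))
            p = 1.0 / (1.0 + math.exp(-z))     # sigmoid activation
            g = p - y                          # gradient of log loss
            w0 -= 0.1 * g * x[0]
            w1 -= 0.1 * g * x[1]
            b -= 0.1 * g
    yield (w0, w1, b)

random.seed(1)
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(300)]
data += [((random.gauss(2, 1), random.gauss(2, 1)), 1) for _ in range(300)]

sc = SparkContext("local[4]", "bpnn-ensemble-sketch")
models = sc.parallelize(data, 4).mapPartitions(train_on_partition).collect()
sc.stop()

# Ensemble step: average the sub-models' parameters.
w0 = sum(m[0] for m in models) / len(models)
w1 = sum(m[1] for m in models) / len(models)
b = sum(m[2] for m in models) / len(models)
acc = sum((1 if w0 * x[0] + w1 * x[1] + b > 0 else 0) == y
          for (x, y) in data) / len(data)
print("averaged-model accuracy:", acc)
```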


Computational Intelligence and Neuroscience | 2016

Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model

Yang Liu; Weizhe Jing; Lixiong Xu

The Artificial Neural Network (ANN) is a widely used algorithm in pattern recognition, classification, and prediction. Among the many neural network variants, the backpropagation neural network (BPNN) has become the best known due to its remarkable function approximation ability. However, a standard BPNN performs a large number of sum and sigmoid calculations, which may result in low efficiency when dealing with large volumes of data. Parallelizing BPNN with distributed computing technologies is therefore an effective way to improve its efficiency, but traditional parallelization may cause accuracy loss, and despite several proposed remedies it remains difficult to strike a balance between efficiency and precision. This paper presents a parallelized BPNN based on the MapReduce computing model, which supplies advanced features including fault tolerance, data replication, and load balancing. To improve precision, the paper also introduces a cascading-model-based classification approach, which helps to refine the classification results. The experimental results indicate that the presented parallelized BPNN offers high efficiency whilst maintaining excellent precision in enabling large-scale machine learning.
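
One way to read the cascading idea, sketched below under our own assumptions rather than the paper's exact design: a first-stage classifier handles the samples it is confident about, and a second-stage classifier is trained only on the samples the first stage finds ambiguous, refining the overall result. The logistic models, the 0.2 confidence band, and the toy data are all illustrative choices.

```python
# Illustrative cascading classifier: stage 1 classifies confident samples;
# ambiguous ones are routed to a stage-2 model trained on the hard subset.
import numpy as np

rng = np.random.default_rng(2)

def train_logistic(X, y, epochs=200, lr=0.5):
    """Plain batch gradient descent on logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def proba(w, X):
    return 1 / (1 + np.exp(-X @ w))

# Toy data: two overlapping Gaussian blobs, so some samples are ambiguous.
X = np.vstack([rng.normal(-1, 1.2, (400, 2)), rng.normal(1, 1.2, (400, 2))])
X = np.hstack([X, np.ones((len(X), 1))])          # bias column
y = np.array([0] * 400 + [1] * 400)

w1 = train_logistic(X, y)                         # stage 1 on all data
p1 = proba(w1, X)
hard = np.abs(p1 - 0.5) < 0.2                     # low-confidence samples
w2 = train_logistic(X[hard], y[hard])             # stage 2 on hard subset

pred = np.where(hard, proba(w2, X) > 0.5, p1 > 0.5).astype(int)
base = (p1 > 0.5).astype(int)
print("stage-1 only:", (base == y).mean(), " cascade:", (pred == y).mean())
```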


Concurrency and Computation: Practice and Experience | 2016

A MapReduce-based parallel K-means clustering for large-scale CIM data verification

Chuang Deng; Yang Liu; Lixiong Xu; Jie Yang; Junyong Liu; Siguang Li; Maozhen Li

The Common Information Model (CIM) has been heavily used in electric power grids for data exchange among a number of auxiliary systems such as communication systems, monitoring systems, and marketing systems. With the rapid deployment of digitalized devices in electric power networks, the volume of data continuously grows, which makes verification of CIM data a challenging issue. This paper presents a parallel K-means clustering algorithm for large-scale CIM data verification. The parallel K-means builds on the MapReduce computing model, which has been widely taken up by the community for data-intensive applications. A genetic algorithm-based load-balancing scheme is designed to balance the workloads among heterogeneous computing nodes for a further improvement in computation efficiency. The performance of the parallel K-means is initially evaluated in a small-scale in-house MapReduce cluster, subsequently in a commercial cloud computing platform, and finally in large-scale simulated MapReduce environments. Both the experimental and simulation results show that the parallel K-means reduces the CIM data-verification time significantly compared with sequential K-means clustering, while achieving a high level of precision in data verification.
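
The MapReduce decomposition of K-means that the abstract builds on can be sketched in a few lines: the map step assigns each point to its nearest centroid, the shuffle groups points by centroid, and the reduce step averages each group into a new centroid. The toy data below is invented, and the paper's GA-based load balancing is out of scope for this sketch.

```python
# One MapReduce round of K-means, simulated in plain Python.
import random

random.seed(3)
points = [(random.gauss(cx, 0.5), random.gauss(cx, 0.5))
          for cx in (0.0, 5.0, 10.0) for _ in range(100)]

def kmeans_round(points, centroids):
    # map: emit (nearest_centroid_index, point)
    pairs = []
    for p in points:
        d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
        pairs.append((d.index(min(d)), p))
    # shuffle: group points by centroid key
    groups = {}
    for k, p in pairs:
        groups.setdefault(k, []).append(p)
    # reduce: new centroid = mean of the group (empty clusters not handled)
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for _, g in sorted(groups.items())]

centroids = random.sample(points, 3)      # random initial centroids
for _ in range(10):                       # a few rounds converge on toy data
    centroids = kmeans_round(points, centroids)
print("centroids:", [(round(x, 2), round(y, 2)) for x, y in centroids])
```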


Scientific Programming | 2017

Parallelizing Gene Expression Programming Algorithm in Enabling Large-Scale Classification

Lixiong Xu; Yuan Huang; Xiaodong Shen; Yang Liu

As one of the most effective function mining algorithms, the Gene Expression Programming (GEP) algorithm has been widely used in classification, pattern recognition, prediction, and other research fields. Through self-evolution, GEP is able to mine an optimal function for complicated tasks. In big data applications, however, GEP suffers from low efficiency due to its long mining process. To improve the efficiency of GEP, especially for large-scale classification tasks, this paper presents a parallelized GEP algorithm based on the MapReduce computing model. The experimental results show that the presented algorithm is scalable and efficient for processing large-scale classification tasks.
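
For readers unfamiliar with how GEP "mines a function": a linear gene (a K-expression) decodes breadth-first into an expression tree, which is then evaluated and scored against the data. The sketch below shows that core mechanism with a deliberately tiny operator set and hand-picked genes; a real GEP run would evolve the gene population, and the syntax here is simplified for illustration.

```python
# Minimal sketch of GEP's core trick: a K-expression string decodes
# breadth-first into an expression tree that is evaluated as the mined function.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def decode(gene):
    """Breadth-first decode of a K-expression string into a nested list."""
    nodes = [[sym] for sym in gene]
    queue, used = [nodes[0]], 1
    for node in queue:
        if node[0] in OPS:                 # operators take two children
            node.extend(nodes[used:used + 2])
            queue.extend(nodes[used:used + 2])
            used += 2
    return nodes[0]

def evaluate(tree, x):
    sym = tree[0]
    if sym == 'x':                         # terminal symbol: the variable
        return x
    return OPS[sym](evaluate(tree[1], x), evaluate(tree[2], x))

def fitness(gene, samples):
    """Negative squared error of the decoded function against samples."""
    tree = decode(gene)
    return -sum((evaluate(tree, x) - y) ** 2 for x, y in samples)

# Target function: f(x) = x*x + x; the gene '+*xxx' decodes exactly to it.
samples = [(x, x * x + x) for x in range(-5, 6)]
for gene in ('+*xxx', '*+xxx', '-*xxx'):
    print(gene, '->', fitness(gene, samples))
```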


Concurrency and Computation: Practice and Experience | 2017

MapReduce-based parallel GEP algorithm for efficient function mining in big data applications

Yang Liu; Chenxiao Ma; Lixiong Xu; Xiaodong Shen; Maozhen Li; Pengcheng Li

The gene expression programming (GEP) algorithm is one of the most effective function mining algorithms for fitting mathematical equations to an input dataset. However, GEP suffers from low efficiency in big data processing because of the large overhead of its evolution process on large-scale data. To address this, the paper presents two parallelized GEP algorithms based on MapReduce. The first, based on data separation, aims at speeding up large-scale classification, but it cannot output the mined equation explicitly. The second, which builds on improvements to the first, mines the equation efficiently and outputs it explicitly and directly. The experimental results show that both algorithms are effective for processing large volumes of data.
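
The data-separation idea applied to fitness evaluation can be sketched as follows: each mapper computes a partial fitness (squared error) for every candidate function on its data shard, and the reducer sums the partials into a global fitness, after which the best candidate is the explicitly output equation. The candidates below are hand-written stand-ins; a real GEP run would evolve them.

```python
# Map/reduce sketch of data-separated fitness evaluation for function mining.
candidates = {
    "x*x + x": lambda x: x * x + x,
    "2*x*x":   lambda x: 2 * x * x,
    "x*x - x": lambda x: x * x - x,
}

data = [(x, x * x + x) for x in range(-20, 21)]          # target: x*x + x
shards = [data[i::4] for i in range(4)]                  # 4 data shards

def map_partial(shard):
    """Mapper: partial squared error of each candidate on this shard."""
    return {name: sum((f(x) - y) ** 2 for x, y in shard)
            for name, f in candidates.items()}

def reduce_sum(partials):
    """Reducer: sum per-shard errors into a global fitness per candidate."""
    totals = {}
    for part in partials:
        for name, err in part.items():
            totals[name] = totals.get(name, 0) + err
    return totals

totals = reduce_sum(map_partial(s) for s in shards)
best = min(totals, key=totals.get)
print("mined equation:", best, " total error:", totals[best])
```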


IEEE International Conference on Power System Technology | 2014

A GA based optimized PMU-location-decision algorithm considering WAMS reliability

Xiaowen Huang; Junyong Liu; Yang Liu; Lixiong Xu; Jiashi Yang; Yaqi Ni

Because serious blackouts occur unpredictably in power systems, the importance of the wide area measurement system (WAMS) has been highlighted. Using the data from phasor measurement units (PMUs) installed over the power grid, indices of power system security and stability can be retrieved, and the reflected dynamic information has enabled accurate power flow dispatch. However, one critical issue must be pointed out: monitoring devices are frequently assumed to be reliable, so secondary system failures are overlooked in power system reliability research. This can greatly impact system operations and dispatching decisions. To address the issue, researchers have proposed a number of algorithms that focus on improving PMU availability. However, an individual PMU cannot fully describe the state of the WAMS, which globally represents the power system state, and existing work gives little consideration to the possibility that secondary measuring system failures may feed back into and impact the primary system. Consequently, this paper proposes a GA-based optimized PMU-location-decision algorithm that considers WAMS reliability. First, the algorithm computes PMU availability using Markov chains. Second, to find the optimal PMU deployment, the influence of PMU failures is analysed through WAMS reliability assessment indices. Finally, by creating an energy-information system scenario relational set with additional EDNS, the previously unestimated behaviour of the primary system caused by PMU failures is analysed. The experimental results show that as the PMU failure rate rises, the WAMS failure rate rises as well, introducing uncertainty into system operation. They also indicate that redundant PMU configuration can significantly enhance system reliability.
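
A toy GA for PMU placement can illustrate the optimization structure the abstract describes: a bitstring marks which buses receive a PMU, and the fitness rewards observability while penalizing expected unavailability, with availability taken from a two-state Markov model's steady state, A = mu / (lambda + mu). The 6-bus topology, rates, and fitness weights below are all invented for illustration; they are not the paper's model.

```python
# Toy GA sketch for PMU placement under a reliability-aware fitness.
import random

random.seed(4)
# Invented 6-bus topology: a PMU observes its own bus and its neighbours.
ADJ = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
LAM, MU = 0.02, 1.0                    # failure / repair rates (invented)
A = MU / (LAM + MU)                    # 2-state Markov steady-state availability

def fitness(bits):
    observed = set()
    for bus, b in enumerate(bits):
        if b:
            observed |= {bus, *ADJ[bus]}
    coverage = len(observed) / len(ADJ)
    expected_down = sum(bits) * (1 - A)     # expected unavailable PMUs
    return coverage - 0.5 * expected_down - 0.05 * sum(bits)  # cost term

def ga(pop_size=30, gens=40):
    pop = [[random.randint(0, 1) for _ in ADJ] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(ADJ))  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(ADJ))       # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print("PMU at buses:", [i for i, b in enumerate(best) if b])
```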


International Conference on Fuzzy Systems and Knowledge Discovery | 2014

A MapReduce based parallel algorithm for CIM data verification

Yang Liu; Xiaodong Shen; Lixiong Xu; Maozhen Li

At present, the power system is built on top of a series of auxiliary systems, for example communication systems, monitoring systems, and marketing systems. All of these systems operate on shared power system data defined using the Common Information Model (CIM). For various reasons, errors may exist in the data, so verification technologies have been developed. So far, researchers have focused mainly on verification accuracy; however, as data volumes grow, efficiency has become an issue. This paper proposes a MapReduce-based CIM verification algorithm to improve efficiency. The experimental results show that it enhances the performance of verification.
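
A MapReduce-style consistency check over CIM-like data might look as follows: mappers emit each model element keyed by its ID plus every ID it references, and the reducer flags duplicate IDs and dangling references. The records and rules here are simplified stand-ins for real CIM/XML data, not the paper's verification logic.

```python
# Sketch of a MapReduce-style CIM consistency check on toy records.
records = [
    {"id": "T1", "type": "Transformer", "refs": ["B1", "B2"]},
    {"id": "B1", "type": "Bus", "refs": []},
    {"id": "B2", "type": "Bus", "refs": []},
    {"id": "L1", "type": "Line", "refs": ["B2", "B9"]},   # B9 does not exist
    {"id": "B1", "type": "Bus", "refs": []},              # duplicate ID
]

def mapper(rec):
    """Emit ('def', id) for the element itself and ('ref', id) per reference."""
    yield ("def", rec["id"])
    for r in rec["refs"]:
        yield ("ref", r)

def reduce_verify(pairs):
    """Flag IDs defined more than once and references to undefined IDs."""
    defined, referenced = {}, set()
    for kind, rid in pairs:
        if kind == "def":
            defined[rid] = defined.get(rid, 0) + 1
        else:
            referenced.add(rid)
    dup = [rid for rid, n in defined.items() if n > 1]
    dangling = sorted(referenced - defined.keys())
    return dup, dangling

pairs = [p for rec in records for p in mapper(rec)]       # map phase
dup, dangling = reduce_verify(pairs)                      # reduce phase
print("duplicate IDs:", dup, " dangling references:", dangling)
```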


International Conference on Natural Computation | 2016

Cascading model based back propagation neural network in enabling precise classification

Yang Liu; Weizhe Jing; Lixiong Xu

The Artificial Neural Network has been widely used in classification-related tasks such as image annotation, pattern recognition, and trend prediction. Among the many neural network algorithms based on various concepts, the Back Propagation Neural Network (BPNN) has become the best known due to its remarkable function approximation ability. BPNN employs feed forward and back propagation to tune the parameters of its neurons, and uses only feed forward to execute classification. Based on the gradient descent mechanism, BPNN can effectively find mapping relationships between input and output. However, BPNN faces one critical issue: due to its large number of sum and sigmoid operations, it is inefficient when dealing with large numbers of instances. To speed up processing, researchers have employed data separation, but simple data separation in the training phase results in a loss of classification accuracy. This paper therefore presents a cascading model based BPNN (CBPNN), which aims at enhancing the classification accuracy of BPNN; it further employs multi-threading to speed up CBPNN. The experimental results indicate that CBPNN improves classification precision whilst maintaining satisfactory efficiency.
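
On the multi-threading angle specifically: once a model is trained, inference over a large batch can be chunked and scored concurrently. A minimal sketch, assuming a single pre-trained logistic layer as a stand-in for the paper's CBPNN: numpy matrix products release the GIL, so a thread pool gives real overlap here.

```python
# Sketch of multi-threaded batch inference with a stand-in logistic layer.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(5)
W = rng.normal(size=(20, 1))            # pretend these are trained weights
X = rng.normal(size=(200_000, 20))      # large batch to classify

def score_chunk(chunk):
    """Feed-forward pass for one chunk: sigmoid(chunk @ W), thresholded."""
    return (1 / (1 + np.exp(-(chunk @ W))) > 0.5).ravel()

chunks = np.array_split(X, 8)           # chunk the batch for the pool
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(score_chunk, chunks))
pred = np.concatenate(parts)
print("classified", len(pred), "instances;", int(pred.sum()), "positive")
```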


China International Conference on Electricity Distribution | 2016

Bi-level programming based optimal reactive power allocation for hierarchical voltage/var control in partitioned distribution network

Yanlin Guo; Junyong Liu; Lixiong Xu; Yuanxi Li; Yang Liu; Xiaodong Shen; Zhenghua Jiang

This paper proposes a novel reactive power allocation optimization method based on a bi-level programming (BP) model. Considering hierarchical voltage/var control in a partitioned distribution network and the dynamic coupling relationships between periods, the BP model is applied to study the optimized expansion of the existing allocation. In the proposal, the lower-level model aims to maximize the local reactive power balance, while the upper-level model minimizes the voltage deviation from its expected value; the two levels are interlinked and coordinated through the nodal voltages. In addition, two contribution factor indexes, derived from the voltage-var sensitivity and the branch flow-var sensitivity, are employed to quantify a node's ability to improve voltage quality and optimize branch flow. The nodes scoring best on these indexes are chosen as compensation nodes and used in the bi-level programming model. Finally, results on the American PG&E 69-bus distribution network validate the rationality and effectiveness of the proposed method.
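
The coordination loop between the two levels can be caricatured as an alternating iteration: the lower level moves each node's var injection toward its local reactive demand, the upper level evaluates the resulting nodal voltages through a linear voltage-var sensitivity matrix and feeds the deviation back as a correction. The 3-bus sensitivities, demands, limits, and step sizes below are all invented for illustration; this is a toy fixed-point loop, not the paper's optimization.

```python
# Toy alternating iteration mimicking the bi-level coordination structure.
import numpy as np

S = np.array([[0.05, 0.02, 0.01],       # dV_i/dq_j sensitivities (p.u./MVar)
              [0.02, 0.06, 0.02],
              [0.01, 0.02, 0.07]])
V0 = np.array([0.96, 0.95, 0.93])       # voltages with no compensation
q_demand = np.array([0.4, 0.6, 0.9])    # local reactive demand (MVar)
q_max = np.array([1.0, 1.0, 1.0])       # compensator capacity limits
q = np.zeros(3)

for _ in range(50):
    # upper level: evaluate voltages and their deviation from 1.0 p.u.
    V = V0 + S @ q
    dev = 1.0 - V
    # lower level: step toward local var balance, corrected by the voltage
    # deviation signal mapped back through the sensitivities
    q = q + 0.5 * (q_demand - q) + 2.0 * (S.T @ dev)
    q = np.clip(q, 0.0, q_max)          # respect compensator limits

print("q (MVar):", q.round(3), " V (p.u.):", (V0 + S @ q).round(3))
```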

Collaboration


Dive into Lixiong Xu's collaborations.

Top Co-Authors

Maozhen Li

Brunel University London