Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yunpeng Ma is active.

Publication


Featured research published by Yunpeng Ma.


Knowledge-Based Systems | 2014

Tuning extreme learning machine by an improved artificial bee colony to model and optimize the boiler efficiency

Guoqiang Li; Peifeng Niu; Yunpeng Ma; Hongbin Wang; Weiping Zhang

In this paper, a novel optimization technique based on the artificial bee colony algorithm (ABC), called PS-ABCII, is presented. PS-ABCII differs from other ABC-based techniques in three major ways: (1) opposition-based learning is applied to the population initialization; (2) the greedy selection mechanism is not adopted; (3) the mode by which employed bees become scouts is modified. To illustrate the superiority of the proposed technique over other ABC-based techniques, ten classical benchmark functions are used for testing. In addition, a hybrid model called PS-ABCII-ELM, which combines PS-ABCII with the Extreme Learning Machine (ELM), is also proposed. In PS-ABCII-ELM, PS-ABCII tunes the input weights and biases of the ELM to improve its generalization performance. The hybrid model is then applied to model and optimize the thermal efficiency of a 300 MW coal-fired boiler. The experimental results show that the proposed model is convenient, direct, and accurate, and that it offers a general and suitable way to predict and improve the efficiency of a coal-fired boiler under various operating conditions.
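The opposition-based learning initialization mentioned in the abstract can be sketched as follows. This is a minimal generic illustration, not the paper's exact procedure; the function name, bounds, and population size are assumptions:

```python
import numpy as np

def opposition_init(pop_size, dim, lower, upper, objective, rng=None):
    """Opposition-based population initialization (generic sketch).

    Generates a random population plus its opposite population
    (x_opp = lower + upper - x) and keeps the best pop_size members.
    """
    rng = np.random.default_rng(rng)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    opp = lower + upper - pop                 # opposite candidate points
    combined = np.vstack([pop, opp])
    fitness = np.apply_along_axis(objective, 1, combined)
    best = np.argsort(fitness)[:pop_size]     # keep the fittest (minimization)
    return combined[best]

# Example: initialize 10 candidates for the 5-D sphere benchmark function
init = opposition_init(10, 5, -5.0, 5.0, lambda x: np.sum(x**2), rng=0)
```

The idea is that evaluating each random candidate together with its mirror image across the search range tends to start the swarm closer to the optimum than purely random sampling.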


Knowledge-Based Systems | 2017

Model turbine heat rate by fast learning network with tuning based on ameliorated krill herd algorithm

Peifeng Niu; Ke Chen; Yunpeng Ma; Xia Li; Aling Liu; Guoqiang Li

The krill herd (KH) algorithm is an innovative biologically inspired algorithm. To improve its solution quality and speed up its global convergence, an ameliorated krill herd algorithm (A-KH) is proposed and tested on classical benchmark functions, which is one of the major contributions of this paper. Compared with several other state-of-the-art optimization algorithms (biogeography-based optimization, particle swarm optimization, artificial bee colony, and the krill herd algorithm), A-KH shows better search performance. A further contribution is that A-KH is adopted to adjust the parameters of the fast learning network (FLN) so as to build a turbine heat rate model of a 600 MW supercritical steam turbine unit and obtain a high-precision prediction model. Experimental results show that, compared with several other turbine heat rate models, the FLN model tuned by A-KH has better regression precision and generalization capability.


Journal of Intelligent Manufacturing | 2015

Enhanced shuffled frog-leaping algorithm for solving numerical function optimization problems

Chao Liu; Peifeng Niu; Guoqiang Li; Yunpeng Ma; Weiping Zhang; Ke Chen

The shuffled frog-leaping algorithm (SFLA) is a relatively new meta-heuristic optimization algorithm that can be applied to a wide range of problems. After analyzing the weaknesses of the traditional SFLA, this paper presents an enhanced shuffled frog-leaping algorithm (MS-SFLA) for solving numerical function optimization problems. As the first extension, a new population initialization scheme based on chaotic opposition-based learning is employed to speed up global convergence. In addition, to efficiently maintain the balance between exploration and exploitation, an adaptive nonlinear inertia weight is introduced into the SFLA. Further, a perturbation operator based on Gaussian mutation is designed for local evolution, to help the best frog jump out of possible local optima and/or refine its accuracy. To illustrate the efficiency of the proposed method, 23 well-known numerical function optimization problems and 25 benchmark functions from CEC2005 are selected as test functions. The experimental results show that the enhanced SFLA has a faster convergence speed and better search ability than other relevant methods on almost all functions.
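The Gaussian-mutation perturbation described above can be sketched as follows. This is one common form of such an operator, assumed for illustration rather than taken from the paper; the scale `sigma` and function name are assumptions:

```python
import numpy as np

def gaussian_perturb(best, lower, upper, sigma=0.1, rng=None):
    """Gaussian-mutation perturbation of the best solution (sketch).

    Adds zero-mean Gaussian noise scaled to the search range, then
    clips back into bounds -- one common way to help the best frog
    escape a local optimum or refine its accuracy.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma * (upper - lower), size=best.shape)
    return np.clip(best + noise, lower, upper)

# Example: perturb the current best frog in a 5-D search space
best = np.zeros(5)
candidate = gaussian_perturb(best, -5.0, 5.0, sigma=0.1, rng=1)
```

In practice the perturbed candidate would replace the best frog only if it improves the objective value, so the operator never loses the incumbent solution.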


Neural Processing Letters | 2017

A Hybrid Heat Rate Forecasting Model Using Optimized LSSVM Based on Improved GSA

Chao Liu; Peifeng Niu; Guoqiang Li; Xia You; Yunpeng Ma; Weiping Zhang

The heat rate is considered one of the most important thermal economic indicators, determining the economic, efficient, and safe operation of a steam turbine unit; accurate heat rate forecasting is therefore a core task in the optimal operation of a steam turbine unit. Recently, the least squares support vector machine (LSSVM) has proved to be an effective machine learning technique for solving nonlinear regression problems with small sample sets. However, it has also been shown that the prediction precision of the LSSVM is highly dependent on its parameters, which are hard to choose well. In this paper, an improved gravitational search algorithm (AC-GSA) is presented to further enhance the optimization performance of GSA, and it is employed to pre-select the LSSVM parameters. A novel soft computing method based on the LSSVM and AC-GSA is then proposed to forecast the heat rate of a 600 MW supercritical steam turbine unit. It combines the high accuracy of the LSSVM with the fast convergence of GSA to build a well-generalized heat rate prediction model. Results indicate that the developed AC-GSA-LSSVM model demonstrates better regression precision and generalization capability.
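For context, standard LSSVM regression reduces training to one linear system, which is why the only hard part is choosing the two hyper-parameters (here the regularization `gamma` and RBF width `sigma`) that a search algorithm such as the paper's AC-GSA would tune. A minimal sketch of the standard formulation, with assumed function names and default values:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Train an LSSVM regressor with an RBF kernel (minimal sketch).

    Solves the standard LSSVM linear system
        [ 0   1^T         ] [  b  ]   [ 0 ]
        [ 1   K + I/gamma ] [alpha] = [ y ]
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma         # ridge term from gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0], X, sigma          # alpha, bias, train data, width

def lssvm_predict(model, Xnew):
    alpha, b, Xtr, sigma = model
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

# Example: fit a smooth 1-D function from a small sample set
X = np.linspace(0, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel()
pred = lssvm_predict(lssvm_fit(X, y), X)
```

A hyper-parameter search would wrap `lssvm_fit` in a loop, scoring each `(gamma, sigma)` candidate on validation error.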


Neural Processing Letters | 2017

Fast Learning Network with Parallel Layer Perceptrons

Guoqiang Li; Xiaobin Qi; Bin Chen; Yunpeng Ma; Peifeng Niu; Zhiwang Chen

This paper proposes a novel artificial neural network called the Parallel Layer Perceptron Fast Learning Network (PLP-FLN). In the PLP-FLN, a parallel single-hidden-layer feed-forward neural network is added to the Fast Learning Network (FLN), which is itself an improved Extreme Learning Machine (ELM). Input weights and hidden-layer biases are randomly generated; the weights connecting the output nodes to the input nodes and those connecting the output nodes to the hidden nodes are determined analytically by least squares. To validate the PLP-FLN, this paper compares it with the ELM, FLN, Kernel ELM, and Incremental ELM on 12 regression applications and 7 classification problems. The experimental results show that the PLP-FLN, with a much more compact network, achieves better approximation, classification performance, and generalization ability.
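The mechanism shared by the ELM-family models above — random hidden-layer weights plus an analytic least-squares solve for the output weights — can be sketched in a few lines. This shows the basic ELM step only, not the FLN's parallel layer; function names and sizes are assumptions:

```python
import numpy as np

def elm_train(X, y, hidden=50, rng=None):
    """Basic ELM training step (sketch of the mechanism described above).

    Hidden-layer input weights W and biases b are random and never
    trained; only the output weights beta are computed, via the
    Moore-Penrose pseudo-inverse of the hidden activation matrix H.
    """
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1, 1, size=(X.shape[1], hidden))
    b = rng.uniform(-1, 1, size=hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y      # analytic least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Example: regress a smooth 1-D target
X = np.linspace(0, 3, 60).reshape(-1, 1)
y = np.sin(X).ravel()
pred = elm_predict(elm_train(X, y, rng=0), X)
```

Because no gradient descent is involved, training is a single matrix factorization, which is what makes these networks fast and what optimizers like PS-ABCII can exploit by tuning only W and b.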


Soft Computing | 2018

Model NOx emission and thermal efficiency of CFBB based on an ameliorated extreme learning machine

Peifeng Niu; Yunpeng Ma; Guoqiang Li

The extreme learning machine (ELM) is a novel single-hidden-layer feed-forward network that has become a research hotspot in various domains. In-depth analysis of the ELM shows that four factors mainly affect its model performance: the input data, the input weights, the number of hidden-layer nodes, and the hidden-layer activation function. To enhance the performance of the ELM, an ameliorated extreme learning machine, AELM, is proposed based on these four factors. The proposed method uses a new way to generate the input weights and hidden-layer biases and a new type of hidden-layer activation function. Simulations on many UCI benchmark regression problems demonstrate that the AELM generally outperforms the original ELM as well as several of its variants. The AELM is also adopted to build thermal efficiency and NOx emission models of a 330 MW circulating fluidized bed boiler. The results demonstrate that the AELM is a useful machine learning tool.


Neural Networks | 2017

Global Mittag-Leffler stability analysis of fractional-order impulsive neural networks with one-side Lipschitz condition

Xinxin Zhang; Peifeng Niu; Yunpeng Ma; Yanqiao Wei; Guoqiang Li

This paper is concerned with the stability analysis of fractional-order impulsive neural networks. The existence of solutions is analyzed under either the one-side Lipschitz condition or the linear growth condition on the activation function. In addition, the existence, uniqueness, and global Mittag-Leffler stability of the equilibrium point of fractional-order impulsive neural networks under the one-side Lipschitz condition are investigated by means of the contraction mapping principle and the Lyapunov direct method. Finally, an example with numerical simulation is given to illustrate the validity and feasibility of the proposed results.
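For context, the stability notion used above is the standard one for fractional-order systems (a sketch of the usual definition, not the paper's exact statement). The Mittag-Leffler function generalizes the exponential:

$$E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(k\alpha + 1)}, \qquad \alpha > 0,$$

and an equilibrium $x^*$ is globally Mittag-Leffler stable if every trajectory satisfies a bound of the form

$$\|x(t) - x^*\| \le \bigl[\, m(x_0)\, E_\alpha\!\bigl(-\lambda (t - t_0)^\alpha\bigr) \bigr]^{b},$$

with $\lambda > 0$, $b > 0$, and $m(\cdot) \ge 0$ locally Lipschitz with $m(0) = 0$. For $\alpha = 1$ this reduces to ordinary exponential stability.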


Neural Processing Letters | 2016

A Kind of Parameters Self-adjusting Extreme Learning Machine

Peifeng Niu; Yunpeng Ma; Mengning Li; Shanshan Yan; Guoqiang Li


Knowledge-Based Systems | 2017

Research and application of quantum-inspired double parallel feed-forward neural network

Yunpeng Ma; Peifeng Niu; Xinxin Zhang; Guoqiang Li


International Journal of Machine Learning and Cybernetics | 2018

A modified teaching–learning-based optimization algorithm for numerical function optimization

Peifeng Niu; Yunpeng Ma; Shanshan Yan

Collaboration


Dive into Yunpeng Ma's collaborations.
