Yulin He
Shenzhen University
Publication
Featured research published by Yulin He.
Information Sciences | 2017
Rana Aamir Raza Ashfaq; Xizhao Wang; Joshua Zhexue Huang; Haider Abbas; Yulin He
Countering cyber threats, especially attack detection, is a challenging area of research in the field of information assurance. Intruders use polymorphic mechanisms to masquerade the attack payload and evade detection techniques. Many supervised and unsupervised learning approaches from the fields of machine learning and pattern recognition have been used to increase the efficacy of intrusion detection systems (IDSs). Supervised learning approaches use only labeled samples to train a classifier, but obtaining sufficient labeled samples is cumbersome and requires the effort of domain experts. In contrast, unlabeled samples can easily be obtained in many real-world problems. Semi-supervised learning (SSL) addresses this issue by considering a large number of unlabeled samples together with the labeled samples to build a better classifier. This paper proposes a novel fuzziness-based semi-supervised learning approach that utilizes unlabeled samples, assisted by a supervised learning algorithm, to improve the classifier's performance for IDSs. A single-hidden-layer feed-forward neural network (SLFN) is trained to output a fuzzy membership vector, and the unlabeled samples are divided into low-, mid-, and high-fuzziness categories using this fuzzy quantity. The classifier is retrained after incorporating each category separately into the original training set. Experimental results on the NSL-KDD intrusion detection dataset show that unlabeled samples belonging to the low- and high-fuzziness groups make major contributions to improving the classifier's performance compared with existing classifiers such as naive Bayes, support vector machines, and random forests.
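The fuzziness-driven categorization described in this abstract can be sketched in a few lines. This is an illustrative Python sketch, assuming the standard entropy-style fuzziness of a membership vector; the quantile-based low/mid/high split is an assumption, not necessarily the thresholds used in the paper.

```python
import numpy as np

def fuzziness(mu, eps=1e-12):
    """Entropy-style fuzziness of a membership vector mu with entries
    in [0, 1]: zero for crisp (0/1) vectors, maximal when every entry
    is 0.5."""
    mu = np.clip(np.asarray(mu, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)))

def categorize_by_fuzziness(fuzz, low_q=1/3, high_q=2/3):
    """Split sample indices into low / mid / high fuzziness groups by
    quantile thresholds (the grouping rule here is an assumption)."""
    f = np.asarray(fuzz, dtype=float)
    lo, hi = np.quantile(f, [low_q, high_q])
    return {"low": np.where(f <= lo)[0],
            "mid": np.where((f > lo) & (f < hi))[0],
            "high": np.where(f >= hi)[0]}
```

In the paper's scheme, the fuzziness values would come from the SLFN's membership outputs on the unlabeled samples, and each category would then be merged into the training set separately before retraining.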
Information Sciences | 2016
Yulin He; Xizhao Wang; Joshua Zhexue Huang
Modeling a fuzzy-in fuzzy-out system, where both inputs and outputs are uncertain, is of practical and theoretical importance. Fuzzy nonlinear regression (FNR) is one of the most widely used approaches to model such systems. In this study, we propose the use of a Random Weight Network (RWN) to develop an FNR model called FNRRWN, in which both the inputs and outputs are triangular fuzzy numbers. Unlike existing FNR models based on back-propagation (BP) and radial basis function (RBF) networks, FNRRWN does not require iterative adjustment of the network weights and biases. Instead, the input-layer weights and hidden-layer biases of FNRRWN are selected randomly. The output-layer weights are calculated analytically from a derived updating rule that minimizes the integrated squared error between the α-cut sets of the predicted and target fuzzy outputs. In FNRRWN, the integrated squared error is approximated using Riemann integral theory. The experimental results show that the proposed FNRRWN method can effectively approximate a fuzzy-in fuzzy-out system, obtaining better prediction accuracy in less computational time than existing FNR models based on BP and RBF networks.
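The non-iterative training that distinguishes an RWN from BP/RBF networks can be illustrated on crisp data. This sketch shows only the random-weight core (random input weights and biases, closed-form least-squares output weights); the α-cut-based fuzzy objective of FNRRWN is omitted, and the weight ranges and activation are illustrative assumptions.

```python
import numpy as np

def train_rwn(X, Y, n_hidden=30, seed=0):
    """Random Weight Network skeleton: input-layer weights and hidden
    biases are drawn at random and never updated; output-layer weights
    are obtained analytically by least squares (no iteration)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ Y             # closed-form output weights
    return W, b, beta

def predict_rwn(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only `beta` is fitted, and in closed form, training cost is a single pseudoinverse rather than an iterative weight search, which is the source of the speed advantage the abstract reports.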
Information Sciences | 2016
Yi-Chao He; Xizhao Wang; Yulin He; Shu-Liang Zhao; Wen-Bin Li
The Discounted {0-1} Knapsack Problem (D{0-1}KP) is an extension of the classical 0-1 knapsack problem (0-1 KP) in which items come in groups of three and at most one item per group may be selected. The D{0-1}KP is more challenging than the 0-1 KP because the four choices per group (select nothing, or one of the three items) diversify the selection of items. In this paper, we systematically studied exact and approximate algorithms for solving the D{0-1}KP. First, a new exact algorithm based on dynamic programming and its corresponding fully polynomial-time approximation scheme were designed. Second, a 2-approximation algorithm for the D{0-1}KP was developed. Third, a greedy repair algorithm for handling infeasible solutions of the D{0-1}KP was proposed, and we further studied how to use binary particle swarm optimization together with the greedy repair algorithm to solve the D{0-1}KP. Finally, we used four different kinds of instances to compare the approximation ratio and solving time of the exact and approximate algorithms. The experimental results and theoretical analysis showed that the approximate algorithms worked well on D{0-1}KP instances with large value, weight, and size coefficients, while the exact algorithm was good at solving instances with small value, weight, and size coefficients.
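The dynamic-programming idea behind the exact algorithm can be sketched directly from the problem statement: process one group at a time, and for each residual capacity keep the best value over the four per-group choices. This is an illustrative sketch of the DP recurrence, not the paper's implementation; the group encoding is an assumption.

```python
def d01kp_dp(groups, capacity):
    """Exact dynamic program for the D{0-1}KP. Each group is a list of
    three (value, weight) items, the third typically a discounted
    bundle of the first two; per group the four choices are: take
    nothing, or exactly one of the three items. O(n * capacity)."""
    dp = [0] * (capacity + 1)          # dp[c] = best value within capacity c
    for items in groups:
        new = dp[:]                    # choice 0: take nothing from this group
        for value, weight in items:
            for c in range(capacity, weight - 1, -1):
                cand = dp[c - weight] + value   # take this item, built on
                if cand > new[c]:               # the state BEFORE this group,
                    new[c] = cand               # so at most one item per group
                dp = dp
        dp = new
    return dp[capacity]
```

Basing every candidate on the pre-group table `dp` (rather than `new`) is what enforces the "at most one item per group" constraint.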
International Journal of Machine Learning and Cybernetics | 2017
Rana Aamir Raza Ashfaq; Yulin He; Degang Chen
Building a high-quality classifier is one of the key problems in machine learning (ML) and pattern recognition. Many ML algorithms suffer from high computational cost on large-scale data sets. This paper proposes a fuzziness-based instance selection technique for large data sets that increases the efficiency of supervised learning algorithms and addresses shortcomings in designing an effective intrusion detection system (IDS). The proposed methodology relies on a new kind of single-hidden-layer feed-forward neural network (SLFN), called a random weight neural network (RWNN). In the first stage, a membership vector for every training instance is obtained using the RWNN, from which the fuzziness is computed. Second, the training instances (along with their fuzziness values) are grouped separately according to their actual class labels. The instances with low fuzziness values in each group are then extracted to build a reduced data set. The instances output by the proposed method serve as input for ML classifiers, reducing learning time while increasing learning capability. The reduced data set can easily learn the boundaries between class labels. The most notable finding of this study is a considerable increase in accuracy on unseen examples compared with another instance selection method, IB2. The proposed method provides better generalization and fast learning capability; its reasonability is explained theoretically, and experiments on well-known intrusion detection data sets support its usefulness.
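The per-class selection step described above (group by label, keep the low-fuzziness instances) can be sketched as follows. This is an illustrative Python sketch; `keep_ratio` is an assumed parameter, not one from the paper, and `fuzz` stands for the per-instance fuzziness computed from the RWNN's membership outputs.

```python
import numpy as np

def select_low_fuzziness(y, fuzz, keep_ratio=0.5):
    """Fuzziness-based instance selection sketch: within each class,
    keep the fraction of instances with the lowest fuzziness values.
    Returns sorted indices of the reduced data set."""
    y = np.asarray(y)
    fuzz = np.asarray(fuzz, dtype=float)
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]            # instances of this class
        k = max(1, int(round(keep_ratio * len(idx))))
        keep.extend(idx[np.argsort(fuzz[idx])[:k]].tolist())
    return sorted(keep)
```

The returned indices define the reduced training set handed to the downstream classifier, which is where the reported speedup comes from.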
Applied Soft Computing | 2017
Yulin He; Chenghao Wei; Hao Long; Rana Aamir Raza Ashfaq; Joshua Zhexue Huang
This paper proposes a random weight network (RWN)-based fuzzy nonlinear regression (FNR) model, abbreviated as TraFNR_RWN, to solve the FNR problem in which both inputs and outputs are trapezoidal fuzzy numbers. TraFNR_RWN is a special single-hidden-layer feed-forward neural network that does not require any iterative process to train the network weights. Its input-layer weights are randomly assigned, and its output-layer weights are determined analytically by solving a constrained optimization problem. In addition, a new strategy is used to construct the fuzzy membership function for the predicted fuzzy output based on the derived output-layer weights of TraFNR_RWN. A fuzzification method is developed to fuzzify the crisp numbers of data sets into trapezoidal fuzzy numbers. Twelve fuzzified data sets were used in the experiments to compare the performance of TraFNR_RWN with five different FNR models. The experimental results show that TraFNR_RWN obtained better prediction performance with less training time, because it does not require time-consuming weight learning and parameter tuning.
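The trapezoidal representation used throughout this abstract can be made concrete with a small sketch. The symmetric-spread fuzzification rule below is an illustrative assumption, not the fuzzification method developed in the paper; a trapezoidal fuzzy number is written as (a, b, c, d) with full membership on [b, c].

```python
def fuzzify_trapezoid(x, spread=0.1):
    """Turn a crisp value x into a symmetric trapezoidal fuzzy number
    (a, b, c, d) around x (assumed illustrative rule)."""
    return (x - 2 * spread, x - spread, x + spread, x + 2 * spread)

def trapezoid_membership(t, a, b, c, d):
    """Piecewise-linear membership degree of t in (a, b, c, d)."""
    if t < a or t > d:
        return 0.0                 # outside the support
    if b <= t <= c:
        return 1.0                 # on the plateau
    if t < b:
        return (t - a) / (b - a)   # rising edge
    return (d - t) / (d - c)       # falling edge
```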
Neural Computing and Applications | 2017
Hai-tao Liu; Jing Wang; Yulin He; Rana Aamir Raza Ashfaq
It is practically and theoretically significant to approximate and simulate a system with fuzzy inputs and fuzzy outputs. This paper proposes an extreme learning machine (ELM)-based fuzzy regression model (FR_ELM) in which both inputs and outputs are triangular fuzzy numbers.
International Conference on Cloud Computing | 2018
Chenghao Wei; Salman Salloum; Tamer Z. Emara; Xiaoliang Zhang; Joshua Zhexue Huang; Yulin He
Neural Computing and Applications | 2018
Li-fen Yang; Chong Liu; Hao Long; Rana Aamir Raza Ashfaq; Yulin He
Applied Soft Computing | 2018
Yulin He; Xiaoliang Zhang; Wei Ao; Joshua Zhexue Huang
Pacific-Asia Conference on Knowledge Discovery and Data Mining | 2017
Hao Long; Yulin He; Joshua Zhexue Huang; Qiang Wang