Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Qun Dai is active.

Publication


Featured research published by Qun Dai.


Applied Soft Computing | 2015

A new reverse reduce-error ensemble pruning algorithm

Qun Dai; Ting Zhang; Ningzhong Liu

Although greedy algorithms possess high efficiency, they often produce suboptimal solutions to the ensemble pruning problem, since their exploration areas are largely limited. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply abandon all of the classifiers that fail in the competition of ensemble selection, wasting a considerable amount of useful resources and information. Inspired by these observations, a greedy Reverse Reduce-Error (RRE) pruning algorithm incorporating the operation of subtraction is proposed in this work. The RRE algorithm makes the best of the defeated candidate networks: the Worst Single Model (WSM) is chosen, and then its votes are subtracted from the votes made by the selected components within the pruned ensemble. The reason is that, in most cases, the WSM makes mistakes in its estimates for the test samples. Unlike the classical RE, the near-optimal solution is produced based on the pruned error of all the available sequential subensembles. Besides, the backfitting step of the RE algorithm is replaced with the selection step of a WSM in RRE. Moreover, the problem of ties can be solved more naturally with RRE. Finally, a soft voting approach is employed when testing the RRE algorithm.
The performances of the RE and RRE algorithms, together with two baseline methods, i.e., the method that selects the Best Single Model (BSM) in the initial ensemble and the method that retains all member networks of the initial ensemble (ALL), are evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms.
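
The subtraction idea can be illustrated with a rough sketch (not the authors' implementation): maintain a pooled vote table for the ensemble, repeatedly identify the worst single model, and subtract its votes, keeping the best-scoring intermediate subensemble. The function name and the toy data are invented for the example.

```python
import numpy as np

def reverse_reduce_error_prune(preds, y):
    """Backward pruning sketch: repeatedly drop the worst single model
    by subtracting its votes from the pooled ensemble vote table.

    preds: (n_models, n_samples) int array of class predictions
    y:     (n_samples,) true labels
    Returns (index list of the best subensemble found, its accuracy).
    """
    n_models, n_samples = preds.shape
    n_classes = preds.max() + 1
    # vote table: votes[c, i] = number of members predicting class c for sample i
    votes = np.zeros((n_classes, n_samples), dtype=int)
    for p in preds:
        votes[p, np.arange(n_samples)] += 1

    active = list(range(n_models))
    acc_single = [(preds[m] == y).mean() for m in range(n_models)]

    def ensemble_acc():
        return (votes.argmax(axis=0) == y).mean()

    best_set, best_acc = list(active), ensemble_acc()
    while len(active) > 1:
        worst = min(active, key=lambda m: acc_single[m])   # Worst Single Model
        votes[preds[worst], np.arange(n_samples)] -= 1     # subtract its votes
        active.remove(worst)
        acc = ensemble_acc()
        if acc >= best_acc:
            best_acc, best_set = acc, list(active)
    return best_set, best_acc
```

Because votes are subtracted from a running table rather than recomputed, each removal costs only one pass over the samples.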


Applied Soft Computing | 2013

ModEnPBT: A Modified Backtracking Ensemble Pruning algorithm

Qun Dai; Zhuan Liu

This paper proposes a new Modified Backtracking Ensemble Pruning algorithm (ModEnPBT), which builds on the design idea of our previously proposed Ensemble Pruning via Backtracking algorithm (EnPBT) while aiming to overcome its drawback of a redundantly defined solution space. The solution space of ModEnPBT is compact, with no repeated solution vectors, so it possesses relatively higher search efficiency than the EnPBT algorithm. ModEnPBT still belongs to the category of backtracking algorithms, which systematically search for the solutions of a problem in a depth-first manner and are suitable for large-scale combinatorial optimization problems. Experimental results on three benchmark classification tasks demonstrate the validity and effectiveness of the proposed ModEnPBT.
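
The compact-solution-space idea can be sketched as follows (an illustration, not the paper's algorithm): a depth-first backtracking search that only ever extends a partial subensemble with strictly larger model indices, so every subset is enumerated exactly once and no solution vector repeats. All names and the pruning criterion (plain majority-vote accuracy) are invented for the example.

```python
import numpy as np

def backtrack_prune(preds, y, max_size):
    """Depth-first backtracking sketch over subensembles.

    Partial solutions are only extended with strictly larger model
    indices, so each subset is visited exactly once: a compact
    solution space with no repeated solution vectors.
    """
    n_models, n_samples = preds.shape
    best = {"subset": [], "acc": -1.0}

    def acc_of(subset):
        votes = np.zeros((preds.max() + 1, n_samples), dtype=int)
        for m in subset:
            votes[preds[m], np.arange(n_samples)] += 1
        return (votes.argmax(axis=0) == y).mean()

    def extend(subset, start):
        if subset:
            a = acc_of(subset)
            if a > best["acc"]:
                best["acc"], best["subset"] = a, list(subset)
        if len(subset) == max_size:
            return
        for m in range(start, n_models):   # depth-first extension
            subset.append(m)
            extend(subset, m + 1)
            subset.pop()

    extend([], 0)
    return best["subset"], best["acc"]
```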


Applied Intelligence | 2015

Extreme learning machines' ensemble selection with GRASP

Ting Zhang; Qun Dai; Zhongchen Ma

Credit scoring, also called credit risk assessment, has attracted the attention of many financial institutions, and much research has been carried out. In this work, a new Extreme Learning Machines' (ELMs) Ensemble Selection algorithm based on the Greedy Randomized Adaptive Search Procedure (GRASP), referred to as ELMsGraspEnS, is proposed for credit risk assessment of enterprises. On the one hand, the ELM is used as the base learner for ELMsGraspEnS owing to its significant advantages, including an extremely fast learning speed, good generalization performance, and avoidance of issues like local minima and overfitting. On the other hand, to ameliorate the local optima problem faced by classical greedy ensemble selection methods, we incorporated GRASP, a meta-heuristic multi-start algorithm for combinatorial optimization problems, into the solution of ensemble selection and proposed an ensemble selection algorithm based on GRASP (GraspEnS) in our previous work. The GraspEnS algorithm has three advantages. (1) By incorporating a random factor, a solution is often able to escape local optima. (2) GraspEnS realizes a multi-start search to some degree. (3) A better-performing subensemble can usually be found with GraspEnS. Moreover, little research on applying ensemble selection approaches to credit scoring has been reported in the literature. In this paper, we integrate the ELM with GraspEnS, yielding ELMsGraspEnS, which naturally inherits and effectively combines the advantages of both the ELM and GraspEnS. The experimental results of applying ELMsGraspEnS to three benchmark real-world credit datasets show that in most cases ELMsGraspEnS significantly improves the performance of credit risk assessment compared with several state-of-the-art algorithms. Thus, it can be concluded that ELMsGraspEnS simultaneously exhibits relatively high efficiency and effectiveness.
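
The GRASP loop described above (randomized greedy construction plus local search, repeated from multiple starts) can be sketched minimally. This is an illustration under invented names and toy data, with a generic prediction matrix standing in for the trained ELMs:

```python
import numpy as np

def grasp_select(preds, y, size, n_starts=10, rcl_k=2, seed=0):
    """GRASP sketch for ensemble selection.

    Each start greedily grows a subensemble, choosing at random among
    the rcl_k best candidates (the restricted candidate list), then
    applies a 1-swap local search; the best subensemble found across
    all starts is kept.
    """
    rng = np.random.default_rng(seed)
    n_models, n_samples = preds.shape
    n_classes = preds.max() + 1

    def acc_of(subset):
        votes = np.zeros((n_classes, n_samples), dtype=int)
        for m in subset:
            votes[preds[m], np.arange(n_samples)] += 1
        return (votes.argmax(axis=0) == y).mean()

    best_subset, best_acc = None, -1.0
    for _ in range(n_starts):
        subset = []
        while len(subset) < size:                     # randomized construction
            cands = [m for m in range(n_models) if m not in subset]
            cands.sort(key=lambda m: -acc_of(subset + [m]))
            subset.append(cands[rng.integers(min(rcl_k, len(cands)))])
        improved = True
        while improved:                               # 1-swap local search
            improved = False
            for i in range(len(subset)):
                for new in range(n_models):
                    if new in subset:
                        continue
                    trial = subset[:i] + [new] + subset[i + 1:]
                    if acc_of(trial) > acc_of(subset):
                        subset, improved = trial, True
                        break
                if improved:
                    break
        a = acc_of(subset)
        if a > best_acc:
            best_acc, best_subset = a, subset
    return best_subset, best_acc
```

The random pick from the restricted candidate list is what lets individual starts escape the deterministic greedy path.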


Applied Soft Computing | 2017

Considering diversity and accuracy simultaneously for ensemble pruning

Qun Dai; Rui Ye; Zhuan Liu

Diversity among individual classifiers is widely recognized as a key factor in successful ensemble selection, while the ultimate goal of ensemble pruning is to improve predictive accuracy. Diversity and accuracy are two important properties of an ensemble. Existing ensemble pruning methods usually consider diversity and accuracy separately; however, the two closely interrelate with each other and should be considered simultaneously. Accordingly, three new measures, i.e., Simultaneous Diversity & Accuracy, Diversity-Focused-Two, and Accuracy-Reinforcement, are developed for pruning the ensemble with a greedy algorithm. The motivation for Simultaneous Diversity & Accuracy is to consider the difference between the subensemble and the candidate classifier and, simultaneously, the accuracy of both. With Simultaneous Diversity & Accuracy, difficult samples are not given up, further improving the generalization performance of the ensemble. The inspiration for Diversity-Focused-Two stems from the recognition that ensemble diversity attaches more importance to the differences among the classifiers in an ensemble. Finally, Accuracy-Reinforcement reinforces the concern for ensemble accuracy. Extensive experiments verified the effectiveness and efficiency of the three proposed pruning measures. Through this investigation, it is found that by considering diversity and accuracy simultaneously for ensemble pruning, a well-performing selective ensemble with superior generalization capability can be acquired, which is the scientific value of this paper.
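
The precise definitions of the three measures are in the paper; as a hedged illustration of the general idea, a greedy forward selection can score each candidate by a weighted blend of its own accuracy and its disagreement with the current subensemble. The blend below (and every name in it) is invented for the sketch, not one of the paper's measures:

```python
import numpy as np

def greedy_div_acc(preds, y, size, alpha=0.5):
    """Greedy forward selection sketch scoring candidates by a blend of
    accuracy and diversity (disagreement with the current subensemble).

    alpha weights accuracy against diversity; the paper's actual
    measures are defined differently.
    """
    n_models, n_samples = preds.shape
    n_classes = preds.max() + 1

    def ens_pred(members):
        votes = np.zeros((n_classes, n_samples), dtype=int)
        for m in members:
            votes[preds[m], np.arange(n_samples)] += 1
        return votes.argmax(axis=0)

    # seed with the single most accurate model
    subset = [int(np.argmax([(p == y).mean() for p in preds]))]
    while len(subset) < size:
        current = ens_pred(subset)

        def score(m):
            acc = (preds[m] == y).mean()
            div = (preds[m] != current).mean()   # disagreement with subensemble
            return alpha * acc + (1 - alpha) * div

        cands = [m for m in range(n_models) if m not in subset]
        subset.append(max(cands, key=score))
    return subset
```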


Applied Intelligence | 2016

An efficient ordering-based ensemble pruning algorithm via dynamic programming

Qun Dai; Xiaomeng Han

Although ordering-based pruning algorithms possess relatively high efficiency, there remains room for further improvement. To this end, this paper describes the combination of a dynamic programming technique with the ensemble-pruning problem. We incorporate dynamic programming into the classical ordering-based ensemble-pruning algorithm with the complementariness measure (ComEP) and, with the help of two auxiliary tables, propose a reasonably efficient dynamic form, which we refer to as ComDPEP. To examine the performance of the proposed algorithm, we conduct a series of simulations on four benchmark classification datasets. The experimental results demonstrate the significantly higher efficiency of ComDPEP over the classic ComEP algorithm. The proposed ComDPEP algorithm also outperforms two other state-of-the-art ordering-based ensemble-pruning algorithms, which use uncertainty weighted accuracy and reduce-error pruning, respectively, as their measures. It is noteworthy that the effectiveness of ComDPEP is exactly the same as that of the classical ComEP algorithm.
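
As I read it, the efficiency device is caching: auxiliary tables let each candidate be evaluated incrementally rather than rebuilding the subensemble's statistics from scratch at every step. A hedged sketch of that flavour, with a single running vote table and plain ensemble accuracy standing in for the complementariness measure:

```python
import numpy as np

def ordered_prune_cached(preds, y, size):
    """Ordering-based pruning sketch with an auxiliary vote table.

    The running `votes` table is updated once per accepted model, so
    evaluating a candidate costs a tentative O(n_samples) update and
    roll-back instead of recomputing all subensemble votes (the
    dynamic-programming flavour; the actual ComDPEP tables and the
    complementariness measure are simplified away here).
    """
    n_models, n_samples = preds.shape
    n_classes = preds.max() + 1
    votes = np.zeros((n_classes, n_samples), dtype=int)   # auxiliary table
    cols = np.arange(n_samples)
    order = []
    for _ in range(size):
        best_m, best_acc = None, -1.0
        for m in range(n_models):
            if m in order:
                continue
            votes[preds[m], cols] += 1                    # tentative add
            acc = (votes.argmax(axis=0) == y).mean()
            votes[preds[m], cols] -= 1                    # roll back
            if acc > best_acc:
                best_m, best_acc = m, acc
        order.append(best_m)
        votes[preds[best_m], cols] += 1                   # commit the update
    return order
```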


Applied Soft Computing | 2014

A two-phased and Ensemble scheme integrated Backpropagation algorithm

Qun Dai; Zhongchen Ma; QiongYu Xie

This paper presents a novel two-phased and Ensemble scheme integrated Backpropagation (TP-ES-BP) algorithm, which greatly alleviates the local minima problem of the standard BP (SBP) algorithm and overcomes the limitations of individual component BPs in classification performance through the integration of an ensemble method. Empirical and t-test results from three groups of simulation experiments, including a face recognition task on the ORL face image database and four benchmark classification tasks on datasets drawn from the UCI repository of machine learning databases, show that the TP-ES-BP algorithm achieves significantly better recognition results and higher generalization performance than SBP and the state-of-the-art emotional backpropagation (EmBP) learning algorithm.


Applied Intelligence | 2016

Hybrid ensemble selection algorithm incorporating GRASP with path relinking

Ting Zhang; Qun Dai

The greedy randomized adaptive search procedure (GRASP) is an iterative two-phase multi-start metaheuristic for combinatorial optimization problems, while path relinking is an intensification procedure applied to the solutions generated by GRASP. In this paper, a hybrid ensemble selection algorithm incorporating GRASP with path relinking (PRelinkGraspEnS) is proposed for credit scoring. The base learner of the proposed method is an extreme learning machine (ELM). Bootstrap aggregation (bagging) is used to produce multiple diversified ELMs, while GRASP with path relinking is the approach used for ensemble selection. The advantages of the ELM are inherited by the new algorithm, including fast learning speed, good generalization performance, and easy implementation. The PRelinkGraspEnS algorithm is able to escape from local optima and realizes a multi-start search. By incorporating path relinking into GRASP and using the combination as its ensemble selection method, PRelinkGraspEnS becomes a procedure with memory and a high convergence speed. Three credit datasets are used to verify the efficiency of the proposed PRelinkGraspEnS algorithm. Experimental results demonstrate that PRelinkGraspEnS achieves significantly better generalization performance than the classical directed hill-climbing ensemble pruning algorithm, support vector machines, multi-layer perceptrons, and a baseline method, the best single model. The experimental results further illustrate that, by decreasing the average time needed to find a good-quality subensemble for the credit scoring problem, GRASP with path relinking outperforms pure GRASP (i.e., without path relinking).
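
Path relinking itself is simple to sketch once subensembles are encoded as 0/1 inclusion vectors over the model pool: walk from an initiating solution toward a guiding solution one bit-flip at a time and remember the best intermediate. The sketch below (invented names, toy data, majority-vote accuracy as the objective) illustrates only that walk, not the full PRelinkGraspEnS procedure:

```python
import numpy as np

def path_relink(preds, y, start, guide):
    """Path-relinking sketch between two subensembles.

    start/guide: 0/1 inclusion vectors over the model pool. Each step
    flips the differing position whose flip yields the best
    intermediate accuracy, moving toward the guiding solution; the
    best solution seen anywhere on the path is returned.
    """
    n_samples = preds.shape[1]
    n_classes = preds.max() + 1

    def acc_of(mask):
        if not mask.any():
            return -1.0
        votes = np.zeros((n_classes, n_samples), dtype=int)
        for m in np.flatnonzero(mask):
            votes[preds[m], np.arange(n_samples)] += 1
        return (votes.argmax(axis=0) == y).mean()

    cur = start.copy()
    best_mask, best_acc = cur.copy(), acc_of(cur)
    while (diff := np.flatnonzero(cur != guide)).size:
        def flipped(j):
            c = cur.copy()
            c[j] = guide[j]
            return c
        flip = max(diff, key=lambda j: acc_of(flipped(j)))  # greedy step
        cur[flip] = guide[flip]
        a = acc_of(cur)
        if a > best_acc:
            best_acc, best_mask = a, cur.copy()
    return best_mask, best_acc
```

Because every intermediate along the path is evaluated, path relinking can surface solutions that neither endpoint's local search would reach on its own.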


Applied Intelligence | 2018

Batch-normalized Mlpconv-wise supervised pre-training network in network

Xiaomeng Han; Qun Dai

Deep multi-layered neural networks have nonlinear levels that allow them to represent highly varying nonlinear functions compactly. In this paper, we propose a new deep architecture with enhanced model discrimination ability that we refer to as mlpconv-wise supervised pre-training network in network (MPNIN). The process of information abstraction is facilitated within the receptive fields for MPNIN. The proposed architecture uses the framework of the recently developed NIN structure, which slides a universal approximator, such as a multilayer perceptron with rectifier units, across an image to extract features. However, the random initialization of NIN can produce poor solutions to gradient-based optimization. We use mlpconv-wise supervised pre-training to remedy this defect because this pre-training technique may contribute to overcoming the difficulties of training deep networks by better initializing the weights in all the layers. Moreover, batch normalization is applied to reduce internal covariate shift by pre-conditioning the model. Empirical investigations are conducted on the Mixed National Institute of Standards and Technology (MNIST), the Canadian Institute for Advanced Research (CIFAR-10), CIFAR-100, the Street View House Numbers (SVHN), the US Postal (USPS), Columbia University Image Library (COIL20), COIL100 and Olivetti Research Ltd (ORL) datasets, and the results verify the effectiveness of the proposed MPNIN architecture.
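
The batch-normalization step mentioned above standardizes each feature over the mini-batch before a learned rescale and shift; a minimal numpy forward pass in training mode (function name invented for the sketch):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalization forward pass (training mode).

    x: (batch, features). Each feature is standardized with the batch
    mean and variance, then rescaled by the learned gamma and shifted
    by the learned beta; eps guards against division by zero.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta
```

At inference time, running averages of the batch statistics are used instead, which this sketch omits.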


Applied Intelligence | 2017

A hierarchical and parallel branch-and-bound ensemble selection algorithm

Qun Dai; ChangSheng Yao

This paper describes the development of an effective and efficient Hierarchical and Parallel Branch-and-Bound Ensemble Selection (H&PB&BEnS) algorithm. Using the proposed H&PB&BEnS, ensemble selection is accomplished in a divisional, parallel, and hierarchical way. H&PB&BEnS exploits the superior performance of the Branch-and-Bound (B&B) algorithm on small-scale combinatorial optimization problems, whilst also managing to avoid "the curse of dimensionality" that can result from the direct application of B&B to ensemble selection problems. The B&B algorithm is used to select each partitioned subensemble, which enhances the predictive accuracy of each pruned subsolution, while the working mechanism of H&PB&BEnS improves the diversity of the ensemble selection results. H&PB&BEnS realizes layer-wise refinement of the selected ensemble solutions, which enables the classification performance of the selected ensembles to be improved in a layer-by-layer manner. Empirical investigations are conducted using five benchmark classification datasets, and the results verify the effectiveness and efficiency of the proposed H&PB&BEnS algorithm.
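
To make the B&B building block concrete, here is a hedged sketch of branch-and-bound subset selection on a small model pool. To admit a simple optimistic bound, it maximizes a coverage objective (samples classified correctly by at least one selected member) rather than the paper's actual selection criterion; all names and data are invented:

```python
import numpy as np

def bnb_cover(correct, size):
    """Branch-and-bound sketch: choose `size` members of a small pool
    maximizing the number of validation samples that at least one
    selected member classifies correctly.

    correct: (n_models, n_samples) boolean matrix; correct[m, i] is
    True if model m classifies sample i correctly.
    """
    n_models, n_samples = correct.shape
    best = {"subset": None, "cover": -1}

    def dfs(start, subset, covered):
        if len(subset) == size:
            c = int(covered.sum())
            if c > best["cover"]:
                best["cover"], best["subset"] = c, list(subset)
            return
        # optimistic bound: everything any remaining candidate gets right
        reachable = covered | correct[start:].any(axis=0)
        if reachable.sum() <= best["cover"]:
            return                                  # prune this branch
        for m in range(start, n_models):
            subset.append(m)
            dfs(m + 1, subset, covered | correct[m])
            subset.pop()

    dfs(0, [], np.zeros(n_samples, dtype=bool))
    return best["subset"], best["cover"]
```

The bound only ever over-estimates what a branch can achieve, so pruning never discards the optimum; this is the property any B&B criterion must preserve.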


Applied Intelligence | 2018

A novel knowledge-leverage-based transfer learning algorithm

Meiling Li; Qun Dai

A major assumption in traditional machine learning is that the training and testing data must lie in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. In recent years, transfer learning has emerged as a new learning paradigm to cope with this considerable challenge. It focuses on exploiting previously learnt knowledge by leveraging information from an old source domain to help learning in a new target domain. In this work, we integrate the knowledge-leverage-based transfer learning mechanism with a Rank-based Reduce Error ensemble selection approach to fulfill the transfer learning task, yielding an algorithm called RankRE-TL. Ensemble selection is important for improving both the efficiency and the predictive accuracy of an ensemble system. It aims to select a proper subset of the whole ensemble, which usually outperforms the whole one. Accordingly, we modify the Reduce Error (RE) pruning technique and design a new Rank-based Reduce Error ensemble selection method (RankRE) to deal with the transfer learning task. The design idea of RankRE is to find the candidate classifier that is expected to improve the classification performance of the extended subensemble the most. In the RankRE-TL algorithm, the initial Support Vector Machine (SVM) ensemble is learnt based upon dynamic regrouping of the training dataset; simultaneously, a new method of constructing the validation set is designed for RankRE-TL, which differs from the method used in the conventional ensemble selection paradigm.
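
The ranking step described above (pick the candidate expected to improve the extended subensemble most) can be sketched independently of the SVM training and knowledge-leverage machinery. This is an illustrative forward selection on a validation set, with all names and data invented:

```python
import numpy as np

def rank_re_select(preds, y_val, size):
    """Forward-selection sketch in the RankRE spirit: at each step,
    rank the remaining candidates by the validation accuracy of the
    extended subensemble and add the top-ranked one.

    preds: (n_models, n_samples) predictions on the validation set;
    y_val: validation labels.
    """
    n_models, n_samples = preds.shape
    n_classes = preds.max() + 1
    subset = []

    def acc_with(members):
        votes = np.zeros((n_classes, n_samples), dtype=int)
        for m in members:
            votes[preds[m], np.arange(n_samples)] += 1
        return (votes.argmax(axis=0) == y_val).mean()

    for _ in range(size):
        cands = [m for m in range(n_models) if m not in subset]
        ranked = sorted(cands, key=lambda m: acc_with(subset + [m]),
                        reverse=True)
        subset.append(ranked[0])   # candidate improving the subensemble most
    return subset
```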

Collaboration


Dive into Qun Dai's collaborations.

Top Co-Authors

Ting Zhang (Nanjing University of Aeronautics and Astronautics)
Xiaomeng Han (Nanjing University of Aeronautics and Astronautics)
Zhongchen Ma (Nanjing University of Aeronautics and Astronautics)
Zhuan Liu (Nanjing University of Aeronautics and Astronautics)
ChangSheng Yao (Nanjing University of Aeronautics and Astronautics)
Meiling Li (Nanjing University of Aeronautics and Astronautics)
Ningzhong Liu (Nanjing University of Aeronautics and Astronautics)
QiongYu Xie (Nanjing University of Aeronautics and Astronautics)
Rui Ye (Nanjing University of Aeronautics and Astronautics)