Publication


Featured research published by Jingjing Cao.


Information Sciences | 2012

Achieving balance between proximity and diversity in multi-objective evolutionary algorithm

Ke Li; Sam Kwong; Jingjing Cao; Miqing Li; Jinhua Zheng; Ruimin Shen

An alternative framework that uses the hypervolume indicator to guide the search for elite solutions of a multi-objective problem is currently studied very actively in the evolutionary multi-objective optimization community, as a complement to the traditional Pareto-dominance-based approach. In this paper, we present a dynamic neighborhood multi-objective evolutionary algorithm based on the hypervolume indicator (DNMOEA/HI), which benefits from both the Pareto-dominance-based and the hypervolume-indicator-based frameworks. DNMOEA/HI employs the hypervolume indicator as a truncation operator to prune the overgrown population, while a well-designed density estimator (i.e., tree neighborhood density) is combined with the Pareto strength value to perform fitness assignment. Moreover, a novel algorithm is proposed to directly evaluate the hypervolume contribution of a single individual. The performance of DNMOEA/HI is verified on a comprehensive benchmark suite, in comparison with six other multi-objective evolutionary algorithms. Experimental results demonstrate the efficiency of the proposed algorithm: solutions obtained by DNMOEA/HI closely approach the Pareto-optimal front and, at the same time, are evenly distributed over it.
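The abstract mentions directly evaluating the hypervolume contribution of a single individual. In the two-objective minimization case this contribution has a simple closed form, sketched below. This is an illustrative simplification, not the paper's actual algorithm, and the function name is made up here.

```python
def hv_contribution_2d(points, ref):
    """Exclusive hypervolume contribution of each point in a
    mutually non-dominated 2-D set (minimization), with respect
    to a reference point ref = (r1, r2)."""
    pts = sorted(points)          # ascending in f1 -> descending in f2
    contrib = {}
    for i, (x, y) in enumerate(pts):
        # right bound: the next point's f1, or the reference point
        right = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        # upper bound: the previous point's f2, or the reference point
        upper = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (right - x) * (upper - y)
    return contrib
```

Each point's exclusive contribution is the rectangle it alone dominates, bounded by its sorted neighbors and the reference point.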


Pattern Recognition | 2012

A noise-detection based AdaBoost algorithm for mislabeled data

Jingjing Cao; Sam Kwong; Ran Wang

Noise sensitivity is a well-known weakness of the AdaBoost algorithm. Previous work has shown that AdaBoost is prone to overfitting on noisy data sets because it persistently assigns high weights to hard-to-learn instances (mislabeled instances or outliers). In this paper, a new boosting approach, named noise-detection-based AdaBoost (ND-AdaBoost), is proposed to combine classifiers by emphasizing misclassified noisy instances and correctly classified non-noisy instances during training. Specifically, the algorithm integrates a noise-detection-based loss function into AdaBoost to adjust the weight distribution at each iteration. Both a k-nearest-neighbor (k-NN) and an expectation-maximization (EM) based evaluation criterion are constructed to detect noisy instances. Further, a regeneration condition is presented and analyzed to control the ensemble training-error bound of the proposed algorithm, which provides theoretical support. Finally, experiments on selected binary UCI benchmark data sets demonstrate that the proposed algorithm is more robust on noisy data sets than standard AdaBoost and other AdaBoost variants.
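The k-NN noise-detection criterion can be illustrated with a minimal sketch: flag an instance as potentially mislabeled when the majority label among its k nearest neighbors disagrees with its own label. This shows only the detection idea, not ND-AdaBoost's integration of the criterion into the boosting loss; the function name is an assumption.

```python
from collections import Counter

def knn_noise_flags(X, y, k=3):
    """Flag instance i as noisy when the majority label among its
    k nearest neighbors (squared Euclidean distance) disagrees with y[i]."""
    flags = []
    for i, xi in enumerate(X):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), j)
            for j, xj in enumerate(X) if j != i
        )
        neighbor_labels = [y[j] for _, j in dists[:k]]
        majority = Counter(neighbor_labels).most_common(1)[0][0]
        flags.append(majority != y[i])
    return flags
```

In the full algorithm, such flags would steer the per-iteration weight updates rather than simply discard instances.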


Information Sciences | 2013

Particle swarm optimization based on intermediate disturbance strategy algorithm and its application in multi-threshold image segmentation

Hao Gao; Sam Kwong; Jijiang Yang; Jingjing Cao

Particle swarm optimization (PSO) simulates social behavior among individuals (particles) "flying" through a multidimensional search space. To enhance the local search ability of PSO and to guide the search, the region containing the largest number of particles is defined and analyzed in detail. Inspired by ecological behavior, we present a PSO algorithm with an intermediate disturbance search strategy (IDPSO), which enhances the global search ability of the particles and increases their convergence rate. Experimental comparisons of IDPSO with ten known PSO variants on 16 benchmark problems demonstrate the effectiveness of the proposed algorithm. Furthermore, we apply IDPSO to multilevel image segmentation to shorten the computation time. Experimental results on a variety of images show that it segments images effectively and faster.
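The idea of perturbing the attraction toward the global best can be sketched with a minimal PSO loop. This is an illustrative toy, not the paper's IDPSO: the disturbance form, parameter names, and default values here are all assumptions.

```python
import random

def pso_with_disturbance(f, dim, bounds, n_particles=20, iters=200,
                         w=0.7, c1=1.5, c2=1.5, disturb=0.1, seed=0):
    """Minimal PSO minimizing f, with a small multiplicative disturbance
    on the global-best attractor (illustrative sketch only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # disturbance term nudges the social attractor
                noise = 1.0 + disturb * (rng.random() - 0.5)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] * noise - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth test function such as the sphere, this loop converges near the optimum within a few hundred iterations.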


Pattern Recognition | 2014

HEp-2 cell pattern classification with discriminative dictionary learning

Xiangfei Kong; Kuan Li; Jingjing Cao; Qingxiong Yang; Liu Wenyin

This paper presents a supervised discriminative dictionary learning algorithm specifically designed for classifying HEp-2 cell patterns. The proposed algorithm extends the popular K-SVD algorithm: in the training phase, it takes into account the discriminative power of the dictionary atoms and reduces their intra-class reconstruction error during each update, while also considering their inter-class reconstruction effect. Compared with existing extensions of K-SVD, the proposed algorithm is more robust to parameter settings and more discriminative for classifying HEp-2 cell patterns. Quantitative evaluation shows that it significantly outperforms general object classification algorithms on a standard HEp-2 cell pattern classification benchmark and also achieves competitive performance on a standard natural image classification benchmark.
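The classification rule used by dictionary-based methods can be sketched independently of how the dictionaries are learned: assign a sample to the class whose dictionary reconstructs it with the smallest residual. The sketch below uses plain least-squares coding as a stand-in for sparse coding over learned K-SVD dictionaries, so it is an assumption-laden simplification of the paper's method.

```python
import numpy as np

def classify_by_reconstruction(x, class_dicts):
    """Assign x to the class whose dictionary (columns = atoms)
    reconstructs it with the smallest residual norm."""
    best_label, best_err = None, float("inf")
    for label, D in class_dicts.items():
        # code x over D by least squares (stand-in for sparse coding)
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        err = np.linalg.norm(x - D @ coef)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

Discriminative training then shapes each class dictionary so that its atoms reconstruct their own class well and other classes poorly.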


Neural Computing and Applications | 2016

Empirical analysis: stock market prediction via extreme learning machine

Xiaodong Li; Haoran Xie; Ran Wang; Yi Cai; Jingjing Cao; Feng Wang; Huaqing Min; Xiaotie Deng

Predicting stock price movements by modeling quantitative market data is an attractive topic. Given market news and stock prices, commonly regarded as the two most important market data sources, extracting the hidden information in the raw data and making predictions that are both accurate and fast is a challenging problem. In this paper, we present the design and architecture of a trading-signal mining platform that employs an extreme learning machine (ELM) to predict stock prices from both data sources concurrently. Comprehensive experimental comparisons between ELM and state-of-the-art learning algorithms, including the support vector machine (SVM) and the back-propagation neural network (BP-NN), were conducted on intra-day tick-by-tick data of the H-share market and contemporaneous news archives. The results show that (1) both RBF ELM and RBF SVM achieve higher prediction accuracy and faster prediction speed than BP-NN; (2) RBF ELM achieves accuracy similar to that of RBF SVM; and (3) RBF ELM predicts faster than RBF SVM. Simulations of a preliminary trading strategy based on the signals show that a strategy built on more accurate signals earns more profit with less risk.
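The speed advantage of ELM comes from its training procedure: input weights and biases are drawn at random, and only the output weights are solved, in closed form, by least squares. A minimal sketch (sigmoid activation replaced by tanh, names chosen here for illustration):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Basic ELM: random hidden layer, output weights by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                  # closed-form solution
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because no iterative weight tuning is involved, both training and prediction reduce to a handful of matrix operations, which is what makes ELM competitive in speed with SVM and BP-NN in the comparison above.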


Information Sciences | 2013

A vector-valued support vector machine model for multiclass problem

Ran Wang; Sam Kwong; Degang Chen; Jingjing Cao

In this paper, a new model named Multiclass Support Vector Machines with Vector-Valued Decision (M-SVMs-VVD), or VVD for short, is proposed. The basic idea is to separate 2^a classes with a SVM hyperplanes in the feature space induced by certain kernels, where a is a finite positive integer. We start from a 2^a-class problem and extend the model to problems with any number of classes by applying a hierarchical decomposition procedure. Compared with existing SVM-based multiclass methods, the VVD model has two advantages. First, it reduces computational complexity by using a small number of classifiers. Second, the feature-space partition induced by the hyperplanes effectively eliminates the unclassifiable regions (URs) that may degrade classification performance. Experimental comparisons with several state-of-the-art multiclass methods demonstrate that VVD maintains comparable testing accuracy while improving classification efficiency with fewer classifiers, fewer support vectors (SVs), and shorter testing time.
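The reason a hyperplanes suffice for 2^a classes is that each hyperplane's sign contributes one bit of the class index. The encoding itself is a simple binary code, sketched below (the SVM training that realizes each bit is omitted; function names are chosen here for illustration):

```python
def vvd_decode(bit_predictions):
    """Combine the sign outputs of a hyperplanes (as bits in {0, 1})
    into one of 2^a class indices."""
    label = 0
    for bit in bit_predictions:
        label = (label << 1) | bit
    return label

def vvd_encode(label, a):
    """Inverse mapping: class index -> the a target bits used in training."""
    return [(label >> (a - 1 - k)) & 1 for k in range(a)]
```

Since every point in feature space falls on one side of each hyperplane, every point receives a full bit vector and hence a class, which is why no unclassifiable region remains.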


Neurocomputing | 2015

Class-specific soft voting based multiple extreme learning machines ensemble

Jingjing Cao; Sam Kwong; Ran Wang; Xiaodong Li; Ke Li; Xiangfei Kong

Compared with conventional weighted voting methods, a class-specific soft voting (CSSV) system has several advantages. On the one hand, it handles soft class-probability outputs and refines the weights from the level of classifiers to the level of classes. On the other hand, the class-specific weights can improve the combined performance without adding much computational load. This paper proposes two weight-optimization-based ensemble methods, CSSV-ELM and SpaCSSV-ELM, under the CSSV scheme for multiple extreme learning machines (ELMs); the two models target accuracy and sparsity, respectively. First, CSSV-ELM uses the condition number of a matrix, which reveals the stability of a linear system, to determine the weights of the base ELM classifiers. This model reduces the unreliability induced by the randomly assigned input parameters of a single ELM and simultaneously alleviates the ill-conditioning caused by the linear system structure of ELM. Second, sparse ensemble methods can lower memory requirements and speed up classification, but existing ones operate only at the classifier-specific weight level. SpaCSSV-ELM therefore transforms the weight-optimization problem into a sparse coding problem, using sparse representation to maintain classification performance with fewer nonzero weight coefficients. Experiments on twenty UCI data sets and on financial event-series data show the superior performance of the CSSV-based ELM algorithms compared with state-of-the-art algorithms.
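The class-specific combination rule itself is straightforward: instead of one weight per classifier, each (classifier, class) pair gets its own weight, and the class scores are weighted sums of the soft outputs. A minimal sketch of that rule (how the weights are optimized, which is the paper's contribution, is not shown):

```python
def class_specific_soft_vote(prob_outputs, weights):
    """Combine soft outputs with one weight per (classifier, class).
    prob_outputs[t][c]: probability of class c from classifier t.
    weights[t][c]: weight of classifier t for class c.
    Returns the index of the winning class."""
    n_classes = len(prob_outputs[0])
    scores = [
        sum(w[c] * p[c] for p, w in zip(prob_outputs, weights))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=scores.__getitem__)
```

With all weights equal this reduces to ordinary soft voting; class-specific weights let the ensemble trust a classifier on some classes but not others.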


Systems, Man and Cybernetics | 2012

Multi-objective differential evolution with self-navigation

Ke Li; Sam Kwong; Ran Wang; Jingjing Cao; Imre J. Rudas

Traditional differential evolution (DE) mutation operators explore the search space without considering information about search directions, which results in purely stochastic behavior. This paper presents a DE variant with self-navigation ability for multi-objective optimization (MODE/SN). It maintains a pool of well-designed DE mutation operators with distinct search behaviors and applies them adaptively according to feedback from the optimization process. Moreover, we deploy a neural network, trained by the extreme learning machine, to map an artificially generated solution in the objective space back into the decision space. Empirical results demonstrate that MODE/SN outperforms several state-of-the-art algorithms on a set of benchmark problems with variable linkages.
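Applying operators "in an adaptive way according to feedback" is commonly done with probability matching: each operator's accumulated reward is turned into a selection probability, with a floor so no operator is ever starved. This is one standard adaptive-operator-selection scheme, not necessarily the exact credit assignment used in MODE/SN:

```python
import random

def probability_matching(rewards, p_min=0.05):
    """Map accumulated operator rewards to selection probabilities,
    guaranteeing every operator at least probability p_min."""
    k = len(rewards)
    total = sum(rewards)
    if total == 0:
        return [1.0 / k] * k          # no feedback yet: uniform choice
    return [p_min + (1 - k * p_min) * r / total for r in rewards]

def select_operator(rewards, rng=random):
    """Draw one operator index according to the matched probabilities."""
    probs = probability_matching(rewards)
    return rng.choices(range(len(rewards)), weights=probs, k=1)[0]
```

Operators that recently produced improving offspring accumulate reward and are selected more often, while the p_min floor preserves exploration.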


Systems, Man and Cybernetics | 2011

Combining interpretable fuzzy rule-based classifiers via multi-objective hierarchical evolutionary algorithm

Jingjing Cao; Hanli Wang; Sam Kwong; Ke Li

The contributions of this paper are twofold. First, it employs a multi-objective hierarchical evolutionary algorithm to obtain a non-dominated fuzzy rule classifier set that preserves interpretability and diversity. Second, a reduce-error-based ensemble pruning method is used to decrease the size and enhance the accuracy of the combined fuzzy rule classifiers. In this algorithm, each chromosome represents a fuzzy rule classifier and is composed of three types of genes: control, parameter, and rule genes. In each evolutionary iteration, every pair of classifiers in the non-dominated solution set with the same multi-objective qualities is examined in terms of the Q-statistic diversity value, and similar classifiers are removed to preserve the diversity of the fuzzy system. Finally, experimental results on ten UCI benchmark data sets indicate that the approach maintains a good trade-off among accuracy, interpretability, and diversity of the fuzzy classifiers.
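The Q statistic used for the pairwise diversity check is Yule's Q computed from the classifiers' joint correctness counts: Q = (N11 N00 - N01 N10) / (N11 N00 + N01 N10), where N11 counts instances both classify correctly, N00 neither, and N10/N01 exactly one. A direct implementation (returning 0.0 when the denominator vanishes is a convention chosen here, not taken from the paper):

```python
def q_statistic(correct_i, correct_j):
    """Yule's Q statistic between two classifiers, from per-instance
    correctness vectors (True = classified correctly)."""
    n11 = n00 = n10 = n01 = 0
    for ci, cj in zip(correct_i, correct_j):
        if ci and cj:
            n11 += 1
        elif ci and not cj:
            n10 += 1
        elif cj:
            n01 += 1
        else:
            n00 += 1
    denom = n11 * n00 + n01 * n10
    if denom == 0:
        return 0.0        # undefined case; convention assumed here
    return (n11 * n00 - n01 * n10) / denom
```

Q ranges over [-1, 1]: identical error patterns give +1, complementary ones give -1, so pairs with high Q are the "similar classifiers" a diversity-preserving step would remove.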


International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2013

Evolving extreme learning machine paradigm with adaptive operator selection and parameter control

Ke Li; Ran Wang; Sam Kwong; Jingjing Cao

The Extreme Learning Machine (ELM) is an emerging technique for training Single-hidden-Layer Feedforward Networks (SLFNs). It has attracted significant interest in recent years, but its randomly assigned network parameters can incur high learning risk. This fact motivates the evolving ELM paradigm for classification problems proposed in this paper. In this paradigm, a Differential Evolution (DE) variant, which can select the appropriate operator for offspring generation online and adaptively adjust the corresponding control parameters, is proposed for optimizing the network. In addition, 5-fold cross-validation is adopted in the fitness assignment procedure to improve the generalization capability. Empirical studies on several real-world classification data sets demonstrate that the evolving ELM paradigm generally outperforms the original ELM as well as several recent classification algorithms.
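The 5-fold cross-validation fitness mentioned above averages a validation score over five train/validation splits of the data, so the evolved network is rewarded for generalization rather than training fit. A minimal sketch (the `evaluate` callback standing in for "train the candidate ELM and score it" is an assumption of this sketch):

```python
def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous, near-equal folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cv_fitness(evaluate, n, k=5):
    """Average validation score over k folds. `evaluate(train, val)`
    is any function scoring one train/validation index split."""
    folds = kfold_indices(n, k)
    scores = []
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(evaluate(train, val))
    return sum(scores) / k
```

In the evolving paradigm, this average would serve as the DE fitness of one candidate parameter vector.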

Collaboration


Dive into Jingjing Cao's collaborations.

Top Co-Authors

Sam Kwong, City University of Hong Kong
Ran Wang, City University of Hong Kong
Ke Li, University of Exeter
Nieqing Cao, Wuhan University of Technology
Panpan Liu, Wuhan University of Technology
Wenfeng Li, Wuhan University of Technology
Xiangfei Kong, City University of Hong Kong
Xiaodong Li, City University of Hong Kong
Bing Li, Wuhan University of Technology
Degang Chen, North China Electric Power University