Xiaoqing Ding
Tsinghua University
Publications
Featured research published by Xiaoqing Ding.
Neurocomputing | 2015
Yali Li; Shengjin Wang; Qi Tian; Xiaoqing Ding
Feature detection is a fundamental problem in computer vision and image processing. It is a low-level processing step that serves as an essential part of computer-vision-based applications. The goal of this paper is to present a survey of recent progress and advances in visual feature detection. First, we describe the relations among edges, corners and blobs from the psychological point of view. Second, we classify algorithms for detecting edges, corners and blobs into categories and provide detailed descriptions of representative recent algorithms in each category. Since machine learning is becoming increasingly involved in visual feature detection, we place particular emphasis on machine-learning-based methods. Third, evaluation standards and databases are introduced. Through this survey we present recent progress in visual feature detection and identify future trends and challenges. Highlights: We survey the recent progress and advances in visual feature detection. The relations among different kinds of features are covered. Representative feature detection algorithms are described. We categorize and discuss the pros and cons of different kinds of visual features. We place some emphasis on future challenges in feature design.
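The corner detectors such a survey covers can be illustrated with the classic Harris corner response; below is a minimal NumPy sketch. The 3x3 box-filter window and the toy 16x16 test image are simplifications chosen here for brevity, not details from the survey.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response for a 2-D grayscale array (toy sketch)."""
    Iy, Ix = np.gradient(img.astype(float))    # central-difference gradients

    def box(a):                                # 3x3 box-filter smoothing
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy                # determinant of the structure tensor
    trace = Sxx + Syy
    return det - k * trace ** 2                # large positive values flag corners

# A white square on a black background: the response peaks near its corners
# and is zero in the flat interior.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

The response is positive where the structure tensor has two strong eigenvalues (corners) and near zero in flat regions, which is exactly the corner/edge/blob distinction the survey discusses.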
IEEE Transactions on Systems, Man, and Cybernetics | 2008
Yun Lei; Xiaoqing Ding; Shengjin Wang
This paper presents a novel solution to track a visual object under changes in illumination, viewpoint, pose, scale, and occlusion. Under the framework of sequential Bayesian learning, we first develop a discriminative model-based tracker with a fast relevance vector machine algorithm, and then, a generative model-based tracker with a novel sequential Gaussian mixture model algorithm. Finally, we present a three-level hierarchy to investigate different schemes to combine the discriminative and generative models for tracking. The presented hierarchical model combination contains the learner combination (at level one), classifier combination (at level two), and decision combination (at level three). The experimental results with quantitative comparisons performed on many realistic video sequences show that the proposed adaptive combination of discriminative and generative models achieves the best overall performance. Qualitative comparison with some state-of-the-art methods demonstrates the effectiveness and efficiency of our method in handling various challenges during tracking.
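The decision-combination level of such a hierarchy can be sketched as a weighted fusion of per-candidate confidences from the two trackers. This is a generic weighted-sum sketch; the fixed weight `w` and the argmax decision rule are illustrative choices, not the paper's adaptive three-level scheme.

```python
import numpy as np

def combine_decisions(score_disc, score_gen, w=0.5):
    """Decision-level fusion of two trackers' candidate confidences.

    score_disc / score_gen: per-candidate scores from the discriminative
    and generative trackers; w weights the discriminative model.
    Returns the winning candidate index and the fused scores.
    """
    fused = (w * np.asarray(score_disc, float)
             + (1 - w) * np.asarray(score_gen, float))
    return int(np.argmax(fused)), fused

# Candidate 1 is preferred by both trackers, so it wins after fusion too.
best, fused = combine_decisions([0.1, 0.9, 0.2], [0.2, 0.8, 0.3])
```

An adaptive variant, as the abstract suggests, would update `w` online from each tracker's recent reliability rather than fixing it.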
Pattern Recognition Letters | 2010
Yali Li; Shengjin Wang; Xiaoqing Ding
This paper proposes a novel algorithmic framework for the eye-tracking problem. Two unified deformable templates are introduced, for single-eye and double-eye tracking, respectively. Each deformable template can describe both open and closed eye states. A particle filtering framework is applied on top of the templates. A dynamic model considering both the geometrical transformations and the state transitions of the eye(s) is given. Furthermore, a measurement model for contour tracking is modified to suit eye tracking. The experimental results show that our approach not only tracks eye locations accurately, but also obtains the eye contour parameters simultaneously.
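The particle filtering framework the paper builds on follows a standard predict-weight-resample cycle; here is a minimal one-dimensional sketch. The random-walk dynamics and Gaussian likelihood are generic stand-ins for the paper's eye-specific dynamic and measurement models.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe, dynamics_std=1.0):
    """One predict-weight-resample cycle of a generic particle filter.

    `observe(x)` returns the likelihood of the current measurement
    given state x.
    """
    # Predict: diffuse particles with the dynamic model (random walk here).
    particles = particles + rng.normal(0.0, dynamics_std, size=particles.shape)
    # Update: re-weight by the measurement likelihood and normalise.
    weights = weights * np.array([observe(p) for p in particles])
    weights /= weights.sum()
    # Resample proportionally to weight (multinomial keeps the sketch short;
    # systematic resampling would be the usual production choice).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a scalar "eye position" near 5.0 with a Gaussian likelihood.
observe = lambda x: np.exp(-0.5 * (x - 5.0) ** 2)
particles = rng.uniform(0.0, 10.0, size=200)
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, observe, 0.3)
```

After a few cycles the particle cloud concentrates around the true position; in the paper's setting the state would be the template's geometric and open/closed parameters rather than a scalar.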
International Conference on Document Analysis and Recognition | 1999
Jiang Gao; Xiaoqing Ding; Youshou Wu
A new algorithm for segmenting handwritten Chinese character strings is presented. The approach is based on a split-and-merge strategy: possible ligatures between Chinese characters are located and over-split components are merged. The strategy is motivated by the structural properties of Chinese character strings. To guarantee the accuracy of the segmentation, a recognizer is involved to aid the segmentation process. A maximum a posteriori probability index is derived for joint optimization of the segmentation and recognition results, and a dynamic programming algorithm (a modified level-building algorithm) is used to optimize this index. The whole algorithm is applied to a Chinese bank check amount recognition task, and promising experimental results are obtained.
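The joint segmentation-recognition optimization can be sketched as a dynamic program over candidate cut points, with a recognizer score standing in for the paper's MAP index. The toy `score` function below (favouring two-unit segments) is purely hypothetical.

```python
def best_segmentation(n, score):
    """Dynamic-programming search for the best split of positions 0..n.

    `score(i, j)` is a recognizer's log-probability that slice [i, j)
    forms a single character (a stand-in for the paper's MAP index).
    Returns the highest-scoring sorted list of cut points, O(n^2).
    """
    best = [float("-inf")] * (n + 1)   # best[j]: best score covering [0, j)
    back = [0] * (n + 1)               # back[j]: start of the last segment
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            s = best[i] + score(i, j)
            if s > best[j]:
                best[j], back[j] = s, i
    cuts, j = [], n                    # recover the cut points
    while j > 0:
        cuts.append(j)
        j = back[j]
    return sorted(cuts)

# Toy recognizer: length-2 slices look like characters, others are penalized.
score = lambda i, j: 0.0 if j - i == 2 else -1.0
```

`best_segmentation(6, score)` then recovers the cuts `[2, 4, 6]`, i.e. three two-unit characters; the modified level-building algorithm in the paper plays the same role over real stroke components.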
IEEE Transactions on Image Processing | 2014
Yali Li; Shengjin Wang; Qi Tian; Xiaoqing Ding
This paper focuses on the problem of detecting a number of different class objects in images. We present a novel part-based model for object detection with cascaded classifiers. The coarse root and fine part classifiers are combined into the model. Different from the existing methods which learn root and part classifiers independently, we propose a shared-Boost algorithm to jointly train multiple classifiers. This paper is distinguished by two key contributions. The first is to introduce a new definition of shared features for similar pattern representation among multiple classifiers. Based on this, a shared-Boost algorithm which jointly learns multiple classifiers by reusing the shared feature information is proposed. The second contribution is a method for constructing a discriminatively trained part-based model, which fuses the outputs of cascaded shared-Boost classifiers as high-level features. The proposed shared-Boost-based part model is applied for both rigid and deformable object detection experiments. Compared with the state-of-the-art method, the proposed model can achieve higher or comparable performance. In particular, it can lift up the detection rates in low-resolution images. Also the proposed procedure provides a systematic framework for information reusing among multiple classifiers for part-based object detection.
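The boosting machinery that shared-Boost extends is standard AdaBoost over weak classifiers; here is a minimal stump-based sketch. The cross-classifier feature sharing that constitutes the paper's actual contribution is omitted, so this shows only the base algorithm.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=5):
    """Minimal AdaBoost with threshold stumps. y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []  # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for f in range(d):                       # exhaustive stump search
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        alpha = 0.5 * np.log((1 - err + 1e-12) / (err + 1e-12))
        w = w * np.exp(-alpha * y * pred)        # re-weight training samples
        w /= w.sum()
        ensemble.append((f, t, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    s = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
            for f, t, p, a in ensemble)
    return np.sign(s)

# Toy 1-D data, separable at threshold 2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
ens = adaboost_stumps(X, y)
```

Shared-Boost, as the abstract describes it, would run several such boosting processes jointly and let them reuse each other's selected features rather than searching independently.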
Neurocomputing | 2015
Yiwen Guo; Xiaoqing Ding; Jing-Hao Xue
In a vast number of real-world face recognition applications, gallery and probe image sets are captured from different scenarios. For such multi-view data, face recognition systems often perform poorly. To tackle this problem, in this paper we propose a graph embedding framework, which can project the multi-view data into a common subspace of higher discriminability between classes. This framework can be readily utilized to extend classical dimensionality reduction methods to multi-view scenarios. Hence, by utilizing the framework for multi-view face recognition, we propose multi-view linear discriminant analysis (MiLDA). We also empirically demonstrate that, for several distinct multi-view face recognition scenarios, MiLDA has an excellent performance and outperforms many popular approaches.
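The classical single-view method that MiLDA generalises, linear discriminant analysis, reduces to a generalised eigenproblem on the scatter matrices; a compact NumPy sketch follows. The small ridge term is a numerical-stability choice of this sketch, not part of the method.

```python
import numpy as np

def lda_projection(X, y, dims=1):
    """Fisher LDA: directions maximising between-class over within-class
    scatter, via the generalised eigenproblem Sb v = lambda Sw v."""
    X, y = np.asarray(X, float), np.asarray(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    M = np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb)      # ridge for stability
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:dims]]

# Two classes separated along the first axis: LDA should weight axis 0 most.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 0], [5, 1], [6, 0]], float)
y = np.array([0, 0, 0, 1, 1, 1])
W = lda_projection(X, y)
```

MiLDA's graph embedding framework, per the abstract, replaces these single-view scatter matrices with ones built across views so that all views project into one common subspace.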
International Symposium on Neural Networks | 1997
Youshou Wu; Mingsheng Zhao; Xiaoqing Ding
This paper proposes a new neuron model with a trainable activation function (TAF), in addition to the trainable weights of the conventional M-P model. The final neuron activation function is derived by training a primitive activation function. A BP-like learning algorithm is presented for multilayer feedforward neural networks (MFNNs) constructed from TAF-model neurons. Two simulation examples show the capacity and performance advantages of the new MFNN in comparison with a conventional sigmoid MFNN.
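One way to read a trainable activation function is as a trainable combination of fixed basis activations; the parameterisation below (shifted sigmoids mixed by coefficients `c`) is an illustrative assumption of this sketch, not the paper's exact TAF model.

```python
import numpy as np

def taf_neuron(x, w, c):
    """Neuron whose activation is a trainable mix of fixed basis functions.

    z = x @ w is the usual weighted input; the output is
    sum_k c[k] * sigmoid(z - b_k) over fixed offsets b_k, so training c
    reshapes the activation function itself, alongside the weights w.
    """
    z = np.asarray(x) @ np.asarray(w)
    offsets = (-1.0, 0.0, 1.0)
    basis = np.stack([1.0 / (1.0 + np.exp(-(z - b))) for b in offsets])
    return np.asarray(c) @ basis

# With c = (0, 1, 0) the neuron reduces to a plain sigmoid: output 0.5 at z = 0.
out = taf_neuron(np.array([[0.0]]), np.array([1.0]), np.array([0.0, 1.0, 0.0]))
```

Because the output is linear in `c`, the same backpropagated error that trains `w` can train `c`, which is the spirit of the BP-like algorithm the abstract mentions.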
Neurocomputing | 2016
Yicong Liang; Xiaoqing Ding; Changsong Liu; Jing-Hao Xue
Multibiometric systems based on score fusion can effectively combine the discriminative power of multiple biometric traits and overcome the limitations of individual traits, leading to better biometric authentication performance. To tackle multiple adverse issues with established classifier-based or probability-based algorithms, in this paper we propose a novel order-preserving probabilistic score fusion algorithm, Order-Preserving Tree (OPT), by casting the score fusion problem as an optimisation problem with a natural order-preserving constraint. OPT is fully non-parametric and widely applicable: it assumes no parametric forms of probabilities or independence among sources, directly estimates the posterior probabilities by maximum likelihood, and exploits the power of tree-structured ensembles. We demonstrate the effectiveness of our OPT algorithm by comparing it with many widely used score fusion algorithms on two prevalent multibiometric databases. Highlights: We propose a probabilistic score fusion algorithm. The algorithm is based on order-preserving constraints. The algorithm is fully non-parametric, with no hyper-parameters to be tuned. A tree-structured ensemble is used to avoid the curse of dimensionality. Experiments on two databases show the effectiveness of the algorithm.
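The order-preserving constraint at the heart of OPT says that if every component score of one sample dominates another's, the fused score must dominate too. A brute-force checker over a two-trait score grid makes the property concrete; the two example fusion rules are illustrative, not from the paper.

```python
from itertools import product

def order_preserving(fuse, grid):
    """Brute-force check of the order-preserving property on a grid:
    whenever a1 >= a2 and b1 >= b2, require fuse(a1, b1) >= fuse(a2, b2).
    `fuse` maps a pair of trait scores to a single fused score."""
    pairs = list(product(grid, repeat=2))
    for (a1, b1), (a2, b2) in product(pairs, repeat=2):
        if a1 >= a2 and b1 >= b2 and fuse(a1, b1) < fuse(a2, b2):
            return False
    return True
```

The sum rule `a + b` satisfies the constraint, while a rule like `a - b` violates it; OPT restricts its learned posterior estimates to the constraint-satisfying family.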
IEEE Transactions on Image Processing | 2016
Yiwen Guo; Xiaoqing Ding; Changsong Liu; Jing-Hao Xue
Canonical correlation analysis (CCA) is an effective way to find two appropriate subspaces in which Pearson's correlation coefficients are maximized between projected random vectors. Due to its well-established theoretical support and relatively efficient computation, CCA is widely used as a joint dimension reduction tool and has been successfully applied to many image processing and computer vision tasks. However, as reported, traditional CCA suffers from overfitting in many practical cases. In this paper, we propose sufficient CCA (S-CCA) to relieve CCA's overfitting problem, inspired by the theory of sufficient dimension reduction. The effectiveness of S-CCA is verified both theoretically and experimentally. Experimental results also demonstrate that our S-CCA outperforms some of CCA's popular extensions in the prediction phase, especially when severe overfitting occurs.
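Plain CCA, the baseline that S-CCA improves on, can be computed from the SVD of the whitened cross-covariance; a minimal NumPy sketch follows. The small `reg` ridge is this sketch's numerical stabiliser, not part of classical CCA.

```python
import numpy as np

def _inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def cca(X, Y, dims=1, reg=1e-8):
    """CCA via SVD of the whitened cross-covariance. Returns projections
    Wx, Wy and the top canonical correlations s."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    K = _inv_sqrt(Cxx) @ Cxy @ _inv_sqrt(Cyy)   # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)
    return _inv_sqrt(Cxx) @ U[:, :dims], _inv_sqrt(Cyy) @ Vt.T[:, :dims], s[:dims]

# When Y is an invertible linear map of X, the top canonical correlation is ~1
# — exactly the kind of perfect in-sample fit that can mask overfitting.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = X @ np.array([[1.0, 2.0], [3.0, 4.0]])
Wx, Wy, s = cca(X, Y)
```

S-CCA's contribution, per the abstract, is to keep this projection machinery while drawing on sufficient dimension reduction so the learned subspaces generalise beyond the training sample.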
IEEE Transactions on Information Forensics and Security | 2015
Yicong Liang; Xiaoqing Ding; Jing-Hao Xue
Generative Bayesian models have recently become the most promising framework for classifier design in face verification. However, we report in this paper that the joint Bayesian method, a successful classifier in this framework, suffers performance degradation due to its underuse of the expectation-maximization algorithm in its training phase. To rectify the underuse, we propose a new method termed advanced joint Bayesian (AJB). AJB has a good convergence property and achieves a higher verification rate than both the joint Bayesian method and other state-of-the-art classifiers on the Labeled Faces in the Wild (LFW) database.
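The joint Bayesian verification score is a log-likelihood ratio between a same-identity and a different-identity Gaussian model of a face pair. The following is a one-dimensional toy version, where the scalar variances `s_mu` (identity) and `s_eps` (within-person noise) stand in for the method's covariance matrices.

```python
import numpy as np

def joint_bayesian_llr(x1, x2, s_mu, s_eps):
    """Scalar sketch of the joint Bayesian verification score.

    Same-identity model: both features share one latent identity with
    variance s_mu plus noise s_eps, so they covary. Different-identity
    model: independent identities, zero cross-covariance. The score is
    the log-likelihood ratio of the pair under the two models.
    """
    def log_gauss2(x, C):
        x = np.asarray(x, float)
        return (-0.5 * x @ np.linalg.solve(C, x)
                - 0.5 * np.log(np.linalg.det(2 * np.pi * C)))

    same = np.array([[s_mu + s_eps, s_mu], [s_mu, s_mu + s_eps]])
    diff = np.array([[s_mu + s_eps, 0.0], [0.0, s_mu + s_eps]])
    return log_gauss2([x1, x2], same) - log_gauss2([x1, x2], diff)
```

Similar feature pairs score positively and dissimilar pairs negatively; the EM training whose underuse the paper analyses is what estimates the identity and noise covariances from data.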