Zhong-Qiu Zhao
Hefei University of Technology
Publication
Featured research published by Zhong-Qiu Zhao.
Neurocomputing | 2015
Zhong-Qiu Zhao; Linhai Ma; Yiu-ming Cheung; Xindong Wu; Yuan Yan Tang; Chun Lung Philip Chen
Automatically identifying plant species is very useful for ecologists, amateur botanists, educators, and others. Leafsnap is the first successful mobile application to tackle this problem, but it runs only on iOS, and Android is, to the best of our knowledge, the more widely used mobile operating system. In this paper, we describe an Android-based mobile application that automatically identifies plant species from photographs of tree leaves. A leaf image can be either a digital image from an existing leaf image database or a picture taken with a camera, in which case it should show a single leaf placed on a light, untextured background without other clutter. The identification process consists of three steps: leaf image segmentation, feature extraction, and species identification. The demo system is evaluated on the ImageCLEF2012 Plant Identification database, which contains 126 tree species from the French Mediterranean area. The system returns the top several species that best match the query leaf image, together with textual descriptions and additional images of plant leaves, flowers, and so on. Our system achieves state-of-the-art identification performance.
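A minimal sketch of the three-step pipeline described above (segmentation, feature extraction, species identification), assuming OpenCV is available; Hu moments and a plain nearest-neighbour lookup stand in for the paper's actual leaf descriptors and matcher, and `train_feats`/`train_labels` are hypothetical placeholders for a labelled leaf database.

```python
# Minimal sketch of a three-step leaf identification pipeline
# (segmentation -> feature extraction -> nearest-neighbour identification).
# Not the paper's implementation: Hu moments stand in for the actual leaf
# descriptors, and train_feats/train_labels are placeholders for a
# labelled leaf database such as ImageCLEF2012.
import cv2
import numpy as np

def segment_leaf(image_bgr):
    """Threshold a single leaf photographed on a light, untextured background."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding; the leaf is assumed darker than the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

def extract_features(mask):
    """Shape features of the segmented leaf (Hu moments as a simple stand-in)."""
    moments = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale for numerical stability.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def identify(query_feat, train_feats, train_labels, top_k=5):
    """Return the top-k candidate species by Euclidean distance."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    order = np.argsort(dists)[:top_k]
    return [train_labels[i] for i in order]
```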
IEEE Transactions on Knowledge and Data Engineering | 2015
Jing Wang; Meng Wang; Peipei Li; Luoqi Liu; Zhong-Qiu Zhao; Xuegang Hu; Xindong Wu
Online selection of dynamic features has attracted intensive interest in recent years. However, existing online feature selection methods evaluate features individually and ignore the underlying structure of a feature stream. For instance, in image analysis, features are generated in groups that represent color, texture, and other visual information. Simply breaking the group structure during feature selection may degrade performance. Motivated by this observation, we formulate the problem as online group feature selection, which assumes that features are generated individually but that group structures exist in the feature stream. To the best of our knowledge, this is the first time that the correlation among streaming features has been considered in the online feature selection process. To solve this problem, we develop a novel online group feature selection method named OGFS. Our approach consists of two stages: online intra-group selection and online inter-group selection. In the intra-group selection, we design a criterion based on spectral analysis to select discriminative features in each group. In the inter-group selection, we use a linear regression model to select an optimal subset. This two-stage procedure continues until no more features arrive or predefined stopping conditions are met. Finally, we apply our method to multiple tasks including image classification and face verification. Extensive empirical studies on real-world and benchmark data sets demonstrate that our method outperforms other state-of-the-art online feature selection methods.
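A schematic sketch of the two-stage idea, not the OGFS algorithm itself: a Fisher-style score stands in for the paper's spectral-analysis criterion, Lasso stands in for the linear-regression based inter-group selection, and class labels are used directly as a numeric regression target for simplicity.

```python
# Schematic sketch of two-stage online group feature selection.
# The intra-group criterion (Fisher-style score) and the inter-group step
# (Lasso) are simplified stand-ins for OGFS's spectral-analysis criterion
# and linear-regression based selection.
import numpy as np
from sklearn.linear_model import Lasso

def intra_group_select(X_group, y, keep=5):
    """Score each feature in the arriving group and keep the top ones."""
    classes = np.unique(y)
    scores = []
    for j in range(X_group.shape[1]):
        f = X_group[:, j]
        between = sum((f[y == c].mean() - f.mean()) ** 2 for c in classes)
        within = sum(f[y == c].var() + 1e-12 for c in classes)
        scores.append(between / within)
    return np.argsort(scores)[::-1][:keep]

def inter_group_select(X_selected, y, alpha=0.05):
    """Refine the pool of already-kept features with a sparse linear model."""
    model = Lasso(alpha=alpha).fit(X_selected, y)   # y: numeric class labels
    return np.flatnonzero(np.abs(model.coef_) > 1e-6)

def ogfs_stream(groups, y, keep=5):
    """Groups of features arrive one at a time; returns (group, column) pairs."""
    pool, pool_cols, final = [], [], np.array([], dtype=int)
    for g_id, X_group in enumerate(groups):
        kept = intra_group_select(X_group, y, keep)
        pool.append(X_group[:, kept])
        pool_cols.extend((g_id, j) for j in kept)
        final = inter_group_select(np.hstack(pool), y)
    return [pool_cols[i] for i in final]
```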
IEEE Transactions on Image Processing | 2012
Zhong-Qiu Zhao; Hervé Glotin; Zhao Xie; Jun Gao; Xindong Wu
Recent studies have shown that sparse representation (SR) can deal well with many computer vision problems, and its kernel version has powerful classification capability. In this paper, we address the application of cooperative SR to semi-supervised image annotation, which increases the amount of labeled images available for training image classifiers. Given a set of labeled (training) images and a set of unlabeled (test) images, the usual SR method, which we call forward SR, represents each unlabeled image with several labeled ones and then annotates the unlabeled image according to the annotations of those labeled images. However, to the best of our knowledge, the SR method in the opposite direction, which we call backward SR, has not been addressed so far: it represents each labeled image with several unlabeled images and then annotates an unlabeled image according to the annotations of the labeled images that it is selected to represent. In this paper, we explore how much the backward SR can contribute to image annotation and how complementary it is to the forward SR. Co-training, which has been proved to improve two classifiers only if they are relatively independent, is then adopted to verify this complementarity between the two SRs in opposite directions. Finally, the co-training of two SRs in kernel space builds a cooperative kernel sparse representation (Co-KSR) method for image annotation. Experimental results and analyses show that the two KSRs in opposite directions are complementary, and that Co-KSR improves considerably over either of them, with an image annotation performance better than other state-of-the-art semi-supervised classifiers such as the transductive support vector machine, local and global consistency, and Gaussian fields and harmonic functions. Comparative experiments with a nonsparse solution are also performed to show that sparsity plays an important role in the cooperation of image representations in the two opposite directions. This paper extends the application of SR in image annotation and retrieval.
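A minimal sketch of the forward-SR annotation step only, assuming scikit-learn's Lasso as the sparse coder; the backward SR, the kernelization, and the co-training loop of Co-KSR are not shown.

```python
# Minimal sketch of forward sparse representation (fSR) for annotation:
# an unlabeled image is coded as a sparse combination of labeled images,
# and their labels are propagated in proportion to the coding coefficients.
# Lasso stands in for the paper's sparse coder.
import numpy as np
from sklearn.linear_model import Lasso

def forward_sr_annotate(x_unlabeled, X_labeled, Y_labeled, alpha=0.01):
    """
    x_unlabeled: (d,) feature vector of one unlabeled image.
    X_labeled:   (n, d) features of labeled images.
    Y_labeled:   (n, k) binary annotation matrix (n images, k concepts).
    Returns a (k,) relevance score per concept.
    """
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    coder.fit(X_labeled.T, x_unlabeled)   # columns of the dictionary = labeled images
    w = coder.coef_                       # sparse weights over labeled images
    scores = w @ Y_labeled                # weighted vote of their annotations
    return scores / (scores.sum() + 1e-12)
```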
IEEE Intelligent Systems | 2015
Xindong Wu; Huanhuan Chen; Gongqing Wu; Jun Liu; Qinghua Zheng; Xiaofeng He; Aoying Zhou; Zhong-Qiu Zhao; Bifang Wei; Yang Li; Qiping Zhang; Shichao Zhang
In the era of big data, knowledge engineering faces fundamental challenges induced by fragmented knowledge from heterogeneous, autonomous sources with complex and evolving relationships. The knowledge representation, acquisition, and inference techniques developed in the 1970s and 1980s, driven by research and development of expert systems, must be updated to cope with both fragmented knowledge from multiple sources in the big data revolution and in-depth knowledge from domain experts. This article presents BigKE, a knowledge engineering framework that handles fragmented knowledge modeling and online learning from multiple information sources, nonlinear fusion on fragmented knowledge, and automated demand-driven knowledge navigation.
Pattern Recognition | 2016
Zhong-Qiu Zhao; Yiu-ming Cheung; Haibo Hu; Xindong Wu
In image classification, can sparse representation (SR) associate a test image with all training images from the correct class, while not associating it with any training images from the incorrect classes? The backward sparse representation (bSR), which carries complementary information in the opposite direction, can remedy the imperfect associations discovered by the usual forward sparse representation (fSR). Unfortunately, this complementarity between the fSR and the bSR has not been studied in face recognition. Two key problems must be solved. The first is how to produce additional bases for the bSR: in face recognition there are no bases other than the single test face image itself, which results in a large reconstruction residual and weak classification capability for the bSR. The second is how to make the bSR robust to image corruption. In this paper, we introduce a CoSR model, which combines the fSR and the bSR, into robust face recognition, proposing two alternative solutions to these key problems: learned bases and unknown faces are used to enrich the basis set of the bSR. On this basis, we also propose two improved CoSR algorithms for robust face recognition. Our study shows that our CoSR algorithms obtain inspiring and competitive recognition rates compared with other state-of-the-art algorithms. The bSR with the proposed basis-enriching methods contributes the most to the robustness of our CoSR algorithm, and unknown faces work better than learned bases. Moreover, since our CoSR model operates in a subspace of very low dimensionality, it gains an overwhelming advantage in time consumption over the traditional RSR algorithm in image pixel space. In addition, our study reveals that sparsity plays an important role in our CoSR algorithm for face recognition.
Highlights:
- This is the first work to explore compensating the general SR for face recognition.
- We explore the complementarity between the fSR and the bSR for face recognition.
- We propose two methods, robust to corruption in faces, to expand the bases of the bSR.
- The CoSR is introduced into face recognition, obtaining very competitive performance.
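A small sketch of the forward-SR classification rule (class-wise reconstruction residuals) that forms one half of a CoSR-style scheme; Lasso is used as a generic sparse coder, and the backward SR with enriched bases is not reproduced here.

```python
# Sketch of the forward-SR classification rule: a test face is sparsely
# coded over all training faces, and the class with the smallest
# class-wise reconstruction residual wins.  The backward SR with enriched
# bases (learned bases / unknown faces) is not shown.
import numpy as np
from sklearn.linear_model import Lasso

def fsr_classify(x_test, X_train, y_train, alpha=0.01):
    """x_test: (d,); X_train: (n, d); y_train: (n,) integer class labels."""
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(X_train.T, x_test)                 # code over training faces
    w = coder.coef_
    residuals = {}
    for c in np.unique(y_train):
        w_c = np.where(y_train == c, w, 0.0)     # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x_test - X_train.T @ w_c)
    return min(residuals, key=residuals.get)
```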
Science in China Series F: Information Sciences | 2014
Zhong-Qiu Zhao; Xindong Wu; CanYi Lu; Hervé Glotin; Jun Gao
The radial basis function (RBF) centers play different roles in determining the classification capability of a Gaussian radial basis function neural network (GRBFNN) and should therefore hold different width values. However, it is very hard and time-consuming to optimize the centers and widths at the same time. In this paper, we offer a new insight into this problem. We explore the impact of the definition of the widths on the selection of the centers, propose an optimization algorithm for the RBF widths in order to select proper centers from the center candidate pool, and improve the classification performance of the GRBFNN. The design of the objective function of the optimization algorithm is based on the local mapping capability of each Gaussian RBF. In addition, the objective function handles the imbalance problem which may occur even when different local regions have the same number of examples. Finally, the recursive orthogonal least squares (ROLS) method and the genetic algorithm (GA), which are usually adopted to optimize the RBF centers, are used separately to select the centers from the center candidates with the initialized widths, in order to verify the validity of our proposed width initialization strategy for center selection. Our experimental results show that, compared with the heuristic width setting method, the width optimization strategy makes the selected centers more appropriate and improves the classification performance of the GRBFNN. Moreover, the GRBFNN constructed by our method attains better classification performance than the RBF LS-SVM, which is a state-of-the-art classifier.
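A small sketch of a Gaussian RBF network in which each centre carries its own width, with a least-squares output layer; the paper's width-optimization objective and the ROLS/GA centre selection are not reproduced.

```python
# Gaussian RBF network with per-centre widths and a least-squares output
# layer.  The width-optimization objective and the ROLS/GA centre
# selection described in the paper are not shown here.
import numpy as np

def rbf_design_matrix(X, centers, widths):
    """X: (n, d); centers: (m, d); widths: (m,). Returns (n, m) activations."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def fit_output_weights(X, Y, centers, widths, ridge=1e-6):
    """Lightly ridge-regularized least-squares output weights."""
    Phi = rbf_design_matrix(X, centers, widths)
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y)

def predict(X, centers, widths, W):
    return rbf_design_matrix(X, centers, widths) @ W
```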
cross language evaluation forum | 2008
Sabrina Tollari; Philippe Mulhem; Marin Ferecatu; Hervé Glotin; Marcin Detyniecki; Patrick Gallinari; Hichem Sahbi; Zhong-Qiu Zhao
This article compares eight different diversity methods: three based on visual information, one based on date information, three adapted to each topic based on location and visual information, and finally, for completeness, one based on random permutation. To compare the effectiveness of these methods, we apply them to 26 runs obtained with varied methods from different research teams and based on different modalities. We then discuss the results of the more than 200 resulting runs. The results show that query-adapted methods are more efficient than non-adapted methods, that visual-only runs are more difficult to diversify than text-only and text-image runs, and finally that only a few methods maximize both the precision and the cluster recall at 20 documents.
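The two measures referred to above, precision and cluster recall at a cutoff of 20 documents, can be computed as in the sketch below; the `relevant` and `cluster_of` structures are illustrative placeholders, not the campaign's actual file formats.

```python
# Precision and cluster recall at a cutoff of 20 documents.
# `relevant` maps document ids to a relevance flag; `cluster_of` maps
# relevant document ids to their topic cluster.  Both are illustrative.
def precision_at(ranked_ids, relevant, k=20):
    top = ranked_ids[:k]
    return sum(1 for d in top if relevant.get(d, False)) / float(k)

def cluster_recall_at(ranked_ids, relevant, cluster_of, k=20):
    all_clusters = set(cluster_of.values())
    found = {cluster_of[d] for d in ranked_ids[:k]
             if relevant.get(d, False) and d in cluster_of}
    return len(found) / float(len(all_clusters)) if all_clusters else 0.0
```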
IEEE MultiMedia | 2009
Zhong-Qiu Zhao; Hervé Glotin
A postprocessing system based on affinity-propagation clustering on manifolds can improve the diversity of retrieval results without reducing their relevance.
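A sketch of this kind of diversity-oriented post-processing, assuming scikit-learn's AffinityPropagation applied in plain feature space rather than on manifolds as in the paper: the top retrieved items are clustered, and the list is re-ranked by taking one item per cluster in relevance order before appending the remaining items.

```python
# Diversity post-processing sketch: cluster the top retrieved items with
# affinity propagation, then take one item per cluster in relevance order
# and append the remaining items.  Applied in plain feature space here,
# unlike the manifold-based clustering of the paper.
import numpy as np
from sklearn.cluster import AffinityPropagation

def diversify(features, ranked_indices, top_n=50):
    """features: (N, d) array; ranked_indices: indices sorted by relevance."""
    top = list(ranked_indices[:top_n])
    labels = AffinityPropagation(random_state=0).fit_predict(features[top])
    seen, first_pass, rest = set(), [], []
    for idx, lab in zip(top, labels):
        (first_pass if lab not in seen else rest).append(idx)
        seen.add(lab)
    return first_pass + rest + list(ranked_indices[top_n:])
```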
Pattern Recognition | 2017
Peng Zheng; Zhong-Qiu Zhao; Jun Gao; Xindong Wu
Image set classification has been widely applied in many real-life scenarios, including surveillance videos, multi-view camera networks, and personal albums. Compared with single-image classification, it is more promising and has therefore attracted significant research attention in recent years. Traditional (forward) sparse representation (fSR) only uses training images to represent query images. If complementary information can be found from the backward sparse representation (bSR), which represents query images with training ones, performance is likely to improve. However, for image set classification, how to produce additional bases for the bSR is a key problem, as there are no bases other than the query set itself. In this paper, we extend the cooperative sparse representation (CoSR) method, which integrates the fSR and the bSR, to image set classification. In this process, we propose two schemes, namely 'Learning Bases' and 'Training Sets Division', to produce the additional dictionary for the bSR. Different from previous work, we treat scene classification as an image set classification problem, which provides a new perspective on scene classification. Experimental results show that the proposed model obtains competitive recognition rates for image set classification, and that combining the information from the two opposite SRs achieves better results. The feasibility of formulating scene classification as image set classification is also validated.
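An illustrative sketch of one way to divide training image sets so that held-out images can serve as extra bases for the bSR; the abstract describes the 'Training Sets Division' scheme only at a high level, so this particular split is an assumption, not the authors' procedure.

```python
# Illustrative only: randomly split each training (gallery) image set so
# that the held-out portion can be pooled as extra bases for the backward
# SR.  The exact division used by the authors may differ.
import numpy as np

def divide_training_sets(train_sets, holdout_ratio=0.5, seed=0):
    """train_sets: list of (n_i, d) arrays, one per class.
    Returns per-class fSR dictionaries and a pooled matrix of held-out
    images usable as additional bSR bases."""
    rng = np.random.default_rng(seed)
    fsr_dict, extra = [], []
    for S in train_sets:
        idx = rng.permutation(len(S))
        cut = max(1, int(len(S) * (1 - holdout_ratio)))
        fsr_dict.append(S[idx[:cut]])
        extra.append(S[idx[cut:]])
    return fsr_dict, np.vstack([e for e in extra if len(e)])
```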
international conference on image processing | 2009
Hervé Glotin; Zhong-Qiu Zhao; Stephane Ayache
We propose new, efficient visual features called Profile Entropy Features (PEF), which capture information about the structure of the image content and are defined as the entropy of the distribution of a projection of the pixels. We analyse two simple projection operators (arithmetic and harmonic mean) and two orientations (horizontal and vertical). PEF are fast to compute (10 images per second on a Pentium IV) and of small dimension. Moreover, we show on the High Level Feature task of TRECVID 2008 that PEF perform on average better than state-of-the-art features (usual color features, edge direction, Gabor, and Local Binary Patterns). We also show on another international image retrieval campaign, the Visual Concept Detection task of ImageCLEF2008, that the arithmetic and harmonic projections give complementary information, yielding the third-best ranked system in the official runs of that campaign. Other properties of the PEF are discussed.
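The definition above (entropy of pixel-projection profiles under arithmetic or harmonic means, in two orientations) translates into a short routine; the histogram binning used here is an assumption not fixed by the abstract.

```python
# Sketch of Profile Entropy Features: project the pixels onto rows and
# columns with an arithmetic or harmonic mean, then take the entropy of
# each profile's histogram.  The number of bins is an assumption.
import numpy as np

def _entropy(profile, bins=32):
    hist, _ = np.histogram(profile, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def profile_entropy_features(gray, bins=32):
    """gray: 2-D array of pixel intensities. Returns 4 PEF values."""
    eps = 1e-12
    feats = []
    for axis in (0, 1):                              # column / row profiles
        arith = gray.mean(axis=axis)                 # arithmetic-mean projection
        harmo = gray.shape[axis] / (1.0 / (gray + eps)).sum(axis=axis)  # harmonic
        feats.append(_entropy(arith, bins))
        feats.append(_entropy(harmo, bins))
    return np.array(feats)
```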