
Publication


Featured research published by Zhiwu Lu.


Pattern Recognition | 2015

Noise-robust semi-supervised learning via fast sparse coding

Zhiwu Lu; Liwei Wang

This paper presents a novel noise-robust graph-based semi-supervised learning algorithm to deal with the challenging problem of semi-supervised learning with noisy initial labels. Inspired by the successful use of sparse coding for noise reduction, we choose to give a new L1-norm formulation of Laplacian regularization for graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is explicitly defined over the eigenvectors of the normalized Laplacian matrix, we formulate graph-based semi-supervised learning as an L1-norm linear reconstruction problem which can be efficiently solved by sparse coding. Furthermore, by working with only a small subset of eigenvectors, we develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Finally, we evaluate the proposed algorithm on noise-robust image classification. The experimental results on several benchmark datasets demonstrate the promising performance of the proposed algorithm.

Highlights:
- We propose a novel semi-supervised learning algorithm based on fast sparse coding.
- Our algorithm achieves promising results in noise-robust image classification.
- Our algorithm can readily be extended to many other challenging problems.
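The reconstruction step described above can be sketched as follows: noisy labels are reconstructed in the basis of the smallest eigenvectors of the normalized Laplacian under an L1 penalty. This is a minimal illustration only; scikit-learn's Lasso stands in for the paper's fast sparse-coding solver, and the toy data, RBF bandwidth, and eigenvector count are all assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
# two well-separated Gaussian clusters; 6 of 60 labels are flipped ("noisy")
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(2.0, 0.3, (30, 2))])
y = np.array([1.0] * 30 + [-1.0] * 30)
y_noisy = y.copy()
y_noisy[rng.choice(60, 6, replace=False)] *= -1

# RBF affinity and symmetric normalized Laplacian
W = np.exp(-cdist(X, X) ** 2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)
L = np.eye(60) - W / np.sqrt(np.outer(d, d))

# the smallest-eigenvalue eigenvectors span the functions that vary
# smoothly over the graph
_, vecs = np.linalg.eigh(L)
U = vecs[:, :10]

# reconstruct the noisy labels in this smooth basis under an L1 penalty;
# the sparse, smooth reconstruction suppresses the flipped labels
a = Lasso(alpha=0.01, fit_intercept=False).fit(U, y_noisy).coef_
pred = np.sign(U @ a)
```

Working with only 10 of the 60 eigenvectors mirrors the paper's point that a small eigenvector subset suffices and keeps the sparse coding fast.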


Pattern Recognition | 2015

Learning descriptive visual representation for image classification and annotation

Zhiwu Lu; Liwei Wang

This paper presents a novel semantic regularized matrix factorization method for learning descriptive visual bag-of-words (BOW) representation. Although very influential in image classification, the traditional visual BOW representation has one distinct drawback. That is, for efficiency purposes, this visual representation is often generated by directly clustering the low-level visual feature vectors extracted from local keypoints or regions, without considering the high-level semantics of images. In other words, it still suffers from the semantic gap and may lead to significant performance degradation in more challenging tasks, e.g., image classification over social collections with large intra-class variations. To learn descriptive visual BOW representation for such an image classification task, we develop a semantic regularized matrix factorization method by adding Laplacian regularization defined with the tags (easy to access) of social images into matrix factorization. Moreover, given that image annotation only provides the tags of training images in advance (while the tags of all social images are available), we can readily apply the proposed method to image annotation by first running a round of image annotation to predict the tags (maybe incorrect) of test images and thus obtaining the tags of all images. Experimental results show the promising performance of the proposed method.
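The idea of adding tag-defined Laplacian regularization to matrix factorization can be illustrated with the standard graph-regularized NMF multiplicative updates, used here as a stand-in for the paper's semantic regularized factorization; the toy data, the one-tag-per-image affinity, and all parameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((40, 20))            # toy "image x visual word" BOW matrix
tags = rng.integers(0, 4, 40)       # one hypothetical tag per image
S = (tags[:, None] == tags[None, :]).astype(float)  # tag-based affinity
np.fill_diagonal(S, 0.0)
D = np.diag(S.sum(axis=1))
lam, k = 0.1, 5

# graph-regularized NMF multiplicative updates: the Laplacian penalty
# tr(H^T (D - S) H) pulls same-tag images toward similar codes H
H = rng.random((40, k))             # per-image codes
W = rng.random((20, k))             # basis vectors ("semantic" visual words)
for _ in range(200):
    W *= (V.T @ H) / (W @ (H.T @ H) + 1e-9)
    H *= (V @ W + lam * S @ H) / (H @ (W.T @ W) + lam * D @ H + 1e-9)

recon_err = np.linalg.norm(V - H @ W.T)
```

The extra `lam * S @ H` / `lam * D @ H` terms are the only change relative to plain NMF: they trade a little reconstruction error for codes that respect the tag graph.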


IEEE Transactions on Image Processing | 2015

Semantic Sparse Recoding of Visual Content for Image Applications

Zhiwu Lu; Peng Han; Liwei Wang; Ji-Rong Wen

This paper presents a new semantic sparse recoding method to generate more descriptive and robust representation of visual content for image applications. Although the visual bag-of-words (BOW) representation has been reported to achieve promising results in different image applications, its visual codebook is completely learnt from low-level visual features using quantization techniques and thus the so-called semantic gap remains unbridgeable. To handle this challenging issue, we utilize the annotations (predicted by algorithms or shared by users) of all the images to improve the original visual BOW representation. This is further formulated as a sparse coding problem so that the noise issue induced by the inaccurate quantization of visual features can also be handled to some extent. By developing an efficient sparse coding algorithm, we successfully generate a new visual BOW representation for image applications. Since such sparse coding has actually incorporated the high-level semantic information into the original visual codebook, we thus consider it as semantic sparse recoding of the visual content. Finally, we apply our semantic sparse recoding method to automatic image annotation and social image classification. The experimental results on several benchmark datasets show the promising performance of our semantic sparse recoding method in these two image applications.


Neurocomputing | 2016

Image classification by visual bag-of-words refinement and reduction

Zhiwu Lu; Liwei Wang; Ji-Rong Wen

This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction to overcome the drawbacks associated with the visual BOW model which has been widely used for image classification. Although very influential in the literature, the traditional visual BOW model has two distinct drawbacks. Firstly, for efficiency purposes, the visual vocabulary is commonly constructed by directly clustering the low-level visual feature vectors extracted from local keypoints, without considering the high-level semantics of images. That is, the visual BOW model still suffers from the semantic gap, and thus may lead to significant performance degradation in more challenging tasks (e.g. social image classification). Secondly, typically thousands of visual words are generated to obtain better performance on a relatively large image dataset. Due to such a large vocabulary size, the subsequent image classification may take an excessive amount of time. To overcome the first drawback, we develop a graph-based method for visual BOW refinement by exploiting the tags (easy to access although noisy) of social images. More notably, for efficient image classification, we further reduce the refined visual BOW model to a much smaller size through semantic spectral clustering. Extensive experimental results show the promising performance of the proposed framework for visual BOW refinement and reduction.
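The reduction step can be sketched as follows: spectral clustering groups visual words into a handful of "meta-words", and histogram columns are pooled per cluster. Here a plain co-occurrence cosine similarity stands in for the paper's tag-derived semantic similarity, and the toy histograms and cluster count are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
B = rng.poisson(1.0, (100, 50)).astype(float)  # toy BOW histograms

# cosine similarity between visual words via their image co-occurrence
# profiles (a plain stand-in for the paper's tag-based semantics)
norms = np.linalg.norm(B, axis=0)
C = (B.T @ B) / (np.outer(norms, norms) + 1e-9)

# spectral clustering merges the 50 visual words into 10 "meta-words";
# pooling histogram columns per cluster yields the reduced BOW model
labels = SpectralClustering(n_clusters=10, affinity="precomputed",
                            random_state=0).fit_predict(C)
B_reduced = np.stack([B[:, labels == c].sum(axis=1) for c in range(10)], axis=1)
```

The reduced 10-column histograms preserve the total counts of the original 50-word model while making the downstream classifier far cheaper.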


Bioinformatics | 2016

CMsearch: simultaneous exploration of protein sequence space and structure space improves not only protein homology detection but also protein structure prediction

Xuefeng Cui; Zhiwu Lu; Sheng Wang; Jim Jing-Yan Wang; Xin Gao

Motivation: Protein homology detection, a fundamental problem in computational biology, is an indispensable step toward predicting protein structures and understanding protein functions. Despite the advances in recent decades on sequence alignment, threading and alignment-free methods, protein homology detection remains a challenging open problem. Recently, network methods that try to find transitive paths in the protein structure space demonstrate the importance of incorporating network information of the structure space. Yet, current methods merge the sequence space and the structure space into a single space, and thus introduce inconsistency in combining different sources of information.

Method: We present a novel network-based protein homology detection method, CMsearch, based on cross-modal learning. Instead of exploring a single network built from the mixture of sequence and structure space information, CMsearch builds two separate networks to represent the sequence space and the structure space. It then learns sequence–structure correlation by simultaneously taking sequence information, structure information, sequence space information and structure space information into consideration.

Results: We tested CMsearch on two challenging tasks, protein homology detection and protein structure prediction, by querying all 8332 PDB40 proteins. Our results demonstrate that CMsearch is insensitive to the similarity metrics used to define the sequence and the structure spaces. By using HMM–HMM alignment as the sequence similarity metric, CMsearch clearly outperforms state-of-the-art homology detection methods and the CASP-winning template-based protein structure prediction methods.

Availability and implementation: Our program is freely available for download from http://sfb.kaust.edu.sa/Pages/Software.aspx. Contact: [email protected]

Supplementary information: Supplementary data are available at Bioinformatics online.
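The two-network idea can be sketched as follows: keep a sequence-space network and a structure-space network separate, score transitive similarity within each by a truncated random walk, and combine only at ranking time. The random networks and the simple score average are stand-ins; the actual CMsearch learns a sequence-structure correlation rather than averaging.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20  # toy protein set; protein 0 is the query

def toy_net():
    # random symmetric similarity network, standing in for real
    # sequence-alignment or structure-alignment scores
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

A_seq, A_struct = toy_net(), toy_net()

def walk_scores(A, alpha=0.5, steps=3):
    # transitive similarity via a truncated random walk: paths through
    # intermediate proteins contribute with decaying weight alpha^k
    P = A / A.sum(axis=1, keepdims=True)
    S, Pk = np.zeros_like(P), np.eye(n)
    for k in range(1, steps + 1):
        Pk = Pk @ P
        S += alpha ** k * Pk
    return S

# the two spaces stay separate networks; their transitive scores are
# fused only when ranking candidates for the query
fused = 0.5 * walk_scores(A_seq) + 0.5 * walk_scores(A_struct)
ranking = np.argsort(-fused[0])
```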


IEEE Transactions on Geoscience and Remote Sensing | 2017

Zero-Shot Scene Classification for High Spatial Resolution Remote Sensing Images

Aoxue Li; Zhiwu Lu; Liwei Wang; Tao Xiang; Ji-Rong Wen

Due to the rapid technological development of various sensors, a huge volume of high spatial resolution (HSR) image data can now be acquired. How to efficiently recognize the scenes from such HSR image data has become a critical task. Conventional approaches to remote sensing scene classification only utilize information from HSR images. Therefore, they always need a large amount of labeled data and cannot recognize the images from an unseen scene class without any visual sample in the labeled data. To overcome this drawback, we propose a novel approach for recognizing images from unseen scene classes, i.e., zero-shot scene classification (ZSSC). In this approach, we first use the well-known natural language processing model, word2vec, to map names of seen/unseen scene classes to semantic vectors. A semantic-directed graph is then constructed over the semantic vectors for describing the relationships between unseen classes and seen classes. To transfer knowledge from the images in seen classes to those in unseen classes, we make an initial label prediction on test images by an unsupervised domain adaptation model. With the semantic-directed graph and initial prediction, a label-propagation algorithm is then developed for ZSSC. By leveraging the visual similarity among images from the same scene class, a label refinement approach based on sparse learning is used to suppress the noise in the zero-shot classification results. Experimental results show that the proposed approach significantly outperforms the state-of-the-art approaches in ZSSC.
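The knowledge-transfer step can be sketched with a simple convex-combination zero-shot baseline: embed a test image as a score-weighted sum of seen-class vectors, then match unseen classes by cosine similarity. The attribute-style vectors, scene names, and classifier scores below are all hypothetical stand-ins; the paper uses real word2vec embeddings plus graph label propagation and sparse-learning refinement.

```python
import numpy as np

# hypothetical attribute-style semantic vectors standing in for word2vec
# embeddings (dims: vegetation, water, sand, man-made)
sem = {
    "forest": np.array([1., 0., 0., 0.]),
    "beach":  np.array([0., 1., 1., 0.]),
    "harbor": np.array([0., 1., 0., 1.]),
    "desert": np.array([0., 0., 1., 0.]),
    "lake":   np.array([0., 1., 0., 0.]),
}
seen, unseen = ["forest", "beach", "harbor"], ["desert", "lake"]

# synthetic seen-class scores for one test image (e.g. from a trained
# scene classifier); the image looks mostly like a shoreline
scores = {"forest": 0.05, "beach": 0.75, "harbor": 0.20}

# zero-shot step: embed the image as a score-weighted sum of seen-class
# vectors, then pick the closest unseen class by cosine similarity
img_vec = sum(s * sem[c] for c, s in scores.items())

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

pred = max(unseen, key=lambda c: cos(img_vec, sem[c]))
```

Because the water attribute dominates the weighted embedding, the water-only unseen class ends up closest, which is the kind of semantic-space transfer the approach relies on.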


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Large-Scale Sparse Learning From Noisy Tags for Semantic Segmentation

Aoxue Li; Zhiwu Lu; Liwei Wang; Peng Han; Ji-Rong Wen

In this paper, we present a large-scale sparse learning (LSSL) approach to solve the challenging task of semantic segmentation of images with noisy tags. Different from the traditional strongly supervised methods that exploit pixel-level labels for semantic segmentation, we make use of much weaker supervision (i.e., noisy tags of images) and then formulate the task of semantic segmentation as a weakly supervised learning (WSL) problem from the viewpoint of noise reduction of superpixel labels. By learning the data manifolds, we transform the WSL problem into an LSSL problem. Based on nonlinear approximation and dimension reduction techniques, a linear-time-complexity algorithm is developed to solve the LSSL problem efficiently. We further extend the LSSL approach to visual feature refinement for semantic segmentation. The experiments demonstrate that the proposed LSSL approach can achieve promising results in semantic segmentation of images with noisy tags.
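The noise-reduction view of the task can be sketched as a "smooth signal plus sparse error" decomposition of superpixel labels on a graph: alternate graph smoothing of the clean part with soft-thresholding of the sparse noise part. This is a plain proximal-style alternation on a toy chain graph, not the paper's linear-time LSSL solver; the graph, labels, and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50  # toy superpixels on a chain graph (stand-in for an image graph)
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # chain Laplacian
L[0, 0] = L[-1, -1] = 1.0

# ground truth: left half object (+1), right half background (-1),
# observed with a few tag-induced label errors
y_true = np.where(np.arange(n) < 25, 1.0, -1.0)
y = y_true.copy()
y[rng.choice(n, 5, replace=False)] *= -1

# sparse noise model y = f + e: f is smooth on the graph, e is sparse;
# alternate graph smoothing of f with soft-thresholding of e
mu, lam = 2.0, 0.5
f, e = y.copy(), np.zeros(n)
smooth = np.linalg.inv(np.eye(n) + mu * L)
for _ in range(50):
    f = smooth @ (y - e)                         # smooth part
    r = y - f
    e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # sparse noise

pred = np.sign(f)  # denoised superpixel labels
```

The sparse term `e` absorbs the large residuals at mislabeled superpixels, so the smooth part `f` recovers labels consistent with the graph structure.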


International Conference on Neural Information Processing | 2016

Segmentation with Selectively Propagated Constraints

Peng Han; Guangzhen Liu; Songfang Huang; Wenwu Yuan; Zhiwu Lu

This paper presents a novel selective constraint propagation method for constrained image segmentation. In the literature, many pairwise constraint propagation methods have been developed to exploit pairwise constraints for cluster analysis. However, since these methods mostly have polynomial time complexity, they are not well suited to segmenting images even of moderate size, which amounts to cluster analysis with a large data size. In this paper, we thus choose to perform pairwise constraint propagation only over a selected subset of pixels, rather than over the whole image. Such a selective constraint propagation problem is then solved by an efficient graph-based learning algorithm. Finally, the selectively propagated constraints are exploited based on …


International Symposium on Neural Networks | 2017

Graph-boosted convolutional neural networks for semantic segmentation

Guangzhen Liu; Peng Han; Yulei Niu; Wenwu Yuan; Zhiwu Lu; Ji-Rong Wen


Pattern Recognition | 2017

Multi-instance dictionary learning via multivariate performance measure optimization

Jim Jing-Yan Wang; Ivor W. Tsang; Xuefeng Cui; Zhiwu Lu; Xin Gao


Collaboration


Dive into Zhiwu Lu's collaborations.

Top Co-Authors

Ji-Rong Wen
Renmin University of China

Peng Han
Renmin University of China

Tao Xiang
Queen Mary University of London

Xin Gao
King Abdullah University of Science and Technology

Jiechao Guan
Renmin University of China

Yulei Niu
Renmin University of China

An Zhao
Renmin University of China