Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhenbao Liu is active.

Publication


Featured research published by Zhenbao Liu.


IEEE Transactions on Geoscience and Remote Sensing | 2015

Effective and Efficient Midlevel Visual Elements-Oriented Land-Use Classification Using VHR Remote Sensing Images

Gong Cheng; Junwei Han; Lei Guo; Zhenbao Liu; Shuhui Bu; Jinchang Ren

Land-use classification using remote sensing images covers a wide range of applications. With the more detailed spatial and textural information provided in very high resolution (VHR) remote sensing images, a greater range of objects and spatial patterns can be observed than ever before. This offers a new opportunity for advancing the performance of land-use classification. In this paper, we first introduce an effective midlevel visual elements-oriented land-use classification method based on “partlets,” which are a library of pretrained part detectors used for midlevel visual element discovery. By taking advantage of midlevel visual elements rather than low-level image features, the partlets-based method represents images by computing their responses to a large number of part detectors. As the number of part detectors grows, the main obstacle to the broader application of this method is its computational cost. To address this problem, we next propose a novel framework to train coarse-to-fine shared intermediate representations, termed “sparselets,” from a large number of pretrained part detectors. This is achieved by building a single-hidden-layer autoencoder and a single-hidden-layer neural network with an L0-norm sparsity constraint, respectively. Comprehensive evaluations on a publicly available 21-class VHR land-use data set and comparisons with state-of-the-art approaches demonstrate the effectiveness and superiority of the proposed methods.
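The sparselet idea above — approximating many pretrained part detectors as sparse combinations of a small shared dictionary under a hard L0 constraint — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the alternating hard-thresholding / least-squares scheme, the function names, and all parameter values are assumptions.

```python
import numpy as np

def _sparse_code(W, D, k):
    """k-sparse codes via projection onto D plus hard thresholding (L0 constraint)."""
    A = W @ D.T
    drop = np.argsort(np.abs(A), axis=1)[:, :-k]   # indices of all but the top-k coefficients
    np.put_along_axis(A, drop, 0.0, axis=1)
    return A

def learn_sparselets(W, n_sparselets=32, k=4, n_iter=30, seed=0):
    """Sketch of shared intermediate representations ("sparselets"):
    approximate each part-detector weight vector (a row of W, shape n x d)
    as a k-sparse combination of a small shared dictionary D (n_sparselets x d)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n_sparselets, W.shape[1]))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(n_iter):
        A = _sparse_code(W, D, k)                    # sparse coding step
        D = np.linalg.lstsq(A, W, rcond=None)[0]     # dictionary update step
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-12
    return D, _sparse_code(W, D, k)
```

With the learned dictionary, an image's responses to all n detectors can be recovered from its responses to only n_sparselets shared filters, which is where the computational saving comes from.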


IEEE Transactions on Multimedia | 2014

Learning High-Level Feature by Deep Belief Networks for 3-D Model Retrieval and Recognition

Shuhui Bu; Zhenbao Liu; Junwei Han; Jun Wu; Rongrong Ji

3-D shape analysis has attracted extensive research efforts in recent years, where the major challenge lies in designing an effective high-level 3-D shape feature. In this paper, we propose a multi-level 3-D shape feature extraction framework by using deep learning. The low-level 3-D shape descriptors are first encoded into geometric bag-of-words, from which middle-level patterns are discovered to explore geometric relationships among words. After that, high-level shape features are learned via deep belief networks, which are more discriminative for the tasks of shape classification and retrieval. Experiments on 3-D shape recognition and retrieval demonstrate the superior performance of the proposed method in comparison to the state-of-the-art methods.


Journal of Computer Science and Technology | 2013

A Survey on Partial Retrieval of 3D Shapes

Zhenbao Liu; Shuhui Bu; Kun Zhou; Shuming Gao; Junwei Han; Jun Wu

Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D modeling, object recognition, and 3D content classification. Recently, more and more researchers have attempted to solve the problems of partial retrieval in the domains of computer graphics, vision, CAD, and multimedia. Unfortunately, there is little comprehensive discussion in the literature on the state-of-the-art methods of partial shape retrieval. In this article we review the partial shape retrieval methods of the last decade to help novices grasp the latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Second, we classify the existing partial shape retrieval methods into three classes by several criteria, describe the main ideas and techniques for each class, and compare their advantages and limitations in detail. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions to address partial shape retrieval.


IEEE Geoscience and Remote Sensing Letters | 2015

Weakly Supervised Learning for Target Detection in Remote Sensing Images

Dingwen Zhang; Junwei Han; Gong Cheng; Zhenbao Liu; Shuhui Bu; Lei Guo

In this letter, we develop a novel framework that leverages weakly supervised learning techniques to efficiently detect targets in remote sensing images, which enables us to reduce the tedious manual annotation for collecting training data while maintaining the detection accuracy to a large extent. The proposed framework consists of a weakly supervised training procedure to yield the detectors and an effective scheme to detect targets in testing images. Comprehensive evaluations on three benchmarks with different spatial resolutions and different types of targets, as well as comparisons with traditional supervised learning schemes, demonstrate the efficiency and effectiveness of the proposed framework.


Multidimensional Systems and Signal Processing | 2016

Weakly supervised target detection in remote sensing images based on transferred deep features and negative bootstrapping

Peicheng Zhou; Gong Cheng; Zhenbao Liu; Shuhui Bu; Xintao Hu

Target detection in remote sensing images (RSIs) is a fundamental yet challenging problem in remote sensing image analysis. Recently, weakly supervised learning, in which training sets require only binary labels indicating whether an image contains the object or not, has attracted considerable attention owing to obvious advantages such as alleviating the tedious and time-consuming work of human annotation. Inspired by its impressive success in the computer vision field, in this paper we propose a novel and effective framework for weakly supervised target detection in RSIs based on transferred deep features and negative bootstrapping. On one hand, to effectively mine information from RSIs and improve the performance of target detection, we develop a transferred deep model to extract high-level features from RSIs, which is achieved by pre-training a convolutional neural network model on a large-scale annotated dataset (e.g. ImageNet) and then transferring it to our task by domain-specifically fine-tuning it on RSI datasets. On the other hand, we integrate a negative bootstrapping scheme into the detector training process to make the detector converge more stably and faster by exploiting the most discriminative training samples. Comprehensive evaluations on three RSI datasets and comparisons with state-of-the-art weakly supervised target detection approaches demonstrate the effectiveness and superiority of the proposed method.
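The negative bootstrapping scheme described above — iteratively growing the negative training set with the pool samples the current detector is most confident about — can be sketched in a few lines. This is a toy stand-in, not the paper's method: it assumes the transferred deep features are already extracted into arrays, and it substitutes a plain logistic-regression detector trained by gradient descent; all names and parameters are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_negative_bootstrapping(pos, neg_pool, n_rounds=3, n_hard=20,
                                      lr=0.1, n_epochs=200, seed=0):
    """Train a linear detector on positive feature vectors, growing the
    negative set each round with the pool negatives the current detector
    scores highest (the "hardest" negatives)."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(pos.shape[1]), 0.0
    # Start from a small random subset of the negative pool.
    neg = neg_pool[rng.choice(len(neg_pool), n_hard, replace=False)]
    for _ in range(n_rounds):
        X = np.vstack([pos, neg])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
        for _ in range(n_epochs):        # plain logistic-regression gradient descent
            g = sigmoid(X @ w + b) - y
            w -= lr * (X.T @ g) / len(y)
            b -= lr * g.mean()
        # Mine the pool negatives the detector is currently most confident about.
        scores = sigmoid(neg_pool @ w + b)
        neg = np.vstack([neg, neg_pool[np.argsort(scores)[-n_hard:]]])
    return w, b
```

The design intuition matches the abstract: feeding the detector its own highest-scoring negatives focuses each round on the most discriminative samples, which tends to stabilize and speed up convergence compared with random negative sampling.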


Neurocomputing | 2015

A coarse-to-fine model for airport detection from remote sensing images using target-oriented visual saliency and CRF

Xiwen Yao; Junwei Han; Lei Guo; Shuhui Bu; Zhenbao Liu

This paper presents a novel computational model to detect airports in optical remote sensing images (RSIs). It works in a hierarchical architecture with a coarse layer and a fine layer. At the coarse layer, a target-oriented saliency model is built by combining the cues of contrast and line density to rapidly localize candidate airport areas. At the fine layer, a learned conditional random field (CRF) model is applied to each candidate area to perform fine detection of the airport target. The CRF model is learned from sparse features of local patches in a multi-scale structure and also takes the contextual information of the target into consideration. Therefore, its detection is more accurate and is robust to target scale variation. Comprehensive evaluations on an RSI database from Google Earth and comparisons with state-of-the-art approaches demonstrate the effectiveness of the proposed model.
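The coarse layer's idea — that airports stand out where local contrast AND line density are both high — can be illustrated with a simple sliding-window saliency map. This is a minimal sketch of that cue combination, not the paper's saliency model: the window size, normalization, and multiplicative fusion are assumptions, and the binary line/edge map is taken as given.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def airport_saliency(gray, edges, win=15):
    """Combine local intensity contrast with edge (line) density over a
    sliding window; both cue maps are min-max normalized and multiplied,
    so only regions scoring high on BOTH cues stay salient.
    `gray` is a 2-D intensity image, `edges` a binary line/edge map."""
    pad = win // 2
    g = np.pad(gray.astype(float), pad, mode="edge")
    e = np.pad(edges.astype(float), pad, mode="edge")
    gw = sliding_window_view(g, (win, win))
    ew = sliding_window_view(e, (win, win))
    contrast = gw.std(axis=(2, 3))    # local intensity variation
    density = ew.mean(axis=(2, 3))    # fraction of edge pixels in the window

    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    return norm(contrast) * norm(density)
```

Thresholding such a map would yield the candidate areas that the fine CRF layer then examines.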


Computers & Graphics | 2013

SMI 2013: New evaluation metrics for mesh segmentation

Zhenbao Liu; Sicong Tang; Shuhui Bu; Hao Zhang

3D model segmentation benefits skeleton extraction, shape partial matching, shape correspondence, texture mapping, shape deformation, and shape annotation, and many excellent solutions have been proposed in the last decade. How to efficiently evaluate these methods and impartially compare their performance are important issues. Since the Princeton segmentation benchmark was proposed, its four representative metrics have been extensively adopted to evaluate segmentation algorithms. However, comparison against only a fixed ground truth is problematic because objects admit many semantic segmentations; hence, we propose two novel metrics that support comparison with multiple ground-truth segmentations, named Similarity Hamming Distance (SHD) and Adaptive Entropy Increment (AEI). SHD is based on partial similarity correspondences between an automatic segmentation and the ground-truth segmentations, and AEI measures the entropy change when an automatic segmentation is added to a set of different ground-truth segmentations. A group of experiments demonstrates that the metrics provide relatively higher discriminative power and stability when evaluating different hierarchical segmentations, and also provide an evaluation more consistent with human perception.
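For context, the Princeton-style Hamming distance that SHD builds on can be sketched in a few lines, extended to multiple ground truths by reporting the best match. This is a simplified baseline for illustration only: the paper's SHD adds partial similarity correspondences, which this sketch omits, and the per-face label-array representation is an assumption.

```python
import numpy as np

def directional_hamming(seg_a, seg_b):
    """For each segment of seg_a, count the elements not covered by its
    best-overlapping segment in seg_b (labels are per-face integer arrays)."""
    total = 0
    for s in np.unique(seg_a):
        mask = seg_a == s
        # Best-overlap correspondence: the seg_b segment covering most of s.
        _, counts = np.unique(seg_b[mask], return_counts=True)
        total += mask.sum() - counts.max()
    return total

def multi_gt_hamming(seg, ground_truths):
    """Symmetric Hamming distance of `seg` against each ground-truth
    segmentation, normalized by element count; report the best match,
    so a segmentation matching ANY plausible ground truth scores well."""
    n = len(seg)
    dists = [(directional_hamming(seg, gt) + directional_hamming(gt, seg)) / (2 * n)
             for gt in ground_truths]
    return min(dists)
```

Taking the minimum over ground truths is the simplest way to avoid penalizing a segmentation for disagreeing with one arbitrary reference, which is exactly the weakness of fixed-ground-truth evaluation that the abstract points out.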


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Template Deformation-Based 3-D Reconstruction of Full Human Body Scans From Low-Cost Depth Cameras

Zhenbao Liu; Jinxin Huang; Shuhui Bu; Junwei Han; Xiaojun Tang; Xuelong Li

Full human body shape scans provide valuable data for a variety of applications including anthropometric surveying, clothing design, human-factors engineering, health, and entertainment. However, the high price, large volume, and difficulty of operating professional 3-D scanners preclude their use in home entertainment. Recently, portable low-cost red-green-blue-depth cameras such as the Kinect have become popular for computer vision tasks. However, the infrared mechanism of this type of camera leads to noisy and incomplete depth images. We construct a stereo full-body scanning environment composed of multiple depth cameras and propose a novel registration algorithm. Our algorithm determines a segment-constrained correspondence for two neighboring views, integrating them using a rigid transformation. Furthermore, it aligns all of the views based on uniform error distribution. The generated 3-D mesh model is typically sparse, noisy, and even has holes, which makes it lose surface details. To address this, we introduce a geometric and topological fitting prior in the form of a professionally designed high-resolution template model. We formulate a template deformation optimization problem to fit the high-resolution model to the low-quality scan. Its solution overcomes the obstacles posed by different poses, varying body details, and surface noise. The entire process is free of body and template markers, fully automatic, and achieves satisfactory reconstruction results.


Pattern Recognition | 2016

Scene parsing using inference Embedded Deep Networks

Shuhui Bu; Pengcheng Han; Zhenbao Liu; Junwei Han

Effective features and a suitable graphical model are two keys to high-performance scene parsing. Recently, Convolutional Neural Networks (CNNs) have shown a great ability to learn features and have attained remarkable performance. However, most studies use CNNs and graphical models separately and do not exploit the full advantages of both methods. To achieve better performance, this work designs a novel neural network architecture called Inference Embedded Deep Networks (IEDNs), which incorporates a newly designed inference layer based on a graphical model. Through the IEDNs, the network can learn hybrid features, which not only provide a powerful representation capturing hierarchical information but also encapsulate spatial relationships among adjacent objects. We apply the proposed networks to scene labeling, and several experiments are conducted on the SIFT Flow and PASCAL VOC datasets. The results demonstrate that the proposed IEDNs achieve better performance.

Highlights:
- We design a novel network structure that treats the CRF model as one type of layer in a deep neural network.
- Because the CRF is a layer of the network, structural learning can be conducted explicitly.
- A novel feature encoding the spatial relationships between objects in images is proposed.
- Feature fusion is adopted to learn intrinsic non-linear relationships between hierarchical and spatial features.


IEEE Transactions on Instrumentation and Measurement | 2017

Particle Learning Framework for Estimating the Remaining Useful Life of Lithium-Ion Batteries

Zhenbao Liu; Gaoyuan Sun; Shuhui Bu; Junwei Han; Xiaojun Tang; Michael Pecht

As an important part of prognostics and health management, accurate remaining useful life (RUL) prediction for lithium (Li)-ion batteries can provide a helpful reference for when to maintain batteries in advance. This paper presents a novel method to predict the RUL of Li-ion batteries, based on a framework of improved particle learning (PL). The PL framework prevents particle degeneracy by first resampling state particles in light of the current measurement and then propagating them. Meanwhile, PL is improved by adaptively adjusting the number of particles at each iteration to reduce the running time of the algorithm, which makes it suitable for online application. Furthermore, a kernel smoothing algorithm is fused into PL to keep the variance of parameter particles invariant during recursive propagation with the battery prediction model. The entire method is referred to as PLKS in this paper. The model can then be updated by the proposed method when new measurements are obtained. Future capacities are iteratively predicted with the updated prediction model until the predefined threshold value is triggered, and the RUL is calculated from these predicted capacities and the threshold value. A series of case studies demonstrating the proposed method is presented in the experiments.
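The core loop — resample parameter particles against each new capacity measurement, then extrapolate the fitted degradation model until it crosses the failure threshold — can be illustrated with a toy particle filter. This is not the paper's PLKS method: it assumes a simple exponential capacity-fade model C_k = a·exp(-b·k), uses plain multinomial resampling with a small jitter instead of kernel smoothing, and keeps the particle count fixed; all parameter values are assumptions.

```python
import numpy as np

def predict_rul(capacities, threshold, n_particles=500, horizon=500, seed=0):
    """Toy particle-filter RUL sketch. Each particle carries parameters
    (a, b) of an assumed fade model C_k = a * exp(-b * k); particles are
    resampled by likelihood at each observed cycle, then extrapolated
    forward until predicted capacity drops below `threshold`."""
    rng = np.random.default_rng(seed)
    a = rng.normal(capacities[0], 0.05, n_particles)   # initial capacity guess
    b = rng.uniform(0.0, 0.02, n_particles)            # fade-rate prior
    sigma = 0.02                                       # assumed measurement noise std
    for k, c in enumerate(capacities):
        pred = a * np.exp(-b * k)
        w = np.exp(-0.5 * ((c - pred) / sigma) ** 2) + 1e-300
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        a, b = a[idx], b[idx]
        # Small jitter keeps resampled parameter particles diverse.
        a = a + rng.normal(0, 1e-3, n_particles)
        b = b + rng.normal(0, 1e-4, n_particles)
    # Extrapolate each particle until its predicted capacity hits the threshold.
    k_last = len(capacities) - 1
    ruls = []
    for ai, bi in zip(a, b):
        k = k_last
        while ai * np.exp(-bi * k) > threshold and k < k_last + horizon:
            k += 1
        ruls.append(k - k_last)
    return float(np.median(ruls))
```

The jitter step is a crude stand-in for the paper's kernel smoothing: without it, repeated resampling would collapse all particles onto a few parameter values and the RUL distribution would degenerate.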

Collaboration


Dive into Zhenbao Liu's collaborations.

Top Co-Authors

- Shuhui Bu (Northwestern Polytechnical University)
- Junwei Han (Northwestern Polytechnical University)
- Chao Zhang (Northwestern Polytechnical University)
- Xiaojun Tang (Northwestern Polytechnical University)
- Jun Wu (Northwestern Polytechnical University)
- Gong Cheng (Northwestern Polytechnical University)
- Pengcheng Han (Northwestern Polytechnical University)
- Jinxin Huang (Northwestern Polytechnical University)
- Sicong Tang (Northwestern Polytechnical University)