
Publications


Featured research published by Huaping Liu.


IEEE Transactions on Instrumentation and Measurement | 2016

Object Recognition Using Tactile Measurements: Kernel Sparse Coding Methods

Huaping Liu; Di Guo; Fuchun Sun

Dexterous robots have emerged in the last decade in response to the need for fine-motor-control assistance in applications as diverse as surgery, undersea welding, and mechanical manipulation in space. Crucial to fine operation and contact-based environmental perception are the tactile sensors fixed on the robotic fingertips, which can be used to distinguish material texture, roughness, spatial features, compliance, and friction. In this paper, we regard the investigated tactile data as time sequences, whose dissimilarity can be evaluated by the popular dynamic time warping method. A kernel sparse coding method is therefore developed to address the tactile data representation and classification problem. However, the naive use of sparse coding neglects the intrinsic relation between the individual fingers that simultaneously contact the object. To tackle this problem, we develop a joint kernel sparse coding model to solve the multifinger tactile sequence classification problem. In this model, the intrinsic relations between fingers are explicitly taken into account through joint sparse coding, which encourages all of the coding vectors to share the same sparsity support pattern. The experimental results show that joint sparse coding achieves better performance than conventional sparse coding.
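The shared-support idea at the heart of joint sparse coding can be sketched with an L2,1-regularized coding step solved by proximal gradient descent. This is an illustrative linear-case sketch, not the paper's kernelized, DTW-based implementation; the dictionary `D`, the tactile matrix `X` (one column per finger), and the solver settings are all hypothetical.

```python
import numpy as np

def joint_sparse_coding(D, X, lam=0.5, n_iter=200):
    """Joint sparse coding sketch: min_C 0.5*||X - D C||_F^2 + lam*||C||_{2,1}.
    The L2,1 norm (sum of row norms of C) pushes whole rows of C to zero,
    so every column of X (e.g., one tactile sequence per finger) is coded
    with the SAME subset of dictionary atoms: a shared sparsity support."""
    C = np.zeros((D.shape[1], X.shape[1]))
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        Z = C - step * (D.T @ (D @ C - X))    # gradient step on smooth term
        # row-wise soft thresholding: proximal operator of the L2,1 norm
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        C = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * Z
    return C
```

Because the thresholding acts on entire rows, an atom is either used by all fingers or by none, which is exactly the shared-support coupling the abstract describes.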


IEEE Transactions on Automation Science and Engineering | 2017

Visual–Tactile Fusion for Object Recognition

Huaping Liu; Yuanlong Yu; Fuchun Sun; Jason Gu

The camera provides rich visual information about objects and has become one of the most mainstream sensors in the automation community. However, it is often inapplicable when objects cannot be visually distinguished. Tactile sensors, on the other hand, can capture multiple object properties, such as texture, roughness, spatial features, compliance, and friction, and therefore provide another important modality for perception. Nevertheless, effectively combining the visual and tactile modalities remains a challenging problem. In this paper, we develop a visual–tactile fusion framework for object recognition tasks. We use the multivariate-time-series model to represent the tactile sequence and the covariance descriptor to characterize the image. Further, we design a joint group kernel sparse coding (JGKSC) method to tackle the intrinsically weak pairing problem in visual–tactile data samples. Finally, we develop a visual–tactile dataset composed of 18 household objects for validation. The experimental results show that considering both visual and tactile inputs is beneficial and that the proposed method provides an effective strategy for fusion.
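As a sketch of how the visual modality can be characterized, one common form of covariance descriptor represents an image region by the covariance matrix of per-pixel feature vectors. The abstract does not specify the exact feature set; the features below (position, intensity, gradient magnitudes) are an illustrative choice, not the paper's.

```python
import numpy as np

def covariance_descriptor(img):
    """Describe a grayscale patch by the covariance of per-pixel features
    [x, y, I, |dI/dx|, |dI/dy|]. The descriptor size depends only on the
    feature dimension (here 5x5), not on the patch size, which makes it
    convenient for comparing regions of different sizes."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(img)  # gradients along rows (y) and columns (x)
    F = np.stack([xs.ravel(), ys.ravel(), img.ravel(),
                  np.abs(gx).ravel(), np.abs(gy).ravel()], axis=1)
    return np.cov(F, rowvar=False)  # 5x5 symmetric positive semidefinite
```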


IEEE Transactions on Neural Networks | 2015

Robust Exemplar Extraction Using Structured Sparse Coding

Huaping Liu; Yunhui Liu; Fuchun Sun

Robust exemplar extraction from a noisy sample set is one of the most important problems in pattern recognition. In this brief, we propose a novel approach for exemplar extraction through structured sparse learning. The new model accounts not only for reconstruction capability and sparsity, but also for diversity and robustness. To solve the optimization problem, we adopt the alternating direction method of multipliers (ADMM) to design an iterative algorithm. Finally, the effectiveness of the approach is demonstrated by experiments on various examples, including traffic sign sequences.


IEEE Transactions on Systems, Man, and Cybernetics | 2017

An Efficient Method for Traffic Sign Recognition Based on Extreme Learning Machine

Zhiyong Huang; Yuanlong Yu; Jason Gu; Huaping Liu

This paper proposes a computationally efficient method for traffic sign recognition (TSR). The proposed method consists of two modules: 1) extraction of a histogram-of-oriented-gradient variant (HOGv) feature and 2) a single classifier trained by the extreme learning machine (ELM) algorithm. The HOGv feature keeps a good balance between redundancy and local detail, so it represents distinctive shapes better. The classifier is a single-hidden-layer feedforward network. In the ELM algorithm, the connection between the input and hidden layers realizes a random feature mapping, while only the weights between the hidden and output layers are trained; as a result, layer-by-layer tuning is not required. Meanwhile, the norm of the output weights is included in the cost function, so the ELM-based classifier can achieve an optimal and generalized solution for multiclass TSR while balancing recognition accuracy and computational cost. Three datasets, including the German TSR benchmark dataset, the Belgium traffic sign classification dataset, and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) dataset, are used to evaluate the method. Experimental results show that it achieves not only high recognition accuracy but also extremely high computational efficiency in both training and recognition on all three datasets.
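The ELM training recipe described above (random, fixed input-to-hidden weights; a closed-form regularized least-squares solve for the output weights, with no layer-by-layer tuning) can be sketched as follows. This is a generic ELM sketch, not the paper's HOGv pipeline; the network size and regularization value are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, reg=1e-2, seed=0):
    """Extreme learning machine: input-to-hidden weights are random and
    never trained (random feature mapping); only the hidden-to-output
    weights beta are solved in closed form by ridge regression, which
    also regularizes the norm of the output weights as in the abstract.
    X: (n, d) features, Y: (n, k) one-hot labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random hidden-layer output matrix
    # beta = (H'H + reg*I)^{-1} H'Y : the only "training" step
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predict class indices by the largest output-layer response."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because training reduces to one linear solve, both training and recognition are very cheap, which is the efficiency argument the abstract makes.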


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Extreme Kernel Sparse Learning for Tactile Object Recognition

Huaping Liu; Jie Qin; Fuchun Sun; Di Guo

Tactile sensors play a very important role in robot perception in dynamic or unknown environments. However, tactile object recognition presents great challenges in practical scenarios. In this paper, we address this problem by developing an extreme kernel sparse learning methodology. This method combines the advantages of the extreme learning machine and kernel sparse learning by simultaneously addressing the dictionary learning and classifier design problems. Furthermore, to tackle the intrinsic difficulties introduced by the representer theorem, we develop a reduced kernel dictionary learning method by introducing a row-sparsity constraint. A globally convergent algorithm is developed to solve the optimization problem, and a theoretical proof is provided. Finally, we perform extensive experimental validation on publicly available tactile sequence datasets and show the advantages of the proposed method.


IEEE Transactions on Instrumentation and Measurement | 2017

Robotic Room-Level Localization Using Multiple Sets of Sonar Measurements

Huaping Liu; Fuchun Sun; Bin Fang; Xinyu Zhang

In this paper, we aim to achieve robust and cost-effective room-level localization for an indoor mobile robot. It is unrealistic to obtain precise localization information from sonar sensors because of their sparseness and uncertainty. Our attempts show that room-level localization can nevertheless be achieved by accumulating sonar data to overcome the limitations of sensor performance. To this end, we formulate room-level localization as a joint sparse coding problem, which encourages the coding vectors to share a common room-level sparsity pattern while differing across locations. We systematically evaluate the performance of different coding strategies on the collected sonar measurement dataset.


Science in China Series F: Information Sciences | 2012

Fusion tracking in color and infrared images using joint sparse representation

Huaping Liu; Fuchun Sun

Sparse signal reconstruction has recently gained considerable interest and is applied in many fields. In this paper, a similarity measure induced by joint sparse representation is designed to construct the likelihood function of a particle filter tracker, so that color visual-spectrum and thermal-spectrum images can be fused for object tracking. The proposed fusion scheme performs joint sparse representation on both modalities, and the resultant tracking results are fused using a min operation on the sparse representation coefficients. In addition, a co-learning approach is proposed to update the reference templates of both modalities and enhance tracking robustness. The proposed fusion scheme outperforms state-of-the-art approaches, and its effectiveness is verified using the OTCBVS database.


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Structured Output-Associated Dictionary Learning for Haptic Understanding

Huaping Liu; Fuchun Sun; Di Guo; Bin Fang; Zhengchun Peng

Haptic sensing and feedback play extremely important roles in how humans and robots perceive, understand, and manipulate the world. Since many properties perceived by haptic sensors can be characterized by adjectives, it is reasonable to develop a set of haptic adjectives for haptic understanding, which formulates haptic understanding as a multilabel classification problem. In this paper, we exploit the intrinsic relation between different adjective labels and develop a novel dictionary learning method that is improved by introducing structured output association information. This method makes use of label correlation information and is better suited to the multilabel haptic understanding task. In addition, we develop two iterative algorithms to solve the dictionary learning and classifier design problems, respectively. Finally, we perform extensive experimental validation on the publicly available haptic sequence dataset Penn Haptic Adjective Corpus 2 and show the advantages of the proposed method.


IEEE Transactions on Industrial Informatics | 2014

Spatial Neighborhood-Constrained Linear Coding for Visual Object Tracking

Huaping Liu; Mingyi Yuan; Fuchun Sun; Jianwei Zhang

In this paper, a new spatial neighborhood-constrained linear coding strategy that realizes sparse representation is proposed for visual object tracking. Unlike conventional sparse and locality-constrained linear coding approaches, which need an extra post-processing stage to incorporate spatial layout information, the proposed coding strategy intrinsically embeds the spatial layout information into the coding stage. It can also be used to effectively realize joint sparse representation over different feature descriptors. In addition, based on the distance to the "ideal point" in the reconstruction-error space, a new multicue integration approach for robust tracking is proposed, and a co-learning approach is developed to update the dictionaries. Finally, the proposed tracking algorithm is compared with other state-of-the-art trackers on challenging video sequences and shows promising results.


IEEE Transactions on Industrial Informatics | 2014

Diversified Key-Frame Selection Using Structured L2,1 Optimization

Huaping Liu; Yunhui Liu; Yuanlong Yu; Fuchun Sun

In this paper, a structured L2,1 optimization model that simultaneously characterizes reconstruction capability and diversity is proposed to provide a semantically meaningful representation of a short video clip acquired from digital cameras or a mobile robot. In this model, a mutual-inhibition penalty term is imposed to prevent similar samples from being selected simultaneously. The model is highly flexible in incorporating different mutual-inhibition terms, and the temporal redundancy in video is exploited to encourage diversity. The constructed objective function is nonconvex, and an iterative algorithm is developed to solve the optimization problem. The performance is evaluated on various video clips from YouTube and on practical video captured by an indoor mobile robot. The results clearly indicate that the proposed strategy achieves more diversified key frames than existing methods.
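The reconstruction side of this model can be sketched with a plain L2,1 self-representation step: frames whose coefficient rows have large norms reconstruct the rest of the clip well and serve as key-frame candidates. This sketch omits the paper's mutual-inhibition penalty and its nonconvex structure; the feature matrix `X` (one column per frame) and the solver settings are illustrative.

```python
import numpy as np

def select_key_frames(X, k, lam=1.0, n_iter=300):
    """Key-frame candidate selection by L2,1 self-representation:
    min_C 0.5*||X - X C||_F^2 + lam*||C||_{2,1},
    solved by proximal gradient descent; then pick the k frames whose
    coefficient rows have the largest norms, i.e., the frames that
    contribute most to reconstructing the whole clip."""
    n = X.shape[1]
    C = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(X.T @ X, 2) + 1e-12)
    for _ in range(n_iter):
        Z = C - step * (X.T @ (X @ C - X))   # gradient step
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        # row-wise shrinkage: zeroed rows correspond to unselected frames
        C = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * Z
    row_norms = np.linalg.norm(C, axis=1)
    return np.argsort(-row_norms)[:k]
```

Adding a mutual-inhibition term, as the paper does, would further penalize pairs of similar frames being active at once, trading some reconstruction quality for diversity.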

Collaboration


An overview of Huaping Liu's collaborations.

Top Co-Authors

Di Guo (Tsinghua University)

Jason Gu (Dalhousie University)