Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhenguang Liu is active.

Publication


Featured research published by Zhenguang Liu.


IEEE Transactions on Multimedia | 2017

Fusion of Magnetic and Visual Sensors for Indoor Localization: Infrastructure-Free and More Effective

Zhenguang Liu; Luming Zhang; Qi Liu; Yifang Yin; Li Cheng; Roger Zimmermann

Accurate and infrastructure-free indoor positioning can be very useful in a variety of applications. However, most existing approaches (e.g., WiFi and infrared-based methods) for indoor localization heavily rely on infrastructure, which is neither scalable nor pervasively available. In this paper, we propose a novel indoor localization and tracking approach, termed VMag, that does not require any infrastructure assistance. The user can be localized while simply holding a smartphone. To the best of our knowledge, the proposed method is the first exploration of fusing geomagnetic and visual sensing for indoor localization. More specifically, we conduct an in-depth study on both the advantageous properties and the challenges of leveraging the geomagnetic field and visual images for indoor localization. Based on these studies, we design a context-aware particle filtering framework to track the user with the goal of maximizing the positioning accuracy. We also introduce a neural-network-based method to extract deep features for the purpose of indoor positioning. We have conducted extensive experiments in four different indoor settings: a laboratory, a garage, a canteen, and an office building. Experimental results demonstrate the superior performance of VMag over the state of the art in these four indoor settings.
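The abstract gives only the outline of the tracking framework, but the fusion idea it describes (propagate particles with a motion estimate, reweight them by both the magnetic and the visual observation, resample when weights degenerate) can be sketched roughly as below. The magnetic map, visual likelihood, and all parameters here are toy stand-ins for illustration, not VMag's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, mag_obs, vis_obs,
                         mag_map, vis_likelihood, motion_noise=0.1):
    """One update of a particle filter that fuses magnetic and visual cues."""
    # 1. Propagate particles with the (noisy) motion estimate, e.g. from step counting.
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)
    # 2. Weight by agreement of the magnetic fingerprint map with the observed reading.
    mag_w = np.exp(-0.5 * (mag_map(particles) - mag_obs) ** 2)
    # 3. Weight by a visual likelihood (e.g. similarity of deep image features).
    vis_w = vis_likelihood(particles, vis_obs)
    weights = weights * mag_w * vis_w
    weights = weights / weights.sum()
    # 4. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Toy 1-D corridor: the magnetic field is a known function of position,
# and the "visual" cue is a Gaussian around the true position 5.5.
mag_map = lambda p: np.sin(p)
vis_likelihood = lambda p, obs: np.exp(-0.5 * (p - obs) ** 2)
particles = rng.uniform(0.0, 10.0, 200)
weights = np.full(200, 1.0 / 200)
particles, weights = particle_filter_step(particles, weights, 0.5,
                                          np.sin(5.5), 5.5, mag_map, vis_likelihood)
estimate = np.sum(particles * weights)
```

The combined weighting is what makes the fusion "context-aware" in spirit: a magnetic match alone is ambiguous (many positions share a field value), and the visual term disambiguates it.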


Multimedia Tools and Applications | 2017

Behavior pattern clustering in blockchain networks

Butian Huang; Zhenguang Liu; Jianhai Chen; Anan Liu; Qi Liu; Qinming He

Blockchain holds promise as a revolutionary technology with potential applications in numerous fields such as digital money, clearing, gambling, and product tracing. However, blockchain faces its own problems and challenges. One key problem is to automatically cluster the behavior patterns of all the blockchain nodes into categories. In this paper, we introduce the problem of behavior pattern clustering in blockchain networks and propose a novel algorithm termed BPC for this problem. We evaluate a long list of potential sequence similarity measures and select a distance that is suitable for the behavior pattern clustering problem. Extensive experiments show that our proposed algorithm is much more effective than the existing methods in terms of clustering accuracy.
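The abstract does not say which sequence distance BPC finally selects, so as an illustration only, the sketch below clusters node behavior sequences with a plain Levenshtein distance and a naive k-medoids loop. This is the general shape of sequence-based behavior clustering, not the actual BPC algorithm.

```python
def edit_distance(a, b):
    """Levenshtein distance between two behavior sequences (single-row DP)."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            # prev holds the diagonal (old d[j-1]); d[j-1] is the new row's value.
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
    return d[n]

def k_medoids(seqs, k, iters=10):
    """Naive k-medoids over sequences, using the precomputed edit distances."""
    dist = [[edit_distance(s, t) for t in seqs] for s in seqs]
    medoids = list(range(k))  # crude initialization: first k sequences
    for _ in range(iters):
        clusters = [min(range(k), key=lambda c: dist[i][medoids[c]])
                    for i in range(len(seqs))]
        new = []
        for c in range(k):
            members = [i for i, ci in enumerate(clusters) if ci == c]
            # New medoid: the member minimizing total distance to its cluster.
            new.append(min(members, key=lambda i: sum(dist[i][j] for j in members)))
        if new == medoids:
            break
        medoids = new
    return clusters

# Two obvious behavior groups, e.g. heavy senders ('s') vs. heavy receivers ('r').
seqs = ["sssss", "sssst", "ssstt", "rrrrr", "rrrrt", "rrtrr"]
labels = k_medoids(seqs, 2)
```

A production version would need a smarter initialization and a distance chosen for the data at hand, which is exactly the selection problem the paper studies.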


Information Sciences | 2018

Perceptual multi-channel visual feature fusion for scene categorization

Xiao Sun; Zhenguang Liu; Yuxing Hu; Luming Zhang; Roger Zimmermann

Effectively recognizing sceneries from a variety of categories is an indispensable but challenging technique in computer vision and intelligent systems. In this work, we propose a novel image kernel based on human gaze shifting, aiming at discovering the mechanism by which humans perceive visually/semantically salient regions within a scenery. More specifically, we first design a weakly supervised embedding algorithm which projects the local image features (i.e., graphlets in this work) onto a pre-defined semantic space. Thereby, we describe each graphlet by multiple visual features at both low and high levels. It is generally acknowledged that humans attend to only a few regions within a scenery. Thus we formulate a sparsity-constrained graphlet ranking algorithm which incorporates visual clues at both the low level and the high level. According to human visual perception, these top-ranked graphlets are either visually or semantically salient. We sequentially connect them into a path which mimics human gaze shifting. Lastly, a so-called gaze shifting kernel (GSK) is calculated based on the learned paths from a collection of scene images, and a kernel SVM is employed for predicting the scene categories. Comprehensive experiments on a series of well-known scene image sets show the competitiveness and robustness of our GSK. We also demonstrate the high consistency of the predicted paths with real human gaze shifting paths.
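The "rank salient regions, connect them into a path, compare paths with a kernel" pipeline can be caricatured as follows. The greedy top-k selection and the RBF comparison below are crude stand-ins for the paper's sparsity-constrained graphlet ranking and its learned kernel; the region centers and saliency scores are made-up inputs.

```python
import numpy as np

def gaze_path(centers, saliency, k=4):
    """Select the top-k salient regions and connect them into a gaze-shift path,
    ordered greedily by spatial proximity (a crude stand-in for the paper's
    sparsity-constrained graphlet ranking)."""
    top = np.argsort(saliency)[::-1][:k]        # most salient regions first
    path = [top[0]]
    remaining = list(top[1:])
    while remaining:                            # greedy nearest-neighbour ordering
        last = centers[path[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(centers[i] - last))
        path.append(nxt)
        remaining.remove(nxt)
    return path

def gaze_shift_kernel(path_a, path_b, centers_a, centers_b, gamma=0.1):
    """RBF kernel between two equal-length paths via their flattened coordinates."""
    fa = centers_a[path_a].ravel()
    fb = centers_b[path_b].ravel()
    return float(np.exp(-gamma * np.sum((fa - fb) ** 2)))

centers = np.array([[0., 0.], [1., 0.], [5., 5.], [1., 1.], [9., 9.]])
saliency = np.array([0.9, 0.2, 0.8, 0.7, 0.1])
path = gaze_path(centers, saliency, k=3)
```

Once such a kernel matrix is computed over a set of images, it can be fed directly to any kernel SVM implementation for classification.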


International Conference on Multimedia and Expo | 2017

Geographic information use in weakly-supervised deep learning for landmark recognition

Yifang Yin; Zhenguang Liu; Roger Zimmermann

The successful deep convolutional neural networks for visual object recognition typically rely on a massive number of training images that are well annotated with class labels or object bounding boxes at great human effort. Here we explore the use of geographic metadata, which are automatically retrieved from sensors such as GPS and compass, in weakly-supervised learning techniques for landmark recognition. The visibility of a landmark in a frame can be calculated based on the camera's field-of-view and the landmark's geometric information such as location and height. Subsequently, a training dataset is generated as the union of the frames with the presence of at least one target landmark. To reduce the impact of the intrinsic noise in the geo-metadata, we present a frame selection method that removes mistakenly labeled frames with a two-step approach consisting of (1) Gaussian Mixture Model clustering based on camera location, followed by (2) outlier removal based on visual consistency. We compare the classification results obtained from the ground truth labels and the noisy labels derived from the raw geo-metadata. Experiments show that training based on the raw geo-metadata achieves a Mean Average Precision (MAP) of 0.797. Moreover, by applying our proposed representative frame selection method, the MAP can be further improved by 6.4%, which indicates the promising use of geo-metadata in weakly-supervised learning techniques.
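The visibility test described in the abstract can be approximated in two dimensions as below: a landmark is considered visible if its compass bearing from the camera falls within half the field-of-view angle of the camera heading, and it is within range. This sketch ignores landmark height (which the paper also uses), and all numbers are illustrative.

```python
import math

def landmark_visible(cam_pos, heading_deg, fov_deg, max_range, landmark_pos):
    """Decide whether a landmark falls inside the camera's field of view,
    using only the geo-metadata (position and compass heading)."""
    dx = landmark_pos[0] - cam_pos[0]   # east offset
    dy = landmark_pos[1] - cam_pos[1]   # north offset
    dist = math.hypot(dx, dy)
    if dist > max_range or dist == 0.0:
        return False
    # Bearing from the camera to the landmark, measured like a compass heading
    # (degrees clockwise from north, i.e. from the +y axis).
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    # Smallest angular difference between the heading and the bearing.
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

# Camera at the origin facing north with a 60-degree field of view.
visible = landmark_visible((0, 0), 0.0, 60.0, 100.0, (10, 50))  # slightly east of north
hidden = landmark_visible((0, 0), 0.0, 60.0, 100.0, (50, -10))  # off to the south-east
```

Frames passing this test for at least one target landmark would then form the weakly labeled training pool that the paper's two-step selection method cleans up.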


ACM Multimedia | 2017

Multiview and Multimodal Pervasive Indoor Localization

Zhenguang Liu; Li Cheng; Anan Liu; Luming Zhang; Xiangnan He; Roger Zimmermann

Pervasive indoor localization (PIL) aims to locate an indoor mobile-phone user without any infrastructure assistance. Conventional PIL approaches employ a single probe (i.e., target) measurement to localize by identifying its best match out of a fingerprint gallery. However, a single measurement usually captures limited and inadequate location features. More importantly, the reliance on a single measurement bears the inherent risk of being inaccurate and unreliable, due to the fact that the measurement could be noisy and even corrupted. In this paper, we address the deficiency of using a single measurement by proposing the original idea of localization based on multi-view and multi-modal measurements. Specifically, a location is represented as a multi-view graph (MVG), which captures both local features and global contexts. We then formulate the location retrieval problem into an MVG matching problem. In MVG matching, a collaborative-reconstruction based measure is proposed to evaluate the node/edge similarity between two MVGs, which can explicitly address noisy measurements or outliers. Extensive experiments have been conducted on three different types of buildings with a total area of 18,719 m^2. We show that even with 30% noisy measurements or outliers, our method is able to achieve a promising accuracy of 1 meter. As another contribution, we construct a benchmark dataset for the PIL task and make it publicly available, which to our knowledge, is the first public dataset that is tailored for multi-view multi-modal indoor localization and contains both magnetic and visual signals.


International Symposium on Multimedia | 2016

Laplacian Sparse Coding of Scenes for Video Classification

Yifang Yin; Zhenguang Liu; Satyam; Roger Zimmermann

The challenging task of dynamic scene classification in unconstrained videos has drawn much research attention in recent years. Most existing work has focused on extracting local descriptors from spatiotemporal interest points or subregions, followed by feature aggregation with advanced coding techniques. In this study, we analyse the effectiveness of global image descriptors and propose a novel Laplacian Sparse Coding of Scenes (LSCoS) method for video categorization. Previous methods neglect the semantic relationship among the visual scenes in the dictionary, resulting in different representations for videos with similar content. Intuitively, the coefficients assigned to the visual scenes of the same class should be promoted or demoted simultaneously for consistency. To build upon the above ideas, we construct a Laplacian matrix by exploiting the connections between the representative scenes from each class, and formulate the objective function with L1 and Laplacian regularizers to generate more robust, semantically consistent sparse codes. Comprehensive experiments have been conducted on two public dynamic scene recognition datasets, namely Maryland and YUPENN. Experimental results demonstrate the effectiveness of our proposed approach, as our solution achieves state-of-the-art classification rates and improves the accuracy by 2.86% to 16.93% compared with the existing methods.
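An objective of the kind described, reconstruction error plus an L1 sparsity term and a Laplacian consistency term, min_c ||x - Dc||^2 + lam*||c||_1 + beta*c^T L c, can be minimized with a standard ISTA-style proximal gradient loop. The solver below is a generic sketch under that assumption; the dictionary, Laplacian, and weights are toy choices, not the paper's learned quantities.

```python
import numpy as np

def laplacian_sparse_code(x, D, L, lam=0.1, beta=0.1, steps=200):
    """ISTA-style solver for  min_c ||x - Dc||^2 + lam*||c||_1 + beta*c^T L c,
    where L is a graph Laplacian tying together dictionary atoms of one class."""
    n = D.shape[1]
    c = np.zeros(n)
    # Step size from the Lipschitz constant of the smooth part of the objective.
    t = 1.0 / (2 * np.linalg.norm(D.T @ D + beta * L, 2))
    for _ in range(steps):
        grad = 2 * D.T @ (D @ c - x) + 2 * beta * (L @ c)
        z = c - t * grad
        c = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft thresholding
    return c

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 4))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
# Laplacian tying atoms 0 and 1 together (same class), and atoms 2 and 3.
A = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
x = D @ np.array([1.0, 1.0, 0.0, 0.0])       # signal built from the first class
c = laplacian_sparse_code(x, D, L)
```

The Laplacian term is what distinguishes this from plain sparse coding: it penalizes codes in which atoms of the same class receive very different coefficients, which is the consistency idea the abstract argues for.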


Expert Systems With Applications | 2016

Rare category exploration via wavelet analysis

Zhenguang Liu; Kevin Chiew; Luming Zhang; Beibei Zhang; Qinming He; Roger Zimmermann

Highlights: (1) We propose a novel approach, RCEWA, for RCE which achieves a linear time complexity. (2) We provide theoretical proofs for the effectiveness of using wavelet analysis for RCE. (3) Experiments show that RCEWA outperforms the existing algorithms w.r.t. F-score.

Rare category exploration (RCE for short) aims to discover all the remaining data examples of a rare category from a known data example of that category. A few approaches have been proposed to address this problem. Most of them, however, have quadratic or even cubic time complexity w.r.t. the data set size n. More importantly, the F-scores (harmonic mean of precision and recall) of the existing approaches are not satisfactory. Compared with the existing solutions to RCE, this paper proposes a novel approach with a linear time complexity that achieves a higher F-score in its mining results. The key steps of our approach are to reduce the search space by performing wavelet analysis on the data density function, and then to refine the coarse mining result in the reduced search space via fine-grained metrics. A solid theoretical analysis is conducted to prove the feasibility of our solution, and extensive experiments on real data sets further verify its effectiveness and efficiency.
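One way to picture the search-space-reduction step: smooth a density estimate with a Haar wavelet (keeping only coarse-scale approximation coefficients) and then look at where the smoothed density peaks. The 1-D toy below is purely illustrative; the paper's analysis and refinement stage are more involved.

```python
import numpy as np

def haar_smooth(signal, keep_levels=2):
    """One-dimensional Haar wavelet smoothing: repeatedly average adjacent pairs
    (the Haar approximation coefficients), then upsample back, discarding the
    fine-scale detail coefficients entirely."""
    s = np.asarray(signal, float)
    for _ in range(keep_levels):
        s = 0.5 * (s[0::2] + s[1::2])      # approximation coefficients
    for _ in range(keep_levels):
        s = np.repeat(s, 2)                # reconstruct without the details
    return s

# A density estimate with a sharp spike (the rare mode) over broad noise.
rng = np.random.default_rng(2)
density = rng.uniform(0.0, 0.2, 64)
density[40:44] += 1.0
smoothed = haar_smooth(density, keep_levels=2)
candidate = int(np.argmax(smoothed))       # coarse location of the rare mode
```

Restricting the subsequent fine-grained search to such candidate regions is what keeps the overall cost linear in the data size, since the wavelet pass itself is linear.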


Expert Systems With Applications | 2018

Rare category exploration with noisy labels

Haiqin Weng; Kevin Chiew; Zhenguang Liu; Qinming He; Roger Zimmermann

Starting from a few labelled data examples as seeds, rare category exploration (RCE) aims to find the target rare category hidden in a given dataset. However, the performance of conventional RCE approaches is very sensitive to noisy labels, while the presence of noise in manually generated labels is almost inevitable. To address this deficiency of traditional RCE approaches, this paper investigates the RCE process in the presence of noisy labels, which to the best of our knowledge has not yet been intensively studied by previous research. Based on the assumption that only one labelled data example of the rare category is correctly labelled while the other few data examples may be wrongly labelled, we first propose a label propagation based algorithm, SLP, to extract the coarse shape of a rare category. Then, we refine the result by proposing a mixture-information based propagation model, RLP. Extensive experiments have been conducted on six real-world datasets, which show that our method outperforms the state-of-the-art RCE approaches. We also show that even with 20% noisy labels, our method is able to achieve a satisfactory accuracy.
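SLP's specifics are not given in the abstract, but the generic graph label-propagation iteration such methods build on (in the style of Zhou et al., F <- alpha*S*F + (1-alpha)*Y on a normalized kNN affinity matrix) looks like the sketch below. The toy data, parameters, and graph construction are illustrative assumptions.

```python
import numpy as np

def label_propagation(X, seed_idx, seed_labels, n_classes, k=3, alpha=0.9, iters=50):
    """Graph-based label propagation on a symmetrically normalized kNN graph:
    iterate F <- alpha*S*F + (1-alpha)*Y, then take the argmax per point."""
    n = len(X)
    # Gaussian affinities, restricted to the k strongest edges per node.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2)
    np.fill_diagonal(W, 0.0)
    for i in range(n):
        weak = np.argsort(W[i])[:-k]       # drop all but the k largest weights
        W[i, weak] = 0.0
    W = np.maximum(W, W.T)                 # symmetrize the graph
    Dinv = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = W * Dinv[:, None] * Dinv[None, :]  # D^{-1/2} W D^{-1/2}
    Y = np.zeros((n, n_classes))
    Y[seed_idx, seed_labels] = 1.0         # clamp the seed labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(1)

# Two tight clusters with one seed each.
X = np.array([[0., 0.], [0.1, 0.], [0., 0.1], [5., 5.], [5.1, 5.], [5., 5.1]])
pred = label_propagation(X, seed_idx=[0, 3], seed_labels=[0, 1], n_classes=2)
```

The paper's point is precisely that this vanilla iteration trusts every seed equally; handling seeds that may themselves be mislabeled is what SLP and RLP add on top.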


ACM Transactions on Multimedia Computing, Communications, and Applications | 2018

Toward Personalized Activity Level Prediction in Community Question Answering Websites

Zhenguang Liu; Yingjie Xia; Qi Liu; Qinming He; Chao Zhang; Roger Zimmermann

Community Question Answering (CQA) websites have become valuable knowledge repositories. Millions of internet users resort to CQA websites to seek answers to their encountered questions. CQA websites provide information far beyond a search on a site such as Google due to (1) the plethora of high-quality answers, and (2) the capabilities to post new questions toward the communities of domain experts. While most research efforts have been made to identify experts or to preliminarily detect potential experts of CQA websites, there has been a remarkable shift toward investigating how to keep the engagement of experts. Experts are usually the major contributors of high-quality answers and questions of CQA websites. Consequently, keeping the expert communities active is vital to improving the lifespan of these websites. In this article, we present an algorithm termed PALP to predict the activity level of expert users of CQA websites. To the best of our knowledge, PALP is the first approach to address a personalized activity level prediction model for CQA websites. Furthermore, it takes into consideration user behavior change over time and focuses specifically on expert users. Extensive experiments on the Stack Overflow website demonstrate the competitiveness of PALP over existing methods.


IEEE Transactions on Multimedia | 2017

Media Quality Assessment by Perceptual Gaze-Shift Patterns Discovery

Yingjie Xia; Zhenguang Liu; Yan Yan; Yanxiang Chen; Luming Zhang; Roger Zimmermann

Collaboration


Dive into Zhenguang Liu's collaboration.

Top Co-Authors

Roger Zimmermann | National University of Singapore

Luming Zhang | Hefei University of Technology

Qi Liu | National University of Singapore

Yifang Yin | National University of Singapore

Xiangnan He | National University of Singapore

Beibei Zhang | National University of Singapore