Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junhui Hou is active.

Publication


Featured research published by Junhui Hou.


IEEE Journal of Biomedical and Health Informatics | 2015

Fall Detection Based on Body Part Tracking Using a Depth Camera

Zhen-Peng Bian; Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann

The elderly population is increasing rapidly all over the world. One major risk for elderly people, especially those living alone, is fall accidents. In this paper, we propose a robust fall detection approach that analyzes the tracked key joints of the human body using a single depth camera. Compared with approaches that rely on RGB input, the proposed scheme is independent of lighting conditions and works even in a dark room. In our scheme, a pose-invariant randomized decision tree algorithm is proposed for key joint extraction, requiring low computational cost during both training and testing. A support vector machine classifier, whose input is the 3-D trajectory of the head joint, then determines whether a fall has occurred. Experimental results demonstrate that the proposed fall detection method is more accurate and robust than state-of-the-art methods.
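As a rough illustration of the classification stage, the sketch below extracts simple features (final head height, peak downward speed) from a synthetic 3-D head-joint trajectory and applies a threshold rule standing in for the paper's SVM classifier; the feature set, threshold values, and trajectory are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def fall_features(head_traj, fps=30.0):
    """Extract simple features from a 3-D head-joint trajectory.

    head_traj: (T, 3) array of head positions in metres; the last
    coordinate is assumed to be height above the floor.
    """
    heights = head_traj[:, 2]
    vel = np.diff(heights) * fps          # vertical velocity (m/s)
    return np.array([
        heights[-1],                      # final head height
        vel.min(),                        # fastest downward speed
        heights.max() - heights.min(),    # total drop
    ])

def is_fall(features, height_thresh=0.5, speed_thresh=-1.5):
    """Threshold rule standing in for the paper's SVM classifier."""
    final_height, min_vel, _ = features
    return final_height < height_thresh and min_vel < speed_thresh

# A synthetic falling trajectory: the head drops from 1.7 m to 0.3 m.
t = np.linspace(0.0, 1.0, 31)
traj = np.stack([np.zeros_like(t), np.zeros_like(t),
                 1.7 - 1.4 * t**2], axis=1)
print(is_fall(fall_features(traj)))
```

A real system would learn the decision boundary from labeled fall and non-fall trajectories instead of fixed thresholds.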


IEEE Transactions on Visualization and Computer Graphics | 2015

Human Motion Capture Data Tailored Transform Coding

Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann; Ying He

Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, compression is achieved by entropy coding of the quantized coefficients and bases. Our method has low computational cost and can be easily extended to compress mocap databases. It requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
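A data-dependent orthogonal basis of the kind described above can be sketched with a truncated SVD of a clip matrix; the clip shape, rank, and basis construction below are illustrative assumptions, not the paper's exact transform, and quantization and entropy coding are omitted.

```python
import numpy as np

def transform_clip(clip, k):
    """Transform a mocap clip with a data-dependent orthogonal basis.

    clip: (d, n) matrix (d joint channels, n frames). The SVD supplies
    orthogonal bases; keeping the top-k columns concentrates signal
    energy in few coefficients, mimicking a data-dependent transform.
    """
    U, _, _ = np.linalg.svd(clip, full_matrices=False)
    basis = U[:, :k]                 # data-dependent orthogonal basis
    coeffs = basis.T @ clip          # transform coefficients
    return basis, coeffs

def inverse_transform(basis, coeffs):
    return basis @ coeffs

rng = np.random.default_rng(0)
# Low-rank synthetic "mocap" clip: 30 channels, 100 frames, rank 4.
clip = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 100))
basis, coeffs = transform_clip(clip, k=4)
recon = inverse_transform(basis, coeffs)
print(np.allclose(recon, clip))
```

Since the basis is computed from the data itself, it must be transmitted alongside the coefficients, which is why the abstract mentions entropy coding of both.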


International Conference on Image Processing | 2013

Human motion capture data recovery via trajectory-based sparse representation

Junhui Hou; Lap-Pui Chau; Ying He; Jie Chen; Nadia Magnenat-Thalmann

Motion capture is widely used in sports, entertainment, and medical applications. An important issue is recovering motion capture data that have been corrupted by noise and missing entries during acquisition. In this paper, we propose a new method to recover corrupted motion capture data through trajectory-based sparse representation. The data are first represented as trajectories of fixed length and high correlation. Then, based on sparse representation theory, the original trajectories are recovered by solving for the sparse representation of the incomplete trajectories with the orthogonal matching pursuit (OMP) algorithm, using a dictionary learned by K-SVD. Experimental results show that the proposed algorithm performs much better than existing algorithms, especially when significant portions of the data are missing.
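The OMP step can be sketched as below; note the dictionary here is random rather than learned by K-SVD, and the signal is synthetic, so this is purely an illustration of the sparse-coding mechanism.

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: solve y ≈ D @ x with a sparse x.

    D: (m, K) dictionary with unit-norm atoms (e.g. learned by K-SVD);
    y: (m,) observed signal.
    """
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -2.0]           # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, sparsity=2)
print(np.linalg.norm(y - D @ x_hat))    # near zero when the support is found
```

For recovery of incomplete trajectories, the same solve is run on only the observed rows of D and y, then the full dictionary reconstructs the missing entries.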


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Compressing 3-D Human Motions via Keyframe-Based Geometry Videos

Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann; Ying He

This paper presents keyframe-based geometry video (KGV), a novel framework for compressing 3-D human motion data using geometry videos. Given motion data encoded in the geometry video (GV) format, our method extracts keyframes and produces a reconstruction matrix. It then applies a video compression technique (e.g., H.264/Advanced Video Coding) to the reordered keyframes, which significantly reduces the spatial and temporal redundancy in the KGV. We develop a rate-distortion-based optimization algorithm to determine the parameters (i.e., the number of keyframes and the quantization parameter) that lead to optimal performance. Experimental results show that the proposed KGV framework significantly outperforms existing GV techniques in terms of both rate-distortion performance and visual quality. Moreover, the computational cost of the KGV decoder is rather low, making it highly desirable for power-constrained devices. Last but not least, our method can be easily extended to progressive compression over heterogeneous communication networks.
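The keyframe-plus-reconstruction-matrix idea can be sketched as a least-squares blend of a few selected frames; keyframe selection, the rate-distortion optimization, and the video codec from the paper are omitted, and the motion data are synthetic.

```python
import numpy as np

def keyframe_factorization(frames, key_idx):
    """Approximate all frames as linear blends of a few keyframes.

    frames: (n, d) matrix of n frames; key_idx: keyframe indices.
    Returns keyframes K and a reconstruction matrix W such that
    frames ≈ W @ K, the core idea behind the KGV representation.
    """
    K = frames[key_idx]                           # (k, d) keyframes
    # Least-squares blending weights for every frame.
    W, *_ = np.linalg.lstsq(K.T, frames.T, rcond=None)
    return K, W.T

rng = np.random.default_rng(2)
# Synthetic motion: every frame is a blend of 3 underlying poses.
poses = rng.normal(size=(3, 60))
weights = rng.random(size=(100, 3))
frames = weights @ poses
K, W = keyframe_factorization(frames, key_idx=[0, 50, 99])
print(np.allclose(W @ K, frames))
```

Only the few keyframes and the small matrix W need to be stored, which is where the compression comes from when the motion is well spanned by its keyframes.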


IEEE Transactions on Circuits and Systems for Video Technology | 2014

A Highly Efficient Compression Framework for Time-Varying 3-D Facial Expressions

Junhui Hou; Lap-Pui Chau; Minqi Zhang; Nadia Magnenat-Thalmann; Ying He

The rapid recent development of 3-DTV technology has led to an increase in studies on mesh-based 3-D scene representation. Compressing 3-D time-varying meshes is critical for the storage and transmission of 3-D content. This paper proposes a highly efficient framework for compressing time-varying 3-D facial expressions. We use the near-isometric property of human facial expressions to parameterize the 3-D dynamic faces into an expression-invariant 2-D canonical domain, which naturally generates 2-D geometry videos (GVs). Considering the intrinsic properties of GVs, we apply low-rank and sparse matrix decomposition (LRSMD) separately to the three dimensions of the GVs (namely, X, Y, and Z). Based on our high-precision rate and distortion models for GVs, we further compress the components from LRSMD using a video encoder in which the bitrates of all components are assigned optimally according to the target bitrate. Experimental results show that the proposed scheme significantly improves compression performance in terms of rate-distortion performance and visual quality compared with state-of-the-art algorithms.
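A toy stand-in for the LRSMD step is sketched below, alternating a truncated SVD (low-rank part) with entrywise soft-thresholding (sparse part); the paper couples this with rate-distortion models and a video encoder, which are omitted, and the data, rank, and threshold are assumptions.

```python
import numpy as np

def lrsmd(M, rank, lam=0.1, iters=50):
    """Toy low-rank + sparse matrix decomposition, M ≈ L + S.

    Alternates a truncated SVD (low-rank part L) with entrywise
    soft-thresholding (sparse part S); a simplified stand-in for the
    decomposition applied to each GV channel.
    """
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank update
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)
    return L, S

rng = np.random.default_rng(3)
L0 = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 60))   # rank-5 part
S0 = np.zeros((40, 60))
idx = rng.choice(40 * 60, size=50, replace=False)
S0.flat[idx] = rng.normal(scale=5.0, size=50)              # sparse spikes
L, S = lrsmd(L0 + S0, rank=5)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))
```

By construction, L + S reproduces the input up to the soft-threshold lam; the low-rank and sparse parts can then be coded separately at different bitrates.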


International Conference on Image Processing | 2014

A fast learning algorithm for multi-layer extreme learning machine

Jiexiong Tang; Chenwei Deng; Guang-Bin Huang; Junhui Hou

Extreme learning machine (ELM) is an efficient training algorithm originally proposed for single-hidden-layer feedforward networks (SLFNs), in which the input weights are randomly chosen and need not be fine-tuned. In this paper, we present a new stack architecture for ELM that further improves its learning accuracy while maintaining its advantage in training speed. By exploiting the hidden information of the ELM random feature space, a recovery-based training model is developed and incorporated into the proposed ELM stack architecture. Experimental results on the MNIST handwritten digit dataset demonstrate that the proposed algorithm achieves better and much faster convergence than state-of-the-art ELM and deep learning methods.
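The single-layer ELM rule underlying the stacked architecture (random hidden weights, closed-form output weights via the pseudoinverse) can be sketched as follows; the toy two-class data and layer size are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """Train a single-hidden-layer ELM: random input weights (never
    tuned), output weights fitted in closed form via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random feature space
    beta = np.linalg.pinv(H) @ Y                  # least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(4)
# Toy 2-class problem: points inside vs. outside the unit circle.
X = rng.uniform(-1.5, 1.5, size=(400, 2))
Y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)[:, None]
W, b, beta = elm_train(X, Y, n_hidden=200, rng=rng)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5))
print(acc)   # training accuracy
```

The stacked variant in the paper layers such modules, feeding recovered representations of the random feature space from one layer into the next.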


IEEE Signal Processing Letters | 2014

Scalable and Compact Representation for Motion Capture Data Using Tensor Decomposition

Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann; Ying He

Motion capture (mocap) technology is widely used in the movie and game industries. Compact representation of mocap data is critical for efficient storage and transmission. In this letter, we propose a novel tensor-decomposition-based scheme for compact and progressive representation of mocap data. Our method segments and stacks the mocap sequence locally, generating a third-order tensor with strong correlation within and across its slices. It then iteratively applies tensor decomposition in a multi-layer structure to exploit these correlation characteristics. Experimental results demonstrate that the proposed scheme significantly outperforms existing algorithms in terms of scalability and storage requirements.
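A truncated higher-order SVD gives a simple Tucker-style decomposition in the spirit of a third-order tensor scheme (the letter's multi-layer structure is omitted); the tensor shape and ranks below are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode orthogonal bases plus a
    small core tensor (a basic Tucker-style decomposition)."""
    factors = []
    for mode, r in enumerate(ranks):
        U, *_ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])                  # per-mode basis
    core = T
    for mode, U in enumerate(factors):            # project onto bases
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(5)
# Synthetic mocap-like tensor: (segments, joints, frames), low Tucker rank.
G = rng.normal(size=(2, 3, 2))
A, B, C = (rng.normal(size=(10, 2)), rng.normal(size=(20, 3)),
           rng.normal(size=(30, 2)))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
core, factors = hosvd(T, ranks=(2, 3, 2))
recon = np.einsum('abc,ia,jb,kc->ijk', core, *factors)
print(np.allclose(recon, T))
```

Only the small core and the three thin factor matrices need to be stored, and truncating ranks progressively yields the scalable representation the letter targets.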


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Sparse Low-Rank Matrix Approximation for Data Compression

Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann; Ying He

Low-rank matrix approximation (LRMA) is a powerful technique for signal processing and pattern analysis. However, its potential for data compression has not yet been fully investigated. In this paper, we propose sparse LRMA (SLRMA), an effective computational tool for data compression. SLRMA extends conventional LRMA by exploring both the intra and inter coherence of data samples simultaneously. With the aid of prescribed orthogonal transforms (e.g., discrete cosine/wavelet transform and graph transform), SLRMA decomposes a matrix into a product of two smaller matrices, where one matrix is made up of extremely sparse and orthogonal column vectors and the other consists of the transform coefficients. Technically, we formulate SLRMA as a constrained optimization problem, i.e., minimizing the approximation error in the least-squares sense regularized by the ℓ0-norm and orthogonality, and solve it using the inexact augmented Lagrangian multiplier method. Through extensive tests on real-world data, such as 2D image sets and 3D dynamic meshes, we observe that: 1) SLRMA empirically converges well; 2) SLRMA can produce approximation error comparable to LRMA but in a much sparser form; and 3) SLRMA-based compression schemes significantly outperform the state of the art in terms of rate-distortion performance.


IEEE Journal of Biomedical and Health Informatics | 2016

Facial Position and Expression-Based Human–Computer Interface for Persons With Tetraplegia

Zhen-Peng Bian; Junhui Hou; Lap-Pui Chau; Nadia Magnenat-Thalmann


IEEE Transactions on Broadcasting | 2013

Consistent Video Quality Control in Scalable Video Coding Using Dependent Distortion Quantization Model

Junhui Hou; Shuai Wan; Zhan Ma; Lap-Pui Chau

Collaboration


Dive into Junhui Hou's collaborations.

Top Co-Authors

Lap-Pui Chau, Nanyang Technological University
Ying He, Nanyang Technological University
Zhen-Peng Bian, Nanyang Technological University
Shuai Wan, Northwestern Polytechnical University
Jie Chen, Nanyang Technological University
Cheen-Hau Tan, Nanyang Technological University
Hui Liu, Nanyang Technological University
Minqi Zhang, Nanyang Technological University