Publication


Featured research published by Zhiqin Liang.


IEEE Signal Processing Letters | 2007

Security Analysis of Multimedia Encryption Schemes Based on Multiple Huffman Table

Jiantao Zhou; Zhiqin Liang; Yan Chen; Oscar C. Au

This letter addresses the security issues of multimedia encryption schemes that use multiple Huffman tables (MHT). A known-plaintext attack is presented to show that the MHTs used for encryption should be carefully selected to avoid the weak-key problem. We then propose chosen-plaintext attacks on the basic MHT algorithm as well as on the enhanced scheme with random bit insertion. In addition, we suggest two empirical criteria for Huffman table selection, based on which we can simplify the stream-cipher-integrated scheme while ensuring a high level of security.
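To make the MHT construction concrete, the sketch below encodes a symbol stream by picking one of several prefix-code tables per symbol according to a secret keystream; a known-plaintext attacker who sees symbol/bitstream pairs can start eliminating candidate tables, which is why table selection matters. The tables, key, and function name are illustrative assumptions, not the scheme analyzed in the letter.

```python
import itertools

# Toy prefix-code (Huffman-like) tables. Real MHT schemes derive these from
# trained Huffman trees; the tables and key below are invented for illustration.
TABLES = [
    {"a": "0",   "b": "10",  "c": "110", "d": "111"},
    {"a": "111", "b": "0",   "c": "10",  "d": "110"},
    {"a": "10",  "b": "110", "c": "111", "d": "0"},
    {"a": "110", "b": "111", "c": "0",   "d": "10"},
]

def mht_encrypt(symbols, key):
    """Encode each symbol with the prefix-code table selected by the cycled key."""
    out = []
    for sym, k in zip(symbols, itertools.cycle(key)):
        out.append(TABLES[k % len(TABLES)][sym])
    return "".join(out)

if __name__ == "__main__":
    secret_key = [2, 0, 3, 1]                   # hypothetical table-selection key
    print(mht_encrypt("abcdabcd", secret_key))  # keyed bitstream
```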


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Temporal Video Denoising Based on Multihypothesis Motion Compensation

Liwei Guo; Oscar Chi Lim Au; Mengyao Ma; Zhiqin Liang

A denoising module is required by almost any practical video processing system. Most existing denoising schemes are spatio-temporal filters that operate on data over three dimensions. However, to limit the number of inputs, these filters utilize only one reference frame and therefore cannot fully exploit temporal correlation. In this paper, a recursive temporal denoising filter named the multihypothesis motion compensated filter (MHMCF) is proposed. To fully exploit temporal correlation, MHMCF performs motion estimation in a number of reference frames to construct multiple hypotheses (temporal predictions) of the current pixel. These hypotheses are combined by weighted averaging to suppress noise and estimate the true value of the current pixel. Based on the multihypothesis motion-compensated residue model presented in this paper, we investigate the efficiency of MHMCF and provide numerical evaluations. Experimental results show that MHMCF achieves good denoising performance while requiring far fewer inputs than spatio-temporal filters. Moreover, as a purely temporal filter, it preserves spatial details well and achieves satisfactory visual quality.
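A minimal sketch of the multihypothesis combination step, assuming the hypotheses are already motion compensated and that the weights are inversely proportional to each hypothesis' estimated error variance (the paper derives its own weighting; the names and values below are placeholders):

```python
import numpy as np

def combine_hypotheses(hypotheses, error_vars):
    """Weighted average of motion-compensated temporal predictions of a frame.

    hypotheses : list of HxW float arrays (temporal predictions of the current frame)
    error_vars : estimated prediction-error variance for each hypothesis
    """
    weights = 1.0 / np.asarray(error_vars, dtype=np.float64)
    weights /= weights.sum()
    return sum(w * h for w, h in zip(weights, hypotheses))

# Toy usage: two noisy motion-compensated predictions of a 4x4 frame.
rng = np.random.default_rng(0)
clean = rng.integers(0, 255, size=(4, 4)).astype(np.float64)
hyp1 = clean + rng.normal(0, 5, clean.shape)
hyp2 = clean + rng.normal(0, 10, clean.shape)
denoised = combine_hypotheses([hyp1, hyp2], error_vars=[25.0, 100.0])
```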


IEEE Transactions on Circuits and Systems for Video Technology | 2009

A Novel Analytic Quantization-Distortion Model for Hybrid Video Coding

Liwei Guo; Oscar C. Au; Mengyao Ma; Zhiqin Liang; Peter H. W. Wong

A proper theoretical quantization-distortion model for hybrid video coding is always desirable, since it allows us to explain the behavior of existing codecs and to design better ones. However, due to motion-compensated prediction, hybrid video coding introduces interframe dependency into the encoded video, which makes its quantization-distortion characteristics difficult to analyze. In this paper, a joint analysis of quantization and motion-compensated prediction is presented. For a complete analysis, we investigate not only the distortion that quantization introduces into the video signal, but also its effect on motion-compensated prediction. Based on this joint analysis, a quantization-distortion model for hybrid video coding is proposed. Extensive experimental results show that the proposed model can estimate the quantization-distortion curve of hybrid video coding with high accuracy. Furthermore, the estimation accuracy remains high across various video sequences and encoder configurations.
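For context only: a classical high-rate baseline for a uniform quantizer, and the recursive dependency that motion-compensated prediction adds. The specific functional form of that dependency is what the paper derives; it is not reproduced here.

```latex
% High-rate approximation for intra-coded data with a uniform quantizer of step \Delta
D_{\mathrm{intra}} \approx \frac{\Delta^2}{12}
% With motion-compensated prediction, the distortion of frame n also depends on the
% propagated distortion of its reference frame; schematically:
D_n = f\!\left(\Delta_n,\, D_{n-1}\right)
```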


Signal Processing Systems | 2007

Fast Multi-Hypothesis Motion Compensated Filter for Video Denoising

Liwei Guo; Oscar Chi Lim Au; Mengyao Ma; Zhiqin Liang

The multi-hypothesis motion compensated filter (MHMCF) utilizes a number of hypotheses (temporal predictions) to estimate the current pixel, which is corrupted by noise. While showing remarkable denoising results, MHMCF is computationally intensive because full search is employed to find good temporal predictions in the presence of noise. Within the MHMCF framework, a fast denoising algorithm, FMHMCF, is proposed in this paper. With edge-preserving low-pass prefiltering and a noise-robust fast multihypothesis search, FMHMCF finds reliable hypotheses while checking very few search locations, so the denoising process is dramatically accelerated. Experimental results show that FMHMCF can be 10 to 14 times faster than MHMCF, while achieving the same or even better denoising performance with up to 1.93 dB PSNR (peak signal-to-noise ratio) improvement.
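The sketch below conveys the general flavor of such acceleration: block matching is run on low-pass prefiltered frames (to resist noise) and only over a small candidate set around a predicted motion vector, instead of a full search. The box filter, diamond pattern, and block size are illustrative assumptions, not FMHMCF's actual choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fast_block_match(cur, ref, block_tl, block=8, pred_mv=(0, 0)):
    """Noise-robust block matching over a small candidate set (illustrative)."""
    cur_f = uniform_filter(cur, size=3)   # crude low-pass prefilter
    ref_f = uniform_filter(ref, size=3)
    y, x = block_tl
    target = cur_f[y:y + block, x:x + block]
    best_mv, best_cost = pred_mv, np.inf
    for dy, dx in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:  # small diamond pattern
        ry, rx = y + pred_mv[0] + dy, x + pred_mv[1] + dx
        if ry < 0 or rx < 0:
            continue
        cand = ref_f[ry:ry + block, rx:rx + block]
        if cand.shape != target.shape:
            continue
        cost = np.abs(cand - target).sum()                     # SAD on prefiltered data
        if cost < best_cost:
            best_cost, best_mv = cost, (pred_mv[0] + dy, pred_mv[1] + dx)
    return best_mv
```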


Multimedia Signal Processing | 2006

Content-adaptive Temporal Search Range Control Based on Frame Buffer Utilization

Zhiqin Liang; Jiantao Zhou; Oscar C. Au; Liwei Guo

Multiple reference frame selection, adopted by the state-of-the-art H.264 video compression standard, offers substantial performance gains. Temporal search range control is therefore crucial for maintaining the coding performance with minimum complexity. In this paper, we investigate the relationship between reference frame buffer utilization and the optimal search range. A content-adaptive algorithm is proposed to control the search range dynamically during the encoding process. Experimental results show that our algorithm can rapidly adapt to the video characteristics and effectively reduce the complexity with a negligible coding performance penalty.
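One illustrative way to realize such control (a heuristic under assumed statistics, not necessarily the paper's rule) is to track which reference-frame slots in the buffer are actually chosen by motion estimation and shrink the temporal search range when the farthest searched frame is rarely useful:

```python
from collections import deque

class TemporalSearchRangeController:
    """Adapt the number of searched reference frames from recent buffer usage."""

    def __init__(self, max_range=5, window=200, threshold=0.95):
        self.max_range = max_range
        self.range = max_range
        self.threshold = threshold
        self.window = deque(maxlen=window)   # reference index chosen for each recent block

    def update(self, chosen_ref_index):
        """chosen_ref_index: 0 = nearest reference frame, 1 = next nearest, ..."""
        self.window.append(chosen_ref_index)
        if len(self.window) == self.window.maxlen:
            near = sum(1 for r in self.window if r < self.range - 1)
            ratio = near / len(self.window)
            if ratio > self.threshold and self.range > 1:
                self.range -= 1              # farthest searched frame rarely wins: shrink
            elif ratio < self.threshold and self.range < self.max_range:
                self.range += 1              # content changed: expand the range again
        return self.range
```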


International Conference on Multimedia and Expo | 2006

An Encoder-Embedded Video Denoising Filter Based on the Temporal LMMSE Estimator

Liwei Guo; Oscar C. Au; Mengyao Ma; Zhiqin Liang

Noise not only degrades the visual quality of video content, but also significantly reduces coding efficiency. Based on the temporal linear minimum mean square error (LMMSE) estimator, an innovative denoising filter is proposed in this paper. The proposed filter requires only simple operations on individual residue coefficients and can be seamlessly integrated into video encoders. Compared to the traditional cascaded filter-encoder scheme, embedding the proposed filter into the video encoder saves a large amount of computation. Experimental results show that with the proposed filter embedded, both the noise suppression capability and the coding efficiency of the video encoder can be dramatically improved. Furthermore, as a purely temporal filter, it preserves the fine details of video content well and achieves satisfactory visual quality.
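The temporal LMMSE idea can be sketched as per-coefficient shrinkage of the motion-compensated residue: modeling each residue sample as true signal change plus zero-mean noise, the LMMSE estimate simply scales it toward zero. The variance values below are placeholders; the paper derives its own statistics inside the encoder.

```python
import numpy as np

def lmmse_denoise_residue(residue, noise_var, signal_var):
    """Per-coefficient LMMSE shrinkage of a motion-compensated residue block.

    residue    : observed residue = true signal change + acquisition noise
    noise_var  : variance of the noise (assumed known or estimated)
    signal_var : variance of the true residue signal (assumed known or estimated)
    """
    gain = signal_var / (signal_var + noise_var)   # LMMSE gain in [0, 1)
    return gain * residue                          # noisy coefficients pulled toward 0

# Toy usage on an 8x8 residue block.
rng = np.random.default_rng(1)
true_residue = rng.normal(0, 2.0, (8, 8))
noisy_residue = true_residue + rng.normal(0, 5.0, (8, 8))
clean_estimate = lmmse_denoise_residue(noisy_residue, noise_var=25.0, signal_var=4.0)
```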


International Symposium on Circuits and Systems | 2009

A novel multiple description video coding based on H.264/AVC video coding standard

Xing Wen; Oscar C. Au; Jiang Xu; Zhiqin Liang; Yi Yang; Weiran Tang

Multiple description coding (MDC) is a source coding technique that exploits path diversity to combat packet losses over error-prone channels. In this paper, we propose an improved drift-free multi-state MDC method based on the H.264/AVC coding scheme. At the encoder side, we compress the original video into multiple independent H.264 streams with different coding parameters, which helps control the correlation between the descriptions. At the decoder side, each description is considered a noisy observation of the original video, and a linear minimum mean square error (LMMSE) based merge algorithm is proposed to combine the descriptions. Experimental results show that the proposed algorithm achieves better coding efficiency and visual quality than the MDC method presented in [1]. Error resilience is also improved because each frame is coded twice with different parameters.
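A minimal sketch of the decoder-side merge, treating each decoded description as a noisy observation of the same frame and combining them with inverse-variance (LMMSE-style) weights; the per-description error variances are placeholders that would in practice come from the coding parameters.

```python
import numpy as np

def merge_descriptions(desc_a, desc_b, var_a, var_b):
    """Combine two decoded descriptions of the same frame.

    Each description is modeled as the original frame plus independent zero-mean
    coding error with the given variance; the inverse-variance weighted average
    is the LMMSE combination under that model.
    """
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    return w_a * desc_a + w_b * desc_b

# Toy usage: two "decoded" versions of a 4x4 frame with different error levels.
rng = np.random.default_rng(2)
frame = rng.integers(0, 255, (4, 4)).astype(np.float64)
d1 = frame + rng.normal(0, 3.0, frame.shape)   # finer-quantized description
d2 = frame + rng.normal(0, 6.0, frame.shape)   # coarser-quantized description
merged = merge_descriptions(d1, d2, var_a=9.0, var_b=36.0)
```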


Advances in Multimedia | 2007

A novel multiple description approach to predictive video coding

Zhiqin Liang; Jiantao Zhou; Liwei Guo; Mengyao Ma; Oscar C. Au

Multiple description coding (MDC) is a source coding technique that exploits path diversity to combat packet losses over error-prone channels. In this paper, we propose a novel drift-free multi-state MDC method. At the encoder side, the original video is compressed into multiple independently decodable H.263 streams, each with its own coding structure and prediction process, so that if one stream is lost, the other stream can still be used to produce video of acceptable quality. At the decoder side, each description is considered a noisy observation of the original video. A least-squares error (LSE) based merge algorithm is proposed to combine the descriptions. Experimental results show that the proposed algorithm has coding efficiency similar to [1], yet with improved error resilience.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Temporal search range prediction based on a linear model of motion-compensated residue

Liwei Guo; Oscar Chi Lim Au; Mengyao Ma; Zhiqin Liang; Peter H. W. Wong

An efficient temporal search range prediction method is proposed to reduce the complexity of multiple reference frame motion estimation (MRFME) in video coding. Based on a linear model of the motion-compensated residue, the behavior of residues under MRFME is investigated, and the gain of multiple reference frames is analyzed. The proposed method uses the current residue to estimate the gain of searching more reference frames, and predicts the temporal search range that maintains the coding performance with minimum complexity. Experimental results show that the proposed scheme can significantly reduce the complexity of motion estimation while the degradation of coding performance is negligible.
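An illustrative reading of that prediction (the thresholds and the way the residue is used below are assumptions, not the paper's derivation): after matching in the nearest reference frame, the remaining residue energy is compared against the noise floor to decide whether farther reference frames are worth searching.

```python
import numpy as np

def predict_temporal_range(residue_after_ref0, noise_floor=4.0, max_range=4):
    """Predict how many reference frames to search from the first-pass residue.

    Heuristic: if the residue left after searching the nearest reference frame is
    already close to the noise floor, farther reference frames cannot reduce it
    much, so a short temporal search range suffices.
    """
    energy = float(np.mean(np.asarray(residue_after_ref0, dtype=np.float64) ** 2))
    if energy <= 1.5 * noise_floor:    # little left to gain
        return 1
    if energy <= 4.0 * noise_floor:    # modest potential gain
        return 2
    return max_range                   # large residue: search all allowed frames
```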


International Conference on Image Processing | 2006

A Multihypothesis Motion-Compensated Temporal Filter for Video Denoising

Liwei Guo; Oscar Chi Lim Au; Mengyao Ma; Zhiqin Liang; Carman K. M. Yuk

Most existing filters for video denoising are spatio-temporal filters that operate on data over three dimensions. This paper presents a purely linear temporal filter with multihypothesis motion compensation (MC). Compared to spatio-temporal filters, the proposed filter needs far fewer inputs and much simpler operations. Experimental results show that, using only three pixels, the proposed filter achieves very good noise suppression. Moreover, as a purely temporal filter, it preserves spatial details well and achieves satisfactory visual quality.

Collaboration


Dive into Zhiqin Liang's collaborations.

Top Co-Authors

Liwei Guo (Hong Kong University of Science and Technology)
Mengyao Ma (Hong Kong University of Science and Technology)
Oscar Chi Lim Au (Hong Kong University of Science and Technology)
Oscar C. Au (Hong Kong University of Science and Technology)
Peter H. W. Wong (Hong Kong University of Science and Technology)
Yan Chen (Hong Kong University of Science and Technology)
Carman K. M. Yuk (Hong Kong University of Science and Technology)
Gary Shueng Han Chan (Hong Kong University of Science and Technology)
Jiang Xu (Hong Kong University of Science and Technology)
S.-H. Gary Chan (Hong Kong University of Science and Technology)