Publication


Featured research published by Andy C. Yu.


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Fast Inter-Mode Selection in the H.264/AVC Standard Using a Hierarchical Decision Process

Andy C. Yu; Graham R. Martin; Heechan Park

A complexity reduction algorithm tailored for the H.264/AVC encoder is described. It aims to alleviate the computational burden imposed by Lagrangian rate distortion optimization in the inter-mode selection process. The proposed algorithm is described as a hierarchical structure comprising three levels. Each level targets different types of macroblocks according to the complexity of the search process. Early termination of mode selection is triggered at any of the levels to avoid a full cycle of Lagrangian examination. The algorithm is evaluated using a wide range of test sequences of different classes. The results demonstrate a reduction in encoding time of at least 40%, regardless of the class of sequence. Despite the reduction in computational complexity, picture quality is maintained at all bit rates.
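To make the three-level decision with early termination concrete, the following is a minimal Python sketch; the Lagrangian cost model, the thresholds and the per-level termination tests are illustrative assumptions, not the paper's actual criteria.

```python
# Minimal sketch of a three-level hierarchical mode decision with early termination.
# The cost model, thresholds and level-2 test are hypothetical placeholders.

LAMBDA = 0.85 * 2 ** ((28 - 12) / 3.0)   # H.264-style Lagrange multiplier for QP = 28

def rd_cost(sad, bits):
    """Lagrangian cost J = D + lambda * R, with SAD as the distortion term."""
    return sad + LAMBDA * bits

def hierarchical_mode_decision(costs, t_skip=400.0, t_large=1200.0):
    """costs maps each candidate mode to a (SAD, header_bits) pair for one macroblock.

    Level 1 handles static blocks (SKIP / 16x16), level 2 the rectangular
    partitions, level 3 the sub-8x8 partitions.  Early termination at a level
    skips the remaining, more expensive Lagrangian evaluations.
    """
    # Level 1: static or homogeneous macroblocks
    best_mode, best_j = "SKIP", rd_cost(*costs["SKIP"])
    if best_j < t_skip:
        return best_mode, best_j                  # terminate: SKIP is good enough
    j16 = rd_cost(*costs["16x16"])
    if j16 < best_j:
        best_mode, best_j = "16x16", j16
    if best_j < t_large:
        return best_mode, best_j                  # terminate after level 1

    # Level 2: macroblocks with simple motion, rectangular partitions
    for m in ("16x8", "8x16"):
        j = rd_cost(*costs[m])
        if j < best_j:
            best_mode, best_j = m, j
    if best_mode in ("16x8", "8x16"):             # a large partition already fits well
        return best_mode, best_j

    # Level 3: complex macroblocks, sub-8x8 partitions
    for m in ("8x8", "8x4", "4x8", "4x4"):
        j = rd_cost(*costs[m])
        if j < best_j:
            best_mode, best_j = m, j
    return best_mode, best_j

# Toy usage with made-up (SAD, bits) figures for one macroblock
example = {"SKIP": (900, 1), "16x16": (700, 8), "16x8": (650, 14), "8x16": (660, 14),
           "8x8": (600, 24), "8x4": (590, 34), "4x8": (595, 34), "4x4": (580, 48)}
print(hierarchical_mode_decision(example))
```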


International Conference on Image Processing | 2004

Advanced block size selection algorithm for inter frame coding in H.264/MPEG-4 AVC

Andy C. Yu; Graham R. Martin

A fast inter-mode selection algorithm is proposed to improve the efficiency of the H.264/MPEG-4 AVC encoder with negligible degradation in picture quality. The modified fast inter-mode selection (MFInterms) algorithm extends previous work to provide a more efficient prediction of the mode decision. The strategy incorporates temporal similarity detection and the detection of different moving features within a macroblock. Simulation results demonstrate a speed-up in encoding time of up to 73% compared with the H.264 benchmark.
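As an illustration of the temporal-similarity component (not the MFInterms algorithm itself), the sketch below reuses the co-located macroblock's mode when the current macroblock is nearly unchanged; the threshold and the reduced candidate list are assumptions.

```python
# Illustrative temporal similarity check: macroblocks nearly identical to their
# co-located block in the previous frame inherit that block's mode; others keep
# the full candidate set.  Threshold and candidate lists are assumptions.
import numpy as np

def temporal_mode_prediction(cur_frame, prev_frame, prev_modes, mb=16, thr=2.0):
    """Return a per-macroblock list of candidate modes."""
    h, w = cur_frame.shape
    candidates = {}
    for y in range(0, h, mb):
        for x in range(0, w, mb):
            cur = cur_frame[y:y+mb, x:x+mb].astype(np.int32)
            ref = prev_frame[y:y+mb, x:x+mb].astype(np.int32)
            mad = np.abs(cur - ref).mean()          # mean absolute difference
            if mad < thr:                           # temporally similar: inherit previous mode
                candidates[(y, x)] = [prev_modes.get((y, x), "SKIP")]
            else:                                   # otherwise evaluate the full set
                candidates[(y, x)] = ["SKIP", "16x16", "16x8", "8x16", "8x8"]
    return candidates

# Toy usage: a static frame pair with one changed macroblock
prev = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
cur = prev.copy()
cur[16:, 16:] = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
modes = temporal_mode_prediction(cur, prev, prev_modes={})
print({k: len(v) for k, v in modes.items()})
```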


Journal of Visual Communication and Image Representation | 2006

Efficient intra- and inter-mode selection algorithms for H.264/AVC

Andy C. Yu; Ngan King Ngi; Graham R. Martin

The H.264/AVC standard is one of the most popular formats for next-generation video coding, offering better compression capability and visual quality than existing video coding standards. Intra-frame mode selection and inter-frame mode selection are new features introduced in H.264/AVC. Intra-frame mode selection dramatically reduces spatial redundancy in I-frames, while inter-frame mode selection significantly affects the output quality of P-/B-frames by selecting an optimal block size with motion vector(s), or a mode, for each macroblock. Unfortunately, this feature requires a large amount of encoding time, especially when a brute-force full-search method is used. In this paper, we propose fast mode-selection algorithms tailored to both intra-frames and inter-frames. The proposed fast intra-frame mode algorithm reduces the computational complexity of the Lagrangian rate-distortion optimisation evaluation. Two proposed fast inter-frame mode algorithms incorporate several robust and reliable predictive factors, including the intrinsic complexity of the macroblock, mode knowledge from the previous frame(s), temporal similarity detection and the detection of different moving features within a macroblock, to effectively reduce the number of search operations. Extensive simulations demonstrate the performance of each algorithm. Finally, we combine these contributions to form two novel fast mode-selection algorithms for H.264/AVC video coding. Simulations on different classes of test sequences demonstrate a speed-up in encoding time of up to 86% compared with the H.264/AVC benchmark, without any significant degradation in picture quality or compression ratio.
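As a sketch of how Lagrangian evaluations can be pared down (a generic shortcut, not the specific algorithms proposed in the paper), candidate intra modes can be ranked by a cheap SAD cost and only a shortlist passed to the full J = D + lambda*R evaluation:

```python
# Generic fast intra 4x4 decision: rank modes by cheap SAD, then run the expensive
# Lagrangian evaluation only on a shortlist.  The prediction inputs, the SSD stand-in
# for coded distortion and the shortlist size are illustrative assumptions.
import numpy as np

LAMBDA = 0.85 * 2 ** ((28 - 12) / 3.0)   # H.264-style Lagrange multiplier for QP = 28

def cheap_cost(block, pred):
    return np.abs(block.astype(np.int32) - pred.astype(np.int32)).sum()   # SAD

def full_rd_cost(block, pred, mode_bits=4):
    ssd = ((block.astype(np.int32) - pred.astype(np.int32)) ** 2).sum()   # stands in for coded distortion
    return ssd + LAMBDA * mode_bits

def fast_intra_decision(block, predictions, keep=3):
    """predictions: dict mapping mode name -> predicted 4x4 block."""
    ranked = sorted(predictions, key=lambda m: cheap_cost(block, predictions[m]))
    shortlist = ranked[:keep]                   # only these get the expensive evaluation
    return min(shortlist, key=lambda m: full_rd_cost(block, predictions[m]))

# Toy usage: nine dummy "prediction modes" for one 4x4 block
block = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
preds = {f"mode{i}": np.random.randint(0, 256, (4, 4), dtype=np.uint8) for i in range(9)}
print(fast_intra_decision(block, preds))
```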


Lecture Notes in Computer Science | 2005

Progressive mesh-based motion estimation using partial refinement

Heechan Park; Andy C. Yu; Graham R. Martin

A technique for performing progressive mesh-based motion estimation in a layered fashion is presented. Motion compensation based on image warping provides a prediction free of block artifacts. The smooth prediction can be compared with the reference frame to identify motion-active regions and to generate a denser partial mesh, thus forming layers of mesh. This approach provides hierarchical partial refinement according to motion activity without additional cost. Experimental results indicate that the technique improves on a single-layered uniform mesh and offers advantages over block-based techniques, particularly in scalable and very low bit-rate video coding.
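A toy sketch of the layered refinement idea follows: compare the warped prediction with the reference frame, flag motion-active patches, and place a denser layer of mesh nodes only there. The block size, threshold and node spacing are illustrative assumptions.

```python
# Flag motion-active patches from the prediction error and refine the mesh only there.
import numpy as np

def refine_mesh(prediction, reference, coarse_step=16, fine_step=8, thr=6.0):
    h, w = reference.shape
    coarse_nodes = [(y, x) for y in range(0, h + 1, coarse_step)
                            for x in range(0, w + 1, coarse_step)]
    fine_nodes = []
    for y in range(0, h, coarse_step):
        for x in range(0, w, coarse_step):
            err = np.abs(prediction[y:y+coarse_step, x:x+coarse_step].astype(np.int32)
                         - reference[y:y+coarse_step, x:x+coarse_step].astype(np.int32)).mean()
            if err > thr:                               # motion-active patch: add a finer layer
                fine_nodes += [(yy, xx) for yy in range(y, y + coarse_step + 1, fine_step)
                                         for xx in range(x, x + coarse_step + 1, fine_step)]
    return coarse_nodes, sorted(set(fine_nodes))

# Toy usage: identical frames except for one active patch
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
pred = ref.copy()
pred[0:16, 0:16] = 0
coarse, fine = refine_mesh(pred, ref)
print(len(coarse), "coarse nodes,", len(fine), "refined nodes")
```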


Signal Processing: Image Communication | 2008

Compact representation of contours using directional grid chain code

Heechan Park; Graham R. Martin; Andy C. Yu

An efficient contour-based method for the coding of binary shape information is described. Conventional chain coding techniques show high coding efficiency for lossless compression, but they exploit the coherence of the contour in only a restricted manner. Higher coding efficiency can be achieved by modelling the neighbourhood relation as a Markov chain, and this is exploited in a new coding scheme, directional grid chain coding (DGCC). The method is computationally efficient and the coding process adapts to the inherent changes in the contour. Two schemes are proposed, a lossless and a quasi-lossless method. The lossless scheme achieves a 32% saving in bit rate compared with the conventional differential chain code (DCC). The second, quasi-lossless technique achieves a 44% bit reduction compared with the DCC, and the distortions present in the reconstructed contour are hardly noticeable to the human eye.
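A small worked example of the contour coherence that such schemes exploit (the principle only, not the DGCC itself): on a smooth digitised contour, the differential chain code concentrates around "keep going straight", so its entropy is far below that of the absolute 8-direction code.

```python
# Entropy of absolute vs differential 8-direction chain codes on a digitised circle.
import numpy as np
from collections import Counter

def entropy(symbols):
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([np.round(30 + 25 * np.cos(t)), np.round(30 + 25 * np.sin(t))], axis=1).astype(int)
dirs8 = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
         (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

chain = []
for a, b in zip(pts, np.roll(pts, -1, axis=0)):
    step = (int(np.sign(b[0] - a[0])), int(np.sign(b[1] - a[1])))
    if step != (0, 0):                                  # skip duplicate digitised points
        chain.append(dirs8[step])

diff = [(c - p) % 8 for p, c in zip(chain, chain[1:])]  # differential chain code
print("absolute:    ", round(entropy(chain), 2), "bits/symbol")
print("differential:", round(entropy(diff), 2), "bits/symbol")
```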


International Conference on Mobile Multimedia Communications | 2006

An affine symmetric approach to natural image compression

Heechan Park; Abhir Bhalerao; Graham R. Martin; Andy C. Yu

We approach image compression using an affine symmetric image representation that exploits rotation and scaling as well as the translational redundancy present between image blocks. It resembles fractal coding in the sense that a single prototypical block is used to represent other, similar blocks. Finding the optimal prototypes is not a trivial task, particularly for a natural image. We propose an efficient technique utilizing independent component analysis that yields near-optimal prototypical blocks. A reliable affine model estimation method based on Gaussian mixture models and modified expectation maximization is presented. For completeness, a parameter entropy coding strategy is suggested that achieves rates as low as 0.14 bpp. This study provides an interesting approach to image compression, although the reconstruction quality is slightly below that of some other methods. However, high-frequency details are well preserved at low bit rates, making the technique potentially useful in low-bandwidth mobile applications.
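To give a feel for the affine-symmetric idea, the toy sketch below restricts the transform to rotation: each block is approximated as a rotated copy of a single prototype, with a brute-force angle search standing in for the paper's GMM/EM estimation and a hand-picked prototype standing in for the ICA-based selection.

```python
# Rotation-only toy version of prototype-based block representation.
import numpy as np
from scipy.ndimage import rotate

def best_rotation(block, prototype, angles=range(0, 360, 15)):
    """Brute-force search for the rotation of the prototype that best matches the block."""
    best_angle, best_err = 0, np.inf
    for a in angles:
        cand = rotate(prototype.astype(float), a, reshape=False, mode='nearest')
        err = ((cand - block) ** 2).mean()
        if err < best_err:
            best_angle, best_err = a, err
    return best_angle, best_err

# Prototype: a vertical edge; the "image block" is its 45-degree rotation plus noise
proto = np.zeros((16, 16)); proto[:, :8] = 255.0
block = rotate(proto, 45, reshape=False, mode='nearest') + np.random.normal(0, 5, (16, 16))
angle, err = best_rotation(block, proto)
print("estimated angle:", angle, "  MSE:", round(err, 1))
```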


International Conference on Image Processing | 2005

Improved schemes for inter-frame coding in the H.264/AVC standard

Andy C. Yu; Graham R. Martin; Heechan Park

An efficient algorithm for inter-frame coding in the H.264/AVC standard is extended to provide a greater speed-up in computational performance for sequences containing high spatial correlation and motion. The proposed scheme features a more sophisticated search process and robust predictions to achieve better PSNR-rate performance over a large range of compression levels. Extensive simulation results demonstrate speed-ups of between 41% and 68%, with no noticeable deterioration in picture quality or compression ratio, even for the coding of complex video sequences.
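For reference, the quality and speed figures quoted in such comparisons are typically computed as below (not the paper's own evaluation code): PSNR of the reconstruction against the original, and the relative encoding-time saving.

```python
# PSNR and encoding-time speed-up as typically reported; inputs are dummy data.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def speedup_percent(t_reference, t_proposed):
    return 100.0 * (t_reference - t_proposed) / t_reference

orig = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
recon = np.clip(orig + np.random.normal(0, 3, orig.shape), 0, 255).astype(np.uint8)
print(round(psnr(orig, recon), 2), "dB,", round(speedup_percent(120.0, 52.0), 1), "% faster")
```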


International Conference on Multimedia and Expo | 2004

Arbitrarily-shaped video coding: smart padding versus MPEG-4 LPE/zero padding

Andy C. Yu; Guobin Shen; Bing Zeng; Oscar C. Au

An effective padding scheme, called smart padding (SmartPad), has been developed recently for the DCT coding of arbitrarily-shaped image/video objects, and its superior performance over MPEG-4 LPE padding has been firmly established. In this paper, we propose extending the use of SmartPad to all INTER frames (of arbitrary shapes), i.e., using SmartPad to replace the zero padding scheme recommended in MPEG-4. Our simulation results show that a very substantial performance gain (3-7 dB) is achieved compared with the MPEG-4 LPE/zero padding scheme.
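For context, a simplified sketch of the two baseline schemes the paper compares against is shown below: zero padding and MPEG-4-style low-pass extrapolation (LPE) padding. This is a rough approximation, not a bit-exact MPEG-4 implementation, and SmartPad itself is not reproduced here.

```python
# Simplified zero padding and LPE-style padding for an arbitrarily-shaped block.
import numpy as np

def zero_pad(block, mask):
    """mask is True on object pixels; exterior pixels are simply zeroed."""
    return np.where(mask, block, 0.0)

def lpe_pad(block, mask, iters=4):
    out = block.astype(np.float64).copy()
    out[~mask] = out[mask].mean()                  # step 1: fill exterior with the object mean
    for _ in range(iters):                         # step 2: low-pass filter the exterior pixels
        padded = np.pad(out, 1, mode='edge')
        avg4 = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[~mask] = avg4[~mask]
    return out

# Toy 8x8 boundary block: the object occupies the upper-left triangle
block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
mask = np.add.outer(np.arange(8), np.arange(8)) < 8
print(zero_pad(block, mask)[7])                    # exterior pixels forced to zero
print(np.round(lpe_pad(block, mask)[7], 1))        # exterior pixels smoothly extrapolated
```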


International Symposium on Circuits and Systems | 2006

Fast mesh-based motion estimation employing an embedded block model

Andy C. Yu; Heechan Park; Graham R. Martin

A fast algorithm for mesh-based motion estimation employing uniform triangular patches is proposed. The technique utilises an embedded block model to estimate the motion of the mesh grid points. The algorithm reduces the number of search iterations according to the inherent motion, without the need for time-consuming evaluation. A block-wise coding approach is taken for the motion information, permitting any picture degradation caused by the fast algorithm to be compensated by the residue coding. Simulations on three classes of test sequence show that the proposed algorithm achieves better PSNR-rate performance than the hexagonal matching algorithm. Moreover, a reduction of up to 91% in mesh iterations is obtained.
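A sketch of the embedded block model follows: each mesh grid point takes its motion vector from ordinary full-search block matching on a block centred at that point, rather than from iterative warping-based refinement. The block size and search range are illustrative assumptions.

```python
# Per-node motion estimation via block matching on a block centred at the node.
import numpy as np

def node_motion(cur, ref, node, bs=8, search=4):
    """Full-search block matching for one grid point (y, x); returns (dy, dx)."""
    h, w = cur.shape
    y, x = node
    y0, x0 = max(0, y - bs // 2), max(0, x - bs // 2)
    y1, x1 = min(h, y0 + bs), min(w, x0 + bs)
    block = cur[y0:y1, x0:x1].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y0 + dy, x0 + dx
            if ry < 0 or rx < 0 or ry + block.shape[0] > h or rx + block.shape[1] > w:
                continue                           # candidate falls outside the reference frame
            cand = ref[ry:ry + block.shape[0], rx:rx + block.shape[1]].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

# Toy usage: the current frame is the reference shifted by (2, 1)
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, 1), axis=(0, 1))
nodes = [(y, x) for y in range(8, 64, 16) for x in range(8, 64, 16)]
print({n: node_motion(cur, ref, n) for n in nodes[:4]})
```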


European Signal Processing Conference | 2005

A frequency domain approach to intra mode selection in H.264/AVC

Andy C. Yu; Graham R. Martin; Heechan Park

Collaboration


Dive into Andy C. Yu's collaborations.

Top Co-Authors

Guobin Shen
Hong Kong University of Science and Technology

Ngan King Ngi
The Chinese University of Hong Kong

Oscar C. Au
Hong Kong University of Science and Technology

Bing Zeng
University of Electronic Science and Technology of China