
Publication


Featured research published by Chun-Ho Cheung.


IEEE Transactions on Circuits and Systems for Video Technology | 2002

A novel cross-diamond search algorithm for fast block motion estimation

Chun-Ho Cheung; Lai-Man Po

In block motion estimation, search patterns with different shapes or sizes, together with the center-biased characteristics of the motion-vector distribution, have a large impact on search speed and quality. We propose a novel algorithm that uses a cross-shaped search pattern as the initial step and large/small diamond search (DS) patterns as the subsequent steps for fast block motion estimation. The initial cross-shaped pattern is designed to fit the cross-center-biased motion-vector distribution of real-world sequences by evaluating the nine relatively more probable candidates located horizontally and vertically at the center of the search grid. The proposed cross-diamond search (CDS) algorithm employs a halfway-stop technique and finds small motion vectors with fewer search points than the DS algorithm while maintaining similar or even better search quality. CDS can achieve a speedup of up to 40% over DS. Experimental results show that CDS is much more robust, and provides faster search speed and smaller distortion than other popular fast block-matching algorithms.
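To illustrate the search-pattern idea, here is a minimal sketch of a cross-then-diamond block-matching search. This is an illustration under simplifying assumptions, not the paper's exact CDS algorithm: the cost function is passed in as a black box, only one halfway-stop rule is shown, and the pattern offsets are the standard cross/diamond shapes.

```python
# Minimal sketch of a cross-then-diamond block-matching search.
# The cost function and the single halfway-stop rule are simplified
# illustrations, not the exact CDS algorithm from the paper.

def cross_diamond_search(cost, search_range=7):
    """cost(dx, dy) -> distortion; returns the best (dx, dy) found."""
    # Step 1: small cross pattern at the search-grid centre
    # (centre-biased: most real-world motion vectors are near (0, 0)).
    cross = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    best = min(cross, key=lambda p: cost(*p))
    if best == (0, 0):            # halfway stop: stationary block
        return best
    # Step 2: repeat a large diamond pattern around the current best
    # until the minimum stays at the diamond centre.
    large = [(2, 0), (-2, 0), (0, 2), (0, -2),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    while True:
        cands = [(best[0] + dx, best[1] + dy) for dx, dy in large
                 if abs(best[0] + dx) <= search_range
                 and abs(best[1] + dy) <= search_range]
        new_best = min(cands + [best], key=lambda p: cost(*p))
        if new_best == best:
            break
        best = new_best
    # Step 3: final refinement with the small diamond pattern.
    small = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    cands = [(best[0] + dx, best[1] + dy) for dx, dy in small]
    return min(cands + [best], key=lambda p: cost(*p))
```

With a convex cost surface the search walks from the centre to the minimum; the early return captures the paper's point that stationary blocks are found with only a handful of checks.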


IEEE Transactions on Multimedia | 2005

Novel cross-diamond-hexagonal search algorithms for fast block motion estimation

Chun-Ho Cheung; Lai-Man Po

We propose two cross-diamond-hexagonal search (CDHS) algorithms, which differ from each other in the size of their hexagonal search patterns. These algorithms employ two cross-shaped search patterns consecutively in the initial steps and then switch to diamond-shaped patterns. To further reduce the number of checking points, two pairs of hexagonal search patterns are proposed, used in conjunction with candidates located at the diamond corners. Experimental results show that the proposed CDHSs are faster than the diamond search (DS) by about 144% and the cross-diamond search (CDS) by about 73%, while maintaining similar prediction quality.


IEEE Transactions on Circuits and Systems for Video Technology | 2003

Adjustable partial distortion search algorithm for fast block motion estimation

Chun-Ho Cheung; Lai-Man Po

Quality control is usually absent from traditional fast block motion estimators. A novel block-matching algorithm for fast motion estimation, named the adjustable partial distortion search (APDS) algorithm, is proposed. It is a new normalized partial distortion comparison method capable of trading prediction accuracy against search speed through a quality factor k. With this adjustability, APDS acts as the normalized partial distortion search algorithm (NPDS) when k = 0 and as the conventional partial distortion search algorithm (PDS) when k = 1. In addition, it uses a halfway-stop technique with progressive partial distortions (PPD) to increase the early rejection rate of impossible candidate motion vectors at very early stages. Simulations with PPD reduce computation by up to 38 times with less than 0.50 dB degradation in PSNR, as compared to the full-search algorithm (FS). Experimental results show that APDS provides peak signal-to-noise ratio performance very close to that of FS with speedup ratios of 7 to 16 times, and close to that of NPDS with speedups of 22 to 32 times, as compared to FS.
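The core mechanism can be sketched as follows. This is a simplified illustration, not the paper's exact normalization: partial sums of absolute differences are checked at a few progressive stages, and a candidate is rejected early when its accumulated distortion exceeds a threshold that the quality factor k interpolates between an aggressive NPDS-like rule (k = 0) and a conservative PDS-like rule (k = 1).

```python
# Illustrative sketch of progressive partial-distortion rejection with a
# quality factor k in [0, 1]. Thresholds are simplified, not the paper's
# exact normalized comparison.

def partial_distortion_search(candidates, block_pairs, k=0.5):
    """
    candidates: list of motion vectors.
    block_pairs(mv): yields per-pixel absolute differences for that
    candidate, in a progressive (interleaved) order.
    Returns (best_mv, best_sad).
    """
    best_mv, best_sad = None, float('inf')
    for mv in candidates:
        diffs = list(block_pairs(mv))
        n = len(diffs)
        acc = 0.0
        rejected = False
        step = max(1, n // 4)          # check after each quarter of the pixels
        for i, d in enumerate(diffs, start=1):
            acc += d
            if i % step == 0 and i < n:
                # k = 0: reject when the normalized partial distortion already
                # exceeds the best (NPDS-like). k = 1: reject only when the
                # partial sum alone exceeds the best full SAD (PDS-like).
                threshold = best_sad * ((1 - k) * (i / n) + k)
                if acc >= threshold:
                    rejected = True
                    break
        if not rejected and acc < best_sad:
            best_mv, best_sad = mv, acc
    return best_mv, best_sad
```

Smaller k rejects poor candidates after fewer pixel comparisons at some risk of rejecting the true minimum, which mirrors the speed/quality trade-off the abstract describes.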


international conference on neural networks and signal processing | 2003

Center-biased frame selection algorithms for fast multi-frame motion estimation in H.264

Chi-Wang Ting; Lai-Man Po; Chun-Ho Cheung

The upcoming video coding standard H.264 allows motion estimation to be performed on multiple reference frames. This new feature significantly improves the prediction accuracy of inter-coded blocks, but it is extremely computationally intensive: the reference software adopts a full-search scheme, and the complexity of multi-frame motion estimation increases linearly with the number of reference frames used. However, the distortion gain contributed by each reference frame varies with the motion content of the video sequence, so it is not efficient to search through all candidate frames. In this paper, a novel center-biased frame selection method is proposed to speed up multi-frame motion estimation in H.264. We apply a center-biased frame selection path to identify the ultimate reference frame among all candidates. Simulation results show that the proposed method consistently saves about 77% of the computation while keeping picture quality similar to that of full search.
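The center-biased idea can be sketched in a few lines. This is a hypothetical simplification of the paper's selection path: reference frames are examined starting from the temporally nearest one (the most probable choice), and the walk stops as soon as a farther frame no longer improves the matching cost, so most of the candidate frames are never searched.

```python
# Hypothetical sketch of center-biased reference-frame selection: examine
# frames from the temporally nearest outward, and stop early once an
# additional frame no longer improves the matching cost. The real method's
# selection path and thresholds are not reproduced here.

def select_reference_frame(frame_costs, tolerance=0.0):
    """
    frame_costs: best matching cost per reference frame, index 0 being the
    temporally nearest frame. Returns the chosen frame index.
    """
    best_idx, best_cost = 0, frame_costs[0]
    for idx in range(1, len(frame_costs)):
        cost = frame_costs[idx]
        if cost + tolerance < best_cost:
            best_idx, best_cost = idx, cost
        else:
            # center-biased early stop: farther frames are unlikely to help
            break
    return best_idx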


international conference on image processing | 2002

A novel small-cross-diamond search algorithm for fast video coding and videoconferencing applications

Chun-Ho Cheung; Lai-Man Po

Search patterns and the center-biased characteristics of the motion-vector distribution (MVD) have a large impact on both the search speed and quality of block motion estimation (BME). We propose a novel algorithm that uses two cross-shaped search patterns as the first two steps and large/small diamond-shaped patterns as the subsequent steps. The first small cross-shaped pattern fits the cross-center-biased MVD characteristics of real-world sequences by evaluating the five relatively more probable candidates arranged as a cross at the search-grid center. The proposed small-cross-diamond search (SCDS) algorithm employs a halfway-stop technique and can find small motion vectors with far fewer search points than the diamond search (DS) algorithm while maintaining similar or even better quality. The speedup of SCDS over DS can be up to 146%, i.e., 2.46 times faster than DS. Simulations show that SCDS is much more robust and provides faster search speed and smaller distortion than other fast algorithms, making it very suitable for videoconferencing applications.


international conference on image processing | 2002

Merged-color histogram for color image retrieval

Ka-Man Wong; Chun-Ho Cheung; Lai-Man Po

Conventional histogram-based image retrieval algorithms usually find only the intersecting areas of the color-component distributions of images, and thus work well for matching images with exact colors rather than similar colors, especially for computer-generated pictures. They can be greatly affected by overall variations such as intensity changes. A novel merged-color histogram (MCH) method for color image retrieval is proposed to facilitate matching between similar colors by means of color quantization and palette merging. Color quantization compacts the color information and matches each color rather than individual color components, and matching of similar colors is accomplished through palette merging. Experimental results show that the proposed MCH method is about 11-32% more precise in the first 20 retrievals for the same image query, and recalls 14-23% more relevant images in the first 100 retrievals, compared to the conventional RGB-based histogram method.
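The two building blocks mentioned above can be sketched as follows. This is an illustration under stated assumptions, not the paper's MCH method: a coarse 2-bit-per-channel uniform quantizer stands in for the paper's palette, and histogram intersection stands in for the matching step; the palette-merging stage is omitted.

```python
# Illustrative sketch: coarse color quantization into a small palette,
# plus histogram intersection as the similarity measure. The uniform
# 2-bit-per-channel quantizer is an assumption, not the paper's quantizer,
# and palette merging is not shown.

def quantize(pixel, bits=2):
    """Map an (r, g, b) pixel (0-255 per channel) to a palette index."""
    shift = 8 - bits
    r, g, b = (c >> shift for c in pixel)
    return (r << (2 * bits)) | (g << bits) | b

def color_histogram(pixels, bits=2):
    """Normalized histogram over the quantized palette."""
    hist = [0] * (1 << (3 * bits))
    for p in pixels:
        hist[quantize(p, bits)] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """1.0 = identical distributions, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Quantizing before matching is what lets two slightly different shades fall into the same bin and therefore match, which is the effect the abstract attributes to color quantization.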


Signal Processing-image Communication | 2013

Depth map misalignment correction and dilation for DIBR view synthesis

Xuyuan Xu; Lai-Man Po; Ka-Ho Ng; Litong Feng; Kwok-Wai Cheung; Chun-Ho Cheung; Chi-Wang Ting

The quality of views synthesized by Depth Image Based Rendering (DIBR) depends heavily on the accuracy of the depth map, especially the alignment of object boundaries with those of the texture image. In practice, the misalignment of sharp depth map edges is the major cause of annoying artifacts in the disoccluded regions of the synthesized views. The conventional smoothing-filter approach blurs the depth map to reduce the disoccluded regions; its drawbacks are degraded 3D perception in the reconstructed 3D videos and destruction of texture in background regions. The conventional edge-preserving filter uses color image information to align depth edges with color edges; unfortunately, the characteristics of color edges and depth edges are very different, which causes annoying boundary artifacts in the synthesized virtual views. A recent reliability-based approach uses reliable warping information from other views to fill the holes, but it is not suitable for view synthesis in video-plus-depth based DIBR applications. In this paper, a new depth map preprocessing approach is proposed. It uses the watershed color segmentation method to correct the depth map misalignment, and the depth map object boundaries are then extended to cover the transitional edge regions of the color image. This approach can handle sharp depth map edges lying either inside or outside the object boundaries in the 2D sense. The quality of the disoccluded regions of the synthesized views is significantly improved, and unknown depth values can also be estimated. Experimental results show that the proposed method achieves superior performance for DIBR view synthesis, especially when generating large-baseline virtual views.
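The boundary-extension step can be pictured as a morphological dilation of the foreground depth regions. The sketch below is a simplified stand-in for that step only (the watershed correction stage is omitted), assuming larger depth values mean nearer objects and using a plain 4-neighbourhood maximum filter:

```python
# Simplified sketch of the boundary-dilation step: foreground (larger-depth)
# regions are grown by one pixel per iteration so that sharp depth edges
# cover the transitional edge region of the color image. The watershed
# misalignment-correction stage is not shown.

def dilate_depth(depth, iterations=1):
    """depth: 2D list of depth values; grows regions of larger depth."""
    h, w = len(depth), len(depth[0])
    for _ in range(iterations):
        out = [row[:] for row in depth]
        for y in range(h):
            for x in range(w):
                # take the maximum depth over the 4-neighbourhood
                for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[y][x] = max(out[y][x], depth[ny][nx])
        depth = out
    return depth
```

Each iteration pushes the foreground boundary outward by one pixel, so the number of iterations controls how much of the transitional edge region the object's depth covers.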


IEEE Transactions on Circuits and Systems for Video Technology | 2016

No-Reference Video Quality Assessment With 3D Shearlet Transform and Convolutional Neural Networks

Yuming Li; Lai-Man Po; Chun-Ho Cheung; Xuyuan Xu; Litong Feng; Fang Yuan; Kwok-Wai Cheung

In this paper, we propose an efficient general-purpose no-reference (NR) video quality assessment (VQA) framework based on the 3D shearlet transform and a convolutional neural network (CNN). Taking video blocks as input, simple and efficient primary spatiotemporal features are extracted by the 3D shearlet transform; these features are capable of capturing natural scene statistics. A CNN and logistic regression are then concatenated to exaggerate the discriminative parts of the primary features and predict a perceptual quality score. The resulting algorithm, which we name shearlet- and CNN-based NR VQA (SACONVA), is tested on the well-known VQA databases of the Laboratory for Image & Video Engineering, the Image & Video Processing Laboratory, and CSIQ. The results demonstrate that SACONVA predicts video quality well and is competitive with current state-of-the-art full-reference VQA methods and general-purpose NR-VQA algorithms. Moreover, SACONVA is extended to classify the different video distortion types in these three databases and achieves excellent classification accuracy. We also demonstrate that SACONVA can be applied directly in real applications such as blind video denoising.


international conference on neural networks and signal processing | 2003

A novel ordered-SPIHT for embedded color image coding

Chun-Ling Yang; Lai-Man Po; Chun-Ho Cheung; Kwok-Wai Cheung

The magnitude distribution of DWT coefficients of color image data in Karhunen-Loeve (KL) space is investigated. Given the KLT matrix formed, the magnitudes of the DWT coefficients of the three components K1, K2, and K3 of a color image in KL space generally satisfy K1 > K2 > K3. Based on this characteristic and other features of the DWT coefficients, a novel ordered SPIHT for embedded color image coding (OCSPIHT) is proposed in this paper. The DWT coefficients of an image in KL space are split into several sets according to their magnitudes; for each threshold, only the sets containing coefficients equal to or larger than that threshold are encoded. This saves many bits and improves coding performance, especially at low bit rates.
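The KL-space ordering rests on the fact that the principal component of the RGB pixel cloud (K1) carries the largest variance. A minimal sketch of finding that direction is shown below; it uses power iteration on the 3x3 pixel covariance matrix instead of a full eigendecomposition, and is an illustration of the KLT idea rather than the paper's coder.

```python
# Hypothetical sketch of the Karhunen-Loeve idea behind the ordering: the
# principal component (K1) of the RGB pixel cloud carries the largest
# variance, which is why coefficients in K1 tend to dominate K2 and K3.
# Power iteration replaces a full eigendecomposition for simplicity.

def principal_component(pixels, iters=100):
    """pixels: list of (r, g, b) tuples; returns the unit K1 direction."""
    n = len(pixels)
    mean = [sum(p[i] for p in pixels) / n for i in range(3)]
    centred = [[p[i] - mean[i] for i in range(3)] for p in pixels]
    # 3x3 covariance matrix of the pixel cloud
    cov = [[sum(x[i] * x[j] for x in centred) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

For natural images the returned direction is close to the luminance axis, so K1 behaves like a luminance channel and its DWT coefficients dominate, matching the K1 > K2 > K3 relation above.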


international symposium on circuits and systems | 2013

Depth-aided exemplar-based hole filling for DIBR view synthesis

Xuyuan Xu; Lai-Man Po; Chun-Ho Cheung; Litong Feng; Ka-Ho Ng; Kwok-Wai Cheung

The quality of a view synthesized by Depth-Image-Based Rendering (DIBR) depends heavily on hole filling, especially for synthesized views with large disocclusions. Many hole-filling methods have been proposed to improve synthesized view quality, and inpainting is the most popular approach to recovering the disocclusions. However, conventional inpainting either blurs the hole regions via diffusion or propagates foreground information into the disoccluded regions, creating annoying artifacts in the synthesized virtual views. This paper proposes a depth-aided exemplar-based inpainting method for recovering large disocclusions. It consists of two processes: warped depth map filling and warped color image filling. Since the depth map can be considered a grey-scale image without texture, it is much easier to fill. Disoccluded regions of the color image are then predicted from the associated filled depth map. Regions with texture lying in the background are given higher filling priority than other regions, and disoccluded regions are filled by propagating background texture through exemplar-based inpainting. Artifacts created by diffusion, or by using foreground information for prediction, are thus eliminated. Experimental results show that texture can be recovered in large disocclusions and that the proposed method gives better visual quality than existing methods.

Collaboration

Top co-authors of Chun-Ho Cheung:

- Lai-Man Po (City University of Hong Kong)
- Kwok-Wai Cheung (Chu Hai College of Higher Education)
- Xuyuan Xu (City University of Hong Kong)
- Litong Feng (City University of Hong Kong)
- Ka-Ho Ng (City University of Hong Kong)
- Chi-Wang Ting (City University of Hong Kong)
- Yuming Li (City University of Hong Kong)
- Fang Yuan (City University of Hong Kong)
- Ka-Man Wong (City University of Hong Kong)
- Sheung-Yeung Wang (City University of Hong Kong)