Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tiesong Zhao is active.

Publication


Featured research published by Tiesong Zhao.


IEEE Journal of Selected Topics in Signal Processing | 2013

Flexible Mode Selection and Complexity Allocation in High Efficiency Video Coding

Tiesong Zhao; Zhou Wang; Sam Kwong

To improve compression performance, High Efficiency Video Coding (HEVC) employs a quad-tree based block representation, namely Coding Tree Unit (CTU), which can support larger partitions and more coding modes than a traditional macroblock. Despite its high compression efficiency, the number of combinations of coding modes increases dramatically, which results in high computational complexity at the encoder. Here we propose a flexible framework for HEVC coding mode selection, with a user-defined global complexity factor. Based on linear programming, a hierarchical complexity allocation scheme is developed to allocate computational complexities among frames and Coding Units (CUs) to maximize the overall Rate-Distortion (RD) performance. In each CU, with the allocated complexity factor, a mode mapping based approach is employed for coding mode selection. Extensive experiments demonstrate that, with a series of global complexity factors, the proposed model can achieve good trade-offs between computational complexity and RD performance.
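The hierarchical allocation idea can be sketched as a toy proportional scheme: frames (or CUs) with larger weights receive a larger share of a user-defined complexity budget. The paper's actual scheme solves a linear program to maximize RD performance; the function name, weights, and clipping bounds below are purely illustrative.

```python
def allocate_complexity(global_factor, frame_weights, lo=0.2, hi=1.0):
    """Distribute a user-defined global complexity factor among frames.

    A toy proportional scheme: frames with larger weights (e.g. larger
    expected RD impact) receive a larger share of the total complexity
    budget, clipped to [lo, hi]. The paper formulates allocation as a
    linear program; this sketch only illustrates the budget idea.
    """
    n = len(frame_weights)
    total = sum(frame_weights)
    budget = global_factor * n          # total complexity to spend
    raw = [budget * w / total for w in frame_weights]
    return [min(hi, max(lo, f)) for f in raw]

# three frames, the middle one weighted twice as heavily
factors = allocate_complexity(0.6, [1.0, 2.0, 1.0])
```

Each per-frame factor then drives how many coding modes are actually evaluated in that frame's CUs.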


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Rate Control Optimization for Temporal-Layer Scalable Video Coding

Sudeng Hu; Hanli Wang; Sam Kwong; Tiesong Zhao; C.-C.J. Kuo

A novel frame-level rate control (RC) algorithm is presented in this paper for temporal scalability of scalable video coding. First, by introducing a linear quality dependency model, the quality dependency between a coding frame and its references is investigated for the hierarchical B-picture prediction structure. Second, linear rate-quantization (R-Q) and distortion-quantization (D-Q) models are introduced based on the different characteristics of the temporal layers. Third, according to the proposed quality dependency model and the R-Q and D-Q models for each temporal layer, adaptive weighting factors are derived to allocate bits efficiently among temporal layers. Experimental results on traditional QCIF/CIF sequences as well as standard-definition and high-definition sequences demonstrate that the proposed algorithm achieves excellent coding efficiency compared with other benchmark RC schemes.
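To make the R-Q idea concrete, a minimal sketch of inverting a linear rate model for the quantization step, assuming rate is linear in 1/Qstep. The model form and coefficients are illustrative; in the paper, the per-layer models are fitted and updated from previously coded frames.

```python
def qstep_from_target_bits(target_bits, a, b):
    """Invert a linear R-Q model R = a / Qstep + b to obtain the
    quantization step for a target bit budget.

    Coefficients a and b are assumed to be re-estimated per temporal
    layer from already-coded frames; the exact model in the paper may
    differ in form.
    """
    if target_bits <= b:
        raise ValueError("target bits below the model's rate floor")
    return a / (target_bits - b)

# illustrative coefficients: overhead floor b, scaling a
q = qstep_from_target_bits(5000.0, a=2.0e5, b=1000.0)
```

A rate controller would map the resulting Qstep to the nearest valid QP before encoding the frame.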


IEEE Transactions on Image Processing | 2015

Objective Quality Assessment for Color-to-Gray Image Conversion

Kede Ma; Tiesong Zhao; Kai Zeng; Zhou Wang

Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.
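The SSIM-style decomposition into luminance, contrast, and structure terms can be sketched globally over two signals. The real C2G-SSIM index operates locally, handles the color reference differently, and combines the terms depending on image type; this minimal sketch (with an illustrative stabilizing constant C) only shows the three-component structure.

```python
import math

def c2g_similarity(ref, gray, C=1e-4):
    """Global luminance, contrast, and structure similarity between a
    reference signal (e.g. the color image's luminance) and a C2G
    result, following the SSIM decomposition. Illustrative sketch only;
    the actual C2G-SSIM index is computed locally."""
    n = len(ref)
    mu_x = sum(ref) / n
    mu_y = sum(gray) / n
    var_x = sum((x - mu_x) ** 2 for x in ref) / n
    var_y = sum((y - mu_y) ** 2 for y in gray) / n
    cov = sum((x - mu_x) * (y - mu_y) for x, y in zip(ref, gray)) / n
    sx, sy = math.sqrt(var_x), math.sqrt(var_y)
    l = (2 * mu_x * mu_y + C) / (mu_x ** 2 + mu_y ** 2 + C)  # luminance
    c = (2 * sx * sy + C) / (var_x + var_y + C)              # contrast
    s = (cov + C) / (sx * sy + C)                            # structure
    return l * c * s

# identical signals score 1.0 by construction
score = c2g_similarity([0.2, 0.4, 0.6, 0.8], [0.2, 0.4, 0.6, 0.8])
```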


Proceedings of SPIE | 2014

Characterizing perceptual artifacts in compressed video streams

Kai Zeng; Tiesong Zhao; Abdul Rehman; Zhou Wang

To achieve optimal video quality under bandwidth and power constraints, modern video coding techniques employ lossy coding schemes, which often create compression artifacts that may lead to degradation of perceptual video quality. Understanding and quantifying such perceptual artifacts play important roles in the development of effective video compression, streaming and quality enhancement systems. Moreover, the characteristics of compression artifacts evolve over time due to the continuous adoption of novel coding structures and strategies during the development of new video compression standards. In this paper, we reexamine the perceptual artifacts created by standard video compression, summarizing commonly observed spatial and temporal perceptual distortions in compressed video, with emphasis on the perceptual temporal artifacts that have not been well identified or accounted for in previous studies. Furthermore, a floating effect detection method is proposed that not only detects the existence of floating, but also segments the spatial regions where floating occurs.
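One necessary (though not sufficient) cue for the floating artifact is a near-zero temporal residual: compressed content that "sticks" to the block grid while the scene moves. A toy block-wise candidate detector, with an illustrative block size and threshold; the paper's detector is considerably more elaborate.

```python
def floating_candidates(prev, curr, block=8, eps=1e-3):
    """Flag blocks whose content is (near-)identical between two frames.

    A zero temporal residual in an otherwise moving scene is a cue for
    floating regions; this sketch only flags candidates and does not
    implement the paper's full detection and segmentation method.
    """
    h, w = len(prev), len(prev[0])
    flags = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = sum(abs(prev[y][x] - curr[y][x])
                       for y in range(by, min(by + block, h))
                       for x in range(bx, min(bx + block, w)))
            flags.append(diff < eps)
    return flags

# left half unchanged (candidate), right half changed (not flagged)
flags = floating_candidates([[0, 0, 1, 1], [0, 0, 1, 1]],
                            [[0, 0, 2, 2], [0, 0, 2, 2]], block=2)
```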


IEEE Transactions on Image Processing | 2013

Multiview Coding Mode Decision With Hybrid Optimal Stopping Model

Tiesong Zhao; Sam Kwong; Hanli Wang; Zhou Wang; Zhaoqing Pan; C.-C. Jay Kuo

In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.
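The early-termination flavor of such a scheme can be sketched with a simple threshold rule: evaluate candidate modes in order of predicted likelihood and stop as soon as one is "good enough". In the paper the stopping rule and its threshold are derived from optimal stopping theory; here the threshold is just a parameter, so this is an illustration of the control flow, not the model.

```python
def mode_decision_with_stopping(costs, threshold):
    """Evaluate candidate modes in (predicted) order of likelihood and
    stop as soon as a mode's RD cost falls below a stopping threshold,
    trading a little RD performance for fewer mode evaluations.

    `costs` is the per-mode RD cost list; `threshold` stands in for the
    theoretically derived stopping criterion. Returns the chosen mode,
    its cost, and how many modes were actually evaluated.
    """
    best_mode, best_cost = None, float("inf")
    tested = 0
    for mode, cost in enumerate(costs):
        tested += 1
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if best_cost <= threshold:      # early termination
            break
    return best_mode, best_cost, tested

# stops after two of four modes once a cost <= 5.0 is found
result = mode_decision_with_stopping([10.0, 4.0, 6.0, 1.0], threshold=5.0)
```

The tradeoff is visible in the example: the globally best mode (cost 1.0) is never reached, but half of the evaluations are saved.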


IEEE Transactions on Image Processing | 2012

H.264/SVC Mode Decision Based on Optimal Stopping Theory

Tiesong Zhao; Sam Kwong; Hanli Wang; C.-C.J. Kuo

Fast mode decision algorithms have been widely used in the video encoder implementation to reduce encoding complexity yet without much sacrifice in the coding performance. Optimal stopping theory, which addresses early termination for a generic class of decision problems, is adopted in this paper to achieve fast mode decision for the H.264/Scalable Video Coding standard. A constrained model is developed with optimal stopping, and the solutions to this model are employed to initialize the candidate mode list and predict the early termination. Comprehensive simulation results are conducted to demonstrate that the proposed method strikes a good balance between low encoding complexity and high coding efficiency.


International Conference on Multimedia and Expo | 2010

Frame level rate control for H.264/AVC with novel Rate-Quantization model

Sudeng Hu; Hanli Wang; Sam Kwong; Tiesong Zhao

In this paper, a frame level rate control algorithm is proposed with a novel Rate-Quantization (R-Q) model for H.264/AVC. Firstly, a two-stage rate control scheme is adopted to decouple the inter-dependency between Rate Distortion Optimization (RDO) and rate control. Secondly, in order to predict the frame complexity accurately, instead of the Mean Absolute Difference (MAD) of the residual signal, bits information in the RDO-based mode decision process is employed to predict the frame complexity. Thirdly, a self-adaptive exponential R-Q model is proposed for rate control. Experimental results reveal that the proposed R-Q model estimates the actual output bits very well, and the novel rate control scheme performs excellently in both bit rate accuracy and coding efficiency as compared to JVT-W043 and the FixedQp tool in the Joint Scalable Video Model reference software.
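An exponential R-Q model of the form R(Q) = a * exp(-b * Q) can be inverted in closed form to pick the QP for a target bit budget. A minimal sketch: in the paper the model is self-adaptive (a and b are re-estimated from coded statistics), whereas the coefficients here are fabricated for illustration.

```python
import math

def qp_from_exponential_rq(target_bits, a, b):
    """Invert an exponential R-Q model R(Q) = a * exp(-b * Q) to find
    the quantization parameter that hits a target bit budget:
        Q = ln(a / R) / b.
    Coefficients a, b are assumed to be updated frame by frame from
    actual coded bits; the values used below are illustrative."""
    return math.log(a / target_bits) / b

# coefficients chosen so that 2000 target bits maps back to QP 30
qp = qp_from_exponential_rq(2000.0, a=2000.0 * math.exp(0.1 * 30), b=0.1)
```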


Asilomar Conference on Signals, Systems and Computers | 2013

On the use of SSIM in HEVC

Tiesong Zhao; Kai Zeng; Abdul Rehman; Zhou Wang

The Structural SIMilarity (SSIM) index has been attracting an increasing amount of attention recently in the video coding community as a perceptual criterion for testing and optimizing video codecs. Meanwhile, the arrival of the new MPEG-H/H.265 High Efficiency Video Coding (HEVC) standard creates new opportunities and challenges in perceptual video coding. In this paper, we first elaborate on the attributes that make SSIM a good candidate for perception-based development of HEVC and future video coding standards, for both testing and optimization purposes. We then address the computational issues in practical applications of SSIM in HEVC, in particular the trade-off between efficient computation and accurate estimation of SSIM when working with video codecs that have sophisticated block partitioning structures and aim for encoding videos with a wide range of spatial resolutions.


Pacific Rim Conference on Multimedia | 2010

Fast inter-mode decision based on rate-distortion cost characteristics

Sudeng Hu; Tiesong Zhao; Hanli Wang; Sam Kwong

In this paper, a new fast mode decision (FMD) algorithm is proposed for the state-of-the-art video coding standard H.264/AVC. Firstly, based on Rate-Distortion (RD) cost characteristics, all inter modes are classified into two groups: the Skip group (including both Skip and Direct modes) and the remaining inter modes, referred to as non-Skip modes. In order to select the best mode for coding a Macroblock (MB), the minimum RD costs of these two mode groups are predicted respectively. Then, for the Skip mode, an early Skip mode detection scheme is proposed; for the non-Skip modes, a three-stage scheme is developed to speed up the mode decision process. Experimental results demonstrate that the proposed algorithm is robust in coding efficiency across different Quantization parameters (Qps) and various video sequences, and achieves about 54% time saving on average with negligible degradation in Peak-Signal-to-Noise-Ratio (PSNR) and an acceptable increase in bit rate.
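The early Skip test reduces to a simple comparison: if coding the macroblock as Skip is already cheaper than a fraction of the RD cost predicted for the best non-Skip mode, the remaining modes need not be evaluated. A minimal sketch; the margin here is an illustrative parameter, whereas the paper derives its thresholds from RD-cost statistics.

```python
def early_skip(skip_cost, predicted_nonskip_cost, margin=0.9):
    """Early Skip-mode detection sketch.

    skip_cost: actual RD cost of coding the MB as Skip/Direct.
    predicted_nonskip_cost: minimum non-Skip RD cost predicted from
        neighboring macroblocks (an assumption of this sketch).
    Returns True if Skip should be selected without testing the
    remaining inter modes.
    """
    return skip_cost <= margin * predicted_nonskip_cost

decision = early_skip(800.0, 1000.0)   # Skip clearly wins here
```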


IEEE Transactions on Broadcasting | 2015

SSIM-Based Coarse-Grain Scalable Video Coding

Tiesong Zhao; Jiheng Wang; Zhou Wang; Chang Wen Chen

We propose an improved coarse-grain scalable video coding (SVC) approach based on the structural similarity (SSIM) index as the visual quality criterion, aiming at maximizing the overall coding performance constrained by user-defined quality weightings for all scalable layers. First, we develop an interlayer rate-SSIM dependency model, by investigating bit rate and SSIM relationships between different layers. Second, a reduced-reference SSIM-Q model and a Laplacian R-Q model are introduced for SVC, by incorporating the characteristics of the hierarchical prediction structure in each layer. Third, based on the user-defined weightings and the proposed models, we design a rate-distortion optimization approach that adaptively adjusts the Lagrange multipliers for all layers to maximize the overall rate-SSIM performance of the scalable encoder. Experiments with multiple layers, different layer weightings, and various videos demonstrate that the proposed framework achieves better rate-SSIM performance than a single-layer optimization method, and provides better coding efficiency compared with the conventional SVC scheme. Subjective tests further demonstrate the benefits of the proposed scheme.
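Rate-SSIM optimization with a Lagrange multiplier can be sketched as minimizing J = (1 - SSIM) + lambda * R over candidate operating points. In the paper, lambda is adapted per layer from the interlayer rate-SSIM and SSIM-Q/R-Q models; here it is a fixed input and the candidate points are fabricated for illustration.

```python
def best_operating_point(points, lam):
    """Pick the (rate, ssim) pair minimizing the rate-SSIM Lagrangian
        J = (1 - SSIM) + lam * R.
    `points` is a list of candidate (rate_in_bits, ssim) pairs for one
    layer; `lam` stands in for the adaptively adjusted multiplier."""
    return min(points, key=lambda p: (1.0 - p[1]) + lam * p[0])

# three hypothetical operating points: cheap/low quality -> costly/high
pt = best_operating_point([(100.0, 0.90), (200.0, 0.95), (400.0, 0.97)],
                          lam=0.0004)
```

A smaller lambda shifts the choice toward higher-rate, higher-SSIM points; per-layer weightings would enter through per-layer lambdas.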

Collaboration


Dive into Tiesong Zhao's collaborations.

Top Co-Authors

Sam Kwong (City University of Hong Kong)
Zhou Wang (University of Waterloo)
Sudeng Hu (University of Southern California)
Jia Zhang (City University of Hong Kong)
Yun Zhang (Chinese Academy of Sciences)
Zhaoqing Pan (Nanjing University of Information Science and Technology)
Kai Zeng (University of Waterloo)