
Publication


Featured research published by Jiunn-Tsair Fang.


Data Compression Conference | 2015

SVM-Based Fast Intra CU Depth Decision for HEVC

Yen-Chun Liu; Zong-Yi Chen; Jiunn-Tsair Fang; Pao-Chi Chang

In this paper, a fast CU depth decision algorithm based on the support vector machine (SVM) is proposed to reduce the computational complexity of HEVC intra coding. Applying SVM provides a systematic way to develop criteria for early CU splitting and early termination. Appropriate features for training the SVM models are extracted from the spatial and pixel domains. An artificial neural network is used to analyze the impact of each extracted feature on the CU size decision, and different weights are assigned to the outputs of the SVMs. Experimental results show that the proposed fast algorithm saves up to 58.9% of encoding time, and 46.5% on average, compared with HM 12.1.
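The decision logic described above can be sketched as two linear classifiers with a confidence margin. This is a minimal illustration, not the paper's implementation: the features, weights, and margin below are invented for the example, and the models are assumed to be pretrained.

```python
# Hypothetical sketch of a two-SVM early CU depth decision.
# Feature values, weights, and the margin are illustrative only.

def linear_svm_score(features, weights, bias):
    """Decision value of a trained linear SVM: w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def cu_depth_decision(features, split_model, term_model, margin=0.5):
    """Combine an early-split SVM and an early-termination SVM.

    Returns "split" to skip rate-distortion checks at the current depth,
    "terminate" to stop splitting, or "full_search" when neither
    classifier is confident (decision value inside the margin).
    """
    split_score = linear_svm_score(features, *split_model)
    term_score = linear_svm_score(features, *term_model)
    if split_score > margin:
        return "split"
    if term_score > margin:
        return "terminate"
    return "full_search"

# Illustrative features (e.g. texture variance, neighbor depth, gradient).
split_model = ([1.2, 0.7, 1.0], -1.0)   # (weights, bias), assumed pretrained
term_model = ([-1.0, -0.5, -0.8], 0.9)
print(cu_depth_decision([0.8, 0.6, 0.9], split_model, term_model))
```

Weighting the two SVM outputs differently, as the abstract describes, would correspond to scaling the scores before the margin comparison.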


IEEE Transactions on Broadcasting | 2014

Quantization-Distortion Models for Interlayer Predictions in H.264/SVC Spatial Scalability

Ren-Jie Wang; Jiunn-Tsair Fang; Yan-Ting Jiang; Pao-Chi Chang

H.264 scalable extension (H.264/SVC) is the current state-of-the-art scalable video coding standard. Its interlayer prediction provides higher coding efficiency than previous standards. Since the standard was proposed, several attempts have been made to improve performance based on its coding structure. Quantization-distortion (Q-D) modeling is a fundamental issue in video coding; this paper therefore proposes new Q-D models for the three interlayer predictions in H.264/SVC spatial scalability, that is, interlayer motion prediction, intraprediction, and residual prediction. An existing single-layer offline Q-D model is extended to H.264/SVC spatial scalable coding. In the proposed method, the residual power from the interlayer prediction is decomposed into the coding distortion and the prediction distortion. The prediction distortion is the mean square error (MSE) between two original signals, which can be obtained by low-complexity preprocessing. The coding distortion can therefore be estimated from both the quantization parameter (QP) and a precalculated prediction distortion before the encoding process. Consequently, the quality estimated with the proposed models achieved a high accuracy of over 90% on average for the three interlayer predictions.
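The decomposition idea can be illustrated with textbook approximations: the uniform-quantizer distortion model Δ²/12 and the standard H.264 mapping from QP to quantization step (Qstep roughly doubles every 6 QP). These stand in for the paper's fitted Q-D model and are not its actual formulas.

```python
# Sketch of D_total ~ D_prediction + D_coding(QP), using textbook
# approximations rather than the paper's fitted model.

def prediction_distortion(orig_a, orig_b):
    """MSE between two original signals (computable before encoding)."""
    n = len(orig_a)
    return sum((a - b) ** 2 for a, b in zip(orig_a, orig_b)) / n

def coding_distortion(qp):
    """Approximate quantization distortion via Qstep = 2^((QP-4)/6)."""
    qstep = 2 ** ((qp - 4) / 6)
    return qstep ** 2 / 12  # uniform-quantizer MSE approximation

def estimated_total_distortion(orig_a, orig_b, qp):
    """Estimate distortion before encoding: preprocessing MSE plus a
    QP-driven coding-distortion term."""
    return prediction_distortion(orig_a, orig_b) + coding_distortion(qp)

base_layer = [10, 12, 14, 16]
enh_layer = [11, 12, 15, 15]
print(round(estimated_total_distortion(base_layer, enh_layer, qp=28), 3))
```

The key point mirrored here is that both terms are available before the encoding loop runs, which is what makes pre-encoding quality estimation possible.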


Journal of Visual Communication and Image Representation | 2016

Computational complexity allocation and control for inter-coding of high efficiency video coding with fast coding unit split decision

Jiunn-Tsair Fang; Zong-Yi Chen; Chang-Rui Lai; Pao-Chi Chang

HEVC provides a quadtree structure of the coding unit (CU) with four coding-tree depths to facilitate high coding efficiency. However, compared with previous standards, the HEVC encoder increases computational complexity considerably, making it inappropriate for applications in power-constrained devices. This study therefore proposes a computational complexity allocation and control method for the low-delay P-frame configuration of the HEVC encoder. The complexity allocation covers the group of pictures (GOP) layer, the frame layer, and the CU layer, and each layer uses an individual method to distribute the complexity. In particular, motion vector estimation information is applied for CU complexity allocation and depth split determination. The total computational complexity can thus be reduced to 80%, 60%, or even lower. Experimental results revealed that the average BD-PSNR decreased by approximately 0.1 dB with a BD-bitrate increment of 2% when the target complexity was reduced to 60%.
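A layered budget can be pictured as follows. This is a generic sketch of hierarchical complexity allocation, assuming per-frame cost estimates (e.g. from co-located frames) and per-depth cost estimates; the paper's actual allocation rules, which also use motion vector information, are not reproduced here.

```python
# Hypothetical sketch of hierarchical complexity allocation.

def allocate_frame_budgets(target_ratio, frame_costs):
    """Split the GOP budget (target_ratio of estimated full-search cost)
    across frames in proportion to each frame's estimated cost."""
    gop_budget = target_ratio * sum(frame_costs)
    total = sum(frame_costs)
    return [gop_budget * c / total for c in frame_costs]

def max_cu_depth(cu_budget, depth_costs):
    """Deepest coding-tree depth whose cumulative cost fits the CU budget,
    so deeper splits are skipped once the budget is exhausted."""
    spent, depth = 0.0, 0
    for d, cost in enumerate(depth_costs):
        spent += cost
        if spent > cu_budget:
            break
        depth = d
    return depth

print(allocate_frame_budgets(0.6, [100, 120, 90, 110]))
print(max_cu_depth(2.5, [1.0, 1.0, 1.0, 1.0]))
```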


Pacific-Rim Symposium on Image and Video Technology | 2011

Quality estimation for H.264/SVC inter-layer residual prediction in spatial scalability

Ren-Jie Wang; Yan-Ting Jiang; Jiunn-Tsair Fang; Pao-Chi Chang

Scalable Video Coding (SVC) provides efficient compression of video bitstreams equipped with various scalable configurations. H.264 scalable extension (H.264/SVC) is the most recent scalable coding standard. It involves state-of-the-art inter-layer prediction to provide higher coding efficiency than previous standards. Moreover, the required video quality usually differs across situations, such as link conditions or video content. It is therefore desirable to construct a model with which the target quality can be estimated in advance. This work proposes a Quantization-Distortion (Q-D) model for H.264/SVC spatial scalability, so that video quality can be estimated before the actual encoding is performed. In particular, we further decompose the residual from the inter-layer residual prediction into the previous distortion and the Prior-Residual, so that the residual can be estimated. In simulations based on the proposed model, we estimate the actual Q-D curves with an average accuracy of 88.79%.


Journal of Electronic Imaging | 2014

Computation reduction in high-efficiency video coding based on the similarity of transform unit blocks

Zong-Yi Chen; Jiunn-Tsair Fang; Chung-Shian Chiang; Pao-Chi Chang

The new video coding standard, high-efficiency video coding, adopts a quadtree structure to provide variable transform sizes in the transform coding process. The heuristic examination of transform unit (TU) modes substantially increases the computational complexity, compared to previous video coding standards. Thus, efficiently reducing the TU candidate modes is crucial. In the proposed similarity-check scheme, sub-TU blocks are categorized into a strongly similar case or a weakly similar case, and the early TU termination or early TU splitting procedure is performed. For the strongly similar case, a property called zero-block inheritance combined with a zero-block detection technique is applied to terminate the TU search process early. For the weakly similar case, the gradients of residuals representing the similarity of coefficients are used to skip the current TU mode or stop the TU splitting process. In particular, the computation time is further reduced because all the required information for the proposed mode decision criteria is derived before performing the transform coding. The experimental results revealed that the proposed algorithm can save ~64% of the TU encoding time on average in the interprediction, with a negligible rate-distortion loss.
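The two pre-transform measurements mentioned above can be sketched as simple residual-domain checks. The threshold and the gradient definition below are illustrative assumptions, not the paper's calibrated criteria.

```python
# Illustrative pre-transform checks on a residual block stored row-major.

def is_zero_block(residuals, threshold):
    """Zero-block detection: if the residual energy proxy (sum of absolute
    residuals) is below a QP-dependent threshold, all transform
    coefficients are expected to quantize to zero."""
    return sum(abs(r) for r in residuals) < threshold

def residual_gradient(residuals, width):
    """Mean absolute horizontal gradient of a residual block, usable as a
    cheap similarity measure between sub-TU blocks."""
    total, count = 0, 0
    for i in range(len(residuals) - 1):
        if (i + 1) % width:          # skip wrap-around at row ends
            total += abs(residuals[i + 1] - residuals[i])
            count += 1
    return total / count

print(is_zero_block([1, -1, 0, 1], threshold=4))
print(residual_gradient([1, 2, 3, 4], width=2))
```

Both quantities depend only on residuals, matching the abstract's point that the decision criteria are available before the transform is performed.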


Multimedia Tools and Applications | 2017

Deep feature learning for cover song identification

Jiunn-Tsair Fang; Chi-Ting Day; Pao-Chi Chang

The identification of a cover song, which is an alternative version of a previously recorded song, has received increasing attention in music retrieval. Methods for identifying a cover song typically compare the similarity of chroma features between a query song and each song in the data set. However, considerable time is required for pairwise comparisons. In this study, chroma features were patched to preserve the melody. An intermediate representation was trained to reduce the dimension of each patch of chroma features. The training was performed using an autoencoder, commonly used in deep learning for dimensionality reduction. Experimental results showed that the proposed method achieved better identification accuracy and spent less time on similarity matching, on both the covers80 dataset and the Million Song Dataset, compared with traditional approaches.
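The matching stage can be sketched as: encode each chroma patch into a low-dimensional embedding, then rank database songs by embedding similarity. The single tanh layer standing in for the trained autoencoder's encoder, and the cosine scoring, are assumptions for illustration; the paper's network and distance measure may differ.

```python
import math

def encode(patch, weights):
    """Stand-in for a trained autoencoder's encoder: one linear layer
    with tanh, mapping a chroma-feature patch to a short embedding."""
    return [math.tanh(sum(w * x for w, x in zip(row, patch))) for row in weights]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(query_patches, database, weights):
    """Score each database song by average embedding similarity to the
    query's patches; comparisons run in the reduced dimension."""
    q = [encode(p, weights) for p in query_patches]
    scores = {}
    for name, patches in database.items():
        e = [encode(p, weights) for p in patches]
        scores[name] = sum(cosine(a, b) for a, b in zip(q, e)) / len(q)
    return max(scores, key=scores.get)

weights = [[1.0, 0.0], [0.0, 1.0]]       # illustrative encoder weights
db = {"songA": [[1.0, 0.0], [0.0, 1.0]],
      "songB": [[-1.0, 0.0], [0.0, -1.0]]}
print(best_match([[1.0, 0.0], [0.0, 1.0]], db, weights))
```

The speedup claimed in the abstract comes from comparing short embeddings instead of full chroma patches.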


Journal of Electronic Imaging | 2016

Complexity control for high-efficiency video coding by coding layers complexity allocations

Jiunn-Tsair Fang; Kai-Wen Liang; Zong-Yi Chen; Wei Hsieh; Pao-Chi Chang

The latest video compression standard, high-efficiency video coding (HEVC), provides quad-tree structures of coding units (CUs) and four coding tree depths to facilitate coding efficiency. The HEVC encoder considerably increases the computational complexity to levels inappropriate for video applications of power-constrained devices. This work, therefore, proposes a complexity control method for the low-delay P-frame configuration of the HEVC encoder. The complexity control mechanism spans the group of pictures layer, frame layer, and CU layer, and each coding layer provides a distinct method for complexity allocation. Furthermore, the steps in the prediction unit encoding procedure are reordered. By allocating the complexity to each coding layer of HEVC, the proposed method can simultaneously satisfy the entire complexity constraint (ECC) for entire sequence encoding and the instant complexity constraint (ICC) for each frame during real-time encoding. Experimental results showed that as the target complexity under both the ECC and ICC was reduced to 80% and 60%, respectively, the decrease in the average Bjøntegaard delta peak signal-to-noise ratio was ~0.1 dB with an increase of 1.9% in the Bjøntegaard delta rate, and the complexity control error was ~4.3% under the ECC and 4.3% under the ICC.


IEEE Global Conference on Consumer Electronics | 2015

Fast CU algorithm and complexity control for HEVC

Jiunn-Tsair Fang; Chien-Hao Kuo; Chang-Rui Lai; Pao-Chi Chang

The latest video compression standard, HEVC, provides the coding unit (CU), defined by quad-tree structures, to achieve high coding efficiency. Compared with previous standards, the HEVC encoder increases computational complexity to levels inappropriate for applications on power-constrained devices. This work thus proposes a fast CU algorithm to improve coding efficiency and distributes the complexity to the CU layer. Experimental results show that the loss of average BD-PSNR is about 0.1 dB with a 2% BD-bitrate increment as the complexity is reduced to 60%.


International Symposium on Consumer Electronics | 2013

Robust rate control mechanism against scene change for H.264/AVC

Jiunn-Tsair Fang; Chen-Cheng Chan; Shin-Ru Hsu; Pao-Chi Chang

Rate control is critical to time-sensitive video applications over networks. However, the H.264/AVC standard makes no particular response to scene changes, which significantly degrade transmission quality. In this work, we propose a robust rate control mechanism that can quickly respond to a scene change. The proposed mechanism first allocates the remaining frames as a transition GOP. Then, according to the buffer fullness, it estimates the target bits so that a QP (quantization parameter) value can be determined. Simulation results show that the proposed method improves the average PSNR (peak signal-to-noise ratio) by about 1.1 dB with a smaller buffer size, compared with the performance of JM version 17.2.
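The buffer-driven target-bit step can be sketched with a common feedback heuristic: start from the average bits per frame and correct toward keeping the buffer half full. The gain and the floor are illustrative assumptions, not the paper's formula.

```python
# Generic buffer-feedback target-bit sketch (not the paper's exact model).

def target_bits(channel_rate, frame_rate, buffer_fullness, buffer_size,
                gain=0.5):
    """Per-frame bit budget: average bits per frame plus a correction that
    steers the buffer toward half full. A fuller buffer lowers the budget
    (raising QP); an emptier buffer raises it."""
    avg = channel_rate / frame_rate
    correction = gain * (buffer_size / 2 - buffer_fullness) / frame_rate
    return max(avg + correction, avg * 0.1)   # floor to avoid starvation

# Buffer exactly half full: no correction, budget = average.
print(target_bits(300_000, 30, 300_000, 600_000))
# Buffer completely full: budget is cut to drain it.
print(target_bits(300_000, 30, 600_000, 600_000))
```

After a scene change inflates the buffer, this kind of correction is what pulls the bit budget (and hence QP) back toward a sustainable operating point during the transition GOP.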


Journal of Electronic Imaging | 2011

Dynamic stopping criteria of turbo codes for clustered set partitioning in hierarchical trees encoded image transmission

Jiunn-Tsair Fang; Cheng-Shong Wu

Turbo codes adopt iterative decoding to increase error-correction capability. However, the iterative method increases decoding delay and power consumption. An effective approach is to decrease the number of iterations while tolerating slight performance degradation. We apply clustered set partitioning in hierarchical trees (SPIHT) for image coding. Unlike other early-stopping criteria, ours uses the bit-error sensitivities of the image data, so the stopping criterion is directly determined by the importance of the image data. Simulation results show that our scheme can eliminate more iterations with less degradation in peak signal-to-noise ratio and structural similarity performance.
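One way to picture a sensitivity-weighted stopping rule: weight each bit's decoding reliability (|LLR|) by its error sensitivity, and stop iterating once the weighted reliability passes a threshold. The weighting, threshold form, and the `iterate` callback are illustrative assumptions, not the paper's criterion.

```python
# Hypothetical sensitivity-weighted early-stop rule for iterative decoding.

def should_stop(llrs, sensitivities, threshold):
    """Stop when bit reliabilities (|LLR|), weighted by each bit's
    error sensitivity, exceed a normalized threshold: important
    image bits must be reliable; unimportant bits matter less."""
    score = sum(s * abs(l) for s, l in zip(sensitivities, llrs))
    return score >= threshold * sum(sensitivities)

def decode_with_early_stop(iterate, sensitivities, threshold, max_iters=8):
    """Run decoder iterations (via a caller-supplied `iterate` callback)
    until the stopping rule fires or the iteration cap is reached."""
    llrs = None
    for it in range(1, max_iters + 1):
        llrs = iterate(llrs)
        if should_stop(llrs, sensitivities, threshold):
            return llrs, it
    return llrs, max_iters

# Toy "decoder" whose LLR magnitudes double each iteration.
fake_iterate = lambda llrs: [2.0, 2.0, 2.0] if llrs is None else [l * 2 for l in llrs]
_, iters = decode_with_early_stop(fake_iterate, [1.0, 1.0, 1.0], threshold=4.0)
print(iters)
```

Weighting by sensitivity is what lets the decoder spend extra iterations only when the bits that matter most to image quality are still unreliable.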

Collaboration


Dive into Jiunn-Tsair Fang's collaborations.

Top Co-Authors

Pao-Chi Chang, National Central University
Zong-Yi Chen, National Central University
Chang-Rui Lai, National Central University
Chen-Cheng Chan, National Central University
Ren-Jie Wang, National Central University
Yan-Ting Jiang, National Central University
Yen-Chun Liu, National Central University
Cheng-Shong Wu, National Chung Cheng University
Chi-Ting Day, National Central University
Chien-Hao Kuo, National Central University