
Publication


Featured research published by Woo-shik Kim.


IEEE Transactions on Circuits and Systems for Video Technology | 2015

Cross-Component Prediction in HEVC

Woo-shik Kim; Wei Pu; Ali Khairat; Mischa Siekmann; Joel Sole; Jianle Chen; Marta Karczewicz; Tung Nguyen; Detlev Marpe

Video coding in the YCbCr color space has been widely used, since it is efficient for compression, but it can result in color distortion due to conversion error. Coding in the RGB color space, by contrast, maintains high color fidelity but has the drawback of a substantial bitrate increase with respect to YCbCr coding. Cross-component prediction (CCP) efficiently compresses video content by decorrelating color components while keeping high color fidelity. In this scheme, the chroma residual signal is predicted from the luma residual signal inside the coding loop. This paper describes the CCP scheme from several points of view, from theoretical background to practical implementation. The proposed CCP scheme has been evaluated in standardization communities and adopted into H.265/High Efficiency Video Coding (HEVC) Range Extensions. The experimental results show significant coding performance improvements for both natural and screen content video, while the quality of all color components is maintained. The average coding gains for natural video are 17% and 5% bitrate reduction in the case of intra coding and 11% and 4% in the case of inter coding for RGB and YCbCr coding, respectively, while the average increases in encoding and decoding time in the HEVC reference software are 10% and 4%, respectively.
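The in-loop chroma-from-luma residual prediction described in the abstract can be sketched as follows. The scaling-factor set {0, ±1, ±2, ±4, ±8}/8 matches the one used in HEVC Range Extensions CCP, but the function names, the selection criterion (sum of absolute differences), and the example blocks are illustrative, not the reference implementation:

```python
import numpy as np

# Candidate scaling factors alpha, as in HEVC Range Extensions CCP:
# {0, +-1, +-2, +-4, +-8} applied with a divide-by-8 (right shift by 3).
ALPHAS = np.array([0, 1, -1, 2, -2, 4, -4, 8, -8]) / 8.0

def ccp_encode(luma_res, chroma_res):
    """Pick the alpha that minimizes the remaining chroma residual
    energy (SAD here, for simplicity) and return it with the
    residual-of-residual that would actually be transformed and coded."""
    best = min(ALPHAS, key=lambda a: np.abs(chroma_res - a * luma_res).sum())
    return best, chroma_res - best * luma_res

def ccp_decode(luma_res, alpha, delta):
    """Invert the prediction: add back the scaled luma residual."""
    return delta + alpha * luma_res

# Toy residual blocks where chroma is strongly correlated with luma.
luma = np.array([[4.0, -2.0], [6.0, 0.0]])
chroma = 0.5 * luma + np.array([[0.1, -0.1], [0.0, 0.1]])

alpha, delta = ccp_encode(luma, chroma)
recon = ccp_decode(luma, alpha, delta)
assert np.allclose(recon, chroma)  # exact round trip before quantization
```

Because the prediction operates on residuals inside the coding loop and is exactly invertible, only `alpha` and the (smaller-energy) difference `delta` need to be coded; the decorrelation is where the bitrate saving comes from.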


Signal, Image and Video Processing | 2017

Sample adaptive color space transform for screen content video coding

Je-Won Kang; Woo-shik Kim; Kei Kawamura

In this paper, an in-loop color space transform is proposed for screen content video coding to improve coding efficiency. The transform converts the color space of an input block to one that improves rate-distortion performance by decorrelating the color components. Specifically, to derive the optimal color transform, principal component analysis (PCA) is performed using spatially or temporally adjacent pixels of each block, and the derived transform is applied to the residual samples after intra or inter prediction. Rate-distortion optimization then selects the better of the two color spaces: the original color space of the input signal or the derived one. Experimental results demonstrate that the proposed method provides significant coding gains.
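The PCA-based derivation above can be sketched as follows, assuming `neighbors` is an (N, 3) array of reconstructed color samples adjacent to the current block; the function names and the synthetic data are illustrative, and the rate-distortion comparison step is omitted:

```python
import numpy as np

def derive_color_transform(neighbors):
    """Derive a per-block 3x3 color transform by PCA over the color
    covariance of adjacent reconstructed samples (shape (N, 3))."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)  # 3x3 color covariance
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs.T  # rows are orthonormal principal directions

def transform_residual(residual, T):
    """Rotate each pixel's color vector; residual has shape (..., 3)."""
    return residual @ T.T

# Synthetic neighborhood whose three color components are correlated,
# as is typical for screen content.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 1))
neighbors = np.hstack([base, 0.9 * base, 0.8 * base]) + 0.05 * rng.normal(size=(64, 3))

T = derive_color_transform(neighbors)
res = np.hstack([base, 0.9 * base, 0.8 * base])[:16].reshape(4, 4, 3)
out = transform_residual(res, T)

# The transform is orthonormal, hence exactly invertible, so the encoder
# can compare the original and derived color spaces without information loss.
assert np.allclose(out @ T, res)
```

Since the transform is derived from already-reconstructed neighboring samples, the decoder can derive the identical matrix without any side information beyond the per-block color-space selection flag.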


Archive | 2014

Palette prediction in palette-based video coding

Liwei Guo; Marta Karczewicz; Joel Sole Rojals; Rajan Laxman Joshi; Woo-shik Kim; Wei Pu


Archive | 2014

Intra prediction from a predictive block

Liwei Guo; Chao Pang; Woo-shik Kim; Wei Pu; Joel Sole Rojals; Rajan Laxman Joshi; Marta Karczewicz


Archive | 2014

Adaptive color transforms for video coding

Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz


Archive | 2014

Inter-color component residual prediction

Wei Pu; Woo-shik Kim; Jianle Chen; Joel Sole Rojals; Liwei Guo; Chao Pang; Rajan Laxman Joshi; Marta Karczewicz


Archive | 2014

Adaptive inter-color component residual prediction

Woo-shik Kim; Rajan Laxman Joshi; Wei Pu; Joel Sole Rojals; Jianle Chen; Marta Karczewicz


Archive | 2014

Video coding using sample prediction among color components

Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz


Archive | 2014

Disabling intra prediction filtering

Rajan Laxman Joshi; Joel Sole Rojals; Marta Karczewicz; Je-Won Kang; Woo-shik Kim


Archive | 2014

Adaptive filtering in video coding

Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz

