Woo-shik Kim
Qualcomm
Publication
Featured research published by Woo-shik Kim.
IEEE Transactions on Circuits and Systems for Video Technology | 2015
Woo-shik Kim; Wei Pu; Ali Khairat; Mischa Siekmann; Joel Sole; Jianle Chen; Marta Karczewicz; Tung Nguyen; Detlev Marpe
Video coding in the YCbCr color space is widely used because it compresses efficiently, but it can introduce color distortion due to conversion error. Coding in the RGB color space, by contrast, maintains high color fidelity but incurs a substantial bitrate increase with respect to YCbCr coding. Cross-component prediction (CCP) compresses video content efficiently by decorrelating the color components while keeping high color fidelity. In this scheme, the chroma residual signal is predicted from the luma residual signal inside the coding loop. This paper describes the CCP scheme from several points of view, from theoretical background to practical implementation. The proposed CCP scheme has been evaluated in standardization communities and adopted into H.265/High Efficiency Video Coding (HEVC) Range Extensions. The experimental results show significant coding performance improvements for both natural and screen content video, while the quality of all color components is maintained. For natural video, the average coding gains are 17% and 5% bitrate reduction for intra coding and 11% and 4% for inter coding, for RGB and YCbCr coding, respectively, while the average increases in encoding and decoding time in the HEVC reference software implementation are 10% and 4%, respectively.
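As a rough illustration of the in-loop prediction described in this abstract, the Python sketch below predicts a chroma residual block from the co-located luma residual using a per-block scaling factor of the form alpha/8, which is the general shape of the CCP prediction. The candidate alpha set, the exhaustive energy search, and the function names (ccp_encode, ccp_decode) are assumptions for illustration only; the normative HEVC implementation additionally handles chroma subsampling, bit-depth alignment, and the signalling of alpha.

import numpy as np

# Candidate per-block scaling factors; the prediction scales the luma residual by alpha/8.
ALPHAS = [0, 1, -1, 2, -2, 4, -4, 8, -8]

def ccp_encode(luma_res, chroma_res):
    """Pick the alpha that minimizes the energy of the cross-component-predicted
    chroma residual and return (alpha, predicted residual). Inputs are signed
    integer numpy arrays of the same shape."""
    best_alpha = 0
    best_res = chroma_res
    best_cost = np.sum(chroma_res.astype(np.int64) ** 2)
    for a in ALPHAS[1:]:
        res = chroma_res - ((a * luma_res) >> 3)   # subtract the scaled luma residual
        cost = np.sum(res.astype(np.int64) ** 2)
        if cost < best_cost:
            best_alpha, best_res, best_cost = a, res, cost
    return best_alpha, best_res

def ccp_decode(luma_res, predicted_res, alpha):
    """Invert the prediction by adding back the scaled luma residual."""
    return predicted_res + ((alpha * luma_res) >> 3)

Because the decoder needs only alpha and the transmitted residual to reconstruct the chroma residual, the prediction can sit inside the coding loop without any extra side information beyond the per-block alpha.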
Signal, Image and Video Processing | 2017
Je-Won Kang; Woo-shik Kim; Kei Kawamura
In this paper, an in-loop color space transform is proposed for screen content video coding to improve coding efficiency. The transform converts an input block to a color space that improves rate-distortion performance by decorrelating the color components. Specifically, to derive the optimal color transform, principal component analysis is performed on spatially or temporally adjacent pixels of each block, and the derived transform is applied to the residual samples after intra or inter prediction. Rate-distortion optimization then selects the better color space between the original color space of the input signal and the derived one. Experimental results demonstrate that the proposed method provides significant coding gains.
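As a minimal sketch of the idea in this abstract, the Python code below derives a 3x3 decorrelating transform by PCA over neighbouring pixels and chooses between the original and the derived color space for a residual block. The helper names are hypothetical, and the sum of absolute residual values is used as a crude stand-in for the paper's rate-distortion cost.

import numpy as np

def derive_color_transform(neighbor_pixels):
    """Derive a decorrelating 3x3 transform via PCA of neighbouring pixels
    (shape N x 3, one color triplet per row)."""
    centered = neighbor_pixels - neighbor_pixels.mean(axis=0)
    cov = centered.T @ centered / max(len(neighbor_pixels) - 1, 1)
    # Eigenvectors of the covariance matrix, reordered by decreasing variance.
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, ::-1].T          # rows are the principal directions

def pick_color_space(residual, transform):
    """Crude stand-in for the RD decision: keep whichever representation has
    the smaller sum of absolute residual values. residual has shape H x W x 3."""
    converted = residual @ transform.T      # apply the transform to every pixel triplet
    if np.abs(converted).sum() < np.abs(residual).sum():
        return converted, True              # signal that the derived transform is used
    return residual, False

Deriving the transform from already-reconstructed neighbouring pixels means the decoder can repeat the same derivation, so only the one-bit color-space decision needs to be signalled.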
Archive | 2014
Liwei Guo; Marta Karczewicz; Joel Sole Rojals; Rajan Laxman Joshi; Woo-shik Kim; Wei Pu
Archive | 2014
Liwei Guo; Chao Pang; Woo-shik Kim; Wei Pu; Joel Sole Rojals; Rajan Laxman Joshi; Marta Karczewicz
Archive | 2014
Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz
Archive | 2014
Wei Pu; Woo-shik Kim; Jianle Chen; Joel Sole Rojals; Liwei Guo; Chao Pang; Rajan Laxman Joshi; Marta Karczewicz
Archive | 2014
Woo-shik Kim; Rajan Laxman Joshi; Wei Pu; Joel Sole Rojals; Jianle Chen; Marta Karczewicz
Archive | 2014
Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz
Archive | 2014
Rajan Laxman Joshi; Joel Sole Rojals; Marta Karczewicz; Je-Won Kang; Woo-shik Kim
Archive | 2014
Woo-shik Kim; Joel Sole Rojals; Marta Karczewicz