Yung-Lyul Lee
Electronics and Telecommunications Research Institute
Publication
Featured research published by Yung-Lyul Lee.
Journal of Broadcast Engineering | 2012
Kyung-Soo Moon; Jeong-Pil Kim; Yung-Lyul Lee
When interpolating the chrominance signal, the H.264/AVC standard uses linear interpolation. In this paper, we propose a more effective chroma interpolation method that uses high-precision filters: a 6-tap FIR filter and a 2-tap linear filter. The experimental results show that the proposed method reduces the BD-rate without any PSNR loss compared with JM11.0KTA2.7. The maximum BD-rate improvement on the Y component is 1.3%, and those on the Cb and Cr components are 19.8% and 25.0%, respectively; the average BD-rate improvement on the Y component is 0.5%, and those on the Cb and Cr components are 6.1% and 6.9%, respectively.
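The abstract does not list the filter taps, so the sketch below assumes the H.264/AVC 6-tap half-pel coefficients (1, -5, 20, 20, -5, 1)/32 plus simple 2-tap linear blending for the other fractional positions, purely to illustrate the two-stage interpolation described.

```python
# Illustrative two-stage chroma interpolation: a 6-tap FIR filter for the
# half-sample position followed by 2-tap linear blending for the remaining
# fractional positions. The tap values are the H.264/AVC half-pel luma
# coefficients and are only an assumption about the paper's filter design.

HALF_PEL_TAPS = (1, -5, 20, 20, -5, 1)   # coefficients sum to 32


def clip(v, lo=0, hi=255):
    return max(lo, min(hi, v))


def half_sample(row, x):
    """6-tap FIR interpolation at position x + 1/2 of a 1-D pixel row."""
    acc = sum(t * row[clip(x - 2 + i, 0, len(row) - 1)]
              for i, t in enumerate(HALF_PEL_TAPS))
    return clip((acc + 16) >> 5)          # round and normalize by 32


def chroma_sample(row, x, frac8):
    """Interpolate at x + frac8/8 with a 2-tap blend of the nearest
    integer-pel and half-pel samples."""
    if frac8 == 0:
        return row[x]
    if frac8 == 4:
        return half_sample(row, x)
    if frac8 < 4:
        left, right = row[x], half_sample(row, x)
    else:
        left, right = half_sample(row, x), row[clip(x + 1, 0, len(row) - 1)]
    w = frac8 % 4
    return clip((left * (4 - w) + right * w + 2) >> 2)


if __name__ == "__main__":
    line = [16, 18, 22, 30, 42, 60, 80, 96]
    print([chroma_sample(line, 3, f) for f in range(8)])
```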
Journal of Broadcast Engineering | 2008
Dong-Hoon Han; Suk-Hee Cho; Namho Hur; Yung-Lyul Lee
Multi-view video coding (MVC) based on H.264/AVC encodes multiple views efficiently by using a prediction scheme that exploits inter-view correlation. However, as the number of views and the use of inter-view prediction increase, so does the total encoding time. In this paper, we propose a fast mode decision method that uses both MB (macroblock)-based region segmentation information for each view and the global disparity vector among views in order to reduce encoding time. The proposed method achieves an average 40% reduction of total encoding time with an objective quality degradation of only about 0.04 dB in peak signal-to-noise ratio (PSNR), measured on Joint Multi-view Video Model (JMVM) 4.0, the reference software of the MVC standard.
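A minimal sketch of this kind of fast mode decision is shown below: macroblocks classified as background are restricted to a reduced candidate mode set, and the inter-view search is centered on the global disparity vector. The segmentation rule, thresholds, and function names are hypothetical illustrations, not JMVM code.

```python
# Illustrative fast mode decision for multi-view coding: background MBs only
# evaluate a cheap candidate set, and the inter-view search starts from the
# global disparity vector instead of a full search. All names and thresholds
# here (segment_macroblock, REDUCED_MODES, ...) are illustrative assumptions.

FULL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8", "INTRA"]
REDUCED_MODES = ["SKIP", "16x16"]          # cheap modes tried for background MBs


def segment_macroblock(mb_residual_energy, threshold=64):
    """Very rough MB-level region segmentation based on temporal residual energy."""
    return "BACKGROUND" if mb_residual_energy < threshold else "OBJECT"


def candidate_modes(region):
    return REDUCED_MODES if region == "BACKGROUND" else FULL_MODES


def interview_search_center(mb_x, mb_y, global_disparity):
    """Center the inter-view disparity search on the global disparity vector."""
    gdx, gdy = global_disparity
    return mb_x + gdx, mb_y + gdy


if __name__ == "__main__":
    region = segment_macroblock(mb_residual_energy=23)
    print(region, candidate_modes(region))
    print(interview_search_center(mb_x=64, mb_y=32, global_disparity=(-12, 0)))
```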
Journal of Broadcast Engineering | 2007
Dae-Yeon Kim; Jin-Soo Choi; Yung-Lyul Lee
In this paper, an improved CABAC is proposed for lossless compression in H.264/AVC. CABAC in lossless coding is not as efficient as in lossy compression, since it was developed for lossy coding: CABAC for lossless coding in the H.264/AVC Advanced 4:4:4 Profile is applied without any change to the conventional binarization method. Therefore, a binarization method that considers the statistical characteristics of the residual signals is proposed for lossless coding in the H.264/AVC Advanced 4:4:4 Profile. The experimental results show that the proposed method obtains approximately 3.4% bitrate reduction compared with the conventional lossless coding.
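As a hedged sketch of what a residual-aware binarization can look like, the code below builds a truncated-unary prefix with an Exp-Golomb-style suffix and contrasts a long-prefix setting with a shorter prefix and higher-order suffix better suited to the larger residual magnitudes of lossless coding; the specific cutoff and order values are illustrative, not those of the paper.

```python
# Illustrative binarization sketch: CABAC binarizes residual levels with a
# truncated-unary prefix followed by an Exp-Golomb-style suffix. Because
# lossless residuals tend to have larger magnitudes, a shorter unary cutoff
# and a higher Exp-Golomb order keep the bin strings short. The cutoff and
# order values below are illustrative assumptions, not the paper's values.

def exp_golomb_suffix(value, k):
    """k-th order Exp-Golomb-style suffix for a non-negative value."""
    bins = ""
    while value >= (1 << k):               # unary escape, doubling the step
        bins += "1"
        value -= 1 << k
        k += 1
    bins += "0"
    if k:
        bins += format(value, f"0{k}b")    # k remaining binary bits
    return bins


def binarize_level(level, cutoff, eg_order):
    """Truncated-unary prefix up to `cutoff`, Exp-Golomb suffix beyond it."""
    if level < cutoff:
        return "1" * level + "0"
    return "1" * cutoff + exp_golomb_suffix(level - cutoff, eg_order)


if __name__ == "__main__":
    # Long-prefix binarization vs. a shorter prefix with a higher-order
    # suffix that suits larger lossless residual levels.
    for lvl in (3, 20, 75):
        print(lvl,
              binarize_level(lvl, cutoff=14, eg_order=0),
              binarize_level(lvl, cutoff=4, eg_order=3))
```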
Journal of Broadcast Engineering | 2014
Sungwook Hong; Yung-Lyul Lee
A new modified lossless intra-coding method based on a cross residual transform is applied to HEVC (High Efficiency Video Coding). The HEVC standard, which includes a multi-directional spatial prediction method to reduce spatial redundancy, encodes the pixels in a PU (prediction unit) by using neighboring pixels. In the new modified lossless intra-coding method, the spatial prediction is performed as pixel-based DPCM but is implemented in a block-based manner by applying the cross residual transform within the HEVC standard. The experimental results show that the new lossless intra-coding method reduces the bit rate by approximately 8.4% compared with the lossless intra-coding method in the HEVC standard, and the proposed method achieves a slightly better compression ratio than JPEG2000 lossless coding.
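As a sketch of the general idea behind such residual transforms, the code below differences a block residual along the (horizontal) prediction direction so that block-based prediction yields the same residuals as pixel-by-pixel DPCM, and shows the inverse operation; the paper's actual cross residual transform may differ in detail.

```python
# Illustrative residual differencing for lossless intra coding: after
# block-based prediction, each residual is replaced by its difference from
# the neighbor along the prediction direction, mimicking pixel-wise DPCM
# while keeping a block-based implementation.

def residual_dpcm_horizontal(residual):
    """Replace each residual by its difference from the left neighbor."""
    out = [row[:] for row in residual]
    for row in out:
        for x in range(len(row) - 1, 0, -1):
            row[x] -= row[x - 1]
    return out


def inverse_residual_dpcm_horizontal(coded):
    """Invert the differencing by cumulative summation along each row."""
    out = [row[:] for row in coded]
    for row in out:
        for x in range(1, len(row)):
            row[x] += row[x - 1]
    return out


if __name__ == "__main__":
    residual = [[5, 6, 8, 11],
                [4, 4, 5, 7],
                [3, 3, 3, 4],
                [2, 2, 2, 2]]
    coded = residual_dpcm_horizontal(residual)
    assert inverse_residual_dpcm_horizontal(coded) == residual   # lossless
    print(coded)
```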
Journal of Broadcast Engineering | 2013
Sungwook Hong; Yung-Lyul Lee
The latest video coding standard, HEVC, was developed jointly by the JCT-VC (Joint Collaborative Team on Video Coding) of ITU-T VCEG and ISO/IEC MPEG. The HEVC standard reduces the BD-bitrate by about 50% compared with the H.264/AVC standard; however, the various methods used to obtain these coding gains have increased complexity. The proposed method reduces the complexity of HEVC by using both CPU parallel processing and GPU-accelerated processing. With the proposed method, the experimental results for UHD (3840x2144) video sequences achieve 15 fps encoding/decoding performance. We expect that hardware improvements in data transfer rates between CPU and GPU will reduce the encoding/decoding times even further.
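As a rough illustration of combining CPU thread-level parallelism with GPU offloading, the sketch below splits per-frame work across a thread pool and delegates a regular, data-parallel stage to a placeholder GPU routine. The function names (gpu_motion_estimation, encode_ctb_row) are hypothetical and not part of the HEVC reference software.

```python
# Illustrative pipeline sketch: CPU threads process coding-tree-block rows in
# parallel while a (hypothetical) GPU kernel handles the most regular,
# data-parallel stage such as motion estimation. gpu_motion_estimation is a
# placeholder for a real CUDA/OpenCL launch.
from concurrent.futures import ThreadPoolExecutor


def gpu_motion_estimation(frame_id):
    # Placeholder: a real encoder would launch a GPU kernel and return a
    # motion-vector field; here it just simulates the result.
    return {"frame": frame_id, "motion_vectors": "computed on GPU"}


def encode_ctb_row(frame_id, row):
    # Placeholder CPU work for one row of coding tree blocks.
    return f"frame {frame_id}, CTB row {row} encoded"


def encode_frame(frame_id, num_ctb_rows=8, num_threads=4):
    mv_field = gpu_motion_estimation(frame_id)     # offload regular work first
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        rows = list(pool.map(lambda r: encode_ctb_row(frame_id, r),
                             range(num_ctb_rows)))
    return mv_field, rows


if __name__ == "__main__":
    mv, rows = encode_frame(frame_id=0)
    print(mv["motion_vectors"], len(rows), "CTB rows encoded")
```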
Journal of Broadcast Engineering | 2011
Jeong-Pil Kim; Yung-Lyul Lee
The JCT-VC is currently developing the next-generation video coding standard called HEVC. HEVC has adopted many coding technologies that increase coding efficiency; for chroma interpolation, a DCT-based interpolation filter showing better performance than the linear filter of H.264/AVC was adopted. In this paper, a combined filter that utilizes both the FIR filter and the linear filter of H.264/AVC is proposed to increase coding efficiency. Compared with the DCT-based interpolation filter, the experimental results for various sequences show average BD-rate improvements on the chroma U and V components of 0.9% and 1.1% in the high-efficiency random-access configuration, 1.1% and 1.1% in the low-complexity random-access configuration, 0.9% and 1.4% in the high-efficiency low-delay configuration, and 1.8% and 1.8% in the low-complexity low-delay configuration, respectively.
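The DCT-based interpolation filter mentioned in the abstract derives its fractional-sample taps from the DCT basis: forward-transform a few integer samples, then evaluate the inverse transform at the fractional position. The sketch below illustrates that derivation in floating point for a 4-tap filter; it shows the principle only, and the resulting taps are not identical to the HEVC chroma filter, which applies additional smoothing and integer scaling.

```python
# Illustrative DCT-based interpolation filter (DCT-IF) derivation: combine the
# forward DCT-II of n_taps integer samples with the inverse transform
# evaluated at a fractional offset. The weight of each input sample becomes a
# filter tap; taps always sum to 1.
import math


def dct_if_coefficients(n_taps, frac):
    """Filter taps that interpolate at position (n_taps/2 - 1) + frac."""
    pos = n_taps / 2 - 1 + frac
    taps = []
    for m in range(n_taps):                      # weight of input sample m
        acc = 0.0
        for k in range(n_taps):                  # sum over DCT basis functions
            scale = 1.0 / n_taps if k == 0 else 2.0 / n_taps
            acc += scale * math.cos(math.pi * k * (2 * m + 1) / (2 * n_taps)) \
                         * math.cos(math.pi * k * (2 * pos + 1) / (2 * n_taps))
        taps.append(acc)
    return taps


if __name__ == "__main__":
    for frac in (0.25, 0.5, 0.75):
        coeffs = dct_if_coefficients(4, frac)
        print(frac, [round(c, 3) for c in coeffs], "sum =", round(sum(coeffs), 3))
```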
Journal of Broadcast Engineering | 2009
Dae-Yeon Kim; Dong-Kyun Kim; Yung-Lyul Lee
The Moving Picture Experts Group (MPEG) and Video Coding Experts Group (VCEG) have developed a new standard that promises to outperform the earlier MPEG-4 and H.263 standards. The new standard is called H.264/AVC (Advanced Video Coding) and is published jointly as MPEG-4 Part 10 and ITU-T Recommendation H.264. In particular, H.264/AVC intra prediction coding provides nine directional prediction modes for every 4×4 block in order to reduce spatial redundancies. In this paper, an ABS (Adaptive Bit Skip) mode is proposed. To improve coding efficiency, the proposed method removes the mode bits that represent the prediction mode by exploiting the similarity of adjacent pixels. Experimental results show that the proposed method achieves a PSNR gain of about 0.2 dB on the R-D curve and reduces the bit rate by about 3.6% compared with H.264/AVC.
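As a hedged illustration of the bit-skipping idea, the sketch below infers the intra prediction mode on both encoder and decoder sides when the neighboring reconstructed pixels are nearly uniform, so no mode bits are written; the similarity threshold, the inferred mode, and the 4-bit mode signaling are illustrative assumptions rather than the paper's or the standard's exact rules.

```python
# Illustrative "skip the mode bits when neighbors agree" sketch: if the
# reconstructed pixels above and to the left of a 4x4 block are nearly flat,
# encoder and decoder both infer the same prediction mode and no mode bits
# are coded. Threshold and signaling format are illustrative assumptions.

def neighbors_similar(above, left, threshold=2):
    """True when the above/left reconstructed pixels vary by at most threshold."""
    samples = list(above) + list(left)
    return max(samples) - min(samples) <= threshold


def intra_mode_bits(above, left, chosen_mode):
    """Return the mode bits to signal, or '' when the mode can be inferred."""
    if neighbors_similar(above, left):
        return ""                      # decoder infers the same mode; bits skipped
    return format(chosen_mode, "04b")  # otherwise signal the mode explicitly


if __name__ == "__main__":
    flat_above, flat_left = [128, 128, 129, 128], [128, 127, 128, 128]
    edge_above, edge_left = [40, 60, 90, 130], [200, 180, 150, 120]
    print(repr(intra_mode_bits(flat_above, flat_left, chosen_mode=2)))  # '' (skipped)
    print(repr(intra_mode_bits(edge_above, edge_left, chosen_mode=7)))  # '0111'
```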
Archive | 2006
Jeongil Seo; Wook-Joong Kim; Kyu-Heon Kim; Kyeongok Kang; Jin-Woo Hong; Yung-Lyul Lee; Ki-Hun Han; Jae-Ho Hur; Dong-Gyu Sim; Seoung-Jun Oh
Archive | 2006
Daehee Kim; Namho Hur; Soo-In Lee; Yung-Lyul Lee; Jong-Ryul Kim; Suk-Hee Cho
Archive | 2007
Seyoon Jeong; Jeongil Seo; Kyuheon Kim; Kyeongok Kang; Jin-Woo Hong; Yung-Lyul Lee; Dae-Yeon Kim; Dong-Gyun Kim; Seoung-Jun Oh; Dong-Gyu Sim; Chang-Beom Ahn