Guan‐Ming Su
Dolby Laboratories
Publications
Featured research published by Guan‐Ming Su.
International Journal of Communication Systems | 2011
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
This paper surveys major techniques in the 3D communications area, covering the whole pipeline of the 3D video communication framework, including the 3D content creation, data representation, compression, delivery, decompression, post-processing, and 3D scene rendering stages. Both the current state of the art, stereo 3D, and the future trend, free-viewpoint 3D, are described in detail. The paper also highlights a few features of the emerging 4G wireless systems that are critical for 3D communication system design. Finally, promising but challenging topics, such as 3D over 4G networks, distributed 3D video coding, 3D multi-user communication, scalability, and universal 3D access, are discussed and pointed out to readers for further investigation.
Proceedings of SPIE | 2013
Adhatus Solichah Ahmadiyah; Guan‐Ming Su; Kai-Lung Hua; Yu-Chi Lai
This paper presents two novel end-to-end stereo video compression pipelines consisting of single-sensor digital camera pairs, legacy consumer-grade video decoders, and anaglyph displays. As 3D videos contain a large amount of data, efficient compression methods are needed to distribute streams over the current communication infrastructure. In addition, low-complexity algorithms that reconstruct the 3D scenes on existing hardware are preferred. We propose two methods to transmit a single encoded stream containing only the data required to create anaglyph video from single-sensor camera pairs. The first method packs and encodes only the demosaicked color channels used by the anaglyph display in YCbCr 4:4:4 format, whereas the second method repacks the color filter array stereo image pairs into the legacy YCbCr 4:2:0 mono video format and leaves the demosaicking operations to the decoder side. Experimental results demonstrate the superior performance of the proposed methods over the traditional approach, achieving up to 4.66 dB improvement in Composite Peak Signal-to-Noise Ratio (CPSNR).
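For readers unfamiliar with the quality metric quoted above, the following is a minimal sketch of one common way to compute a composite PSNR across all three color channels (the squared error is pooled over the channels before converting to decibels); the exact evaluation protocol used in the paper may differ.

```python
import numpy as np

def cpsnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Composite PSNR: average the squared error over H x W x 3 before
    converting to dB (one common definition; the paper's protocol may differ)."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example usage (hypothetical file names):
# original = imageio.imread("anaglyph_ref.png"); decoded = imageio.imread("anaglyph_dec.png")
# print(f"CPSNR = {cpsnr(original, decoded):.2f} dB")
```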
Proceedings of SPIE | 2014
Qian Chen; Guan‐Ming Su; Yin Peng
In this paper, we propose an adaptive upsampling filter that spatially upscales an HDR image based on the luminance range of the HDR picture in each color channel. It first searches for the optimal luminance range values that partition an HDR image into three parts: dark, mid-tone, and highlight. We then derive the optimal set of filter coefficients, both vertically and horizontally, for each part. When an HDR pixel lies in the dark area, one set of filter coefficients is applied for vertical upsampling; if the pixel falls in the mid-tone area, a second set is applied; otherwise the pixel is in the highlight area and a third set is applied. Horizontal upsampling is carried out likewise based on the pixel's luminance. The idea of partitioning an HDR image into different luminance areas stems from the fact that most HDR images are created from multiple exposures. Different exposures usually exhibit slight variations in captured signal statistics, such as noise level and subtle misalignment. Grouping regions into three luminance partitions therefore reduces the variation within each group, and deriving an optimal filter for each group of signals with less variation is more effective than deriving a single filter for the entire HDR image. Experimental results show that the proposed adaptive upsampling filter based on luminance ranges outperforms the single optimal upsampling filter by around 0.57 dB for the R channel, 0.44 dB for the G channel, and 0.31 dB for the B channel.
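To illustrate the general idea of partition-dependent separable filtering, here is a minimal sketch. The luminance thresholds, filter taps, and the use of the channel value itself as the partition key are all placeholder assumptions; the paper searches for optimal partition boundaries and derives optimal coefficients per partition, which this sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.ndimage import zoom, convolve1d

# Placeholder thresholds (assume the channel is normalized to [0, 1]) and
# placeholder separable filter taps for each luminance partition.
T_DARK, T_HIGH = 0.05, 0.60
FILTERS = {
    "dark":      np.array([0.10, 0.80, 0.10]),
    "mid":       np.array([0.25, 0.50, 0.25]),
    "highlight": np.array([0.05, 0.90, 0.05]),
}

def adaptive_upsample(channel: np.ndarray, factor: int = 2) -> np.ndarray:
    """Upscale one color channel, then smooth each pixel with the separable
    kernel (vertical then horizontal) of its luminance partition."""
    upsampled = zoom(channel, factor, order=1)              # baseline upscale
    partitions = np.digitize(upsampled, [T_DARK, T_HIGH])   # 0=dark, 1=mid, 2=highlight
    out = np.empty_like(upsampled)
    for idx, name in enumerate(["dark", "mid", "highlight"]):
        taps = FILTERS[name]
        filtered = convolve1d(convolve1d(upsampled, taps, axis=0), taps, axis=1)
        out[partitions == idx] = filtered[partitions == idx]
    return out
```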
2013 International Conference on Computing, Networking and Communications (ICNC) | 2013
Kai-Lung Hua; Ge-Ming Chiu; Tai-Lin Chin; Hsing-Kua Pao; Yi-Chi Cheng; Guan‐Ming Su
Peer-to-peer (P2P) media streaming systems have recently become a major type of application traffic. In these applications, an important issue has been the block scheduling problem, which determines how each peer exchanges data blocks with others. Scalable streaming in P2P networks has recently been proposed to address the heterogeneity of the network environment. In this paper, we first define a priority function for each block according to the block's significance for the video content. The block scheduling problem is then transformed into an optimization problem that maximizes the priority sum of the delivered video blocks. Simulation results show that the proposed algorithm offers excellent performance for P2P streaming services.
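The abstract does not spell out the priority function or the solver used for the optimization, so the following is only a rough sketch of the general idea: a hypothetical priority that favors lower scalable-coding layers and nearer playback deadlines, and a greedy stand-in that requests the highest-priority blocks per byte within a peer's download budget.

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    layer: int        # scalable-coding layer (0 = base layer)
    deadline: float   # seconds until the block must be played out
    size: int         # bytes

def priority(block: Block) -> float:
    """Hypothetical priority: base-layer blocks and blocks near their playback
    deadline matter more. The paper defines its own priority function."""
    layer_weight = 1.0 / (1 + block.layer)
    urgency = 1.0 / max(block.deadline, 1e-3)
    return layer_weight * urgency

def schedule(missing: list[Block], budget_bytes: int) -> list[Block]:
    """Greedy stand-in for the optimization: request the blocks with the
    highest priority per byte until the download budget is exhausted."""
    chosen, used = [], 0
    for blk in sorted(missing, key=lambda b: priority(b) / b.size, reverse=True):
        if used + blk.size <= budget_bytes:
            chosen.append(blk)
            used += blk.size
    return chosen
```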
Archive | 2013
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
3D Visual Communications | 2012
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
Archive | 2012
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
3D Visual Communications | 2012
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
3D Visual Communications | 2012
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang
3D Visual Communications | 2012
Guan‐Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang