Kyu-young Hwang
Samsung
Publications
Featured researches published by Kyu-young Hwang.
Optics Letters | 2011
Gae-hwang Lee; Kyu-young Hwang; Jae Eun Jang; Y. W. Jin; Suok Lee; Jun-Young Jung
The optical properties and theoretical prediction of a color optical shutter based on dye-doped polymer network liquid crystal (PNLC) were investigated. The viewing-angle dependence of reflectance under different bias conditions showed distinctive characteristics, which could be explained by the effects of dye absorption and optical path length. The thickness dependence of reflectance was also shown to be strongly influenced by the light-scattering coefficient. Our experimental results agreed well with a theoretical prediction based on the light scattering of liquid crystals in the polymer network and the absorption of the dichroic dye. This work indicates the potential to improve optical devices based on dye-doped liquid crystal-polymer composites.
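The thickness dependence described above can be illustrated with a minimal Beer-Lambert-style attenuation model, in which scattering and dye absorption jointly attenuate light along the cell gap. This is only a sketch of the general physics, not the paper's actual model; the coefficient values and the `transmittance` helper are hypothetical.

```python
import math

def transmittance(thickness_um, scattering_coeff, absorption_coeff):
    """Exponential attenuation with combined scattering and dye-absorption
    coefficients (per micrometer, hypothetical units)."""
    return math.exp(-(scattering_coeff + absorption_coeff) * thickness_um)

# Thicker cells transmit less light; reflectance behaves complementarily.
for d in (5.0, 10.0, 20.0):
    print(f"d = {d:5.1f} um -> T = {transmittance(d, 0.08, 0.05):.3f}")
```

Under this toy model, both a longer optical path (larger thickness or oblique viewing angle) and a larger scattering coefficient reduce transmittance, consistent with the trends the abstract reports.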
international conference on image processing | 2010
Chang-Hyun Kim; Kyuha Choi; Ho-Young Lee; Kyu-young Hwang; Jong Beom Ra
Learning-based super-resolution algorithms synthesize a high-resolution image by learning from pairs of low- and high-resolution patches. However, since a low-resolution patch usually maps to multiple high-resolution patches, unwanted artifacts or blurring can appear in super-resolved images. In this paper, we propose a novel approach that generates a high-quality, high-resolution image without introducing noticeable artifacts. By introducing robust statistics into learning-based super-resolution, we efficiently reject the outliers that cause artifacts. Global and local constraints are also applied to produce a more reliable high-resolution image. Experimental results demonstrate that the proposed algorithm synthesizes higher-quality, higher-resolution images than existing algorithms.
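One way the outlier-rejection idea can be realized is with a median/MAD rule over the candidate high-resolution patches retrieved for a single low-resolution patch: candidates far from the median patch are discarded before fusion. The paper's exact robust estimator is not given, so this is a hedged sketch; `robust_patch_fusion` and its threshold `k` are hypothetical.

```python
import numpy as np

def robust_patch_fusion(candidates, k=2.5):
    """Fuse multiple candidate HR patches for one LR patch, rejecting
    outlier candidates via a median / MAD (median absolute deviation) rule.

    candidates: (N, h, w) array of candidate HR patches (hypothetical input).
    """
    candidates = np.asarray(candidates, dtype=float)
    median_patch = np.median(candidates, axis=0)
    # RMS distance of each candidate to the per-pixel median patch
    dist = np.sqrt(((candidates - median_patch) ** 2).mean(axis=(1, 2)))
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    inliers = dist <= np.median(dist) + k * 1.4826 * mad
    return candidates[inliers].mean(axis=0)
```

Averaging only the inliers suppresses the blurring and ghosting that a plain mean over inconsistent candidates would introduce, which is the failure mode the abstract describes.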
international conference on image processing | 2010
Yang-Ho Cho; Kyu-young Hwang; Ho-Young Lee; Du-sik Park
The proposed method creates a high-resolution (HR) image on the basis of frame registration of multiple low-resolution (LR) images. Although a super-resolution (SR) method based on multiple LR images generally improves the restored HR image quality compared to one based on a single LR image, it also increases the complexity and frame memory required for hardware implementation. To generate an HR image, a multi-frame SR method must estimate all motion vectors (MVs) between the target LR image and all the reference LR images, and the total frame memory for storing LR images must be preset according to the number of reference LR images. The proposed multi-frame SR method therefore targets real-time, low-frame-memory systems by reducing the number of motion estimation (ME) operations and the total frame memory required, while preserving the quality of the restored HR image. First, we classify the input LR image into feature and uniform regions to reduce the frame memory, because the performance of SR algorithms is dominated by the restoration of feature regions rather than uniform regions. Accordingly, we save and use only the feature regions of the multiple LR images, not the uniform regions, for restoring an HR image. Next, the MV of each feature is estimated frame-wise to reduce the complexity of ME, and these MVs are accumulated into feature trajectories through multiple LR frames. In the proposed method, the ME operation is conducted once between the reference LR image and the target LR image, and the estimated MVs are linked to the feature trajectories. These accumulated feature trajectories are then used to generate an HR image. Experimental results show that the proposed multi-frame SR method reduces the complexity and frame memory to one-third, while the quality of the restored HR image is equal to that obtained with conventional SR methods.
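The first step above, separating feature regions from uniform regions so that only feature blocks need to be stored, can be sketched with a simple block-variance test. The paper's actual classifier is not specified; the block size, variance threshold, and function name here are assumptions.

```python
import numpy as np

def classify_feature_blocks(image, block=8, var_thresh=25.0):
    """Split an LR frame into feature vs. uniform blocks by local variance,
    so only feature blocks are stored for SR (threshold is hypothetical).

    Returns a boolean map: True = feature block worth keeping."""
    h, w = image.shape
    hb, wb = h // block, w // block
    blocks = image[:hb * block, :wb * block].reshape(hb, block, wb, block)
    variances = blocks.transpose(0, 2, 1, 3).reshape(hb, wb, -1).var(axis=2)
    return variances > var_thresh
```

Discarding the uniform blocks is what lets the method cut frame memory: flat regions contribute little to the restored HR detail, so only the textured blocks and their per-frame motion vectors need to survive across frames.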
Proceedings of SPIE | 2012
Kyu-young Hwang; Yang-Ho Cho; Ho-Young Lee; Du-sik Park; Chang-Yeong Kim
In this paper, we propose a novel multi-view generation framework that considers the spatiotemporal consistency of each synthesized multi-view. Rather than independently filling in the holes of individual generated images, the proposed framework gathers hole information from each synthesized multi-view image at a reference viewpoint. The method then constructs a hole map and an SVRL (single-view reference layer) at the reference viewpoint before restoring the holes in the SVRL, thereby generating a spatiotemporally consistent view. A hole map is constructed using the depth information of the reference viewpoint and the input/output baseline length ratio; thus, the holes in the SVRL can also represent holes in the other multi-view images. To achieve temporally consistent hole filling in the SVRL, the holes in the current SVRL are restored by propagating the pixel values of the previous SVRL. Remaining holes are filled using a depth- and exemplar-based inpainting method. Experimental results show that the proposed method generates high-quality, spatiotemporally consistent multi-view images in various input/output environments. In addition, the proposed framework decreases the complexity of the hole-filling process by avoiding repeated hole filling.
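The hole-map construction from depth and the baseline ratio can be sketched as a disocclusion test: a hole opens wherever the scaled disparity jumps between neighboring pixels, and the jump scales with the output/input baseline ratio. This is a simplified one-directional sketch, not the paper's algorithm; the sign convention and threshold are assumptions.

```python
import numpy as np

def hole_map_from_depth(disparity, baseline_ratio):
    """Predict synthesis holes at a reference view: a hole opens where the
    scaled disparity jumps between horizontally adjacent pixels, i.e. where
    background would be uncovered behind a foreground edge.

    disparity:      per-pixel disparity of the reference view (2-D array)
    baseline_ratio: output/input baseline length ratio."""
    scaled = baseline_ratio * np.asarray(disparity, dtype=float)
    jump = np.diff(scaled, axis=1)          # disparity step to the right neighbor
    holes = np.zeros(scaled.shape, dtype=bool)
    holes[:, 1:] = jump > 1.0               # step larger than one pixel -> hole
    return holes
```

Because the hole width grows with `baseline_ratio`, a single map at the reference viewpoint can bound the holes of all the synthesized views, which is what lets the framework fill them once in the SVRL instead of once per view.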
Archive | 2012
Gae-hwang Lee; Jae-eun Jung; Kyu-young Hwang
Archive | 2010
Jae Eun Jang; Gae-hwang Lee; Kyu-young Hwang; Jae-eun Jung
Archive | 2010
Gae-hwang Lee; Jae Eun Jang; Jae-eun Jung; Kyu-young Hwang
Archive | 2012
Kyu-young Hwang; Gae-hwang Lee; Jae-eun Jung; Chil-Sung Choi
Archive | 2011
Kyu-young Hwang; Gae-hwang Lee; Jae-eun Jung; Jae Eun Jang
Archive | 2011
Gae-hwang Lee; Jae Eun Jang; Jae-eun Jung; Kyu-young Hwang