
Publication


Featured research published by Hak Gu Kim.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Critical Binocular Asymmetry Measure for the Perceptual Quality Assessment of Synthesized Stereo 3D Images in View Synthesis

Yong Ju Jung; Hak Gu Kim; Yong Man Ro

In human vision, excessive binocular asymmetry between the left- and right-eye images can disrupt single binocular vision and cause visual discomfort when viewing stereoscopic images. In this paper, we propose a critical binocular asymmetry (CBA) measure for objectively assessing the perceptual quality of synthesized stereo 3D images generated through a depth-image-based rendering (DIBR) process. The proposed method detects critical regions that are likely to induce excessive binocular asymmetry. In particular, this paper considers view extrapolation, since it introduces many more artifacts due to the lack of data. We measure structural similarity on the critical regions between the left and right images to quantify the perceptual effects of binocular asymmetry. The effectiveness of the proposed quality measure was evaluated by subjective assessment experiments using various types of synthesized stereoscopic images generated by four different DIBR-based view synthesis algorithms. We demonstrate the validity of the proposed quality measure by comparison with subjective ratings and existing objective methods. Experimental results show that combining the proposed binocular asymmetry measure with existing quality measures substantially improves their performance by explicitly considering the perceptual effects of CBA.
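
The core of the measure, structural similarity restricted to critical regions of the stereo pair, can be sketched as follows. This is a minimal, hypothetical illustration: the function names, patch size, and the way the critical-region mask is supplied are assumptions, and the paper's actual critical-region detector is not reproduced here.

```python
import numpy as np

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Global SSIM between two same-sized grayscale patches (8-bit constants)."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (x.var() + y.var() + c2)
    return num / den

def binocular_asymmetry(left, right, critical_mask, patch=8):
    """Mean (1 - SSIM) over patches flagged as critical; higher = more asymmetry."""
    scores = []
    h, w = left.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if critical_mask[i:i+patch, j:j+patch].any():
                s = ssim(left[i:i+patch, j:j+patch], right[i:i+patch, j:j+patch])
                scores.append(1.0 - s)
    return float(np.mean(scores)) if scores else 0.0
```

A perfectly symmetric stereo pair scores near zero; a pair whose critical regions differ (e.g., from view-extrapolation artifacts in one eye's image) scores higher.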


international conference on digital signal processing | 2014

Investigating experienced quality factors in synthesized multi-view stereo images

Hak Gu Kim; Yong Ju Jung; Soo Sung Yoon; Yong Man Ro

In this study, we investigated the quality factors in view synthesis that might lead to visual discomfort and degradation of the overall viewing quality of synthesized multi-view stereo or free-viewpoint images. In particular, we focused on left and right (LR) image mismatch, which may be one of the most important quality factors in stereoscopic viewing. To measure how severely this factor influences visual comfort and overall quality, we conducted a series of subjective assessment experiments on visual comfort and viewing preference. In the subjective experiments, we compared the stereo view synthesis results of three different hole filling methods, because incorrectly filled holes can cause severe mismatches between the left- and right-view images. The subjective results revealed that LR image mismatch can induce more visual discomfort and lower the overall quality of stereoscopic images. The results also indicated that visual comfort and overall quality can be improved by considering inter-view consistency in multi-view stereo generation.


Optics Express | 2016

Acceleration of the calculation speed of computer-generated holograms using the sparsity of the holographic fringe pattern for a 3D object

Hak Gu Kim; Hyunwook Jeong; Yong Man Ro

In computer-generated hologram (CGH) calculations, a diffraction pattern needs to be calculated from all points of a 3-D object, which incurs a heavy computational cost. In this paper, we propose a novel fast computer-generated hologram calculation method using a sparse fast Fourier transform. The proposed method consists of two steps. First, the sparse dominant signals of the CGH are identified by calculating a wavefront on a virtual plane between the object and the CGH plane. Second, the wavefront on the CGH plane is calculated from the measured sparsity with sparse Fresnel diffraction. Experimental results show that the proposed method is much faster than existing methods while preserving visual quality.


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Multiview Stereoscopic Video Hole Filling Considering Spatiotemporal Consistency and Binocular Symmetry for Synthesized 3D Video

Hak Gu Kim; Yong Man Ro

This paper proposes a new hole-filling method with spatiotemporal consistency and binocular symmetry for synthesized 3D videos in view extrapolation. Disocclusion regions in the synthesized views at virtual viewpoints result in regions with missing content, referred to as hole regions. To provide high-quality synthesized 3D videos on 3D displays, the hole regions need to be filled considering the characteristics of human visual perception. From the perceptual point of view, binocular asymmetry between the synthesized left- and right-eye videos (i.e., the stereo pair) is one of the most important factors inducing visual discomfort in stereoscopic viewing. In addition, a lack of temporal consistency between neighboring frames can cause visual discomfort through annoying flickering artifacts. In this paper, to maintain spatiotemporal consistency and binocular symmetry in synthesized 3D videos at multiple virtual viewpoints, we propose a global optimization-based hole-filling method using information from the already filled adjacent view and previous frame. Furthermore, to reduce the computational cost of the global optimization, we propose a label propagation method, which propagates reliable labels used in the adjacent view and previous frame to the target image to be filled. The performance of the proposed method has been evaluated by objective assessments of 3D image quality, temporal consistency, and computational efficiency. In addition, a subjective assessment was conducted to measure visual comfort and overall quality. The experimental results show that the proposed method produces hole-filling results with spatiotemporal consistency and binocular symmetry.


international conference on image processing | 2016

Measurement of critical temporal inconsistency for quality assessment of synthesized video

Hak Gu Kim; Yong Man Ro

This paper proposes a new temporal consistency measure for quality assessment of synthesized video. Disocclusion regions appear as hole regions in the synthesized video at virtual viewpoints. Filling hole regions can be problematic when the synthesized video is viewed on multi-view displays. In particular, the temporal inconsistency caused by the hole filling process in view synthesis can affect the perceptual quality of the synthesized video. In the proposed method, we extract excessive flicker regions between consecutive frames and quantify the perceptual effects of the temporal inconsistency on them by measuring structural similarity. We demonstrate the validity of the proposed quality measure by comparison with subjective ratings and existing objective metrics. Experimental results show that the proposed temporal inconsistency measure is highly correlated with the overall quality of the synthesized video.
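
The two steps described above (flicker-region extraction between consecutive frames, then an SSIM-style statistic on those regions) can be sketched as follows. The fixed difference threshold and the single pooled SSIM term are simplifying assumptions; the paper's exact flicker detector is not reproduced.

```python
import numpy as np

def flicker_regions(prev, curr, thresh=25):
    """Binary mask of pixels whose frame-to-frame change exceeds a threshold."""
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh

def temporal_inconsistency(prev, curr, c1=6.5025, c2=58.5225):
    """1 - SSIM computed only over detected flicker pixels (0 if none found)."""
    mask = flicker_regions(prev, curr)
    if not mask.any():
        return 0.0
    x = prev[mask].astype(float)
    y = curr[mask].astype(float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    s = ((2 * mx * my + c1) * (2 * cov + c2)) \
        / ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))
    return 1.0 - s
```

Two identical frames score 0; strong localized flicker (e.g., a hole region whose fill changes between frames) pushes the score toward 1.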


international conference on image processing | 2015

Temporally consistent hole filling method based on global optimization with label propagation for 3D video

Hak Gu Kim; Soo Sung Yoon; Yong Man Ro

This paper presents a new temporally consistent hole filling method based on global optimization for a synthesized 3D video. For temporal consistency, the proposed method adaptively utilizes the already filled region in a previous frame, under the guidance of motion vectors, to fill a hole region in the current frame (i.e., the target frame to be filled). In addition, when filling the hole region in the target frame, reliable labels are stored and propagated to the next target frame to reduce the computational cost of the global optimization. Experimental results show that the proposed method achieves temporal consistency and a higher computational gain than existing hole filling methods.
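
The temporal-reuse step can be sketched as below. The full method solves a global optimization with label propagation; this toy version, which assumes a single global motion vector instead of per-pixel motion, only shows how pixels already filled in the previous frame are copied into the current frame's hole.

```python
import numpy as np

def fill_from_previous(curr, hole_mask, prev_filled, motion):
    """Fill hole pixels of the current frame from the previous (already filled)
    frame, displaced by an assumed global motion vector (dy, dx)."""
    out = curr.copy()
    dy, dx = motion
    h, w = curr.shape
    ys, xs = np.nonzero(hole_mask)
    # Source coordinates in the previous frame, clamped to the image bounds.
    src_y = np.clip(ys - dy, 0, h - 1)
    src_x = np.clip(xs - dx, 0, w - 1)
    out[ys, xs] = prev_filled[src_y, src_x]
    return out
```

Pixels the previous frame cannot explain (newly disoccluded content) would still need the optimization-based spatial fill; this sketch leaves that part out.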


Optics Express | 2017

Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object

Hak Gu Kim; Yong Man Ro

In this paper, we propose a new ultrafast layer-based CGH calculation that exploits the sparsity of the holographic fringe pattern in each 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe pattern at each object point position. Since the sparse template holographic fringe pattern is much smaller than the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method takes 10-20 ms for 1024x1024 pixels while providing visually plausible results.
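
The shift-and-add idea can be sketched as follows: every object point on a layer shares one precomputed zone-plate template that is accumulated at each point position. This is a hypothetical minimal version; the template size, parameter values, and the assumption that points keep the template inside the plane are illustrative, and the paper's actual sparse template construction is not reproduced.

```python
import numpy as np

def zone_plate(size, wavelength, z, pitch):
    """Fresnel zone-plate fringe of a single point emitter, computed once."""
    r = (np.arange(size) - size // 2) * pitch
    X, Y = np.meshgrid(r, r)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

def layer_cgh(points, plane_size, template):
    """Accumulate the small template at each object point on the CGH plane.
    Assumes each point is far enough from the border for the template to fit."""
    cgh = np.zeros((plane_size, plane_size), complex)
    t = template.shape[0]
    half = t // 2
    for (y, x) in points:
        cgh[y - half:y - half + t, x - half:x - half + t] += template
    return cgh

tmpl = zone_plate(64, 633e-9, 0.05, 8e-6)       # one template per depth layer
cgh = layer_cgh([(128, 128), (100, 180)], 256, tmpl)
```

The cost per point is the template area rather than the full plane area, which is where the speedup in the abstract comes from.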


conference on multimedia modeling | 2018

Teacher and Student Joint Learning for Compact Facial Landmark Detection Network.

Hong Joo Lee; Wissam J. Baddar; Hak Gu Kim; Seong Tae Kim; Yong Man Ro

Compact neural networks with limited memory and computation are in demand for today's mobile applications, where reducing network parameters is an important priority. In this paper, we address a compact neural network for facial landmark detection, a front-end module required for face analysis applications. We propose a new teacher and student joint learning method applicable to a compact facial landmark detection network. In the proposed learning scheme, the compact architecture of the student regression network is learned jointly with the fully connected layer of the teacher regression network so that the two networks mimic each other. To demonstrate the effectiveness of the proposed learning method, experiments were performed on a public database. The experimental results show that the proposed method can reduce network parameters while maintaining performance comparable to state-of-the-art methods.
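
The joint objective can be sketched as a landmark regression loss plus a mimicry term between the two networks' features. The L2 form of both terms and the weighting `alpha` are assumptions for illustration; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def joint_landmark_loss(student_feat, teacher_feat, pred, gt, alpha=0.5):
    """Landmark regression error plus an L2 mimic term that pulls the
    student's embedding toward the teacher's fully connected feature."""
    task = np.mean((pred - gt) ** 2)                       # regression loss
    mimic = np.mean((student_feat - teacher_feat) ** 2)    # feature mimicry
    return task + alpha * mimic
```

Minimizing the mimic term is what lets the much smaller student approach the teacher's accuracy despite having far fewer parameters.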


arXiv: Computer Vision and Pattern Recognition | 2018

ICADx: interpretable computer aided diagnosis of breast masses.

Seong Tae Kim; Hakmin Lee; Hak Gu Kim; Yong Man Ro

In this study, a novel computer aided diagnosis (CADx) framework is devised to investigate interpretability in classifying breast masses. Recently, deep learning technology has been successfully applied to medical image analysis, including CADx. Existing deep learning based CADx approaches, however, have a limitation in explaining their diagnostic decisions. In real clinical practice, decisions should be made with a reasonable explanation, which limits the real-world deployment of current deep learning approaches to CADx. In this paper, we investigate interpretability in CADx with the proposed interpretable CADx (ICADx) framework. The proposed framework is devised as a generative adversarial network, consisting of an interpretable diagnosis network and a synthetic lesion generative network, to learn the relationship between malignancy and a standardized description (BI-RADS). The lesion generative network and the interpretable diagnosis network compete in adversarial learning so that both networks improve. The effectiveness of the proposed method was validated on a public mammogram database. Experimental results show that the proposed ICADx framework provides interpretability of masses as well as mass classification. This is mainly attributed to the fact that the proposed method was effectively trained to find the relationship between malignancy and interpretations via adversarial learning. These results imply that the proposed ICADx framework could be a promising approach to developing CADx systems.
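
The adversarial competition described above can be sketched with the standard GAN loss pair: the diagnosis network scores real lesions high and synthetic ones low, while the generator is rewarded for fooling it. This is the generic formulation only; the ICADx conditioning on BI-RADS descriptors and the diagnosis head are omitted, and the function names are illustrative.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    """Binary cross-entropy: push real-lesion scores to 1, synthetic to 0."""
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-8):
    """Non-saturating generator loss: make fakes score as real."""
    return float(-np.mean(np.log(d_fake + eps)))
```

Alternating updates on these two losses is what drives both networks to improve, as the abstract describes.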


virtual reality software and technology | 2017

Measurement of exceptional motion in VR video contents for VR sickness assessment using deep convolutional autoencoder

Hak Gu Kim; Wissam J. Baddar; Heountaek Lim; Hyunwook Jeong; Yong Man Ro

This paper proposes a new objective metric of exceptional motion in VR video content for VR sickness assessment. In a VR environment, VR sickness can be caused by several factors, such as mismatched motion, field of view, motion parallax, and viewing angle. Similar to motion sickness, VR sickness can induce physical symptoms such as general discomfort, headache, stomach awareness, nausea, vomiting, fatigue, and disorientation. To address viewing safety issues in virtual environments, it is important to develop an objective VR sickness assessment method that predicts and analyzes the degree of VR sickness induced by VR content. The proposed method takes into account motion information, one of the most important factors in determining the overall degree of VR sickness. In this paper, we detect exceptional motion that is likely to induce VR sickness. Spatio-temporal features of the exceptional motion in the VR video content are encoded using a convolutional autoencoder. For objective assessment, the level of exceptional motion in the VR video content is likewise measured using the convolutional autoencoder. The effectiveness of the proposed method was evaluated by a subjective assessment experiment using simulator sickness questionnaires (SSQ) in a VR environment.
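
The underlying scoring principle (an autoencoder trained on typical motion reconstructs it well, so reconstruction error flags exceptional motion) can be sketched with a toy linear autoencoder. This is a stand-in for illustration only: the paper uses a deep convolutional autoencoder on spatio-temporal video features, and the feature dimensions, training data, and hyperparameters below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal motion" features: low-magnitude flow statistics.
normal = rng.normal(0.0, 0.1, (500, 16))

# Tiny linear autoencoder 16 -> 4 -> 16, trained by plain gradient descent.
W1 = rng.normal(0, 0.1, (16, 4))
W2 = rng.normal(0, 0.1, (4, 16))
lr = 0.1
for _ in range(300):
    h = normal @ W1                      # encode
    err = h @ W2 - normal                # reconstruction residual
    gW2 = h.T @ err / len(normal)
    gW1 = normal.T @ (err @ W2.T) / len(normal)
    W1 -= lr * gW1
    W2 -= lr * gW2

def sickness_score(x):
    """Reconstruction error: large for motion unlike the training distribution,
    used here as a proxy for the level of exceptional motion."""
    return float(np.mean((x @ W1 @ W2 - x) ** 2))
```

A motion feature drawn from the training distribution scores low; a large, atypical motion scores high, mirroring how the paper maps reconstruction behavior to a sickness level.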
