Publication


Featured research published by Hyoseok Hwang.


International Conference on Consumer Electronics | 1993

Interlaced to progressive scan conversion with double smoothing

Hyoseok Hwang; M.H. Lee; D.K. Ryu; D.I. Song

An interlaced-to-progressive (I/P) scan conversion scheme based on three-point vertical median filtering is described. The three points are the interlaced data passed through double smoothing (DS). The DS, based on order-statistic filtering preceded by censoring, eliminates impulsive and/or nonimpulsive noise while preserving signal edges. The three-point vertical median operation for I/P conversion controls adaptive switching between a moving-scene mode and a stationary-scene mode without complex motion and edge detection. A structure for implementing the proposed I/P conversion circuit is described. This circuit features an integrated dynamic RAM based on horizontal lines, a field memory, and a high-speed bubble sorter. Test results confirm that the proposed scheme is effective for I/P conversion of images contaminated with impulsive and/or nonimpulsive noise.
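The deinterlacing rule described above operates at the pixel level: each missing line sample is the median of the double-smoothed vertical neighbors above and below and the co-located sample from the previous field, which implicitly switches between intra-field (moving) and inter-field (stationary) interpolation. The following Python sketch illustrates only that three-point median rule; the double-smoothing step is approximated here by a plain 1-D median prefilter, since the exact censored order-statistic filter is not specified in the abstract.

import numpy as np

def double_smooth(line, k=3):
    # Crude stand-in for the paper's double smoothing (DS): a short 1-D
    # median filter that suppresses impulsive noise while keeping edges.
    # The actual DS uses censored order-statistic filtering.
    pad = k // 2
    padded = np.pad(line, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(line))])

def deinterlace_field(cur_field, prev_field):
    # Interlaced-to-progressive conversion by a three-point vertical median.
    #   cur_field  : lines of the current field (H/2 x W)
    #   prev_field : samples of the previous field co-located with the
    #                missing lines of the current field (H/2 x W)
    h2, w = cur_field.shape
    frame = np.zeros((2 * h2, w), dtype=float)
    frame[0::2] = cur_field                      # keep the existing lines
    for i in range(h2):
        above = double_smooth(cur_field[i])
        below = double_smooth(cur_field[min(i + 1, h2 - 1)])
        temporal = prev_field[i]
        # median of {above, below, temporal} selects intra-field interpolation
        # in moving regions and the temporal sample in stationary regions
        frame[2 * i + 1] = np.median(np.stack([above, below, temporal]), axis=0)
    return frame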


Intelligent Robots and Systems | 2012

Robust descriptors for 3D point clouds using Geometric and Photometric Local Feature

Hyoseok Hwang; Seungyong Hyung; Sukjune Yoon; Kyung Shik Roh

Robust perception is strongly needed for robots to handle various objects skillfully. In this paper, we propose a novel approach to recognize objects and estimate their 6-DOF pose using 3D feature descriptors, called Geometric and Photometric Local Feature (GPLF). The proposed descriptors use both the geometric and photometric information of 3D point clouds from an RGB-D camera and integrate that information into efficient descriptors. GPLF shows robust discriminative performance regardless of object characteristics such as shape or appearance in cluttered scenes. The experimental results show how well the proposed approach classifies and identifies objects. The performance of pose estimation is robust and stable enough for the robot to manipulate objects. We also compare the proposed approach with previous approaches that use partial information of objects on a representative large-scale RGB-D object dataset.
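The abstract does not give the exact construction of GPLF; the sketch below only illustrates the general idea of combining a geometric part (surface-normal statistics in a local neighborhood) with a photometric part (a color-derived histogram) into one descriptor per keypoint of an RGB-D point cloud. The function name, bin counts, and support radius are illustrative assumptions, not the authors' design.

import numpy as np

def local_descriptor(points, normals, colors, center_idx,
                     radius=0.05, geo_bins=8, photo_bins=8):
    # Toy geometric + photometric descriptor for one keypoint of an RGB-D
    # point cloud (not the actual GPLF construction).
    #   points  : (N, 3) xyz coordinates
    #   normals : (N, 3) unit surface normals
    #   colors  : (N, 3) RGB values in [0, 1]
    center = points[center_idx]
    mask = np.linalg.norm(points - center, axis=1) < radius   # local support
    # geometric part: histogram of angles between the keypoint normal
    # and the normals of its neighbors
    cosines = np.clip(normals[mask] @ normals[center_idx], -1.0, 1.0)
    geo_hist, _ = np.histogram(np.arccos(cosines), bins=geo_bins, range=(0, np.pi))
    # photometric part: histogram of neighbor intensities (mean of RGB)
    intensity = colors[mask].mean(axis=1)
    photo_hist, _ = np.histogram(intensity, bins=photo_bins, range=(0.0, 1.0))
    desc = np.concatenate([geo_hist, photo_hist]).astype(float)
    return desc / (np.linalg.norm(desc) + 1e-8)    # normalize for matching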


IEEE Transactions on Image Processing | 2017

3D Display Calibration by Visual Pattern Analysis

Hyoseok Hwang; Hyun Sung Chang; Dongkyung Nam; In So Kweon

Nearly all 3D displays need calibration for correct rendering. More often than not, the optical elements in a 3D display are misaligned from the designed parameter setting. As a result, the 3D effect does not work as intended, and the observed images tend to be distorted. In this paper, we propose a novel display calibration method to fix this situation. In our method, a pattern image is displayed on the panel and a camera captures it twice from different positions. Then, based on a quantitative model, we extract all display parameters (i.e., pitch, slanted angle, gap or thickness, and offset) from the observed patterns in the captured images. For high accuracy and robustness, our method analyzes the patterns mostly in the frequency domain. We conduct two types of experiments for validation: one with optical simulation for quantitative results and the other with real-life displays for qualitative assessment. Experimental results demonstrate that our method is quite accurate, about half an order of magnitude more accurate than prior work; is efficient, requiring less than 2 s of computation; and is robust to noise, working well at SNRs as low as 6 dB.
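The key idea described above is to recover the display parameters from the periodic pattern observed through the optical layer, working mostly in the frequency domain. The sketch below shows only the simplest piece of such a pipeline: locating the dominant spatial frequency of a single captured pattern with a 2-D FFT, from which an apparent pitch and slant angle can be read off. The full method in the paper, with two camera positions and recovery of gap and offset, is not reproduced here.

import numpy as np

def dominant_frequency(image):
    # Estimate the dominant spatial frequency of a periodic capture.
    #   image : 2-D grayscale photo of the displayed calibration pattern
    # Returns the peak frequency (cycles/pixel), the apparent pitch in
    # pixels, and the orientation of the periodic carrier in degrees.
    h, w = image.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    spectrum[h // 2, w // 2] = 0.0                 # suppress residual DC
    iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy = (iy - h // 2) / h                         # vertical cycles per pixel
    fx = (ix - w // 2) / w                         # horizontal cycles per pixel
    freq = np.hypot(fx, fy)
    pitch = 1.0 / freq                             # apparent period in pixels
    slant = np.degrees(np.arctan2(fy, fx))         # carrier orientation
    return (fy, fx), pitch, slant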


Optics Express | 2017

Local deformation calibration for autostereoscopic 3D display

Hyoseok Hwang; Hyun Sung Chang; In So Kweon

Calibration is vital to autostereoscopic 3D displays. This paper proposes a local calibration method that copes with any type of deformation in the optical layer. The proposed method is based on visual pattern analysis. Given the observations, we localize the optical slits by matching the observations to the input pattern. Within a principled optimization framework, we derive an efficient calibration algorithm. Experimental validation follows. The local calibration shows significant improvement in 3D visual quality over the global calibration method. This paper also provides a new intuitive insight into calibration in terms of light field theory.
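The abstract states that the slits are localized locally by matching the observations back to the known input pattern, rather than by fitting a single global parameter set. As an illustration only, the sketch below estimates a per-tile horizontal shift between the observed capture and the expected pattern using circular cross-correlation; the optimization framework used in the paper is more principled than this.

import numpy as np

def local_offsets(observed, expected, tile=64):
    # Per-tile horizontal offset between an observed capture and the
    # expected calibration pattern (toy stand-in for local calibration).
    #   observed, expected : 2-D arrays of the same shape
    # Returns one offset (in pixels) per tile.
    h, w = observed.shape
    rows, cols = h // tile, w // tile
    offsets = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            o = observed[r*tile:(r+1)*tile, c*tile:(c+1)*tile].mean(axis=0)
            e = expected[r*tile:(r+1)*tile, c*tile:(c+1)*tile].mean(axis=0)
            o, e = o - o.mean(), e - e.mean()
            # circular cross-correlation via FFT; the peak gives the shift
            corr = np.real(np.fft.ifft(np.fft.fft(o) * np.conj(np.fft.fft(e))))
            shift = int(np.argmax(corr))
            offsets[r, c] = shift if shift <= tile // 2 else shift - tile
    return offsets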


Proceedings of the IEEE | 2017

Flat Panel Light-Field 3-D Display: Concept, Design, Rendering, and Calibration

Dongkyung Nam; Jin-Ho Lee; Yang Ho Cho; Young Ju Jeong; Hyoseok Hwang; Du Sik Park

Recent autostereoscopic 3-D (A3D) displays suffer from many limitations such as narrow viewing angle, low resolution, and shallow depth effects. As these limitations mainly originate from the scarcity of pixel resources, it is not easy to find a single solution that removes all of them simultaneously; in many cases, a good compromise design is preferable. Generally, the multiview display and the integral imaging display are the representative A3D designs. However, as they are rather canonical and lack flexibility in design, they tend to impose a fixed tradeoff. To address these design issues, we have analyzed the multiview display and the integral imaging display in light-field coordinates and developed a 3-D display design framework in light-field space. The developed framework no longer uses the "view" concept. Instead, it considers the spatial distribution of rays of the 3-D display and provides more flexible and sophisticated design methods. In this paper, the developed design method is explained together with a new pixel value assignment algorithm, called light-field rendering, and vision-based parameter calibration methods for 3-D displays. We have also analyzed the blur effects caused by depth and display characteristics. Using the proposed method, we have designed a 65-in, 96-view display with a 4K panel. The developed prototype shows almost seamless parallax with a resolution comparable to that of conventional four- to five-view displays. This paper will be useful to readers interested in A3D displays, especially multiview and integral imaging displays.
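The core of the framework described above is a pixel-value assignment that treats every display subpixel as a ray in light-field space rather than as a member of a fixed view. A minimal sketch of such an assignment is shown below, assuming a slanted lenticular layer described by pitch, slant, and offset; it computes each subpixel's phase under its lens and samples the nearest of the available source views. The rendering and blur analysis in the paper are considerably more involved.

import numpy as np

def assign_light_field(views, pitch, slant, offset, height, width):
    # Toy light-field pixel assignment for a slanted-lens 3-D panel.
    #   views  : (V, H, W, 3) array of source view images
    #   pitch  : lens pitch in subpixels
    #   slant  : tangent of the lens slant angle
    #   offset : horizontal offset of the lens array in subpixels
    # Returns an (H, W*3) panel of interleaved subpixel values.
    v = views.shape[0]
    panel = np.zeros((height, width * 3))
    for y in range(height):
        for xs in range(width * 3):               # iterate over subpixels
            # fractional position of this subpixel under its lens
            phase = ((xs - slant * y - offset) % pitch) / pitch
            view_idx = int(phase * v) % v         # nearest available view
            panel[y, xs] = views[view_idx, y, xs // 3, xs % 3]
    return panel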


Digital Holography and Three-Dimensional Imaging | 2016

Eye Tracking based Glasses-free 3D Display by Dynamic Light Field Rendering

Seok Lee; Juyong Park; Jingu Heo; Byungmin Kang; Dongwoo Kang; Hyoseok Hwang; Jin-Ho Lee; Yoon-sun Choi; Kyu-hwan Choi

A glasses-free 3D display is developed using a dynamic light field rendering algorithm in which light field information is mapped in real time based on the 3D eye position. We implemented 31.5″ and 10.1″ prototypes.


International Conference on Consumer Electronics | 1994

A New Ghost Cancellation System

Kwang-hyuk Kim; Jisung Oh; M.H. Lee; Hyoseok Hwang; D.I. Song

We have developed a new ghost cancellation system for NTSC television. The essential elements of this system are a highly integrated transversal filter, a unique ghost canceling reference (GCR) signal, and a high-performance algorithm. Laboratory and field test results confirm that the system is effective in canceling several combinations of ghosts that exist in real situations.
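The system described above relies on a known ghost-canceling reference (GCR) signal to identify the ghost channel and on a transversal (FIR) filter to cancel it; the actual GCR waveform and adaptation algorithm are specific to the paper and are not reproduced here. The sketch below only illustrates the generic principle: estimate the channel impulse response from the received and transmitted reference by least squares, then apply a regularized approximate inverse as an FIR equalizer.

import numpy as np

def estimate_channel(received_gcr, transmitted_gcr, taps=64):
    # Least-squares FIR channel estimate from a known reference signal.
    n = len(received_gcr)
    cols = [np.concatenate([np.zeros(k), transmitted_gcr[:n - k]])
            for k in range(taps)]                  # shifted copies of the GCR
    a = np.stack(cols, axis=1)                     # (n, taps) convolution matrix
    h, *_ = np.linalg.lstsq(a, received_gcr, rcond=None)
    return h                                       # estimated impulse response

def equalize(signal, h, taps=256):
    # Cancel ghosts with a transversal filter approximating the inverse channel.
    hf = np.fft.fft(h, taps)
    inv = np.conj(hf) / (np.abs(hf) ** 2 + 1e-3)   # regularized inverse response
    g = np.real(np.fft.ifft(inv))                  # equalizer (transversal) taps
    return np.convolve(signal, g, mode="same")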


Optical Engineering | 2017

Uncalibrated multiview synthesis

Young Ju Jeong; Hyun Sung Chang; Hyoseok Hwang; Dongkyung Nam; C.-C. Jay Kuo

Nonideal stereo videos do not hinder the viewing experience on stereoscopic displays. For autostereoscopic displays, however, nonideal stereo videos are the main cause of reduced three-dimensional quality, producing calibration artifacts and multiview synthesis artifacts. We propose an efficient multiview rendering algorithm for autostereoscopic displays that takes uncalibrated stereo as input. First, the epipolar geometry of multiple viewpoints is analyzed for multiview displays. The uncalibrated camera poses for the multiview display viewpoints are then estimated by algebraic approximation. The multiview images of the approximated uncalibrated camera poses do not contain any projection or warping distortion. Finally, by exploiting the rectification homographies and the disparities of the rectified stereo pair, one can determine the multiview images at the estimated camera poses. The experimental results show that the multiview synthesis algorithm provides results that are temporally consistent and well calibrated, without warping distortion.
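As the abstract outlines, the pipeline rectifies the uncalibrated stereo pair, approximates poses for the target viewpoints, synthesizes each intermediate view from the rectified images and their disparities, and finally undoes the rectification. The sketch below shows only the disparity-scaled view interpolation step on an already-rectified pair; homography estimation and the algebraic pose approximation are omitted, and the disparity sign convention is an assumption.

import numpy as np

def synthesize_view(left, disparity, alpha):
    # Forward-warp a rectified left image to a virtual viewpoint.
    #   left      : (H, W) or (H, W, 3) rectified left image
    #   disparity : (H, W) per-pixel disparity of the rectified pair
    #   alpha     : virtual camera position (0 = left, 1 = right)
    h, w = disparity.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disparity[y, x]))   # shift by scaled disparity
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    # naive hole filling: copy the nearest filled pixel from the left
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out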


Novel Optical Systems Design and Optimization XX | 2017

Glasses-free 2D/3D switchable display using an integrated single light-guide plate (LGP) with a trapezoidal light-extraction (TLE) film

Jin-Ho Lee; Yoon-sun Choi; Igor Yanusik; Alexander Morozov; Hyoseok Hwang; Dongkyung Nam; Du Sik Park

A 10.1-inch 2D/3D switchable display using an integrated single light-guide plate (LGP) with a trapezoidal light-extraction (TLE) film was designed and fabricated. The integrated single LGP was composed of inverted trapezoidal line structures, made by attaching a TLE film to its top surface, and cylindrical lens structures on its bottom surface. The top surface of the TLE film was also bonded to the bottom surface of an LCD panel to maintain 3D image quality, which can be seriously degraded by gap variations between the LCD panel and the LGP. The inverted trapezoidal line structures act as the slit apertures of parallax barriers in 3D mode. Light beams from LED light sources placed along the left and right edges of the LGP bounce between the top and bottom surfaces of the LGP and, when they strike the inclined surfaces of the inverted trapezoidal structures, are emitted toward the LCD panel. Light beams from LED light sources arranged on the top and bottom edges of the LGP are emitted toward the lower surface when they strike the cylindrical lens structures and are reflected to the front surface by a reflective film in 2D mode. By applying the integrated single LGP with a TLE film, we constructed a 2D/3D switchable display prototype with a 10.1-inch tablet panel of WUXGA resolution (1,200×1,920). Consequently, we obtained light-field 3D and 2D display images without interference artifacts between the two modes, and also achieved luminance uniformity of over 80%. This display generates both 2D and 3D images without increasing the thickness or power consumption of the display device.


Biomedical Engineering Systems and Technologies | 2016

Feasibility of Eye-tracking based Glasses-free 3D Autostereoscopic Display Systems for Medical 3D Images

Dongwoo Kang; Seok Lee; Hyoseok Hwang; Juyong Park; Jingu Heo; Byongmin Kang; Jin-Ho Lee; Yoon-sun Choi; Kyu-hwan Choi; Dongkyung Nam

Medical image diagnosis processes with stereoscopic depth provided by 3D displays have not yet been developed widely and remain understudied. Many stereoscopic displays require glasses, which are inappropriate for clinical diagnosis, explanation, and operating processes in hospitals. An eye-tracking-based glasses-free three-dimensional autostereoscopic display monitor system has been developed, and its feasibility for medical 3D images was investigated as a cardiac CT 3D navigator. Our autostereoscopic system uses a slit barrier with a backlight unit (BLU), combined with our vision-based eye tracking system, to display 3D images. A dynamic light field rendering technique is applied using the 3D coordinates calculated by the eye tracker, in order to provide a single viewer with the best 3D images and less crosstalk. To investigate the feasibility of our autostereoscopic system, a 3D volume was rendered from 3D coronary CTA images (512 × 512 × 400). One expert reader identified the three main artery structures (LAD, LCX, and RCA) in a shorter time than with an existing 2D display. The reader did not report any eye fatigue or discomfort. In conclusion, we propose a 3D cardiac CT navigator system with a new glasses-free 3D autostereoscopic display, which may improve diagnosis accuracy and speed up the diagnosis process.
