
Publication


Featured research published by Kee-Hyon Park.


Color Imaging Conference | 2005

Color decomposition method for multiprimary display using 3D-LUT in linearized LAB space

Dong-Woo Kang; Yun-Tae Kim; Yang-Ho Cho; Kee-Hyon Park; Won-Hee Choe; Yeong-Ho Ha

This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up table (3D-LUT) in linearized LAB space. The proposed method decomposes the conventional three primary colors into multi-primary control values for a display device under the constraints of tristimulus matching. To reproduce images on an MPD, the color signals are estimated from a device-independent color space, such as CIEXYZ or CIELAB. In this paper, linearized LAB space is used due to its linearity and additivity in color conversion. First, the proposed method constructs a 3D-LUT containing gamut boundary information to calculate the color signals for the MPD in linearized LAB space. For image reproduction, standard RGB or CIEXYZ is transformed to linearized LAB, then the hue and chroma are computed with reference to the 3D-LUT. In linearized LAB space, the color signals for a gamut boundary point are calculated to have the same lightness and hue as the input point, and the color signals for a point on the gray axis are calculated to have the same lightness as the input point. Based on the gamut boundary point and the gray-axis point, the color signals for the input point are then obtained using the ratio of the input chroma to the chroma of the gamut boundary point. In particular, when the hue changes, the neighboring boundary points are also employed. As a result, the proposed method guarantees color-signal continuity and computational efficiency, and requires less memory.
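The chroma-ratio step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `boundary_chroma` is a hypothetical callable standing in for a lookup into the paper's gamut-boundary 3D-LUT.

```python
def chroma_ratio(L, C, h, boundary_chroma):
    """Express an input color's chroma as a fraction of the gamut-boundary
    chroma at the same lightness L and hue h, clipped to stay in gamut.

    boundary_chroma(L, h) stands in for the 3D-LUT of boundary points.
    """
    C_b = boundary_chroma(L, h)   # boundary chroma at the same (L, h)
    return min(C / C_b, 1.0)      # ratio 0 = gray axis, ratio 1 = boundary

# Toy boundary model: a constant boundary chroma of 40 at every (L, h).
ratio = chroma_ratio(50.0, 20.0, 100.0, lambda L, h: 40.0)
```

The multi-primary control values for the input point would then be interpolated between the gray-axis point (ratio 0) and the boundary point (ratio 1) at that ratio.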


Machine Vision and Its Optomechatronic Applications | 2004

RGB look-up table design for color matching between monitor and mobile display

Kee-Hyon Park; Myong-Young Lee; Yang-Ho Cho; Yeong-Ho Ha

This paper proposes a color-matching 3D look-up table that simplifies the complex color-matching procedure between a monitor and a mobile display device, where the image colors are processed in a device-independent color space, such as CIEXYZ or CIELAB, and gamut mapping is performed to compensate for the gamut difference. Compared with a monitor, mobile displays are unable to display images with good color fidelity due to their smaller gamut, dimmer luminance, and weaker color reproduction ability related to their low power consumption. As such, the image colors displayed on a monitor and a mobile display can be significantly different for the same input digital values. Thus, to solve this problem, a color-matching process between a monitor and a mobile display is needed that includes both color management in a device-independent color space and gamut mapping to compensate for the significant gamut difference. Yet, since these procedures involve many complex arithmetic functions, simplification is required for realization in mobile devices. Accordingly, this paper proposes a color-matching look-up table to simplify the complex color-matching procedures for use in a mobile display. Moreover, the performance of the proposed color-matching look-up table is evaluated with different table sizes to determine the minimum size. Color-matching experiments between a monitor and a mobile display show that the images on the mobile display reflect the monitor images better after color matching than without.
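Applying such a 3D look-up table at runtime typically reduces to trilinear interpolation between the eight nearest grid nodes. The sketch below illustrates that general mechanism under the assumption of an N x N x N x 3 table over RGB in [0, 1]; the grid size and layout are illustrative, not taken from the paper.

```python
import numpy as np

def trilinear_lut(lut, rgb):
    """Apply a 3D color LUT (shape N x N x N x 3, input RGB in [0, 1])
    to one RGB triple using trilinear interpolation."""
    n = lut.shape[0] - 1                  # number of grid intervals per axis
    pos = np.asarray(rgb, dtype=float) * n
    i0 = np.minimum(np.floor(pos).astype(int), n - 1)  # lower grid corner
    f = pos - i0                          # fractional position in the cell
    out = np.zeros(3)
    for dr in (0, 1):                     # blend the 8 surrounding nodes
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[i0[0] + dr, i0[1] + dg, i0[2] + db]
    return out
```

Evaluating the table with smaller N trades memory for interpolation error, which is the trade-off the paper's minimum-size experiments quantify.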


Color Imaging Conference | 2008

Efficient HDR image acquisition using estimation of scenic dynamic range in camera images with different exposures

Dae-Keun Park; Kee-Hyon Park; Tae-Hyoung Lee; Myong-Hui Choi; Yeong-Ho Ha

Generally, to acquire an HDR image, many images with different exposure times covering the entire dynamic range of the scene are required, and these images are then fused into one HDR image. This paper proposes an efficient method for HDR image acquisition with a small number of images. First, we estimate the scenic dynamic range using two images with different exposure times; these two images contain the upper and lower limits of the scenic dynamic range. Independently of the scene, as the exposure time is varied, similar characteristics are identified for both the maximum gray levels in images that include the upper limit and the minimum gray levels in images that include the lower limit. After modeling these characteristics, the scenic dynamic range is estimated using the modeling results. This estimated scenic dynamic range is then used to select the proper exposure times for the acquisition of an HDR image. We selected only three exposure times because the entire scenic dynamic range could be covered by three camera dynamic ranges with different exposure times. To evaluate the error of the HDR image, experiments using virtual digital camera images were carried out. For several test images, the error of the HDR image using the proposed method was comparable to that of an HDR image that utilizes more than ten images for acquisition.
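The exposure-selection step can be illustrated with a simple reciprocity-based sketch: once the scenic luminance range is estimated, three captures target its top, geometric middle, and bottom. This is an assumption-laden stand-in for the paper's selection rule; `t_ref`, `lum_ref`, and the geometric-mean anchoring are illustrative choices, not from the paper.

```python
import math

def select_exposure_times(lum_min, lum_max, t_ref=1 / 60, lum_ref=100.0):
    """Return (short, mid, long) exposure times covering the estimated
    scenic range [lum_min, lum_max], assuming reciprocity and a reference
    pairing in which t_ref correctly exposes luminance lum_ref."""
    def t_for(lum):
        return t_ref * lum_ref / lum       # brighter target -> shorter exposure
    mid = math.sqrt(lum_min * lum_max)     # geometric mean of the range
    return t_for(lum_max), t_for(mid), t_for(lum_min)
```

With only two probe shots plus three selected captures, the total image count stays far below the ten-plus captures of an exhaustive bracket.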


International Conference on Image Processing | 2007

Hue-Shift Modeling and Correction Method for High-Luminance Display

Tae-Hyoung Lee; Oh-Seol Kwon; Kee-Hyon Park; Yeong-Ho Ha

The human eye usually experiences a loss of color sensitivity when it is subjected to high levels of luminance, and perceives a discrepancy in color between high- and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method to provide perceptual matching between high- and normal-luminance displays. To quantify the hue-shift phenomenon over the whole hue angle, 24 color patches with the same lightness are first created, equally spaced in hue angle. These patches are then displayed one by one on both displays with different luminance levels. Next, the hue value for each patch appearing on the high-luminance display is adjusted by observers until the perceived hue for the patches on both displays appears visually the same. After obtaining the hue-shift values from the color-matching experiment, these values are fitted piecewise with seven sinusoidal functions, allowing the hue-shift amount to be approximately determined for an arbitrary hue value of a pixel in a high-luminance display and then used for correction. Essentially, an input RGB image is converted to CIELAB LCh (lightness, chroma, and hue) color space to obtain the hue values for all the pixels, then these hue values are shifted according to the amount calculated by the functions of the hue-shift model. Finally, the corrected image is inversely converted to an output RGB image. For evaluation, a matching experiment was performed using several test images and z-score comparisons.
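The correction step can be sketched as below, with a single sinusoid standing in for the paper's piecewise fit of seven sinusoidal functions; the `amplitude` and `phase` parameters (in degrees) are illustrative assumptions, not fitted values from the paper.

```python
import math

def correct_hue(h_deg, amplitude=6.0, phase=30.0):
    """Subtract the modeled hue shift at hue angle h_deg (CIELAB LCh),
    wrapping the result to [0, 360).

    A single sinusoid amplitude * sin(h - phase) stands in for the
    piecewise seven-sinusoid hue-shift model."""
    shift = amplitude * math.sin(math.radians(h_deg - phase))
    return (h_deg - shift) % 360.0
```

In the full pipeline this function would be applied per pixel after the RGB-to-LCh conversion, followed by the inverse conversion back to RGB.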


Color Imaging Conference | 2007

Modeling for hue shift effect of human visual system on high luminance display

Tae-Hyoung Lee; Myong-Young Lee; Kee-Hyon Park; Yeong-Ho Ha

This paper proposes a color correction method based on modeling the hue-shift phenomenon of the human visual system (HVS). Observers tend to perceive the same color stimuli at different intensities as different hues, which is referred to as the hue-shift effect. Although the effect can be explained by the Bezold-Brücke (B-B) effect, the B-B model cannot be applied directly to high-luminance displays because most displays have a broad-band spectral distribution and results vary according to the type of display. In the proposed method, the amount of hue shift between a high-luminance display and a normal-luminance display was first measured in a color-matching experiment with color samples along the hue angle of the LCH color space. Based on the results, the hue shift was then modeled piecewise and finally applied to the inverse characterization of the display to compensate the original input image. From evaluating the proposed method in a psychophysical experiment with several test images, we confirmed that the proposed modeling method is effective for color correction on high-luminance displays.


Electronic Imaging | 2006

Illuminant-adaptive color reproduction for mobile display

Jong-Man Kim; Kee-Hyon Park; Oh-Seol Kwon; Yang-Ho Cho; Yeong-Ho Ha

This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for a mobile display. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena decrease the color gamut of a mobile display by increasing the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor, then the flare is calculated based on the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear in the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization. In this paper, the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from flare phenomena. The reduced chroma value is calculated according to the flare for each intensity. The chroma compensation method that maintains the original image's chroma is applied differently for each hue plane, as the flare affects each hue plane differently. The enhanced chroma also respects the gamut boundary. Based on experimental observations, the outdoor light intensity generally ranges from 1,000 lux to 30,000 lux. Thus, for an outdoor environment, i.e., greater than 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and flare conditions. Consequently, the proposed algorithm improves the quality of the perceived image adaptively to an outdoor environment.
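The flare term above can be sketched with the standard diffuse-reflection relation between illuminance and reflected luminance, L = rho * E / pi; the 4% reflection ratio used as a default here is an illustrative assumption, not a value from the paper.

```python
import math

def flare_luminance(ambient_lux, reflection_ratio=0.04):
    """Luminance (cd/m^2) added to the display by ambient light reflecting
    off its surface, via the diffuse-reflection relation L = rho * E / pi,
    where E is the measured ambient illuminance in lux."""
    return reflection_ratio * ambient_lux / math.pi
```

This flare luminance is what raises the black level and de-saturates chroma, so the chroma compensation would scale with it per hue plane.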


Journal of Imaging Science and Technology | 2009

Color Correction by Estimation of Dominant Chromaticity in Multi-Scaled Retinex

In-Su Jang; Kee-Hyon Park; Yeong-Ho Ha


Journal of Imaging Science and Technology | 2007

Surface Reflectance Estimation Using the Principal Components of Similar Colors

Oh-Seol Kwon; Cheol-Hee Lee; Kee-Hyon Park; Yeong-Ho Ha


Color Imaging Conference | 2008

Modified Multi-scaled Retinex Using Chromaticity of Highlight Region for Correcting Color Distortion

In-Su Jang; Kee-Hyon Park; Yeong-Ho Ha


Journal of Imaging Science and Technology | 2007

Banding-Artifact Reduction Using an Improved Threshold Scaling Function in Multitoning with Stochastic Screen

Tae-Yong Park; Kee-Hyon Park; In-Su Jang; Oh-Seol Kwon; Yeong-Ho Ha

Collaboration


Top co-authors of Kee-Hyon Park, all affiliated with Kyungpook National University:

Yeong-Ho Ha
Oh-Seol Kwon
In-Su Jang
Yang-Ho Cho
Tae-Hyoung Lee
Cheol-Hee Lee
Myong-Young Lee
Chang-Hwan Son
Dae-Geun Park
Tae-Yong Park