Oh-Seol Kwon
Kyungpook National University
Publications
Featured research published by Oh-Seol Kwon.
IEEE Transactions on Consumer Electronics | 2010
Oh-Seol Kwon; Yeong-Ho Ha
The resolutions offered by today's multimedia vary significantly owing to the development of video technology. For example, there is a huge gap between the resolution of cellular phones as small input devices and beam projectors as large output devices. Thus, panoramic video technology is one method that can convert a small resolution into a large resolution to lend realism and wide vision to a scene. Yet, transforming the resolution of an image requires feature or object matching based on extracting important information from the image, where the scale-invariant feature transform (SIFT) is one of the most robust and widely used methods. However, since SIFT extracts features using only gray information, identifying corresponding points becomes difficult under changing illumination or between two surfaces with a similar intensity. Therefore, this paper proposes a method of image stitching based on color-invariant features for automated panoramic videos. Color-invariant features can discount the illumination, highlights, and shadows in a scene, as they capture a property of the surface reflectance that is independent of illumination changes. The effectiveness and accuracy of the feature matching with the proposed algorithm are verified using objects and illuminations in a booth, followed by panoramic videos.
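As a rough illustration of the idea, the sketch below (Python with OpenCV and NumPy, not the authors' code) runs SIFT on a simple color-invariant channel instead of plain gray values. The normalized-r chromaticity is only a stand-in for the paper's reflectance-based invariant, and the 0.75 ratio threshold is an assumed value.

```python
# Minimal sketch: SIFT matching on a color-invariant channel for stitching.
# Assumes OpenCV >= 4.4 (SIFT in the main module) and NumPy.
import cv2
import numpy as np

def color_invariant(img_bgr):
    """Stand-in invariant channel: normalized-r chromaticity r = R/(R+G+B),
    which discounts intensity changes (the paper derives its invariant from
    surface reflectance instead)."""
    img = img_bgr.astype(np.float32) + 1e-6
    r_norm = img[:, :, 2] / img.sum(axis=2)
    return (255 * r_norm).astype(np.uint8)

def match_invariant_features(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(color_invariant(img1), None)
    k2, d2 = sift.detectAndCompute(color_invariant(img2), None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # stitching transform
    return H
```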
International Conference on Image Processing | 2007
Tae-Hyoung Lee; Oh-Seol Kwon; Kee-Hyon Park; Yeong-Ho Ha
The human eye usually experiences a loss of color sensitivity when subjected to high levels of luminance, and it perceives a discrepancy in color between high- and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method to provide a perceptual match between high- and normal-luminance displays. To quantify the hue-shift phenomenon over the whole hue angle, 24 color patches with the same lightness are first created, equally spaced around the hue circle. These patches are then displayed one by one on both displays at their different luminance levels. Next, the hue value of each patch appearing on the high-luminance display is adjusted by observers until the perceived hues of the patches on both displays appear visually identical. After the hue-shift values are obtained from this color-matching experiment, they are fitted piecewise with seven sinusoidal functions, so that the hue-shift amount can be approximated for an arbitrary hue value of a pixel in a high-luminance display and then used for correction. Essentially, an input RGB image is converted to the CIELAB LCh (lightness, chroma, and hue) color space to obtain the hue values of all the pixels, these hue values are shifted by the amounts calculated from the hue-shift model, and the corrected image is finally converted back to an output RGB image. For evaluation, a matching experiment was performed using several test images and z-score comparisons.
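A minimal sketch of the correction pipeline described above, assuming scikit-image for the color-space conversions; the single sinusoid in hue_shift_model is a hypothetical stand-in for the paper's seven piecewise-fitted sinusoids.

```python
# Illustrative sketch (not the authors' fitted model): hue correction in
# CIELAB LCh. Input is a float RGB image in [0, 1]; assumes scikit-image.
import numpy as np
from skimage import color

def hue_shift_model(h_deg):
    """Hypothetical hue-shift amount (degrees) as a function of hue angle."""
    return 5.0 * np.sin(np.radians(2.0 * h_deg))

def correct_hue(rgb):
    lab = color.rgb2lab(rgb)                        # RGB -> CIELAB
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                              # chroma
    h = np.degrees(np.arctan2(b, a))                # hue angle
    h_corr = h - hue_shift_model(h)                 # compensate perceived shift
    lab_corr = np.stack([L,
                         C * np.cos(np.radians(h_corr)),
                         C * np.sin(np.radians(h_corr))], axis=-1)
    return color.lab2rgb(lab_corr)                  # back to RGB
```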
Electronic Imaging | 2006
Jong-Man Kim; Kee-Hyon Park; Oh-Seol Kwon; Yang-Ho Cho; Yeong-Ho Ha
This paper proposes an illuminant-adaptive reproduction method using light adaptation and flare conditions for a mobile display. Mobile displays, such as PDAs and cellular phones, are viewed under various lighting conditions. In particular, images displayed in daylight are perceived as quite dark due to the light adaptation of the human visual system, as the luminance of a mobile display is considerably lower than that of an outdoor environment. In addition, flare phenomena decrease the color gamut of a mobile display by increasing the luminance of dark areas and de-saturating the chroma. Therefore, this paper presents an enhancement method composed of lightness enhancement and chroma compensation. First, the ambient light intensity is measured using a lux sensor, and the flare is then calculated based on the reflection ratio of the display device and the ambient light intensity. The relative cone response is nonlinear with respect to the input luminance and also changes with the ambient light intensity. Thus, to improve the perceived image, the displayed luminance is enhanced by lightness linearization: the image's luminance is transformed by linearizing the response to the input luminance according to the ambient light intensity. Next, the displayed image is compensated for the physically reduced chroma resulting from flare phenomena. The reduced chroma value is calculated from the flare for each intensity. The chroma compensation used to maintain the original image's chroma is applied differently for each hue plane, as flare affects each hue plane differently, and the enhanced chroma is also kept within the gamut boundary. Based on experimental observations, the outdoor luminance intensity generally ranges from 1,000 lux to 30,000 lux. Thus, for outdoor environments, i.e., above 1,000 lux, this study presents a color reproduction method based on an inverse cone response curve and the flare condition. Consequently, the proposed algorithm improves the quality of the perceived image adaptively for an outdoor environment.
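The sketch below illustrates the general shape of such a correction under stated assumptions; the reflection ratio, the Naka-Rushton cone model with n = 1, the adaptation constant, and the display peak are all assumed values, not the paper's measured parameters.

```python
# Illustrative sketch (not the authors' implementation): flare from ambient
# illuminance, plus an inverse-cone-response lightness linearization.
import numpy as np

REFLECTION_RATIO = 0.04  # assumed panel reflectance

def flare_luminance(ambient_lux):
    """Luminance added to the display by ambient light reflected off the
    panel (Lambertian conversion from lux to cd/m^2)."""
    return REFLECTION_RATIO * ambient_lux / np.pi

def enhance_lightness(lum, ambient_lux, display_peak=300.0):
    """Boost displayed luminance so the adapted cone response varies
    linearly with the intended signal. Inverts R = L / (L + sigma)."""
    sigma = 0.2 * ambient_lux / np.pi            # assumed adaptation level
    target = np.clip(lum / display_peak, 0.0, 0.99)
    return np.clip(sigma * target / (1.0 - target), 0.0, display_peak)
```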
International Conference on Image Processing | 2004
Oh-Seol Kwon; Yang-Ho Cho; Yun-Tae Kim; Yeong-Ho Ha
This paper proposes a method for estimating the illuminant chromaticity using the distributions of camera responses obtained by a CCD camera in a real-world scene. Illuminant estimation using a highlight method is based on the geometric relation between a body and its surface reflection. In general, the pixels in a highlight region are affected by an illuminant geometric difference, camera quantization errors and the nonuniformity of the CCD sensor. As such, this leads to inaccurate results if an illuminant is estimated using the pixels of a CCD camera without any preprocessing. Accordingly, to solve this problem, the proposed method analyzes the distribution of the CCD camera responses and selects pixels using the Mahalanobis distance in highlight regions. The use of the Mahalanobis distance based on the camera responses enables the adaptive selection of valid pixels among the pixels distributed in the highlight regions. Lines are then determined based on the selected pixels with r-g chromaticity coordinates using a principal component analysis (PCA). Thereafter, the illuminant chromaticity is estimated based on the intersection points of the lines. Experimental results using the proposed method demonstrated a reduced estimation error compared with the conventional method.
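A minimal sketch of the selection-and-intersection idea, assuming NumPy; highlight-region segmentation and r-g chromaticity conversion are taken as given, and the distance threshold is an assumed value.

```python
# Illustrative sketch (not the authors' code): select valid highlight pixels
# by Mahalanobis distance, fit a line per region with PCA in r-g chromaticity,
# and intersect two such lines to estimate the illuminant chromaticity.
import numpy as np

def mahalanobis_select(pixels, max_dist=2.0):
    """Keep pixels close (in Mahalanobis distance) to the region mean."""
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels.T))
    d = np.sqrt(np.einsum('ij,jk,ik->i', pixels - mu, cov_inv, pixels - mu))
    return pixels[d < max_dist]

def pca_line(rg):
    """Line (point + direction) through r-g chromaticities via PCA."""
    mu = rg.mean(axis=0)
    _, _, vt = np.linalg.svd(rg - mu)
    return mu, vt[0]                       # mean and first principal axis

def intersect(p1, d1, p2, d2):
    """Intersection of two 2D lines p + t*d: the illuminant estimate."""
    t = np.linalg.solve(np.array([d1, -d2]).T, p2 - p1)
    return p1 + t[0] * d1
```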
Electronic Imaging | 2008
Dong-Chang Lee; Oh-Seol Kwon; Kyung-Woo Ko; Ho-Young Lee; Yeong-Ho Ha
In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing the neighborhoods of feature points extracted from corresponding images. However, image mosaicking based on feature points has recently attracted more attention due to the simplicity of the geometric transformation, regardless of the distortion and intensity differences generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult in the case of changing illumination and images with a similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on feature points that uses the color information of the images. Essentially, the digital values acquired from a real digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values under changing illuminations. The validity of these color-invariant values is verified in a test using a Macbeth ColorChecker under simulated illuminations. The test also compares the proposed method using the color-invariant values with the conventional SIFT algorithm. The accuracy of matching between feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
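The sketch below illustrates the narrow-band invariance argument under simplifying assumptions: the matrix M is a placeholder for a calibrated camera-to-virtual-sensor transform, and neighboring-pixel log ratios stand in for the paper's reflectance-based invariant values.

```python
# Illustrative sketch (not the authors' derivation): map camera RGB to a
# virtual narrow-band camera, then form illumination-invariant values.
import numpy as np

M = np.eye(3)  # placeholder 3x3 transform (assumed calibrated in practice)

def color_invariant_values(rgb):
    """Under narrow bands, a spatially uniform illuminant scales channel k by
    a constant e_k, so the ratio of a channel between neighboring pixels
    cancels e_k and depends only on surface reflectance."""
    narrow = rgb.astype(np.float64) @ M.T + 1e-6   # virtual camera responses
    log_n = np.log(narrow)
    # horizontal log-ratio per channel: log(s_k(x+1) / s_k(x))
    return log_n[:, 1:, :] - log_n[:, :-1, :]
```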
Color Imaging Conference | 2007
Kyung-Woo Ko; Oh-Seol Kwon; Chang-Hwan Son; Eun-Young Kwon; Yeong-Ho Ha
This paper proposes a colorization method that uses wavelet packet sub-bands to embed color components. The proposed method first involves a color-to-gray process, in which an input RGB image is converted into Y, Cb, and Cr images, and a wavelet packet transform is applied to the Y image to divide it into 16 sub-bands. The Cb and Cr images are then embedded into the two sub-bands that contain minimum information on the Y image. Once the inverse wavelet packet transform is carried out, a new gray image with texture is obtained, where the color information appears as texture patterns that change according to the Cb and Cr components. Second, a gray-to-color process is performed. The printed textured-gray image is scanned and divided into 16 sub-bands using a wavelet packet transform to extract the Cb and Cr components, and an inverse wavelet packet transform is used to reconstruct the Y image. Although some of the original information is lost in the color-to-gray process, the details of the reconstructed Y image are almost the same as those of the original Y image because the sub-bands with minimum information were used to embed the Cb and Cr components. The RGB image is then reconstructed by combining the Y image with the Cb and Cr images. In addition, to recover color saturations more accurately, gray patches are used to compensate for the characteristics of the printers and scanners. As a result, the proposed method can improve both the boundary details and the color saturations in recovered color images.
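A minimal sketch of the color-to-gray embedding step, assuming PyWavelets and OpenCV; the choice of the 'dd' and 'dh' sub-bands and the plain resize of the chroma planes are assumptions, whereas the paper selects the two sub-bands carrying minimum Y information. The print-and-scan round trip and patch-based compensation are omitted.

```python
# Illustrative sketch (not the authors' pipeline): embed chroma into two
# level-2 wavelet-packet sub-bands of the luma channel.
import numpy as np
import pywt
import cv2

def color_to_textured_gray(bgr):
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    y, cr, cb = ycc[..., 0], ycc[..., 1], ycc[..., 2]
    wp = pywt.WaveletPacket2D(y, 'haar', maxlevel=2)  # 16 sub-bands at level 2
    h, w = wp['dd'].data.shape                        # sub-band resolution
    # overwrite two high-frequency sub-bands (assumed low-information) with chroma
    wp['dd'].data = cv2.resize(cb - 128.0, (w, h))
    wp['dh'].data = cv2.resize(cr - 128.0, (w, h))
    return wp.reconstruct(update=False)               # gray image with texture
```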
Journal of Imaging Science and Technology | 2013
Wang-Jun Kyung; Dae-Chul Kim; Oh-Seol Kwon; Yeong-Ho Ha
The correction of faded colors in old pictures, prints, and paintings is an interesting issue for color image processing. Several techniques have already been introduced to enhance faded color images, many of which approach the problem as color cast removal and use global illuminant estimation methods, such as the gray world or Von Kries assumptions. However, the use of simple global operators to eliminate the illuminant effects is not always suitable for enhancing faded images. Therefore, this article presents a color correction algorithm based on a multi-scale gray world algorithm for faded color images. First, the proposed method adopts a local process using multi-scale filters, and coefficients are obtained for each filtered image. Integration of the coefficients using weights is then performed to calculate the correction ratio for the red and blue channels in the gray world assumption. Finally, the corrected image is obtained by applying the integrated coefficients to the gray world algorithm. In experiments, the proposed method is able to reproduce corrected colors for both wholly and partially faded images, in contrast to previous methods. The proposed method also enhances the visibility of the input images using multi-scale processing.
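A minimal sketch of a multi-scale gray world correction in the spirit of the description above, assuming SciPy; the scales and weights are illustrative, not the paper's fitted coefficients.

```python
# Illustrative sketch (not the authors' algorithm): per scale, Gaussian-
# filtered channel means yield gray-world gains for the red and blue
# channels; the gains are then blended with assumed weights.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_gray_world(rgb, sigmas=(2, 8, 32), weights=(0.2, 0.3, 0.5)):
    img = rgb.astype(np.float64) + 1e-6
    gain_r = gain_b = 0.0
    for s, w in zip(sigmas, weights):
        means = [gaussian_filter(img[..., c], s).mean() for c in range(3)]
        gain_r += w * means[1] / means[0]   # scale R toward G (gray world)
        gain_b += w * means[1] / means[2]   # scale B toward G
    out = img.copy()
    out[..., 0] *= gain_r
    out[..., 2] *= gain_b
    return np.clip(out, 0, 255).astype(np.uint8)
```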
Journal of Imaging Science and Technology | 2008
Kyung-Woo Ko; Oh-Seol Kwon; Chang-Hwan Son; Yeong-Ho Ha
Journal of Imaging Science and Technology | 2005
Oh-Seol Kwon; Yang-Ho Cho; Yeong-Ho Ha; Yun-Tae Kim
Journal of Imaging Science and Technology | 2007
Oh-Seol Kwon; Cheol-Hee Lee; Kee-Hyon Park; Yeong-Ho Ha