Publication


Featured research published by Chang-Yeong Kim.


Human Factors in Computing Systems | 2010

3D user interface combining gaze and hand gestures for large-scale display

ByungIn Yoo; Jae-Joon Han; Changkyu Choi; Kwonju Yi; Sungjoo Suh; Du-sik Park; Chang-Yeong Kim

In this paper, we present a novel attentive and immersive user interface based on gaze and hand gestures for interactive large-scale displays. The combination of gaze and hand gestures provides more engaging and immersive ways to manipulate 3D information.
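A minimal sketch of one way such a combination can work: gaze supplies coarse pointing to select a target, and a hand gesture (here a pinch) confirms and drives the manipulation. All names below are hypothetical illustrations; the paper does not publish its interface as code.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # gaze point on the display, normalized to [0, 1]
    y: float

@dataclass
class HandState:
    pinching: bool  # True while thumb and index finger touch
    dx: float       # hand displacement since the last frame
    dy: float
    dz: float

def update(scene, gaze: GazeSample, hand: HandState, grabbed=None):
    """One frame of gaze-plus-gesture interaction (sketch)."""
    if grabbed is None:
        # Gaze performs coarse selection: pick whatever is looked at.
        target = scene.pick(gaze.x, gaze.y)  # hypothetical picking call
        if target is not None and hand.pinching:
            grabbed = target                 # pinch confirms the selection
    elif hand.pinching:
        # While pinched, relative hand motion manipulates the 3D object.
        grabbed.translate(hand.dx, hand.dy, hand.dz)
    else:
        grabbed = None                       # release ends the manipulation
    return grabbed
```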


IEEE Electron Device Letters | 2010

A Three-Dimensional Time-of-Flight CMOS Image Sensor With Pinned-Photodiode Pixel Structure

Seong-Jin Kim; Sang-Wook Han; Byongmin Kang; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

A pixel architecture that provides not only normal 2-D images but also depth information by using a conventional pinned photodiode is presented. This architecture allows the sensor to generate a real-time 3-D image of an arbitrary object. In depth-image mode, pixel operation is based on the time-of-flight principle, detecting the time delay between the emitted and reflected infrared light pulses. The pixel contains five transistors. Compared to the conventional 4-T CMOS image sensor, the new pixel includes an extra optimized transfer gate for high-speed charge transfer. A fill factor of more than 60% is achieved with a 12 × 12 μm² pixel to increase sensitivity. A fabricated prototype sensor successfully captures 64 × 16 depth images between 1 and 4 m at a 5-MHz modulation frequency. The depth inaccuracy is measured to be under 2% at 1 m and 4% at 4 m, which is verified by noise analysis.
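As a rough illustration of the time-of-flight principle the pixel exploits, a pulsed ToF sensor integrates reflected light into two charge bins and recovers depth from their ratio. The sketch below is the generic two-bin textbook formulation, not this sensor's exact readout chain.

```python
# Generic pulsed time-of-flight depth recovery (illustrative only).
C = 299_792_458.0  # speed of light, m/s

def tof_depth(q1: float, q2: float, pulse_width_s: float) -> float:
    """Depth from the charge integrated during (q1) and after (q2) the
    emitted IR pulse. The fraction q2 / (q1 + q2) encodes the round-trip
    delay as a fraction of the pulse width."""
    delay = pulse_width_s * q2 / (q1 + q2)
    return C * delay / 2.0  # halve: light travels to the object and back

# Example: a 100 ns pulse with equal charge in both bins -> ~7.5 m.
print(tof_depth(1.0, 1.0, 100e-9))
```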


International Conference on Image Processing | 2010

Range unfolding for Time-of-Flight depth cameras

Ouk Choi; Hwasup Lim; Byongmin Kang; Yong Sun Kim; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

Time-of-Flight depth cameras provide a direct way to acquire range images, using the phase delay of the incoming reflected signal with respect to the emitted signal. These cameras, however, suffer from a challenging problem called range folding, which occurs because the phase delay is measured modulo 2π: measured ranges are wrapped modulo the maximum unambiguous range. To the best of our knowledge, ours is the first approach to estimate the number of wraps (mods) at each pixel from only a single range image. The estimation is recast as an optimization problem in the Markov random field framework, where the number of mods is treated as a label. The actual range is then recovered using the optimal number of mods at each pixel, a process we call range unfolding. As demonstrated in experiments with various range images of real scenes, the proposed method accurately determines the number of mods. As a result, the maximum range is extended in practice to at least twice that implied by the modulation frequency.
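To make the label structure concrete, here is a toy unfolding sketch: the label n[p] is the number of wraps at pixel p, the unfolded range is r[p] + n[p] * r_max, and a pairwise smoothness energy is minimized with iterated conditional modes. This simplified energy and optimizer are stand-ins for illustration, not the paper's actual MRF formulation.

```python
import numpy as np

def unfold(r: np.ndarray, r_max: float, n_labels: int = 3, iters: int = 10):
    """Toy range unfolding by ICM over per-pixel wrap labels."""
    n = np.zeros_like(r, dtype=int)  # start with no wraps anywhere
    for _ in range(iters):
        for i in range(r.shape[0]):
            for j in range(r.shape[1]):
                best, best_cost = n[i, j], np.inf
                for k in range(n_labels):
                    cand = r[i, j] + k * r_max  # candidate unfolded range
                    cost = 0.0
                    # Smoothness: unfolded range should vary slowly
                    # between 4-connected neighbours.
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        a, b = i + di, j + dj
                        if 0 <= a < r.shape[0] and 0 <= b < r.shape[1]:
                            cost += abs(cand - (r[a, b] + n[a, b] * r_max))
                    if cost < best_cost:
                        best, best_cost = k, cost
                n[i, j] = best
    return r + n * r_max
```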


Displays | 2008

Luminance contrast and chromaticity contrast preference on the colour display for young and elderly users

Gábor Kutas; Youngshin Kwak; Peter Bodrogi; Du-sik Park; Seong-deok Lee; Heui-keun Choh; Chang-Yeong Kim

The human visual system changes with aging, and one of the most important changes is the decrease of spatial contrast sensitivity. We investigated this change for both luminance contrast and chromaticity contrast, and for both threshold contrast and preferred contrast (the contrast users prefer when carrying out a visual recognition task), in a series of psychophysical experiments with achromatic and chromatic sinusoidal gratings of different spatial frequencies, hues, and luminance levels, and with two observer groups: young and elderly. We investigated the spatial frequency range of 0.1–10 cycles per degree. Our results indicate that, beyond the expected decline in luminance contrast sensitivity among elderly observers, the difference between the preferred luminance contrast of the elderly and that of the young is even more significant than the threshold difference. The small preference differences between the age groups for chromaticity contrast, compared to luminance contrast, suggest that while both chromatic and achromatic contrast sensitivity drop with increasing age, preferred contrast stays more stable for chromaticity contrast than for luminance contrast.
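For readers unfamiliar with such stimuli, the snippet below generates an achromatic sinusoidal grating at a given spatial frequency in cycles per degree (cpd), the kind of pattern used in contrast sensitivity experiments. The parameter values are hypothetical, not the paper's exact stimulus settings.

```python
import numpy as np

def grating(width_px, px_per_degree, cpd, contrast, mean_lum=0.5):
    """Horizontal sinusoidal grating with given Michelson contrast."""
    x_deg = np.arange(width_px) / px_per_degree  # position in visual degrees
    wave = np.sin(2 * np.pi * cpd * x_deg)       # one row of the grating
    row = mean_lum * (1 + contrast * wave)       # modulate around mean luminance
    return np.tile(row, (width_px, 1))           # replicate rows into a 2D image

stim = grating(width_px=512, px_per_degree=40, cpd=2.0, contrast=0.2)
```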


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 1998

Illuminant direction and shape of a bump

Chang-Yeong Kim; A. P. Petrov; Heui-keun Choh; Yang-Seok Seo; In So Kweon

An algorithm for recovering the illuminant direction from the image of a smooth Lambertian surface illuminated by a distant point light source is presented. The algorithm is based on analysis of intensity distributions around structural elements of the image, such as image regions corresponding to bumps. After a bump is recognized in the image and the illuminant direction is estimated, the image data are integrated to recover the shape of the surface patch. The shape-integration algorithm, based on a novel theoretical approach, computes the normal vector field in the bump region without any explicitly given initial curve or initial data about the bump region. The theoretical considerations are illustrated with results on simulated and real images.
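The shading model underlying this kind of recovery is I = albedo * max(0, n · L). A common generic estimator, sketched below, solves for the light direction L in the least-squares sense over lit pixels with known normals; it illustrates the Lambertian model the paper builds on, not the authors' bump-based algorithm itself.

```python
import numpy as np

def estimate_light(normals: np.ndarray, intensities: np.ndarray):
    """normals: (N, 3) unit surface normals; intensities: (N,) observations.
    Returns the unit light direction and its magnitude (albedo * strength)."""
    lit = intensities > 0                        # ignore shadowed samples
    L, *_ = np.linalg.lstsq(normals[lit], intensities[lit], rcond=None)
    scale = np.linalg.norm(L)
    return L / scale, scale
```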


Color Research & Application | 1998

Perceived illumination measured

A. P. Petrov; Chang-Yeong Kim; In So Kweon; Yang-Seok Seo

The authors thank Du Sik Park for developing the software used in the experiments; they are indebted to all the colleagues from the SP Lab of SAIT who participated in the experiments; and they are grateful to J. McCann, O. Orlov, V. Maximov, and A. Gilchrist for inspiring and helpful discussions. Significant improvement of the first version of the article was made by T. Schapiro and D. Petrov in Sleepy Hollow, MI, for which the authors are very thankful.


Proceedings of SPIE | 2012

Parametric model-based noise reduction for ToF depth sensors

Yong Sun Kim; Byongmin Kang; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

This paper presents a novel Time-of-Flight (ToF) depth denoising algorithm based on parametric noise modeling. A ToF depth image contains spatially varying noise whose strength is related to the IR intensity value at each pixel. By treating ToF depth noise as additive white Gaussian noise, its level can be modeled as a power function of IR intensity. Meanwhile, the nonlocal means filter is widely used as an edge-preserving method for removing additive Gaussian noise. To remove spatially varying depth noise, we propose an adaptive nonlocal means filter: according to the estimated noise, the search window and weighting coefficient are adapted at each pixel, so that pixels with large noise variance are filtered strongly and pixels with small noise variance are filtered weakly. Experimental results demonstrate that the proposed algorithm provides good denoising performance while preserving details and edges, compared to typical nonlocal means filtering.
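The sketch below shows the adaptation idea for a single pixel: the noise level is predicted from IR intensity with a power law, sigma(I) = a * I**b, and the nonlocal means smoothing strength h follows sigma. The power-law coefficients and the adaptation rule are placeholders for illustration, not the paper's fitted model.

```python
import numpy as np

def adaptive_nlm_pixel(depth, ir, i, j, a=1.0, b=-0.5, patch=3, search=7):
    """Denoise one pixel of `depth` with noise-adaptive NLM weights.
    Assumes (i, j) is far enough from the border and ir > 0 everywhere."""
    sigma = a * ir[i, j] ** b          # predicted noise level at (i, j)
    h = max(1e-6, 0.4 * sigma)         # larger noise -> heavier smoothing
    r, s = patch // 2, search // 2
    ref = depth[i - r:i + r + 1, j - r:j + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            a_, b_ = i + di, j + dj
            cand = depth[a_ - r:a_ + r + 1, b_ - r:b_ + r + 1]
            d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
            w = np.exp(-d2 / (h * h))        # noise-adaptive NLM weight
            num += w * depth[a_, b_]
            den += w
    return num / den
```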


International Conference on Image Processing | 2011

Separable bilateral nonlocal means

Yong Sun Kim; Hwasup Lim; Ouk Choi; Keechang Lee; James D. K. Kim; Chang-Yeong Kim

Nonlocal means filtering is an edge-preserving denoising method whose filter weights are determined by Gaussian weighted patch similarities. The nonlocal means filter shows superior performance in removing additive Gaussian noise at the expense of high computational complexity. In this paper, we propose an efficient and effective denoising method by introducing a separable implementation of the nonlocal means filter and adopting a bilateral kernel for computing patch similarities. Experimental results demonstrate that the proposed method provides comparable performance to the original nonlocal means, with lower computational complexity.
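The separable idea can be sketched as follows: instead of a full 2D search, a 1D nonlocal means pass runs along rows and then along columns, cutting the per-pixel cost from O(S²) to O(2S) for a search width S, while a spatial Gaussian factor gives the weights their bilateral flavour. Parameter values here are illustrative, not the paper's.

```python
import numpy as np

def nlm_1d(signal, patch=3, search=7, h=0.1, sigma_s=3.0):
    """1D nonlocal means with a bilateral (patch + spatial) kernel."""
    r, s = patch // 2, search // 2
    pad = np.pad(signal, s + r, mode="reflect")
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        c = i + s + r                            # centre index in padded signal
        ref = pad[c - r:c + r + 1]
        num = den = 0.0
        for d in range(-s, s + 1):
            cand = pad[c + d - r:c + d + r + 1]
            d2 = np.mean((ref - cand) ** 2)      # patch similarity term
            w = np.exp(-d2 / (h * h) - d * d / (2 * sigma_s ** 2))
            num += w * pad[c + d]
            den += w
        out[i] = num / den
    return out

def separable_nlm(img, **kw):
    rows = np.apply_along_axis(nlm_1d, 1, img, **kw)   # horizontal pass
    return np.apply_along_axis(nlm_1d, 0, rows, **kw)  # then vertical pass
```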


Journal of the Society for Information Display | 2007

Image-color-quality modeling under various surround conditions for a 2-in. mobile transmissive LCD

Youn-Jin Kim; M. Ronnier Luo; Won-Hee Choe; Seong-deok Lee; Seung Sin Lee; Youngshin Kwak; Du-sik Park; Chang-Yeong Kim

This study aims to develop an image-color-quality (ICQ) model for a 2-in. mobile transmissive liquid-crystal display (LCD). A hypothetical framework for ICQ judgment was constructed to visually assess ICQ based on the cognitive processes of the human visual system (HVS), and an illumination-adaptive ICQ model applicable to various surround conditions was then developed. The memory color reproduction ratio (MCRR) of a locally adapted region of interest in a complex image reproduced on a mobile display was first computed. The colorfulness index and luminance contrast for all of the pixels in the image were then calculated by a global adaptation process. Finally, an ICQ model incorporating all three attributes was developed under dark conditions using a set of assessed psychophysical data. The model performed more accurately than the mean accuracy across all observers. It was also visually tested under three outdoor conditions, namely overcast, bright, and very bright, at illuminance levels of 7000, 35,000, and 70,000 lx, respectively. The effect of outdoor illumination could be quantified as an exponential decay function, and the ICQ model could be extended to cover a wide variety of outdoor illumination conditions.
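A minimal sketch of the model's shape: the three attributes named above are combined and then attenuated by an exponential decay in ambient illuminance. The weights and decay constant below are invented for illustration; the paper's fitted coefficients are not reproduced here.

```python
import math

def icq(mcrr, colorfulness, contrast, illuminance_lx,
        w=(0.4, 0.3, 0.3), k=2e-5):
    """Hypothetical illumination-adaptive ICQ prediction (sketch)."""
    dark_icq = w[0] * mcrr + w[1] * colorfulness + w[2] * contrast
    return dark_icq * math.exp(-k * illuminance_lx)  # outdoor attenuation

# Example: the same image scores lower at 70,000 lx than at 7,000 lx.
print(icq(0.9, 0.8, 0.7, 7_000), icq(0.9, 0.8, 0.7, 70_000))
```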


Signal Processing: Image Communication | 2013

Connecting users to virtual worlds within MPEG-V standardization

Seungju Han; Jae-Joon Han; James D. K. Kim; Chang-Yeong Kim

Virtual world such as Second life and 3D internet/broadcasting services have been increasingly popular. A life-scale virtual world presentation and the intuitive interaction between the users and the virtual worlds would provide more natural and immersive experience for users. The emergence of novel interaction technologies, such as facial-expression/body-motion tracking and remote interaction for virtual object manipulation, could be used to provide a strong connection between users in the real world and avatars in the virtual world. For the wide acceptance and the use of the virtual world, various types of novel interaction devices should have a unified interaction format between the real world and the virtual world. Thus, MPEG-V Media Context and Control (ISO/IEC 23005) standardizes such connecting information. The paper provides an overview and its usage example of MPEG-V from the real world to the virtual world (R2V) on interfaces for controlling avatars and virtual objects in the virtual world by the real world devices. In particular, we investigate how the MPEG-V framework can be applied for the facial animation and hand-based 3D manipulation using intelligent camera. In addition, in order to intuitively manipulate objects in a 3D virtual environment, we present two interaction techniques using motion sensors such as a two-handed spatial 3D interaction approach and a gesture-based interaction approach.
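A deliberately simplified picture of the R2V flow described above: sensed real-world data is normalized into a device-independent message that a virtual world consumes to drive an avatar. The classes and field names are invented for illustration and are not the MPEG-V (ISO/IEC 23005) schema, which is defined in XML.

```python
from dataclasses import dataclass

@dataclass
class SensedHandPose:
    device_id: str
    position: tuple       # (x, y, z) in the sensor's coordinate frame
    grab_strength: float  # 0.0 (open hand) .. 1.0 (closed fist)

@dataclass
class AvatarControlCommand:
    avatar_id: str
    hand_position: tuple  # position passed on to the virtual world
    grabbing: bool

def r2v_adapt(sensed: SensedHandPose, avatar_id: str) -> AvatarControlCommand:
    """Map raw sensor output to a device-independent avatar command."""
    return AvatarControlCommand(
        avatar_id=avatar_id,
        hand_position=sensed.position,
        grabbing=sensed.grab_strength > 0.5,  # threshold chosen arbitrarily
    )
```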
