Dong-Chan Cho
Hanyang University
Publication
Featured research published by Dong-Chan Cho.
IEEE Transactions on Consumer Electronics | 2012
Dong-Chan Cho; Wah-Seng Yap; Hee-Kyung Lee; Injae Lee; Whoi-Yul Kim
Eye gaze tracking systems have been widely researched as replacements for conventional computer interfaces such as the mouse and keyboard. In this paper, we propose a long-range binocular eye gaze tracking system that works from 1.5 m to 2.5 m while allowing head displacement in depth. The 3D position of the user's eye is obtained from two wide-angle cameras. A high-resolution image of the eye is captured using a pan-, tilt-, and focus-controlled narrow-angle camera. The angles for maneuvering the pan and tilt motors are calculated by the proposed calibration method based on a virtual camera model. The performance of the proposed calibration method is verified in terms of speed and convenience through experiments. The narrow-angle camera keeps tracking the eye while the user moves his head freely. The point-of-gaze (POG) of each eye on the screen is calculated using a 2D mapping-based gaze estimation technique and the pupil center corneal reflection (PCCR) vector. A PCCR vector modification method is applied to overcome the degradation in accuracy caused by displacements of the head in depth. The final POG is obtained as the average of the two POGs. Experimental results show that the proposed system works robustly for a large-screen TV from 1.5 m to 2.5 m distance with displacements of the head in depth (±20 cm), and the average angular error is 0.69°.
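A minimal sketch of the 2D mapping-based gaze estimation idea referenced in the abstract, assuming a second-order polynomial mapping from PCCR vectors to screen coordinates fitted at calibration time; the polynomial form and function names are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def fit_gaze_mapping(pccr_vectors, screen_points):
    """Fit a second-order polynomial mapping PCCR vectors -> screen points.

    pccr_vectors : (N, 2) pupil-center-to-corneal-reflection vectors
    screen_points: (N, 2) known calibration targets on the screen
    """
    vx, vy = pccr_vectors[:, 0], pccr_vectors[:, 1]
    # Design matrix of polynomial terms [1, vx, vy, vx*vy, vx^2, vy^2]
    A = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])
    # Least-squares coefficients, one column per screen coordinate
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs

def estimate_pog(pccr_vector, coeffs):
    """Map a single PCCR vector to a point of gaze (POG) on the screen."""
    vx, vy = pccr_vector
    a = np.array([1.0, vx, vy, vx * vy, vx**2, vy**2])
    return a @ coeffs

def binocular_pog(pog_left, pog_right):
    # The final POG is the average of the left-eye and right-eye estimates,
    # as described in the abstract.
    return (np.asarray(pog_left) + np.asarray(pog_right)) / 2.0
```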
International SoC Design Conference | 2013
Chang-Hoon Kum; Dong-Chan Cho; Moonsoo Ra; Whoi-Yul Kim
A lane detection system using around view monitoring (AVM) images is presented in this paper. To provide safe driving conditions, many lane detection approaches have been proposed. However, previous approaches cannot detect lanes stably in low-visibility conditions such as foggy or rainy days because they rely on a frontal camera. The proposed lane detection system uses the ego-vehicle's surrounding road information to overcome this problem. The proposed method can be split into two stages: generation of AVM images from four fisheye cameras, and lane detection using the AVM images. To generate the AVM images, we use four fisheye cameras mounted on the sides, front, and rear of the vehicle. Top-view images covering the area surrounding the vehicle are generated from the four fisheye images using the calibration of each camera and their relative poses. The lane detection procedure consists of detecting and grouping lane responses, fitting the lane responses with a linear model, and tracking lanes with a Kalman filter to smooth the estimates. Experimental results on full lanes and dashed lanes show that the proposed method achieves detection accuracies of 98.78% and 90.88%, respectively, with a processing speed of 1 ms per frame on a desktop computer.
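A minimal sketch of the tracking stage described in the abstract: lane responses are fitted with a linear model in the top-view image and smoothed with a Kalman filter. The state layout, motion model, and noise values here are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def fit_lane_line(points):
    """Least-squares fit x = offset + slope * y to detected lane pixels."""
    y, x = points[:, 1], points[:, 0]
    A = np.column_stack([np.ones_like(y), y])
    (offset, slope), *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.array([offset, slope])

class LaneKalman:
    """Random-walk Kalman filter over the lane parameters [offset, slope]."""

    def __init__(self, process_var=1e-3, meas_var=1e-1):
        self.x = None                       # state estimate
        self.P = np.eye(2)                  # state covariance
        self.Q = process_var * np.eye(2)    # process noise
        self.R = meas_var * np.eye(2)       # measurement noise

    def update(self, measurement):
        if self.x is None:                  # initialise on first measurement
            self.x = measurement.copy()
            return self.x
        # Predict (identity motion model: lanes change slowly between frames)
        P_pred = self.P + self.Q
        # Update with the new least-squares fit as the measurement
        K = P_pred @ np.linalg.inv(P_pred + self.R)
        self.x = self.x + K @ (measurement - self.x)
        self.P = (np.eye(2) - K) @ P_pred
        return self.x
```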
IEEE Transactions on Biomedical Engineering | 2013
Dong-Chan Cho; Whoi-Yul Kim
In vision-based remote gaze tracking systems, the most challenging problems are allowing natural movement of the user and increasing the working volume and distance of the system. Several eye gaze estimation methods considering the natural movement of a user have been proposed; however, their working volumes are narrow and their working distances are short. In this paper, we propose a novel 2-D mapping-based gaze estimation method that allows large user movement. Conventional 2-D mapping-based methods utilize a mapping function between calibration points on the screen and pupil center corneal reflection (PCCR) vectors obtained in the user calibration step. However, the PCCR vectors and their associated mapping function are only valid at or near the position where the user calibration is performed. The proposed movement mapping function, which compensates for the user's movement, estimates scale factors between two PCCR vector sets: one obtained at the user calibration position and another obtained at the new user position. The proposed system targets longer-range gaze tracking, operating from 1.4 to 3 m. A narrow-view camera mounted on a pan and tilt unit is used to capture high-resolution eye images, providing a wide and long working volume of about 100 cm × 40 cm × 100 cm. The experimental results show that the proposed method successfully compensates for the poor performance caused by the user's large movement. The average angular error was 0.8°, and the angular error increased by only 0.07° while the user moved about 81 cm.
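A minimal sketch of the idea behind the movement mapping function described in the abstract: estimate per-axis scale factors between the PCCR vectors observed at the calibration position and those observed at the new user position, then rescale incoming PCCR vectors before applying the calibration-time 2-D mapping function. The least-squares form used here is an illustrative assumption.

```python
import numpy as np

def estimate_scale_factors(pccr_calib, pccr_current):
    """Per-axis scale factors relating two PCCR vector sets.

    pccr_calib  : (N, 2) PCCR vectors collected at the calibration position
    pccr_current: (N, 2) PCCR vectors for the same targets at the new position
    """
    # Least-squares scale per axis: s = sum(v_cur * v_cal) / sum(v_cur^2)
    num = np.sum(pccr_current * pccr_calib, axis=0)
    den = np.sum(pccr_current ** 2, axis=0)
    return num / den

def compensate_pccr(pccr_vector, scale):
    """Rescale a PCCR vector so the calibration-time mapping stays valid."""
    return np.asarray(pccr_vector) * scale
```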
Signal Processing-image Communication | 2013
Hee-Kyung Lee; Seong Yong Lim; Injae Lee; Jihun Cha; Dong-Chan Cho; Sunyoung Cho
This paper presents a gaze tracking technology which provides a convenient human-centric interface for multimedia consumption without any wearable device. It enables a user to interact with various multimedia on a large display at a distance by tracking user movement and acquiring high-resolution eye images. This paper also presents a gesture recognition technology which is helpful for interacting with scene descriptions in terms of controlling and rendering scene objects. It is based on a hidden Markov model (HMM) and a conditional random field (CRF) using a commercial depth sensor. Finally, this paper shows how these new sensors can be combined with MPEG standards in order to achieve interoperability among interactive applications, new user interaction devices, and users.
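A minimal sketch of one common HMM-based gesture classification scheme of the kind the abstract alludes to: one discrete-output HMM per gesture class, with the class of highest log-likelihood (computed by the scaled forward algorithm) chosen for an observed sequence of quantized depth features. All model parameters here are illustrative assumptions, not the authors' models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM.

    obs: sequence of observation symbol indices
    pi : (S,) initial state probabilities
    A  : (S, S) state transition matrix
    B  : (S, V) emission matrix over V observation symbols
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                 # scale to avoid numerical underflow
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

def classify_gesture(obs, models):
    """models: dict mapping gesture name -> (pi, A, B)."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```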
International Conference on Computer Vision | 2009
Yeul-Min Baek; Joong-Geun Kim; Dong-Chan Cho; Jin-Aeon Lee; Whoi-Yul Kim
Most image processing algorithms assume that an image contains additive white Gaussian noise (AWGN). However, since real noise is not AWGN, such algorithms are not effective on real images acquired by digital camera image sensors. In this paper, we present an integrated noise model for image sensors that handles shot noise, dark-current noise, and fixed-pattern noise together. In addition, unlike most noise modeling methods, the parameters of the model do not need to be re-configured for different input images once it has been built. Thus, the proposed noise model is well suited to various imaging devices. We introduce two applications of our noise model: edge detection and noise reduction in image sensors. The experimental results show how effective our noise model is for both applications.
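A minimal sketch of an integrated sensor-noise simulator combining the three components named in the abstract: signal-dependent shot noise, signal-independent dark-current noise, and per-pixel fixed-pattern noise. The functional form and parameter values are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def simulate_sensor_noise(clean, shot_gain=0.01, dark_std=2.0,
                          fpn_std=0.005, rng=None):
    """Apply a simple shot + dark-current + fixed-pattern noise model.

    clean    : clean image as a float array (e.g. 0..255)
    shot_gain: scale of the signal-dependent (shot) noise variance
    dark_std : standard deviation of the dark-current noise
    fpn_std  : standard deviation of the per-pixel multiplicative gain (FPN)
    """
    rng = np.random.default_rng() if rng is None else rng
    clean = clean.astype(np.float64)

    # Fixed-pattern noise: a static per-pixel gain that does not change
    # from frame to frame (generated once per sensor in practice).
    gain = 1.0 + rng.normal(0.0, fpn_std, clean.shape)

    # Shot noise: variance proportional to the signal level.
    shot = rng.normal(0.0, 1.0, clean.shape) * np.sqrt(shot_gain * clean)

    # Dark-current noise: signal-independent Gaussian component.
    dark = rng.normal(0.0, dark_std, clean.shape)

    return np.clip(clean * gain + shot + dark, 0.0, 255.0)
```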
Journal of Broadcast Engineering | 2010
Dong-Chan Cho; Hyung-Sub Kang; Whoi-Yul Kim
A unified image enhancement method is proposed for high-resolution images based on color constancy and histogram equalization using edge regions. To speed up the method, a smaller image is used when the parameters of the color constancy and histogram equalization are determined. In the color constancy process, an nth-order derivative of Gaussian is applied to the x- and y-axes separately in order to estimate the color of the illumination rapidly. In the histogram equalization process, the histogram obtained from near-edge regions is used for the equalization. In the experiments, high-resolution images taken by a digital camcorder are used to verify the performance of the proposed method.
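A minimal sketch of derivative-of-Gaussian illuminant estimation in the spirit of the grey-edge hypothesis, which matches the abstract's description of applying a Gaussian derivative along the x- and y-axes separately; the derivative order, sigma, and normalization are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_illuminant(image, sigma=2.0):
    """image: (H, W, 3) float RGB image. Returns a unit-norm illuminant estimate."""
    illum = np.zeros(3)
    for c in range(3):
        ch = image[:, :, c]
        # First derivative of Gaussian along each axis, computed separately
        dx = gaussian_filter1d(ch, sigma, axis=1, order=1)
        dy = gaussian_filter1d(ch, sigma, axis=0, order=1)
        illum[c] = np.mean(np.sqrt(dx**2 + dy**2))
    return illum / np.linalg.norm(illum)

def correct_color(image, illuminant):
    """Von Kries style correction: scale each channel by its illuminant share."""
    corrected = image / (illuminant * np.sqrt(3) + 1e-8)
    return np.clip(corrected, 0.0, 1.0)
```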
Journal of Broadcast Engineering | 2010
Dong-Chan Cho; Hyung-Sub Kang; Whoi-Yul Kim
As high-definition video is broadly used in various systems such as broadcast systems and digital camcorders, a proper method to improve the quality of high-definition video is needed. In this paper, we propose an efficient method to improve the color and contrast of high-definition video. In order to apply the image enhancement method to high-definition video, a scaled-down version of the video is used, and the parameters for the image enhancement method are computed from this small-size video. To enhance the color of the video, we apply a color constancy method: first, we separate the video into scenes using a cut detection method, and then we apply color constancy to each scene with the same parameter. To improve the contrast of the video, we blend the original image with its histogram-equalized image, where the blending weight is calculated from the sorted histogram bins. Finally, the performance of the proposed method is demonstrated in the experiment section.
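A minimal sketch of the contrast-enhancement idea in the abstract: blend the original frame with its histogram-equalized version, with a blending weight derived from the sorted histogram bins (here, the fraction of mass concentrated in the largest bins). The exact weighting rule is an illustrative assumption.

```python
import numpy as np

def equalize(gray):
    """Plain histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
    return (cdf[gray] * 255.0).astype(np.uint8)

def blend_weight_from_histogram(gray, top_k=16):
    """Weight grows when the histogram is dominated by a few large bins,
    i.e. when the frame has low contrast and benefits most from equalization."""
    hist = np.sort(np.bincount(gray.ravel(), minlength=256))[::-1]
    return hist[:top_k].sum() / hist.sum()

def enhance_contrast(gray):
    w = blend_weight_from_histogram(gray)
    eq = equalize(gray).astype(np.float64)
    out = (1.0 - w) * gray.astype(np.float64) + w * eq
    return np.clip(out, 0, 255).astype(np.uint8)
```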
International Conference on Hybrid Information Technology | 2008
Yeul-Min Baek; Dong-Chan Cho; Jin-Aeon Lee; Whoi-Yul Kim
Multimedia 2013 | 2013
Hoon Jo; Dong-Chan Cho; Hee-Kyung Lee; Jihun Cha; Whoi-Yul Kim
Proceedings of the Korea Contents Association General Conference | 2011
Wah-Seng Yap; Dong-Chan Cho; Whoi-Yul Kim