
Publication


Featured research published by Jeongho Shin.


Real-time Imaging | 2005

Optical flow-based real-time object tracking using non-prior training active feature model

Jeongho Shin; Sangjin Kim; Sangkyu Kang; Seong-Won Lee; Joon Ki Paik; Besma R. Abidi; Mongi A. Abidi

This paper presents a feature-based object tracking algorithm using optical flow under the non-prior training (NPT) active feature model (AFM) framework. The proposed tracking procedure can be divided into three steps: (i) localization of an object of interest, (ii) prediction and correction of the object's position using spatio-temporal information, and (iii) restoration of occlusion using NPT-AFM. The proposed algorithm can track both rigid and deformable objects, and is robust against sudden object motion because each feature point and its corresponding motion direction are tracked at the same time. Tracking performance does not degrade even with a complicated background because feature points inside an object are completely separated from the background. Finally, the AFM enables stable tracking of occluded objects with up to 60% occlusion. NPT-AFM, one of the major contributions of this paper, removes the off-line preprocessing step for generating an a priori training set. The training set used for model fitting can be updated at each frame to make object features more robust under occlusion. The proposed AFM can track deformable, partially occluded objects using a greatly reduced number of feature points rather than the entire shapes used in existing shape-based methods. The on-line updating of the training set and the reduced number of feature points enable a real-time, robust tracking system. Experiments were performed using several in-house video clips from a static camera, including objects such as a robot moving on a floor and people walking both indoors and outdoors. To show the performance of the proposed tracking algorithm, some experiments were performed in noisy, low-contrast environments. For more objective comparison, the PETS 2001 and PETS 2002 datasets were also used.
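The optical-flow step can be illustrated with a minimal Lucas-Kanade sketch. This is not the paper's NPT-AFM implementation; the windowed least-squares solver and the synthetic test image below are illustrative assumptions.

```python
import numpy as np

def lucas_kanade(prev, cur, y, x, win=7):
    """Estimate optical flow (vy, vx) at point (y, x) from spatial and
    temporal gradients in a window, solved by least squares."""
    h = win // 2
    p1 = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p2 = cur[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p1)          # spatial gradients of the first frame
    It = p2 - p1                      # temporal gradient
    A = np.stack([Iy.ravel(), Ix.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                          # flow in pixels per frame

# toy check: a Gaussian blob shifted down by one pixel
yy, xx = np.mgrid[0:32, 0:32]
img = np.exp(-((yy - 16.0) ** 2 + (xx - 16.0) ** 2) / 50.0)
moved = np.roll(img, 1, axis=0)
vy, vx = lucas_kanade(img, moved, 14, 14)
```

In the paper's framework, such per-point flow estimates drive the prediction step, while the AFM handles occlusion; this sketch covers only the flow computation itself.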


IEEE Transactions on Consumer Electronics | 2005

Noise-adaptive spatio-temporal filter for real-time noise removal in low light level images

Seong-Won Lee; Vivek Maik; Jihoon Jang; Jeongho Shin; Joonki Paik

Noise reduction is gradually becoming one of the most important features in consumer cameras. The video signal is easily corrupted by noise during the acquisition process, especially in low-light environments. Many state-of-the-art noise reduction filters perform well for high-contrast images; for low-light images, however, their performance degrades seriously. In this paper, we propose a noise-adaptive spatio-temporal (NAST) filter for removing noise in low light level images. The proposed algorithm consists of a statistical domain temporal filter (SDTF) for moving areas and a spatial hybrid filter (SHF) for stationary areas. By minimizing the resources required for implementation, we present a high-quality, low-cost noise reduction filter for low-light images. Since the proposed algorithm is designed for real-time implementation, it can be used as a pre-filter for a DCT-based encoder to enhance the coding efficiency of many commercial applications such as low-cost camcorders, digital cameras, CCTV, and surveillance video systems.
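The general idea of switching per pixel between temporal and spatial filtering can be sketched as below. This is a generic motion-adaptive toy, not the paper's SDTF/SHF pair: the thresholded frame difference, the 3x3 median, and the pairing of filters to regions are all illustrative assumptions.

```python
import numpy as np

def motion_adaptive_filter(prev, cur, motion_thresh=10.0):
    """Per-pixel denoising: temporal averaging where the frame difference is
    small (stationary), 3x3 spatial median where it is large (moving)."""
    prev = prev.astype(float)
    cur = cur.astype(float)
    moving = np.abs(cur - prev) > motion_thresh
    temporal = 0.5 * (cur + prev)              # stationary: average two frames
    pad = np.pad(cur, 1, mode='edge')          # moving: spatial 3x3 median
    H, W = cur.shape
    windows = np.stack([pad[dy:dy + H, dx:dx + W]
                        for dy in range(3) for dx in range(3)])
    spatial = np.median(windows, axis=0)
    return np.where(moving, spatial, temporal)

# toy check: two noisy observations of a flat 100-gray patch
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
f0 = clean + rng.normal(0, 2, clean.shape)
f1 = clean + rng.normal(0, 2, clean.shape)
out = motion_adaptive_filter(f0, f1)
```

On the static patch, nearly every pixel takes the temporal branch, so the residual noise variance roughly halves relative to a single frame.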


International Conference on Image Analysis and Recognition | 2005

Video stabilization using Kalman filter and phase correlation matching

Ohyun Kwon; Jeongho Shin; Joon Ki Paik

A robust digital image stabilization algorithm is proposed using Kalman filter-based global motion prediction and phase correlation-based motion correction. Global motion is estimated by adaptively averaging multiple local motions obtained by phase correlation: the distribution of the phase correlation determines each local motion vector, and the global motion is obtained by suitably averaging the local motions. By accumulating the global motion at each frame, we obtain the motion vector that stabilizes the corresponding frame. The proposed algorithm is robust to camera vibration and unwanted movement regardless of object motion. Experimental results show that the proposed digital image stabilization algorithm can efficiently remove camera jitter and provide continuously stabilized video.
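The motion-estimation core can be sketched with FFT-based phase correlation: the peak of the inverse-transformed, normalized cross-power spectrum gives the translational shift between two frames. This is a minimal sketch of the matching step only; the paper's adaptive averaging of local motions and the Kalman prediction are not reproduced.

```python
import numpy as np

def phase_correlation(ref, cur):
    """Return (dy, dx) such that np.roll(cur, (dy, dx), axis=(0, 1))
    aligns cur with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around to negative offsets
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# toy check: recover a known integer jitter
rng = np.random.default_rng(1)
frame = rng.standard_normal((64, 64))
jittered = np.roll(frame, (5, -3), axis=(0, 1))
shift = phase_correlation(frame, jittered)
```

For a pure integer translation the correlation surface is an exact delta, so the recovered shift is the exact correction to apply to the jittered frame.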


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Regularized Restoration Using Image Fusion for Digital Auto-Focusing

Vivek Maik; Dohee Cho; Jeongho Shin; Joon Ki Paik

Fusion-based image restoration is an effective way to remove multiple out-of-focus blurs in images. Although image restoration and image fusion have been successfully investigated and developed over the years, little effort has been made to combine them. In this paper, we present a method that integrates the two approaches so that they benefit from each other, obtaining significantly improved performance. Based on the proposed fusion approach, we present a novel digital auto-focusing algorithm that restores an image containing multiple, differently out-of-focus objects. To this end, an out-of-focus image is first restored using directionally regularized iterative restoration with multiple regularization parameters. By assembling multiple restored regions from consecutive levels of iteration, a salient focus measure is formed as a new query using the sum-modified-Laplacian (SML). An auto-focusing error metric (AFEM) serves as the termination criterion for the iterative restoration. A novel soft decision fusion and blending (SDFB) algorithm combines images restored with different point-spread functions (PSFs) and enables smooth transitions across region boundaries, creating the final restored image using a pseudo activity measure. Experimental results show that the proposed auto-focusing algorithm provides restored images of sufficiently high quality for devices such as digital cameras and camcorders.
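The blending idea behind SDFB can be illustrated with a per-pixel softmax over focus measures, which yields smooth transitions between differently restored candidates. This is an illustrative stand-in only: the paper's pseudo activity measure, AFEM termination, and PSF-specific restoration are not reproduced, and `beta` is an assumed sharpness parameter.

```python
import numpy as np

def soft_decision_blend(candidates, focus_maps, beta=10.0):
    """Blend candidate images with per-pixel softmax weights derived from
    their focus measures; larger beta approaches hard selection."""
    F = np.stack(focus_maps)                    # (n, H, W)
    w = np.exp(beta * (F - F.max(axis=0)))      # numerically stable softmax
    w /= w.sum(axis=0)
    return (w * np.stack(candidates)).sum(axis=0)

# toy check: candidate A is "in focus" on the left, B on the right
A = np.zeros((8, 8))
B = np.ones((8, 8))
fa = np.where(np.arange(8) < 4, 1.0, 0.0) * np.ones((8, 8))
fb = 1.0 - fa
fused = soft_decision_blend([A, B], [fa, fb])
```

Near the boundary between focus regions the weights transition smoothly rather than switching abruptly, which is the motivation for soft rather than hard selection.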


Real-time Imaging | 2003

Real-time iterative framework of regularized image restoration and its application to video enhancement

Sungjin Kim; Jeongho Shin; Joon Ki Paik

A novel framework for real-time video enhancement is proposed. The framework is based on the regularized iterative image restoration algorithm, which iteratively removes degradation effects under a priori constraints. Although regularized iterative image restoration is a proven technique for restoring degraded images, its application has been limited to still images or off-line video enhancement because of its iterative structure. To enable this iterative restoration algorithm to enhance video quality in real time, each frame of the video is treated as the constant input and the processed previous frame as the previous iterative solution. This modification is valid only when the input of the iteration, that is, each frame, remains unchanged throughout the iteration procedure. Because consecutive frames of a general video sequence differ from one another, each frame is segmented into two regions: still background and moving objects. These two regions are processed differently, using a segmentation-based spatially adaptive restoration algorithm and a background generation algorithm. Experimental results show that the proposed real-time restoration algorithm enhances the input video much better than simple filtering techniques. The proposed framework achieves real-time video enhancement at the cost of image quality only in the moving-object areas of dynamic shots, to which the human visual system is relatively insensitive.
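One iteration of the underlying regularized restoration can be sketched as a gradient step on a data-fidelity term plus a smoothness prior, warm-started from the previous frame's solution as the real-time reformulation suggests. This is a minimal sketch: the blur operator, step size `beta`, and weight `lam` are illustrative assumptions, and the segmentation-based spatial adaptivity is omitted.

```python
import numpy as np

def restore_step(x, y, blur, lam=0.05, beta=0.2):
    """One gradient-descent step on ||y - blur(x)||^2 + lam * ||grad x||^2.
    In the real-time framework, x is the restored previous frame.
    Assumes a symmetric blur so blur() can stand in for its adjoint."""
    residual = y - blur(x)
    # the gradient of the smoothness prior is the Laplacian of x
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
           np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
    return x + beta * (blur(residual) + lam * lap)

# toy check with an identity "blur": iterates converge toward the observation
rng = np.random.default_rng(2)
y = np.full((16, 16), 0.5)
x = y + rng.normal(0, 0.3, y.shape)
err0 = np.abs(x - y).mean()
for _ in range(20):
    x = restore_step(x, y, blur=lambda z: z)
err1 = np.abs(x - y).mean()
```

Treating the restored previous frame as the current iterate is what spreads the iteration cost across frames and makes per-frame real-time operation possible.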


Visual Communications and Image Processing | 2000

Adaptive regularized image interpolation using data fusion and steerable constraints

Jeongho Shin; Joon Ki Paik; Jeffery R. Price; Mongi A. Abidi

This paper presents an adaptive regularized image interpolation algorithm for blurred, noisy low-resolution image sequences, developed in a general framework based on data fusion. This framework preserves the high-frequency components along edge orientations in the restored high-resolution frame. The multiframe interpolation algorithm comprises two levels of fusion: the first obtains enhanced low-resolution images as input data for the adaptive regularized interpolation, and the second constructs the adaptive fusion algorithm based on regularized interpolation using steerable orientation analysis. To apply the regularization approach to the interpolation procedure, we first present an observation model of the low-resolution video formation system. Based on this model, we obtain an interpolated image that minimizes the residual between the high-resolution and interpolated images under a priori constraints. In addition, by incorporating spatially adaptive constraints, directional high-frequency components are preserved while noise is efficiently suppressed. In the experiments, images interpolated by conventional algorithms are compared with those produced by the proposed adaptive fusion-based algorithm. The results show that the proposed algorithm preserves directional high-frequency components while suppressing undesirable artifacts such as noise.
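The observation-model idea can be sketched as a tiny regularized interpolation loop: find a high-resolution image whose downsampled version matches the low-resolution observation, under a smoothness prior. The block-mean observation model, the parameters `lam` and `beta`, and the pixel-replication initialization are assumptions for illustration; the paper's steerable, edge-adaptive constraints and its data fusion stages are omitted.

```python
import numpy as np

def regularized_upscale(y, scale=2, lam=0.05, beta=0.5, iters=50):
    """Gradient descent on ||D(x) - y||^2 + lam * ||grad x||^2, where D is
    block-mean downsampling by `scale`."""
    H, W = y.shape
    x = np.kron(y, np.ones((scale, scale)))       # start from pixel replication
    for _ in range(iters):
        down = x.reshape(H, scale, W, scale).mean(axis=(1, 3))
        # adjoint of block-mean downsampling: replicate and divide by scale^2
        up = np.kron(y - down, np.ones((scale, scale))) / scale ** 2
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        x += beta * (up + lam * lap)
    return x

# toy check: a constant observation is reproduced exactly at 2x resolution
hi = regularized_upscale(np.full((4, 4), 5.0))
```

The data term keeps the estimate consistent with the observation model while the prior fills in the missing high-resolution detail; the paper replaces this isotropic prior with direction-steered constraints.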


Computer Analysis of Images and Patterns | 2005

Pattern selective image fusion for multi-focus image reconstruction

Vivek Maik; Jeongho Shin; Joon Ki Paik

This paper presents a method for fusing multiple images of a static scene and shows how to apply the proposed method to extend depth of field. Pattern-selective image fusion provides a mechanism for combining multiple monochromatic images by identifying salient features in the source images and combining those features into a single fused image. The source images are first decomposed using filter-subtract-decimate (FSD) in the Laplacian domain. The sum-modified-Laplacian (SML) is used to obtain the depth of focus in the source images. The selected images are then blended together using monotonically decreasing soft decision blending (SDB), which enables smooth transitions across region boundaries. The resulting fused image carries more focus information than any of the constituent images while retaining a natural appearance. Experimental results demonstrate depth-of-focus extension using consumer video camera outputs.
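The SML-based selection can be sketched per pixel: compute a modified-Laplacian focus measure for each source and keep the sharper pixel. This is a hard-selection sketch; the paper's FSD pyramid decomposition and the soft decision blending that smooths the seams are omitted, and the 3x3 window is an assumption.

```python
import numpy as np

def sml(img):
    """Sum-modified-Laplacian focus measure over a 3x3 window."""
    f = img.astype(float)
    ml = (np.abs(2 * f - np.roll(f, 1, 0) - np.roll(f, -1, 0)) +
          np.abs(2 * f - np.roll(f, 1, 1) - np.roll(f, -1, 1)))
    return sum(np.roll(np.roll(ml, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def fuse_multifocus(a, b):
    """Per-pixel selection of the source with the larger focus measure."""
    return np.where(sml(a) >= sml(b), a, b)

# toy check: each source is sharp on one half and blurred on the other
base = (np.add.outer(np.arange(32), np.arange(32)) % 2).astype(float)
blurred = sum(np.roll(np.roll(base, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
a = np.where(np.arange(32) < 16, base, blurred)   # sharp left half
b = np.where(np.arange(32) < 16, blurred, base)   # sharp right half
fused = fuse_multifocus(a, b)
```

Away from the seam, the fused result reproduces the sharp source on each side; the soft blending of the paper exists precisely to handle the transition region this hard selection leaves ragged.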


International Conference on Consumer Electronics | 2006

Edge based adaptive Kalman filtering for real-time video stabilization

Ohyun Kwon; Jeongho Shin; Joonki Paik

Image stabilization, particularly during zoom-in, is an important feature in consumer cameras. Existing stabilization methods often produce incorrect motion vectors because of an improper choice of motion estimation or correction method. The proposed algorithm is simple and effective, making it suitable for low-cost camcorders, digital cameras, CCTV, surveillance video systems, and television broadcasting systems.


International Conference on Image Analysis and Recognition | 2005

Face recognition using optimized 3D information from stereo images

Changhan Park; Seanae Park; Jeongho Shin; Joon Ki Paik; Jaechan Namkung

In this paper we propose a new range-based face recognition method that significantly improves the recognition rate using an optimized stereo acquisition system. The optimized 3D acquisition system consists of an eye detection algorithm, facial pose direction estimation, and principal component analysis (PCA). The proposed method operates in the YCbCr color space to detect the face candidate area. To detect the correct face, it acquires the distance to the face candidate area and depth information for the eyes and mouth. After scaling, the system compensates for pose change according to the distance. The face is finally recognized by applying the optimized PCA to each area using the detected facial pose elements. Simulation results show a face recognition rate of 95.83% for frontal faces at 100 cm and 98.3% under pose change. The proposed method can therefore achieve a high recognition rate with appropriate scaling and pose compensation according to distance.
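The PCA stage can be sketched with eigenfaces-style projection and nearest-neighbor matching. This is a generic sketch: the range and pose-specific optimizations, the YCbCr detection, and the per-area PCA of the paper are not reproduced, and the 4-pixel "faces" below are toy data.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on row-vector samples X of shape (n_samples, n_features)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def nearest_match(probe, gallery, mean, comps):
    """Index of the gallery sample closest to the probe in PCA space."""
    p = (probe - mean) @ comps.T
    G = (gallery - mean) @ comps.T
    return int(np.argmin(np.linalg.norm(G - p, axis=1)))

# toy check: two "identities" as 4-pixel faces
gallery = np.array([[0.0, 0.0, 0.0, 0.0],
                    [10.0, 10.0, 10.0, 10.0]])
mean, comps = pca_fit(gallery, n_components=1)
match = nearest_match(np.array([1.0, 1.0, 0.0, 0.0]), gallery, mean, comps)
```

A probe close to the first identity projects near its gallery sample in the reduced space, so the nearest-neighbor rule returns index 0.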


Advances in Multimedia | 2005

Feature fusion-based multiple people tracking

Junhaeng Lee; Sangjin Kim; Daehee Kim; Jeongho Shin; Joon Ki Paik

This paper presents a feature fusion-based tracking algorithm using optical flow under the non-prior training active feature model (NPT-AFM) framework. The proposed tracking procedure can be divided into three steps: (i) localization of human objects, (ii) prediction and correction of each object's location using spatio-temporal information, and (iii) restoration of occlusion using the NPT-AFM [15]. Feature points inside an ellipsoidal region enclosing each object are estimated instead of the object's shape boundary, and are updated as elements of the training set for the AFM. Although the proposed algorithm uses a greatly reduced number of feature points, it enables the tracking of occluded people against complicated backgrounds.
