
Publications


Featured research published by In Kyu Park.


IEEE Transactions on Parallel and Distributed Systems | 2011

Design and Performance Evaluation of Image Processing Algorithms on GPUs

In Kyu Park; Nitin Singhal; Man Hee Lee; Sung-Dae Cho; Chris W. Kim

In this paper, we examine the key factors in the design and evaluation of image processing algorithms on massively parallel graphics processing units (GPUs) using the compute unified device architecture (CUDA) programming model. A set of metrics, customized for image processing, is proposed to quantitatively evaluate algorithm characteristics. In addition, we show that a range of image processing algorithms map readily to CUDA using multiview stereo matching, linear feature extraction, JPEG2000 image encoding, and nonphotorealistic rendering (NPR) as our example applications. The algorithms are carefully selected from major domains of image processing, so they inherently contain a variety of subalgorithms with diverse characteristics when implemented on the GPU. Performance is evaluated in terms of execution time and is compared to the fastest host-only version implemented using OpenMP. It is shown that the observed speedup varies extensively depending on the characteristics of each algorithm. Intensive analysis is conducted to show the appropriateness of the proposed metrics in predicting the effectiveness of an application for parallel implementation.


Image and Vision Computing | 1999

Color image retrieval using hybrid graph representation

In Kyu Park; Il Dong Yun; Sang Uk Lee

In this paper, a robust color image retrieval algorithm is proposed based on a hybrid graph representation, i.e., a dual graph which consists of the Modified Color Adjacency Graph (MCAG) and the Spatial Variance Graph (SVG). The MCAG, which is similar to the Color Adjacency Graph (CAG) [6], is proposed to enhance the indexing ability and the database capacity by increasing the feature dimension. In addition, the SVG is introduced in order to utilize the geometric statistics of the chromatic segments in the spatial domain. In the matching process, we expand the histogram intersection [2] into a graph intersection, in which graph matching is performed using simple matrix operations. Intensive discussions and experimental results are provided to evaluate the performance of the proposed algorithm. Experiments are carried out on Swain's test images and the Virage images, demonstrating that the proposed algorithm yields high retrieval performance with tolerable computational complexity. It is also shown that the proposed algorithm works well even if the query image is corrupted, e.g., when a large portion of the pixels is missing.
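The histogram intersection measure [2], which the paper extends to a graph intersection, can be sketched in a few lines. This is the standard Swain-Ballard formulation on toy histograms, not the authors' MCAG/SVG graph matching:

```python
import numpy as np

def histogram_intersection(query_hist, model_hist):
    """Swain-Ballard histogram intersection: the fraction of the model
    histogram that is also covered by the query histogram."""
    return np.minimum(query_hist, model_hist).sum() / model_hist.sum()

# Toy 4-bin color histograms (pixel counts per color bin).
query = np.array([10.0, 20.0, 30.0, 40.0])
model = np.array([15.0, 15.0, 35.0, 35.0])
score = histogram_intersection(query, model)  # 0.9; 1.0 means identical
```

In the paper, the analogous min-and-normalize operation is applied to graph attributes rather than bin counts, which is what allows the matching to be carried out with simple matrix operations.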


International Conference on Image Processing | 2002

Depth image-based representations for static and animated 3D objects

Yuri Matveevich Bayakovski; Leonid Levkovich-Maslyuk; Alexey Ignatenko; Anton Konushin; Dmitri Alexandrovich Timasov; Alexander Olegovich Zhirkov; Mahn-Jin Han; In Kyu Park

We describe a novel depth image-based representation (DIBR) that has been adopted into the MPEG-4 animation framework extension (AFX). The idea of this approach is to build a compact representation of a 3D object or scene without storing the geometry information in traditional polygonal form. The main formats of the DIBR family are simple texture (an image together with a depth array), point texture (a view of a scene from a single input camera but with multiple pixels along each line of sight), and octree image (an octree data structure together with a set of images and their viewport parameters). The designed node specifications and rendering algorithms are addressed. The experimental results show the efficacy and fidelity of the proposed approach.


Machine Vision and Applications | 2010

Fast and automatic object pose estimation for range images on the GPU

In Kyu Park; Marcel Germann; Michael D. Breitenstein; Hanspeter Pfister

We present a pose estimation method for rigid objects from single range images. Using 3D models of the objects, many pose hypotheses are compared in a data-parallel version of the downhill simplex algorithm with an image-based error function. The pose hypothesis with the lowest error value yields the pose estimation (location and orientation), which is refined using ICP. The algorithm is designed especially for implementation on the GPU. It is completely automatic, fast, robust to occlusion and cluttered scenes, and scales with the number of different object types. We apply the system to bin picking, and evaluate it on cluttered scenes. Comprehensive experiments on challenging synthetic and real-world data demonstrate the effectiveness of our method.
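The hypothesis-scoring core of such a method can be sketched on the CPU as below; the constant-depth "renderings", the mean-absolute-depth error, and the depth > 0 validity test are illustrative stand-ins, not the paper's GPU renderer or error function:

```python
import numpy as np

def image_error(observed, rendered):
    """Image-based error: mean absolute depth difference over pixels
    where both range images have a valid measurement (here: depth > 0)."""
    valid = (observed > 0) & (rendered > 0)
    if not valid.any():
        return np.inf
    return float(np.abs(observed[valid] - rendered[valid]).mean())

# Toy observed range image plus a stack of rendered pose hypotheses.
observed = np.full((4, 4), 2.0)
observed[0, 0] = 0.0                      # an invalid (occluded) pixel
hypotheses = np.stack([np.full((4, 4), d) for d in (1.0, 2.1, 3.0)])

errors = np.array([image_error(observed, h) for h in hypotheses])
best = int(np.argmin(errors))             # index of the winning hypothesis
```

In the actual method each hypothesis is a full 6-DoF pose, the hypotheses are scored in parallel on the GPU inside a data-parallel downhill simplex search, and the winner is refined with ICP.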


International Conference on Image Processing | 2010

Implementation and optimization of image processing algorithms on handheld GPU

Nitin Singhal; In Kyu Park; Sung-Dae Cho

The advent of GPUs with programmable shaders on handheld devices has motivated embedded application developers to utilize the GPU to offload computationally intensive tasks and relieve the burden on the embedded CPU. In this work, we propose an image processing toolkit for the handheld GPU with programmable shaders using the OpenGL ES 2.0 API. By using the image processing toolkit, we show that a range of image processing algorithms map readily to the handheld GPU. We employ real-time video scaling, cartoon-style non-photorealistic rendering, and the Harris corner detector as our example applications. In addition, we propose techniques to achieve increased performance with optimized shader design and efficient sharing of the GPU workload between vertex and fragment shaders. Performance is evaluated in terms of frames per second at varying video stream resolutions.


International Conference on 3-D Digital Imaging and Modeling | 2007

Automatic Pose Estimation for Range Images on the GPU

Marcel Germann; Michael D. Breitenstein; Hanspeter Pfister; In Kyu Park

Object pose (location and orientation) estimation is a common task in many computer vision applications. Although many methods exist, most algorithms need manual initialization and lack robustness to illumination variation, appearance change, and partial occlusions. We propose a fast method for automatic pose estimation without manual initialization based on shape matching of a 3D model to a range image of the scene. We developed a new error function to compare the input range image to pre-computed range maps of the 3D model. We use the tremendous data-parallel processing performance of modern graphics hardware to evaluate and minimize the error function on many range images in parallel. Our algorithm is simple and accurately estimates the pose of partially occluded objects in cluttered scenes in about one second.


International Conference on 3-D Digital Imaging and Modeling | 1999

Constructing NURBS surface model from scattered and unorganized range data

In Kyu Park; Il Dong Yun; Sang Uk Lee

We propose an algorithm to produce a 3D surface model from a set of range data, based on the Non-Uniform Rational B-Splines (NURBS) surface fitting technique. It is assumed that the range data is initially unorganized and scattered 3D points, while their connectivity is also unknown. The proposed algorithm is roughly made up of two stages: initial model approximation employing K-means clustering, and construction of NURBS patch network using hierarchical graph representation. The initial model is approximated by both a polyhedral and triangular model. Then, the initial model is represented by a hierarchical graph, which is efficiently used to construct the G/sup 1/ continuous NURBS patch network of the whole object. Experiments are carried out on synthetic and real range data to evaluate the performance of the proposed algorithm. It is shown that the initial model, as well as the NURBS patch network, are constructed automatically, while the modeling error is observed to be negligible.
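As a toy illustration of the first stage, plain Lloyd's K-means on scattered 3D points looks like this; the two synthetic blobs and the parameter choices are assumptions for the demo, and in the actual pipeline the clusters would seed the polyhedral/triangular initial model and the NURBS patch network:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-means on 3D points: returns centers and labels."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs of scattered, unorganized 3D points.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0, 0, 0), scale=0.1, size=(50, 3))
blob_b = rng.normal(loc=(5, 5, 5), scale=0.1, size=(50, 3))
points = np.vstack([blob_a, blob_b])
centers, labels = kmeans(points, k=2)
```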


Optical Engineering | 2010

Single-image motion deblurring using adaptive anisotropic regularization

Hanyu Hong; In Kyu Park

We present a novel algorithm to remove motion blur from a single blurred image. To estimate the unknown motion blur kernel as accurately as possible, we propose an adaptive algorithm using anisotropic regularization. The proposed algorithm preserves the point spread function (PSF) path while keeping the properties of the motion PSF when solving for the blur kernel. Adaptive anisotropic regularization and refinement of the blur kernels are incorporated into an iterative process to improve the precision of the blur kernel. Maximum likelihood (ML) estimation deblurring based on edge-preserving regularization is derived to reduce artifacts while avoiding oversmoothing of the details. By using the estimated blur kernel and the proposed ML estimation deblurring, the motion blur can be removed effectively. The experimental results for real motion-blurred images show that the proposed algorithm removes the blur effectively.
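As a reference point for the ML estimation step, the classic Richardson-Lucy iteration below is the textbook maximum-likelihood deblurring under a Poisson noise model, without the paper's adaptive anisotropic and edge-preserving regularizers; the box PSF and toy image are assumptions for the demo:

```python
import numpy as np

def conv_same(img, ker):
    """'Same'-size 2D convolution via shifted sums; the kernel used here
    is symmetric, so correlation and convolution coincide."""
    kh, kw = ker.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode='edge')
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += ker[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, iters=50):
    """Richardson-Lucy deconvolution: the maximum-likelihood deblurring
    estimate under a Poisson noise model, with no regularization term."""
    estimate = np.full_like(blurred, 0.5)
    for _ in range(iters):
        reblurred = conv_same(estimate, psf)
        ratio = blurred / (reblurred + 1e-12)
        estimate = estimate * conv_same(ratio, psf)
    return estimate

# Toy example: a bright square blurred by a symmetric 3x3 box PSF.
sharp = np.zeros((16, 16))
sharp[6:10, 6:10] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
blurred = conv_same(sharp, psf)
restored = richardson_lucy(blurred, psf)
```

The paper's contribution sits on top of this kind of estimator: the anisotropic and edge-preserving terms steer the kernel and image estimates away from the ringing and oversmoothing that the unregularized iteration can produce.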


computer vision and pattern recognition | 2016

Robust Light Field Depth Estimation for Noisy Scene with Occlusion

Williem; In Kyu Park

Light field depth estimation is an essential part of many light field applications. Numerous algorithms have been developed using various light field characteristics. However, conventional methods fail when handling a noisy scene with occlusion. To remedy this problem, we present a light field depth estimation method which is more robust to occlusion and less sensitive to noise. Novel data costs using an angular entropy metric and an adaptive defocus response are introduced. Integrating both data costs significantly improves robustness to occlusion and noise. Cost volume filtering and graph cut optimization are utilized to improve the accuracy of the depth map. Experimental results confirm that the proposed method is robust and achieves high-quality depth maps in various scenes. The proposed method outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.
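A minimal sketch of an entropy-style data cost: at the correct depth the angular samples of a Lambertian scene point agree, so their intensity distribution is peaked and its entropy is low, and a few occluded or noisy samples perturb the entropy less than they would a variance-based cost. The binning, sample values, and exact formulation below are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def angular_entropy(samples, bins=8):
    """Shannon entropy of the intensity histogram of the angular samples
    gathered at one candidate depth; lower means more photo-consistent."""
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# At the true depth most angular views agree; two occluded views differ.
consistent = np.array([0.5] * 12 + [0.9, 0.1])
wrong_depth = np.linspace(0.0, 0.99, 14)   # spread-out intensities
cost_true = angular_entropy(consistent)
cost_false = angular_entropy(wrong_depth)
```

Choosing, per pixel, the candidate depth with the lowest such cost (before the cost volume filtering and graph cut steps) is the intuition behind an entropy-based data term.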


EURASIP Journal on Advances in Signal Processing | 2005

Image-based 3D face modeling system

In Kyu Park; Hui Zhang; Vladimir Vezhnevets

This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with the synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a number of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 23 minutes.

Collaboration


Dive into In Kyu Park's collaborations.

Top Co-Authors

Sang Uk Lee | Seoul National University
Kyoung Mu Lee | Seoul National University
Il Dong Yun | Hankuk University of Foreign Studies
Haesol Park | Seoul National University