Publication


Featured research published by Sung-Kee Park.


IEEE Transactions on Circuits and Systems for Video Technology | 2010

FPGA Design and Implementation of a Real-Time Stereo Vision System

Seunghun Jin; Jung Uk Cho; Xuan Dai Pham; Kyoung Mu Lee; Sung-Kee Park; Munsang Kim; Jae Wook Jeon

Stereo vision is a well-known ranging method because it resembles the basic mechanism of the human eye. However, the computational complexity and large amount of data access make real-time processing of stereo vision challenging because of the inherent instruction cycle delay within conventional computers. To solve this problem, the past 20 years of research have focused on dedicated hardware architectures for stereo vision. This paper proposes a fully pipelined stereo vision system providing a dense disparity image with additional sub-pixel accuracy in real time. The entire stereo vision process, including rectification, stereo matching, and post-processing, is realized on a single field-programmable gate array (FPGA) without the need for any external devices. The hardware implementation is more than 230 times faster than a software program running on a conventional computer, and outperforms previous hardware-based implementations.
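
For illustration only, here is a minimal software sketch of block-matching stereo with parabolic sub-pixel refinement, the kind of dense disparity computation the abstract describes; it is not the paper's FPGA pipeline, and the window size, disparity range, and SAD cost are assumed choices.

import numpy as np

def disparity_map(left, right, max_disp=64, win=5):
    """Dense disparity from rectified grayscale images (H x W, float arrays)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = np.empty(max_disp)
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                costs[d] = np.abs(patch - cand).sum()   # SAD matching cost
            d0 = int(np.argmin(costs))
            # Parabolic interpolation around the best integer disparity
            # gives the sub-pixel accuracy mentioned in the abstract.
            if 0 < d0 < max_disp - 1:
                c_l, c_c, c_r = costs[d0 - 1], costs[d0], costs[d0 + 1]
                denom = c_l - 2.0 * c_c + c_r
                offset = 0.5 * (c_l - c_r) / denom if denom != 0 else 0.0
                disp[y, x] = d0 + offset
            else:
                disp[y, x] = d0
    return disp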


International Conference on Robotics and Automation | 2004

Mobile robot navigation based on direct depth and color-based environment modeling

Sung-Kee Park; Munsang Kim; Chong-Won Lee

This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environmental modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The advantage of this approach is that a similarity measure between the model and the sensing data can be computed easily on the horizontal centerline alone, since the vertical working volume between model and sensing data changes with robot motion. As a result, the indoor environment can be mapped with a compact and efficient representation. Based on such nodes and the sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Through basic real-world experiments, we show that the proposed method can serve as an effective visual navigation algorithm.
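
As a rough sketch of the random-sampling position estimation the abstract mentions, the snippet below scores pose hypotheses by comparing observed centerline depth/color features against features predicted from an environment model; the function names, feature layout, and Gaussian weighting are assumptions for illustration, not the paper's algorithm.

import numpy as np

def localize(particles, observed, model_for_pose, sigma=1.0, rng=None):
    """particles: (N, 3) array of [x, y, theta] pose hypotheses.
    observed: 1-D vector of depth+color values along the horizontal
    centerline of the current stereo image.
    model_for_pose: callable mapping a pose to the expected centerline
    feature vector predicted from the environment model (assumed)."""
    rng = rng or np.random.default_rng()
    weights = np.empty(len(particles))
    for i, pose in enumerate(particles):
        expected = model_for_pose(pose)
        err = np.linalg.norm(observed - expected)
        weights[i] = np.exp(-0.5 * (err / sigma) ** 2)
    weights += 1e-12          # avoid a degenerate all-zero weight vector
    weights /= weights.sum()
    # Resample poses in proportion to how well they explain the
    # centerline measurement, then return the weighted mean estimate.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], (weights[:, None] * particles).sum(axis=0)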


Intelligent Robots and Systems | 2009

Human augmented mapping for indoor environments using a stereo camera

Soohwan Kim; Howon Cheong; Ju-Hong Park; Sung-Kee Park

In this paper, we suggest a new method of human-augmented mapping for indoor environments using only a stereo camera. With the user's help, a robot equipped with a stereo camera can investigate the environment without failure and more efficiently. Moreover, the user can share information about the environment with the robot and add semantic information to the environmental map. We employ PCA features for visual landmarks and a hybrid map for map representation. In particular, we define two types of nodes, U- and R-nodes, and divide the map building into three processes: User's Guidance, Robot's Map Revision, and Robot's Map Completion. We implemented a human-augmented mapping system with a stereo camera and demonstrated it in rectangular corridors. A comparison with a manually built map showed the feasibility of the environmental map generated by the proposed method.
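
A minimal sketch of the PCA-based landmark descriptors the abstract names is shown below; the patch size, number of components, and function names are assumed, and the paper's hybrid U/R-node map structure is not reproduced here.

import numpy as np

def fit_pca_basis(patches, n_components=16):
    """patches: (N, D) array of flattened landmark image patches."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # Principal axes via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def describe(patch, mean, basis):
    """Project one flattened patch onto the learned PCA basis to get
    a compact landmark descriptor."""
    return basis @ (patch - mean)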


Intelligent Robots and Systems | 2005

Experimental research of navigation behavior selection using generalized stochastic Petri nets (GSPN) for a tour-guide robot

Gunhee Kim; Woojin Chung; Sung-Kee Park; Munsang Kim

This paper proposes a formal framework for selecting among multiple navigation behaviors for a service robot. In our approach, modeling, analysis, and performance evaluation are carried out based on generalized stochastic Petri nets (GSPN). By adopting a probabilistic approach, the framework helps the robot select the most desirable navigation behavior at run time according to environmental conditions. Moreover, after a mission, the robot evaluates its prior navigation performance from accumulated data and uses the results to improve future operations. GSPN also has several advantages over classical automata or the direct use of Markov processes. The basic ideas of the framework were introduced in our previous work (Gunhee Kim et al., 2005), so this paper focuses on experimental verification by implementing the framework on the guide robot Jinny. We conduct experiments on real guidance tasks with visitors at the National Science Museum of Korea. The results show that the proposed strategy is useful for selecting an appropriate navigation behavior in a dynamic space.
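
As a simplified stand-in for the GSPN-based selection described above (not the paper's model), the sketch below picks a navigation behavior from accumulated success statistics at run time; the behavior names and the Laplace-smoothed estimate are assumptions for illustration.

class BehaviorSelector:
    def __init__(self, behaviors):
        # Per-behavior counts of successful and total executions,
        # accumulated after each mission as the abstract describes.
        self.stats = {b: [0, 0] for b in behaviors}

    def record(self, behavior, success):
        s, n = self.stats[behavior]
        self.stats[behavior] = [s + int(success), n + 1]

    def select(self):
        # Choose the behavior with the highest estimated success
        # probability; Laplace smoothing avoids zero-count bias.
        def estimate(b):
            s, n = self.stats[b]
            return (s + 1) / (n + 2)
        return max(self.stats, key=estimate)

selector = BehaviorSelector(["path_following", "wall_following", "door_passing"])
selector.record("path_following", True)
print(selector.select())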


Institute of Control, Robotics and Systems International Conference Proceedings | 2003

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

Taigun Lee; Sung-Kee Park; Munsang Kim; Mignon Park


Institute of Control, Robotics and Systems International Conference Proceedings | 2002

A New Landmark-Based Visual Servoing with Stereo Camera for Door Opening

Myoungsoo Han; Soongeul Lee; Sung-Kee Park; Munsang Kim


Institute of Control, Robotics and Systems International Conference Proceedings | 2002

Real-time Tracking of Multiple Humans for Mobile Robot Application

Joonhyuk Choi; Byungsoo Park; Seok Soo Lee; Sung-Kee Park; Munsang Kim


Institute of Control, Robotics and Systems International Conference Proceedings | 2002

Face Detection and Recognition with Multiple Appearance Models for Mobile Robot Application

Taigun Lee; Sung-Kee Park; Munsang Kim


Archive | 2004

Flexible screw type height control device

Munsang Kim; Yoha Hwang; Seung-Jong Kim; Jong Min Lee; Sung-Kee Park; JongSuk Choi


Archive | 2003

Apparatus for measuring and fixing the three-dimensional location of medical instrument

Munsang Kim; Sung-Kee Park; Jong Suk Choi; Chang Hyun Cho; Dong Seok Ryu; Yo Ha Hwang; Min Joo Choi

Collaboration


Dive into Sung-Kee Park's collaborations.

Top Co-Authors

Munsang Kim, Korea Institute of Science and Technology
JongSuk Choi, Korea Institute of Science and Technology
Chong-Won Lee, Korea Institute of Science and Technology
Taigun Lee, Korea Institute of Science and Technology
Yo Ha Hwang, Korea Institute of Science and Technology
Byungsoo Park, Seoul National University
Dong Hwan Kim, Korea Institute of Science and Technology