Publication


Featured research published by Suyoung Chi.


International Conference on Pattern Recognition | 2004

Scene text extraction in natural scene images using hierarchical feature combining and verification

Kil-Cheon Kim; Hyeran Byun; Y. J. Song; Young-Woo Choi; Suyoung Chi; Kye Kyung Kim; YunKoo Chung

We propose a method that extracts text regions in natural scene images using low-level image features and verifies the extracted regions with a high-level text-stroke feature; the two levels of features are combined hierarchically. The low-level features are color continuity, gray-level variation, and color variance. Color continuity is used because most characters in a text region share the same color, and gray-level variation is used because text strokes are distinct from the background in their gray-level values. Color variance is used because text strokes are also distinct from the background in color, and this value is more sensitive than the gray-level variation. As a high-level feature, the text stroke is examined using multi-resolution wavelet transforms on local image areas, and the feature vector is fed to an SVM (support vector machine) for verification. We tested the proposed method with various kinds of natural scene images and confirmed that the extraction rates are high even in complex images.
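Two of the low-level cues above, gray-level variation and color variance, reduce to simple statistics over a candidate region. A minimal sketch, with function name and details ours rather than the paper's:

```python
import numpy as np

def region_features(rgb_patch):
    """Two of the low-level cues for a candidate text region:
    gray-level variation (text strokes contrast with the background
    in intensity) and color variance (they also contrast in color)."""
    patch = np.asarray(rgb_patch, dtype=float)
    gray = patch.mean(axis=2)                             # simple intensity image
    gray_variation = float(gray.std())
    color_variance = float(patch.var(axis=(0, 1)).sum())  # summed over R, G, B
    return gray_variation, color_variance
```

A uniform background patch scores zero on both cues, while a patch containing strokes scores high; the thresholds for keeping a region would be tuned on data.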


International Conference on Document Analysis and Recognition | 2005

Text region extraction and text segmentation on camera-captured document style images

Y. J. Song; K. C. Kim; Young-Woo Choi; H. R. Byun; Seon Ho Kim; Suyoung Chi; Dae-Geun Jang; YunKoo Chung

In this paper, we propose a text extraction method for camera-captured document style images and a text segmentation method based on color clustering. The proposed extraction method detects text regions in the images using two low-level image features and verifies the regions with a high-level text-stroke feature; the two levels of features are combined hierarchically. The low-level features are intensity variation and color variance. As a high-level feature, we use text strokes, examined with multi-resolution wavelet transforms on local image areas. The stroke feature vector is fed to an SVM (support vector machine) for verification when needed. The proposed text segmentation method applies color clustering to the extracted text regions. We improved the K-means clustering method so that it selects K and the initial seed values automatically. We tested the proposed methods with various document style images captured by three different cameras and confirmed that the extraction rates are good enough for real-life applications.
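The abstract does not spell out how K and the seeds are chosen; one plausible sketch, assuming the dominant bins of a coarse RGB histogram supply both the cluster count and the initial seeds (the paper's actual rule may differ):

```python
import numpy as np

def auto_seeds(pixels, bins=4, min_frac=0.05):
    """Pick K and the initial seeds from the data: quantize RGB into a
    coarse histogram and keep every bin holding at least `min_frac` of
    the pixels; K = number of dominant bins, seeds = their mean colors."""
    pts = np.asarray(pixels, dtype=float)
    q = np.minimum((pts / 256.0 * bins).astype(int), bins - 1)
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    uniq, counts = np.unique(codes, return_counts=True)
    keep = uniq[counts >= min_frac * len(pts)]
    return np.array([pts[codes == c].mean(axis=0) for c in keep])

def kmeans(pixels, seeds, iters=10):
    """Plain Lloyd iterations starting from the automatically chosen seeds."""
    pts = np.asarray(pixels, dtype=float)
    centers = seeds.copy()
    for _ in range(iters):
        labels = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```

For text segmentation, the pixels of one extracted region would be clustered this way and the cluster closest to the stroke color kept as text.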


Society of Instrument and Control Engineers of Japan | 2006

Visual Processing of Rock, Scissors, Paper Game for Human Robot Interaction

Ho-Sub Yoon; Suyoung Chi

There are many interaction methods between humans and robots that use a visual sensor such as a camera. In this paper, we implement a rock, scissors, paper game using a robot camera. The main problem arises from the robot's height, or more precisely the camera height, because hand shapes depend on the camera angle. If the robot's camera is lower than the human, it is difficult to distinguish the rock shape from the scissors shape when the human stretches a hand out toward the camera. To solve this problem, we propose that the human raise the hand to shoulder level. In this setting, we can build a skin model to separate the human hand, using a face detector based on the AdaBoost algorithm included in the OpenCV library. Once a human face is detected and a skin model is built from the detected face region, we can obtain skin information that is robust to illumination variance.
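The skin-model step can be sketched as follows, assuming the face bounding box is already available (the paper obtains it with an AdaBoost face detector such as OpenCV's); the per-channel Gaussian threshold is our illustrative model choice, not necessarily the paper's:

```python
import numpy as np

def skin_mask(image, face_box, n_std=2.5):
    """Learn a per-person skin model from the detected face region and
    threshold the whole image with it. `face_box` is (x, y, w, h)."""
    img = np.asarray(image, dtype=float)
    x, y, w, h = face_box
    face = img[y:y + h, x:x + w].reshape(-1, 3)
    mu, sd = face.mean(axis=0), face.std(axis=0) + 1e-6
    z = np.abs((img - mu) / sd)         # per-channel z-score per pixel
    return (z < n_std).all(axis=2)      # True where the pixel looks like skin
```

Because the model is rebuilt from the current face, it adapts to the current lighting, which is what gives the robustness to illumination variance claimed above.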


Intelligent Robots and Systems | 2006

A Robot Photographer with User Interactivity

Hyunsang Ahn; Do Hyung Kim; Jaeyeon Lee; Suyoung Chi; Kye Kyung Kim; Jinsul Kim; Minsoo Hahn; Hyunseok Kim

This paper describes a new robot photographer system that can interact with people. The goal of this research is to make the system act like a human photographer. The system is based on a mobile robot with wireless communication and stereo vision capabilities. It recognizes the waving hands of people, moves toward them, and takes pictures with designated compositions. The pictures are transmitted to personal computers over the wireless network. In comparison with previous research, what most distinguishes this system is its human interaction and support for user preferences. To realize these properties, the system takes the optical properties of the lens into account and applies a vision algorithm not previously tried in robot photographer systems.


Intelligent Robots and Systems | 2006

A robust human head detection method for human tracking

Ho-Sub Yoon; Do Hyung Kim; Suyoung Chi; Young-Jo Cho

In this paper, an algorithm for human head detection at distances exceeding 2.5 m between a camera and an object is described. The algorithm is used for the control of a robot, which imposes the additional difficulties of a moving camera, moving objects, various face orientations, and uncontrolled illumination. Under these circumstances, only the assumption that human head and body contours have an omega (Omega) shape is made. In order to separate the background from these omega shapes, three basic features, gray level, color, and edges, are used and combined. The skin color cue is very useful when the image stream is frontal, has large face regions, and has no background objects similar in color to skin. The gray cue is important when captured faces have a lower gray level than background objects. The edge cue is helpful when background objects have gray levels and colors similar to those of a head but can be discriminated by their edges. Since these three cues have roughly orthogonal failure modes, they complement each other. The next step is a splitting method between the head and body region using the proposed method. The final step is ellipse fitting and a head-region verification algorithm. The resulting algorithm is robust to head rotation, illumination changes, and variable head sizes. Furthermore, real-time processing is possible.
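The abstract states that the three cues fail in roughly orthogonal situations; a simple illustrative way to combine them (not necessarily the paper's rule) is a two-of-three vote over the per-pixel binary masks each cue produces:

```python
import numpy as np

def combine_cues(gray_mask, color_mask, edge_mask):
    """Two-of-three vote over the per-pixel binary masks produced by the
    gray, color, and edge cues: since the cues fail in roughly orthogonal
    situations, a pixel kept by any two of them is treated as head/body."""
    votes = (gray_mask.astype(int) + color_mask.astype(int)
             + edge_mask.astype(int))
    return votes >= 2
```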


International Symposium on Communications and Information Technologies | 2007

Robust Face Recognition Using the Modified Census Transform

Woo-han Yun; Ho-Sub Yoon; Dohyung Kim; Suyoung Chi

Many algorithms do not work well in real-world systems because such systems suffer from illumination variation and imperfect detection of the face and eyes. In this paper, we compare illumination normalization methods (SQI, HE, GIC) and feature extraction methods (PCA, LDA, 2dPCA, 2dLDA, B2dLDA) using the Yale B database and the ETRI database. In addition, we propose a stable and robust illumination normalization method using a modified census transform (MCT). The experimental results show that MCT is robust to illumination variations as well as to inaccurate eye and face detections. Among the feature extraction methods, B2dLDA showed the best performance.
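The modified census transform itself is well defined: every pixel of each 3x3 neighbourhood is compared against the neighbourhood mean, and the nine comparison bits are packed into an index, so the codes survive gain and offset changes of the image. A sketch (the packing order here is an arbitrary choice):

```python
import numpy as np

def mct(image):
    """Modified census transform: compare every pixel of each 3x3
    neighbourhood with the neighbourhood mean and pack the nine
    comparison bits into one index per interior pixel."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # the nine shifted views of the image, one per neighbourhood position
    nbrs = [img[dy:h - 2 + dy, dx:w - 2 + dx]
            for dy in range(3) for dx in range(3)]
    mean = sum(nbrs) / 9.0
    code = np.zeros(mean.shape, dtype=int)
    for n in nbrs:
        code = (code << 1) | (n > mean).astype(int)
    return code  # values in [0, 511]
```

Only the ordering of each pixel relative to its local mean survives, so `mct(img)` equals `mct(a * img + b)` for any gain a > 0 and offset b, which is the illumination robustness exploited above.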


IFAC Proceedings Volumes | 2005

Real-Time Sound Localization Using Time Difference for Human Robot Interaction

Ji-Yeoun Lee; Suyoung Chi; Jae-Yeun Lee; Minsoo Hahn; Young-Jo Cho

This paper suggests an algorithm that can detect the direction of a sound source in real time. The algorithm uses the time difference, or phase difference, of the sound source across three microphones placed 120 degrees apart. In addition, a sound-source tracking algorithm with a shorter execution time than existing algorithms is proposed to suit a real-time service robot. Our algorithm produces nearly correct results within an error range of ±6 degrees. If the algorithm is improved to take reverberation, echo, and the altitude of the sound source into account, better performance can be expected.
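For a single microphone pair the time-difference geometry is standard: estimate the inter-microphone delay by cross-correlation, then convert it to a bearing with arcsin under a far-field assumption. The paper combines three microphones 120 degrees apart; the sketch below covers one pair, and the function and parameter names are illustrative:

```python
import numpy as np

def tdoa_angle(sig_a, sig_b, fs, mic_dist, c=343.0):
    """Bearing of a far-field source relative to a two-microphone
    baseline: cross-correlate to find the arrival delay tau, then
    angle = arcsin(c * tau / d)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = corr.argmax() - (len(sig_b) - 1)  # negative when mic B hears it later
    tau = -lag / fs                          # delay of mic B behind mic A (s)
    s = np.clip(c * tau / mic_dist, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))   # positive: source on mic A's side
```

With microphones 0.5 m apart sampled at 8 kHz, a 5-sample delay corresponds to about 25.4 degrees; combining the three pairs resolves the full 360-degree direction.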


Human-Robot Interaction | 2014

Interaction control for postural correction on a riding simulation system

Sangseung Kang; Kye Kyung Kim; Suyoung Chi; Jaehong Kim

A horseback riding simulator is a robotic machine that simulates the motion of horseback riding. In this paper, we present an interaction control system for a horseback riding simulator. The proposed system provides a postural correction function suited to the user, based on the user's historical log and specialized posture coaching data. The system adopts schemes for posture detection and recognition of the identified user in order to recommend customized exercise modes. Our experiments show that these techniques help users maintain good posture while riding.


Systems, Man and Cybernetics | 2000

Segmentation of a text printed in Korean and English using structure information and character recognizers

Young-Sup Hwang; Kyung-Ae Moon; Suyoung Chi; Dae-Geun Jang; Weon-Geun Oh

The purpose of the research presented here is to segment a text image printed in both Korean and English into character images, utilizing the structural information of Korean and English characters and using Korean, English, and mixed-language character recognizers. The image cannot be separated using only the width and height of a character, because those of an English character are not constant, unlike those of a Korean character. Therefore, we first classify the image into Korean or English using the structural information of Korean and English characters. If a block is determined to be a Korean character, we segment it with the average width of the Korean characters in the text lines. If it is determined to be an English character, we segment it using a classical method for segmenting touching alphanumeric characters. If it cannot be determined, we find possible cut points using a vertical histogram and use the mixed-language recognizer to determine the right cut point. Since our method first classifies a block into Korean or English, it runs faster than traditional methods that cannot identify the language. Each classified block can also be segmented more accurately, because more specific knowledge about Korean and English characters can be applied.
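The vertical-histogram step for undetermined blocks can be sketched directly: project ink counts onto columns and take empty columns as candidate cuts (touching characters then require the recognizer-guided choice described above). Function and parameter names are ours:

```python
def cut_points(binary_block):
    """Candidate cut columns for an undetermined block: columns whose
    vertical ink projection is zero, i.e. gaps between characters.
    `binary_block` is a list of rows of 0/1 ink values."""
    n_cols = len(binary_block[0])
    hist = [sum(row[c] for row in binary_block) for c in range(n_cols)]
    return [c for c, v in enumerate(hist) if v == 0]
```

The mixed-language recognizer then scores the segmentations these candidates induce and keeps the best-scoring cut.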


International Conference on Ubiquitous and Future Networks | 2016

Consideration of manufacturing data to apply machine learning methods for predictive manufacturing

Ji-Hyeong Han; Suyoung Chi

With the recent development of the Internet of Things and big data, serious attempts to implement smart factories have increased. To realize a smart factory, a predictive manufacturing system should first be implemented. As a first step toward predictive manufacturing, this paper focuses on solving a simple but time-consuming and costly task in a predictive manner. The target problem is predicting the CNC tool-wear compensation offset from data using machine learning methods. To apply machine learning methods, we must understand the characteristics of the data and find the most suitable method according to those characteristics. Thus, this paper discusses the characteristics of manufacturing data and compares various cases of applying machine learning methods.
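The paper compares several machine learning methods without fixing one; as an illustration only, ordinary least squares mapping process features to the compensation offset (the feature names suggested in the comments are hypothetical):

```python
import numpy as np

def fit_offset_model(X, y):
    """Ordinary least squares from process features (e.g. hypothetical
    cumulative cutting time, spindle load) to the observed tool-wear
    compensation offset; returns weights with a trailing bias term."""
    Xb = np.hstack([np.asarray(X, dtype=float), np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, np.asarray(y, dtype=float), rcond=None)
    return w

def predict_offset(w, features):
    """Predicted compensation offset for one new part, before machining."""
    return float(np.dot(list(features) + [1.0], w))
```

In practice, the data characteristics discussed in the paper (small sample sizes, drift as the tool wears) are what decide whether a linear model like this or a nonlinear method is the better fit.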

Collaboration


Dive into Suyoung Chi's collaborations.

Top Co-Authors

Young-Jo Cho (Electronics and Telecommunications Research Institute)
Sangseung Kang (Electronics and Telecommunications Research Institute)
Ho-Sub Yoon (Electronics and Telecommunications Research Institute)
Kyekyung Kim (Electronics and Telecommunications Research Institute)
Jaehong Kim (Electronics and Telecommunications Research Institute)
Jaeyeon Lee (Electronics and Telecommunications Research Institute)
Do Hyung Kim (Electronics and Telecommunications Research Institute)
Kye Kyung Kim (Electronics and Telecommunications Research Institute)
YunKoo Chung (Electronics and Telecommunications Research Institute)
Keun-Chang Kwak (Electronics and Telecommunications Research Institute)