Publication


Featured research published by Seonghun Lee.


International Conference on Pattern Recognition | 2010

Scene Text Extraction with Edge Constraint and Text Collinearity

Seonghun Lee; Min Su Cho; Kyomin Jung; Jin Hyung Kim

In this paper, we propose a framework for isolating text regions from natural scene images. The main algorithm has two functions: it generates text region candidates, and it verifies the labels of the candidates (text or non-text). The text region candidates are generated by a modified K-means clustering algorithm that references texture features, edge information, and color information. The candidate labels are then verified globally by a Markov Random Field model, in which a collinearity weight is added because most text is aligned. The proposed method achieves reasonable accuracy for text extraction on moderately difficult examples from the ICDAR 2003 database.
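
As a rough illustration of the candidate-generation step, the sketch below clusters pixels on Lab color plus Sobel edge magnitude using plain K-means. It is not the paper's modified K-means; the feature set and cluster count are assumptions:

```python
# Minimal sketch of clustering pixels into text-region candidate layers.
# NOT the authors' modified K-means; it only illustrates clustering on
# combined color and edge features.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def candidate_layers(bgr_image, n_clusters=3):
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Sobel edge magnitude as an extra feature channel, since the abstract
    # says edge information guides the clustering.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edges = np.sqrt(gx ** 2 + gy ** 2)
    h, w = gray.shape
    feats = np.concatenate([lab.reshape(-1, 3), edges.reshape(-1, 1)], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return labels.reshape(h, w)  # each cluster is a text/non-text candidate layer
```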


Pattern Recognition Letters | 2008

Complementary combination of holistic and component analysis for recognition of low-resolution video character images

Seonghun Lee; Jin Hyung Kim

Video OCR aims to extract text from video images in order to understand the context of the video. Video character images are usually given in low resolution and have unique characteristics such as large stroke distortion, font variation, and variable size, which make recognition very challenging. This is particularly true for Chinese and Korean, where characters have complicated shapes and the number of classes (characters) is very large. In this paper, we propose a complementary combination of two recognition approaches: a holistic approach and a component analysis. The holistic approach uses the global shape information of a character image to recognize a radical at a specific location in the character. In contrast, the component analysis uses the detailed local shape of a segmented radical image to recognize the radical. The former is robust to character degradation, whereas the latter is strong at handling ambiguous characters and font variations. In an evaluation on 50,000 video character images of Korean script, the proposed method achieved 96.5% accuracy, from which we conclude that it works well even on low-quality images of complicated characters.
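
To make the idea of a complementary combination concrete, here is a hypothetical score-fusion sketch that blends the two recognizers' per-class posteriors in log space. The paper's actual combination rule is not reproduced here; the weighting scheme is an assumption:

```python
# Hypothetical fusion of a holistic recognizer and a component (radical)
# recognizer via a weighted log-linear blend of per-class scores.
import numpy as np

def fuse_scores(holistic: np.ndarray, component: np.ndarray, alpha: float = 0.5) -> int:
    """Return the class index maximizing the blended log scores."""
    eps = 1e-9  # avoid log(0)
    score = alpha * np.log(holistic + eps) + (1.0 - alpha) * np.log(component + eps)
    return int(np.argmax(score))

# Example: the holistic recognizer is unsure, the component recognizer is not.
print(fuse_scores(np.array([0.4, 0.35, 0.25]), np.array([0.1, 0.8, 0.1])))  # -> 1
```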


Image and Vision Computing | 2013

Integrating multiple character proposals for robust scene text extraction

Seonghun Lee; Jin Hyung Kim

Text contained in scene images provides the semantic context of the images, so robust extraction of text regions is essential for successful scene text understanding. However, separating text pixels from scene images remains challenging because of uncontrolled lighting conditions and complex backgrounds. In this paper, we propose a two-stage conditional random field (TCRF) approach to robustly extract text regions from scene images. The approach models the spatial and hierarchical structure of scene text and finds text regions based on that model. In the first stage, the system generates multiple character proposals for the given image using multiple image segmentations and a local CRF model. In the second stage, the system selectively integrates the generated character proposals to determine proper character regions using a holistic CRF model. Through the TCRF approach, we cast scene text separation as a probabilistic labeling problem, which yields the label configuration of pixels that maximizes the conditional probability given the image. Experimental results show that our framework performs well on the public databases.
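
The labeling objective behind such CRF formulations can be illustrated with a toy unary-plus-Potts energy minimized by ICM. This is a simplification, not the TCRF itself: the potentials, the smoothness weight, and the inference method below are all assumptions:

```python
# Toy text/non-text labeling: per-pixel unary costs plus a Potts smoothness
# term, minimized greedily with Iterated Conditional Modes (ICM).
import numpy as np

def icm_binary(unary: np.ndarray, lam: float = 1.0, iters: int = 5) -> np.ndarray:
    """unary: (H, W, 2) cost of assigning label 0/1 to each pixel."""
    labels = np.argmin(unary, axis=2)
    H, W = labels.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                best, best_cost = labels[y, x], np.inf
                for l in (0, 1):
                    cost = unary[y, x, l]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != l:
                            cost += lam  # Potts penalty for disagreeing neighbors
                    if cost < best_cost:
                        best, best_cost = l, cost
                labels[y, x] = best
    return labels
```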


International Conference on Document Analysis and Recognition | 2011

Scene Text Extraction by Superpixel CRFs Combining Multiple Character Features

Min Su Cho; Jae-Hyun Seok; Seonghun Lee; Jin Hyung Kim

Features and relationships based on character color, edge, stroke, and context all play a role in text extraction from natural scene images, but no single feature or relationship is sufficient on its own. This paper presents a novel approach for combining these features and relationships within the Conditional Random Field (CRF) framework. Using a simple homogeneity measure, an input image is over-segmented into perceptually meaningful superpixels, and the text extraction task is then formulated as a superpixel labeling problem. This formulation allows parameter learning from training images and probabilistic inference that combines all the features and relationships of the input image. The proposed method performs well on both the KAIST scene text DB and the ICDAR 2003 DB.
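
As a sketch of the superpixel formulation, one might over-segment and compute per-superpixel features as below, with SLIC standing in for the paper's homogeneity-based over-segmentation and the CRF itself omitted:

```python
# Over-segment an image and compute one feature vector per superpixel;
# labeling these nodes as text/non-text would be the CRF's job.
import numpy as np
from skimage.segmentation import slic

def superpixel_features(rgb_image: np.ndarray, n_segments: int = 300):
    segments = slic(rgb_image, n_segments=n_segments, compactness=10)
    feats = []
    for sp in np.unique(segments):
        mask = segments == sp
        feats.append(rgb_image[mask].mean(axis=0))  # mean color per superpixel
    return segments, np.array(feats)
```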


Chinese Conference on Pattern Recognition | 2009

Scene Text Extraction Using Image Intensity and Color Information

Seonghun Lee; Jae-Hyun Seok; Kyungmin Min; Jin Hyung Kim

Robust extraction of text from scene images is essential for successful scene text recognition. Scene images usually contain non-uniform illumination, complex backgrounds, and text-like objects. In this paper, we propose a text extraction algorithm that combines adaptive binarization with a perceptual color clustering method. Adaptive binarization handles gradual illumination changes on character regions, so it can extract whole character regions even when shadows and/or light variations degrade image quality. However, binarization of gray-scale images cannot distinguish different color components that share the same luminance. The perceptual color clustering method complements it by extracting text regions with similar color distances, avoiding this limitation of binarization. Text verification based on local information of a single component and the global relationship between multiple components is used to determine the true text components. We demonstrate that the proposed method achieves reasonable text extraction accuracy on moderately difficult examples from the ICDAR 2003 database.
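
A rough sketch of the two complementary cues follows; the window size, the number of color clusters, and the choice of OpenCV primitives are assumptions rather than the paper's implementation:

```python
# Produce two complementary candidate maps: an adaptive binarization that
# tolerates gradual illumination change, and Lab-space color clusters that
# separate colors sharing the same luminance.
import cv2
import numpy as np

def text_candidates(bgr: np.ndarray, k: int = 4):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    binarized = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY_INV, 31, 10)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(lab, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    color_layers = labels.reshape(gray.shape)
    return binarized, color_layers  # candidate masks to be verified downstream
```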


Chinese Conference on Pattern Recognition | 2009

Scene Text Separation Using Touch Screen Interface

Jehyun Jung; Egyul Kim; Seonghun Lee; Jin Hyung Kim

Text separation in natural scenes is a crucial step in recognizing scene text. Since the computational power of a mobile device is limited, current text extraction methods are impractical for real-time use. We propose efficient text extraction methods that utilize the user's indication: when the user indicates a focus point or draws a line on the touch screen, the system uses this information to extract text from natural scenes efficiently. The text region is estimated from the user's touch, and color candidates are extracted; the system then chooses the color candidates with a high probability of being text. In experiments on the ICDAR 2003 database, our method demonstrates effective performance and usability on a portable device.
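
For illustration only, a minimal version of the touch-guided idea might estimate the text color from pixels under the user's stroke and keep pixels within a color distance threshold; the Lab color space and the threshold value are assumptions:

```python
# Estimate the dominant text color along a user-drawn stroke and build a
# candidate text mask by color distance. Illustrative sketch only.
import cv2
import numpy as np

def extract_by_touch(bgr: np.ndarray, stroke_pts, dist_thresh: float = 25.0):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    samples = np.array([lab[y, x] for x, y in stroke_pts])
    text_color = samples.mean(axis=0)  # dominant color under the stroke
    dist = np.linalg.norm(lab - text_color, axis=2)
    return (dist < dist_thresh).astype(np.uint8) * 255  # candidate text mask
```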


Ubiquitous Computing Systems | 2007

Place recognition using multiple wearable cameras

Kyungmin Min; Seonghun Lee; Kee-Eung Kim; Jin Hyung Kim

Recognizing a user's location is the most challenging problem in providing intelligent location-based services. In this paper, we present a real-time camera-based system for the place recognition problem. The system takes streams of scene images of a learned environment from user-worn cameras and outputs the class label of the current place. Multiple cameras are used to collect multi-directional scene images, because utilizing multiple images yields better and more robust recognition than a single image. For further robustness, we utilize the spatial relationships between places, and temporal reasoning is incorporated via a Markov model to reflect the typical staying time at each place. Recognition experiments conducted in a real environment on a university campus show that the proposed method yields very promising results.
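
The temporal reasoning can be illustrated with a single forward-filtering step of a Markov model over places; the transition matrix and likelihoods below are toy values, not the paper's learned model:

```python
# One prediction + correction step of Markov filtering over place labels.
import numpy as np

def filter_step(belief: np.ndarray, transition: np.ndarray, likelihood: np.ndarray):
    predicted = transition.T @ belief   # spatial relationships between places
    posterior = predicted * likelihood  # evidence from the wearable cameras
    return posterior / posterior.sum()

belief = np.array([0.5, 0.3, 0.2])      # prior over three places
T = np.array([[0.8, 0.1, 0.1],          # self-transitions dominate, mimicking
              [0.1, 0.8, 0.1],          # typical staying time at each place
              [0.1, 0.1, 0.8]])
print(filter_step(belief, T, np.array([0.2, 0.7, 0.1])))
```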


International Conference on Pattern Recognition | 2006

Stroke Verification with Gray-level Image for Hangul Video Text Recognition

Jinsik Kim; Seonghun Lee; Younghee Kwon; Jin Hyung Kim

Traditional OCR relies on binarization, which simplifies recognition but makes strokes ambiguous and thereby causes recognition errors. The main source of these errors is confusion between similar grapheme pairs, which can be reduced by verifying the ambiguous areas of the gray-level image. After checking whether a similar grapheme pair appears among the candidates returned by traditional OCR, the base stroke of the confused grapheme is located using a fitness function that reflects the base stroke's characteristics. The possibility that a confused stroke exists is then measured by analyzing the boundary area of the base stroke, and the result is merged with the traditional OCR output through score-to-probability conversion. The proposed method achieves a 68.1% error reduction on the target grapheme pair errors, corresponding to a 23.1% reduction in total errors.
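
As an illustration of the score-to-probability conversion and merging step, the sketch below uses a softmax mapping and an equal-weight average; both are assumptions, and the paper's actual conversion may differ:

```python
# Map raw recognizer and verifier scores onto a comparable probability
# scale, then merge the two distributions before picking a candidate.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

def merge(ocr_scores: np.ndarray, verifier_scores: np.ndarray) -> int:
    """Average the two converted distributions and return the best candidate."""
    p = 0.5 * softmax(ocr_scores) + 0.5 * softmax(verifier_scores)
    return int(np.argmax(p))
```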


International Conference on Document Analysis and Recognition | 2009

Scene Text Extraction Using Focus of Mobile Camera

Egyul Kim; Seonghun Lee; Jin Hyung Kim


ETRI Journal | 2011

Touch TT: Scene Text Extractor Using Touchscreen Interface

Jehyun Jung; Seonghun Lee; Min Su Cho; Jin Hyung Kim

