Publications


Featured research published by Weon-Geun Oh.


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2011

An analysis of the effect of different image preprocessing techniques on the performance of SURF: Speeded Up Robust Features

Robin Kalia; Keun-Dong Lee; B.V.R. Samir; Sung-Kwan Je; Weon-Geun Oh

In this paper, we analyze the effect of different image preprocessing techniques on the performance of Speeded Up Robust Features (SURF). We investigate the effects of techniques such as Histogram Equalization, Multiscale Retinex, and our proposed Image Adaptive Contrast Enhancement (IACE) scheme on SURF, in terms of feature point detection and the computational time for extracting the descriptors. We then test the effect of these preprocessing techniques on the repeatability of state-of-the-art detectors such as Harris-Affine, Hessian-Affine, MSER, Edge Based Regions, Intensity Based Regions, and SURF. We carry out the repeatability test on the standard images that have been used as a benchmark for evaluating other feature point detection schemes. Finally, we propose a method for scaling large-resolution images that can be used in conjunction with the IACE method to increase the matching speed of SURF while maintaining its accuracy.
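
As a rough illustration of the kind of comparison described above, the sketch below measures how simple preprocessing changes the number of detected keypoints and the extraction time. It assumes OpenCV (opencv-python); SIFT stands in for SURF, which requires an opencv-contrib build, CLAHE stands in for an adaptive contrast enhancement step such as IACE, and the input filename is hypothetical.

```python
# Minimal sketch (not the paper's code): compare keypoint counts and
# extraction time for a few simple preprocessing variants.
import time
import cv2

def detect_stats(gray, detector):
    t0 = time.perf_counter()
    keypoints, _ = detector.detectAndCompute(gray, None)
    return len(keypoints), time.perf_counter() - t0

gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
sift = cv2.SIFT_create()  # stand-in for SURF (needs opencv-contrib)

variants = {
    "original": gray,
    "hist_eq": cv2.equalizeHist(gray),
    "clahe": cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray),
}
for name, img in variants.items():
    n, dt = detect_stats(img, sift)
    print(f"{name:10s}: {n:5d} keypoints in {dt * 1000:.1f} ms")
```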


Systems, Man and Cybernetics | 2000

An efficient extraction of character string positions using morphological operator

Chang-Joon Park; Kyung-Ae Moon; Weon-Geun Oh; Heung-Moon Choi

An efficient method for extracting character string positions in a document using a morphological operator is proposed. In regions containing character strings, axial edge pixels and diagonal edge pixels are mingled together, whereas in other regions they are distributed separately. Based on this difference in the directional edge pixel distribution between character and non-character regions, string positions are extracted directly from arbitrary blocks without any block analysis, in contrast to previous work that requires block analysis to extract string positions (F.M. Wahl et al., 1982; S. Imade et al., 1993). Experiments conducted on document images acquired through a scanner show that the proposed method can directly extract character string positions from plain text blocks, and even from documents containing tables and flow charts, without any block analysis.
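
The underlying idea can be sketched roughly as follows (this is my own simplification, not the authors' implementation): mark regions where axial and diagonal edge pixels co-occur and close them with a morphological operator to obtain candidate string boxes. It assumes OpenCV and NumPy; the threshold, kernel size, and input filename are arbitrary choices for illustration.

```python
# Rough sketch: candidate text-string boxes from co-occurring axial and
# diagonal edge pixels, cleaned up with morphological operations.
import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)  # hypothetical scan
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag = cv2.magnitude(gx, gy)
ang = np.abs(np.degrees(np.arctan2(gy, gx)))  # gradient direction, 0..180

edges = mag > 0.25 * mag.max()
axial = edges & ((ang < 20) | (ang > 160) | (np.abs(ang - 90) < 20))
diag = edges & ~axial

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
axial_d = cv2.dilate(axial.astype(np.uint8) * 255, kernel)
diag_d = cv2.dilate(diag.astype(np.uint8) * 255, kernel)
mingled = cv2.bitwise_and(axial_d, diag_d)          # both kinds of edges present
mingled = cv2.morphologyEx(mingled, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(mingled, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w > 20 and h > 8:  # crude size filter for string-like regions
        print("candidate string box:", x, y, w, h)
```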


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2011

Face recognition using LBP for personal image management system and its analysis

Keun-Dong Lee; Robin Kalia; Sung-Kwan Je; Weon-Geun Oh

In this paper, LBP (local binary patterns) and its variants were tested on the Family Face Database, which was constructed in the authors' earlier work [1] within the framework of a personal image management system. Combinations of LBP and other features were also tested for comparison. In addition, several illumination normalization methods, dissimilarity measures, and sub-block divisions were evaluated in our framework.
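
As a minimal sketch of block-wise LBP face description in the spirit of this setup (not the authors' code), one can concatenate uniform-LBP histograms over a grid of sub-blocks and compare them with a chi-square dissimilarity. It assumes scikit-image and NumPy; the grid size and LBP parameters are arbitrary.

```python
# Block-wise LBP descriptor plus a chi-square dissimilarity measure.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_descriptor(gray, grid=(7, 7), P=8, R=1):
    """Concatenate uniform-LBP histograms over a grid of sub-blocks."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns plus one "non-uniform" bin
    h_step = gray.shape[0] // grid[0]
    w_step = gray.shape[1] // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h_step:(i + 1) * h_step, j * w_step:(j + 1) * w_step]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def chi_square(a, b, eps=1e-10):
    """Smaller values mean more similar descriptors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))
```

Two face images would then be compared as chi_square(lbp_descriptor(a), lbp_descriptor(b)), with smaller values indicating a better match.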


Pacific-Rim Symposium on Image and Video Technology | 2007

Very fast concentric circle partition-based replica detection method

Ik-Hwan Cho; A-Young Cho; Jun-Woo Lee; Ju-Kyung Jin; Won-Keun Yang; Weon-Geun Oh; Dong-Seok Jeong

Image replica detection has recently become a very active research field as devices that generate digital images, such as digital cameras, have spread rapidly. Since the huge volume of digital images leads to severe problems such as copyright infringement, replica detection systems are receiving more and more attention. In this paper, we propose a new, fast image replica detector based on a concentric circle partition method. The proposed algorithm partitions an image into concentric circles at a fixed angle, working outwards from the image center. From these partitioned regions, a total of four features are extracted in bit-string form: the average intensity distribution and its difference, the symmetrical difference distribution, and the circular difference distribution. To evaluate the performance of the proposed method, a pairwise independence test and an accuracy test are applied, and the duplicate detection performance of the proposed algorithm is compared with that of the MPEG-7 visual descriptors. The experimental results show that the proposed method offers very high matching speed and high accuracy in detecting replicas that have undergone many modifications of the original. Because a hash code is used as the image signature, the matching process requires very little computation time, and the proposed method achieves 97.6% accuracy on average at a false positive rate of one part per million.
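
A drastically simplified sketch of the concentric-partition idea (my own illustration, not the proposed algorithm): average intensities over ring and sector cells, binarize neighbouring differences into a bit string, and match signatures by Hamming distance. It assumes NumPy; the numbers of rings and sectors are arbitrary.

```python
# Ring/sector intensity hash and Hamming-distance matching.
import numpy as np

def ring_sector_hash(gray, n_rings=8, n_sectors=16):
    h, w = gray.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - cy, xx - cx)
    theta = np.arctan2(yy - cy, xx - cx)  # -pi..pi
    r_idx = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    t_idx = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    means = np.zeros((n_rings, n_sectors))
    for i in range(n_rings):
        for j in range(n_sectors):
            cell = gray[(r_idx == i) & (t_idx == j)]
            means[i, j] = cell.mean() if cell.size else 0.0
    # bit = 1 where a cell is brighter than the next sector in the same ring
    bits = (means > np.roll(means, -1, axis=1)).astype(np.uint8)
    return bits.ravel()

def hamming(a, b):
    return int(np.count_nonzero(a != b))
```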


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2013

Extensive analysis of feature selection for compact descriptor

Keun-Dong Lee; Seungjae Lee; Sang-Il Na; Sung-Kwan Je; Weon-Geun Oh

In this paper, the feature selection criteria for local descriptors are examined within the well-defined evaluation framework of the MPEG-7 compact descriptor for visual search (CDVS) [6]. The effect of feature selection on the descriptor was analyzed in both the compressed and uncompressed domains. Various combinations of feature characteristics, such as the scale and orientation of features, the distance from the image center, and the difference-of-Gaussian (DoG) response [5], were examined as feature selection criteria via pairwise matching experiments on the MPEG-7 CDVS datasets.
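
The sketch below shows one way to rank and keep a compact subset of local features using simple criteria of the kind listed above (scale, response, distance from the image centre). The weights and the use of OpenCV's SIFT are my own placeholder choices, not the values or pipeline studied in the paper.

```python
# Keep the highest-scoring keypoints according to a simple weighted criterion.
import cv2
import numpy as np

def select_features(gray, max_features=300, w_scale=0.4, w_resp=0.4, w_center=0.2):
    sift = cv2.SIFT_create()
    kps, descs = sift.detectAndCompute(gray, None)
    if not kps:
        return [], None
    h, w = gray.shape
    cx, cy = w / 2.0, h / 2.0
    max_dist = np.hypot(cx, cy)

    def score(kp):
        # note: in practice each term would be normalised to a comparable range
        center_term = 1.0 - np.hypot(kp.pt[0] - cx, kp.pt[1] - cy) / max_dist
        return w_scale * kp.size + w_resp * kp.response + w_center * center_term

    order = np.argsort([-score(kp) for kp in kps])[:max_features]
    return [kps[i] for i in order], descs[order]
```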


Systems, Man and Cybernetics | 2000

Segmentation of a text printed in Korean and English using structure information and character recognizers

Young-Sup Hwang; Kyung-Ae Moon; Suyoung Chi; Dae-Geun Jang; Weon-Geun Oh

The purpose of the research presented here is to segment a text image printed in both Korean and English into character images, utilizing the structural information of Korean and English characters and using Korean, English, and mixed-language character recognizers. The image cannot be segmented using only the width and height of a character, because those of an English character are not constant, unlike those of a Korean character. Therefore, we first classify the image into Korean or English using the structural information of Korean and English characters. If it is determined to be a Korean character, we segment it with the average width of the Korean characters in the text lines. If it is determined to be an English character, we segment it using a classical method for segmenting touching alphanumeric characters. If it cannot be determined, we find possible cut points using a vertical histogram and use the mixed-language recognizer to determine the correct cut point. Since our method first classifies a block as Korean or English, it runs faster than traditional methods that cannot identify the language. Each classified block can also be segmented more accurately because more specific knowledge about Korean and English characters can be applied.
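
As a toy illustration of one step described above, segmenting by average character width with a vertical projection histogram, consider the sketch below (heavily simplified, not the paper's method). The input is assumed to be a binarized text-line image with ink pixels equal to 1.

```python
# Candidate cut points from a vertical projection profile, snapped to an
# expected average character width.
import numpy as np

def cut_points(binary_line, avg_char_width):
    """binary_line: 2-D array, 1 = ink. Returns column indices of candidate cuts."""
    profile = binary_line.sum(axis=0)
    gaps = np.where(profile == 0)[0]          # columns with no ink
    cuts, expected = [], avg_char_width
    for col in gaps:
        if col >= expected - avg_char_width * 0.25:
            cuts.append(int(col))
            expected = col + avg_char_width   # move expectation to the next character
    return cuts
```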


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2011

Bag-of-features signature using invariant region descriptor for object retrieval

A-Young Cho; Won-Keun Yang; Dong-Seok Jeong; Weon-Geun Oh

In recent years, content-based methods such as ‘bag-of-features’ (BoF) have come to the fore as object recognition and classification techniques. This paper proposes a BoF signature using an invariant region descriptor for object retrieval. The region descriptors are extracted from densely sampled regions in the training images and quantized by hierarchical k-means clustering into a vocabulary tree of visual words. Each image is represented by the occurrences of visual words, and a linear combination distance measure is then used for matching. In the experiments, we use object images taken under different conditions to evaluate the retrieval performance. The test results show that the proposed method outperforms the BoF method using the SURF descriptor: on average, it retrieves 2.9 correct images out of 3 within the top 3% of the database ranking. The proposed method is therefore considered an effective technique in terms of retrieval accuracy.
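
A compact sketch of a bag-of-features pipeline is given below; for brevity a flat k-means vocabulary stands in for the hierarchical vocabulary tree described above, and an L1 distance stands in for the linear combination distance measure. It assumes scikit-learn and NumPy and is not the authors' code.

```python
# Minimal bag-of-features: learn a visual vocabulary, build per-image
# word histograms, and compare them.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def train_vocabulary(all_descriptors, n_words=1000, seed=0):
    """all_descriptors: (N, D) array of local descriptors from training images."""
    return MiniBatchKMeans(n_clusters=n_words, random_state=seed).fit(all_descriptors)

def bof_signature(vocabulary, descriptors):
    """Histogram of visual-word occurrences, L1-normalised."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def bof_distance(h1, h2):
    """Simple L1 distance between two BoF signatures."""
    return float(np.abs(h1 - h2).sum())
```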


Asian Conference on Computer Vision | 2014

Accelerating Local Feature Extraction Using Two Stage Feature Selection and Partial Gradient Computation

Keun-Dong Lee; Seungjae Lee; Weon-Geun Oh

In this paper, we present a fast local feature extraction method, which is our contribution to the ongoing MPEG standardization of the compact descriptor for visual search (CDVS). To reduce the time complexity of feature extraction, two-stage feature selection, based on the feature selection method of the CDVS Test Model (TM), and partial gradient computation are introduced. The proposed method is applied to SIFT and compared with SIFT and SURF extractors using the previous feature selection method. In addition, the proposed method is compared with various feature extraction methods of the current CDVS TM 11 in the CDVS evaluation framework. Experimental results show that the proposed method significantly reduces the time complexity while maintaining the matching and retrieval performance of previous work. For its efficiency, the proposed method has been integrated into the CDVS TM since the 107th MPEG meeting. The method will also be useful for feature extraction on mobile devices, where computational resources are limited.
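
The general idea of selecting keypoints before descriptors are computed, so that the expensive gradient and descriptor work is spent only on features likely to be kept, can be sketched as follows. OpenCV's SIFT is used as a stand-in, and ranking by detector response is my own placeholder for the CDVS Test Model selection criterion.

```python
# Two-stage extraction sketch: detect first, then compute descriptors
# only for the keypoints that survive selection.
import cv2

def fast_extract(gray, keep=300):
    sift = cv2.SIFT_create()
    kps = sift.detect(gray, None)                 # stage 1: detection only
    kps = sorted(kps, key=lambda kp: kp.response, reverse=True)[:keep]
    kps, descs = sift.compute(gray, kps)          # stage 2: descriptors for survivors
    return kps, descs
```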


Workshop on Information Security Applications | 2003

Visualization of Dynamic Characteristics in Two-Dimensional Time Series Patterns: An Application to Online Signature Verification

Suyoung Chi; Jaeyeon Lee; Jung Soh; Do Hyung Kim; Weon-Geun Oh; Chang Hun Kim

An analysis model for the dynamics information of two-dimensional time-series patterns is described. In the proposed model, two novel transforms that visualize the dynamic characteristics are proposed. The first, referred to as speed equalization, reproduces a time-series pattern assuming a constant linear velocity to effectively model the temporal characteristics of the signing process. The second, referred to as velocity transform, maps the signal onto a horizontal-versus-vertical velocity plane, where the variation of the velocities over time is represented as a visible shape. With these transforms, the dynamic characteristics of the original signing process are reflected in the shape of the transformed patterns, and an analysis of these shapes naturally results in an effective analysis of the dynamic characteristics. The proposed transform technique is applied to an online signature verification problem for evaluation. In experiments on a large signature database, the performance evaluated as EER (equal error rate) improved to 1.17%, compared with 1.93% for the traditional signature verification algorithm in which no transformed patterns are utilized. In the skilled forgery experiments, the improvement was even more pronounced; the parameter set extracted from the transformed patterns was shown to be more discriminative in rejecting forgeries.
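
The two transforms can be illustrated with a short sketch under my own simplifying assumptions about the input format (NumPy arrays of x, y, and t samples); this is not the authors' implementation.

```python
# Speed equalization and velocity transform for a 2-D time-series trajectory.
import numpy as np

def speed_equalize(x, y, n_samples=256):
    """Resample the trajectory at uniform arc length (constant linear velocity)."""
    d = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate([[0.0], np.cumsum(d)])     # cumulative arc length
    s_uniform = np.linspace(0, s[-1], n_samples)
    return np.interp(s_uniform, s, x), np.interp(s_uniform, s, y)

def velocity_transform(x, y, t):
    """Map the signal onto the (vx, vy) plane."""
    dt = np.diff(t)
    dt[dt == 0] = 1e-9                            # guard against repeated timestamps
    return np.diff(x) / dt, np.diff(y) / dt
```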


Korea-Japan Joint Workshop on Frontiers of Computer Vision | 2013

Intensity comparison based compact descriptor for mobile visual search

Sang-Il Na; Keun-Dong Lee; Seungjae Lee; Sung-Kwan Je; Weon-Geun Oh

In this paper, we propose an intensity-comparison-based compact descriptor for mobile visual search. For practical mobile applications, low complexity and a small descriptor size are preferable, and many algorithms such as SURF, CHoG, and PCA-SIFT have been proposed. However, these approaches focus not on the feature description itself but on the extraction time and the size of the feature. This paper presents a feature description method based on simple intensity comparisons that takes both descriptor size and extraction speed into account. Experimental results show that the proposed method achieves performance comparable to SURF with similar complexity and a descriptor about 20 times smaller.
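
As a loose illustration of a descriptor built from simple intensity comparisons (in the spirit of BRIEF-style binary tests, not the proposed descriptor), the sketch below packs 256 pairwise comparisons over a patch into a bit string and matches by Hamming distance. The patch size and random sampling pattern are arbitrary, and the keypoint is assumed to lie far enough from the image border.

```python
# Binary descriptor from random pairwise intensity comparisons.
import numpy as np

rng = np.random.default_rng(0)
PATCH = 32                                      # patch size around a keypoint
PAIRS = rng.integers(0, PATCH, size=(256, 4))   # 256 (x1, y1, x2, y2) tests

def binary_descriptor(gray, x, y):
    """256-bit descriptor for the patch centred at (x, y)."""
    half = PATCH // 2
    patch = gray[y - half:y + half, x - half:x + half]
    bits = patch[PAIRS[:, 1], PAIRS[:, 0]] > patch[PAIRS[:, 3], PAIRS[:, 2]]
    return np.packbits(bits)                     # 32 bytes per descriptor

def hamming(d1, d2):
    return int(np.unpackbits(d1 ^ d2).sum())
```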

Collaboration


Dive into Weon-Geun Oh's collaborations.

Top Co-Authors

Keun-Dong Lee (Electronics and Telecommunications Research Institute)

Sang-Il Na (Electronics and Telecommunications Research Institute)

Sung-Kwan Je (Electronics and Telecommunications Research Institute)

Seungjae Lee (Electronics and Telecommunications Research Institute)

Suyoung Chi (Electronics and Telecommunications Research Institute)