Publication


Featured research published by Gyeongyong Heo.


International Geoscience and Remote Sensing Symposium | 2008

Hierarchical Methods for Landmine Detection with Wideband Electro-Magnetic Induction and Ground Penetrating Radar Multi-Sensor Systems

Seniha Esen Yuksel; Paul D. Gader; Joseph N. Wilson; Dominic K. C. Ho; Gyeongyong Heo

A variety of algorithms are presented and employed in a hierarchical fashion to discriminate both anti-tank (AT) and anti-personnel (AP) landmines using data collected from wideband electromagnetic induction (WEMI) and ground penetrating radar (GPR) sensors mounted on a robotic platform. The two new algorithms for WEMI are based on the in-phase vs. quadrature plot (the Argand diagram) of the complex measurement obtained at a single spatial location. The angle prototype match method uses the sequence of angles as a feature vector; prototypes are constructed from these feature vectors and used to assign a mine confidence to a test sample. The angle-model-based KNN method uses a two-parameter model whose parameters are fit to the in-phase and quadrature data. For the GPR data, linear prediction processing and spectral features are calculated. All four features from WEMI and GPR are used in a hierarchical mixture of experts (HME) model to increase the landmine detection rate, and the EM algorithm is used to estimate the parameters of the hierarchical mixture. Instead of a two-way mine/non-mine decision, the HME structure is trained to make a five-way decision, which aids in the detection of low-metal anti-personnel mines.
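To make the angle-feature idea concrete, here is a minimal Python sketch of angle-sequence extraction and nearest-prototype confidence assignment; the frequency count, the prototype bank, and the exponential distance-to-confidence mapping are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def angle_feature(wemi_response):
    """Angle sequence of a complex WEMI response across frequencies:
    the angle of each measurement in the in-phase vs. quadrature plane."""
    return np.angle(wemi_response)

def prototype_confidence(response, prototypes):
    """Confidence from the distance to the nearest (hypothetical) mine
    prototype; smaller distance gives higher confidence."""
    feats = angle_feature(response)
    dists = np.linalg.norm(prototypes - feats, axis=1)
    return float(np.exp(-dists.min()))

# Hypothetical example: complex responses at 21 frequencies.
rng = np.random.default_rng(0)
prototypes = np.angle(rng.normal(size=(5, 21)) + 1j * rng.normal(size=(5, 21)))
response = rng.normal(size=21) + 1j * rng.normal(size=21)
print(prototype_confidence(response, prototypes))
```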


Neural Networks | 2009

2009 Special Issue: RKF-PCA: Robust kernel fuzzy PCA

Gyeongyong Heo; Paul D. Gader; Hichem Frigui

Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in RKF-PCA satisfies Mercer's condition, which means that the derivation of kernel PCA is also valid for RKF-PCA. Formal analyses and experimental results suggest that RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.
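As an illustration of the linear half of the idea, the sketch below alternates membership-weighted PCA with memberships that shrink with reconstruction error; the exponential membership update and the fuzzifier m are illustrative assumptions, not necessarily the paper's exact update rules.

```python
import numpy as np

def rf_pca_sketch(X, n_components=2, n_iter=10, m=2.0, eta=1.0):
    """Fuzzy-membership-weighted PCA iteration: memberships shrink with
    reconstruction error, and the mean/covariance are re-estimated with
    those weights (a sketch of the idea, not the paper's exact updates)."""
    u = np.ones(len(X))                        # fuzzy memberships in [0, 1]
    for _ in range(n_iter):
        w = u ** m
        mean = (w[:, None] * X).sum(0) / w.sum()
        Xc = X - mean
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
        _, vecs = np.linalg.eigh(cov)
        V = vecs[:, -n_components:]            # top principal directions
        err = ((Xc - Xc @ V @ V.T) ** 2).sum(1)  # reconstruction error
        u = np.exp(-err / eta)                 # noisy points get low membership
    return mean, V, u
```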


Pattern Recognition | 2011

Robust kernel discriminant analysis using fuzzy memberships

Gyeongyong Heo; Paul D. Gader

Linear discriminant analysis (LDA) is a simple but widely used algorithm in the area of pattern recognition. However, it has some shortcomings: it is sensitive to outliers and limited to linearly separable cases. To solve these problems, in this paper, a non-linear robust variant of LDA, called robust kernel fuzzy discriminant analysis (RKFDA), is proposed. RKFDA uses fuzzy memberships to reduce the effect of outliers and adopts kernel methods to accommodate non-linearly separable cases. There have been other attempts to solve the problems of LDA, including attempts using kernels. However, RKFDA, which encompasses previous methods, is the most general one. Furthermore, theoretical analysis and experimental results show that RKFDA is superior to existing methods in solving these problems.
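A minimal sketch of the fuzzy-weighting idea for the linear, two-class case is shown below; the per-sample weights u are assumed given, and the kernelization that makes the method RKFDA is not reproduced.

```python
import numpy as np

def fuzzy_lda_direction(X, y, u):
    """Two-class Fisher discriminant direction with per-sample fuzzy
    weights u in [0, 1]: outliers with low weight contribute little to
    the class means and the within-class scatter. Linear two-class
    sketch only; RKFDA itself is kernelized and more general."""
    d = X.shape[1]
    Sw = np.zeros((d, d))
    means = {}
    for c in np.unique(y):
        Xc, uc = X[y == c], u[y == c]
        mc = (uc[:, None] * Xc).sum(0) / uc.sum()   # weighted class mean
        Dc = Xc - mc
        Sw += (uc[:, None] * Dc).T @ Dc             # weighted scatter
        means[c] = mc
    c0, c1 = np.unique(y)
    # Small ridge on Sw for numerical stability.
    return np.linalg.solve(Sw + 1e-6 * np.eye(d), means[c1] - means[c0])
```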


IEEE International Conference on Fuzzy Systems | 2009

Fuzzy SVM for noisy data: A robust membership calculation method

Gyeongyong Heo; Paul D. Gader

The support vector machine (SVM) is a theoretically well-motivated algorithm developed from statistical learning theory that has shown good performance in many fields. In spite of its success, it still suffers from a noise sensitivity problem. To alleviate this problem, the SVM was extended into the fuzzy SVM (FSVM) by introducing fuzzy memberships. The FSVM has been extended further in two ways: by adopting a different objective function with the help of domain-specific knowledge, and by employing a different membership calculation method. In this paper, we propose a new membership calculation method, which belongs to the second group. It is different from previous ones in that it does not assume any simple data distribution and does not need any prior knowledge. The proposed method is based on reconstruction error, which measures the agreement between the overall data structure and a data point. Thus the reconstruction error can represent the degree of outlier-ness and help in achieving noise robustness. Experimental results with synthetic and real data sets also support this.
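The reconstruction-error membership can be sketched with standard tools: PCA reconstruction error is mapped to a membership in [0, 1] and passed to scikit-learn's SVC via sample_weight, which is used here only as a stand-in for the FSVM membership term; the exponential membership mapping and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def reconstruction_memberships(X, n_components=2, eta=1.0):
    """Membership from PCA reconstruction error: points that disagree
    with the overall data structure (likely outliers) get low membership."""
    pca = PCA(n_components=n_components).fit(X)
    err = ((X - pca.inverse_transform(pca.transform(X))) ** 2).sum(axis=1)
    return np.exp(-err / eta)

# Synthetic example; sample_weight stands in for the FSVM membership term.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)
u = reconstruction_memberships(X)
clf = SVC(kernel="rbf").fit(X, y, sample_weight=u)
```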


International Symposium on Neural Networks | 2009

Robust kernel PCA using fuzzy membership

Gyeongyong Heo; Paul D. Gader; Hichem Frigui

Principal component analysis (PCA) is widely used for dimensionality reduction in pattern recognition. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA, and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends Schölkopf et al.'s kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first propose an iterative method, called Robust Fuzzy PCA (RF-PCA), to find a robust covariance matrix. RF-PCA is introduced to reduce the sensitivity to noise with the help of a robust estimation technique. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. Experimental results suggest that the proposed algorithm works well on artificial and real-world data sets.
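For reference, the non-robust kernel PCA that this work builds on can be run with scikit-learn; the RBF kernel, its gamma, and the two-moons data below are illustrative choices only.

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

# Standard (non-robust) kernel PCA, the baseline that RKF-PCA extends.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=15)
Z = kpca.fit_transform(X)      # non-linear principal components
print(Z.shape)                 # (200, 2)
```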


IEEE International Conference on Fuzzy Systems | 2010

An extension of global fuzzy c-means using kernel methods

Gyeongyong Heo; Paul D. Gader

Fuzzy c-means (FCM) is a simple but powerful clustering method using the concept of fuzzy sets that has proved useful in many areas. There are, however, several well-known problems with FCM, such as sensitivity to initialization, sensitivity to outliers, and limitation to convex clusters. In this paper, global fuzzy c-means (G-FCM) and kernel fuzzy c-means (K-FCM) are combined and extended to form a non-linear variant of G-FCM, called kernelized global fuzzy c-means (KG-FCM). G-FCM is a variant of FCM that uses an incremental seed selection method and is effective in alleviating the sensitivity to initialization. There are several approaches to reduce the influence of noise and properly partition non-convex clusters, and K-FCM is one of them. K-FCM is used in this paper because it can easily be extended with different kernels, which provide sufficient flexibility to resolve the shortcomings of FCM. By combining G-FCM and K-FCM, KG-FCM can resolve the shortcomings mentioned above. The usefulness of the proposed method is demonstrated by experiments using artificial and real-world data sets.
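A sketch of the kernel side of the combination is given below: kernel fuzzy c-means with a Gaussian kernel and centers kept in input space, following a common K-FCM formulation; random initialization is used here instead of G-FCM's incremental seed selection.

```python
import numpy as np

def kfcm_sketch(X, n_clusters=3, m=2.0, sigma=1.0, n_iter=50, seed=0):
    """Kernel fuzzy c-means with a Gaussian kernel (centers in input
    space); a sketch of the kernel half of KG-FCM with random rather
    than global/incremental seeding."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
        d2 = 2.0 * (1.0 - K) + 1e-12                # kernel-induced distance
        U = 1.0 / (d2 ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)           # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted center update
    return U, V
```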


IEEE International Conference on Fuzzy Systems | 2010

An extension of possibilistic fuzzy c-means with regularization

Younghwan Namkoong; Gyeongyong Heo; Young Woon Woo

Fuzzy c-means (FCM) and possibilistic c-means (PCM) are the two most well-known clustering algorithms in the fuzzy clustering area and have been applied in many areas in their original or modified forms. However, FCM's noise sensitivity problem and PCM's overlapping cluster problem are also well known. Recently there have been several attempts to combine the two to mitigate these problems, and possibilistic fuzzy c-means (PFCM) showed promising results. In this paper, we propose a modified PFCM that uses regularization to further reduce the noise sensitivity of PFCM. Regularization is a well-known technique to make a solution space smooth and an algorithm noise-insensitive. The proposed algorithm, PFCM with regularization (PFCM-R), takes advantage of regularization and further reduces the effect of noise. Experimental results show that PFCM-R outperforms existing methods under noisy conditions.
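For orientation, one update step of plain (unregularized) PFCM can be sketched as follows, following the standard PFCM update equations; the regularization term that defines PFCM-R is not reproduced.

```python
import numpy as np

def pfcm_step(X, V, gamma, a=1.0, b=1.0, m=2.0, eta=2.0):
    """One update step of (unregularized) PFCM: fuzzy memberships U,
    possibilistic typicalities T, and centers V. gamma holds the
    per-cluster possibilistic scale parameters."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
    U = 1.0 / (d2 ** (1.0 / (m - 1)))
    U /= U.sum(axis=1, keepdims=True)                       # relative (fuzzy)
    T = 1.0 / (1.0 + (b * d2 / gamma) ** (1.0 / (eta - 1)))  # absolute (possibilistic)
    W = a * U ** m + b * T ** eta
    V = (W.T @ X) / W.sum(axis=0)[:, None]                  # combined center update
    return U, T, V
```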


IEEE International Conference on Fuzzy Systems | 2010

A noise robust variant of context extraction for local fusion

Gyeongyong Heo; Paul D. Gader; Hichem Frigui

Context extraction for local fusion (CELF) is a local approach that combines multiple classifier outputs with the help of feature space information. CELF is based on an objective function that integrates context extraction and decision fusion. Context extraction divides the feature space into homogeneous regions; decision fusion combines multiple classifier outputs in each region or context. Although CELF is a generalization of previous fusion methods, it has some problems, and noise sensitivity is one of them. The noise sensitivity problem comes from the fact that there is a large number of parameters to optimize. Some variants of CELF were proposed to mitigate the noise sensitivity problem, and they showed better results than CELF. However, they also have their own shortcomings. In this paper, we propose another variant of CELF to reduce the noise sensitivity and to overcome the shortcomings of the existing variants. The proposed method, CELF with regularization (CELF-R), uses regularization, a well-known method to reduce noise sensitivity by smoothing the solution space. The noise robustness of CELF-R helps it achieve better results than CELF and its variants. Experiments using landmine data sets also support this.
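The fusion step can be illustrated by a small sketch in which soft context memberships blend per-context linear combinations of classifier outputs; the joint objective, its training, and the regularization of CELF-R are not shown, and all shapes are illustrative.

```python
import numpy as np

def celf_fuse(memberships, classifier_outputs, context_weights):
    """Local fusion sketch: each sample's classifier outputs are combined
    with every context's weights, and the results are blended by the
    sample's soft context memberships (the essence of CELF's fusion step).
    memberships: (n, n_contexts), classifier_outputs: (n, n_classifiers),
    context_weights: (n_contexts, n_classifiers)."""
    per_context = classifier_outputs @ context_weights.T   # (n, n_contexts)
    return (memberships * per_context).sum(axis=1)          # fused confidence
```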


International Symposium on Neural Networks | 2009

Learning the number of gaussian components using hypothesis test

Gyeongyong Heo; Paul D. Gader

This paper addresses the problem of estimating the correct number of components in a Gaussian mixture given a sample data set. In particular, an extension of the Gaussian-means (G-means) and Projected Gaussian-means (PG-means) algorithms is proposed. All of these methods are based on one-dimensional statistical hypothesis tests. G-means and PG-means are wrapper algorithms around the k-means and Expectation-Maximization (EM) algorithms, respectively. Although G-means is a simple and fast algorithm, it does not perform well when clusters overlap, since it is based on k-means. PG-means can handle overlapping clusters but requires more computation and sometimes fails to find the right number of clusters. In this paper, we propose an extension called Extended Projected Gaussian-means (XPG-means). XPG-means is a wrapper algorithm around the possibilistic fuzzy c-means (PFCM) algorithm. XPG-means integrates the advantages of both algorithms while resolving some of the disadvantages involving overlapping clusters, noise, and computational complexity. More specifically, XPG-means handles overlapping clusters better than G-means because it uses fuzzy clustering, and handles noise better than both algorithms because it uses possibilistic clustering. XPG-means is less computationally expensive than PG-means because it uses the local, Gaussian-specific hypothesis testing scheme of G-means, whereas PG-means uses a more general Kolmogorov-Smirnov test on Gaussian mixtures. In addition, XPG-means shows less variance in estimating the number of components than either of the other algorithms.
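The local one-dimensional test that G-means-style methods rely on can be sketched as below, using a principal-axis projection and an Anderson-Darling normality test; the PFCM wrapper and the cluster-splitting logic of XPG-means are not reproduced, and the 5% critical value is an illustrative choice.

```python
import numpy as np
from scipy.stats import anderson
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def looks_gaussian(cluster_points, level_idx=2):
    """G-means-style local test: project a cluster onto its principal
    axis and run a 1-D Anderson-Darling normality test. Returns True if
    the Gaussian hypothesis is not rejected (level_idx=2 ~ 5% level)."""
    if len(cluster_points) < 8:
        return True
    proj = PCA(n_components=1).fit_transform(cluster_points).ravel()
    result = anderson(proj, dist="norm")
    return result.statistic < result.critical_values[level_idx]

# Two well-separated blobs forced into one cluster: the test should reject.
X = np.vstack([np.random.default_rng(0).normal(0, 1, (100, 2)),
               np.random.default_rng(1).normal(5, 1, (100, 2))])
labels = KMeans(n_clusters=1, n_init=10).fit_predict(X)
print(looks_gaussian(X[labels == 0]))   # likely False: one center is too few
```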


Journal of Korean Institute of Intelligent Systems | 2007

Optimal Parameter Selection in Edge Strength Hough Transform

Gyeongyong Heo; Young-Woon Woo; Kwang-Baek Kim

Though the Hough transform is a well-known method for detecting analytical shapes represented by a number of free parameters, a basic property of the Hough transform, the one-to-many mapping from an image space to a Hough space, causes an innate problem: sensitivity to noise. To remedy this problem, the Edge Strength Hough Transform (ESHT) was proposed and shown to reduce the noise sensitivity. However, the performance of ESHT depends on the sizes of the Hough space and the image, as well as on some other parameters that must be decided experimentally. In this paper, we derive formulae to decide two parameter values, the decreasing parameter and the broadening parameter, which play an important role in ESHT. Using the derived formulae, the two parameter values can be decided using only the pre-determined sizes of the Hough space and the image, which makes it possible to set them automatically. Experiments with different parameter values also support this result.
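For context, a minimal standard line Hough transform is sketched below to show the one-to-many voting that causes the noise sensitivity; ESHT's edge-strength weighting and the two derived parameters are not reproduced.

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=200):
    """Minimal standard line Hough transform: every edge point votes for
    all (rho, theta) bins it could lie on -- the one-to-many mapping that
    makes the basic transform noise sensitive."""
    h, w = img_shape
    rho_max = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=np.int64)
    for y, x in edge_points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)      # all lines through (x, y)
        rho_idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[rho_idx, np.arange(n_theta)] += 1
    return acc, thetas
```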

Collaboration


Top co-authors include Hichem Frigui (University of Louisville).