
Publication


Featured research published by Changha Hwang.


Fuzzy Sets and Systems | 2006

Support vector interval regression machine for crisp input and output data

Changha Hwang; Dug Hun Hong; Kyung Ha Seok

Support vector regression (SVR) has been very successful in function estimation problems for crisp data. In this paper, we propose a robust method for evaluating interval regression models for crisp input and output data, combining the possibility estimation formulation, which integrates the property of central tendency, with the principle of standard SVR. The proposed method is robust in the sense that outliers do not affect the resulting interval regression. Furthermore, it is a model-free method, since we do not have to assume an underlying model function for the interval nonlinear regression model with crisp input and output. In particular, the method performs better and is conceptually simpler than support vector interval regression networks (SVIRNs), which utilize two radial basis function networks to identify the upper and lower bounds of the data interval. Five examples are provided to show the validity and applicability of the proposed method.
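
As a rough illustration of wrapping an interval around a crisp fit (not the paper's possibility-based formulation, which couples the interval estimation with the SVR optimization itself), the sketch below fits a central trend by kernel ridge regression, a squared-loss relative of SVR, and widens it symmetrically until a chosen fraction of the training points is covered. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def interval_fit(X, y, lam=1e-2, gamma=1.0, coverage=0.9):
    # Central trend by kernel ridge regression, then a symmetric band
    # wide enough to cover the chosen fraction of training points.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    radius = float(np.quantile(np.abs(y - K @ alpha), coverage))
    return alpha, radius

# toy data: noisy sine curve
rng = np.random.default_rng(0)
X = np.linspace(0.0, 3.0, 60)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

alpha, radius = interval_fit(X, y)
center = rbf_kernel(X, X) @ alpha
inside = float(np.mean(np.abs(y - center) <= radius))  # fraction covered
```

Unlike the paper's method, this two-step band is not outlier-robust; it only conveys the interval-around-a-trend picture.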


IEEE Transactions on Fuzzy Systems | 2005

Interval regression analysis using quadratic loss support vector machine

Dug Hun Hong; Changha Hwang

Support vector machines (SVMs) have been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method for evaluating interval linear and nonlinear regression models that combines the possibility and necessity estimation formulation with the principle of quadratic loss SVM, a variant that uses a quadratic loss function instead of the traditional SVM loss. For data sets with crisp inputs and interval outputs, possibility and necessity models based on a quadratic programming approach have recently been utilized, since they give more diverse spread coefficients than a linear programming approach. The quadratic loss SVM also relies on quadratic programming; a further advantage in interval regression analysis is that it can integrate both the central-tendency property of least squares and the possibilistic property of fuzzy regression, without being computationally expensive. The quadratic loss SVM allows us to perform interval nonlinear regression analysis by constructing an interval linear regression function in a high-dimensional feature space. The proposed algorithm is an attractive approach to modeling nonlinear interval data, and is model-free in the sense that we do not have to assume an underlying model function for the interval nonlinear regression model with crisp inputs and interval output. Experimental results are presented which indicate the performance of this algorithm.


Computational Statistics & Data Analysis | 2009

Selecting marker genes for cancer classification using supervised weighted kernel clustering and the support vector machine

Jooyong Shim; Insuk Sohn; Sujong Kim; Jae Won Lee; Paul Green; Changha Hwang

Due to recent interest in the analysis of DNA microarray data, new methods have been considered and developed in the area of statistical classification. In particular, given the gene expression profiles of existing data, the goal is to classify a sample into the relevant diagnostic category. When classifying outcomes into certain cancer types, however, some genes are far more informative than others. A novel algorithm is presented for selecting such relevant genes, referred to as marker genes, for cancer classification. This algorithm is based on the Support Vector Machine (SVM) and Supervised Weighted Kernel Clustering (SWKC). To investigate its performance, the methods were applied to a simulated data set and several real data sets. For comparison, other well-known methods such as Prediction Analysis of Microarrays (PAM), Support Vector Machine-Recursive Feature Elimination (SVM-RFE), and the Structured Polychotomous Machine (SPM) were considered. The experimental results indicate that the proposed SWKC/SVM algorithm is conceptually much simpler and performs more efficiently than existing methods used in identifying marker genes for cancer classification. Furthermore, the SWKC/SVM algorithm requires much less computing time than the other existing methods.
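
The selection idea can be caricatured in a few lines: repeatedly fit a regularized linear model and discard the least useful feature. The sketch below uses ridge regression on +/-1 labels as a stand-in for the SVM and plain recursive elimination as a stand-in for SWKC; it is an illustration of the recursive-elimination principle, not the SWKC/SVM algorithm itself, and every name in it is assumed.

```python
import numpy as np

def rfe_ranking(X, y, lam=1e-2):
    # Recursive feature elimination: repeatedly fit a regularized linear
    # model on +/-1 labels (a stand-in for a linear SVM) and drop the
    # feature with the smallest absolute weight.
    remaining = list(range(X.shape[1]))
    dropped = []
    while len(remaining) > 1:
        Xs = X[:, remaining]
        w = np.linalg.solve(Xs.T @ Xs + lam * np.eye(len(remaining)),
                            Xs.T @ y)
        dropped.append(remaining.pop(int(np.argmin(np.abs(w)))))
    return remaining + dropped[::-1]  # features ranked best-first

# toy data: only feature 0 carries the class signal
rng = np.random.default_rng(1)
labels = rng.choice([-1.0, 1.0], size=200)
X = rng.standard_normal((200, 5))
X[:, 0] += 2.0 * labels

ranking = rfe_ranking(X, labels)
```

On this toy data the signal-carrying feature survives all elimination rounds and is ranked first.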


Computational Statistics & Data Analysis | 2009

Support vector censored quantile regression under random censoring

Joo Yong Shim; Changha Hwang

Censored quantile regression models have received a great deal of attention in both the theoretical and applied statistical literature. In this paper, we propose support vector censored quantile regression (SVCQR) under random censoring, using an iterative reweighted least squares (IRWLS) procedure based on the Newton method instead of the usual quadratic programming algorithms. This procedure makes it possible to derive the generalized approximate cross validation (GACV) method for choosing the hyperparameters that affect the performance of SVCQR. Numerical results are presented which illustrate the performance of SVCQR using the IRWLS procedure.
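
Stripped of censoring and kernels, the IRWLS idea can be sketched as follows: each pass solves a weighted least-squares problem whose weights approximate the pinball (check) loss at the current residuals. This is a generic linear quantile regression sketch under those simplifying assumptions, not the SVCQR estimator; the function name and tolerances are illustrative.

```python
import numpy as np

def quantile_irwls(X, y, tau=0.5, n_iter=50, eps=1e-6):
    # Linear quantile regression by iteratively reweighted least squares:
    # the weights tau/|r| (for r > 0) and (1-tau)/|r| (for r <= 0) make
    # the weighted squared loss mimic the pinball loss near the solution.
    Xd = np.column_stack([np.ones(len(y)), X])  # prepend an intercept
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - Xd @ beta
        w = np.where(r > 0, tau, 1.0 - tau) / np.maximum(np.abs(r), eps)
        WX = Xd * w[:, None]
        beta = np.linalg.solve(Xd.T @ WX, WX.T @ y)
    return beta  # [intercept, slope, ...]

# toy data: the true 0.9- and 0.1-quantile lines share slope 2
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 500)
y = 2.0 * x + rng.normal(0.0, 0.2, 500)

b_hi = quantile_irwls(x[:, None], y, tau=0.9)
b_lo = quantile_irwls(x[:, None], y, tau=0.1)
```

Because the noise is homoscedastic, the two estimated quantile lines differ mainly in intercept, as expected.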


international conference on natural computation | 2006

Robust LS-SVM regression using fuzzy c-means clustering

Joo Yong Shim; Changha Hwang; Sungkyun Nau

The least squares support vector machine (LS-SVM) is a widely applicable and useful machine learning technique for classification and regression. The LS-SVM solution is easily obtained from linear Karush-Kuhn-Tucker conditions, rather than from the quadratic programming problem of the standard SVM. However, LS-SVM is less robust because of its assumptions on the errors and its use of a squared loss function. In this paper we propose a robust LS-SVM regression method that imposes robustness on the LS-SVM estimate by assigning each data point a weight representing its membership degree in a cluster, obtained by fuzzy c-means clustering. In numerical studies, the robust LS-SVM regression is compared with ordinary LS-SVM regression.


International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2004

RIDGE REGRESSION PROCEDURES FOR FUZZY MODELS USING TRIANGULAR FUZZY NUMBERS

Dug Hun Hong; Changha Hwang

This paper presents a new method of estimating fuzzy multivariable linear and nonlinear regression models using triangular fuzzy numbers. The estimation method is obtained by implementing a dual version of the ridge regression procedure for linear models. It allows us to perform fuzzy nonlinear regression by constructing a fuzzy linear regression in a high-dimensional feature space for data sets with crisp inputs and fuzzy output. Experimental results are presented, which indicate the performance of this algorithm.
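
The dual ridge step for crisp data is compact: with kernel matrix K and regularization lam, the dual coefficients are alpha = (K + lam*I)^{-1} y, and prediction needs only kernel evaluations. The sketch below shows this crisp-output analogue; extending it to triangular fuzzy outputs (centers and spreads) is the paper's contribution and is not attempted here.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dual_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    # Dual ridge regression: alpha = (K + lam*I)^{-1} y, so a nonlinear
    # fit in input space is a linear fit in the kernel feature space.
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def dual_ridge_predict(Xnew, X, alpha, gamma=1.0):
    # Prediction needs only kernel evaluations against the training data.
    return rbf(Xnew, X, gamma) @ alpha

# toy data: quadratic trend with light noise
rng = np.random.default_rng(4)
X = np.linspace(-2.0, 2.0, 50)[:, None]
y = X[:, 0] ** 2 + 0.05 * rng.standard_normal(50)

alpha = dual_ridge_fit(X, y)
pred = dual_ridge_predict(X, X, alpha)
```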


Computational Statistics & Data Analysis | 2009

Informative transcription factor selection using support vector machine-based generalized approximate cross validation criteria

Insuk Sohn; Jooyong Shim; Changha Hwang; Sujong Kim; Jae Won Lee

The genetic regulatory mechanism plays a pivotal role in many biological processes, ranging from development to survival. Identifying the common transcription factor binding sites (TFBSs) in a set of known co-regulated gene promoters, and identifying the genes regulated by a transcription factor (TF) with an important role in a particular biological function, will advance our understanding of the interactions among co-regulated genes and of the intricate genetic regulatory mechanism underlying that function. To identify the common TFBSs from a set of known co-regulated gene promoters and to classify genes that are regulated by TFs, new approaches using Support Vector Machine (SVM)-based Generalized Approximate Cross Validation (GACV) criteria are proposed. Two variable selection methods are considered: Recursive Feature Elimination (RFE) and Recursive Feature Addition (RFA). The performance of the proposed methods is compared with existing SVM-based criteria and with Logistic Regression Analysis (LRA), Logic Regression (LR), and Decision Tree (DT) methods, using two real TF target gene data sets and simulated data. In terms of test error rates, the proposed methods perform better than the existing methods.


fuzzy systems and knowledge discovery | 2005

Interval regression analysis using support vector machine and quantile regression

Changha Hwang; Dug Hun Hong; Eunyoung Na; Hye-Jung Park; Joo Yong Shim

This paper deals with interval regression analysis using a support vector machine and the quantile regression method. The algorithm consists of two phases: identifying the main trend of the data, then performing interval regression based on the acquired main trend. Using the principle of the support vector machine, linear interval regression can be extended to nonlinear interval regression. Numerical studies are presented which indicate the performance of this algorithm.


Neurocomputing | 2015

Varying coefficient modeling via least squares support vector regression

Jooyong Shim; Changha Hwang

The varying coefficient regression model has received a great deal of attention as an important tool for modeling the dynamic changes of regression coefficients in the social and natural sciences. Much effort has been devoted to developing effective estimation methods for such models. In this paper we propose a method for fitting the varying coefficient regression model using the least squares support vector regression technique, which analyzes the dynamic relation between a response and a group of covariates. We also consider a generalized cross validation method for choosing the hyperparameters that affect the performance of the proposed method, and we provide a method for estimating confidence intervals for the coefficient functions. The proposed method is evaluated through simulation and real-data studies.
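
The varying coefficient model y = beta(t) * x + noise can be illustrated with a simple kernel-weighted least squares estimate of beta(t); the paper instead estimates the coefficient functions with LS-SVR and picks hyperparameters by generalized cross validation. Everything below, including the bandwidth h, is an illustrative assumption.

```python
import numpy as np

def varying_coef(t_grid, t, x, y, h=0.05):
    # Kernel-weighted least squares estimate of beta(.) in the model
    # y = beta(t) * x + noise: at each grid point, weight observations
    # by a Gaussian kernel in t and solve the 1-d least-squares problem.
    w = np.exp(-0.5 * ((t[None, :] - t_grid[:, None]) / h) ** 2)
    return (w * x * y).sum(axis=1) / (w * x * x).sum(axis=1)

# toy data: the coefficient of x varies smoothly with t
rng = np.random.default_rng(5)
t = rng.uniform(0.0, 1.0, 800)
x = rng.standard_normal(800)
y = np.sin(2.0 * np.pi * t) * x + 0.1 * rng.standard_normal(800)

grid = np.linspace(0.1, 0.9, 9)
est = varying_coef(grid, t, x, y)
```

With enough data the local estimate tracks the true coefficient curve closely across the grid.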


Neurocomputing | 2014

Semiparametric spatial effects kernel minimum squared error model for predicting housing sales prices

Jooyong Shim; Okmyung Bin; Changha Hwang

Semiparametric regression models have been extensively used to predict housing sales prices, but semiparametric kernel machines with spatial effects have not yet been studied. This paper proposes the semiparametric spatial effect kernel minimum squared error model (SSEKMSEM) and the semiparametric spatial effect least squares support vector machine (SSELS-SVM) for estimating a hedonic price function, and compares their price prediction performance with conventional parametric models and a semiparametric generalized additive model (GAM). The paper utilizes two data sets: one of 5966 single-family residential home sales between July 2000 and August 2008 from Pitt County, North Carolina, and one of residential property sales records from September 2000 to September 2004 in Carteret County, North Carolina. The results show that the SSEKMSEM and SSELS-SVM outperform the parametric counterparts and the semiparametric GAM in both in-sample and out-of-sample price predictions, indicating that these kernel machines can be useful for the measurement and prediction of housing sales prices.

Collaboration


Top Co-Authors

Jooyong Shim (Catholic University of Daegu)
Insuk Sohn (Samsung Medical Center)
Joo Yong Shim (Catholic University of Daegu)
Hye-Jung Park (Catholic University of Daegu)