
Publication


Featured research published by Daehyeon Cho.


Neurocomputing | 2011

Semiparametric mixed-effect least squares support vector machine for analyzing pharmacokinetic and pharmacodynamic data

Kyung Ha Seok; Jooyong Shim; Daehyeon Cho; Gyu-Jeong Noh; Changha Hwang

In this paper we propose a semiparametric mixed-effect least squares support vector machine (LS-SVM) regression model for the analysis of pharmacokinetic (PK) and pharmacodynamic (PD) data. We also develop a generalized cross-validation (GCV) method for choosing the hyperparameters that affect the performance of the proposed LS-SVM. The performance of the proposed LS-SVM is compared with those of NONMEM and the regular semiparametric LS-SVM via four measures: mean squared error (MSE), mean absolute error (MAE), mean relative absolute error (MRAE) and mean relative prediction error (MRPE). Using the paired t-test we find that the absolute values of the four measures for the proposed LS-SVM are significantly smaller than those of NONMEM for both PK and PD data. We also investigate the coefficients of determination (R²) between predicted and observed values: the R² values of NONMEM are 0.66 and 0.59 for PK and PD data, respectively, while those of the proposed LS-SVM are 0.94 and 0.96. Through cross-validation we also find that the proposed LS-SVM shows better generalization performance than the regular semiparametric LS-SVM for PK and PD data. These facts indicate that the proposed LS-SVM is an appealing tool for analyzing PK and PD data.
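The LS-SVM regressor that the paper builds on reduces to solving one linear system in the dual variables and bias. The sketch below is a minimal plain LS-SVM with an RBF kernel, not the paper's semiparametric mixed-effect variant; the kernel width `gamma`, regularization `C`, and the sine-curve data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    # LS-SVM regression training is one linear system:
    # [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

# Fit a noisy sine curve as a toy example
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, alpha, b, X)
```

Note that, unlike a standard SVM, every training point gets a nonzero `alpha`, since the equality constraints replace the inequality ones; GCV would be used to pick `C` and `gamma`.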


Communications for Statistical Applications and Methods | 2011

Support Vector Quantile Regression Using Asymmetric ε-Insensitive Loss Function

Joo-Yong Shim; Kyungha Seok; Changha Hwang; Daehyeon Cho

Support vector quantile regression (SVQR) is capable of providing a good description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse SVQR to overcome a limitation of SVQR: nonsparsity. The asymmetric ε-insensitive loss function is used to efficiently provide sparsity. Experimental results are presented to illustrate the performance of the proposed method by comparing it with nonsparse SVQR.
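The sparsity mechanism can be seen at the loss-function level: standard SVQR uses the pinball loss, which is nonzero for almost every residual, while adding an insensitive zone lets points near the fitted quantile drop out of the support set. The exact tube parameterization below (a dead zone split asymmetrically by the quantile level `tau`) is an illustrative assumption, not necessarily the paper's formulation.

```python
import numpy as np

def pinball(r, tau):
    # Standard quantile (pinball) loss: tau*r for r >= 0, (tau-1)*r for r < 0
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def asym_eps_insensitive(r, tau, eps):
    # Pinball loss with an asymmetric dead zone: residuals inside
    # (-(1-tau)*eps, tau*eps) cost nothing, so those points contribute
    # no support vectors. With eps = 0 this reduces to the pinball loss.
    upper = tau * np.maximum(r - tau * eps, 0.0)
    lower = (1 - tau) * np.maximum(-r - (1 - tau) * eps, 0.0)
    return upper + lower

r = np.array([-0.5, -0.05, 0.0, 0.05, 0.5])
print(asym_eps_insensitive(r, tau=0.5, eps=0.2))  # middle entries are zero
```

Larger `eps` trades a sparser solution for a coarser fit around the target quantile.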


international conference on education technology and computer | 2010

Support vector quantile regression using asymmetric ε-insensitive loss function

Kyung Ha Seok; Daehyeon Cho; Changha Hwang; Jooyong Shim

Support vector quantile regression (SVQR) is capable of providing a good description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse SVQR to overcome a weak point of SVQR: nonsparsity. The asymmetric ε-insensitive loss function is used to efficiently provide the sparsity. Experimental results are then presented; these results illustrate the performance of the proposed method by comparing it with nonsparse SVQR.


Korean Journal of Applied Statistics | 2015

Course Probability of Yut according to Starting Order

Daehyeon Cho

The Korean game of yut is a traditional game that everyone can enjoy regardless of gender or age. Yut consists of four sticks, each with a head side and a tail side. We are interested in the course probabilities in the game of yut, which differ according to the starting order of the four pieces. We therefore consider the probabilities of the five possible toss results as a function of the probability of a head. We calculate the probabilities of the four courses a piece can take through the yutpan (board) according to the starting order of each piece.
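Since the four sticks fall independently, the five toss results follow a Binomial(4, p) count of head sides. The sketch below assumes the conventional naming (1 head → do, 2 → gae, 3 → geol, 4 → yut, 0 heads → mo); the paper's per-course calculations build on these per-toss probabilities.

```python
from math import comb

def yut_probs(p):
    # Each of the four sticks shows its head side with probability p,
    # independently, so the number of heads is Binomial(4, p).
    # Conventional result names assumed:
    # 1 head -> do, 2 -> gae, 3 -> geol, 4 -> yut, 0 heads -> mo.
    names = {1: "do", 2: "gae", 3: "geol", 4: "yut", 0: "mo"}
    return {names[k]: comb(4, k) * p ** k * (1 - p) ** (4 - k)
            for k in range(5)}

print(yut_probs(0.5))  # with fair sticks, yut and mo each have probability 1/16
```

Real yut sticks are not fair (the flat side tends to land up more often), which is why the probabilities are computed as a function of p rather than fixed at 0.5.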


Korean Journal of Applied Statistics | 2012

Method of Choosing One in the Doubles through the Game of Rock-Paper-Scissors

Daehyeon Cho; Byung-Soo Kim

In many sports, a coin toss or the game of rock-paper-scissors is used to decide which side will begin first. We consider the game of rock-paper-scissors when two teams are each composed of two individuals. There are several methods for choosing one of the two teams. We consider all three cases in which the four individuals participate simultaneously or one by one, and for each set of game rules we find the mean and variance of the number of games.
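The paper's two-against-two rules are not reproduced here, but the kind of mean/variance calculation involved can be illustrated with the simplest assumed case: an ordinary two-player game, where each round is decisive with probability 2/3 (3 of the 9 equally likely outcomes tie), so the number of rounds is geometric.

```python
from fractions import Fraction

# In one round between two players, 3 of the 9 equally likely outcomes
# tie, so a round is decisive with probability 2/3 and the number of
# rounds played until a decision is Geometric(2/3).
p = Fraction(2, 3)
mean_rounds = 1 / p            # E[N] = 1/p
var_rounds = (1 - p) / p ** 2  # Var[N] = (1-p)/p^2
print(mean_rounds, var_rounds)  # 3/2 3/4
```

The team variants studied in the paper change the per-round decision probability (and may change it between stages), which is what alters the resulting means and variances.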


fuzzy systems and knowledge discovery | 2006

Some comments on error correcting output codes

Kyung Ha Seok; Daehyeon Cho

Error-correcting output codes (ECOC) can improve generalization performance when applied to multiclass problems. In this paper, we compared various criteria used to design code matrices. We also investigated how loss functions affect the results of ECOC. We found no clear evidence of a difference between the various code-matrix design criteria. The one-per-class (OPC) code matrix with Hamming loss yields a higher error rate, and margin-based decoding gives a lower error rate than Hamming decoding. Some comments on ECOC are made, and its efficacy is investigated through an empirical study.
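The Hamming-versus-margin comparison can be made concrete. Below, both decoders score a test point against each class codeword: Hamming decoding thresholds the binary learners' outputs and counts disagreements, while margin-based decoding sums a loss of the signed margins (the exponential loss here is one common choice, an illustrative assumption rather than the paper's exact setup).

```python
import numpy as np

def hamming_decode(codematrix, outputs):
    # codematrix: (n_classes, n_learners) entries in {-1, +1}
    # outputs: raw margins from the binary learners
    bits = np.sign(outputs)
    dist = np.sum(codematrix != bits, axis=1)  # Hamming distance per class
    return int(np.argmin(dist))

def margin_decode(codematrix, outputs, loss=lambda z: np.exp(-z)):
    # Loss-based decoding: sum a margin loss over codeword positions,
    # so learner confidence (|margin|) influences the decision.
    scores = loss(codematrix * outputs).sum(axis=1)
    return int(np.argmin(scores))

# One-per-class (OPC) code matrix for 3 classes
M = np.array([[ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])
out = np.array([0.2, -0.4, 0.9])  # learner 3 is the most confident
print(hamming_decode(M, out), margin_decode(M, out))
```

On this input the Hamming decoder ties between classes 0 and 2 (and falls back to the first), whereas the margin decoder uses the large third margin to pick class 2, illustrating why margin-based decoding tends to have the lower error rate.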


Journal of the Korean Data and Information Science Society | 2010

Doubly penalized kernel method for heteroscedastic autoregressive data

Daehyeon Cho; Joo-Yong Shim; Kyung Ha Seok


Journal of the Korean Data and Information Science Society | 2012

Study on the ensemble methods with kernel ridge regression

Sunhwa Kim; Daehyeon Cho; Kyung Ha Seok


Journal of the Korean Data and Information Science Society | 2009

A study on the behavior of cosmetic customers

Daehyeon Cho; Byung-Soo Kim; Kyungha Seok; Jongun Lee; Jong-Sung Kim; Sunhwa Kim


Journal of the Korean Data and Information Science Society | 2010

Micro marketing using a cosmetic transaction data

Kyoung-Ha Seok; Daehyeon Cho; Byung-Soo Kim; Jongun Lee; Seung-Hun Paek; Yu-Joong Jeon; Young-Bae Lee; Jae-Gil Kim
