
Publication


Featured research published by Myoungshic Jhun.


Journal of the American Statistical Association | 1990

Bootstrap Choice of Bandwidth for Density Estimation

Julian J. Faraway; Myoungshic Jhun

A bootstrap-based choice of bandwidth for kernel density estimation is introduced. The method works by estimating the integrated mean squared error (IMSE) for any given bandwidth and then minimizing over all bandwidths. A straightforward application of the bootstrap method to estimate the IMSE fails because it does not capture the bias component. A smoothed bootstrap method based on an initial density estimate is described that solves this problem. It is possible to construct pointwise and simultaneous confidence intervals for the density. The simulation study compares cross-validation and the bootstrap method over a wide range of densities, including long-tailed, short-tailed, asymmetric, and bimodal cases. The bootstrap method uniformly outperforms cross-validation. The accuracy of the constructed confidence bands improves as the sample size increases.
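The smoothed-bootstrap bandwidth search described above might be sketched as follows; the pilot bandwidth `g`, the evaluation grid, and the Monte Carlo approximation of the IMSE are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)              # toy sample
grid = np.linspace(-4, 4, 201)
dx = grid[1] - grid[0]

def kde(data, h, t):
    # Gaussian kernel density estimate evaluated at the points t
    u = (t[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

g = 0.5                               # pilot bandwidth (illustrative)
f_pilot = kde(x, g, grid)

def boot_imse(h, B=50):
    # smoothed bootstrap: resample the data and add N(0, g^2) noise,
    # then measure the integrated squared error against the pilot density
    err = 0.0
    for _ in range(B):
        xs = rng.choice(x, size=len(x)) + rng.normal(scale=g, size=len(x))
        err += dx * np.sum((kde(xs, h, grid) - f_pilot) ** 2)
    return err / B

bandwidths = np.linspace(0.1, 1.0, 10)
h_star = bandwidths[np.argmin([boot_imse(h) for h in bandwidths])]
```

Adding Gaussian noise to the resampled points is what makes the bootstrap "smoothed": it draws from the pilot density estimate rather than the raw empirical distribution, which is how the bias component enters the IMSE estimate.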


Computational Statistics & Data Analysis | 2000

Applications of bootstrap methods for categorical data analysis

Myoungshic Jhun; Hyeong-Chul Jeong

Simultaneous confidence regions are proposed for the proportions of a single multinomial population and for a finite number of contrasts from several multinomial populations. Bootstrap methods are used to construct the confidence regions, and their performance is compared with that of other methods in terms of average coverage probability by Monte Carlo simulation. Advantages of the bootstrap methods are discussed.
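One common way to build a simultaneous bootstrap region for multinomial proportions is to bootstrap the maximum absolute deviation over all cells; the sketch below uses that construction with made-up counts, and is an illustration of the general idea rather than the paper's specific procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([30, 50, 20])        # observed multinomial counts (toy data)
N = counts.sum()
p_hat = counts / N

B, alpha = 2000, 0.05
# bootstrap replicates of the estimated proportions
boot = rng.multinomial(N, p_hat, size=B) / N
# maximum absolute deviation over all cells, per replicate
d_max = np.abs(boot - p_hat).max(axis=1)
c = np.quantile(d_max, 1 - alpha)

# simultaneous region: every cell proportion within +/- c of its estimate
lower = np.clip(p_hat - c, 0, 1)
upper = np.clip(p_hat + c, 0, 1)
```

Because the critical value `c` is a quantile of the maximum deviation, all cells are covered jointly at the nominal level, unlike per-cell intervals that ignore multiplicity.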


Computational Statistics & Data Analysis | 2012

Simultaneous estimation and factor selection in quantile regression via adaptive sup-norm regularization

Sungwan Bang; Myoungshic Jhun

Some regularization methods, including the group lasso and the adaptive group lasso, have been developed for the automatic selection of grouped variables (factors) in conditional mean regression. In many practical situations, such a problem arises naturally when a set of dummy variables is used to represent a categorical factor and/or when a set of basis functions of a continuous variable is included in the predictor set. Complementary to these earlier works, the simultaneous and automatic factor selection is examined in quantile regression. To incorporate the factor information into regularized model fitting, the adaptive sup-norm regularized quantile regression is proposed, which penalizes the empirical check loss function by the sum of factor-wise adaptive sup-norm penalties. It is shown that the proposed method possesses the oracle property. A simulation study demonstrates that the proposed method is a more appropriate tool for factor selection than the adaptive lasso regularized quantile regression.
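The penalized objective described above, the empirical check loss plus factor-wise adaptive sup-norm penalties, can be written down directly; the sketch below only evaluates the objective (the grouping, weights, and data are illustrative, and the actual fitting would require an optimization routine):

```python
import numpy as np

def check_loss(r, tau):
    # quantile (check) loss: rho_tau(r) = r * (tau - 1{r < 0})
    return np.sum(r * (tau - (r < 0)))

def objective(beta, X, y, groups, w, lam, tau):
    # empirical check loss plus the sum of factor-wise adaptive
    # sup-norm penalties: lam * sum_g w_g * max_j |beta_{g,j}|
    loss = check_loss(y - X @ beta, tau)
    pen = sum(w[g] * np.max(np.abs(beta[idx])) for g, idx in enumerate(groups))
    return loss + lam * pen

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
y = X[:, 0] + rng.normal(size=50)
groups = [[0, 1], [2, 3]]          # two factors of two coefficients each
w = np.array([1.0, 2.0])           # adaptive weights (illustrative)
val = objective(np.array([1.0, 0.0, 0.0, 0.0]), X, y, groups, w, lam=1.0, tau=0.5)
```

The sup-norm couples all coefficients of a factor: shrinking the largest coefficient of a group to zero removes the whole factor at once, which is what makes group-wise selection possible.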


Statistics & Probability Letters | 1988

Singh's theorem in the lattice case

Michael Woodroofe; Myoungshic Jhun

The asymptotic behavior of the parametric bootstrap estimator of the sampling distribution of a maximum likelihood estimator is investigated in a simple lattice case: integer-valued random variables whose distributions form an exponential family. The expected value of the bootstrap estimator is compared with an Edgeworth expansion, less the continuity correction.


Communications in Statistics - Simulation and Computation | 2014

Weighted Support Vector Machine Using k-Means Clustering

Sungwan Bang; Myoungshic Jhun

The support vector machine (SVM) has been successfully applied to various classification areas with great flexibility and a high level of classification accuracy. However, the SVM is not suitable for the classification of large or imbalanced datasets because of significant computational problems and a classification bias toward the dominant class. The SVM combined with the k-means clustering (KM-SVM) is a fast algorithm developed to accelerate both the training and the prediction of SVM classifiers by using the cluster centers obtained from the k-means clustering. In the KM-SVM algorithm, however, the penalty of misclassification is treated equally for each cluster center even though the contributions of different cluster centers to the classification can be different. In order to improve classification accuracy, we propose the WKM-SVM algorithm which imposes different penalties for the misclassification of cluster centers by using the number of data points within each cluster as a weight. As an extension of the WKM-SVM, the recovery process based on WKM-SVM is suggested to incorporate the information near the optimal boundary. Furthermore, the proposed WKM-SVM can be successfully applied to imbalanced datasets with an appropriate weighting strategy. Experiments show the effectiveness of our proposed methods.
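The data-reduction step of the WKM-SVM idea, clustering each class and turning cluster sizes into misclassification weights, might be sketched as follows; the class distributions, cluster counts, and plain Lloyd's algorithm are illustrative, not the paper's setup:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd's algorithm: returns cluster centers and cluster sizes
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                centers[j] = X[lab == j].mean(axis=0)
    return centers, np.bincount(lab, minlength=k)

rng = np.random.default_rng(3)
X_pos = rng.normal(loc=2.0, size=(300, 2))   # toy imbalanced two-class data
X_neg = rng.normal(loc=-2.0, size=(100, 2))

# cluster each class separately; cluster sizes become per-center weights
train_X, train_y, train_w = [], [], []
for X_cls, y_cls, k in [(X_pos, 1, 5), (X_neg, -1, 3)]:
    centers, sizes = kmeans(X_cls, k)
    train_X.append(centers)
    train_y += [y_cls] * k
    train_w.append(sizes)
train_X = np.vstack(train_X)
train_w = np.concatenate(train_w)
```

The reduced set of 8 weighted centers would then be handed to any SVM solver that accepts per-sample weights (for example, the `sample_weight` argument of scikit-learn's `SVC.fit`), so that a center standing in for many points incurs a proportionally larger misclassification penalty.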


Communications in Statistics - Simulation and Computation | 2007

On the Use of Adaptive Nearest Neighbors for Missing Value Imputation

Myoungshic Jhun; Hyeong Chul Jeong; Ja-Yong Koo

A popular nonparametric treatment of missing value imputation uses methods based on k-nearest neighbors, where the number k of nearest neighbors is fixed without any consideration of the local features of missing values. This article proposes an alternative imputation method based on adaptive nearest neighbors, which takes into account the local features of the data. The proposed method adapts the number of neighbors in imputing the missing values according to the location of the missing values. The efficiency of the proposed method is then evaluated through simulation studies using both simulated and real data. It is shown that the proposed method has distinct advantages over the imputation method based on k-nearest neighbors.
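A minimal sketch of the adaptive idea is below: instead of a fixed k, the neighbor set grows or shrinks with the local density around the incomplete row. The specific rule used here, keeping every complete row within (1 + alpha) times the nearest neighbor's distance, is an illustrative stand-in, not the paper's exact criterion:

```python
import numpy as np

def adaptive_nn_impute(X, alpha=0.5):
    # impute NaNs from complete rows whose distance (on the observed
    # coordinates) is within (1 + alpha) times the nearest distance --
    # an illustrative adaptive rule, not the paper's exact criterion
    X = X.copy()
    complete = ~np.isnan(X).any(axis=1)
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.sqrt(((X[complete][:, obs] - X[i, obs]) ** 2).sum(axis=1))
        near = d <= (1 + alpha) * d.min()     # adaptive neighbour set
        X[i, miss] = X[complete][near][:, miss].mean(axis=0)
    return X

X = np.array([[1.0, 2.0],
              [1.1, np.nan],
              [5.0, 6.0],
              [1.2, 2.2]])
X_imp = adaptive_nn_impute(X)
```

In dense regions many neighbors fall inside the threshold and the average is stable; in sparse regions only the closest rows qualify, so distant, unrepresentative donors are excluded automatically.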


Statistics | 2006

Testing for overdispersion in a censored Poisson regression model

Byoung Cheol Jung; Myoungshic Jhun; Seuck Heun Song

In this article, we investigate the efficiency of score tests for testing a censored Poisson regression model against censored negative binomial regression alternatives. A simulation study shows that score tests based on the normal approximation underestimate the nominal significance level. To remedy this problem, bootstrap methods are proposed. We find that the bootstrap methods keep the significance level close to the nominal one and have uniformly greater power than the normal approximation for testing the hypothesis.
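The bootstrap calibration idea, replacing the normal reference distribution of a score statistic with a parametric bootstrap distribution simulated under the null model, can be sketched in a simplified setting. The block below uses a Dean-Lawless-type overdispersion score statistic for an intercept-only, uncensored Poisson model; the censored regression version of the paper would follow the same pattern with its own statistic and null fit:

```python
import numpy as np

def score_stat(y):
    # Dean-Lawless-type score statistic for overdispersion in an
    # intercept-only Poisson model (no censoring; for illustration only)
    mu = y.mean()
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2 * len(y) * mu ** 2)

def boot_pvalue(y, B=1000, seed=0):
    # parametric bootstrap: resample under the fitted null Poisson model
    rng = np.random.default_rng(seed)
    t_obs = score_stat(y)
    t_boot = np.array([score_stat(rng.poisson(y.mean(), len(y)))
                       for _ in range(B)])
    return np.mean(t_boot >= t_obs)

rng = np.random.default_rng(7)
y_null = rng.poisson(3.0, 200)                       # equidispersed data
p_null = boot_pvalue(y_null)

y_od = rng.poisson(rng.gamma(2.0, 1.5, 200))         # overdispersed data
p_od = boot_pvalue(y_od)
```

Because the reference distribution is simulated from the null model itself rather than taken from an asymptotic normal approximation, the test's actual size tracks the nominal level even at moderate sample sizes.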


Computational Statistics & Data Analysis | 2005

Bootstrap tests for independence in two-way ordinal contingency tables

Hyeong Chul Jeong; Myoungshic Jhun; Dae-Hak Kim

For the analysis of an r × c contingency table having ordered row categories and ordered column categories, a bootstrap method is applied for the model-based likelihood ratio test for independence. A model-based likelihood ratio chi-square statistic and the statistic of the maximum eigenvalue of a Wishart matrix are also discussed. A simulation study is performed to compare the proposed method with existing ones. A real data example is included.
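The bootstrap mechanics can be illustrated with the ordinary (unordered) likelihood-ratio statistic G²; the paper's model-based test for ordinal categories would replace this statistic, but the resampling scheme, generating tables from the fitted independence model, is the same:

```python
import numpy as np

def g2(table):
    # likelihood-ratio chi-square statistic for independence
    n = table.sum()
    exp = np.outer(table.sum(1), table.sum(0)) / n
    mask = table > 0
    return 2 * np.sum(table[mask] * np.log(table[mask] / exp[mask]))

def boot_independence_test(table, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = int(table.sum())
    # cell probabilities under the fitted independence model
    p0 = np.outer(table.sum(1), table.sum(0)).ravel() / n ** 2
    stat = g2(table)
    boot = rng.multinomial(n, p0, size=B).reshape(B, *table.shape)
    pval = np.mean([g2(t) >= stat for t in boot])
    return stat, pval

obs = np.array([[20, 10, 5],
                [8, 15, 12]])       # toy 2 x 3 table
stat, pval = boot_independence_test(obs)
```

Resampling under the null sidesteps the chi-square approximation, which is the usual motivation for bootstrapping these tests when cell counts are small or the model is non-standard.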


International Journal of Machine Learning and Cybernetics | 2017

Hierarchically penalized support vector machine with grouped variables

Sungwan Bang; Jongkyeong Kang; Myoungshic Jhun; Eunkyung Kim

When input features are naturally grouped or generated by factors in a linear classification problem, it is more meaningful to identify important groups or factors rather than individual features. The F∞-norm support vector machine (SVM) and the group lasso penalized SVM have been developed to perform simultaneous classification and factor selection. However, these group-wise penalized SVM methods may suffer from estimation inefficiency and model selection inconsistency because they cannot perform feature selection within an identified group. To overcome this limitation, we propose the hierarchically penalized SVM (H-SVM) that not only effectively identifies important groups but also removes irrelevant features within an identified group. Numerical results are presented to demonstrate the competitive performance of the proposed H-SVM over existing SVM methods.


Statistics | 2014

Adaptive sup-norm regularized simultaneous multiple quantiles regression

Sungwan Bang; Myoungshic Jhun

When modelling multiple conditional quantiles of univariate and/or multivariate responses, it is of great importance to share strength among them. The simultaneous multiple quantiles regression (SMQR) technique is a novel regularization method that explores the similarity among multiple conditional quantiles and performs simultaneous model selection. However, the SMQR suffers from estimation inefficiency and model selection inconsistency because it applies the same amount of shrinkage to each predictor variable without assessing its relative importance. To overcome such a limitation, we propose the adaptive sup-norm regularized SMQR (ASMQR) method, which allows different amounts of shrinkage to be imposed on different variables according to their relative importance. We show that the proposed ASMQR method, a generalized form of the adaptive lasso regularized quantile regression (ALQR) method, possesses the oracle property and that it is a better tool for selecting a common subset of significant variables than the ALQR and SMQR methods through a simulation study.
