Byeong U. Park
Seoul National University
Publication
Featured research published by Byeong U. Park.
Journal of the American Statistical Association | 1990
Byeong U. Park; J. S. Marron
This article compares several promising data-driven methods for selecting the bandwidth of a kernel density estimator. The methods compared are least squares cross-validation, biased cross-validation, and a plug-in rule. The comparison is done by asymptotic rate of convergence to the optimum and a simulation study. It is seen that the plug-in bandwidth is usually most efficient when the underlying density is sufficiently smooth, but is less robust when there is not enough smoothness present. We believe the plug-in rule is the best of those currently available, but there is still room for improvement.
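The least squares cross-validation criterion mentioned above can be sketched in a few lines. This is a minimal illustrative implementation for a Gaussian kernel, not the authors' code; the function names `lscv_score` and `lscv_bandwidth` and the grid search are assumptions of the sketch.

```python
import numpy as np

def lscv_score(x, h):
    """Least squares cross-validation criterion for a Gaussian kernel
    density estimator at bandwidth h (smaller is better)."""
    n = len(x)
    d = x[:, None] - x[None, :]  # pairwise differences
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    # integral of fhat^2: a Gaussian convolved with itself has sd sqrt(2)*h
    int_f2 = phi(d, np.sqrt(2) * h).sum() / n ** 2
    # leave-one-out term: drop the diagonal (j == i) contributions
    loo = (phi(d, h).sum() - phi(0.0, h) * n) / (n * (n - 1))
    return int_f2 - 2.0 * loo

def lscv_bandwidth(x, grid):
    """Pick the bandwidth on `grid` minimizing the LSCV criterion."""
    scores = [lscv_score(x, h) for h in grid]
    return grid[int(np.argmin(scores))]
```

In practice one would minimize over a fine grid or with a numerical optimizer; the slow relative rate of convergence of this selector is part of what motivates the plug-in alternatives compared in the article.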
Econometric Theory | 1998
Alois Kneip; Byeong U. Park; Léopold Simar
Efficiency scores of production units are measured by their distance to an estimated production frontier. Nonparametric data envelopment analysis estimators are based on a finite sample of observed production units, and radial distances are considered. We investigate the consistency and the speed of convergence of these estimated efficiency scores (or of the radial distances) in the very general setup of a multi-output and multi-input case. It is shown that the speed of convergence depends on the smoothness of the unknown frontier and on the number of inputs and outputs. Furthermore, one has to distinguish between the output- and the input-oriented cases.
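The radial efficiency scores whose asymptotics the paper studies are computed in practice by solving one linear program per production unit. Below is a minimal input-oriented, constant-returns-to-scale DEA sketch; the use of `scipy.optimize.linprog` and the function name are illustrative choices, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y):
    """Input-oriented, constant-returns-to-scale DEA scores.

    X : (n, p) inputs, Y : (n, q) outputs. For each unit i, minimize
    theta subject to some nonnegative combination of observed units
    producing at least Y[i] while using at most theta * X[i].
    """
    n, p = X.shape
    q = Y.shape[1]
    scores = np.empty(n)
    for i in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lambda_j X[j] - theta * X[i] <= 0
        A_in = np.hstack([-X[i][:, None], X.T])
        # outputs: -sum_j lambda_j Y[j] <= -Y[i]
        A_out = np.hstack([np.zeros((q, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(p), -Y[i]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores[i] = res.fun
    return scores
```

A score of 1 puts the unit on the estimated frontier; scores below 1 measure the radial input contraction needed to reach it.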
Probability Theory and Related Fields | 1992
Peter Hall; J. S. Marron; Byeong U. Park
For bandwidth selection of a kernel density estimator, a generalization of the widely studied least squares cross-validation method is considered. The essential idea is to do a particular type of “presmoothing” of the data. This is seen to be essentially the same as using the smoothed bootstrap estimate of the mean integrated squared error. Analysis reveals that a rather large amount of presmoothing yields excellent asymptotic performance. The rate of convergence to the optimum is known to be best possible under a wide range of smoothness conditions. The method is more appealing than other selectors with this property, because its motivation is not heavily dependent on precise asymptotic analysis, and because its form is simple and intuitive. Theory is also given for choice of the amount of presmoothing, and this is used to derive a data-based method for this choice.
Econometric Theory | 2000
Byeong U. Park; Léopold Simar; Ch. Weiner
In productivity analysis, the free disposal hull (FDH) is a nonparametric estimator for the production set, the set of inputs and outputs that are technically feasible. It is defined as the smallest free disposal set containing all observations in a sample of production units. One can then derive the production frontier and efficiency scores from the FDH. In the literature the method is used as if the FDH estimator were the true feasible set. However, assuming that individuals are drawn independently from a distribution whose support is the true production set, FDH efficiency scores are random variables. This paper investigates their stochastic properties.
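Unlike DEA, the FDH input-oriented score has a closed form and needs no linear program: it is a min–max over the observed units that dominate the evaluated unit's output. A minimal sketch, with the function name chosen here for illustration:

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH efficiency scores.

    X : (n, p) inputs, Y : (n, q) outputs. The score of unit i is the
    minimum, over units j with Y[j] >= Y[i] componentwise, of
    max_k X[j, k] / X[i, k]. Scores are <= 1 for sampled units; a
    score of 1 means the unit lies on the FDH frontier.
    """
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        dominates = np.all(Y >= Y[i], axis=1)  # units producing at least Y[i]
        ratios = np.max(X[dominates] / X[i], axis=1)
        scores[i] = ratios.min()
    return scores
```

Because each score is a minimum over a random set of observed units, it is itself a random variable, which is exactly the stochastic behavior the paper analyzes.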
Journal of the American Statistical Association | 1994
M. C. Jones; S. J. Davies; Byeong U. Park
Abstract We explore the aims of, and relationships between, various kernel-type regression estimators. To do so, we identify two general types of (direct) kernel estimators differing in their treatment of the nuisance density function associated with regressor variable design. We look at the well-known Gasser-Müller, Nadaraya-Watson, and Priestley-Chao methods in this light. In the random design case, none of these methods is totally adequate, and we mention a novel (direct) kernel method with appropriate properties. Disadvantages of even the latter idea are remedied by kernel-weighted local linear fitting, a well-known technique that is currently enjoying renewed popularity. We see how to fit this approach into our general framework, and hence form a unified understanding of how these kernel-type smoothers interrelate. Though the mission of this article is unificatory (and even pedagogical), the desire for better understanding of superficially different approaches is motivated by the need to improve prac...
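Two of the estimators compared in the article can be sketched directly: Nadaraya-Watson is a kernel-weighted average of the responses, while local linear fitting solves a kernel-weighted least squares problem at each point. This is an illustrative sketch with a Gaussian kernel; the function names are assumptions, not the article's notation.

```python
import numpy as np

def gauss_k(u):
    """Gaussian kernel (unnormalized; normalization cancels below)."""
    return np.exp(-0.5 * u ** 2)

def nadaraya_watson(x, y, x0, h):
    """Nadaraya-Watson estimate at x0: kernel-weighted average of y."""
    w = gauss_k((x - x0) / h)
    return np.sum(w * y) / np.sum(w)

def local_linear(x, y, x0, h):
    """Local linear estimate at x0: fit a kernel-weighted least squares
    line centered at x0 and return its intercept. This removes the
    design-density bias that affects Nadaraya-Watson in random design."""
    w = gauss_k((x - x0) / h)
    D = np.column_stack([np.ones_like(x), x - x0])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(D * sw[:, None], y * sw, rcond=None)
    return beta[0]
```

For data that are exactly linear, the local linear fit reproduces the regression function exactly at every point, whatever the design density, which is one concrete sense in which it improves on the direct kernel methods.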
Annals of Operations Research | 2010
Seok-Oh Jeong; Byeong U. Park; Léopold Simar
Cazals et al. (J. Econom. 106: 1–25, 2002), Daraio and Simar (J. Prod. Anal. 24: 93–121, 2005; Advanced Robust and Nonparametric Methods in Efficiency Analysis, 2007a; J. Prod. Anal. 28: 13–32, 2007b) developed a conditional frontier model which incorporates environmental factors into measuring the efficiency of a production process in a fully nonparametric setup. They also provided the corresponding nonparametric efficiency measures: the conditional FDH estimator and the conditional DEA estimator. The two estimators have been applied in the literature without any theoretical background on their statistical properties. The aim of this paper is to provide an asymptotic analysis (i.e., asymptotic consistency and limit sampling distribution) of the conditional FDH and conditional DEA estimators.
Journal of the American Statistical Association | 1994
Byeong U. Park; Léopold Simar
This article considers the semiparametric stochastic frontier model with panel data that arises in the problem of measuring technical inefficiency in production processes. We assume a parametric form for the frontier function, which is linear in production inputs. The density of the individual firm-specific effects is considered to be unknown. We construct an efficient estimator of the slope parameters in the frontier function. We also give an estimator of the level of the frontier function and investigate its asymptotic properties. Furthermore, we provide a predictor of the individual effects that can be directly translated to firm-specific technical inefficiencies. Finally, we illustrate our methods through a real data example.
Journal of the American Statistical Association | 2009
Byeong U. Park; Enno Mammen; Wolfgang Karl Härdle; Szymon Borak
High-dimensional regression problems, which reveal dynamic behavior, are typically analyzed by the time propagation of a small number of factors. The inference on the whole system is then based on the low-dimensional time series analysis. Such high-dimensional problems occur frequently in many different fields of science. In this article we address the problem of inference when the factors and factor loadings are estimated by semiparametric methods. This more flexible modeling approach poses an important question: Is it justified, from an inferential point of view, to base statistical inference on the estimated time series factors? We show that the difference between the inference based on the estimated time series and on the “true” unobserved time series is asymptotically negligible. Our results justify fitting vector autoregressive processes to the estimated factors, which allows one to study the dynamics of the whole high-dimensional system with a low-dimensional representation. We illustrate the theory with a simulation study. Also, we apply the method to a study of the dynamic behavior of implied volatilities and to the analysis of functional magnetic resonance imaging (fMRI) data.
Annals of Statistics | 2005
Enno Mammen; Byeong U. Park
The smooth backfitting introduced by Mammen, Linton and Nielsen [Ann. Statist. 27 (1999) 1443-1490] is a promising technique to fit additive regression models and is known to achieve the oracle efficiency bound. In this paper, we propose and discuss three fully automated bandwidth selection methods for smooth backfitting in additive models. The first one is a penalized least squares approach which is based on higher-order stochastic expansions for the residual sums of squares of the smooth backfitting estimates. The other two are plug-in bandwidth selectors which rely on approximations of the average squared errors and whose utility is restricted to local linear fitting. The large sample properties of these bandwidth selection methods are given. Their finite sample properties are also compared through simulation experiments.
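Smooth backfitting adds projection refinements that are beyond a short sketch, but the classical backfitting cycle it improves on can be illustrated compactly: each component function is re-estimated by smoothing the partial residuals against its own covariate until the cycle stabilizes. The sketch below uses a Nadaraya-Watson smoother and a single common bandwidth `h`; it shows the classical algorithm only, not the paper's smooth backfitting or its bandwidth selectors.

```python
import numpy as np

def nw_smooth(x, r, h):
    """Nadaraya-Watson smooth of the residuals r against covariate x,
    evaluated at the sample points themselves."""
    d = (x[:, None] - x[None, :]) / h
    W = np.exp(-0.5 * d ** 2)
    return (W @ r) / W.sum(axis=1)

def backfit(X, y, h, iters=20):
    """Classical backfitting for an additive model
    y = c + f_1(X_1) + ... + f_d(X_d)."""
    n, d = X.shape
    c = y.mean()
    f = np.zeros((n, d))
    for _ in range(iters):
        for j in range(d):
            # partial residuals: remove all components except the j-th
            r = y - c - f.sum(axis=1) + f[:, j]
            f[:, j] = nw_smooth(X[:, j], r, h)
            f[:, j] -= f[:, j].mean()  # center for identifiability
    return c, f
```

The bandwidth `h` is exactly the tuning parameter the paper's penalized least squares and plug-in selectors choose automatically for the smooth backfitting estimates.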
Journal of Econometrics | 2003
Byeong U. Park; Robin C. Sickles; Léopold Simar
This study focuses on the semiparametric-efficient estimation of random effect panel models containing AR(1) disturbances. We also consider such estimators when the effects and regressors are correlated (Hausman and Taylor, 1981). We introduce two semiparametric-efficient estimators that make minimal assumptions on the distribution of the random errors, effects, and regressors and that provide semiparametric-efficient estimates of the slope parameters and of the effects. Our estimators extend the previous work of Park and Simar (J. Amer. Statist. Assoc. 89 (1994) 929), Park et al. (J. Econometrics 84 (1998) 273), and Adams et al. (J. Business Econom. Statist. 17 (1999) 349). Theoretical derivations are supplemented by Monte Carlo simulations. We also provide an empirical illustration by estimating relative efficiencies from a stochastic distance function for the U.S. banking industry over the 1980s and 1990s. In markets where regulatory constraints have been relaxed or removed, firms may not adjust immediately to deregulatory market shocks, which may induce a serial correlation pattern in firms' use of best-practice banking technologies. Our semiparametric estimators have an important role in providing robust point estimates and inferences of the productivity and efficiency gains due to such economic reforms.