Publication


Featured research published by Bong-Jin Yum.


Expert Systems With Applications | 2005

Development of a recommender system based on navigational and behavioral patterns of customers in e-commerce sites

Yong Soo Kim; Bong-Jin Yum; Junehwa Song; Su Myeon Kim

In this article, a novel CF (collaborative filtering)-based recommender system is developed for e-commerce sites. Unlike the conventional approach, in which only binary purchase data are used, the proposed approach analyzes the data captured from the navigational and behavioral patterns of customers, estimates a customer's preference levels for products that are clicked but not purchased, and then conducts CF on these preference levels to make recommendations. This also contrasts with existing work on clickstream data analysis, in which the navigational and behavioral patterns of customers are analyzed only for simple relationships with the target variable. The effectiveness of the proposed approach is assessed using an experimental e-commerce site. It is found, among other things, that the proposed approach outperforms the conventional approach in almost all cases considered. The proposed approach is versatile and can be applied to a variety of e-commerce sites as long as the navigational and behavioral patterns of customers can be captured.
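
As a rough illustration of the idea (not the authors' exact formulation), the following minimal sketch derives graded preference levels from click, basket, and purchase events and runs a simple user-based CF prediction on them; the event weights and the Pearson-similarity neighborhood are illustrative assumptions.

```python
import numpy as np

# Assumed ordinal weights for navigational/behavioral events (illustrative, not from the paper).
EVENT_WEIGHT = {"click": 1.0, "basket": 2.0, "purchase": 3.0}

def preference_matrix(events, n_users, n_items):
    """events: iterable of (user, item, event_type); keep the strongest signal observed."""
    P = np.zeros((n_users, n_items))
    for u, i, e in events:
        P[u, i] = max(P[u, i], EVENT_WEIGHT[e])
    return P

def predict(P, user, item, k=5):
    """Predict a preference level from similar users who showed interest in the item."""
    candidates = np.where(P[:, item] > 0)[0]
    candidates = candidates[candidates != user]
    if candidates.size == 0:
        return 0.0
    sims = np.nan_to_num(np.array([np.corrcoef(P[user], P[v])[0, 1] for v in candidates]))
    keep = sims > 0                                   # use only positively correlated neighbours
    if not keep.any():
        return float(P[candidates, item].mean())
    order = np.argsort(-sims[keep])[:k]
    neigh, w = candidates[keep][order], sims[keep][order]
    return float(np.dot(w, P[neigh, item]) / w.sum())

events = [(0, 0, "purchase"), (0, 1, "click"), (0, 2, "basket"),
          (1, 0, "basket"), (1, 2, "purchase"),
          (2, 0, "purchase"), (2, 1, "click")]
P = preference_matrix(events, n_users=3, n_items=3)
print(predict(P, user=2, item=2))
```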


Quality and Reliability Engineering International | 2011

A bibliography of the literature on process capability indices: 2000–2009

Bong-Jin Yum; Kwan-Woo Kim

This paper contains a bibliography of approximately 530 journal papers and books on process capability indices for the period 2000–2009. The related literature is classified into four major categories, namely, books, review/overview papers, theory- and method-related papers, and special applications. Theory- and method-related papers are further classified into univariate and multivariate cases, and special applications include acceptance sampling plans, supplier selection, and tolerance design and other optimization problems.


Expert Systems With Applications | 2005

Collaborative filtering based on iterative principal component analysis

Do Hyun Kim; Bong-Jin Yum

Collaborative filtering (CF) is one of the most popular recommender system technologies; it utilizes the known preferences of a group of users to predict the unknown preferences of a new user. However, existing CF techniques have the drawback that the entire existing data set must be maintained and analyzed repeatedly whenever new user ratings are added. To avoid this problem, Eigentaste, a CF approach based on principal component analysis (PCA), has been proposed. However, Eigentaste requires that each user rate every item in the so-called gauge set before PCA can be executed, which may not always be feasible in practice. Developed in this article is an iterative PCA approach in which no gauge set is required; singular value decomposition is employed for estimating missing ratings and for dimensionality reduction, and the principal component values of users in the reduced dimensions are used to cluster users. The proposed approach is then compared to Eigentaste in terms of the mean absolute error of prediction using the Jester, MovieLens, and EachMovie data sets. Experimental results show that the proposed approach, even without a gauge set, performs slightly better than Eigentaste regardless of the data set and clustering method employed, implying that it can serve as a useful alternative when defining a gauge set is neither possible nor practical.
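
The SVD-based iterative imputation step described above can be sketched roughly as follows; the rank, the item-mean initialization, and the convergence tolerance are assumptions for illustration rather than the paper's exact settings. The low-rank scores could then be used to cluster users, mirroring the clustering step mentioned in the abstract.

```python
import numpy as np

def iterative_svd_impute(R, rank=2, tol=1e-4, max_iter=100):
    """R: user-by-item ratings with np.nan for missing entries; returns a completed matrix."""
    missing = np.isnan(R)
    if not missing.any():
        return R.copy()
    X = np.where(missing, np.nanmean(R, axis=0), R)      # initialize missing cells with item means
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank (PCA-like) reconstruction
        change = np.max(np.abs(X_low[missing] - X[missing]))
        X[missing] = X_low[missing]                       # refresh only the missing ratings
        if change < tol:
            break
    return X

R = np.array([[5.0, 4.0, np.nan, 1.0],
              [4.0, np.nan, 1.0, 1.0],
              [1.0, 1.0, 5.0, np.nan],
              [np.nan, 1.0, 4.0, 5.0]])
print(np.round(iterative_svd_impute(R), 2))
```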


Journal of Applied Statistics | 2011

Optimal design of accelerated degradation tests based on Wiener process models

Heonsang Lim; Bong-Jin Yum

Optimal accelerated degradation test (ADT) plans are developed assuming that the constant-stress loading method is employed and that the degradation characteristic follows a Wiener process. Unlike previous work on planning ADTs based on stochastic process models, this article determines the test stress levels and the proportion of test units allocated to each stress level such that the asymptotic variance of the maximum-likelihood estimator of the q-th quantile of the lifetime distribution at the use condition is minimized. In addition, compromise plans are developed for checking the validity of the relationship between the model parameters and the stress variable. Finally, using an example, sensitivity analysis procedures are presented for evaluating the robustness of the optimal and compromise plans against uncertainty in the pre-estimated parameter values, and the importance of optimally determining the test stress levels and the proportion of units allocated to each stress level is illustrated.
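
A rough simulation sketch of the premise behind such plans follows: degradation at a given stress accumulates as a Wiener process (linear drift plus Brownian noise), and a unit fails when its path first crosses a threshold, so a larger drift at higher stress yields earlier failures. The drift values, threshold, and discretization below are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(drift, sigma, threshold, dt=0.1, t_max=100.0, n_units=500):
    """Simulate Wiener degradation paths and return the first times the threshold is crossed."""
    n_steps = int(t_max / dt)
    increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_units, n_steps))
    paths = np.cumsum(increments, axis=1)
    crossed = paths >= threshold
    first_idx = np.argmax(crossed, axis=1)                 # index of the first crossing per unit
    return np.where(crossed.any(axis=1), (first_idx + 1) * dt, np.inf)

# Higher stress is assumed to raise the drift, hastening failures (the rationale for an ADT).
t_high = first_passage_times(drift=0.8, sigma=0.3, threshold=10.0)
t_low = first_passage_times(drift=0.2, sigma=0.3, threshold=10.0)
print(np.median(t_high[np.isfinite(t_high)]), np.median(t_low[np.isfinite(t_low)]))
```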


Engineering Applications of Artificial Intelligence | 2004

Robust design of multilayer feedforward neural networks: an experimental approach

Young-Sang Kim; Bong-Jin Yum

Artificial neural networks (ANNs) have been successfully used for solving a wide variety of problems. However, determining a suitable set of structural and learning parameter values for an ANN remains a difficult task. This article is concerned with the robust design of multilayer feedforward neural networks trained by the backpropagation algorithm (backpropagation net, BPN) and develops a systematic, experimental strategy that emphasizes simultaneous optimization of BPN parameters under various noise conditions. Unlike previous work, the present robust design problem is formulated as a Taguchi dynamic parameter design problem, together with a fine-tuning of the BPN output when necessary. A series of computational experiments is conducted using data sets from various sources. From the computational results, statistically significant effects of the BPN parameters on the robustness measure (i.e., the signal-to-noise ratio) are identified, based upon which an economical experimental strategy is derived. It is also shown that fine-tuning the BPN output is effective in improving the signal-to-noise ratio. Finally, step-by-step procedures for implementing the proposed approach are illustrated with an example.
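
For context, here is a minimal sketch of the Taguchi dynamic-characteristic signal-to-noise ratio, eta = 10*log10(beta^2 / MSE), where beta is the zero-intercept slope of the response on the signal factor. The data shapes and numbers are hypothetical, and the paper's full experimental design (orthogonal arrays, noise factors, fine-tuning) is not reproduced here.

```python
import numpy as np

def dynamic_sn_ratio(signal, responses):
    """
    signal:    1-D array of M signal-factor levels (ideal targets).
    responses: (N, M) array; each row is one noise condition, each column one signal level.
    Fits y = beta * m through the origin and returns 10*log10(beta**2 / MSE) in dB.
    """
    s = np.tile(signal, responses.shape[0])
    y = responses.ravel()
    beta = np.dot(s, y) / np.dot(s, s)             # least-squares slope through the origin
    mse = np.mean((y - beta * s) ** 2)             # variability around the fitted line
    return 10.0 * np.log10(beta ** 2 / mse)

# Hypothetical BPN outputs at four target levels under three noise conditions.
signal = np.array([1.0, 2.0, 3.0, 4.0])
responses = np.array([[1.1, 1.9, 3.2, 3.8],
                      [0.9, 2.1, 2.9, 4.2],
                      [1.0, 2.0, 3.1, 4.0]])
print(round(dynamic_sn_ratio(signal, responses), 2))
```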


Journal of Applied Statistics | 1996

Development of (r, T) hybrid sampling plans for exponential lifetime distributions

Hyean-Seok Jeong; Jong-In Park; Bong-Jin Yum

In the original hybrid reliability acceptance sampling plan (RASP) developed by Epstein for the exponential lifetime distribution, the sample size n and the number of failures r to be observed are determined given the truncation time T. However, such plans cannot be used when test items are expensive and/or limited in number; in that case one wishes to have a plan in which n is given. In this paper, we develop hybrid RASPs in which r and T are determined given n. These plans are tabulated for various combinations of parameter values, so that an appropriate plan can be determined easily.
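
As a rough illustration, the operating-characteristic curve of a hybrid plan of this kind can be computed as sketched below, assuming exponential lifetimes, testing without replacement, and the usual rule that the lot is accepted if fewer than r failures occur by the truncation time T; the numerical plan (n, r, T) is hypothetical.

```python
from math import comb, exp

def prob_accept(mean_life, n, r, T):
    """P(fewer than r failures by T); failures by T are Binomial(n, 1 - exp(-T/mean_life))."""
    p = 1.0 - exp(-T / mean_life)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(r))

# Hypothetical plan: n = 20 units on test, r = 3, truncation time T = 100 hours.
for theta in (200.0, 500.0, 1000.0, 2000.0):
    print(theta, round(prob_accept(theta, n=20, r=3, T=100.0), 3))
```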


Computational Statistics & Data Analysis | 2008

Selection between Weibull and lognormal distributions: A comparative simulation study

Jin Seon Kim; Bong-Jin Yum

How to select the correct distribution for a given set of data is an important issue, especially when the tail probabilities are of interest, as in lifetime data analysis. The Weibull and lognormal distributions are assumed most often in analyzing lifetime data, and in many cases they compete with each other. In addition, lifetime data are usually censored due to constraints on the amount of testing time. A literature review reveals that little attention has been paid to the selection problem for censored samples. In this article, the relative performances of two selection procedures, namely the maximized likelihood and scale invariant procedures, are compared for selecting between the Weibull and lognormal distributions for both complete and censored samples. Monte Carlo simulation experiments are conducted for various combinations of censoring rate and sample size, and the performance of each procedure is evaluated in terms of the probability of correct selection (PCS) and the average error rate. Previously unknown behaviors and relative performances of the two procedures are then summarized. Computational results suggest that the maximized likelihood procedure can generally be recommended for censored as well as complete samples.
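
A minimal sketch of the maximized likelihood procedure for a complete sample follows: fit both candidate distributions by maximum likelihood and select the one with the larger maximized log-likelihood. Censored samples, which the paper also covers, would require a modified likelihood and are not handled here.

```python
import numpy as np
from scipy import stats

def select_weibull_or_lognormal(x):
    """Return the name of the distribution with the larger maximized log-likelihood."""
    wb = stats.weibull_min.fit(x, floc=0)              # (shape, loc=0, scale)
    ln = stats.lognorm.fit(x, floc=0)                  # (shape=sigma, loc=0, scale=exp(mu))
    ll_wb = np.sum(stats.weibull_min.logpdf(x, *wb))
    ll_ln = np.sum(stats.lognorm.logpdf(x, *ln))
    return "Weibull" if ll_wb >= ll_ln else "lognormal"

rng = np.random.default_rng(1)
sample = stats.weibull_min.rvs(1.5, scale=100.0, size=50, random_state=rng)
print(select_weibull_or_lognormal(sample))
```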


Expert Systems With Applications | 2011

Recommender system based on click stream data using association rule mining

Yong Soo Kim; Bong-Jin Yum

In most past studies, only users' purchase data were used in e-commerce recommender systems, while navigational and behavioral pattern data were not utilized. However, Kim, Yum, Song, and Kim (2005) developed a collaborative filtering technique based on the navigational and behavioral patterns of customers in e-commerce sites. In this article, we improve on the method of Kim et al. (2005) and develop a novel recommender system. The proposed system calculates the confidence levels between clicked products, between products placed in the basket, and between purchased products, respectively, and then estimates the preference level as a linear combination of these three confidence levels. To assess the effectiveness of the proposed approach, an empirical study was conducted by constructing an experimental e-commerce site for compact disc albums. The results of the experimental study clearly show that the proposed method is superior to the method of Kim et al. (2005).
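
A minimal sketch of the core calculation follows: the confidence of the rule A -> B is support(A and B) / support(A), computed separately from click sessions, basket sessions, and purchase sessions, and then combined linearly into a preference level. The weights w are illustrative assumptions, not the values determined in the paper.

```python
def confidence(sessions, a, b):
    """sessions: list of sets of product ids; returns conf(a -> b) = sup(a, b) / sup(a)."""
    with_a = [s for s in sessions if a in s]
    if not with_a:
        return 0.0
    return sum(1 for s in with_a if b in s) / len(with_a)

def preference(clicks, baskets, purchases, a, b, w=(0.2, 0.3, 0.5)):
    """Linear combination of the click, basket, and purchase confidence levels for a -> b."""
    return (w[0] * confidence(clicks, a, b)
            + w[1] * confidence(baskets, a, b)
            + w[2] * confidence(purchases, a, b))

clicks = [{"cd1", "cd2"}, {"cd1", "cd3"}, {"cd2", "cd3"}]
baskets = [{"cd1", "cd2"}, {"cd1"}]
purchases = [{"cd1", "cd2"}]
print(round(preference(clicks, baskets, purchases, "cd1", "cd2"), 3))
```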


Engineering Optimization | 1997

Optimal design of accelerated degradation tests for estimating mean lifetime at the use condition

Jong-In Park; Bong-Jin Yum

In an accelerated degradation test, higher-than-normal stress levels are employed to hasten the degradation of product performance, and the observed degradation data are then used to estimate various reliability-related quantities at the use (normal) condition. In this paper, optimal accelerated degradation test plans are developed under the assumptions of destructive testing and a simple constant-rate relationship between the stress and the product performance. Specifically, the paper determines the stress levels, the proportion of test units allocated to each stress level, and the measurement times such that the asymptotic variance of the maximum likelihood estimator of the mean lifetime at the use condition is minimized. The optimization problem is formulated as a constrained nonlinear program, and exact optimal solutions are obtained for various cases. Sensitivity analysis and sample size determination procedures are also illustrated with an example.
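
The constrained-nonlinear-program framing mentioned above can be sketched as follows, with standardized stress levels in [0, 1] and an allocation proportion as the decision variables. The objective used here is a stand-in: the variance (up to a constant) of the least-squares extrapolation to the use stress x = 0 in a simple linear model, not the paper's actual asymptotic-variance expression, and the lower-stress bound is an assumed design constraint.

```python
import numpy as np
from scipy.optimize import minimize

def extrapolation_variance(z):
    """z = (x1, x2, p1): standardized stress levels and the proportion allocated to x1.
    Stand-in objective: Var of the least-squares prediction at the use stress x = 0."""
    x1, x2, p1 = z
    p2 = 1.0 - p1
    return (p1 * x1 ** 2 + p2 * x2 ** 2) / (p1 * p2 * (x1 - x2) ** 2 + 1e-12)

result = minimize(extrapolation_variance,
                  x0=np.array([0.5, 0.9, 0.5]),
                  # Lower stress bounded away from the use condition (assumed constraint).
                  bounds=[(0.3, 1.0), (0.3, 1.0), (0.05, 0.95)],
                  method="L-BFGS-B")
print(np.round(result.x, 3))   # optimized (x1, x2, p1) under the stand-in objective
```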


IEEE Transactions on Reliability | 1993

Estimation methods for the mean of the exponential distribution based on grouped and censored data

Sun-Keun Seo; Bong-Jin Yum

For grouped and censored data from an exponential distribution, the method of maximum likelihood (ML) does not in general yield a closed-form estimate of the mean, and therefore an iterative procedure must be used. Three approximate estimators of the mean are considered: two approximate ML estimators and the midpoint estimator. Their performances are compared by Monte Carlo simulation to that of the ML estimator, in terms of mean square error and bias. The two approximate ML estimators are reasonable substitutes for the ML estimator unless the probability of censoring and the number of inspections are small. The effect of inspection schemes on the relative performances of the three approximate methods is also investigated.
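
For illustration, the midpoint estimator mentioned above can be sketched as follows: each observed failure is assigned the midpoint of its inspection interval, censored units contribute the censoring time, and the mean is estimated as total time on test divided by the number of failures. This follows the usual midpoint construction for exponential data; the paper's two approximate ML estimators are not reproduced here, and the inspection scheme shown is hypothetical.

```python
def midpoint_estimate(interval_counts, inspection_times, n_censored, censoring_time):
    """
    interval_counts:  number of failures found in each interval (t_{i-1}, t_i].
    inspection_times: inspection epochs t_1 < ... < t_m (with t_0 = 0).
    """
    edges = [0.0] + list(inspection_times)
    total_time = sum(count * (edges[i] + edges[i + 1]) / 2.0
                     for i, count in enumerate(interval_counts))
    total_time += n_censored * censoring_time              # survivors contribute full exposure
    n_failures = sum(interval_counts)
    return total_time / n_failures if n_failures else float("inf")

# Hypothetical scheme: inspections at 100, 200, 300 hours; 8 of 20 units survive to 300 hours.
print(midpoint_estimate([5, 3, 4], [100.0, 200.0, 300.0], n_censored=8, censoring_time=300.0))
```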

Collaboration


Dive into Bong-Jin Yum's collaborations.

Top Co-Authors

Jai-Hyun Byun

Gyeongsang National University

Jae-Gyeun Cho

Electronics and Telecommunications Research Institute
