Bumghi Choi
Inha University
Publications
Featured research published by Bumghi Choi.
Neurocomputing | 2008
Bumghi Choi; Ju-Hong Lee; Deok-Hwan Kim
Gradient descent algorithms such as backpropagation (BP) and its variations on multi-layered feed-forward networks are widely used in many applications. However, the most serious problem associated with BP is the local-minima problem, and an excessive number of hidden nodes deepens the local minima of the corresponding network. We propose an algorithm that shows stable training performance despite a large number of hidden nodes. It is called the separate learning algorithm, in which the hidden-to-output and input-to-hidden connections are trained separately. Simulations on several benchmark problems demonstrate the validity of the proposed method.
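The two-phase training can be sketched as follows: a toy one-hidden-layer network on XOR in which the hidden-to-output weights and the input-to-hidden weights are updated in separate phases. The network size, learning rate, and phase schedule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Sketch of separate learning on a one-hidden-layer network (XOR task):
# hidden-to-output weights (W2, b2) and input-to-hidden weights (W1, b1)
# are updated in separate phases instead of jointly. All hyperparameters
# and the phase schedule are illustrative, not the paper's exact procedure.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

lr = 0.5
_, out0 = forward(X)
loss0 = float(np.mean((out0 - y) ** 2))  # error before training

for epoch in range(4000):
    h, out = forward(X)
    d2 = (out - y) * out * (1 - out)
    if epoch % 2 == 0:          # phase 1: train upper connections only
        W2 -= lr * h.T @ d2
        b2 -= lr * d2.sum(axis=0)
    else:                       # phase 2: train lower connections only
        d1 = (d2 @ W2.T) * h * (1 - h)
        W1 -= lr * X.T @ d1
        b1 -= lr * d1.sum(axis=0)

_, out1 = forward(X)
loss1 = float(np.mean((out1 - y) ** 2))  # error after training
```

Freezing one layer while the other trains keeps each phase's error surface simpler than the joint surface, which is the intuition behind the stability claim.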
Neurocomputing | 2009
Bumghi Choi; Ju-Hong Lee
Gradient descent algorithms such as backpropagation (BP) and its variations on multilayered feed-forward networks are widely used in many applications, especially for solving differential equations. According to regularization theory, reformulated radial basis function networks (RBFN) are expected to generalize more accurately than BP. We show how to apply both networks to a specific example of a differential equation and compare their generalization capability and convergence. The experimental comparison of various approaches shows that the reformulated RBFN performs better than BP in solving this example.
ieee international conference on cognitive informatics | 2009
Tae-Su Park; Ju-Hong Lee; Bumghi Choi
We present a new method to optimize the weights of an artificial neural network (ANN) with particle swarm optimization (PSO). We also propose a new selection strategy for the inertia weight, called the adaptive inertia weight, which varies according to the training error of the network. Using the adaptive inertia weight, the proposed method can find the global optimum faster and more accurately. Experimental results show that the proposed method is successfully applied to benchmark examples.
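A minimal sketch of the idea, assuming a sphere objective and a simple error-based adaptation rule (the paper's exact formula for the adaptive inertia weight may differ): particles whose current error is far from the swarm's best keep a larger inertia to explore, while near-best particles slow down to refine.

```python
import numpy as np

# Sketch of PSO with an error-adaptive inertia weight: each particle's
# inertia shrinks as its error approaches the swarm's best error. The
# objective (sphere function) and the adaptation rule are illustrative
# assumptions, not the paper's exact formulation.

rng = np.random.default_rng(2)
f = lambda P: np.sum(P ** 2, axis=1)      # toy objective: sphere function

n, dim = 30, 5
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = f(pos)
gbest = pbest[np.argmin(pbest_val)].copy()
c1 = c2 = 1.49                            # standard acceleration constants

for _ in range(300):
    err = f(pos)
    better = err < pbest_val              # update personal and global bests
    pbest[better] = pos[better]
    pbest_val[better] = err[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
    # adaptive inertia in [0.3, 0.7]: larger for particles with larger error
    spread = err.max() - err.min() + 1e-12
    w = 0.3 + 0.4 * (err - err.min()) / spread
    r1 = rng.random((n, dim)); r2 = rng.random((n, dim))
    vel = w[:, None] * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel

best_val = float(f(gbest[None])[0])
```

Applied to ANN training, `pos` would hold flattened network weights and `f` would return the training error, so the same loop optimizes the weights without gradients.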
international conference on computational science | 2006
Bumghi Choi; Ju-Hong Lee; Tae-Su Park
Multilayer perceptrons have been applied successfully to difficult and diverse problems with the backpropagation learning algorithm. However, the algorithm is known to exhibit slow and false convergence caused by flat regions and local minima of the cost function. Many algorithms proposed to accelerate convergence and avoid local minima appear to trade off convergence speed against stability. Here, a new algorithm is proposed that offers a novel learning strategy for avoiding local minima while providing relatively stable and fast convergence with low storage requirements. This is the alternate learning algorithm, in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are trained alternately. The algorithm requires less computation time for learning than backpropagation with momentum and, on a parity-check problem, is shown to be relatively reliable in overall performance.
international conference on computational linguistics | 2006
Bumghi Choi; Ju-Hong Lee; Sun Park; Tae-Su Park
In web search engines, index search has traditionally been evaluated at a high recall rate. The pitfall, however, is that users must work hard to select relevant documents from too many search results. Skillful surfers tend to prefer index search, whereas those unaccustomed to web searching generally use directory search; the directory method is therefore needed as a complement to web searching. However, when target documents are obscurely categorized, or when users lack exact knowledge of the appropriate categories, directory search can fail to produce satisfactory results. That is, directory search has high precision but a low recall rate. Motivated by this, we propose a novel model in which a category hierarchy is constructed dynamically. To do this, a category is regarded as a fuzzy set of keywords, and similarly extensible subcategories of a category are found using fuzzy relational products. The merit of this method is that it enhances the recall rate of directory search by reconstructing subcategories on the basis of similarity.
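The fuzzy relational product idea can be sketched on a toy category-to-keyword relation: the product gives, for each pair of categories, the degree to which membership in one implies membership in the other, which is the signal used to propose subcategory links. The membership values, the Kleene-Dienes implication, and the mean aggregation are illustrative assumptions, not the paper's data or exact operator.

```python
import numpy as np

# Toy sketch of a fuzzy relational (sub)product over category-to-keyword
# membership relations: R[i, j] is the degree to which membership in
# category i implies membership in category j, averaged over keywords.
# Memberships and the Kleene-Dienes implication are illustrative
# assumptions, not the paper's data or exact operator.

keywords = ["search", "index", "directory", "fuzzy"]
A = np.array([
    [0.9, 0.8, 0.1, 0.0],   # category "web search"
    [0.7, 0.2, 0.9, 0.1],   # category "directory search"
])

def kleene_dienes(a, b):
    # fuzzy implication I(a, b) = max(1 - a, b)
    return np.maximum(1.0 - a, b)

def subproduct(A, B):
    # R[i, j] = mean over keywords k of I(A[i, k], B[j, k])
    return kleene_dienes(A[:, None, :], B[None, :, :]).mean(axis=2)

R = subproduct(A, A)
```

A high off-diagonal entry R[i, j] suggests category j could be placed under category i when the hierarchy is rebuilt, which is how such products can raise directory-search recall.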
The Journal of the Korea Contents Association | 2011
Chan-Min Ahn; Ju-Hong Lee; Bumghi Choi; Sun Park
A question answering system shows a user the answers that other users have entered for similar questions. Despite much research aimed at raising the satisfaction level of the answers, an essential limitation remains, so question answering systems also provide, as an auxiliary function, recommendations of other questions likely to satisfy the user's intention. A method using the fuzzy relational product operator was previously proposed to recommend questions that largely include the content of the user's question. That fuzzy relational product is built on the Kleene-Dienes operator, which measures the degree of implication between the contents of two questions. However, the Kleene-Dienes operator is not well suited to finding question-answer pairs that semantically include a user question, because it was not designed to measure the degree of semantic inclusion between two documents. We present a novel fuzzy implication operator designed to find question-answer pairs by considering the implication relation: it calculates the degree to which one question semantically implies another. Experimental results show that the probability that users are satisfied with the retrieved results increases when the proposed operator is used for recommendation in a question answering system.
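For concreteness, the baseline recommendation score can be sketched as ranking candidate questions by an averaged Kleene-Dienes implication over term weights. The vocabulary, the weights, and the mean aggregation are illustrative assumptions; the paper's contribution is a different operator substituted into this same ranking step.

```python
import numpy as np

# Sketch: ranking stored questions by a fuzzy implication score against the
# user's question. score(q, c) averages the Kleene-Dienes implication
# I(a, b) = max(1 - a, b) over term weights -- the baseline operator the
# paper improves on. Vocabulary and weights are illustrative assumptions.

def score(q, c):
    # degree to which the candidate's terms cover the user question's terms
    return float(np.mean(np.maximum(1.0 - q, c)))

# term weights over a shared vocabulary,
# here ["python", "install", "error", "windows"]
user_q = np.array([0.9, 0.7, 0.1, 0.0])
candidates = {
    "how to install python on windows": np.array([0.8, 0.9, 0.1, 0.8]),
    "python import error":              np.array([0.8, 0.1, 0.9, 0.0]),
}
ranked = sorted(candidates, key=lambda k: score(user_q, candidates[k]),
                reverse=True)
```

Note how I(a, b) = max(1 - a, b) is large whenever the user's weight a is small, regardless of the candidate; this insensitivity to low-weight terms is the kind of mismatch with semantic inclusion that motivates replacing the operator.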
international conference on industrial, engineering and other applications of applied intelligent systems | 2007
Bumghi Choi; Ju-Hong Lee; Tae-Su Park
The learning algorithms of multilayered feed-forward networks can be classified into two categories: gradient and non-gradient methods. Gradient descent algorithms such as backpropagation (BP) and its variations are widely used in many application areas because of their simplicity. However, the most serious problem associated with BP is the local-minima problem. We propose an improved gradient descent algorithm intended to weaken the local-minima problem without sacrificing the simplicity of the gradient descent method. It is called the dual gradient learning algorithm, in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are evaluated and trained separately. To do so, target values for the hidden-layer units are introduced and used as evaluation criteria for the lower connections. Simulations on several benchmark problems and a real classification task demonstrate the validity of the proposed method.
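The distinctive step here is giving the hidden layer its own targets. A minimal sketch on XOR: hidden targets are derived by moving the hidden activations one step against the propagated error, the upper weights are trained on the output targets, and the lower weights are trained to push the activations toward the hidden targets. The target-construction rule is an illustrative assumption, not the paper's exact rule.

```python
import numpy as np

# Sketch of the dual-gradient idea: derive target values for the hidden
# units, then train the lower (input-to-hidden) weights toward those hidden
# targets while the upper (hidden-to-output) weights are trained toward the
# output targets. The hidden-target rule below (one error-propagation step)
# is an illustrative assumption, not the paper's exact construction.

rng = np.random.default_rng(3)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 6)); b1 = np.zeros(6)
W2 = rng.normal(0.0, 1.0, (6, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
h0 = sig(X @ W1 + b1)
loss0 = float(np.mean((sig(h0 @ W2 + b2) - y) ** 2))  # error before training

for _ in range(4000):
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    # hidden targets: activations moved one step against propagated error
    h_target = h - d_out @ W2.T
    # upper connections trained on the output targets
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    # lower connections trained to push h toward its target values
    d_h = (h - h_target) * h * (1 - h)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

h1 = sig(X @ W1 + b1)
loss1 = float(np.mean((sig(h1 @ W2 + b2) - y) ** 2))  # error after training
```

Because each layer now has an explicit target, the lower connections can be evaluated on their own error term instead of only through the output error, which is the separation the abstract describes.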
Etri Journal | 2007
Deok-Hwan Kim; Jae-Won Song; Ju-Hong Lee; Bumghi Choi
Journal of KIISE:Software and Applications | 2006
Bumghi Choi; Ju-Hong Lee; Tae-Su Park
Lecture Notes in Computer Science | 2003
Bumghi Choi; Ju-Hong Lee; Sun Park