Hirotaka Nakayama
Konan University
Publications
Featured research published by Hirotaka Nakayama.
Optimization and Engineering | 2002
Hirotaka Nakayama; Masao Arakawa; Rie Sasaki
In many practical engineering design problems, the form of the objective functions is not given explicitly in terms of the design variables. Instead, given values of the design variables, the values of the objective functions are obtained by some analysis, such as structural, fluid-mechanic, or thermodynamic analysis. These analyses are usually considerably time consuming. In order to keep the number of analyses as small as possible, we suggest a method in which optimization is performed in parallel with predicting the form of the objective functions. In this paper, radial basis function networks (RBFN) are employed to predict the form of the objective functions, and genetic algorithms (GA) are adopted to search for the optimum of the predicted objective function. The effectiveness of the suggested method is shown through some numerical examples.
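The loop described in this abstract — fit a surrogate, optimize the prediction, run the expensive analysis only at the predicted optimum — can be sketched minimally in numpy. The Gaussian RBF width, the random-search stand-in for the GA, and the toy quadratic objective are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf_fit(X, y, c=1.0):
    # Gaussian RBF interpolation: solve (Phi + jitter*I) w = y
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / c)
    return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X, w, Xq, c=1.0):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / c) @ w

def expensive_objective(x):
    # stand-in for a costly structural/fluid/thermal analysis
    return ((x - 0.3) ** 2).sum()

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10, 2))               # initial design samples
y = np.array([expensive_objective(x) for x in X])

for _ in range(15):
    w = rbf_fit(X, y)                              # fit surrogate to data so far
    pop = rng.uniform(-1, 1, size=(200, 2))        # random-search stand-in for the GA
    best = pop[np.argmin(rbf_predict(X, w, pop))]  # optimum of the *predicted* objective
    X = np.vstack([X, best])                       # run the expensive analysis only there
    y = np.append(y, expensive_objective(best))

print(len(y), round(float(y.min()), 4))
```

Only 25 expensive evaluations are spent in total; the surrogate is queried 200 times per iteration at negligible cost.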
European Journal of Operational Research | 2001
Yeboon Yun; Hirotaka Nakayama; Tetsuzo Tanino; Masao Arakawa
In many practical problems, such as engineering design problems, the criteria functions cannot be given explicitly in terms of the design variables. Under this circumstance, the values of the criteria functions for given values of the design variables are usually obtained by some analysis, such as structural, thermodynamic, or fluid-mechanic analysis. These analyses require considerable computation time, so it is not realistic to apply existing interactive optimization methods to those problems. On the other hand, there have been many attempts to use genetic algorithms (GA) for generating efficient frontiers in multi-objective optimization problems. This approach is effective for problems with two or three objective functions. However, these methods usually cannot provide a good approximation to the exact efficient frontier within a small number of generations, given practical time limitations. The present paper proposes a method combining generalized data envelopment analysis (GDEA) and GA for generating efficient frontiers in multi-objective optimization problems. GDEA removes dominated design alternatives faster than methods based on GA alone. The proposed method can yield desirable efficient frontiers in non-convex as well as convex problems. The effectiveness of the proposed method is shown through several numerical examples.
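The role GDEA plays here — discarding dominated design alternatives quickly — rests on the standard Pareto-dominance test. A minimal numpy sketch with toy objective vectors (minimization assumed; the data are illustrative, not from the paper):

```python
import numpy as np

def nondominated(F):
    """Mask of points not dominated by any other row of F (minimization).

    A point dominates another if it is no worse in every objective and
    strictly better in at least one.
    """
    n = len(F)
    keep = np.ones(n, bool)
    for i in range(n):
        # i is dominated if some point is <= in all objectives, < in one
        dom = (F <= F[i]).all(1) & (F < F[i]).any(1)
        dom[i] = False
        if dom.any():
            keep[i] = False
    return keep

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
print(nondominated(F).tolist())  # → [True, True, False, True]
```

Here [3.0, 3.0] is removed because [2.0, 3.0] is no worse in both objectives and strictly better in the first.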
international symposium on neural networks | 2003
Min Yoon; Yeboon Yun; Hirotaka Nakayama
The support vector algorithm focuses on maximizing the shortest distance between the sample points and the discriminating hyperplane. This paper suggests a total margin algorithm, which considers the distances between all data points and the separating hyperplane. The method extends existing support vector machine algorithms and improves the generalization error bound. Numerical studies show that the total margin algorithm performs well compared with previous methods.
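The total-margin idea — reward the summed distances of all points on the correct side, penalize all violations — can be sketched as a small linear program. The bounded-weight normalization, the penalty C, and the toy data below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

# toy 2-D data: two linearly separable classes (illustrative only)
X = np.array([[1, 1], [2, 2], [2, 1], [-1, -1], [-2, -2], [-1, -2]], float)
y = np.array([1, 1, 1, -1, -1, -1], float)
n, d = X.shape
C = 10.0  # penalty on points falling on the wrong side

# variables: [w (d), b, eta (n), xi (n)], with for each i:
#   y_i (w.x_i + b) = eta_i - xi_i,  eta_i, xi_i >= 0
# maximize sum(eta) - C*sum(xi)  ->  minimize -sum(eta) + C*sum(xi)
A_eq = np.hstack([y[:, None] * X, y[:, None], -np.eye(n), np.eye(n)])
b_eq = np.zeros(n)
c = np.concatenate([np.zeros(d + 1), -np.ones(n), C * np.ones(n)])
bounds = [(-1, 1)] * d + [(None, None)] + [(0, None)] * (2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
w, b = res.x[:d], res.x[d]
print(bool(((X @ w + b) * y).min() >= -1e-6))  # all points on the correct side?
```

Unlike the standard formulation, the objective sums the margins eta_i of every point rather than maximizing only the minimum one; the weight bounds keep the problem from scaling w without limit.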
international conference on artificial neural networks | 2001
Hirotaka Nakayama; Masao Arakawa; Rie Sasaki
In many practical engineering design problems, the form of the objective function is not given explicitly in terms of the design variables. Instead, given values of the design variables, the value of the objective function is obtained by some analysis, such as structural, fluid-mechanic, or thermodynamic analysis. These analyses are usually considerably time consuming. In order to keep the number of analyses as small as possible, we suggest a method in which optimization is performed in parallel with predicting the form of the objective function. In this paper, radial basis function networks (RBFN) are employed to predict the form of the objective function, and genetic algorithms (GA) to search for the optimum of the predicted objective function. The effectiveness of the suggested method is shown through some numerical examples.
international symposium on neural networks | 2003
Hirotaka Nakayama; Masao Arakawa; K. Washino
In many practical engineering design problems, the form of the objective functions is not given explicitly in terms of the design variables. Instead, given values of the design variables, the values of the objective functions are obtained by real or computational experiments, such as structural analysis, fluid-mechanic analysis, or thermodynamic analysis. These experiments are usually considerably expensive. In order to keep the number of experiments as small as possible, optimization is performed in parallel with predicting the form of the objective functions; response surface methods (RSM) are well known along these lines. This paper suggests applying support vector machines (SVM) to predict the objective functions. One of the most important tasks in this approach is to allocate samples judiciously so that the number of experiments is as small as possible. It is shown that the information carried by the support vectors can be used effectively for this purpose. The effectiveness of the suggested method is shown through numerical examples.
Archive | 2002
Yeboon Yun; Hirotaka Nakayama; Tetsuzo Tanino
So far, several kinds of DEA models have been developed to evaluate the efficiency of DMUs, for example the CCR model, the BCC model, and the FDH model. These models are characterized by how they determine the production possibility set: a convex cone, a convex hull, or a free disposable hull of the observed data set. In this paper, we introduce the GDEA_D model, based on production possibility, as a dual approach to GDEA [13], together with the concept of α_D-efficiency in the GDEA_D model. In addition, we establish the relations between the GDEA_D model and existing DEA models, and interpret the meaning of an optimal value of the problem (GDEA_D). We thereby show that it is possible to evaluate the efficiency of each decision making unit by considering surpluses of inputs and slacks of outputs as well as technical efficiency. Finally, an illustrative example shows that GDEA_D can reveal the domination relations among all decision making units.
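For comparison with GDEA_D, the classical CCR model mentioned above can be computed directly in its multiplier form with an off-the-shelf LP solver. A sketch; the one-input/one-output toy data are an assumption for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """CCR efficiency of DMU o (multiplier form):
    maximize u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # variables: [v (input weights, m), u (output weights, s)]
    c = np.concatenate([np.zeros(m), -Y[o]])       # minimize -u.y_o
    A_ub = np.hstack([-X, Y])                      # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[o], np.zeros(s)])[None, :]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun

X = np.array([[2.0], [4.0], [8.0]])   # one input per DMU
Y = np.array([[2.0], [4.0], [4.0]])   # one output per DMU
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])  # → [1.0, 1.0, 0.5]
```

With a single input and output this reduces to each DMU's output/input ratio divided by the best ratio, so the third DMU scores 0.5.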
Archive | 2003
Takeshi Asada; Hirotaka Nakayama
Support vector machines (SVMs) are now regarded as a powerful method for solving pattern recognition problems. In general, SVMs tend to overlearn, and the notion of a soft margin is introduced to overcome this difficulty. It is difficult, however, to decide the weights on the slack variables that reflect the soft margin. In this paper, the soft margin method is extended to multi-objective linear programming (MOLP), and a goal programming method is used to solve the MOLP.
international conference on neural information processing | 2002
Hirotaka Nakayama; K. Washino
In many practical engineering design problems, the form of the objective function is not given explicitly in terms of the design variables. Under this circumstance, obtaining the value of the objective function usually requires expensive computation, through some analysis such as structural analysis or fluid-mechanic analysis. In order to keep the number of analyses as small as possible, we suggest a method in which optimization is performed in parallel with predicting the form of the objective function. In this paper, a support vector machine (SVM) is employed to predict the form of the objective function, and genetic algorithms (GA) to search for the optimum of the predicted objective function.
international conference on knowledge-based and intelligent information and engineering systems | 2003
Hirotaka Nakayama; Yeboon Yun; Takeshi Asada; Min Yoon
Support vector machines (SVMs) are gaining much popularity as effective methods in machine learning. In pattern classification problems with two class sets, their basic idea is to find a maximal margin separating hyperplane, which gives the greatest separation between the classes in a high-dimensional feature space. However, the idea of maximal margin separation is not entirely new: in the 1960s the multi-surface method (MSM) was suggested by Mangasarian, and in the 1980s linear classifiers using goal programming were developed extensively. This paper considers SVMs from the viewpoint of goal programming, and proposes a new method based on the total margin instead of the shortest distance between the learning data and the separating hyperplane.
Archive | 2003
Hirotaka Nakayama
Recently, data mining has been attracting researchers' interest as a tool for extracting knowledge from large-scale databases. Although there have been several approaches to data mining, we focus on mathematical programming approaches (in particular, multi-objective and goal programming; MOP/GP) in this paper. Among them, the support vector machine (SVM) is gaining popularity as a method for machine learning. In pattern classification problems with two class sets, its idea is to find a maximal margin separating hyperplane, which gives the greatest separation between the classes in a high-dimensional feature space. This task is performed by solving a quadratic programming problem in the traditional formulation, and can be reduced to solving a linear program in another formulation. However, the idea of maximal margin separation is not entirely new: in the 1960s the multi-surface method (MSM) was suggested by Mangasarian, and in the 1980s linear classifiers using goal programming were developed extensively.