Yuji Sakane
Osaka Prefecture University
Publication
Featured research published by Yuji Sakane.
Systems, Man and Cybernetics | 2009
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
Evolutionary multiobjective optimization (EMO) is an active research area in the field of evolutionary computation. EMO algorithms are designed to find a non-dominated solution set that approximates the entire Pareto front of a multiobjective optimization problem. Whereas EMO algorithms usually work well on two-objective and three-objective problems, their search ability is degraded by the increase in the number of objectives. One difficulty in handling many-objective problems is the exponential increase in the number of non-dominated solutions necessary for approximating the entire Pareto front. A simple countermeasure to this difficulty is to use large populations in EMO algorithms. In this paper, we examine the behavior of EMO algorithms with large populations (e.g., with 10,000 individuals) through computational experiments on multiobjective and many-objective knapsack problems with two, four, six, eight and ten objectives. We examine two totally different algorithms: NSGA-II and MOEA/D. NSGA-II is a Pareto dominance-based algorithm, while MOEA/D uses scalarizing functions. Their search ability is examined for various specifications of the population size under a fixed computation load. That is, we use the total number of examined solutions as the stopping condition of each algorithm. Thus, the use of a very large population leads to termination at an early generation (e.g., the 20th generation). It is demonstrated through computational experiments that the use of too large a population makes NSGA-II very slow and inefficient. On the other hand, MOEA/D works well even when it is executed with a very large population. We also discuss why MOEA/D works well even when the population size is unusually large.
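A minimal sketch of the fixed computation load setting described above: because the total number of examined solutions is held constant, a larger population directly means termination at an earlier generation. The budget value below is an assumption (chosen to be consistent with the 10,000-individual, 20th-generation example), not necessarily the value used in the paper.

    # Pareto dominance for a maximization problem (used to define non-dominated sets).
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    # Fixed computation load: the total number of examined solutions is constant,
    # so the population size determines the final generation.
    TOTAL_EXAMINED_SOLUTIONS = 200_000  # assumed budget
    for population_size in (100, 1_000, 10_000):
        final_generation = TOTAL_EXAMINED_SOLUTIONS // population_size
        print(f"population {population_size:>6} -> stops around generation {final_generation}")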
International Conference on Evolutionary Multi-Criterion Optimization | 2009
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
It is well-known that multiobjective problems with many objectives are difficult for Pareto dominance-based algorithms such as NSGA-II and SPEA. This is because almost all individuals in a population are non-dominated with each other in the presence of many objectives. In such a population, the Pareto dominance relation can generate no strong selection pressure toward the Pareto front. This leads to poor search ability of Pareto dominance-based algorithms for many-objective problems. Recently it has been reported that better results can be obtained for many-objective problems by the use of scalarizing functions. The weighted sum usually works well in scalarizing function-based algorithms when the Pareto front is convex. However, we need other functions such as the weighted Tchebycheff when the Pareto front is non-convex. In this paper, we propose an idea of automatically choosing between the weighted sum and the weighted Tchebycheff for each individual in each generation. The characteristic feature of the proposed idea is to use the weighted Tchebycheff only when it is needed for individuals along non-convex regions of the Pareto front. The weighted sum is used for the other individuals in each generation. The proposed idea is combined with a high-performance scalarizing function-based algorithm called MOEA/D (multiobjective evolutionary algorithm based on decomposition) of Zhang and Li (2007). Effectiveness of the proposed idea is demonstrated through computational experiments on modified multiobjective knapsack problems with non-convex Pareto fronts.
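As a rough illustration of the two scalarizing functions discussed above (a sketch, not the paper's code; the function names, NumPy usage, and maximization convention are assumptions):

    import numpy as np

    def weighted_sum(f, w):
        # Weighted sum of objective vector f with weight vector w (larger is better).
        return float(np.dot(w, f))

    def weighted_tchebycheff(f, w, z_star):
        # Weighted Tchebycheff distance from f to the reference point z_star
        # (smaller is better); this is the function needed on non-convex regions.
        return float(np.max(w * (z_star - f)))

The paper's proposal is to apply the weighted Tchebycheff only to the individuals that lie along non-convex regions of the Pareto front and the weighted sum to all other individuals in each generation.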
Genetic and Evolutionary Computation Conference | 2010
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
The use of Pareto dominance for fitness evaluation has been the mainstream in evolutionary multiobjective optimization for the last two decades. Recently, it has been pointed out in some studies that Pareto dominance-based algorithms do not always work well on multiobjective problems with many objectives. Scalarizing function-based fitness evaluation is a promising alternative to Pareto dominance, especially for the case of many objectives. A representative scalarizing function-based algorithm is MOEA/D (multiobjective evolutionary algorithm based on decomposition) of Zhang & Li (2007). Its high search ability has already been shown for various problems. One important implementation issue of MOEA/D is the choice of a scalarizing function, because its search ability strongly depends on this choice. It is, however, not easy to choose an appropriate scalarizing function for each multiobjective problem. In this paper, we propose an idea of using different types of scalarizing functions simultaneously. For example, both the weighted Tchebycheff (Chebyshev) and the weighted sum are used for fitness evaluation. We examine two methods for implementing our idea. One is to use multiple grids of weight vectors, and the other is to assign a different scalarizing function alternately to each weight vector in a single grid.
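The second implementation method mentioned above, assigning a different scalarizing function alternately to each weight vector in a single grid, might look roughly like the following two-objective sketch (the grid resolution and the alternation rule are illustrative assumptions):

    H = 10  # grid divisions for a two-objective weight grid (assumed value)
    weight_vectors = [(i / H, 1 - i / H) for i in range(H + 1)]

    # Alternate the scalarizing function type over the single grid of weight vectors.
    function_types = ["weighted sum" if i % 2 == 0 else "weighted Tchebycheff"
                      for i in range(len(weight_vectors))]

    for w, kind in zip(weight_vectors, function_types):
        print(w, kind)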
Genetic and Evolutionary Computation Conference | 2010
Hisao Ishibuchi; Noritaka Tsukamoto; Yuji Sakane; Yusuke Nojima
Pareto dominance-based algorithms have been the mainstream in the field of evolutionary multiobjective optimization (EMO) for the last two decades. It is, however, well known that Pareto dominance-based algorithms do not always work well on many-objective problems with more than three objectives. Alternative frameworks are currently being studied very actively in the EMO community. One promising framework is the use of an indicator function to find a good solution set of a multiobjective problem. EMO algorithms with this framework are called indicator-based evolutionary algorithms (IBEAs), where the hypervolume measure is frequently used as an indicator. IBEAs with the hypervolume measure have strong theoretical support and high search ability. One practical difficulty of such IBEAs is that the hypervolume calculation requires a long computation time, especially when we have many objectives. In this paper, we propose an idea of using a scalarizing function-based hypervolume approximation method in IBEAs. We explain how the proposed idea can be implemented in IBEAs. We also demonstrate through computational experiments that the proposed idea can drastically decrease the computation time of IBEAs without severe performance deterioration.
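One way to picture the proposed speed-up is sketched below under assumed names: if indicator-based fitness is defined through calls to a hypervolume routine (a contribution-style definition is used here purely for illustration and is not necessarily the paper's fitness assignment), then the exact routine can be swapped for the scalarizing function-based approximation without changing the surrounding algorithm.

    def hypervolume_contributions(population, hv):
        # Fitness of each individual as its hypervolume contribution, where `hv` is
        # any callable that computes the (exact or approximate) hypervolume of a set.
        total = hv(population)
        return [total - hv(population[:i] + population[i + 1:])
                for i in range(len(population))]

    # hypervolume_contributions(pop, hv=exact_hypervolume)         # slow with many objectives
    # hypervolume_contributions(pop, hv=approximate_hypervolume)   # the proposed speed-up idea
    # (exact_hypervolume and approximate_hypervolume are hypothetical callables.)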
Congress on Evolutionary Computation | 2009
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
Cellular evolutionary algorithms usually use a single neighborhood structure for local selection. When a new solution is to be generated by crossover and/or mutation for a cell, a pair of parent solutions is selected from its neighbors. The current solution at the cell is replaced with the newly generated offspring if the offspring has a higher fitness value than the current one. That is, the "replace-if-better" policy is used for the replacement of the current solution. Local selection, crossover, mutation and replacement are iterated at every cell in cellular algorithms. A recently proposed multiobjective evolutionary algorithm called MOEA/D by Zhang and Li (2007) can be viewed as a cellular algorithm where each cell has its own scalarizing fitness function with a different weight vector. We can introduce a spatial structure to MOEA/D through the Euclidean distance between weight vectors. Its main difference from standard cellular algorithms is that, in MOEA/D, a newly generated offspring for a cell is compared not only with the current solution of the cell but also with its neighbors for local replacement. In this paper, we examine the effect of local replacement on the search ability of a cellular version of MOEA/D. Whereas the same neighborhood structure was used for local selection and local replacement in the original MOEA/D, we examine the use of different neighborhood structures for local selection and local replacement. It is shown through computational experiments on multiobjective 0/1 knapsack problems with two, four and six objectives that local replacement plays an important role in MOEA/D, especially for many-objective optimization problems.
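A minimal sketch of building the two neighborhood structures from Euclidean distances between weight vectors, as in the cellular view of MOEA/D described above; the weight grid, neighborhood sizes, and names are assumptions for illustration.

    import numpy as np

    def nearest_neighbors(weight_vectors, size):
        # For each weight vector (cell), indices of its `size` nearest weight
        # vectors by Euclidean distance, including itself.
        w = np.asarray(weight_vectors, dtype=float)
        dists = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=-1)
        return np.argsort(dists, axis=1)[:, :size]

    H = 10  # two-objective weight grid, illustrative
    grid = [(i / H, 1 - i / H) for i in range(H + 1)]
    selection_neighbors = nearest_neighbors(grid, size=5)     # where parents are picked
    replacement_neighbors = nearest_neighbors(grid, size=11)  # where offspring may replace solutions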
Congress on Evolutionary Computation | 2009
Hisao Ishibuchi; Noritaka Tsukamoto; Yuji Sakane; Yusuke Nojima
This paper proposes an idea of approximating the hypervolume of a non-dominated solution set using a number of achievement scalarizing functions with uniformly distributed weight vectors. Each achievement scalarizing function with a different weight vector is used to measure the distance from the reference point used for hypervolume calculation to the attainment surface of the non-dominated solution set along the search direction specified by its weight vector. Our idea is to approximate the hypervolume by the average distance from the reference point to the attainment surface over a large number of uniformly distributed weight vectors (i.e., over various search directions). We examine the effect of the number of weight vectors (i.e., the number of search directions) on the approximation accuracy and the computation time of the proposed approach. As expected, experimental results show that the approximation accuracy is improved by increasing the number of weight vectors. It is also shown that the proposed approach needs much less computation time than the exact hypervolume calculation for a six-objective knapsack problem, even when we use about 100,000 weight vectors.
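A rough sketch of the averaging idea described above, with simplifications: weight vectors are sampled randomly rather than taken from a uniform grid, maximization is assumed, and any normalization or scaling used in the paper's exact formulation is omitted.

    import numpy as np

    def average_attainment_distance(solutions, reference, n_directions=10_000, seed=0):
        # Average distance from the reference point to the attainment surface of a
        # non-dominated set, measured along many weighted search directions.
        rng = np.random.default_rng(seed)
        sols = np.asarray(solutions, dtype=float)   # maximization assumed
        ref = np.asarray(reference, dtype=float)
        w = rng.random((n_directions, sols.shape[1])) + 1e-12
        w /= w.sum(axis=1, keepdims=True)           # weight vectors on the simplex
        # Along direction w, a single solution s covers a distance of
        # min_i (s_i - r_i) / w_i; the attainment surface of the whole set is the
        # maximum of this value over all solutions.
        shifted = sols - ref                        # shape (n_solutions, n_objectives)
        per_direction = (shifted[None, :, :] / w[:, None, :]).min(axis=2).max(axis=1)
        return float(per_direction.mean())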
Genetic and Evolutionary Computation Conference | 2009
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
A new trend in evolutionary multi-objective optimization (EMO) is the handling of a multi-objective problem as an optimization problem of an indicator function. A number of approaches have been proposed under the name of indicator-based evolutionary algorithms (IBEAs). In IBEAs, the entire population usually corresponds to a solution of the indicator optimization problem. In this paper, we show how hypervolume maximization can be handled as single-objective and multi-objective problems by coding a set of solutions of the original multi-objective problem as an individual. Our single-objective formulation maximizes the hypervolume under constraint conditions on the number of non-dominated solutions. On the other hand, our multi-objective formulation minimizes the number of non-dominated solutions while maximizing their hypervolume.
IEEE International Conference on Fuzzy Systems | 2009
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
A large number of non-dominated solutions are often obtained by a single run of an evolutionary multiobjective optimization (EMO) algorithm. In the EMO research area, it is usually assumed that a single solution is to be chosen from the obtained non-dominated solutions by the decision maker. It is, however, time-consuming and not easy for the decision maker to examine a large number of obtained non-dominated solutions. Motivated by these discussions, in our former study we proposed single-objective and multiobjective formulations of solution selection problems in order to present only a small number of representative non-dominated solutions to the decision maker. The basic idea is to minimize the number of solutions to be presented while maximizing their hypervolume. A number of single-objective formulations can be derived from such a two-objective solution selection problem. In this paper, single-objective solution selection is performed as a post-processing procedure of EMO algorithms to select a prespecified number of non-dominated solutions (e.g., 10 or 20 solutions). Through computational experiments on multiobjective 0/1 knapsack problems, we examine the characteristic features of the selected non-dominated solutions. We also examine the effect of the choice of a reference point for hypervolume calculation on the distribution of the selected non-dominated solutions.
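For intuition only, the sketch below picks a prespecified number of solutions with a greedy hypervolume rule in the two-objective case; the paper instead formulates solution selection as explicit single-objective and multiobjective optimization problems, so the greedy rule and the 2-D hypervolume routine here are simplifying assumptions.

    def hypervolume_2d(points, reference):
        # Exact two-objective hypervolume (maximization) with respect to a reference point.
        pts = sorted(points, reverse=True)   # sort by f1, descending
        hv, prev_f2 = 0.0, reference[1]
        for f1, f2 in pts:
            if f2 > prev_f2:
                hv += (f1 - reference[0]) * (f2 - prev_f2)
                prev_f2 = f2
        return hv

    def greedy_select(points, k, reference):
        # Greedily add the solution whose inclusion increases the hypervolume most,
        # until k solutions (e.g., 10 or 20) have been selected.
        selected, candidates = [], list(points)
        for _ in range(min(k, len(candidates))):
            best = max(candidates,
                       key=lambda p: hypervolume_2d(selected + [p], reference))
            selected.append(best)
            candidates.remove(best)
        return selected

    # Example: pick 3 representative solutions from a small non-dominated set.
    front = [(1, 9), (3, 8), (5, 6), (7, 4), (8, 3), (9, 1)]
    print(greedy_select(front, k=3, reference=(0, 0)))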
Soft Computing | 2011
Hisao Ishibuchi; Yuji Sakane; Noritaka Tsukamoto; Yusuke Nojima
In cellular algorithms, a single neighborhood structure for local selection is usually assumed to specify a set of neighbors for each cell. There exist, however, a number of examples with two neighborhood structures in nature. One is for local selection for mating, and the other is for local competition such as the fight for water and sunlight among neighboring plants. The aim of this paper is to show several implementations of cellular algorithms with two neighborhood structures for single-objective and multi-objective optimization problems. Since local selection has already been utilized in cellular algorithms in the literature, the main issue of this paper is how to implement the concept of local competition. We show three ideas about its utilization: Local elitism, local ranking, and local replacement. Local elitism and local ranking are used for single-objective optimization to increase the diversity of solutions. On the other hand, local replacement is used for multi-objective optimization to improve the convergence of solutions to the Pareto frontier. The main characteristic feature of our approach is that the two neighborhood structures can be specified independently of each other. Thus, we can separately examine the effect of each neighborhood structure on the behavior of cellular algorithms.
Transactions of the Institute of Systems, Control and Information Engineers | 2010
Noritaka Tsukamoto; Yuji Sakane; Yusuke Nojima; Hisao Ishibuchi