Yuki Tanigaki
Osaka Prefecture University
Publications
Featured research published by Yuki Tanigaki.
International Conference on Evolutionary Multi-Criterion Optimization | 2015
Hisao Ishibuchi; Hiroyuki Masuda; Yuki Tanigaki; Yusuke Nojima
In this paper, we propose the use of a modified distance calculation in generational distance (GD) and inverted generational distance (IGD). These performance indicators evaluate the quality of an obtained solution set in comparison with a pre-specified reference point set. Both indicators are based on the distance between a solution and a reference point. The Euclidean distance in the objective space is usually used for this calculation. Our idea is to take the dominance relation between a solution and a reference point into account when we calculate their distance. If a solution is dominated by a reference point, the Euclidean distance is used for their distance calculation with no modification. However, if neither dominates the other, we calculate the minimum distance from the reference point to the region dominated by the solution. This distance can be viewed as a measure of the inferiority of the solution (i.e., the insufficiency of its objective values) in comparison with the reference point. We demonstrate through simple examples that some Pareto non-compliant results of GD and IGD are resolved by the modified distance calculation. We also show that IGD with the modified distance calculation is weakly Pareto compliant, whereas the original IGD is Pareto non-compliant.
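The modified distance admits a very compact sketch (function names below are mine, minimization assumed): the distance from a reference point to the region dominated by a solution reduces to a per-objective max with zero, which collapses to the plain Euclidean distance whenever the reference point dominates the solution.

```python
import math

def modified_distance(ref, sol):
    # Distance from reference point `ref` to the region dominated by
    # solution `sol` (minimization).  Only objectives where the solution
    # is worse than the reference point contribute; when `ref` dominates
    # `sol`, this reduces to the ordinary Euclidean distance.
    return math.sqrt(sum(max(s - r, 0.0) ** 2 for r, s in zip(ref, sol)))

def igd_modified(reference_set, solution_set):
    # IGD with the modified distance: for each reference point, take the
    # modified distance to the nearest solution, then average.
    return sum(min(modified_distance(r, s) for s in solution_set)
               for r in reference_set) / len(reference_set)
```

For a reference point (1, 1) and a non-dominated solution (0, 3), the modified distance is 2.0 (only the second objective is insufficient), while the Euclidean distance would be √5.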
Multiple Criteria Decision Making | 2014
Hisao Ishibuchi; Hiroyuki Masuda; Yuki Tanigaki; Yusuke Nojima
Recently the inverted generational distance (IGD) measure has been frequently used for performance evaluation of evolutionary multi-objective optimization (EMO) algorithms on many-objective problems. When the IGD measure is used to evaluate an obtained solution set of a many-objective problem, we have to specify a set of reference points as an approximation of the Pareto front. The IGD measure is calculated as the average distance from each reference point to the nearest solution in the solution set, which can be viewed as an approximate distance from the Pareto front to the solution set in the objective space. Thus IGD-based performance evaluation depends entirely on the specification of the reference points. In this paper, we illustrate difficulties in specifying reference points. First we discuss the number of reference points required to approximate the entire Pareto front of a many-objective problem. Next we show some simple examples where the uniform sampling of reference points on the known Pareto front leads to counter-intuitive results. Then we discuss how to specify reference points when the Pareto front is unknown. In this case, a set of reference points is usually constructed from the solutions obtained by the EMO algorithms to be evaluated. We show that the selection of the EMO algorithms used to construct the reference points has a large effect on the evaluated performance of each algorithm.
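The plain IGD computation described above can be sketched as follows (minimization with Euclidean distance; function names are mine). The sketch makes the paper's point visible in code: the returned value is an average over whatever reference points are supplied, so the evaluation hinges entirely on that choice.

```python
import math

def igd(reference_points, solutions):
    # Average, over all reference points, of the Euclidean distance to
    # the nearest obtained solution in the objective space.
    def euclid(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(euclid(r, s) for s in solutions)
               for r in reference_points) / len(reference_points)
```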
Parallel Problem Solving from Nature | 2014
Hisao Ishibuchi; Yuki Tanigaki; Hiroyuki Masuda; Yusuke Nojima
It has been reported for multi-objective knapsack problems that the recombination of similar parents often improves the performance of evolutionary multi-objective optimization (EMO) algorithms. Recently performance improvement was also reported by exchanging only a small number of genes between two parents (i.e., crossover with a very small gene exchange probability) without choosing similar parents. In this paper, we examine these performance improvement schemes through computational experiments where NSGA-II is applied to 500-item knapsack problems with 2-10 objectives. We measure the parent-parent distance and the parent-offspring distance in computational experiments. Clear performance improvement is observed when the parent-offspring distance is small. To further examine this observation, we implement a distance-based crossover operator where the parent-offspring distance is specified as a user-defined parameter. Performance of NSGA-II is examined for various parameter values. Experimental results show that an appropriate parameter value (parent-offspring distance) is surprisingly small. It is also shown that a very small parameter value is beneficial for diversity maintenance.
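For binary knapsack strings, the distance-based crossover described above can be sketched as follows (a hypothetical implementation, not the paper's exact operator; the parameter `d` is the user-defined parent-offspring distance):

```python
import random

def distance_based_crossover(parent_a, parent_b, d, rng=random):
    # Start from a copy of parent_a and move it toward parent_b at up to
    # `d` randomly chosen positions where the parents differ, so the
    # Hamming distance between parent_a and the offspring is at most d.
    child = list(parent_a)
    differing = [i for i, (x, y) in enumerate(zip(parent_a, parent_b)) if x != y]
    for i in rng.sample(differing, min(d, len(differing))):
        child[i] = parent_b[i]
    return child
```

With a very small `d` (the regime the experiments find beneficial), the offspring stays close to one parent regardless of how far apart the two parents are.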
Soft Computing | 2014
Yuki Tanigaki; Kaname Narukawa; Yusuke Nojima; Hisao Ishibuchi
Many-objective optimization has attracted increasing attention in the evolutionary multi-objective optimization (EMO) community. It has been repeatedly demonstrated that many-objective optimization problems with four or more objectives are very difficult for EMO algorithms to solve. Although a number of performance improvement attempts have been proposed, many-objective optimization remains difficult for EMO algorithms. In our previous study, we proposed a preference-based approach where Gaussian functions on a hyperplane in the objective space are used for preference representation. In this paper, we examine the behavior of our approach in handling combinatorial many-objective problems. Through computational experiments on multi-objective knapsack problems with 2-10 objectives, a set of well-distributed solutions over the preferred regions is obtained for each test problem. A trade-off relation between convergence and diversity within the preferred regions is also observed through computational experiments.
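A minimal sketch of the preference representation (my own function names; objectives assumed non-negative and not all zero): each objective vector is projected onto the normalized hyperplane where the components sum to one, and then evaluated by a sum of Gaussian functions whose centers and spreads encode the decision maker's preferred regions.

```python
import math

def preference_value(objectives, centers, sigmas):
    # Project the objective vector onto the hyperplane sum(f_i) = 1,
    # then sum Gaussian functions centered on the preferred regions.
    # Larger values mean the solution lies closer to a preferred region.
    total = sum(objectives)
    p = [f / total for f in objectives]
    value = 0.0
    for c, s in zip(centers, sigmas):
        sq_dist = sum((pi - ci) ** 2 for pi, ci in zip(p, c))
        value += math.exp(-sq_dist / (2.0 * s ** 2))
    return value
```

A solution projecting exactly onto a Gaussian center scores 1.0 for that Gaussian, while solutions far from every center score near zero; used as a second criterion, this steers selection toward the preferred regions.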
Multiple Criteria Decision Making | 2014
Hisao Ishibuchi; Hiroyuki Masuda; Yuki Tanigaki; Yusuke Nojima
In the evolutionary multi-objective optimization (EMO) community, some well-known test problems have been frequently and repeatedly used to evaluate the performance of EMO algorithms. When a new EMO algorithm is proposed, its performance is evaluated on those test problems. Thus algorithm development can be viewed as being guided by test problems. A number of test problems have already been designed in the literature. Since the difficulty of designed test problems is usually evaluated by existing EMO algorithms through computational experiments, test problem design can be viewed as being guided by EMO algorithms. That is, EMO algorithms and test problems have been developed in a coevolutionary manner. The goal of this paper is to clearly illustrate such a coevolutionary development. We categorize EMO algorithms into four classes: non-elitist, elitist, many-objective, and combinatorial algorithms. In each category of EMO algorithms, we examine the relation between developed EMO algorithms and used test problems. Our examinations of test problems suggest the necessity of strong diversification mechanisms in many-objective EMO algorithms such as SMS-EMOA, MOEA/D and NSGA-III.
Genetic and Evolutionary Computation Conference | 2014
Kaname Narukawa; Yuki Tanigaki; Hisao Ishibuchi
This paper proposes to represent the preference of a decision maker by Gaussian functions on a hyperplane. The preference is used to evaluate non-dominated solutions as a second criterion instead of the crowding distance in NSGA-II. High performance of our proposal is demonstrated for many-objective DTLZ problems.
Soft Computing | 2016
Kaname Narukawa; Yu Setoguchi; Yuki Tanigaki; Markus Olhofer; Bernhard Sendhoff; Hisao Ishibuchi
Many-objective optimization has attracted much attention in evolutionary multi-objective optimization (EMO). This is because EMO algorithms developed so far often degrade in search ability on optimization problems with four or more objectives, which are frequently referred to as many-objective problems. One promising approach to handling many objectives is to incorporate the preference of a decision maker (DM) into EMO algorithms. With the preference, EMO algorithms can focus the search on the regions preferred by the DM, resulting in solutions close to the Pareto front around the preferred regions. Although a number of preference-based EMO algorithms have been proposed, it is not trivial for the DM to reflect his/her actual preference in the search. We previously proposed to represent the preference of the DM using Gaussian functions on a hyperplane. The DM specifies the center and spread vectors of the Gaussian functions so as to represent his/her preference. The preference handling is integrated into the framework of NSGA-II. This paper extends our previous work so that the obtained solutions follow the distribution of the specified Gaussian functions. The performance of our proposed method is demonstrated mainly for benchmark problems and real-world applications with a few objectives. We also show the applicability of our method to many-objective problems.
Congress on Evolutionary Computation | 2013
Hisao Ishibuchi; Yuki Tanigaki; Naoya Akedo; Yusuke Nojima
An important implementation issue in the design of hybrid evolutionary multiobjective optimization algorithms with local search (i.e., multiobjective memetic algorithms) is how to strike a balance between local search and global search. If local search is applied to all individuals at every generation, almost all computation time is spent by local search. As a result, global search ability of memetic algorithms is not well utilized. We can use three ideas for decreasing the computation load of local search. One idea is to apply local search to only a small number of individuals. This idea can be implemented by introducing a local search probability, which is used to choose only a small number of initial solutions for local search from the current population. Another idea is a periodical (i.e., intermittent) use of local search. This idea can be implemented by introducing a local search interval (e.g., every 10 generations), which is used to specify when local search is applied. The other idea is an early termination of local search. Local search for each initial solution is terminated after a small number of neighbors are examined. This idea can be implemented by introducing a local search length, which is the number of examined neighbors in a series of iterated local search from a single initial solution. In this paper, we discuss the use of these three ideas to strike a local-global search balance. Through computational experiments on a two-objective 500-item knapsack problem, we compare various settings of local search such as short local search from all individuals at every generation, long local search from only a few individuals at every generation, and periodical long local search from all individuals. Global search in this paper means genetic search by crossover and mutation in multiobjective memetic algorithms.
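The three budget-control ideas fit naturally into one loop skeleton. The sketch below is hypothetical and uses a simple single-criterion acceptance rule for brevity (the paper works in a multiobjective setting); the global-search step is left as a comment.

```python
import random

def memetic_search(population, evaluate, neighbor, generations,
                   ls_probability=0.1, ls_interval=10, ls_length=20,
                   rng=random):
    for gen in range(generations):
        # ... global search (crossover and mutation) would go here ...
        if gen % ls_interval != 0:
            continue                      # idea 2: local search interval
        for i, sol in enumerate(population):
            if rng.random() >= ls_probability:
                continue                  # idea 1: local search probability
            current = sol
            for _ in range(ls_length):    # idea 3: local search length
                cand = neighbor(current, rng)
                if evaluate(cand) > evaluate(current):
                    current = cand        # accept only improving neighbors
            population[i] = current
    return population
```

Each parameter caps the local-search budget along a different axis: how many individuals start local search, how often it runs, and how long each run lasts.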
Congress on Evolutionary Computation | 2015
Yuki Tanigaki; Hiroyuki Masuda; Yu Setoguchi; Yusuke Nojima; Hisao Ishibuchi
An important implementation issue in the design of hybrid evolutionary multiobjective optimization algorithms such as multiobjective genetic local search (MOGLS) is how to combine local search with evolutionary algorithms. It has been demonstrated that the performance of MOGLS strongly depends on the order of global search and local search. A balance between local search and global search also affects its search ability. We can use three ideas for designing high-performance MOGLS algorithms. One idea is to choose one of two options: local search after global search or global search after local search. In general, their appropriate order depends on the problem. Another idea is to use tuned parameter values to appropriately specify their balance. The other idea is to change both their order and the parameter values during the execution of MOGLS. This idea can be implemented by dividing the whole search period into some sub-periods (i.e., dividing all generations into some intervals of generations). The appropriate order and parameter values are assigned to each sub-period. In this paper, we propose off-line algorithm structure optimization for MOGLS. The effectiveness of the proposed idea is examined by computational experiments on a two-objective knapsack problem and a two-objective flowshop scheduling problem. Based on experimental results, we discuss the importance of structure optimization of MOGLS.
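The sub-period structure can be represented as plain data, which is exactly what an off-line optimizer would tune. A hypothetical sketch, where each sub-period carries its generation count, the order of global ("g") and local ("l") search, and its own parameter values:

```python
def run_with_schedule(schedule, global_step, local_step, state):
    # `schedule` is a list of sub-periods; each sub-period fixes the
    # order of global/local search and the parameter values used in it.
    for period in schedule:
        for _ in range(period["generations"]):
            for key in period["order"]:  # e.g. "gl" or "lg"
                if key == "g":
                    state = global_step(state, **period.get("g_params", {}))
                else:
                    state = local_step(state, **period.get("l_params", {}))
    return state
```

Because the schedule is just data, the off-line structure optimization amounts to searching over such lists: the order and parameters of every sub-period become decision variables of an outer optimization problem.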
Soft Computing | 2017
Yuki Tanigaki; Yusuke Nojima; Hisao Ishibuchi
We examine the performance of evolutionary multi-objective optimization (EMO) algorithms on various shapes of the search space in the objective space (i.e., the feasible region in the objective space). To analyze the advantages and disadvantages of each EMO algorithm with respect to the shape of the search space, we propose a meta-optimization method which can automatically create multi-objective optimization problems (MOPs) that clarify the advantages and disadvantages of EMO algorithms. In particular, we propose a two-level model to generate such MOPs. In the upper level, MOPs are handled as solutions, and some design variables of each MOP are optimized at this level. In the lower level, each MOP is used to calculate the relative performance between two EMO algorithms. The relative performance is regarded as the fitness of the MOP in the upper level. Thus, by maximizing the relative performance, we can obtain an MOP which differentiates the search performance between the two EMO algorithms. Through computational experiments, we obtained two interesting observations. One is that Pareto dominance-based EMO algorithms have a low ability to escape from local Pareto-optimal regions. The other is that it is difficult for decomposition- and indicator-based EMO algorithms to find solutions along the entire Pareto front.
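The upper level of the two-level model can be sketched as a simple hill climber over a MOP's design variables (hypothetical names; `fitness` stands in for the lower-level relative performance between the two EMO algorithms, which in the paper is obtained by actually running both on the candidate MOP):

```python
import random

def meta_optimize(mop, mutate, fitness, iterations, rng=random):
    # Upper level: treat the MOP's design variables as a solution and
    # keep the candidate that maximizes the lower-level fitness, i.e.
    # the relative performance between two EMO algorithms on that MOP.
    best, best_fit = mop, fitness(mop)
    for _ in range(iterations):
        cand = mutate(best, rng)
        cand_fit = fitness(cand)
        if cand_fit > best_fit:
            best, best_fit = cand, cand_fit
    return best, best_fit
```

Maximizing the performance gap between two algorithms drives the search toward MOPs on which one algorithm clearly outperforms the other, which is what exposes each algorithm's weakness.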