Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Youhei Akimoto is active.

Publication


Featured research published by Youhei Akimoto.


Algorithmica | 2012

Theoretical Foundation for CMA-ES from Information Geometry Perspective

Youhei Akimoto; Yuichi Nagata; Isao Ono; Shigenobu Kobayashi

This paper explores the theoretical basis of the covariance matrix adaptation evolution strategy (CMA-ES) from the information geometry viewpoint. To establish a theoretical foundation for the CMA-ES, we focus on the geometric structure of a Riemannian manifold of probability distributions equipped with the Fisher metric. We define a function on the manifold, the expectation of the fitness over the sampling distribution, and regard the goal of updating the parameters of the sampling distribution in the CMA-ES as maximization of this expected fitness. We investigate steepest ascent learning for expected fitness maximization, where the steepest ascent direction is given by the natural gradient: the product of the inverse of the Fisher information matrix and the conventional gradient of the function. Our first result is that, under certain parameterizations of the multivariate normal distribution, the natural gradient of the expected fitness can be obtained without inverting the Fisher information matrix, and that the update of the distribution parameters in the CMA-ES coincides with natural gradient learning for expected fitness maximization. Our second result is a derivation of the range of learning rates for which a step in the direction of the exact natural gradient improves the parameters in terms of expected fitness. This close relation between the CMA-ES and natural gradient learning suggests that the default learning rates in the CMA-ES are suitable with respect to monotone improvement in expected fitness. Finally, we discuss the relation to the expectation-maximization framework and provide an information geometric interpretation of the CMA-ES.
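The inversion-free natural-gradient update described above can be sketched numerically. The following is a minimal, illustrative implementation of a pure rank-mu style update of a Gaussian search distribution on the sphere function; the learning rate, population size, and log-linear weights are assumptions for the demo, not the paper's exact settings.

```python
import numpy as np

def rank_mu_update(f, m, C, sigma=1.0, lam=20, eta=0.5, iters=400, rng=None):
    """Natural-gradient (rank-mu style) update of a Gaussian N(m, sigma^2 C).

    For the multivariate normal, the natural gradient of the expected
    fitness reduces to weighted empirical moments of the sampled points,
    so no explicit Fisher-matrix inversion is needed.
    """
    rng = rng or np.random.default_rng(0)
    n = len(m)
    mu = lam // 2
    # log-linear recombination weights for the best mu samples (illustrative)
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        X = m + sigma * rng.standard_normal((lam, n)) @ A.T
        idx = np.argsort([f(x) for x in X])[:mu]  # minimization: keep the best
        Y = X[idx] - m
        m = m + eta * (w @ Y)                     # natural gradient step in m
        C = (1 - eta) * C + eta * (w[:, None] * Y).T @ Y / sigma**2
    return m, C

sphere = lambda x: float(x @ x)
m, C = rank_mu_update(sphere, m=np.ones(5), C=np.eye(5))
```

The covariance update is a convex combination of the old matrix and weighted outer products of selected steps, so it stays symmetric positive definite throughout.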


Genetic and Evolutionary Computation Conference | 2009

Adaptation of expansion rate for real-coded crossovers

Youhei Akimoto; Jun Sakuma; Isao Ono; Shigenobu Kobayashi

Premature convergence is one of the most notable obstacles that GAs face. Once it happens, GAs can no longer generate candidate solutions globally and are eventually trapped in local minima. To overcome this, we propose a mechanism that indirectly controls the diversity of the population. It is realized by adapting the expansion rate parameter of crossovers, which determines the variance of the crossover distribution; the resulting algorithm is called adaptation of expansion rate (AER). The performance of the proposed method is compared to an existing GA on several benchmark functions, including functions whose landscapes have ridge or multimodal structure, on which existing GAs are prone to premature convergence. The experimental results show that our approach outperforms the existing one on deceptive functions without degrading performance on comparatively easy problems.


Theoretical Computer Science | 2015

Analysis of runtime of optimization algorithms for noisy functions over discrete codomains

Youhei Akimoto; Sandra Cecilia Astete-Morales; Olivier Teytaud

We consider in this work the application of optimization algorithms to problems over discrete codomains corrupted by additive unbiased noise. We propose a modification of the algorithms that repeats the fitness evaluation of the noisy function sufficiently often that, with a fixed probability, the function evaluation in the noisy case is identical to the true value. If the runtime of the algorithm in the noise-free case is known, the number of resamplings is chosen accordingly; if not, the number of resamplings is chosen according to the number of fitness evaluations so far, in an anytime manner. We conclude that if the additive noise is Gaussian, then the runtime in the noisy case, for an adapted algorithm using resamplings, is similar to the runtime in the noise-free case: we incur only an extra logarithmic factor. If the noise is non-Gaussian but has finite variance, then the total runtime in the noisy case is quadratic in the runtime of the noise-free case.
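The resampling idea can be sketched in a few lines: average k resamples of an integer-valued fitness and round, so that with a fixed probability the estimate equals the true value. The noisy-OneMax function and the choice k = 200 below are illustrative assumptions, not from the paper.

```python
import numpy as np

def denoised_eval(noisy_f, x, k, rng):
    """Estimate an integer-valued fitness corrupted by additive unbiased
    noise by averaging k resamples and rounding to the nearest integer.

    For Gaussian noise with variance s^2, the rounded mean is wrong with
    probability <= 2*exp(-k / (8*s^2)) (Gaussian tail bound on the sample
    mean), so k growing only logarithmically in the desired confidence
    suffices -- the logarithmic-factor overhead discussed above.
    """
    return int(round(np.mean([noisy_f(x, rng) for _ in range(k)])))

# toy noisy OneMax-style function over bit strings (illustrative)
def noisy_onemax(x, rng, s=1.0):
    return int(sum(x)) + s * rng.standard_normal()

rng = np.random.default_rng(1)
x = [1, 0, 1, 1, 0, 1]
est = denoised_eval(noisy_onemax, x, k=200, rng=rng)
```

With k = 200 and unit noise variance, the standard deviation of the mean is about 0.07, so the rounded estimate recovers the true value 4 with overwhelming probability.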


Genetic and Evolutionary Computation Conference | 2014

Comparison-based natural gradient optimization in high dimension

Youhei Akimoto; Anne Auger; Nikolaus Hansen

We propose a novel natural gradient based stochastic search algorithm, VD-CMA, for the optimization of high-dimensional numerical functions. The algorithm is comparison-based and hence invariant to monotonic transformations of the objective function. It adapts a multivariate normal distribution with a restricted covariance matrix with twice the dimension as degrees of freedom, representing an arbitrarily oriented long axis and additional axis-parallel scaling. We derive the different components of the algorithm and show linear internal time and space complexity. We find empirically that the algorithm adapts its covariance matrix to the inverse Hessian on convex-quadratic functions with a Hessian that has one short axis and different scaling on the diagonal. We then evaluate VD-CMA on test functions and compare it to other methods. On functions covered by the internal model of VD-CMA and on the Rosenbrock function, VD-CMA outperforms the CMA-ES (which has quadratic internal time and space complexity) not only in internal complexity but also in the number of function calls with increasing dimension.
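The restricted model of VD-CMA is commonly written as C = D(I + vv^T)D: a diagonal scaling D plus one arbitrarily oriented axis v, which is what allows linear-time sampling. The sketch below shows one way to draw a sample in O(n), using the closed-form square root of I + vv^T; the helper names are ours, and only the sampling step (not the full adaptation mechanism) is shown.

```python
import numpy as np

def vd_sample(m, sigma, d, v, rng):
    """Draw one sample from N(m, sigma^2 * D (I + v v^T) D) in O(n) time,
    where D = diag(d): axis-parallel scaling plus one long axis v.

    A square root of I + v v^T is I + a * v v^T / |v|^2 with
    a = sqrt(1 + |v|^2) - 1, since only the direction of v is stretched.
    """
    z = rng.standard_normal(len(m))
    vv = v @ v
    a = np.sqrt(1.0 + vv) - 1.0
    y = z + a * (v @ z) / vv * v      # y ~ N(0, I + v v^T)
    return m + sigma * d * y          # elementwise scaling by D

# sanity check of the square-root identity: A @ A.T == I + v v^T
v = np.array([2.0, -1.0, 0.5])
n = len(v)
A = np.eye(n) + (np.sqrt(1 + v @ v) - 1) * np.outer(v, v) / (v @ v)
x = vd_sample(np.zeros(n), 1.0, np.ones(n), v, np.random.default_rng(0))
```

No n-by-n matrix is ever formed inside `vd_sample`; the check with the explicit matrix `A` is only there to verify the algebra.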


IEEE Transactions on Evolutionary Computation | 2015

Computational Cost Reduction of Nondominated Sorting Using the M-Front

Martin Drozdik; Youhei Akimoto; Hernán E. Aguirre; Kiyoshi Tanaka

Many multiobjective evolutionary algorithms rely on the nondominated sorting procedure to determine the relative quality of individuals with respect to the population. In this paper, we propose a new method to decrease the cost of this procedure. Our approach is to determine the nondominated individuals at the start of the evolutionary algorithm run and to update this knowledge as the population changes. In order to do this efficiently, we propose a special data structure called the M-front to hold the nondominated part of the population. The M-front uses the geometric and algebraic properties of the Pareto dominance relation to convert orthogonal range queries into interval queries using a mechanism based on nearest neighbor search. These interval queries are answered using dynamically sorted linked lists. Experimental results show that our method can perform significantly faster than the state-of-the-art Jensen-Fortin algorithm, especially in many-objective scenarios. A significant advantage of our approach is that, if we change a single individual in the population, we still know which individuals are dominated and which are not.
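The core idea of updating dominance knowledge incrementally, rather than re-sorting the whole population, can be illustrated with a naive O(|front|) insertion routine. This is a simplified stand-in for the M-front (which replaces the linear scan with range/interval queries), written for minimization:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def insert_point(front, p):
    """Incrementally maintain the nondominated set when a new point
    arrives: discard p if it is dominated, otherwise add it and evict
    every point it dominates.  A naive O(|front|) sketch of the update
    the M-front accelerates with interval queries."""
    if any(dominates(q, p) or q == p for q in front):
        return front                      # p is dominated (or a duplicate)
    return [q for q in front if not dominates(p, q)] + [p]

front = []
for pt in [(3, 3), (1, 4), (4, 1), (2, 2), (5, 5)]:
    front = insert_point(front, pt)
```

After the loop, (3, 3) has been evicted by (2, 2) and (5, 5) was never admitted, leaving the front {(1, 4), (2, 2), (4, 1)}; no full nondominated sort was ever run.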


Parallel Problem Solving from Nature | 2012

Convergence of the continuous time trajectories of isotropic evolution strategies on monotonic C^2-composite functions

Youhei Akimoto; Anne Auger; Nikolaus Hansen

The Information-Geometric Optimization (IGO) has been introduced as a unified framework for stochastic search algorithms. Given a parametrized family of probability distributions on the search space, the IGO turns an arbitrary optimization problem on the search space into an optimization problem on the parameter space of the probability distribution family and defines a natural gradient ascent on this space. From the natural gradients defined over the entire parameter space we obtain continuous time trajectories which are the solutions of an ordinary differential equation (ODE). Via discretization, the IGO naturally defines an iterated gradient ascent algorithm. Depending on the chosen distribution family, the IGO recovers several known algorithms such as the pure rank-μ update CMA-ES. Consequently, the continuous time IGO-trajectory can be viewed as an idealization of the original algorithm. In this paper we study the continuous time trajectories of the IGO given the family of isotropic Gaussian distributions. These trajectories are a deterministic continuous time model of the underlying evolution strategy in the limit for population size to infinity and change rates to zero. On functions that are the composite of a monotone and a convex-quadratic function, we prove the global convergence of the solution of the ODE towards the global optimum. We extend this result to composites of monotone and twice continuously differentiable functions and prove local convergence towards local optima.


Foundations of Genetic Algorithms | 2017

Quality Gain Analysis of the Weighted Recombination Evolution Strategy on General Convex Quadratic Functions

Youhei Akimoto; Anne Auger; Nikolaus Hansen

We investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive the asymptotic quality gain in the limit as the dimension goes to infinity, and derive the optimal recombination weights and the optimal step-size. This work extends previous works in which the asymptotic quality gain of evolution strategies with weighted recombination was derived on the infinite-dimensional sphere function. Moreover, for a finite-dimensional search space, we derive rigorous bounds for the quality gain on a general quadratic function. They reveal the dependency of the quality gain on both the eigenvalue distribution of the Hessian matrix and the recombination weights. Taking the search space dimension to infinity, it turns out that the optimal recombination weights are independent of the Hessian matrix, i.e., the recombination weights optimal for the sphere function are optimal for convex quadratic functions.


Genetic and Evolutionary Computation Conference | 2008

Functionally specialized CMA-ES: a modification of CMA-ES based on the specialization of the functions of covariance matrix adaptation and step size adaptation

Youhei Akimoto; Jun Sakuma; Isao Ono; Shigenobu Kobayashi

This paper aims at the design of efficient and effective optimization algorithms for function optimization. It presents a new framework for the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Recent studies modified the CMA-ES from the viewpoint of covariance matrix adaptation and achieved a drastic reduction in the number of generations. In addition to those modifications, this paper modifies the CMA-ES from the viewpoint of step size adaptation. The main idea is to functionally specialize covariance matrix adaptation and step size adaptation. The new method is evaluated on eight classical unimodal and multimodal test functions, and its performance is compared with the standard CMA-ES. The experimental results demonstrate improved search performance, in particular with large populations, mainly because the proposed Hybrid-SSA, unlike the existing CSA, can adjust the global step length appropriately under large populations, and because the functional specialization helps appropriate adaptation of the overall variance of the mutation distribution.


Foundations of Genetic Algorithms | 2013

Objective improvement in information-geometric optimization

Youhei Akimoto; Yann Ollivier

Information-Geometric Optimization (IGO) is a unified framework of stochastic algorithms for optimization problems. Given a family of probability distributions, IGO turns the original optimization problem into a new maximization problem on the parameter space of the probability distributions. IGO updates the parameter of the probability distribution along the natural gradient, taken with respect to the Fisher metric on the parameter manifold, aiming at maximizing an adaptive transform of the objective function. IGO recovers several known algorithms as particular instances: for the family of Bernoulli distributions IGO recovers PBIL, for the family of Gaussian distributions the pure rank-μ CMA-ES update is recovered, and for exponential families in expectation parametrization the cross-entropy/ML method is recovered. This article provides a theoretical justification for the IGO framework, by proving that any step size not greater than 1 guarantees monotone improvement over the course of optimization, in terms of q-quantile values of the objective function f. The range of admissible step sizes is independent of f and its domain. We extend the result to cover the case of different step sizes for blocks of the parameters in the IGO algorithm. Moreover, we prove that expected fitness improves over time when fitness-proportional selection is applied, in which case the RPP algorithm is recovered.
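The PBIL instance mentioned above is easy to make concrete: for a family of Bernoulli distributions, the natural-gradient step moves the probability vector p toward the mean of the selected samples. The following sketch runs that update on OneMax; the population size, selection ratio, step size 0.1 (within the admissible range not greater than 1), and the clipping used to keep sampling alive are illustrative assumptions.

```python
import numpy as np

def pbil_onemax(n=20, lam=50, mu=10, eta=0.1, iters=200, rng=None):
    """PBIL on OneMax as the IGO instance for Bernoulli distributions:
    the natural-gradient update of the Bernoulli parameters is a step
    from p toward the mean of the selected (best) samples."""
    rng = rng or np.random.default_rng(0)
    p = np.full(n, 0.5)
    for _ in range(iters):
        X = (rng.random((lam, n)) < p).astype(float)
        fitness = X.sum(axis=1)                 # OneMax: number of ones
        best = X[np.argsort(-fitness)[:mu]]     # select the mu best samples
        p = (1 - eta) * p + eta * best.mean(axis=0)
        p = np.clip(p, 0.02, 0.98)              # keep both outcomes samplable
    return p

p = pbil_onemax()
```

With a step size below 1, each update is a convex combination of the old parameters and the selected-sample mean, consistent with the monotone-improvement guarantee discussed above; here p is driven close to all-ones.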


Genetic and Evolutionary Computation Conference | 2016

Projection-Based Restricted Covariance Matrix Adaptation for High Dimension

Youhei Akimoto; Nikolaus Hansen

We propose a novel variant of the covariance matrix adaptation evolution strategy (CMA-ES) using a covariance matrix parameterized with a smaller number of parameters. The motivation for a restricted covariance matrix is twofold. First, it requires less internal time and space complexity, which is desirable when optimizing a function over a high-dimensional search space. Second, it requires fewer function evaluations to adapt the covariance matrix, provided the restricted covariance matrix is rich enough to express the variable dependencies of the problem. In this paper we derive a computationally efficient way to update the restricted covariance matrix, where the model richness of the covariance matrix is controlled by an integer and the internal complexity per function evaluation is linear in this integer times the dimension, compared to quadratic in the dimension for the CMA-ES. We prove that the proposed algorithm is equivalent to the sep-CMA-ES if the covariance matrix is restricted to a diagonal matrix, and to the original CMA-ES if the matrix is unrestricted. Experimental results reveal the class of functions that can be solved efficiently depending on the model richness of the covariance matrix, and the speedup over the CMA-ES.
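The diagonal end of this trade-off (the sep-CMA-ES special case) is simple to sketch: restricting the covariance to a diagonal reduces the per-sample cost from O(n^2) to O(n), at the price of modeling no variable dependencies. The rank-mu style update, equal weights, and learning rate below are illustrative assumptions for the demo.

```python
import numpy as np

def sep_rank_mu(f, m, d2, lam=20, eta=0.3, iters=400, rng=None):
    """Rank-mu style update restricted to a diagonal covariance d2
    (the sep-CMA-ES special case): O(n) time and space per sample
    instead of O(n^2), since only per-axis variances are adapted."""
    rng = rng or np.random.default_rng(0)
    n, mu = len(m), lam // 2
    w = np.full(mu, 1.0 / mu)                   # equal weights (illustrative)
    for _ in range(iters):
        X = m + np.sqrt(d2) * rng.standard_normal((lam, n))
        idx = np.argsort([f(x) for x in X])[:mu]
        Y = X[idx] - m
        m = m + eta * (w @ Y)
        d2 = (1 - eta) * d2 + eta * (w @ (Y * Y))   # diagonal-only update
    return m, d2

# axis-scaled ellipsoid: separable, so the diagonal model is rich enough
ellipsoid = lambda x: float(sum((10.0 ** (i / (len(x) - 1)) * xi) ** 2
                                for i, xi in enumerate(x)))
m, d2 = sep_rank_mu(ellipsoid, m=np.ones(5), d2=np.ones(5))
```

On a separable problem like this ellipsoid the per-axis variances adapt to the axis scaling and the mean converges; on problems with strong variable dependencies a richer restricted model (larger integer in the scheme above) would be needed.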

Collaboration


Dive into Youhei Akimoto's collaborations.

Top Co-Authors

Shigenobu Kobayashi
Tokyo Institute of Technology

Isao Ono
Tokyo Institute of Technology

Shinichi Shirakawa
Yokohama National University

Yuichi Nagata
Tokyo Institute of Technology