Marat S. Mukhametzhanov
University of Calabria
Publications
Featured research published by Marat S. Mukhametzhanov.
Mathematics and Computers in Simulation | 2017
Pierluigi Amodio; Felice Iavernaro; Francesca Mazzia; Marat S. Mukhametzhanov; Yaroslav D. Sergeyev
A well-known drawback of algorithms based on Taylor series formulae is that the explicit calculation of higher-order derivatives is formally an over-elaborate task. To avoid the analytical computation of the successive derivatives, numeric and automatic differentiation are usually used. A recent alternative to these techniques is based on the calculation of higher derivatives using the Infinity Computer, a new computational device allowing one to work numerically with infinities and infinitesimals. Two variants of a one-step multi-point method closely related to the classical Taylor formula of order three are considered. It is shown that the new formula is order-three accurate, though it requires only the first two derivatives of y(t) (rather than three, as in the corresponding Taylor formula of order three). To obtain numerical evidence of the theoretical results, a few test problems are solved by means of the new methods, and the obtained results are compared with the performance of Taylor methods of order up to four.
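As a point of reference for the comparison above, the classical third-order Taylor step can be sketched as follows. This is a minimal illustration of the baseline only, not the authors' new two-derivative formula; the test problem y' = y and the step size 0.1 are assumptions made here:

```python
import math

def taylor3_step(t, y, d1, d2, d3, h):
    # One classical third-order Taylor step:
    # y(t + h) ~ y + h*y' + (h^2/2)*y'' + (h^3/6)*y'''
    return y + h * d1(t, y) + (h**2 / 2) * d2(t, y) + (h**3 / 6) * d3(t, y)

# Illustrative problem: y' = y, hence y'' = y''' = y; exact solution is exp(t).
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = taylor3_step(t, y, f, f, f, h)
    t += h
err = abs(y - math.e)  # small global error, consistent with order three
```

The methods in the paper reach the same order of accuracy while evaluating only the first two derivatives of y(t).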
Journal of Optimization Theory and Applications | 2016
Yaroslav D. Sergeyev; Marat S. Mukhametzhanov; Dmitri E. Kvasov; Daniela Lera
Geometric and information frameworks for constructing global optimization algorithms are considered, and several new ideas to speed up the search are proposed. The accelerated global optimization methods automatically realize a local behavior in the promising subregions without the necessity to stop the global optimization procedure. Moreover, all the trials executed during the local phases are used also in the course of the global ones. The resulting geometric and information global optimization methods have a similar structure, and a smart mixture of new and traditional computational steps leads to 22 different global optimization algorithms. All of them are studied and numerically compared on three test sets including 120 benchmark functions and 4 applied problems.
Mathematics and Computers in Simulation | 2017
Yaroslav D. Sergeyev; Dmitri E. Kvasov; Marat S. Mukhametzhanov
Univariate continuous global optimization problems are considered in this paper. Several widely used multidimensional metaheuristic global optimization methods (genetic algorithm, differential evolution, particle swarm optimization, artificial bee colony algorithm, and firefly algorithm) are adapted to the univariate case and compared with three Lipschitz global optimization algorithms. For this purpose, a methodology has been introduced that allows one to compare stochastic methods with deterministic ones by using operational characteristics originally proposed for working with deterministic algorithms only. As a result, a visual comparison of methods of different nature on classes of randomly generated test functions becomes possible. A detailed description of the new comparison methodology, called “operational zones”, is given, and results of wide numerical experiments with five metaheuristics and three Lipschitz algorithms are reported.
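The operational characteristic underlying the “operational zones” comparison is simply, for each evaluation budget, the fraction of test problems a method solves within that budget. A minimal sketch, with made-up data for illustration:

```python
def operational_characteristic(evals_to_solve, budgets):
    # evals_to_solve[i]: number of function evaluations the method needed on
    # problem i, or None if the problem was not solved at all.
    n = len(evals_to_solve)
    return [sum(1 for e in evals_to_solve if e is not None and e <= k) / n
            for k in budgets]

# Hypothetical results of one method on five test problems
evals = [120, 45, None, 300, 80]
oc = operational_characteristic(evals, [50, 100, 300])  # [0.2, 0.4, 0.8]
```

For stochastic methods, aggregating such curves over many independent runs yields the band (“operational zone”) that makes them visually comparable with deterministic algorithms.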
NUMERICAL COMPUTATIONS: THEORY AND ALGORITHMS (NUMTA–2016): Proceedings of the 2nd International Conference “Numerical Computations: Theory and Algorithms” | 2016
Francesca Mazzia; Yaroslav D. Sergeyev; Felice Iavernaro; Pierluigi Amodio; Marat S. Mukhametzhanov
New algorithms for the numerical solution of Ordinary Differential Equations (ODEs) with initial conditions are proposed. They are designed for working on a new kind of supercomputer, the Infinity Computer, which is able to deal numerically with finite, infinite and infinitesimal numbers. Thanks to this fact, the Infinity Computer allows one to calculate the exact derivatives of functions using infinitesimal values of the stepsize. As a consequence, the new methods are able to work with the exact values of the derivatives, instead of their approximations. Within this context, variants of one-step multi-point methods closely related to the classical Taylor formulae and to the Obrechkoff methods are considered. To obtain numerical evidence of the theoretical results, test problems are solved by means of the new methods, and the results are compared with the performance of classical methods.
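The Infinity Computer arithmetic itself is not available as a standard library; as a conventional stand-in for the idea of exact (rather than approximated) derivative values, dual-number automatic differentiation can be sketched as follows. This is purely an analogy chosen here for illustration, not the Infinity Computer technique:

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; the eps-coefficient of
    f(Dual(x, 1)) is the exact derivative f'(x), free of truncation error."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def deriv(f, x):
    # Exact first derivative at x for functions built from + and *
    return f(Dual(x, 1.0)).b

d = deriv(lambda t: 3 * t * t + 2 * t + 1, 2.0)  # f'(t) = 6t + 2, so d = 14.0
```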
Applied Mathematics and Computation | 2018
Dmitri E. Kvasov; Marat S. Mukhametzhanov
Many practical problems involve the search for the global extremum in the space of the system parameters. The functions to be optimized are often highly multiextremal, black-box with unknown analytical representations, and hard to evaluate even when only one parameter is to be adjusted in the presence of non-linear constraints. The interest of both the stochastic (in particular, metaheuristic) and mathematical programming (in particular, deterministic) communities in the comparison of these two classes of methods is well recognized. Although both communities have produced a huge number of journal and proceedings papers, few of them are dedicated to a systematic comparison of the methods belonging to these two classes. This paper meets the requirement of such a comparison between nature-inspired metaheuristic and deterministic algorithms (more than 125,000 launches of the methods have been performed) and presents an attempt (beneficial to practical fields including engineering design) to bring together the two rather disjoint communities of metaheuristic and mathematical programming researchers and applied users.
INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2015 (ICNAAM 2015) | 2016
Dmitri E. Kvasov; Marat S. Mukhametzhanov
Lipschitz global optimization appears in many practical problems: decision making, optimal control, stability problems, finding the minimal root, etc. In many engineering applications the objective function is a “black box”: multiextremal, non-differentiable and hard to evaluate. Very often the function to be optimized also satisfies the Lipschitz condition. In this talk, the Lipschitz global optimization problem is considered, and several nature-inspired and Lipschitz global optimization algorithms are briefly described and compared with respect to the number of evaluations of the objective function.
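A classical geometric Lipschitz method of the kind compared in such studies is the Piyavskii–Shubert scheme: with an overestimate K of the Lipschitz constant, each subinterval [a, b] carries the piecewise-linear lower bound max(f(a) - K(x - a), f(b) - K(b - x)), and the next trial point is the minimizer of the lowest such bound. A minimal univariate sketch; the test function and the value of K are illustrative choices made here:

```python
import bisect

def piyavskii_step(pts, vals, K):
    # pts: sorted trial points with objective values vals. Return the minimizer
    # of the piecewise-linear Lipschitz lower bound and the bound's value there.
    best_x, best_lb = None, float('inf')
    for (a, fa), (b, fb) in zip(zip(pts, vals), zip(pts[1:], vals[1:])):
        x = 0.5 * (a + b) + (fa - fb) / (2 * K)   # where the two cones meet
        lb = 0.5 * (fa + fb) - 0.5 * K * (b - a)  # lower-bound value at x
        if lb < best_lb:
            best_x, best_lb = x, lb
    return best_x, best_lb

f = lambda x: (x - 0.3) ** 2                      # illustrative objective on [0, 1]
pts, vals, K = [0.0, 1.0], [f(0.0), f(1.0)], 2.0  # K = 2 overestimates |f'| <= 1.4
for _ in range(30):
    x, _ = piyavskii_step(pts, vals, K)
    i = bisect.bisect(pts, x)
    pts.insert(i, x)
    vals.insert(i, f(x))
best = min(vals)  # approaches the global minimum 0 attained at x = 0.3
```

Each objective evaluation refines the lower bound, which is what makes the number of evaluations the natural comparison metric mentioned above.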
Applied Mathematics and Computation | 2018
Manlio Gaudioso; Giovanni Giallombardo; Marat S. Mukhametzhanov
The objective of the paper is to evaluate the impact of the infinity computing paradigm on the practical solution of nonsmooth unconstrained optimization problems, where the objective function is assumed to be convex and not necessarily differentiable. For such a family of problems, the occurrence of discontinuities in the derivatives may result in failures of the algorithms suited for smooth problems. We focus on a family of nonsmooth optimization methods based on a variable metric approach, and we use infinity computing techniques to deal numerically with quantities which can assume arbitrarily small or large values as a consequence of nonsmoothness. In particular, we consider the case, treated in the literature, where the metric is defined via a diagonal matrix with positive entries. We provide the computational results of our implementation on a set of benchmark test problems from the scientific literature.
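The diagonal variable-metric idea can be caricatured in a few lines: each subgradient component is scaled by the corresponding positive diagonal entry before the step. This is a generic sketch of the concept with hypothetical names, not the paper's actual update rules:

```python
def diagonal_metric_step(x, g, d, step=1.0):
    # Descent step x - step * D^{-1} g with the diagonal metric D = diag(d),
    # where d has strictly positive entries (generic illustration only).
    return [xi - step * gi / di for xi, gi, di in zip(x, g, d)]

# Example: a larger diagonal entry damps the step along its coordinate.
x_new = diagonal_metric_step([1.0, 2.0], [0.5, 1.0], [1.0, 2.0])  # [0.5, 1.5]
```

Near a kink of a nonsmooth function the appropriate entries of d can become arbitrarily large or small, which is where the paper brings in infinity computing.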
INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2015 (ICNAAM 2015) | 2016
Yaroslav D. Sergeyev; Dmitri E. Kvasov; Marat S. Mukhametzhanov
An optimization problem is considered where the objective function f(x) is black-box and multiextremal, and information about its gradient ∇f(x) is available during the search. It is supposed that ∇f(x) satisfies the Lipschitz condition over the admissible hyperinterval with an unknown Lipschitz constant K. Some numerical Lipschitz global optimization methods based on geometric ideas and using different estimates of the Lipschitz constant K are presented. Results of their systematic experimental investigation are reported and commented on.
Advances in Stochastic and Deterministic Global Optimization | 2016
Yaroslav D. Sergeyev; Dmitri E. Kvasov; Marat S. Mukhametzhanov
The sinusoidal parameter estimation problem is considered to fit a sum of damped sinusoids to a series of noisy observations. It is formulated as a nonlinear least-squares global optimization problem. A one-parametric case study is examined to determine an unknown frequency of a signal. Univariate Lipschitz-based deterministic methods are used for solving such problems within a limited computational budget. It is shown that the usage of local information in these methods (such as local tuning on the objective function behavior and/or evaluating the function first derivatives) can significantly accelerate the search for the problem solution with a required guarantee. Results of a numerical comparison with metaheuristic techniques frequently used in engineering design are also reported and commented on.
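The one-parametric case study reduces to a univariate least-squares objective in the unknown frequency. A minimal sketch, where the amplitude, damping coefficient, and sampling grid are assumptions made here for illustration:

```python
import math

def misfit(omega, t, y, A=1.0, alpha=0.5):
    # Sum-of-squares residual between observations y and the damped sinusoid
    # A * exp(-alpha * t) * sin(omega * t); only the frequency omega is unknown.
    return sum((yi - A * math.exp(-alpha * ti) * math.sin(omega * ti)) ** 2
               for ti, yi in zip(t, y))

# Noiseless synthetic data with true frequency omega = 3
ts = [0.1 * i for i in range(40)]
ys = [math.exp(-0.5 * ti) * math.sin(3.0 * ti) for ti in ts]
m_true, m_off = misfit(3.0, ts, ys), misfit(2.5, ts, ys)  # m_true is (near) zero
```

This objective is multiextremal in omega, which is why Lipschitz-based global methods, rather than local least-squares solvers, are applied in the paper.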
Communications in Nonlinear Science and Numerical Simulation | 2018
Yaroslav D. Sergeyev; Dmitri E. Kvasov; Marat S. Mukhametzhanov
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity, meaning that a method produces the same sequences of points where the objective function is evaluated independently of both multiplication of the function by a scaling constant and addition of a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and very large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems is introduced where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm, which allows one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid the ill-conditioning produced by scaling. Numerical experiments illustrating the theoretical results are described.
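The strong-homogeneity property can be illustrated on a simple geometric rule: a Piyavskii-type next trial point on an interval is unchanged when f is replaced by c1*f + c2 and the Lipschitz constant by c1*K. The numbers below are arbitrary illustrative values, not taken from the paper:

```python
def next_point(a, fa, b, fb, K):
    # Piyavskii-type next trial point on [a, b] for Lipschitz constant K
    return 0.5 * (a + b) + (fa - fb) / (2 * K)

c1, c2 = 1e6, -42.0                       # scaling and shifting constants
x_plain = next_point(0.0, 1.3, 1.0, 0.7, 2.0)
x_scaled = next_point(0.0, c1 * 1.3 + c2, 1.0, c1 * 0.7 + c2, c1 * 2.0)
# x_plain and x_scaled coincide up to rounding: the trial sequence is
# invariant under scaling and shifting, i.e. the rule is strongly homogeneous.
```

At the same time, forming c1*f + c2 with an extreme c1 in floating point already loses digits of f, which is the ill-conditioning effect the paper analyzes.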