Publication


Featured research published by Adam P. Piotrowski.


Information Sciences | 2013

Adaptive Memetic Differential Evolution with Global and Local neighborhood-based mutation operators

Adam P. Piotrowski

Differential Evolution (DE) is one of the most popular optimization methods for real-valued problems, and a large number of its variants have been proposed so far. However, bringing together different ideas that have already led to successful DE versions is rare in the literature. In the present paper a novel DE algorithm is proposed in which three of the most efficient concepts already applied separately within the DE framework are combined, namely: (1) the adaptation of algorithm control parameters and of the probabilities of using different mutation strategies; (2) the use of the Nelder-Mead algorithm as a local search method hybridized with DE; and (3) the splitting of mutation into Global and Local models, where the Local mutation model is based on the concept of a neighborhood of individuals organized on a ring topology. The performance of the novel algorithm, called Adaptive Memetic DE with Global and Local neighborhood-based mutation operators, is compared with 13 different DE variants on a set of 25 popular problems which include rotated, shifted and hybrid composition functions. It is found that, although no DE algorithm outperforms all the others on the majority of problems, on average the proposed approach performs better than all 13 DE algorithms selected for comparison. The proposed algorithm is another heuristic approach developed to solve optimization problems. The question may arise whether proposing novel methods is useful at all, as the No Free Lunch theorems for optimization state that the expected performance of all possible heuristics over all possible problems is equal. In the last section of the paper the limitations and implications of the No Free Lunch theorems are discussed based on a rich, but unfortunately frequently neglected, literature. A very simple continuous and differentiable minimization problem is proposed for which it is empirically verified that each of the 14 considered DE algorithms performs worse than random sampling.
It is also empirically shown that when all the considered DE algorithms search for the maximum of the proposed problem, they find a lower minimum than the DE algorithms searching for the minimum. Such a result is not unexpected according to the No Free Lunch theorems and should be taken as a caution against generalizing the good performance of heuristic optimization methods.
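The Local mutation model on a ring topology mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact operator: the function name, the neighborhood radius `k` and the scale factors `alpha`/`beta` are assumptions chosen for clarity.

```python
import numpy as np

def local_mutation(pop, fit, i, k=2, alpha=0.8, beta=0.8, rng=None):
    """Sketch of a ring-neighborhood ('Local') DE mutation: individual i
    sees only its k neighbors on either side of a ring; the donor mixes
    attraction to the neighborhood best with a random neighbor difference."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    hood = [(i + j) % n for j in range(-k, k + 1)]   # ring neighborhood of i
    best = min(hood, key=lambda j: fit[j])           # neighborhood best (minimization)
    p, q = rng.choice([j for j in hood if j != i], size=2, replace=False)
    # donor = x_i + alpha*(x_best_local - x_i) + beta*(x_p - x_q)
    return pop[i] + alpha * (pop[best] - pop[i]) + beta * (pop[p] - pop[q])

# tiny usage example on a 1-D population
pop = np.arange(6, dtype=float).reshape(6, 1)
fit = np.array([5.0, 3.0, 1.0, 4.0, 2.0, 6.0])
donor = local_mutation(pop, fit, i=0, rng=np.random.default_rng(0))
```

Because each individual only interacts with its ring neighbors, information spreads slowly through the population, which is what gives the Local model its explorative character.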


European Journal of Operational Research | 2012

Differential Evolution algorithm with Separated Groups for multi-dimensional optimization problems

Adam P. Piotrowski; Jaroslaw J. Napiorkowski; Adam Kiczko

The classical Differential Evolution (DE) algorithm, one of the population-based Evolutionary Computation methods, has proved to be a successful approach for relatively simple problems, but does not perform well on difficult multi-dimensional non-convex functions. A number of significant modifications of DE have been proposed in recent years, including very few approaches referring to the idea of distributed Evolutionary Algorithms. The present paper presents a new algorithm designed to improve optimization performance, namely DE with Separated Groups (DE-SG), which distributes the population into small groups, defines rules for the exchange of information and individuals between the groups, and uses two different strategies to keep a balance between exploration and exploitation capabilities. The performance of DE-SG is compared with that of eight algorithms belonging to the classes of Evolution Strategies (Covariance Matrix Adaptation ES), Particle Swarm Optimization (Comprehensive Learning PSO and Efficient Population Utilization Strategy PSO), Differential Evolution (Distributed DE with explorative-exploitative population families, Self-adaptive DE, DE with global and local neighbours, and Grouping Differential Evolution) and multi-algorithms (AMALGAM). The comparison is carried out on a set of 10-, 30- and 50-dimensional rotated test problems of varying difficulty, including 10- and 30-dimensional composition functions from CEC2005. Although slow on simple functions, the proposed DE-SG algorithm achieves a high success rate on the more difficult 30- and 50-dimensional problems.
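The grouped-population idea can be illustrated with a simple migration step: each group periodically exports its best member to a neighboring group. The ring-of-groups topology and the replace-the-worst rule here are illustrative assumptions, not the paper's exact DE-SG exchange rules.

```python
import numpy as np

def migrate_best(groups, fitness):
    """Illustrative migration step for a grouped population: each group
    sends a copy of its best member to the next group on a ring of groups,
    replacing that group's worst member (minimization assumed)."""
    bests = [g[np.argmin(f)].copy() for g, f in zip(groups, fitness)]
    for gi, (g, f) in enumerate(zip(groups, fitness)):
        donor = bests[(gi - 1) % len(groups)]   # best of the previous group
        worst = np.argmax(f)
        g[worst] = donor                        # replace the worst individual
    return groups

# usage: two groups of 1-D individuals, fitness equal to the value itself
groups = [np.array([[3.0], [1.0]]), np.array([[9.0], [2.0]])]
fitness = [np.array([3.0, 1.0]), np.array([9.0, 2.0])]
migrate_best(groups, fitness)
```

Keeping the groups mostly isolated preserves diversity (exploration), while the occasional migration lets good solutions spread (exploitation); the migration interval tunes that balance.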


Applied Soft Computing | 2014

Differential Evolution algorithms applied to Neural Network training suffer from stagnation

Adam P. Piotrowski

A large number of population-based Differential Evolution algorithms have been proposed in the literature, and their good performance is often reported on benchmark problems. However, when applied to Neural Network training for regression, these methods usually perform worse than the classical Levenberg-Marquardt algorithm. The major aim of the present paper is to clarify why. In this research, in which Neural Networks are used for a real-world regression problem, it is empirically shown that various Differential Evolution algorithms fall into stagnation during Neural Network training: after some time the individuals stop improving, or improve only very occasionally, although the population diversity remains high. Similar behavior of Differential Evolution algorithms is observed on some, but not the majority of, benchmark problems. The impact of the Differential Evolution population size, the initialization range and the bounds on Neural Network performance is also discussed. Among the tested algorithms, only the Differential Evolution with Global and Local neighborhood-based mutation operators performs better than the Levenberg-Marquardt algorithm for Neural Network training. This version of Differential Evolution also shows symptoms of stagnation, but much weaker than the other tested variants. To enhance exploitation in the final stage of Neural Network training, it is proposed to merge the Differential Evolution with Global and Local neighborhood-based mutation operators algorithm with the Trigonometric mutation operator. This method does not rule out the stagnation problem, but slightly improves the performance of the trained Neural Networks.
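Stagnation as characterized above (the best fitness stops improving while the population keeps substantial spread) can be detected with a simple monitor. The window length and thresholds below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def is_stagnating(best_history, pop, window=20, tol=1e-12, div_thresh=1e-3):
    """Flag DE-style stagnation: the best-so-far fitness has not improved
    over the last `window` recorded generations, yet the population still
    has substantial spread (so it has not simply converged to a point)."""
    if len(best_history) < window:
        return False
    no_progress = best_history[-window] - best_history[-1] <= tol
    diversity = np.mean(np.std(pop, axis=0))   # mean per-dimension std
    return bool(no_progress and diversity > div_thresh)

# usage: flat best-fitness history plus a still-spread population
hist = [1.0] * 25
pop = np.random.default_rng(1).uniform(-1, 1, size=(30, 5))
flag = is_stagnating(hist, pop)
```

The diversity check is what distinguishes stagnation from ordinary premature convergence: a collapsed population also stops improving, but its diversity is near zero.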


Swarm and evolutionary computation | 2017

Review of Differential Evolution population size

Adam P. Piotrowski

The population size of Differential Evolution (DE) algorithms is often specified by the user and remains fixed during the run. During the first decade after the introduction of DE, the opinion that its population size should be related to the problem dimensionality prevailed; later the approaches to setting the DE population size diversified. In a large number of recently introduced DE algorithms the population size is considered problem-independent and is often fixed at 100 or 50 individuals, but a number of DE variants with flexible population size have also been proposed. The present paper briefly reviews the opinions regarding DE population size setting and verifies the impact of the population size on the performance of DE algorithms. Ten DE algorithms with fixed population size, each with at least five different population size settings, and four DE algorithms with flexible population size are tested on CEC2005 benchmarks and CEC2011 real-world problems. It is found that an inappropriate choice of population size may severely hamper the performance of each DE algorithm. Although the best choice of population size depends on the specific algorithm, the number of allowed function calls and the problem to be solved, some rough guidelines may be sketched. When the maximum number of function calls is set to classical values, i.e. those specified for the CEC2005 and CEC2011 competitions, for low-dimensional problems (with dimensionality below 30) a population size of 100 individuals is suggested; population sizes smaller than 50 are rarely advised. For higher-dimensional artificial problems the population size should often depend on the problem dimensionality d and be set to 3d-5d.
Unfortunately, setting a proper population size for higher-dimensional real-world problems (d > 40) turns out to be too problem- and algorithm-dependent to give any general guide; 200 individuals may be a first guess, but many DE approaches would need a much different choice, ranging from 50 to 10d. However, a quite clear relation between the population size and the convergence speed has been found: the fewer function calls are available, the better smaller population sizes perform. Based on the extensive experimental results, the use of adaptive population size is highly recommended, especially for higher-dimensional and real-world problems. However, which specific algorithms with population size adaptation perform best depends on the number of function calls allowed.
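The rough guidelines above can be distilled into a first-guess helper. This is only a sketch of the review's rules of thumb, assuming the classical CEC2005/CEC2011 budgets; the 4d midpoint of the 3d-5d range is my simplification, and adaptive population sizing remains the recommended option.

```python
def suggested_pop_size(dim, real_world=False):
    """First-guess DE population size distilled from the review's rough
    guidelines; not a substitute for population-size adaptation."""
    if dim < 30:
        return 100          # low-dimensional: ~100, rarely below 50
    if real_world:
        return 200          # d > 40 real-world: 200 as a first guess only
    return 4 * dim          # higher-dimensional artificial: roughly 3d-5d

# usage
print(suggested_pop_size(10))                    # low-dimensional benchmark
print(suggested_pop_size(60))                    # 60-D artificial problem
print(suggested_pop_size(60, real_world=True))   # 60-D real-world problem
```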


Expert Systems With Applications | 2012

Comparison of evolutionary computation techniques for noise injected neural network training to estimate longitudinal dispersion coefficients in rivers

Adam P. Piotrowski; Paweł M. Rowiński; Jaroslaw J. Napiorkowski

This study presents a comparison of various evolutionary computation (EC) optimization techniques applied to train noise-injected multi-layer perceptron neural networks used for estimation of the longitudinal dispersion coefficient in rivers. Special attention is paid to recently developed variants of the Differential Evolution (DE) algorithm. The most commonly used gradient-based optimization methods have two significant drawbacks: they cannot cope with non-differentiable problems and they quickly converge to local optima. These problems can be avoided by the application of EC techniques. Although a great number of EC algorithms have been proposed in recent years, only some of them have been applied to neural network training, usually with no comparison to other methods. We restrict our comparison to a regression problem with limited data, with the noise injection technique used to avoid premature convergence and to improve the robustness of the model. The optimization methods tested in the present paper are: Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization, Efficient Population Utilization Strategy Particle Swarm Optimization and Covariance Matrix Adaptation - Evolution Strategy.
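The noise injection technique mentioned above can be sketched generically: the network sees a freshly jittered copy of the small training set on every pass, which acts as a regularizer. The Gaussian noise model and the scale `sigma` are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def noise_injected_epochs(X, y, epochs, sigma=0.05, rng=None):
    """Yield (X_noisy, y) once per training epoch, with fresh Gaussian
    noise added to the inputs each time -- a common regularizer when
    fitting regression networks on limited data."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(epochs):
        yield X + rng.normal(0.0, sigma, size=X.shape), y

# usage: two noisy views of the same tiny data set
X = np.ones((4, 3))
y = np.zeros(4)
noisy = [Xn for Xn, _ in noise_injected_epochs(X, y, epochs=2,
                                               rng=np.random.default_rng(0))]
```

Because the noise is resampled every epoch, the network cannot memorize exact input values, which helps against overfitting and premature convergence on small data sets.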


Information Sciences | 2014

How novel is the novel black hole optimization approach

Adam P. Piotrowski; Jaroslaw J. Napiorkowski; Paweł M. Rowiński

Due to the abundance of novel optimization algorithms in recent years, the problem of large similarities among methods that are named differently is becoming troublesome and widespread. The question arises whether a novel source of inspiration is sufficient to breed an optimization algorithm with a novel name, even if its search properties are almost the same as, or are even a simplified variant of, the search properties of an older and well-known method. In this paper it is rigorously shown that the recently proposed heuristic approach called black hole optimization is in fact a simplified version of Particle Swarm Optimization with inertia weight. Additionally, because a large number of the metaheuristics developed during the last decade are claimed to be nature-inspired, a short discussion on the inspirations of optimization algorithms is presented.
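The paper's claim can be illustrated by placing the two position updates side by side. This is a schematic comparison under textbook forms of the updates, not the paper's full derivation; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO with inertia weight: the new velocity mixes the old
    velocity with attraction to the personal and global bests."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

def black_hole_update(x, x_bh):
    """Black hole update: each 'star' drifts toward the current best
    solution (the black hole). Structurally this is the PSO update above
    with the inertia and personal-best terms dropped -- the simplification
    the paper points out."""
    r = rng.random(x.shape)
    return x + r * (x_bh - x)

# usage: one star pulled toward a black hole at (1, 1, 1)
x_new = black_hole_update(np.zeros(3), np.ones(3))
```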


Hydrological Sciences Journal-journal Des Sciences Hydrologiques | 2005

Are artificial neural network techniques relevant for the estimation of longitudinal dispersion coefficient in rivers?

Paweł M. Rowiński; Adam P. Piotrowski; Jaroslaw J. Napiorkowski

Accurate application of the longitudinal dispersion model requires that specially designed experimental studies be performed in the river reach under consideration. Such studies are usually very expensive, so in order to quantify the longitudinal dispersion coefficient, various researchers have, as an alternative approach, proposed numerous empirical formulae based on hydraulic and morphometric characteristics. This paper presents the results of applying artificial neural networks as a parameter estimation technique. Five different cases were considered, with the network trained on different arrangements of input nodes, such as channel depth, channel width, cross-sectionally averaged water velocity, shear velocity and sinuosity index. In the case where the sinuosity index is included as an input node, the results turned out to be better than those presented by other authors.


Information Sciences | 2015

Regarding the rankings of optimization heuristics based on artificially-constructed benchmark functions

Adam P. Piotrowski

Novel Evolutionary Algorithms are usually tested on sets of artificially-constructed benchmark problems. Such problems are often created to make the search for one global extremum (usually a minimum) tricky. In this paper it is shown that benchmarking heuristics on either the minimization or the maximization of the same set of artificially-created functions (with equal bounds and number of allowed function calls) may lead to very different rankings of the tested algorithms. As Evolutionary Algorithms and other heuristic optimizers are developed in order to be applicable to real-world problems, this result may raise doubts about the practical meaning of benchmarking them on artificial functions, as there is little reason why searching for the minimum of such functions should be more important than searching for their maximum. Thirty optimization heuristics, including a number of variants of Differential Evolution, as well as other kinds of Evolutionary Algorithms, Particle Swarm Optimization, Direct Search methods and, following the idea borrowed from No Free Lunch, pure random search, are tested in the paper. Some discussion regarding the choice of the mean or the median performance for comparison is included, and a short debate on the overall performance of particular methods is given.
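The minimization/maximization comparison is mechanically simple to set up, because maximizing f is the same as minimizing -f, so the identical optimizer can be ranked both ways on the same bounds and budget. The toy objective and the pure-random-search optimizer below are illustrative stand-ins.

```python
import numpy as np

def random_search(f, bounds, calls, rng):
    """Minimize f by pure random sampling within bounds (the No Free
    Lunch baseline); returns the best value found."""
    lo, hi = bounds
    xs = rng.uniform(lo, hi, size=(calls, 1))
    return min(f(x) for x in xs)

f = lambda x: float(np.sum(x ** 2))   # toy objective on [-5, 5]
rng = np.random.default_rng(0)
best_min = random_search(f, (-5, 5), 100, rng)                  # minimize f
best_max = -random_search(lambda x: -f(x), (-5, 5), 100, rng)   # maximize f via -f
```

Running a suite of heuristics through both directions on the same function set, then ranking them separately for each direction, reproduces the kind of comparison the paper performs.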


Computers & Geosciences | 2014

Comparing large number of metaheuristics for artificial neural networks training to predict water temperature in a natural river

Adam P. Piotrowski; Marzena Osuch; Maciej J. Napiorkowski; Paweł M. Rowiński; Jaroslaw J. Napiorkowski

Nature-inspired metaheuristics have found various applications in different fields of science, including the problem of artificial neural network (ANN) training. However, very divergent opinions regarding the performance of metaheuristics applied to ANN training may be found in the literature. Both nature-inspired metaheuristics and ANNs are widely applied to various geophysical and environmental problems. Among them, water temperature forecasting in a natural river, especially in colder climate zones where seasonality plays an important role, is of great importance, as water temperature has a strong impact on aquatic life and chemistry. As the impact of possible future climate change on water temperature is not trivial, models are needed that allow projection of streamwater temperature based on simple hydro-meteorological variables. In this paper a detailed comparison of the performance of nature-inspired optimization methods and the Levenberg-Marquardt (LM) algorithm in ANN training is performed, based on the case study of water temperature forecasting in a natural stream, namely the Biala Tarnowska river in southern Poland. Over 50 variants of 22 different metaheuristics, including a large number of Differential Evolution variants as well as some Particle Swarm Optimization, Evolution Strategies, multialgorithm and Direct Search methods, are compared with the LM algorithm on ANN training for the described case study. The impact of the population size and some control parameters of particular metaheuristics on ANN training performance is verified. It is found that, despite the widely claimed large improvement in nature-inspired methods during recent years, the vast majority of them are still outperformed by the LM algorithm on the selected problem.
The only methods that, based on this case study, seem competitive with the LM algorithm in terms of final performance (but not speed) are Differential Evolution algorithms that benefit from the concept of Global and Local neighborhood-based mutation operators. The streamwater forecasting performance of the neural networks is adequate; the major prediction errors are related to the river freezing and melting processes that occur during winter in the mountainous catchment under study. Highlights: the applicability of metaheuristics to neural network training is verified; the Levenberg-Marquardt and DEGL algorithms outperform other training methods; in the case of Differential Evolution methods the population size is crucial; neural networks appear to be useful for water temperature prediction in rivers.


Information Sciences | 2017

Swarm Intelligence and Evolutionary Algorithms

Adam P. Piotrowski; Maciej J. Napiorkowski; Jaroslaw J. Napiorkowski; Paweł M. Rowiński

The popularity of metaheuristics, especially Swarm Intelligence and Evolutionary Algorithms, has increased rapidly over the last two decades. Numerous algorithms are proposed each year, and progressively more novel applications are being found. However, different metaheuristics are often compared by their performance on problems with an arbitrarily fixed number of allowed function calls. There are surprisingly few papers that explore the relationship between the relative performance of numerous metaheuristics on versatile numerical real-world problems and the number of allowed function calls. In this study the performance of 33 different metaheuristics proposed between 1960 and 2016 has been tested on 22 numerical real-world problems from different fields of science, with the maximum number of function calls varying between 5000 and 500,000. It is confirmed that the algorithms that succeed in comparisons when the computational budget is low are among the poorest performers when the computational budget is high, and vice versa. Among the tested variants, Particle Swarm Optimization algorithms and some new types of metaheuristics perform relatively better when the number of allowed function calls is low, whereas Differential Evolution and Genetic Algorithms perform better relative to other algorithms when the computational budget is large. It is difficult to find any metaheuristic that would perform adequately over all of the numbers of function calls tested. It was also found that some algorithms may become completely unreliable on specific real-world problems, even though they perform reasonably on others.
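Budget-dependent comparison of this kind amounts to recording the best-so-far objective value at several function-call checkpoints instead of only at the final budget. The sketch below uses pure random search as a stand-in optimizer; the checkpoint values and objective are illustrative.

```python
import numpy as np

def best_so_far_at(f, sampler, checkpoints, rng):
    """Run an optimizer call-by-call and record the best objective value
    seen at each function-call budget in `checkpoints` (ascending)."""
    best, curve, calls = np.inf, [], 0
    for budget in checkpoints:
        while calls < budget:
            best = min(best, f(sampler(rng)))
            calls += 1
        curve.append(best)
    return curve

# usage: convergence curve of random search on a toy 3-D sphere function
f = lambda x: float(np.sum(x ** 2))
sampler = lambda rng: rng.uniform(-5, 5, size=3)
curve = best_so_far_at(f, sampler, [10, 100, 1000], np.random.default_rng(0))
```

Ranking algorithms separately at each checkpoint, rather than only at the largest budget, is what reveals the budget-dependent reversals the study reports.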

Collaboration


Dive into Adam P. Piotrowski's collaboration.

Top Co-Authors

Maciej J. Napiorkowski
Warsaw University of Technology

Marzena Osuch
Polish Academy of Sciences

Adam Kiczko
Warsaw University of Life Sciences