Marjan Mernik
University of Maribor
Publications
Featured research published by Marjan Mernik.
IEEE Transactions on Evolutionary Computation | 2006
Janez Brest; Sašo Greiner; Borko Bošković; Marjan Mernik; Viljem Žumer
We describe an efficient technique for adapting the control parameter settings associated with differential evolution (DE). The DE algorithm has been used in many practical cases and has demonstrated good convergence properties. It has only a few control parameters, which are kept fixed throughout the entire evolutionary process. However, it is not an easy task to set these control parameters properly. We present a new version of the DE algorithm that obtains self-adaptive control parameter settings and shows good performance on numerical benchmark problems. The results show that our algorithm with self-adaptive control parameter settings is better than, or at least comparable to, the standard DE algorithm and evolutionary algorithms from the literature when considering the quality of the solutions obtained.
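The self-adaptation rule at the heart of this paper (the algorithm widely referred to as jDE) is compact enough to sketch. Below is a minimal Python illustration of how each individual's F and CR could be regenerated before it produces offspring; the constants τ1 = τ2 = 0.1, Fl = 0.1, Fu = 0.9 follow the values commonly reported for the paper, but the function and variable names are our own.

```python
import random

# Self-adaptation of DE control parameters in the style of jDE
# (Brest et al., 2006). Each individual carries its own F and CR;
# with probabilities TAU1 and TAU2 they are regenerated before the
# individual produces offspring, otherwise they are inherited.
TAU1, TAU2 = 0.1, 0.1   # adaptation probabilities
F_L, F_U = 0.1, 0.9     # F is drawn from [F_L, F_L + F_U]

def self_adapt(f_old: float, cr_old: float) -> tuple[float, float]:
    """Return the (possibly regenerated) F and CR for one individual."""
    f_new = F_L + random.random() * F_U if random.random() < TAU1 else f_old
    cr_new = random.random() if random.random() < TAU2 else cr_old
    return f_new, cr_new
```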
ACM Computing Surveys | 2013
Matej Črepinšek; Shih-Hsi Liu; Marjan Mernik
“Exploration and exploitation are the two cornerstones of problem solving by search.” For more than a decade, Eiben and Schippers' [1998] advocacy of balancing these two antagonistic cornerstones has greatly influenced the research directions of evolutionary algorithms (EAs). This article revisits nearly 100 existing works and surveys how they have answered that advocacy. It introduces a fresh treatment that classifies and discusses existing work along three rational aspects: (1) what EA components contribute to exploration and exploitation, and how; (2) when and how exploration and exploitation are controlled; and (3) how a balance between exploration and exploitation is achieved. With a more comprehensive and systematic understanding of exploration and exploitation, more research in this direction may be motivated and refined.
Information Sciences | 2014
Niki Veček; Marjan Mernik; Matej Črepinšek
Null Hypothesis Significance Testing (NHST) is of utmost importance for comparing evolutionary algorithms, as the performance of one algorithm over another can be scientifically proven. However, NHST is often misused, improperly applied, and misinterpreted. In order to avoid the pitfalls of NHST usage, this paper proposes a new method, a Chess Rating System for Evolutionary Algorithms (CRS4EAs), for the comparison and ranking of evolutionary algorithms. A computational experiment in CRS4EAs is conducted in the form of a tournament where the evolutionary algorithms are treated as chess players and a comparison between the solutions of two algorithms on an objective function is treated as one game outcome. The rating system used in CRS4EAs was inspired by the Glicko-2 rating system, based on the Bradley–Terry model for dynamic pairwise comparisons, where each algorithm is represented by a rating, a rating deviation, a rating/confidence interval, and rating volatility. CRS4EAs was empirically compared to NHST within a computational experiment conducted on 16 evolutionary algorithms and a benchmark suite of 20 numerical minimisation problems. The analysis of the results shows that CRS4EAs is comparable with NHST but may also have many additional benefits: the computations in CRS4EAs are less complicated and sensitive than those in statistical significance tests, the method is less sensitive to outliers, reliable ratings can be obtained over a small number of runs, and the conservativity/liberality of CRS4EAs is easier to control.
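The core idea of treating a pairwise comparison as a game outcome is easy to illustrate. The sketch below uses a simplified Elo-style update rather than the full Glicko-2 machinery the paper actually employs (which additionally tracks rating deviation and volatility); the draw tolerance eps and the K-factor are illustrative assumptions, not values from the paper.

```python
def game_outcome(fitness_a: float, fitness_b: float, eps: float = 1e-9) -> float:
    """Score one 'game': 1 if A beats B (lower fitness on a
    minimisation problem), 0.5 for a draw, 0 for a loss."""
    if abs(fitness_a - fitness_b) <= eps:
        return 0.5
    return 1.0 if fitness_a < fitness_b else 0.0

def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0) -> tuple[float, float]:
    """Simplified Elo update after one game; the paper's Glicko-2-style
    system is richer but follows the same win/draw/loss bookkeeping."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```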
Applied Soft Computing | 2013
Shih-Hsi Liu; Marjan Mernik; Dejan Hrnčič; Matej Črepinšek
Exploration and exploitation are omnipresent terms in the evolutionary computation community that have been broadly used to explain how evolutionary algorithms perform search. However, only recently have exploration and exploitation measures been presented in a quantitative way, making it possible to measure the amounts of exploration and exploitation. To move a step further, this paper introduces a parameter control approach that uses such measures as feedback to adaptively control the evolution process. The paper shows that with the new exploration and exploitation measures, the evolution process produces relatively good results in terms of fitness and/or convergence rate when applied to a practical chemical engineering problem of fitting Sovová's model. We also conducted an objective statistical analysis using the Bonferroni–Dunn test and a sensitivity analysis on the experimental results. The statistical analysis again confirmed that the parameter control strategy using exploration and exploitation measures is competitive with the other approaches presented in the paper. The sensitivity analysis also showed that different initial values may affect the output to different degrees.
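As a rough illustration of parameter control driven by an exploration/exploitation measure, the hypothetical sketch below nudges a mutation rate towards a target share of exploration. The measure itself (what share of the recent search counts as exploration) is assumed to be supplied by quantitative measures of the kind the paper builds on; all names and thresholds here are our own assumptions.

```python
def adapt_mutation_rate(mutation_rate: float, exploration_pct: float,
                        target_pct: float = 0.5, step: float = 0.01) -> float:
    """Illustrative feedback rule: if the measured share of exploration
    falls below the target, raise the mutation rate to push the search
    towards exploration; otherwise lower it to favour exploitation."""
    if exploration_pct < target_pct:
        return min(1.0, mutation_rate + step)
    return max(0.0, mutation_rate - step)
```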
soft computing | 2016
Matej Črepinšek; Shih-Hsi Liu; Luka Mernik; Marjan Mernik
The main objective of this paper is to correct the unreasonable and inaccurate criticism of our previous experiments using the Teaching–Learning-Based Optimization algorithm, and to quantify the amount of error that may arise from incorrect counting of fitness evaluations. It is shown that inexact experiment replication should be avoided in comparisons between meta-heuristic algorithms whenever possible; otherwise, the inexact replication and its margin of error should be explicitly reported.
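The point about counting fitness evaluations can be made concrete with a small wrapper that counts every call to the objective function; this is our own illustration, not code from the paper.

```python
class CountingObjective:
    """Wrap an objective function so that every evaluation is counted.
    Comparing meta-heuristics on a budget of function evaluations is
    only fair if every call, including those made during initialisation
    or repair steps, is counted exactly once."""
    def __init__(self, func):
        self.func = func
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return self.func(x)
```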
International Journal of Innovative Computing and Applications | 2011
Matej Črepinšek; Marjan Mernik; Shih-Hsi Liu
This paper introduces an ancestry tree-based approach for exploration and exploitation analysis. The approach introduces a data structure that records the evolution history of a population, along with a number of exploration and exploitation metrics. Such an approach provides insight not only into how and when exploration and exploitation influence an evolution process, but also into how the genetic structure of an individual is affected. It can be used to better understand the inner workings of an evolutionary algorithm, or during the design phase to develop suitable variation operators with a good balance between exploration and exploitation. The approach is applied to the multi-objective 0/1 knapsack problem.
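A minimal sketch of what such an evolution-history record might look like in Python (the field names are our assumptions, not the paper's):

```python
from dataclasses import dataclass, field

@dataclass
class AncestryNode:
    """Illustrative record of one individual's place in the evolution
    history: which parents produced it and via which operator. Walking
    such a tree afterwards lets one classify each offspring as the
    result of exploration or exploitation, as the paper proposes."""
    individual_id: int
    generation: int
    parents: list = field(default_factory=list)  # ids of parent nodes
    operator: str = "init"                       # e.g. "crossover", "mutation"
```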
International Journal of Knowledge-based and Intelligent Engineering Systems | 2009
Shih-Hsi Liu; Marjan Mernik; Barrett R. Bryant
An evolutionary algorithm is an optimization process comprising two important aspects: exploration discovers potential offspring in new search regions, while exploitation utilizes promising solutions already identified. An intelligent balance between these two aspects may drive the search process towards better fitness results and/or faster convergence rates. Yet how and when to control this balance perceptively has not yet been comprehensively addressed. This paper introduces an entropy-driven approach for evolutionary algorithms. Five kinds of entropy for expressing diversity are presented, and the balance between exploration and exploitation is adaptively controlled by one kind of entropy and the mutation rate in a metaprogramming fashion. The experimental results on benchmark functions show that the entropy-driven approach achieves an explicit balance between exploration and exploitation and hence obtains even better fitness values and/or convergence rates.
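The simplest of the diversity entropies is straightforward to compute. The sketch below shows Shannon entropy over the distribution of distinct genotypes in a population, as one illustrative instance of the kind of measure the paper studies; the paper itself defines five variants, and this need not match any of them exactly.

```python
import math
from collections import Counter

def shannon_entropy(population: list[str]) -> float:
    """Shannon entropy over distinct genotype strings. High entropy
    signals a diverse (exploring) population; entropy near zero signals
    convergence (exploitation)."""
    counts = Counter(population)
    n = len(population)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```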
Information Sciences | 2016
Niki Veček; Marjan Mernik; Bogdan Filipič; Matej Črepinšek
Meta-heuristic algorithms should be compared using the best parameter values for all the algorithms involved. However, this is often not realised despite the existence of several parameter tuning approaches. In order to further popularise tuning, this paper introduces a new tuning method, CRS-Tuning, that is based on meta-evolution and our novel method for comparing and ranking evolutionary algorithms, the Chess Rating System for Evolutionary Algorithms (CRS4EAs). The utility or performance a parameter configuration achieves in comparison with other configurations is based on its rating, rating deviation, and rating interval. During each iteration, significantly worse configurations are removed and new configurations are formed through crossover and mutation. The proposed tuning method was empirically compared to two well-known tuning methods, F-Race and Revac, through extensive experimentation in which the parameters of Artificial Bee Colony, Differential Evolution, and the Gravitational Search Algorithm were tuned. Each of the presented methods has its own features as well as advantages and disadvantages. The configurations found by CRS-Tuning were comparable to those found by F-Race and Revac, and although they were not always significantly different under null-hypothesis statistical testing, CRS-Tuning displayed many useful advantages: when configurations are similar in performance it tunes parameters faster than F-Race, and there are no limitations in tuning categorical parameters.
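The overall shape of such a meta-evolutionary tuning loop can be sketched as follows. This is a schematic outline under our own assumptions, not the paper's implementation: the rate, crossover, and mutate callables stand in for the CRS4EAs tournament and the variation operators.

```python
import random

def crs_tuning(configs, rate, crossover, mutate, iterations=10):
    """Schematic meta-evolutionary tuning loop: rate all parameter
    configurations (the paper does this with a CRS4EAs tournament),
    drop the worse-rated half, and refill the pool by recombining and
    mutating survivors. Configurations are assumed hashable, e.g.
    tuples of parameter values."""
    pool_size = len(configs)
    for _ in range(iterations):
        ratings = rate(configs)  # maps each configuration to a rating
        ranked = sorted(configs, key=lambda c: ratings[c], reverse=True)
        survivors = ranked[: max(2, pool_size // 2)]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pool_size - len(survivors))]
        configs = survivors + children
    return configs
```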
Applied Soft Computing | 2017
Niki Veček; Matej Črepinšek; Marjan Mernik
Highlights: NHST and CRS4EAs were compared with respect to k, N, and n. Both methods give similar conclusions for different numbers of algorithms k. The number of problems N affects NHST more than CRS4EAs. When the number of independent runs n is small, CRS4EAs is more reliable than NHST.

When conducting a comparison between multiple algorithms on multiple optimisation problems, it is expected that the number of algorithms, the number of problems, and even the number of independent runs will affect the final conclusions. The question in this research was to what extent these three factors affect the conclusions of standard Null Hypothesis Significance Testing (NHST) and the conclusions of our novel method for comparison and ranking, the Chess Rating System for Evolutionary Algorithms (CRS4EAs). An extensive experiment was conducted and the results of k=16 algorithms on N=40 optimisation problems over n=100 runs were gathered and saved. These results were then analysed to show how these three values affect the final results and the ranking, and which values provide unreliable results. The influence of the number of algorithms was examined for values k={4, 8, 12, 16}, the number of problems for values N={5, 10, 20, 40}, and the number of independent runs for values n={10, 30, 50, 100}. We were also interested in the comparison between the two methods, NHST's Friedman test with the post-hoc Nemenyi test and CRS4EAs, to see whether one has advantages over the other. While the conclusions after analysing the values of k were quite similar, this research showed that a wrong value of N can give unreliable results when analysing with the Friedman test: for small values of N, the Friedman test detects few or no significant differences, whereas CRS4EAs does not have this problem. We have also shown that CRS4EAs is an appropriate method when only a small number of independent runs n is available.
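For readers who want to reproduce the NHST side of such a comparison, the Friedman test is available in SciPy. The results matrix below is fabricated purely for illustration; note that, as the paper itself warns, conclusions drawn from such a small number of problems are unreliable.

```python
from scipy import stats

# Hypothetical results: mean best fitness of 4 algorithms (columns)
# on 5 minimisation problems (rows). The Friedman test checks whether
# at least one algorithm ranks consistently differently; a post-hoc
# test (e.g. Nemenyi) would then locate the pairwise differences.
results = [
    [0.12, 0.10, 0.30, 0.11],
    [1.50, 1.40, 2.10, 1.45],
    [0.03, 0.02, 0.09, 0.02],
    [5.10, 4.90, 6.30, 5.00],
    [0.77, 0.70, 1.10, 0.73],
]
columns = list(zip(*results))  # one sample per algorithm
statistic, p_value = stats.friedmanchisquare(*columns)
print(f"Friedman statistic={statistic:.3f}, p={p_value:.3f}")
```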
Applied Soft Computing | 2017
Miha Ravber; Marjan Mernik; Matej Črepinšek
Graphical abstract (figure omitted): the confidence intervals of the Quality Indicators GD and R2, which both assess convergence to the Pareto-optimal front, contradict each other and rank the MOEAs in exactly the opposite order; CRS4EAs discovered significant differences between the MOEAs, showing the impact of different QIs on the ranking even when they assess the same aspect of quality.

Highlights: A detailed analysis of Quality Indicators using a novel method called the Chess Rating System for Evolutionary Algorithms (CRS4EAs). Experiments conducted on synthetic and real-world problems. New knowledge acquired about Quality Indicators.

Evaluating and comparing multi-objective optimizers is an important issue, but it has to be noted that the results of a comparison can be highly influenced by the selected Quality Indicator. Therefore, the impact of individual Quality Indicators on the ranking of multi-objective optimizers must be analyzed beforehand. In this paper, several different Quality Indicators were compared with a method called the Chess Rating System for Evolutionary Algorithms (CRS4EAs) in order to gain better insight into their characteristics and how they affect the ranking of Multi-objective Evolutionary Algorithms (MOEAs). Although it would be expected that Quality Indicators with the same optimization goals yield a similar ranking of MOEAs, it is shown that the results can be contradictory and significantly different, revealing that claims about the superiority of one MOEA over another can be misleading.
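One of the convergence-oriented indicators mentioned above, Generational Distance (GD), is simple enough to sketch. The version below averages the nearest-neighbour Euclidean distances from an obtained front to a reference front; this is one common formulation, and the paper may use a different variant.

```python
import numpy as np

def generational_distance(front: np.ndarray, reference: np.ndarray) -> float:
    """Generational Distance: average Euclidean distance from each
    obtained point (rows of `front`) to its nearest point on the
    reference (Pareto-optimal) front. Lower is better."""
    # pairwise distance matrix of shape (len(front), len(reference))
    dists = np.linalg.norm(front[:, None, :] - reference[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```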