Sedigheh Mahdavi
University of Ontario Institute of Technology
Publication
Featured research published by Sedigheh Mahdavi.
Soft Computing | 2017
Sedigheh Mahdavi; Shahryar Rahnamayan; Mohammad Ebrahim Shiri
Large-scale global optimization (LSGO) algorithms are crucially important for handling real-world problems. Recently, cooperative co-evolution (CC) algorithms have been applied successfully to many large-scale practical problems. Many applications have imbalanced subcomponents, where the sizes of the subcomponents and their contributions to the objective function value differ. CC algorithms often lose their efficiency on LSGO problems with imbalanced subcomponents because they do not take the imbalance among variables into account. In this paper, we propose a multilevel optimization framework based on variable effects (called MOFBVE), which optimizes several subcomponents of the most important variables at the earlier stages of the optimization procedure before optimizing the problem in the original search space at the last stage. A sensitivity analysis (SA) method determines how the variation in the outputs of a model is influenced by the variation of its input parameters. MOFBVE computes the main effect of the variables using an SA method, Morris screening, and then employs the k-means clustering method to construct groups of variables with similar effects on the fitness value. The constructed groups are sorted in descending order of their contribution to the fitness value, and the top groups are selected as the levels of important variables. MOFBVE reduces the complexity of the search space so that it can work with a simplified model and achieve an efficient exploration. The performance of MOFBVE is benchmarked on imbalanced LSGO problems, i.e., individually modified versions of the CEC-2010 and CEC-2013 LSGO benchmark functions. The simulation experiments confirm that MOFBVE obtains a promising performance on the majority of the imbalanced LSGO test functions. MOFBVE is also compared with state-of-the-art CC algorithms; the results show that it is better than, or at least comparable to, these algorithms.
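As a rough illustration of the variable-grouping idea described above, the sketch below estimates each variable's main effect with a Morris-style one-at-a-time perturbation and then splits the variables into importance levels. The objective function f, the number of trajectories, and the equal split into levels (in place of the paper's k-means step) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def elementary_effects(f, lower, upper, n_trajectories=10, delta=0.1):
    """Morris-style one-at-a-time estimate of each variable's main effect."""
    dim = len(lower)
    effects = np.zeros(dim)
    span = upper - lower
    for _ in range(n_trajectories):
        x = lower + np.random.rand(dim) * span
        fx = f(x)
        for i in range(dim):
            x_step = x.copy()
            x_step[i] += delta * span[i]          # perturb one variable at a time
            effects[i] += abs(f(x_step) - fx) / delta
    return effects / n_trajectories

def group_by_effect(effects, n_levels=3):
    """Split variable indices into n_levels groups, most influential first."""
    order = np.argsort(effects)[::-1]             # descending main effect
    return np.array_split(order, n_levels)

# usage: optimize the top group(s) first, then the full search space
if __name__ == "__main__":
    f = lambda x: np.sum(np.arange(1, x.size + 1) * x ** 2)  # imbalanced toy problem
    lower, upper = np.full(20, -5.0), np.full(20, 5.0)
    eff = elementary_effects(f, lower, upper)
    levels = group_by_effect(eff, n_levels=3)
    print("variable levels (most important first):", [g.tolist() for g in levels])
```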
Congress on Evolutionary Computation | 2016
Sedigheh Mahdavi; Shahryar Rahnamayan; Kalyanmoy Deb
The Cooperative Coevolution (CC) framework has become a powerful approach for solving large-scale global optimization problems effectively. Although a number of significant modifications of CC algorithms have been introduced in recent years, theoretical studies of population initialization strategies in the CC framework are still quite limited. Population initialization strategies can help a population-based algorithm start with better candidate solutions and thus achieve better results. In this paper, we propose a CC algorithm with population initialization strategies based on the center region to improve its performance. Three population initialization strategies, namely center-based normal distribution sampling, central golden region, and hybrid random-center normal distribution sampling, are utilized in the CC framework. These strategies attempt to generate points around the center point with different schemes. The performance of the proposed algorithm is evaluated on the CEC-2013 LSGO benchmark functions. Simulation results confirm that the proposed algorithm obtains a promising performance on the majority of the nonseparable high-dimensional benchmark functions.
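The following is a minimal sketch of what center-based normal distribution sampling and the hybrid random-center variant might look like; the spread and center_ratio parameters are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def center_normal_init(pop_size, lower, upper, spread=0.2):
    """Sample initial candidates from a normal distribution centred on the
    middle of the search box, then clip back into bounds."""
    center = (lower + upper) / 2.0
    sigma = spread * (upper - lower)
    pop = np.random.normal(center, sigma, size=(pop_size, len(lower)))
    return np.clip(pop, lower, upper)

def hybrid_random_center_init(pop_size, lower, upper, center_ratio=0.5):
    """Mix uniform-random individuals with center-based ones."""
    n_center = int(pop_size * center_ratio)
    uniform = lower + np.random.rand(pop_size - n_center, len(lower)) * (upper - lower)
    return np.vstack([center_normal_init(n_center, lower, upper), uniform])
```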
Soft Computing | 2018
Sedigheh Mahdavi; Shahryar Rahnamayan; Mohammad Ebrahim Shiri
Cooperative coevolution (CC) is an efficient framework for solving large-scale global optimization (LSGO) problems. It uses a decomposition method to divide an LSGO problem into several low-dimensional subcomponents, which are then optimized. Since CC algorithms do not consider any imbalance feature, their performance degrades when solving imbalanced LSGO problems. In this paper, we propose an incremental CC (ICC) algorithm that optimizes an integrated subcomponent to which subcomponents are dynamically added. The search space of the optimizer therefore grows incrementally toward the original problem's search space. The successive search spaces are built according to three approaches, namely random-based, sensitivity analysis-based, and random sensitivity analysis-based methods; ICC then explores these search spaces effectively. The random-based method selects a subcomponent at random to add to the current search space, while the sensitivity analysis-based method uses a sensitivity analysis strategy to select a subcomponent. The random sensitivity analysis-based strategy is a hybrid of the two. Theoretical analysis is provided to demonstrate that the proposed ICC-based algorithms are effective for solving imbalanced LSGO problems. Finally, the efficiency of these algorithms is benchmarked on complex imbalanced LSGO problems. Simulation results confirm that ICC obtains a better overall performance.
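A minimal sketch of the incremental idea, assuming a pre-computed decomposition `groups` (e.g., ordered by a sensitivity estimate) and a black-box `optimize(active_indices, x, budget)` callback that only moves the active variables; both names are hypothetical and not from the paper.

```python
import numpy as np

def incremental_cc(f, groups, lower, upper, optimize, budget_per_stage=1000):
    """Grow the active search space one subcomponent at a time.
    `groups` is a list of index arrays (a decomposition of the variables) and
    `optimize(active, x, budget)` improves x only on the indices in `active`."""
    dim = len(lower)
    x = lower + np.random.rand(dim) * (upper - lower)   # full-length context vector
    active = np.array([], dtype=int)
    for g in groups:                                    # e.g. most sensitive group first
        active = np.concatenate([active, np.asarray(g, dtype=int)])
        x = optimize(active, x, budget_per_stage)       # only active indices move
    return x, f(x)
```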
Swarm and Evolutionary Computation | 2017
Sedigheh Mahdavi; Shahryar Rahnamayan; Kalyanmoy Deb
Opposition-based Learning (OBL) is a new concept in machine learning, inspired by the opposite relationship among entities. The concept of opposition was first introduced in 2005 and has attracted considerable research effort over the last decade. A variety of soft computing algorithms, such as optimization methods, reinforcement learning, artificial neural networks, and fuzzy systems, have already utilized the concept of OBL to improve their performance. This survey covers three classes of OBL work: a) theoretical, including the mathematical theorems and fundamental definitions, b) developmental, focusing on the design of special OBL-based schemes, and c) real-world applications of OBL. More than 380 papers in a variety of disciplines are surveyed, and a comprehensive set of promising directions is discussed in detail.
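The core construct surveyed here is the opposite point, x_opp_i = a_i + b_i - x_i over the box [a, b]. The snippet below shows that definition together with a common opposition-based initialization step; the keep-the-fitter-half rule follows the usual OBL literature rather than any specific algorithm in this survey.

```python
import numpy as np

def opposite(x, lower, upper):
    """Type-I opposite point: x_opp_i = a_i + b_i - x_i."""
    return lower + upper - x

def opposition_based_init(f, pop_size, lower, upper):
    """Evaluate a random population and its opposites, keep the fitter half
    (assuming minimization)."""
    pop = lower + np.random.rand(pop_size, len(lower)) * (upper - lower)
    both = np.vstack([pop, opposite(pop, lower, upper)])
    fitness = np.apply_along_axis(f, 1, both)
    return both[np.argsort(fitness)[:pop_size]]
```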
Applied Intelligence | 2017
Sedigheh Mahdavi; Shahryar Rahnamayan; Mohammad Ebrahim Shiri
Cooperative co-evolution has proven to be a successful approach for solving large-scale global optimization (LSGO) problems. These algorithms decompose an LSGO problem into several smaller subcomponents using a decomposition method, and each subcomponent of variables is optimized by a certain optimizer. They use a simple technique, the round-robin method, to assign the computational time equally. Since standard cooperative co-evolution algorithms allocate the computational budget equally, their performance deteriorates on LSGO problems whose subcomponents have differing effects on the objective function. For this reason, it can be very useful to detect the subcomponents' effects on the objective function in LSGO problems. Sensitivity analysis methods can be employed to identify the most significant variables of a model. In this paper, we propose a cooperative co-evolution algorithm with a sensitivity analysis-based budget assignment method (SACC), which allocates the computational time among the subcomponents according to their different effects on the objective function. SACC is benchmarked on imbalanced LSGO problems. Simulation results confirm that SACC obtains a promising performance on the majority of the imbalanced LSGO benchmark functions.
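A minimal sketch of a sensitivity-proportional budget split, assuming the per-subcomponent sensitivities have already been estimated; the min_share floor is an illustrative assumption and not necessarily how SACC allocates its budget.

```python
import numpy as np

def assign_budgets(sensitivities, total_budget, min_share=0.05):
    """Split the evaluation budget among subcomponents in proportion to their
    estimated effect on the objective, with a small guaranteed minimum share."""
    s = np.asarray(sensitivities, dtype=float)
    weights = min_share + (1.0 - min_share * len(s)) * s / s.sum()
    return np.floor(weights * total_budget).astype(int)

# e.g. three subcomponents whose contributions differ by orders of magnitude
print(assign_budgets([100.0, 10.0, 1.0], total_budget=30000))
```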
Soft Computing | 2018
Sedigheh Mahdavi; Shahryar Rahnamayan; Abbas Mahdavi
Population-based metaheuristic algorithms have been extensively applied to solve discrete optimization problems. Generally speaking, they work with a set of candidate solutions in the population, which evolve over generations using various reproduction and selection operations to find the optimal solution(s). The population is similar to a small society with several individuals seeking a common goal/solution. This study is motivated by the election systems of societies, which can be applied in population-based algorithms. We propose using majority voting in discrete population-based optimization algorithms: the information of all candidate solutions in the current generation is used to create a new trial candidate solution, called the president candidate solution. During the optimization process, after the evolutionary operations are applied, all candidate solutions vote collectively to determine the values of the president's variables. In the proposed method, a majority vote is used to choose the value of each variable (gene) of the president candidate solution. This method leaves all other steps of the population-based algorithm untouched; therefore, it can be used with any kind of population-based algorithm. As case studies, the discrete differential evolution (DDE) algorithm and discrete particle swarm optimization (DPSO) are used as parent algorithms to develop majority voting-based discrete DE (MVDDE) and majority voting-based discrete PSO (MVDPSO). These two algorithms are evaluated on fifteen discrete benchmark functions with dimensions D = 10, 30, 50, 100, 200, and 500. Simulation results confirm that the majority voting-based discrete optimization algorithms obtain a promising performance on the majority of the benchmark functions. In addition, we conduct tests on large-scale 0–1 knapsack problems as a real-world application.
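A small sketch of the per-gene majority vote that builds the president candidate solution, assuming a discrete population stored row-wise in a NumPy array; tie-breaking here simply takes the first most frequent value, which is an assumption rather than the paper's rule.

```python
import numpy as np

def president(population):
    """Build a trial 'president' solution by a per-gene majority vote over a
    discrete population (rows = candidate solutions, columns = genes)."""
    pop = np.asarray(population)
    pres = np.empty(pop.shape[1], dtype=pop.dtype)
    for j in range(pop.shape[1]):
        values, counts = np.unique(pop[:, j], return_counts=True)
        pres[j] = values[np.argmax(counts)]     # most frequent value wins
    return pres

# binary example: the president collects the majority bit of each column
pop = np.array([[1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 0, 1, 1]])
print(president(pop))   # -> [1 0 1 1]
```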
Congress on Evolutionary Computation | 2017
Sedigheh Mahdavi; Shahryar Rahnamayan
In recent years, Large-Scale Global Optimization (LSGO) algorithms have attempted to solve real-world problems efficiently. The imbalance in the contributions of variables and the interaction among variables pose major challenges for LSGO algorithms. This paper proposes mapping schemes based on the interaction among variables and the imbalance in their contributions. The proposed mapping schemes describe the different relations between the classes of variables constructed according to the interaction feature and those constructed according to the imbalance feature. The mapping schemes are designed to cover a wide range of real-world problems; therefore, they can provide some insights for designing LSGO benchmark suites. By developing LSGO benchmark suites capable of representing many real-world problems, researchers will be motivated to assess the success or failure of LSGO algorithms in tackling various types of LSGO problems. A preliminary set of experiments is also conducted to demonstrate the importance of the features considered in each scheme.
Congress on Evolutionary Computation | 2017
Sedigheh Mahdavi; Shahryar Rahnamayan; Chirag Karia
Differential Evolution (DE) is a simple yet powerful evolutionary algorithm for solving global continuous optimization problems. A special characteristic of the DE algorithm is that it calculates a weighted difference vector of two random candidate solutions in the population to generate new promising candidate solutions. A major operation of the DE algorithm is the mutation, which can affect its performance. The main goal of this study is to investigate the influence of ordering the vectors in various mutation schemes. We design Monte-Carlo based simulations to analyze several mutation schemes by calculating the probability that a new trial solution is close to a random optimal solution. These simulations indicate that mutation schemes that consider the right ordering of the vectors in their mutation operators can enhance the performance of the DE algorithm. We also introduce a new mutation scheme that orders the vectors in the mutation operator. We benchmark the modified DE algorithm with the ordered mutation scheme (DE/order) on the CEC-2014 test functions with dimensions 30, 50, and 100. Simulation results confirm that DE/order obtains a promising performance on the majority of the test functions for all mentioned dimensions.
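A minimal sketch of an ordered DE mutation, assuming a DE/rand/1-style operator in which the three randomly chosen vectors are sorted by fitness before the difference is formed; the exact pairing used by DE/order may differ from this reading.

```python
import numpy as np

def ordered_mutation(pop, fitness, F=0.5):
    """DE/rand/1-style mutation in which the three randomly chosen vectors are
    first ordered by fitness (minimization), so the difference always points
    from the worse vector toward the better one."""
    n = len(pop)                                  # requires n >= 4
    mutants = np.empty_like(pop)
    for i in range(n):
        r = np.random.choice([k for k in range(n) if k != i], 3, replace=False)
        r = r[np.argsort(fitness[r])]             # best, middle, worst
        best, mid, worst = pop[r[0]], pop[r[1]], pop[r[2]]
        mutants[i] = best + F * (mid - worst)
    return mutants
```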
IEEE Symposium Series on Computational Intelligence | 2016
Sedigheh Mahdavi; Shahryar Rahnamayan; Kalyanmoy Deb
Opposition-based learning (OBL) has been gaining significant attention in machine learning, especially in metaheuristic optimization algorithms, which take advantage of OBL to enhance their performance. In OBL, all variables are changed to their opposites, even when some variables currently hold proper values; these values are discarded and converted to worse ones by taking the opposite. The partial opposition scheme was developed to randomly change some variables to their opposites, but it does not pay enough attention to identifying and keeping variables that hold proper values. In this paper, we propose a novel partial opposition scheme that is generated based on the current best candidate solution. It tries to generate new trial solutions from the candidate solutions and their opposites such that some variables of a candidate solution remain unchanged while the other variables are changed to their opposites in the trial solution (i.e., gene/variable based optimization). Variables in the trial solution are identified as close or far according to their Euclidean distance from the corresponding variables/genes in the current best candidate solution. The proposed scheme takes the opposite of the variables that are closer to the current best solution. Only the new trial solutions that are closer to the corresponding opposite solution are included in the next generation. As a case study, we employ the proposed partial opposition scheme in the DE algorithm, and the partial opposition-based DE is evaluated on the CEC-2014 benchmark functions. Simulation results confirm that the partial opposition-based DE obtains a promising performance on the majority of the benchmark functions. The proposed algorithm is compared with Opposition-based DE (ODE) and the random partial opposition-based DE algorithm (DE-RPO); the results show that the new method is better than, or at least comparable to, the other competitors.
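A minimal sketch of one plausible reading of the scheme: only variables whose values lie close to the corresponding variable of the current best solution are flipped to their opposites. The median-based closeness threshold is an assumption, not the paper's rule.

```python
import numpy as np

def partial_opposite(x, best, lower, upper):
    """Flip to the opposite only those variables whose value is close to the
    corresponding variable of the current best solution (one plausible reading
    of the scheme; the paper's exact closeness rule may differ)."""
    dist = np.abs(x - best)
    close = dist <= np.median(dist)               # 'close' = below the median gap
    trial = x.copy()
    trial[close] = lower[close] + upper[close] - x[close]
    return trial
```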
Canadian Conference on Electrical and Computer Engineering | 2018
Sedigheh Mahdavi; Shahryar Rahnamayan