Multimedia Tools and Applications | 2021
Oppositional salp swarm algorithm with mutation operator for global optimization and application in training higher order neural networks
Abstract
The effectiveness of any swarm-based metaheuristic optimization algorithm depends on a proper balance between the operators used for exploration and exploitation. The absence of balance between these two factors leads to poor performance: stagnation in local optima and premature convergence prevent the algorithm from attaining the global optimum. The Salp Swarm Algorithm (SSA) is a recently developed optimization technique intended to solve continuous, non-linear, and complex real-world optimization problems. For solving complex real-life problems, however, the explorative strength of the existing SSA is inadequate. This paper therefore proposes an improved algorithm, termed OBL-MO-SSA, to enhance the performance of the standard SSA. Two techniques, a normally distributed mutation operator and the opposition-based learning concept, are embedded to achieve this purpose. Opposition-based learning evaluates both the current candidate solution and its opposite in the search region simultaneously, so that the closer of the two solutions guides the ongoing evolution process. The mutation operator avoids arbitrary positions in the search region by drawing smaller and larger mutations to balance movement in the current and opposite directions. The proposed OBL-MO-SSA improves exploration and exploitation strength within the search region while exhibiting better convergence speed by successfully avoiding local-optima stagnation. To confirm the efficiency of the proposed OBL-MO-SSA, it is assessed on the IEEE CEC-2017 benchmark problems. The competence and robustness of the proposed OBL-MO-SSA are characterised using performance metrics, complexity analysis, convergence rate, and statistical significance. The Friedman and Holm tests are applied to substantiate its statistical significance. Furthermore, to address complex real-world tasks, the proposed method is used to train a higher-order neural network (FLANN) on 10 standard datasets selected from the UCI repository.
The simulation results reveal that the proposed OBL-MO-SSA can be applied to solve various optimization problems efficiently.
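The two ingredients described above can be illustrated with a minimal sketch. This is not the paper's implementation: the objective function, variable names, and mutation scale are assumptions chosen for illustration; only the opposition formula (reflecting a point across the centre of its bounds) and the use of normally distributed perturbations follow the abstract.

```python
import random

def sphere(x):
    """Illustrative benchmark objective (not from the paper): sum of squares."""
    return sum(v * v for v in x)

def opposite(x, lb, ub):
    """Opposition-based learning: reflect each coordinate across the
    centre of its bounds, x_opp_j = lb_j + ub_j - x_j."""
    return [l + u - v for v, l, u in zip(x, lb, ub)]

def gaussian_mutation(x, lb, ub, sigma=0.1):
    """Normally distributed mutation: small steps are frequent and large
    steps rare, clipped back into the feasible bounds. The scale sigma
    is an assumed parameter, not taken from the paper."""
    return [min(max(v + random.gauss(0.0, sigma * (u - l)), l), u)
            for v, l, u in zip(x, lb, ub)]

random.seed(0)
lb, ub = [-5.0] * 3, [5.0] * 3
x = [random.uniform(l, u) for l, u in zip(lb, ub)]

# Evaluate the current salp position, its opposite, and a mutated copy,
# then keep whichever is closest to the optimum.
candidates = [x, opposite(x, lb, ub), gaussian_mutation(x, lb, ub)]
best = min(candidates, key=sphere)
```

In a full SSA loop this selection would be applied per salp per iteration, so the swarm always retains the better of each position and its opposite while mutation injects controlled diversity.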