IEEE Transactions on Evolutionary Computation | 2021

Towards Large-Scale Evolutionary Multi-Tasking: A GPU-Based Paradigm


Abstract


Evolutionary Multi-Tasking (EMT), which shares knowledge across multiple tasks while the optimization progresses online, has demonstrated superior performance over its single-task counterpart on complex optimization problems, in terms of both optimization quality and convergence speed. However, most existing EMT algorithms only consider handling two tasks simultaneously. As the computational cost incurred by the evolutionary search and knowledge transfer increases rapidly with the number of optimization tasks, these EMT algorithms cannot meet today's requirements of cloud-based optimization services for many real-world applications, where hundreds or thousands of optimization requests (labelled here as large-scale EMT) are often received simultaneously and need to be optimized within a short time. Recently, Graphics Processing Unit (GPU) computing has attracted extensive attention for accelerating applications with large data volumes that were traditionally handled by the Central Processing Unit (CPU). Taking this cue, towards large-scale EMT, this paper proposes a new EMT paradigm based on the island model with the Compute Unified Device Architecture (CUDA), which is able to handle a large number of continuous optimization tasks efficiently and effectively. Moreover, under the proposed paradigm, we develop GPU-based implicit and explicit knowledge transfer mechanisms for EMT. To evaluate the performance of the proposed paradigm, comprehensive empirical studies have been conducted against its CPU-based counterpart on large-scale EMT.
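To make the island-model idea concrete, the following is a minimal CUDA sketch (not the authors' implementation; all names, the shifted Sphere benchmark, and the population/dimension sizes are illustrative assumptions). It shows how one thread block can be assigned to each task's island and one thread to each individual, so that the fitness of all islands is evaluated in parallel on the GPU.

```cuda
// Hypothetical sketch: one CUDA thread block per task (island),
// one thread per individual, evaluating a per-task shifted Sphere function.
#include <cstdio>
#include <cuda_runtime.h>

#define DIM 50           // decision variables per individual (assumed)
#define POP_PER_TASK 128 // island (sub-population) size (assumed)

__global__ void evaluateIslands(const float *pop,   // [numTasks * POP_PER_TASK * DIM]
                                const float *shift, // per-task optimum shift [numTasks * DIM]
                                float *fitness)     // [numTasks * POP_PER_TASK]
{
    int task = blockIdx.x;   // each block owns one optimization task (island)
    int ind  = threadIdx.x;  // each thread evaluates one individual
    const float *x = pop   + (size_t)(task * POP_PER_TASK + ind) * DIM;
    const float *s = shift + (size_t)task * DIM;

    float f = 0.0f;
    for (int d = 0; d < DIM; ++d) {
        float z = x[d] - s[d];   // shifted Sphere: f(x) = sum_d (x_d - s_d)^2
        f += z * z;
    }
    fitness[task * POP_PER_TASK + ind] = f;
}

int main() {
    const int numTasks = 1000;   // "large-scale EMT": many tasks optimized at once
    size_t popBytes   = (size_t)numTasks * POP_PER_TASK * DIM * sizeof(float);
    size_t shiftBytes = (size_t)numTasks * DIM * sizeof(float);
    size_t fitBytes   = (size_t)numTasks * POP_PER_TASK * sizeof(float);

    float *dPop, *dShift, *dFit;
    cudaMalloc(&dPop, popBytes);
    cudaMalloc(&dShift, shiftBytes);
    cudaMalloc(&dFit, fitBytes);
    cudaMemset(dPop, 0, popBytes);     // placeholder initialization
    cudaMemset(dShift, 0, shiftBytes);

    // Launch: grid = tasks, block = island population; all islands are evaluated concurrently.
    evaluateIslands<<<numTasks, POP_PER_TASK>>>(dPop, dShift, dFit);
    cudaDeviceSynchronize();

    printf("Evaluated %d tasks x %d individuals on the GPU.\n", numTasks, POP_PER_TASK);
    cudaFree(dPop); cudaFree(dShift); cudaFree(dFit);
    return 0;
}
```

In such a layout, the implicit or explicit knowledge transfer described in the abstract would correspond to additional kernels or device-to-device copies that exchange individuals between islands (i.e., between blocks), but those mechanisms are not detailed here.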

DOI 10.1109/tevc.2021.3110506
Language English
Journal IEEE Transactions on Evolutionary Computation
