IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2021

Neural-ILT 2.0: Migrating ILT to Domain-specific and Multi-task-enabled Neural Network


Abstract


Optical proximity correction (OPC) in modern design closure has become extremely expensive and challenging. Conventional model-based OPC suffers from performance degradation and large process variation, while aggressive approaches such as inverse lithography technology (ILT) incur large computational overhead in both the mask optimization and mask writing processes. In this paper, we develop Neural-ILT, an end-to-end learning-based OPC framework that performs mask prediction and ILT correction for a given layout within a single neural network, with the objectives of (1) mask printability enhancement, (2) mask complexity optimization, and (3) flow acceleration. A domain-specific model pre-training recipe, which injects domain knowledge of the lithography system, is proposed to help Neural-ILT achieve faster and better convergence. Quantitative results show that, compared with state-of-the-art (SOTA) learning-based OPC solutions and conventional OPC flows, Neural-ILT achieves 15× to 30× turnaround time (TAT) speedup and the best mask printability with relatively low mask complexity. Based on the developed infrastructure, we further investigate the feasibility of handling multiple mask optimization tasks on different datasets within a common Neural-ILT platform. We believe this work can bridge well-developed deep learning toolkits to GPU-based high-performance lithographic computation, enabling substantial performance gains on various computational lithography tasks.
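As background for readers unfamiliar with ILT: it frames mask optimization as gradient descent through a differentiable lithography forward model, which is also the correction loop Neural-ILT embeds in a network. The toy sketch below illustrates only that inverse-optimization idea under stand-in assumptions (a symmetric smoothing filter as the "optical model", a sigmoid relaxation for the binary mask, all names and constants invented here); it is not the paper's actual model or flow.

```python
import numpy as np

def blur(x, reps=4):
    # Symmetric neighbor-averaging smoother (its own adjoint), a toy
    # stand-in for the real optical/resist forward model in ILT.
    for _ in range(reps):
        x = (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0
    return x

# Toy target layout: a 16x16 contact on a 64x64 grid.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0

STEEPNESS = 4.0   # sigmoid slope for the relaxed (continuous) mask
LR = 0.2          # gradient-descent step size

p = np.zeros_like(target)   # latent mask parameters, mask = sigmoid(p)
losses = []
for _ in range(300):
    mask = 1.0 / (1.0 + np.exp(-STEEPNESS * p))  # relaxed mask in (0, 1)
    printed = blur(mask)                         # forward litho surrogate
    resid = printed - target
    losses.append(float(np.sum(resid ** 2)))     # printability loss
    # Backpropagate through the forward model: the adjoint of a
    # symmetric blur is the same blur, then apply the sigmoid chain rule.
    grad_mask = blur(2.0 * resid)
    grad_p = grad_mask * STEEPNESS * mask * (1.0 - mask)
    p -= LR * grad_p

final_mask = (mask > 0.5).astype(float)  # binarize the optimized mask
```

In this sketch the optimized mask grows correction features around the target edges so the blurred ("printed") image matches the layout better than the layout itself would; Neural-ILT's contribution is replacing such per-layout iterative loops with a single trained network plus an on-network correction step.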

DOI 10.1109/TCAD.2021.3109556
Language English
Journal IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
