Ching-Heng Wang
Semiconductor Manufacturing International Corporation
Publications
Featured research published by Ching-Heng Wang.
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Chi-Yuan Hung; Ching-Heng Wang; Qingwei Liu; Cliff Ma; Kechih Wu; Gary Zhang
All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains the information that can be used for process predictions and analysis of a given process. However, some physics of the process remains unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still brings inaccurate measurements into the model fitting process (as Fig. 1), which makes fitting more time consuming and can also cause loss of convergence and stability. This work weights different wafer data points with a weighting function. The weighting function depends on the deviation (or range, or another statistical index) of each measurable symmetric feature in the sampling space of the model fitting. Using this approach, we can filter out erroneous process information and make the OPC model more accurate (as Fig. 2). NanoScope-Modeler is the platform used in this study; it has been proven to perform well for 0.13μm, 90nm and 65nm production and development model setups. Leveraging its automatic optical-tuning function, we determined the best weighting approach to achieve the most efficient and convergent tuning flow.
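The core idea can be illustrated with a small, self-contained sketch (the gauge data, the 1/variance weighting rule, and the linear stand-in model are illustrative assumptions, not the NanoScope-Modeler flow): repeated measurements with a large spread get a small weight, so a noisy gauge barely pulls the fit.

```python
import numpy as np

# Hypothetical calibration gauges: design CD (nm) and repeated wafer CD
# measurements (nm).  The 1/variance weighting is an illustration of the
# paper's idea, not the actual tool implementation.
design_cd = np.array([80.0, 100.0, 120.0, 140.0, 160.0])
wafer_cd = {
    80.0:  [78.5, 79.1, 78.8],
    100.0: [99.0, 98.6, 98.9],
    120.0: [118.4, 118.9, 118.6],
    140.0: [139.0, 151.2, 146.5],   # noisy gauge -> should be down-weighted
    160.0: [158.2, 158.7, 158.5],
}

targets = np.array([np.mean(wafer_cd[d]) for d in design_cd])
spreads = np.array([np.std(wafer_cd[d], ddof=1) for d in design_cd])
weights = 1.0 / (spreads ** 2 + 1e-6)        # trust repeatable gauges more

# Fit a simple linear "model" (a stand-in for the real OPC model terms),
# once unweighted and once weighted, to show the effect of the weighting.
unweighted = np.polyfit(design_cd, targets, 1)
weighted = np.polyfit(design_cd, targets, 1, w=np.sqrt(weights))

for label, coeff in (("unweighted", unweighted), ("weighted", weighted)):
    residuals = np.polyval(coeff, design_cd) - targets
    print(f"{label:10s} slope={coeff[0]:.3f} offset={coeff[1]:.2f} "
          f"max |residual| on clean gauges = "
          f"{np.max(np.abs(residuals[spreads < 1.0])):.2f} nm")
```

In this toy data set the weighted fit tracks the four repeatable gauges closely, while the unweighted fit is visibly dragged toward the noisy one.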
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Ching-Heng Wang; Qingwei Liu; Liguo Zhang
A dilemmatic trade-off that all OPC engineers face every day is balancing the convergence of the OPC result against the number of OPC iterations. Theoretically, an infinite number of OPC iterations is needed to achieve a convergent and stable correction result; in practice there must always be a cut-off on the iteration count, because turnaround time is an important criterion for IC fabs. At the same time, as design layouts become more complicated and pattern density increases with the shrinkage of the critical dimension, fragmentation control during the OPC procedure also becomes more and more sophisticated. Achieving a convergent correction result for all OPC fragments within a limited number of correction iterations has therefore become a major challenge for OPC engineers. This work presents our study of a new OPC iteration control methodology. It helps find an algorithm that always converges and reduces the excessive parameter settings, commands and other involvement required from the user. With this, we can reduce the run time required to obtain a convergent OPC solution.
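A minimal sketch of the kind of damped, self-terminating correction loop the abstract alludes to (the toy EPE model, the gain schedule and the tolerance are assumptions for illustration, not the actual recipe): each fragment's edge is moved by a fraction of its edge placement error, and the fraction shrinks as iterations proceed so the loop cannot oscillate indefinitely.

```python
import numpy as np

def simulate_epe(edge_bias, initial_epe, sensitivity=0.8, nonlinearity=0.005):
    """Toy printed-vs-target edge placement error for a set of fragments.
    A real flow would call the lithography simulator here."""
    return initial_epe - sensitivity * edge_bias + nonlinearity * edge_bias ** 2

# Hypothetical initial errors (nm) for five OPC fragments.
initial_epe = np.array([6.0, -4.5, 2.0, -7.5, 3.3])
edge_bias = np.zeros_like(initial_epe)   # correction applied so far
tolerance_nm = 0.1                       # stop once every fragment has converged
max_iters = 20

for it in range(max_iters):
    epe = simulate_epe(edge_bias, initial_epe)
    if np.max(np.abs(epe)) < tolerance_nm:
        break
    damping = 1.0 / (1.0 + 0.3 * it)     # shrinking gain prevents oscillation
    edge_bias += damping * epe           # move edges to cancel the residual error

print(f"stopped after {it + 1} iterations, worst residual EPE = "
      f"{np.max(np.abs(epe)):.3f} nm")
```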
Proceedings of SPIE | 2008
Ching-Heng Wang; Qingwei Liu; Liguo Zhang
Within the past several years, IC design and manufacturing technology nodes have transitioned rapidly from 0.13um to 65nm and 45nm. Whatever the technology node, the goal on which both designers and manufacturers spend most of their effort is raising chip yield as high as possible. A large body of evidence shows that final yield is strongly tied to the pattern transfer from design to wafer. As the critical dimension shrinks, the largest challenge the whole industry faces is maintaining high fidelity while transferring the patterns. Since the process window is now very limited even with the assistance of various resolution enhancement technologies, a tiny process deviation may cause a large critical dimension variation, which results in significant changes in device characteristics. Microlithography combined with optical proximity correction is considered the most critical step of pattern transfer. However, conventional OPC uses a nominal model, which does not take random process variation into account when the correction is applied. This work demonstrates our experiments with OPC using a process window model, which is shown to give a clear improvement in pattern fidelity.
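A hedged sketch of the idea behind process-window OPC (the focus/dose grid and the toy CD model are assumptions for illustration): instead of correcting only at the nominal condition, the correction is chosen so that the worst-case edge placement error across a grid of focus and dose conditions stays as small as possible.

```python
import numpy as np
from itertools import product

# Hypothetical process-window corners (defocus in um, dose offset in %).
focus_values = [-0.10, 0.0, 0.10]
dose_values = [-3.0, 0.0, 3.0]

def printed_cd(bias, defocus, dose, target=65.0):
    """Toy CD model: bias shifts CD linearly, defocus narrows the line
    quadratically, dose shifts it linearly.  A real flow would call a
    calibrated process-window model instead."""
    return target + 0.8 * bias - 400.0 * defocus ** 2 + 0.5 * dose

def worst_case_epe(bias, target=65.0):
    return max(abs(printed_cd(bias, f, d, target) - target)
               for f, d in product(focus_values, dose_values))

# Nominal-model OPC: correct only at best focus / nominal dose.
nominal_bias = min(np.linspace(-10, 10, 401),
                   key=lambda b: abs(printed_cd(b, 0.0, 0.0) - 65.0))

# Process-window OPC: minimize the worst corner instead.
pw_bias = min(np.linspace(-10, 10, 401), key=worst_case_epe)

print(f"nominal OPC bias {nominal_bias:+.2f} nm -> "
      f"worst-corner EPE {worst_case_epe(nominal_bias):.2f} nm")
print(f"process-window OPC bias {pw_bias:+.2f} nm -> "
      f"worst-corner EPE {worst_case_epe(pw_bias):.2f} nm")
```

With these toy numbers the process-window correction trades a small nominal offset for a noticeably smaller worst-corner error, which is the fidelity gain the abstract refers to.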
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Ching-Heng Wang; Qingwei Liu; Liguo Zhang; Chi-Yuan Hung
The most important task of the microlithography process is to make the manufacturable process latitude/window, including dose latitude and depth of focus, as wide as possible. Thus, performing a thorough source optimization during process development becomes more critical as we move to high-NA technology nodes. Furthermore, optical proximity correction (OPC) is commonly used to provide a common process window for structures that would otherwise have no overlapping windows. But as the critical dimension of IC designs shrinks dramatically, the flexibility for applying OPC also decreases, so a robust microlithography process should also be OPC-friendly. This paper demonstrates our work on illumination optimization during process development. The Calibre ILO (Illumination Optimization) tool was used to perform the illumination optimization and to provide plots of DOF versus various parametric illumination settings. These were used to screen the illumination settings for the one with optimum process margins. The resulting illumination conditions were then implemented and analyzed at the wafer level on our 90/65nm critical layers, such as Active, Poly, Contact and Metal. In conclusion, based on these results, a summary is provided highlighting how OPC can benefit from proper illumination optimization.
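As a hedged illustration of the screening step only (this is not the Calibre ILO interface, and the DOF metric below is a deliberately crude stand-in for a lithography simulation), the sketch sweeps annular illumination settings and keeps the one with the largest predicted depth of focus.

```python
import numpy as np

def estimated_dof(sigma_out, sigma_in, pitch=130.0, wavelength=193.0, na=1.2):
    """Toy DOF metric (um) for an annular source on a given line/space pitch.
    A real flow would run the simulator for each candidate setting."""
    # Annulus center that places the 0th and 1st diffraction orders
    # symmetrically in the pupil for this pitch.
    ideal_center = wavelength / (2.0 * na * pitch)
    mismatch = abs(0.5 * (sigma_out + sigma_in) - ideal_center)
    width_penalty = 0.3 * (sigma_out - sigma_in)   # very wide annulus washes out contrast
    return max(0.0, 0.40 - 0.5 * mismatch - width_penalty)

best = None
for sigma_out in np.arange(0.70, 0.96, 0.05):
    for sigma_in in np.arange(0.40, sigma_out - 0.15, 0.05):
        dof = estimated_dof(sigma_out, sigma_in)
        if best is None or dof > best[0]:
            best = (dof, sigma_out, sigma_in)

dof, so, si = best
print(f"best annular setting: sigma_out={so:.2f}, sigma_in={si:.2f}, DOF~{dof:.2f} um")
```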
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Ching-Heng Wang; Qingwei Liu; Liguo Zhang
We have now reached the 45nm technology node, which should be the first generation of immersion microlithography. With the brand-new lithography tools, many optical effects that could be ignored at the 90nm and 65nm nodes now have a significant impact on the pattern transfer from design to silicon. Among these effects, one that requires attention is the impact of the mask pellicle on critical dimension variation. With the introduction of hyper-NA lithography tools, the assumption that light transmits through the mask pellicle vertically is no longer a good approximation, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model. We show that, considering the extremely tight critical dimension control spec of the 45nm node, taking the mask pellicle effect into the OPC model has become necessary.
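The geometric origin of the effect can be sketched quickly (the refractive index, thickness and angles below are illustrative values, not a real pellicle specification): at hyper-NA, rays traverse the pellicle at large angles, so the optical path relative to normal incidence varies across the pupil, which slightly blurs the image.

```python
import math

# Illustrative pellicle parameters (assumed, not an actual pellicle stack).
n_pellicle = 1.4        # refractive index of the membrane
thickness_nm = 800.0    # membrane thickness
wavelength_nm = 193.0
na = 1.35               # hyper-NA immersion tool, wafer side

def optical_path_nm(sin_theta_air):
    """Optical path through the pellicle for a ray whose direction in air
    has the given sine of the incidence angle (Snell's law inside the film)."""
    sin_in_film = sin_theta_air / n_pellicle
    cos_in_film = math.sqrt(1.0 - sin_in_film ** 2)
    return n_pellicle * thickness_nm / cos_in_film

# Compare the chief ray (normal incidence) with the most oblique ray the
# pupil admits on the mask side of a 4x reduction system.
sin_max_mask_side = na / 4.0
delta = optical_path_nm(sin_max_mask_side) - optical_path_nm(0.0)
print(f"path difference across the pupil ~ {delta:.1f} nm "
      f"({delta / wavelength_nm:.2f} waves)")
```

Even this crude estimate yields a path spread of a noticeable fraction of a wave, which is why the effect can no longer be folded silently into the nominal OPC model at 45nm.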
Proceedings of SPIE, the International Society for Optical Engineering | 2007
Ching-Heng Wang; Qingwei Liu; Liguo Zhang
As design rules shrink rapidly, robust full-chip optical proximity correction (OPC) needs longer run times due to the increasing pattern density. Furthermore, achieving a perfect OPC control recipe becomes more difficult, because the critical dimension of the design features is deep sub-wavelength and there is only limited room for correction. Usually very complicated fragmentation commands must be developed to handle the shrinking designs, and these can become arbitrarily complicated. Even after debugging a sophisticated fragmentation script, there is no guarantee that the script is universal for all kinds of designs. So when a hotspot is found after applying OPC to a certain design, the only option has been to modify the fragmentation script and re-apply OPC to that design. But considering the increasing time needed for full-chip OPC nowadays, re-applying OPC significantly prolongs the tape-out time. We demonstrate an approach through which we can automatically fix some simple hotspots, such as pinching and bridging, so that re-running OPC on the full chip is no longer necessary. However, this work is only an early study of the automatic fixing of post-OPC hotspots; there is still a long way to go toward a complete solution to this issue.
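A minimal sketch of the local-repair idea (the hotspot classes, repair window and fix rule are hypothetical simplifications): each detected hotspot is classified as a pinch or a bridge, and only the fragments inside a small repair window are nudged and re-verified, so the rest of the full-chip OPC result is left untouched.

```python
from dataclasses import dataclass

@dataclass
class Hotspot:
    x: float            # location (um)
    y: float
    kind: str           # "pinch" or "bridge"
    severity_nm: float  # how far the simulated contour violates the check

@dataclass
class Fragment:
    x: float
    y: float
    bias_nm: float      # current OPC edge bias

def repair_hotspot(hotspot, fragments, window_um=0.5):
    """Nudge only the fragments near the hotspot; the rest of the chip's
    OPC result stays untouched, so no full-chip rerun is needed."""
    sign = +1.0 if hotspot.kind == "pinch" else -1.0   # widen pinches, shrink bridges
    nudge_nm = sign * min(2.0, 0.5 * hotspot.severity_nm)
    for frag in fragments:
        if abs(frag.x - hotspot.x) < window_um and abs(frag.y - hotspot.y) < window_um:
            frag.bias_nm += nudge_nm

# Hypothetical post-OPC verification output.
hotspots = [Hotspot(12.4, 3.1, "pinch", 4.0), Hotspot(40.2, 7.8, "bridge", 2.5)]
fragments = [Fragment(12.3, 3.0, 2.0), Fragment(12.5, 3.2, 1.5), Fragment(40.1, 7.9, -1.0)]

for hs in hotspots:
    repair_hotspot(hs, fragments)
    # A real flow would re-simulate just the repair window here and iterate
    # until the pinch/bridge check passes or an escalation limit is hit.

print([round(f.bias_nm, 1) for f in fragments])
```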
Proceedings of SPIE | 2007
Ching-Heng Wang; Qingwei Liu; Liguo Zhang
All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains the information that can be used for process predictions and analysis of a given process. However, some physics of the process remains unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still brings inaccurate measurements into the model fitting process, which makes fitting more time consuming and can also cause loss of convergence and stability. Image quality is one of the most worrisome obstacles faced by next-generation lithography. Nowadays, considerable effort is devoted to enhancing contrast, as well as to understanding its impact on devices. It is a persistent problem for 193nm microlithography and will carry us for at least three generations, culminating with immersion lithography. This work weights different wafer data points with a weighting function. The weighting function depends on the normalized image log slope (NILS), which reflects the image quality. Using this approach, we can filter out erroneous process information and make the OPC model more accurate. Calibre Workbench is the platform used in this study; it has been proven to perform well for 0.13um, 90nm and 65nm production and development model setups. Leveraging its automatic optical-tuning function, we determined the best weighting approach to achieve the most efficient and convergent tuning flow.
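The NILS-based weighting can be sketched in the same spirit as the deviation-based weighting above (the aerial-image profiles, printing threshold and NILS-squared weighting rule are illustrative assumptions, not the actual calibration setup): gauges measured where the image log slope is steep are trusted more than those at low-contrast sites.

```python
import numpy as np

# Hypothetical aerial-image intensity profiles across three measurement sites.
x_nm = np.linspace(-40.0, 40.0, 161)
profiles = {
    "dense_line": 0.05 + 0.90 * np.exp(-(x_nm / 22.0) ** 2),
    "semi_dense": 0.10 + 0.70 * np.exp(-(x_nm / 22.0) ** 2),
    "iso_line":   0.20 + 0.50 * np.exp(-(x_nm / 22.0) ** 2),
}
threshold = 0.30       # intensity at which the resist edge prints (toy value)
cd_nm = 40.0           # nominal feature size used to normalize the slope

def nils(profile):
    """Normalized image log slope evaluated at the right-hand printing edge."""
    half = len(profile) // 2
    idx = half + np.argmin(np.abs(profile[half:] - threshold))
    slope = np.gradient(profile, x_nm)[idx]
    return abs(slope) / threshold * cd_nm

weights = {name: nils(p) ** 2 for name, p in profiles.items()}
total = sum(weights.values())
for name in profiles:
    print(f"{name:12s} NILS={nils(profiles[name]):.2f}  "
          f"relative weight={weights[name] / total:.2f}")
```

In this toy example the low-contrast isolated line ends up contributing only a small fraction of the total weight, which is the filtering effect the abstract describes.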
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Ching-Heng Wang; Qingwei Liu; Liguo Zhang; Gensheng Gao; Travis Brist; Tom Donnelly; Shumay Shang
In our continued pursuit of keeping up with Moore's Law, we are encountering lower and lower k1 factors, resulting in increased sensitivity to lithography/OPC-unfriendly designs, mask rule constraints, and OPC setup file errors such as bad fragmentation, sub-optimal site placement, and poor convergence during the OPC application process. While the process has become ever more sensitive and more vulnerable to yield loss, the costs associated with such losses continue to increase in the form of higher reticle costs, longer cycle times for learning, increased costs associated with the lithography tools, and, most importantly, lost revenue due to bringing a product to market late. This has resulted in an increased need for virtual manufacturing tools that can accurately simulate the lithography process and detect failures and weak points in the layout, so that they can be resolved before committing a layout to silicon and/or identified for inline monitoring during the wafer manufacturing process. This paper outlines a verification flow employed in a high-volume manufacturing environment to identify, prevent, monitor and resolve critical lithography failures and yield inhibitors, thereby minimizing how much we succumb to the aforementioned semiconductor manufacturing vulnerabilities.
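A hedged sketch of such a verification loop (the check names, limits and severity ranking are illustrative, not the production recipe): simulated contours are compared against a few simple checks, hits are ranked, and the worst sites are routed either to layout/OPC fixes before tape-out or to an inline monitoring list.

```python
from dataclasses import dataclass

@dataclass
class SimulatedSite:
    name: str
    min_width_nm: float    # narrowest simulated line width at the site
    min_space_nm: float    # narrowest simulated space at the site
    cd_error_nm: float     # simulated CD minus target CD

# Illustrative check limits; production limits depend on layer and node.
CHECKS = {
    "pinch":  lambda s: s.min_width_nm < 40.0,
    "bridge": lambda s: s.min_space_nm < 40.0,
    "cd_oos": lambda s: abs(s.cd_error_nm) > 5.0,
}

def classify(sites):
    """Split simulated sites into must-fix failures and monitor-only warnings."""
    failures, monitors = [], []
    for site in sites:
        hits = [name for name, check in CHECKS.items() if check(site)]
        if not hits:
            continue
        severity = max(40.0 - site.min_width_nm, 40.0 - site.min_space_nm,
                       abs(site.cd_error_nm) - 5.0)
        (failures if severity > 3.0 else monitors).append((site.name, hits, severity))
    return sorted(failures, key=lambda r: -r[2]), sorted(monitors, key=lambda r: -r[2])

# Hypothetical output of a full-chip lithography simulation.
sites = [
    SimulatedSite("poly_sram_corner", 33.0, 52.0, 2.0),
    SimulatedSite("metal1_jog",       48.0, 38.5, 1.0),
    SimulatedSite("contact_edge",     55.0, 60.0, 6.5),
]
failures, monitors = classify(sites)
print("fix before tape-out:", failures)
print("add to inline monitor list:", monitors)
```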
Storage and Retrieval for Image and Video Databases | 2005
Chi-Yuan Hung; Ching-Heng Wang; Cliff Ma; Gary G Zhang
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Ching-Heng Wang; Qingwei Liu; Liguo Zhang; Chi-Yuan Hung