Publication


Featured research published by Won-Tai Ki.


22nd Annual BACUS Symposium on Photomask Technology | 2002

Manufacturability evaluation of model-based OPC masks

Sung-Hoon Jang; Sonny Y. Zinn; Won-Tai Ki; Ji-Hyun Choi; Chan-Uk Jeon; Seong-Woon Choi; Hee-Sun Yoon; Jung-Min Sohn; Yong-Ho Oh; Jai-Cheol Lee; Sungwoo Lim

A systematic method for model-based optical proximity correction is presented. It is called the optical proximity effect reducing algorithm (OPERA) and has been implemented in TOPO, an in-house program for optical lithography simulations. Comparing simulation results as well as experimental results, we found that OPERA is suitable not only for shape restoration but also for resolution enhancement. However, the resulting optimized patterns have a high degree of complexity, and this brought up a number of issues for mask manufacturing. First, data volume and exposure time were dramatically increased for conventional e-beam file formats. This was solved by using the MODE6 format, which preserves data hierarchy. Second, due to excessive shot divisions, a variable-shaped beam machine could not finish the exposure process; a raster-scan beam machine successfully finished the exposure. Finally, a die-to-die inspection was performed, but many false defects that do not affect wafer printing were detected. This will be solved by a new type of tool that inspects a mask by evaluating its aerial image.
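
The abstract above follows the usual model-based OPC flow: simulate the printed image, measure the edge placement error against the target, and move mask edge segments to cancel it. A minimal 1D illustration of that loop is given below; the simulate() function is a hypothetical stand-in for an aerial-image model such as TOPO, not the actual OPERA implementation.

    # Hedged 1D sketch of a model-based OPC iteration loop (illustrative only).
    import numpy as np

    def simulate(mask_edges):
        # Hypothetical proximity model: blur edge positions toward their
        # neighbours to mimic optical proximity effects.
        return mask_edges + 0.3 * (np.roll(mask_edges, 1) - mask_edges)

    def opc_correct(target_edges, iterations=10, gain=0.7):
        mask_edges = target_edges.copy()
        for _ in range(iterations):
            printed = simulate(mask_edges)      # predicted wafer edge positions
            epe = printed - target_edges        # edge placement error
            mask_edges -= gain * epe            # shift mask edges against the error
        return mask_edges

    target = np.array([0.0, 100.0, 200.0, 300.0])   # target edge positions in nm
    print(opc_correct(target))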


Japanese Journal of Applied Physics | 2002

Flare in Microlithographic Exposure Tools

Tae Moon Jeong; Sung-Woon Choi; Jong Rak Park; Won-Tai Ki; Jung-Min Sohn; Sung-Woo Lee; Hyun-Jae Kang; Sang-Gyun Woo; Woo-Sung Han

To achieve the high level of photolithographic technology needed for current microelectronic devices, it is strongly required to consider emerging key parameters that were not critical drawbacks in previous photolithographic techniques. Flare existing in optical elements is one example of such emerging key parameters. In this paper, undesirable linewidth variation due to flare and a method for measuring flare are described. Various phenomena related to linewidth variation due to flare are experimentally observed and theoretically analyzed. Finally, a photomask linewidth correction is introduced to compensate for this undesirable linewidth variation due to flare.
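
The correction described above can be thought of as simple arithmetic: flare adds a background dose roughly proportional to the surrounding open area, the dose latitude converts that dose offset into a wafer CD shift, and the mask linewidth is biased to cancel it. The sketch below walks through that arithmetic with illustrative numbers; the flare level, dose latitude, MEEF, and 4x reduction ratio are assumptions, not figures from the paper.

    # Back-of-the-envelope flare compensation arithmetic (illustrative values only).
    flare_level = 0.04        # assumed: 4% of open-frame intensity scattered into dark areas
    local_open_ratio = 0.5    # assumed: open-area density around the feature of interest
    dose_to_cd = 2.0          # assumed: nm of wafer CD change per 1% effective-dose change
    meef = 4.0                # assumed: wafer CD change per unit mask CD change (1x scale)

    dose_error_pct = 100.0 * flare_level * local_open_ratio    # effective over-exposure, %
    wafer_cd_shift = dose_to_cd * dose_error_pct               # predicted wafer CD shift, nm
    mask_cd_correction = -wafer_cd_shift * 4.0 / meef          # 4x-mask bias that cancels it, nm

    print(dose_error_pct, wafer_cd_shift, mask_cd_correction)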


Optical Microlithography XVI | 2003

Improvement of shot uniformity on a wafer by controlling backside transmittance distribution of a photomask

Jong Rak Park; Soon Ho Kim; Gi-Sung Yeo; Sung-Woon Choi; Won-Tai Ki; Hee-Sun Yoon; Jung-Min Sohn

CD (critical dimension) uniformity on a wafer is affected by several factors such as the resist coating, bake, development, and etch processes, scanner performance, and photomask CD uniformity. In particular, shot uniformity, or in-field CD uniformity, is strongly dependent on the scanner and the photomask. CD error of a photomask and imaging error of a scanner lead to nonuniformity of the in-field linewidth distribution. In this paper we propose and demonstrate a shot uniformity improvement method. The method utilizes the original shot uniformity map and the dose latitude to determine the distribution of illumination intensity drop suitable for correcting CD error on the wafer. The distribution of illumination intensity drop is realized by controlling the pattern density of a contact-hole pattern with 180° phase on the backside of the photomask. We applied this technique to several masks and found that global CD uniformity could be significantly improved by the method.
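
The correction flow described above reduces to two conversions: the CD error divided by the dose latitude gives the required illumination intensity drop at each field position, and that drop divided by the scattering efficiency of the backside pattern gives the required density of 180° phase holes. A toy version of that mapping is shown below; the dose latitude and density-to-drop coefficient are assumed values, not the paper's measurements.

    # Toy mapping from an in-field CD error map to a backside pattern density map.
    import numpy as np

    cd_error_map = np.array([[1.5, 0.5],
                             [2.0, 0.0]])    # nm, illustrative in-field CD error map
    dose_latitude = 1.2                      # assumed: nm CD change per 1% dose change
    drop_per_density = 0.5                   # assumed: % intensity drop per 1% hole density

    intensity_drop = cd_error_map / dose_latitude     # % dose reduction needed at each point
    intensity_drop -= intensity_drop.min()            # backside pattern can only remove light
    backside_density = intensity_drop / drop_per_density

    print(backside_density)                  # % area to cover with 180-degree phase holes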


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Optimization of mask manufacturing rule check constraint for model based assist feature generation

Seongbo Shim; Young-Chang Kim; Yong-Jin Chun; Seong-Woo Lee; Suk-joo Lee; Seong-Woon Choi; Woo-Sung Han; Seong-hoon Chang; Seok-chan Yoon; Hee-Bom Kim; Won-Tai Ki; Sang-Gyun Woo; Hangu Cho

SRAF (sub-resolution assist feature) generation has become a popular resolution enhancement technique in photolithography for sub-65nm nodes and beyond. It helps to increase the process window, and such approaches are sometimes called ILT (inverse lithography technology). Many studies have been presented on how to determine the best positions of SRAFs and how to optimize their sizes. According to these reports, the generation of SRAFs can be formulated as a constrained optimization problem, where the constraints are side-lobe suppression and the allowable minimum feature size, or MRC (mask manufacturing rule check). As is well known, a bigger SRAF contributes more to the main feature but is more susceptible to side-lobe issues. Thus, we ultimately have no choice but to trade off the advantages of the ideally optimized mask, which contains very complicated SRAF patterns, against a layout to which MRC has been applied. This dilemma can be resolved by simultaneously using a lower dose (high threshold) and cleaning up with a smaller MRC. This solution widens the room between the threshold (side-lobe limitation) and the MRC constraint (minimum feature limitation). In order to use a smaller MRC restriction without running into mask writing and inspection issues, it is also appropriate to identify the exact mask writing limitation and find smart mask constraints that well reflect mask manufacturability and e-beam lithography characteristics. In this article, we discuss two main topics on mask optimization with SRAFs. The first topic is experimental work to find how mask writing ability behaves in terms of several MRC parameters, and we propose a more effective MRC constraint for aggressive generation of SRAFs. The second topic is finding the optimum MRC condition in a practical case, a 3X nm node DRAM contact layer. In fact, it is not easy to capture the mask writing capability for very complicated real SRAF patterns using the current MRC constraint based only on width and space restrictions. The test mask for this experimental work includes not only typical split patterns but also real device patterns that were generated by an in-house model-based assist feature generation tool. We analyzed the mask writing results for typical patterns, compared them with simulation results, and compared wafer results for the real device patterns.
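
The MRC constraints discussed above are, at their core, minimum width and space limits applied to the SRAF geometry after optimization. The fragment below is a minimal 1D clean-up pass illustrating that step only; real MRC on 2D mask polygons is far more involved, and the limit values are placeholders rather than the constraints proposed in the paper.

    # Minimal 1D MRC clean-up pass for SRAF segments (placeholder limits).
    MIN_WIDTH = 32.0   # assumed minimum SRAF width at mask scale, nm
    MIN_SPACE = 32.0   # assumed minimum space at mask scale, nm

    def mrc_cleanup(srafs):
        # srafs: list of (start, end) intervals along one cut line, sorted by start
        kept = []
        for start, end in srafs:
            if end - start < MIN_WIDTH:
                continue                          # too narrow to write/inspect: drop it
            if kept and start - kept[-1][1] < MIN_SPACE:
                kept[-1] = (kept[-1][0], end)     # too close to the previous SRAF: merge
            else:
                kept.append((start, end))
        return kept

    print(mrc_cleanup([(0, 20), (50, 100), (120, 200)]))   # -> [(50, 200)]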


Proceedings of SPIE, the International Society for Optical Engineering | 2005

Optimized distributed computing environment for mask data preparation

Byoung-Sup Ahn; Ju-Mi Bang; Min-Kyu Ji; Sun Kang; Sung-Hoon Jang; Yo-Han Choi; Won-Tai Ki; Seong-Woon Choi; Woo-Sung Han

As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) has severely increased, and OPC is applied to non-critical layers as well. The transformation of designed pattern data by OPC operations increases data complexity, which causes runtime overhead in following steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploit the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two things limit the benefit of distributed computing in MDP. First, running every MDP job sequentially with the maximum number of available CPUs is not efficient compared with parallel MDP job execution, due to the input data characteristics. Second, the runtime enhancement relative to the input cost is not sufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that increases the uptime of a distributed computing system by assigning an appropriate number of CPUs to each input design. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
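
The load-balancing idea above is to size each job's CPU allocation to the job rather than handing every job the maximum, since fracturing tools stop scaling past a point. A toy allocator in that spirit is sketched below; the cost estimates and the scalability cap are illustrative assumptions, not the parameters tuned in the paper.

    # Toy CPU allocator: CPUs proportional to estimated job cost, capped by tool scalability.
    def assign_cpus(jobs, total_cpus, max_useful_cpus=8):
        # jobs: {job_name: estimated_cost}; returns {job_name: cpu_count}
        total_cost = sum(jobs.values())
        alloc = {}
        for name, cost in jobs.items():
            share = max(1, round(total_cpus * cost / total_cost))
            alloc[name] = min(share, max_useful_cpus)   # fracturing stops scaling beyond this
        return alloc

    print(assign_cpus({"metal1": 40.0, "via1": 10.0, "poly": 25.0}, total_cpus=16))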


Photomask and Next-Generation Lithography Mask Technology XII | 2005

Simulation of resist heating effect with e-beam lithography using distributed processing (DP)

Won-Tai Ki; Byung-Sup Ahn; Ji-Soong Park; Seung-Woon Choi; Sangback Ma; Woo-Sung Han

As wafer design rules tighten to sub-100nm, the specification for mask CD uniformity is steeply tightened as well. For instance, according to the 2004 ITRS Roadmap update, the DRAM CD uniformity specification requires less than 7nm for the 80nm node in 2005. In order to satisfy this specification, it is important to analyze the various factors that cause CD non-uniformity on the mask, such as e-beam machine error, the heating effect, the fogging effect, the proximity effect, and process errors. In this paper, a simulation method is introduced to calculate the local and global heating effect by applying DP (distributed processing). First, experiments were performed to observe heating effects on mask CD uniformity. For the ZEP process with 50KeV exposure, the CD error caused by the heating effect amounted to 45nm in the worst case. Second, the heating effect was simulated using DP. Recently, high accuracy has been required of most simulators, but this inevitably increases calculation time. To address this problem, DP has been adopted in many software packages. In this paper, the MPI (Message Passing Interface) library was applied to simulate the heating effect. Finally, the experimental and simulated results were compared. The simulation results could explain the CD errors observed in our experiment, and 2D simulation was sufficient to predict the CD errors caused by the resist heating effect.
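
As a rough illustration of how such a heating calculation can be split across machines with MPI, the sketch below lets each rank accumulate the temperature-rise contribution of its share of the e-beam shots and reduces the partial fields on rank 0 (run with mpirun). The Gaussian thermal kernel and the shot list are placeholder assumptions, not the model used in the paper.

    # Schematic mpi4py sketch of a distributed heating-effect accumulation.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    grid = np.zeros((64, 64))                 # temperature-rise field over a mask region
    ys, xs = np.mgrid[0:64, 0:64]
    shots = [((8 * i) % 64, (5 * i) % 64, 1.0) for i in range(1000)]   # (x, y, energy), made up

    for x, y, energy in shots[rank::size]:    # round-robin split of the shots across ranks
        grid += energy * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * 6.0 ** 2))

    total = comm.reduce(grid, op=MPI.SUM, root=0)   # combine the partial fields
    if rank == 0:
        print("peak temperature rise (arbitrary units):", total.max())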


Photomask and Next-Generation Lithography Mask Technology Conference | 2000

Dose latitude dependency on resist contrast in e-beam mask lithography

Byung-Cheol Cha; Seong-Yong Moon; Won-Tai Ki; Seung-Hune Yang; Seong-Woon Choi; Woo-Sung Han; Hee-Sun Yoon; Jung-Min Sohn

In the mask-making process with e-beam lithography, the process capability is usually affected by the exposure profile, resist contrast, and development process. Dose latitude depends significantly on these three parameters. In this work, the dose latitude of resists with different contrasts has been experimentally studied as a function of linewidth, dose, beam size, and over-development magnitude, using commercial PBS and ZEP 7000 resists on a photomask with 10 keV exposure. It was found that the high-contrast ZEP 7000 resist shows lower dose latitude and more sensitivity to variations of linewidth, dose, and beam size, but not to over-development magnitude, owing to its relatively longer development time.


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Distributed processing (DP) based e-beam lithography simulation with long range correction algorithm in e-beam machine

Won-Tai Ki; Ji-Hyeon Choi; Byung-Gook Kim; Sang-Gyun Woo; Han-Ku Cho

As wafer-process design rules shrink below the 50nm node, the specification for CDs on a mask becomes more tightened. Therefore, tighter and more accurate e-beam lithography simulation is highly required these days. In reality, however, most e-beam simulation cases involve a trade-off between accuracy and simulation speed. Moreover, the need for full-chip-based simulation has been increasing in order to estimate mask CDs more accurately under real process conditions. Without considering the long-range correction algorithms in the e-beam machine, such as fogging effect and loading effect correction, it would be impossible and meaningless to pursue full-chip-based simulation. In this paper, we introduce a method to overcome these obstacles of e-beam simulation. Our in-house e-beam simulator, ELIS (E-beam LIthography Simulator), has been upgraded to solve these problems. First, a DP (distributed processing) strategy was applied to improve calculation speed. Second, the long-range correction algorithm of the e-beam machine was also applied to compute the exposure intensity on a full-chip (mask) basis. Finally, ELIS-DP was evaluated for its ability to predict and analyze CDs on a full-chip basis.
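
The long-range part of such a correction (fogging, and similarly loading) is essentially a convolution of the pattern-density map with a kernel whose range is on the millimetre scale, which is why FFTs and distributed processing make it tractable at full-chip scale. A minimal FFT-based fogging-background sketch follows; the kernel width and weight are placeholder assumptions, not the machine's correction parameters.

    # FFT-based sketch of a fogging background as a wide Gaussian convolution of pattern density.
    import numpy as np

    def fogging_background(density, sigma_px=50.0, weight=0.1):
        ny, nx = density.shape
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        # Gaussian kernel expressed in the frequency domain (unit DC gain)
        kernel_ft = np.exp(-2.0 * (np.pi ** 2) * (sigma_px ** 2) * (fx ** 2 + fy ** 2))
        return weight * np.fft.ifft2(np.fft.fft2(density) * kernel_ft).real

    density = np.zeros((256, 256))
    density[64:192, 64:192] = 0.5              # a dense block in the middle of the chip
    extra_dose = fogging_background(density)
    print("fogging dose at centre vs. corner:", extra_dose[128, 128], extra_dose[0, 0])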


Proceedings of SPIE | 2008

Predicting Conversion Time of Circuit Design File by Artificial Neural Networks

Sung-Hoon Jang; Jee-Hyong Lee; Byoung-Sup Ahn; Won-Tai Ki; Ji-Hyeon Choi; Sang-Gyun Woo; Han-Ku Cho

GDSII is a data format for circuit design files used in semiconductor production, and it is also used as a transfer format for fabricating photomasks. As design rules shrink and RET (resolution enhancement technology) becomes more complicated, the time to convert GDSII into a mask data format has increased, which affects the mask production cycle. Photomask shops all over the world widely use clusters of computers connected through a network, i.e., a distributed computing method, to reduce the conversion time. Commonly, the computing resources for conversion are assigned based on the input file size. However, our experiments showed that the input file size is a poor predictor of computing resource usage. In this paper, we propose an artificial-intelligence methodology that considers the properties of the GDSII file in order to handle circuit design files more efficiently. The conversion time can be optimized by controlling the hardware resources for data conversion, provided the conversion time can be predicted by analyzing the design data. Neural networks are used to predict the conversion time in this research. In this paper, the application of neural networks to time prediction is discussed, and experimental results are shown in comparison with statistical-model-based approaches.
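
As a concrete, though purely illustrative, version of the prediction step, the sketch below fits a small neural-network regressor from a few GDSII file properties to conversion time. The feature set, the toy data, and the use of scikit-learn are assumptions made for the example; they are not the paper's model, features, or measurements.

    # Illustrative neural-network regression from GDSII properties to conversion time.
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # features: [file size (GB), polygon count (millions), hierarchy depth] -- made-up data
    X = [[0.5, 10, 3], [1.0, 25, 4], [2.0, 80, 2], [4.0, 150, 5], [8.0, 400, 3]]
    y = [12.0, 30.0, 95.0, 180.0, 420.0]       # conversion time in minutes (made up)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X, y)
    print("predicted minutes for a 3 GB, 120M-polygon, depth-4 file:",
          model.predict([[3.0, 120, 4]])[0])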


Proceedings of SPIE, the International Society for Optical Engineering | 2006

Reduction of MDP time through the improvement of verification method

Young-hwa Noh; Sung-Hoon Jang; Won-Tai Ki; Ji-Hyeon Choi; Seong-Woon Choi; Woo-Sung Han

Low-k1 lithography produces large volumes of mask data and more complex optical proximity effects. It puts a heavy burden on the MDP flow and affects turnaround time (TAT). To solve this problem, the DP (distributed processing) method has been introduced. Even though DP is a very powerful tool for reducing MDP time, there can still be unexpected pattern drop issues. In order to deal with this issue, a verification step was added to the MDP flow. The present verification method is a Boolean operation on two machine data sets converted in the same way. However, this verification method has two shortcomings. First, it is not able to detect identical errors caused by the same software bug. Second, it requires double the conversion time. A new verification method should be much faster and more accurate than the current one. In this paper, the new verification method is discussed, and experimental results using the new verification method are shown in comparison with the old verification method.
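
The baseline check criticised above is a straightforward XOR between two independently converted layouts: any non-empty difference flags a dropped or distorted pattern. A minimal illustration with shapely is given below; it is only a sketch of the Boolean comparison, since production MDP verification runs on fractured machine data rather than shapely geometries.

    # Minimal XOR-style comparison of two converted layouts (illustrative only).
    from shapely.geometry import box
    from shapely.ops import unary_union

    layout_a = unary_union([box(0, 0, 10, 10), box(20, 0, 30, 10)])
    layout_b = unary_union([box(0, 0, 10, 10)])        # second conversion dropped a pattern

    diff = layout_a.symmetric_difference(layout_b)     # Boolean XOR of the two conversions
    if not diff.is_empty:
        print("pattern mismatch detected, area =", diff.area)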

