
Publications


Featured research published by Michael L. Rieger.


14th Annual BACUS Symposium on Photomask Technology and Management | 1994

Optimizing proximity correction for wafer fabrication processes

John P. Stirniman; Michael L. Rieger

A key requirement for any proximity correction method is the ability to accurately predict proximity effects for any given circuit configuration. In this paper we present a methodology for characterizing proximity effects from measurements taken on a processed wafer. The characterization determines what types of effects are present and which effects can be corrected, and it quantifies behavior parameters for a generalized proximity error model.
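As a hedged illustration of the characterization step, the sketch below fits per-range behavior parameters of a generalized proximity error model to wafer measurements by least squares. The Gaussian-kernel basis, the function names, and the sample numbers are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch of fitting a generalized proximity-error model, assuming
# measured CD deviations and a Gaussian-kernel basis; names are hypothetical.
import numpy as np

def gaussian_weight(distance_nm: np.ndarray, sigma_nm: float) -> np.ndarray:
    """Density weight contributed by pattern at a given distance."""
    return np.exp(-(distance_nm ** 2) / (2.0 * sigma_nm ** 2))

def fit_proximity_model(neighbor_distances, cd_errors_nm,
                        sigmas_nm=(100.0, 400.0, 1600.0)):
    """Fit per-kernel coefficients by least squares.

    neighbor_distances: list of arrays, one per test structure, giving
                        distances (nm) to neighboring edges on the wafer.
    cd_errors_nm:       measured CD deviation (wafer minus target) per structure.
    """
    # Each row: Gaussian-weighted neighborhood density at several ranges.
    basis = np.array([[gaussian_weight(d, s).sum() for s in sigmas_nm]
                      for d in neighbor_distances])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(cd_errors_nm), rcond=None)
    return coeffs  # one behavior parameter per interaction range

# Example: three test structures with synthetic measurements.
dists = [np.array([200.0, 350.0]), np.array([120.0]), np.array([900.0, 1100.0])]
print(fit_proximity_model(dists, cd_errors_nm=[4.2, 6.8, 1.1]))
```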


Design Automation Conference | 2001

Layout design methodologies for sub-wavelength manufacturing

Michael L. Rieger; Jeffrey P. Mayhew; Sridhar Panchapakesan

In this paper, we describe new types of layout design constraints needed to effectively leverage advanced optical wafer lithography techniques. Most of these constraints are dictated by the physics of advanced lithography processes, while other constraints are imposed by new photomask techniques. Among the methods discussed are 1) phase shift mask (PSM) lithography, in which phase information is placed on the photomask in combination with conventional clear and dark information; 2) optical proximity correction (OPC), where predictable distortions in feature geometry are corrected by putting an inverse distortion on the mask; 3) off-axis illumination optics, which improve resolution of some configurations at the expense of others; and 4) use of non-resolving assist features that improve the printing of neighboring structures.
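The following minimal sketch illustrates technique 2, the rule-based flavor of OPC, where a table maps neighbor spacing to an inverse edge bias. The rule table, values, and function name are hypothetical, not a production rule deck.

```python
# Minimal sketch of rule-based OPC edge biasing: apply an inverse distortion
# sized by the gap to the nearest neighbor. Bias values are invented.
def opc_edge_bias(gap_to_neighbor_nm: float) -> float:
    """Return the mask-edge displacement (nm) that pre-compensates the
    predicted print distortion for a given neighbor spacing."""
    # Denser neighborhoods print smaller, so isolated edges get less bias.
    rule_table = [(200.0, 12.0), (400.0, 8.0), (800.0, 4.0)]
    for max_gap_nm, bias_nm in rule_table:
        if gap_to_neighbor_nm <= max_gap_nm:
            return bias_nm
    return 2.0  # effectively isolated

for gap in (150.0, 350.0, 1000.0):
    print(f"gap {gap:6.0f} nm -> bias {opc_edge_bias(gap):4.1f} nm")
```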


21st Annual BACUS Symposium on Photomask Technology | 2002

OPC strategies to minimize mask cost and writing time

Michael L. Rieger; Jeffrey P. Mayhew; Jiangwei Li; James P. Shiely

As k1 factors decline, the optical proximity correction (OPC) treatments required to maintain dimensional tolerances involve increasingly complex correction shapes. This translates to more detailed, larger mask pattern databases. Intricate, dense mask layouts increase mask writing time and cost. OPC deployment on a growing number of lithography layers compounds the issue, leading to skyrocketing mask-set costs and long turnaround times. ASIC manufacturing, where an average chip life cycle consumes fewer than 500 wafers, is particularly hard hit by elevated mask manufacturing costs. OPC increases mask data mainly by adding geometric detail (serifs, hammerheads, jogs, etc.) to the design layout. The vertex count, a measure of shape complexity, typically expands by a factor of 2 to 5, depending on OPC objectives and accuracy requirements. OPC can also increase hierarchic data file size through loss of hierarchic compression. In this paper we outline several alternatives for reducing OPC database size and for making OPC layout configurations friendlier to mask fabrication tools. An underlying assumption is that there is an optimum OPC treatment dictated by the behavior of the process, and that approximations to this ideal involve trade-offs with OPC accuracy. To whatever extent OPC effectiveness can be maintained while accuracy is compromised, mask complexity can be reduced.
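One generic way to trade accuracy for complexity, in the spirit of the paper's argument, is to prune correction vertices whose removal perturbs the contour by less than a stated tolerance. The sketch below is a hypothetical version of such a reduction, not the authors' method.

```python
# Minimal sketch of vertex-count reduction under an accuracy tolerance:
# drop a vertex if the simplified contour stays within tol_nm of it.
import math

def point_to_segment_nm(p, a, b):
    """Distance from point p to segment a-b (all (x, y) coordinates in nm)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def prune_vertices(polyline, tol_nm=2.0):
    """Greedy single pass: keep a vertex only if dropping it moves the
    contour by more than tol_nm."""
    kept = [polyline[0]]
    for i in range(1, len(polyline) - 1):
        if point_to_segment_nm(polyline[i], kept[-1], polyline[i + 1]) > tol_nm:
            kept.append(polyline[i])
    kept.append(polyline[-1])
    return kept

edge = [(0, 0), (10, 1), (20, 0), (30, 8), (40, 8)]
print(len(edge), "->", len(prune_vertices(edge, tol_nm=2.0)), "vertices")
```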


Optical Microlithography XVII | 2004

Classical control theory applied to OPC correction segment convergence

Benjamin D. Painter; Lawrence L. Melvin; Michael L. Rieger

Model-based optical proximity correction (OPC) is currently performed by segmenting patterns in a layout and iteratively applying corrections to these segments for a set number of iterations. This is an open-loop control methodology that relies on a finely tuned algorithm to arrive at a proper correction. A goal of this algorithm is to converge in the fewest iterations possible. As technology nodes shrink, different correction areas tend to correct at different rates, and these rates diverge with each process node. More iterations are then required to converge to a final OPC solution, which increases runtime and tapeout cost. The current remedy is to use proportional damping factors to bring different structure types to a solution. Classical control theory provides tools to optimize and speed up convergence in physical systems. Introducing derivative and integral control, while continuing to use proportional control, should reduce the number of iterations needed to converge to a final solution and optimize convergence for varied configurations.
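A minimal sketch of the idea follows, assuming a hypothetical simulator callback that returns edge-placement error in nm: proportional, integral, and derivative terms jointly drive one correction segment toward zero error. The gains and the toy process model are illustrative, not tuned values from the paper.

```python
# Minimal sketch of PID-controlled OPC segment convergence. simulate_epe is a
# hypothetical stand-in for a lithography simulator returning edge-placement
# error (nm) for the current cumulative mask-edge displacement.
def pid_converge(simulate_epe, kp=1.2, ki=0.05, kd=0.1, iters=10, tol_nm=0.25):
    """Drive one segment's edge-placement error toward zero."""
    displacement_nm = 0.0   # cumulative mask-edge move for this segment
    integral = 0.0
    prev_error = None
    for i in range(iters):
        error = simulate_epe(displacement_nm)   # wafer edge minus target
        if abs(error) < tol_nm:
            return displacement_nm, i           # converged
        integral += error
        derivative = 0.0 if prev_error is None else error - prev_error
        displacement_nm -= kp * error + ki * integral + kd * derivative
        prev_error = error
    return displacement_nm, iters

# Toy "process": the printed edge responds to mask moves with gain 0.7
# around an initial +8 nm error; converges in a few iterations.
toy = lambda d: 8.0 + 0.7 * d
print(pid_converge(toy))
```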


Optical Microlithography XVII | 2004

Advanced model formulations for optical and process proximity correction

Daniel F. Beale; James P. Shiely; Lawrence L. Melvin; Michael L. Rieger

As post-litho process effects account for an ever larger portion of CD error budgets, process simulation terms must be given more weight in the models used for proximity correction. It is well known that for sub-90 nm processes, resist and etch effects can no longer be treated as a small perturbation on a purely optical (aerial image) OPC model. The aerial image portion of the model must be combined in a more appropriate way with empirical terms describing resist and etch effects. The OPC engineer must choose a model form that links an optical component with a resist/etch component in a manner that balances efficiency, robustness, and fidelity to the aerial image, among other factors. No single way of connecting litho and etch models is ideal in all cases; the best form of linkage depends on the particular litho and etch process to be simulated. In this paper, we provide practical guidelines for linking the litho and etch components of a model, using a representative 70 nm process with a large etch bias as an example. This 70 nm case study, which is representative of many sub-90 nm processes that rely on etch to shrink critical features, presents special challenges for OPC modeling. For the process under study, lines were printed in resist at 120 nm, and the litho model was verified via resist SEM measurements taken at the resist edge. Note that a thresholded aerial image is not well characterized at a distance of 25 nm from the resist edge, which is roughly how far the edge moves back during the etch step. Although in some cases etch bias can be calculated from aerial image contrast, in general etch bias cannot be predicted from the aerial image because litho and etch are governed by different underlying physics. The model forms available for linking litho and etch range from the efficient “lumped” form, which combines litho and etch simulation in a single model, to a highly accurate two-stage form that separates the two components. In this paper we evaluate the following model forms for applicability to the 70 nm process under study: 1) an aerial image/load kernel combined (“lumped”) model form; 2) an aerial image/rule-offset “hybrid” model form; and 3) separate litho and etch models (two-stage correction).
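To make the distinction concrete, here is a minimal sketch of the two-stage form (option 3), assuming a hypothetical aerial-image array and an invented etch-bias table keyed to pattern density; it is not the paper's calibrated model. The point is only that etch is driven by its own empirical model rather than by the aerial image.

```python
# Minimal sketch of a two-stage litho + etch model: a thresholded aerial image
# places the resist edge, then a separate empirical etch model biases it.
import numpy as np

def resist_edge_nm(aerial_intensity, x_nm, threshold=0.3):
    """Stage 1: thresholded aerial image -> resist edge position."""
    i = np.argmax(aerial_intensity >= threshold)      # first threshold crossing
    return float(x_nm[i])

def etch_bias_nm(local_density):
    """Stage 2: empirical etch model keyed to long-range pattern density."""
    return float(np.interp(local_density, [0.1, 0.5, 0.9], [30.0, 25.0, 18.0]))

x = np.linspace(-200.0, 200.0, 401)                   # 1 nm grid
image = 1.0 / (1.0 + np.exp(-(x + 60.0) / 25.0))      # toy rising image edge

resist = resist_edge_nm(image, x)
final = resist + etch_bias_nm(local_density=0.4)      # etch moves the edge
print(f"resist edge {resist:.1f} nm, post-etch edge {final:.1f} nm")
```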


Journal of Micro/Nanolithography, MEMS, and MOEMS | 2012

Communication theory in optical lithography

Michael L. Rieger

In addition to the well-known wavelength challenges in optical lithography, sustained growth in total layout information density, a doubling every two years or so per Moore's Law, further strains pattern transfer capabilities and costs for advanced designs. Emerging lithography methods address these barriers by leveraging optical, materials, and process techniques that deliver more useful information to the wafer image on top of modest improvements to the spatial bandwidth of the lithography channel. Lithography is a communication channel specialized in delivering high-definition, high-density physical images to silicon wafers. Parallels can be drawn to communication theory, where key innovations have steadily improved the efficiency of digital communication within increasingly precious bandwidth. Several recent lithography process innovations are outlined in terms of communication-theory concepts, and their impact on economic trade-offs and implications for layout design styles are discussed.
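As a hedged aside on the analogy, the snippet below evaluates the Shannon capacity formula at fixed bandwidth, illustrating why gains must come from using the channel more efficiently once bandwidth stops growing; the numbers are illustrative, not lithography data.

```python
# Minimal sketch of the communication-theory analogy: with bandwidth B fixed,
# capacity C = B * log2(1 + SNR) grows only logarithmically with SNR, so most
# of the gain must come from smarter use of the channel.
import math

def shannon_capacity(bandwidth: float, snr_linear: float) -> float:
    """C = B * log2(1 + SNR); with B normalized to 1, units are bits per use."""
    return bandwidth * math.log2(1.0 + snr_linear)

B = 1.0  # fixed (normalized) spatial bandwidth of the lithography channel
for snr in (10, 100, 1000):
    print(f"SNR {snr:5d} -> capacity {shannon_capacity(B, snr):.2f} bits/use")
```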


23rd Annual BACUS Symposium on Photomask Technology | 2003

Model-based methodology for reducing OPC output pattern complexity

Lawrence S. Melvin; Michael L. Rieger

One of the best ways to increase correction accuracy in model-based OPC is to decrease the correction segment length, and as design rules shrink this practice is becoming more prevalent. Unfortunately, it increases overall mask feature complexity, which leads to reticles that are difficult to manufacture and inspect. With current OPC segmentation methodologies, the smallest correction segment length is generally applied uniformly across an entire correction set. A more targeted segmentation approach that uses the process model to determine sampling rates and locations could confine complex correction features to the regions where they are absolutely necessary. OPC complexity is a key factor driving mask costs higher as design rules are pushed smaller, and methods for reducing it without compromising OPC effectiveness are being leveraged to slow the growth of NRE costs. In previous papers we have discussed methods for identifying features in which OPC accuracy can be sacrificed safely to reduce mask complexity, and we have outlined methods for handling process-variation effects for simple OPC shapes in complex regions. In this paper we discuss another method for reducing OPC complexity while optimally preserving OPC accuracy on every feature. The method leverages pre-correction process simulation to predict the most “cost effective” shape for a feature. With simulated pattern characteristics and with consideration of potential mask rule violations, the method establishes an optimum correction shape “template.” For example, the choice between a hammerhead and a dog-ear serif can be made from process model data, so that dog-ear serifs are used only when the layout generates flat aerial images; line-end treatments are thus determined up front, focusing the OPC computation on the most effective and least complex shape and removing the need for post-OPC mask-constraint shape adjustments. This methodology leads to a more frugal correction that maintains correction accuracy while reducing mask construction complexity.
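A minimal sketch of the template-selection step follows, assuming a hypothetical pre-correction statistic (aerial-image slope at the line end) and invented thresholds; the paper's actual criterion is model-based.

```python
# Minimal sketch of correction-template pre-selection: use a pre-correction
# simulation statistic to pick the cheapest shape that can still meet target,
# before running the full OPC iteration. All thresholds are invented.
def choose_line_end_template(image_slope_per_nm: float,
                             mask_min_feature_nm: float) -> str:
    """Return a correction-shape name; flat images get the complex serif."""
    if image_slope_per_nm < 0.002 and mask_min_feature_nm <= 60.0:
        return "dog-ear serif"      # complex shape, justified by a flat image
    if image_slope_per_nm < 0.005:
        return "hammerhead"         # moderate complexity
    return "simple extension"       # steep image: a cheap shape suffices

for slope in (0.001, 0.003, 0.008):
    print(slope, "->", choose_line_end_template(slope, mask_min_feature_nm=50.0))
```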


Photomask and Next-Generation Lithography Mask Technology | 2002

Enriching design intent for optimal OPC and RET

Michael L. Rieger; Valery Gravoulet; Jeffrey P. Mayhew; Daniel F. Beale; Robert Lugg

In typical rule- or model-based optical proximity correction (OPC), the goal is to align the silicon layout edges as closely as possible to the corresponding edges in the design layout. OPC precision requirements are approaching 1 nm or less at the 0.1 µm process node. While state-of-the-art OPC tools are capable of operating at this accuracy, such tight requirements increase computational cycle time, output file size, and photomask fabrication cost. Accuracy requirements on different features in the design may vary widely, and regions that do not need the highest accuracy can be exploited to reduce OPC complexity. For example, transistor gate dimensions require tighter dimensional control than interconnect features on the polysilicon layer, and gate features typically occupy less area than interconnect. When relaxed OPC accuracy requirements are applied to the interconnect features but not the gate features, the overall complexity of the polysilicon mask pattern can be significantly reduced without losing accuracy where it counts.
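The sketch below illustrates the intent-tagging idea: features carry a tolerance class, and OPC reads the class to set its accuracy target. The class names and tolerance values are hypothetical.

```python
# Minimal sketch of carrying design intent into OPC: tag features with a
# tolerance class so gates get tight control and interconnect is relaxed.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    kind: str            # "gate" or "interconnect" on the polysilicon layer

TOLERANCE_NM = {"gate": 1.0, "interconnect": 5.0}   # invented values

def opc_tolerance_nm(f: Feature) -> float:
    """Tighter tolerance -> finer segmentation -> more mask vertices."""
    return TOLERANCE_NM.get(f.kind, 3.0)

layout = [Feature("xtor_gate_1", "gate"), Feature("poly_route_17", "interconnect")]
for f in layout:
    print(f"{f.name}: correct to within {opc_tolerance_nm(f)} nm")
```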


Optical Microlithography XVI | 2003

A methodology to calculate line-end correction feature performance as a function of reticle cost

Lawrence S. Melvin; James P. Shiely; Michael L. Rieger; Benjamin D. Painter

Mask fabrication costs are significantly aggravated by OPC complexity. This increased complexity is presumably needed to accurately render 2-D configurations. The humble line-end is one of the most difficult 2-D configurations to print accurately when considering process margin requirements and mask fabrication constraints. In this paper, the requirements for proximity-corrected line-end structures will be explored and a pattern complexity metric will be proposed to compare relative mask cost versus line-end lithographic performance. Many types of correction shapes are available to improve process margin for line-ends; however, the cost of producing these various line-end configurations can vary dramatically. Using both a simple optical model to simulate line-end performance through focus offset and a cost metric based on fracture shots, a comparison of six types of line-ends for correction and process efficiency will be undertaken. Each of the six line-end corrections will attempt to produce equally effective silicon line-end shapes. Line-ends will be evaluated based on shortening (pullback), pinching, and bridging characteristics, and line-end lithographic behavior will be characterized through all process-window boundary conditions. The objective of this study is to quantify the trade-offs among three variables: mask cost, process-window robustness, and design tolerance margin. In addition, through the study of proximity effects on the various line-end types, the possibility of mixing expensive but high-performance line-ends with simpler, less aggressive line-ends to reduce reticle cost while maintaining or increasing correction fidelity will be studied.
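As a hedged illustration of the trade-off, the sketch below selects the cheapest line-end treatment (by fracture-shot count) that meets a local pullback budget, so expensive serifs are spent only where they are needed. The shot counts and pullback numbers are invented placeholders, not measured data.

```python
# Minimal sketch of a shot-count cost metric for line-end treatments:
# pick the lowest-shot shape whose residual pullback fits the budget.
CANDIDATES = {
    # name:             (residual pullback nm, fracture shots on mask)
    "simple extension": (9.0, 2),
    "hammerhead":       (5.0, 4),
    "dog-ear serif":    (3.0, 8),
}

def cheapest_meeting_budget(budget_nm: float):
    """Return the cheapest treatment meeting the pullback budget, else None."""
    ok = [(shots, name) for name, (pb, shots) in CANDIDATES.items()
          if pb <= budget_nm]
    return min(ok)[1] if ok else None

for budget in (10.0, 6.0, 4.0, 2.0):
    print(f"pullback budget {budget:4.1f} nm -> {cheapest_meeting_budget(budget)}")
```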


22nd Annual BACUS Symposium on Photomask Technology | 2002

An Effective Distributed Architecture for OPC & RET Applications

Robert Lugg; Mathias Boman; James Burdorf; Michael L. Rieger

The computational power needed to generate mask layouts for OPC and resolution enhancement techniques increases exponentially with process node. Rapidly growing design complexity is compounded by the more aggressive methods now required for smaller feature sizes, and layers once considered non-critical now routinely receive correction. While some improvement in code efficiency can be expected, algorithms are maturing to the point where such improvements will likely not keep pace with the computational need. To maintain required processing cycle times, massively parallel processing methods must be employed. In this paper we discuss loosely coupled distributed computing architectures applied to OPC/RET layout synthesis. The degree to which an application is scalable depends on how well the problem can be divided into independent sets of data. Data must also be partitioned into reasonably sized blocks so that memory requirements per processor can be bounded. Communication overhead, I/O overhead, and serial processes all degrade scalability and may increase overall storage requirements. We analyze the behavior of distributed processing architectures with large numbers of processors, and we present performance data on an existing massively parallel system.
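A minimal sketch of such a loosely coupled partitioning follows, assuming tiling with a halo that bounds proximity context and a hypothetical per-tile worker; the real system's scheduling and result merging are more involved.

```python
# Minimal sketch of distributed OPC tiling: partition the layout into
# independent tiles with a halo (so proximity context stays local), process
# tiles in a worker pool, and collect results. correct_tile is a hypothetical
# stand-in for the real OPC kernel.
from multiprocessing import Pool

TILE_NM = 50_000          # tile edge; bounds per-worker memory
HALO_NM = 2_000           # overlap covering the proximity interaction range

def tiles(layout_w_nm: int, layout_h_nm: int):
    """Yield (x0, y0, x1, y1) windows, each padded by the halo."""
    for y in range(0, layout_h_nm, TILE_NM):
        for x in range(0, layout_w_nm, TILE_NM):
            yield (max(0, x - HALO_NM), max(0, y - HALO_NM),
                   min(layout_w_nm, x + TILE_NM + HALO_NM),
                   min(layout_h_nm, y + TILE_NM + HALO_NM))

def correct_tile(window):
    """Placeholder worker: a real implementation would run OPC on the clip."""
    x0, y0, x1, y1 = window
    return (window, (x1 - x0) * (y1 - y0))   # e.g. corrected-area bookkeeping

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        results = pool.map(correct_tile, tiles(200_000, 200_000))
    print(len(results), "tiles corrected")
```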
