Mohamed Al-Imam
Mentor Graphics
Publications
Featured research published by Mohamed Al-Imam.
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Azalia A. Krasnoperova; James A. Culp; Ioana Graur; Scott M. Mansfield; Mohamed Al-Imam; Hesham Maaty
As the industry moves toward the 45nm technology node and beyond, further reduction of the lithographic process window is anticipated. The consequence is twofold: first, manufactured chips will have pattern sizes that differ from the designed sizes, and those variations may become increasingly dominated by systematic components as process windows shrink; second, smaller process windows will lead to yield loss because, at small dimensions, lithographic process windows are often constrained by catastrophic fails such as resist collapse or trench scumming rather than by gradual pattern size variation. With this in mind, Optical Proximity Correction (OPC) for future technology generations must evolve from the current single-process-point OPC to algorithms that provide an OPC solution optimized for process variability and yield. In this paper, a Process Window OPC (PWOPC) concept is discussed, along with its place in the design-to-manufacturing flow. The use of additional models for process corners, the integration of process fails, and algorithm optimization for a production-worthy flow are described. Results are presented for 65nm metal levels.
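The abstract does not give the PWOPC formulation, but the idea of optimizing a correction against several process corners rather than a single nominal condition can be illustrated with a minimal sketch. Everything below is assumed for illustration: simulate_edge_placement stands in for a real lithography simulator, and the corner list and weights are toy values.

def simulate_edge_placement(fragment, dose, focus):
    """Placeholder for an aerial-image/resist simulation at one process condition."""
    # A real implementation would call a lithography simulator here; this is a toy response.
    return fragment["target"] + 0.5 * dose - 2.0 * focus ** 2

def process_window_cost(fragment, corners, weights):
    """Weighted sum of squared edge-placement errors over the sampled process corners."""
    cost = 0.0
    for (dose, focus), w in zip(corners, weights):
        epe = simulate_edge_placement(fragment, dose, focus) - fragment["target"]
        cost += w * epe ** 2
    return cost

if __name__ == "__main__":
    fragment = {"target": 45.0}                       # nm, toy target edge position
    corners = [(0.0, 0.0), (+3.0, 0.0), (-3.0, 0.0),  # dose offsets (%) at best focus
               (0.0, 0.08), (0.0, -0.08)]             # focus offsets (um) at best dose
    weights = [2.0, 1.0, 1.0, 1.0, 1.0]               # nominal condition weighted higher
    print(process_window_cost(fragment, corners, weights))

An OPC engine driven by such a cost would trade a small error at the nominal condition for better behavior across the whole window, which is the shift from single-point OPC the paper describes.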
Proceedings of SPIE | 2010
James M. Oberschmidt; Amr Abdo; Tamer Desouky; Mohamed Al-Imam; Azalia A. Krasnoperova; Ramya Viswanathan
The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly due to accuracy variation across different geometry types; pitch measurements, for instance, are normally more accurate than corner-rounding measurements, so only certain geometrical shapes are typically considered when creating a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, due to the increasing number of development and production nodes, and the process becomes more complicated if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and the sites are centered; bad sites are then excluded, and the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow eliminates manual selection and filtering techniques, provides powerful tools for customizing the final plan, and greatly reduces the time needed to generate these plans.
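The following is a minimal sketch, not the paper's tool, of the described sequence: drop bad sites, collapse geometrically similar sites, and cap the number of 1D/2D samples. The field names, tolerance and site records are illustrative assumptions.

def reduce_sample_plan(sites, max_1d, max_2d, pitch_tol_nm=5.0):
    # Exclude sites with missing measurements or metrology flags.
    clean = [s for s in sites if s["cd_nm"] > 0 and not s["flagged_bad"]]

    # Group geometrically similar 1D sites by (width, space) quantized to a tolerance.
    seen, reduced_1d, reduced_2d = set(), [], []
    for s in clean:
        if s["kind"] == "1D":
            key = (round(s["width_nm"] / pitch_tol_nm), round(s["space_nm"] / pitch_tol_nm))
            if key not in seen:
                seen.add(key)
                reduced_1d.append(s)
        else:
            reduced_2d.append(s)

    return reduced_1d[:max_1d], reduced_2d[:max_2d]

if __name__ == "__main__":
    sites = [
        {"kind": "1D", "width_nm": 90, "space_nm": 90, "cd_nm": 88, "flagged_bad": False},
        {"kind": "1D", "width_nm": 91, "space_nm": 89, "cd_nm": 87, "flagged_bad": False},  # near-duplicate, dropped
        {"kind": "2D", "width_nm": 90, "space_nm": 200, "cd_nm": 0, "flagged_bad": True},   # bad site, excluded
    ]
    one_d, two_d = reduce_sample_plan(sites, max_1d=100, max_2d=50)
    print(len(one_d), len(two_d))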
Proceedings of SPIE | 2007
Walid A. Tawfic; Mohamed Al-Imam; Karim Madkour; Rami Fathy; Ir Kusnadi; George E. Bailey
Process models are responsible for predicting the latent image in the resist in a lithographic process. In order for the process model to calculate the latent image, information about the aerial image at each layout fragment is evaluated first, and then some aerial image characteristics are extracted. These parameters are passed to the process models to calculate the wafer latent image. The process model returns a threshold value that indicates the position of the latent image inside the resist; the accuracy of this value depends on the calibration data that were used to build the process model in the first place. The calibration structures used in building the models are usually gathered in a single layout file called the test pattern. Real raw data from the lithographic process are measured and attached to the corresponding structures in the test pattern, and these data are then applied to the calibration flow of the models. In this paper we present an approach to automatically detect patterns that are found in real designs and have considerable aerial image parameter differences from the nearest test pattern structure, and to repair the test patterns to include these structures. This detect-and-repair approach will guarantee accurate prediction of different layout fragments and therefore correct OPC behavior.
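A rough sketch of the detection step, under stated assumptions: each fragment is summarized by a vector of aerial image parameters (here Imax, Imin, slope, curvature), and a design fragment is flagged for inclusion in the test pattern when its nearest test-pattern structure is too far away in that parameter space. The parameter tuples and threshold below are illustrative, not values from the paper.

import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def fragments_to_add(design_params, test_params, threshold):
    """Return indices of design fragments with no nearby test-pattern structure."""
    missing = []
    for i, p in enumerate(design_params):
        nearest = min(distance(p, q) for q in test_params)
        if nearest > threshold:
            missing.append(i)
    return missing

if __name__ == "__main__":
    test_params = [(0.82, 0.12, 2.1, -0.4), (0.75, 0.20, 1.6, -0.1)]
    design_params = [(0.81, 0.13, 2.0, -0.4),   # well covered by the test pattern
                     (0.55, 0.30, 0.9, 0.6)]    # poorly covered -> add to test pattern
    print(fragments_to_add(design_params, test_params, threshold=0.2))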
Design and Process Integration for Microelectronic Manufacturing Conference | 2006
Scott M. Mansfield; Geng Han; Mohamed Al-Imam; Rami Fathy
In recent years, design for manufacturability (DfM) has become an important focus of the semiconductor industry and many new DfM applications have arisen. Most of these applications rely heavily on the ability to model process sensitivity, and here we explore the role of through-process modeling in DfM applications. Several different DfM applications are examined and their lithography model requirements analyzed. The complexities of creating through-process models are then explored and methods to ensure their accuracy are presented.
Proceedings of SPIE | 2007
Rami Fathy; Mohamed Al-Imam; Hesham Diab; Moutaz Fakhry; Juan Andres Torres; B. Graupp; Jean-Marie Brunet; Mohamed Bahnas
Device extraction, and the quality of device extraction, is becoming of increasing concern in the integrated circuit design flow. As circuits become more complicated, with concomitant reductions in geometry, the design engineer faces an ever-growing demand for accurate device extraction. For technology nodes of 65nm and below, extracting the device geometry from the polygons drawn in the design layout might not be sufficient to describe the actual electrical behavior of these devices; therefore, contours from lithographic simulations need to be considered for more accurate results. Process window variations have a considerable effect on the shape of the device wafer contour, so even with an accurate method to extract device parameters from wafer contours, one would still need to know which lithographic condition to simulate. Many questions can be raised here: Are contours that represent the best lithography conditions enough? Is there a need to also consider process variations? How do we include them in the extraction algorithm? In this paper we first present the method of extracting the devices from the layout coupled with lithographic simulations. Afterwards, a complete flow for circuit timing/power analysis using lithographic contours is described. Comparisons between timing results from the conventional LVS method and from the litho-aware method are presented to show the importance of considering litho contours.
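A hedged illustration of the contour-based idea, not the paper's extraction engine: approximate an effective channel length by averaging the gate CD sampled along the simulated contour, then compare drawn versus contour-based values at different process conditions. The corner names and numbers are invented for the example.

def effective_length(contour_widths_nm):
    """Average the gate CD sampled along the channel (a simple slicing approximation)."""
    return sum(contour_widths_nm) / len(contour_widths_nm)

if __name__ == "__main__":
    drawn_l = 65.0
    corners = {
        "nominal":      [64.1, 63.8, 64.5, 64.0],   # CD slices from the nominal contour (toy data)
        "defocus+dose": [61.7, 60.9, 62.3, 61.5],   # CD slices from a process-window corner (toy data)
    }
    for name, widths in corners.items():
        l_eff = effective_length(widths)
        print(f"{name}: drawn L = {drawn_l} nm, contour L_eff = {l_eff:.1f} nm")

Feeding such corner-dependent device parameters into timing/power analysis is what distinguishes the litho-aware flow from the conventional LVS-based one.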
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Mohamed Al-Imam; Andres Torres; Jean-Marie Brunet; Moutaz Fakhry; Rami Fathy
Cutting-edge technology node manufacturers are always researching how to increase yield while still using silicon wafer area optimally, so that these technologies appeal more to designers. Many problems arise with such requirements; the most important is the failure of plain layout geometric checks to capture yield-limiting features in designs. If these features are recognized at an early stage of design, a lot of effort can be saved at the fabrication end. A new trend in verification is to couple geometric checks with lithography simulations in the designer space. A lithography process has critical parameters that control the quality of its output. Unfortunately, some of these parameters cannot be kept constant during the exposure process, so their variability should be taken into consideration during the lithography simulations, which are then performed multiple times with these variables set to the different values they can take during the actual process. This significantly affects the runtime for verification. In this paper the authors present a methodology to carefully select only the needed values for varying lithography parameters, so as to capture the process variations while improving runtime through reduced simulations. The selected values depend on the desired variation for each parameter considered in the simulations. The method is implemented as a tool for qualification of different design techniques.
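A sketch under assumptions (the paper's actual selection criteria are not given here): sample each varying parameter across its expected range, then simulate only the nominal point plus the single-parameter extremes instead of the full grid. Ranges, counts and the reduction rule are illustrative.

def sample_values(nominal, half_range, n_samples):
    """Evenly spaced samples across [nominal - half_range, nominal + half_range]."""
    if n_samples == 1:
        return [nominal]
    step = 2.0 * half_range / (n_samples - 1)
    return [nominal - half_range + i * step for i in range(n_samples)]

if __name__ == "__main__":
    focus_samples = sample_values(nominal=0.0, half_range=0.08, n_samples=3)   # um
    dose_samples = sample_values(nominal=1.0, half_range=0.03, n_samples=3)    # relative dose
    # Keep the nominal condition plus single-parameter excursions only.
    conditions = [(0.0, 1.0)]
    conditions += [(f, 1.0) for f in (focus_samples[0], focus_samples[-1])]
    conditions += [(0.0, d) for d in (dose_samples[0], dose_samples[-1])]
    print(len(conditions), "simulation conditions instead of",
          len(focus_samples) * len(dose_samples))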
Proceedings of SPIE | 2009
Mohamed Al-Imam
In the modern photolithography process for manufacturing integrated circuits, geometry dimensions that are much smaller than the exposure wavelength need to be realized on silicon. Resolution Enhancement Techniques (RET) therefore have an indispensable role in the implementation of a successful technology process node. Finding an appropriate RET recipe that answers the needs of a certain fabrication process usually involves intensive computational simulations. These simulations have to reflect how different elements in the lithography process under study will behave. To achieve this, accurate models are needed that truly represent the transmission of patterns from mask to silicon. A common practice in calibrating lithography models is to collect data for the dimensions of some test structures created on the exposure mask, along with the corresponding dimensions of these test structures on silicon after exposure. These data are used to tune the models for good predictions. The models are guaranteed to accurately predict the test structures that were used in their tuning; however, real designs may contain a much greater variety of structures that were not included in the test structures. This paper explores a method for compiling the test structures to be used in the model calibration process using design layouts as input. The method relies on reducing structures in the design layout to the essential unique structures from the lithography model's point of view, thus ensuring that the test structures represent what the model will actually have to predict during the simulations.
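A minimal sketch, assuming each layout fragment can be summarized by a signature of its local environment (for example, the widths and spaces within the optical ambit): fragments with identical signatures are redundant for calibration, so only one representative per signature is kept as a test structure. All names and the quantization grid are illustrative, not the paper's implementation.

def environment_signature(fragment, grid_nm=5):
    """Quantize the surrounding widths/spaces so near-identical contexts collide."""
    return tuple(round(x / grid_nm) for x in fragment["neighborhood_nm"])

def unique_test_structures(fragments):
    representatives = {}
    for frag in fragments:
        # Keep the first fragment seen for each distinct environment signature.
        representatives.setdefault(environment_signature(frag), frag)
    return list(representatives.values())

if __name__ == "__main__":
    fragments = [
        {"id": "a", "neighborhood_nm": [90, 90, 180, 90]},
        {"id": "b", "neighborhood_nm": [91, 89, 181, 90]},   # same context up to the grid
        {"id": "c", "neighborhood_nm": [65, 200, 65, 200]},
    ]
    print([f["id"] for f in unique_test_structures(fragments)])  # -> ['a', 'c']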
Design and Process Integration for Microelectronic Manufacturing Conference | 2006
M. Bahnas; Mohamed Al-Imam; A. Seoud; Pat LaCour; H. F. Ragai
The OPC treatment of aerial image ripples (local variations in the aerial contour relative to constant target edges) is one of the growing issues in very low-k1 lithography employing hard off-axis illumination. The maxima and minima points in the aerial image, if not optimally treated within existing model-based OPC methodologies, can induce severe necking or bridging in the printed layout. The current fragmentation schemes and the subsequent site simulations are rule-based, and hence not optimized according to the aerial image profile at key points. The authors primarily explore more automated software methods to detect the location of the ripple peaks, along with a simplified implementation strategy that is less costly. We define this to be an adaptive site placement methodology based on aerial image ripples. Recently, the phenomenon of aerial image ripples was considered within the analysis of the lithography process for cutting-edge technologies such as chromeless phase-shifting masks and strong off-axis illumination approaches [3,4]. Effort is spent during the process development of conventional model-based OPC with the mere goal of locating these troublesome points. This leads to longer development cycles, and so far only partial success has been reported in suppressing them (the causes of ripple occurrence have not yet been fully explored). We present here our success in implementing a more flexible model-based OPC solution that dynamically locates these ripples based on the local aerial image profile near the feature edges. This model-based dynamic tracking of ripples will cut some time from the OPC code development phase and avoid specifying some rule-based recipes. Our implementation includes classification of the ripple bumps within one edge and the allocation of different weights in the OPC solution. This results in a new strategy of adapting site locations and OPC shifts of edge fragments to avoid any aggressive correction that might increase the ripples or propagate them to a new location. A more advanced adaptation will be ripple-aware fragmentation as a second control knob, besides the automated site placement.
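The core detection step can be illustrated with a small sketch, which is an assumption rather than the production OPC code: find the local maxima and minima of the aerial-image intensity sampled along an edge, then use those positions as additional simulation sites so the fragments track the ripple profile. The sampled profile below is toy data.

def ripple_sites(positions_nm, intensities):
    """Return (position, kind) for interior local extrema of the sampled profile."""
    sites = []
    for i in range(1, len(intensities) - 1):
        prev_i, cur, next_i = intensities[i - 1], intensities[i], intensities[i + 1]
        if cur > prev_i and cur > next_i:
            sites.append((positions_nm[i], "max"))
        elif cur < prev_i and cur < next_i:
            sites.append((positions_nm[i], "min"))
    return sites

if __name__ == "__main__":
    positions = [0, 20, 40, 60, 80, 100, 120]
    profile = [0.30, 0.34, 0.31, 0.36, 0.29, 0.33, 0.30]   # toy intensity ripple along an edge
    print(ripple_sites(positions, profile))  # extrema mark where extra sites would be placed

Weighting the detected bumps differently, as the abstract describes, would then steer how aggressively each fragment is shifted so the correction does not amplify the ripple or move it elsewhere.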
Proceedings of SPIE | 2008
Mohamed Bahnas; Mohamed Al-Imam
OPC models have been improving their accuracy over the years by modeling more error sources in the lithographic systems, but model calibration techniques are improving at a slower pace. One area of model calibration that has garnered little interest is the statistical variance of the calibration data set. OPC models are very susceptible to parameter divergence under statistical variance, yet only modest attention is given to the data variance once the calibration sequence has started. Not only should the calibration data be a good representation of the design intent, but measurement redundancy is also required to take the process and metrology variance into consideration. Considering that it takes five to nine redundant measurements to generate a good statistical distribution for averaging, and tens of thousands of measurements to mimic the design intent, the data volume requirements become overwhelming. Typically, data redundancy is reduced because of this data explosion, so some level of variance will creep into the model-tuning process. This is a feasibility study of the treatment of data variance during model calibration. The approach was developed to improve the model fitness for the main out-of-specification features present in the calibration test pattern by performing small manipulations of the measured data, combined with data weighting, during the model calibration process. This data manipulation is executed in image-parameter groups (Imin, Imax, slope and curvature) to control model convergence. The critical-CD perturbations are typically fractions of a nanometer, which is consistent with the residual variance of the statistically valid data set. With this data-manipulation approach the critical features are pulled into specification without diverging other feature types. This paper details this model calibration technique and the use of imaging parameters and weights to converge the model for key feature types. It also demonstrates its effectiveness on realistic applications.
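The following is a feasibility-style sketch under assumptions, not the paper's calibration code: measurements are grouped by an image parameter (here a slope bin), out-of-specification records in the hard-to-fit group receive a sub-nanometre CD nudge consistent with metrology noise plus a higher fit weight, and everything else is left untouched. Field names, thresholds and weights are illustrative.

def adjust_calibration_data(records, slope_cut=1.5, nudge_nm=0.3, weight=3.0):
    """Perturb and up-weight measurements in the low-slope (hard-to-fit) group."""
    adjusted = []
    for r in records:
        r = dict(r)                       # do not mutate the caller's data
        if r["slope"] < slope_cut and abs(r["model_err_nm"]) > r["spec_nm"]:
            # Pull the measured CD slightly toward the model prediction, within metrology noise.
            r["cd_nm"] += nudge_nm if r["model_err_nm"] > 0 else -nudge_nm
            r["weight"] = weight
        else:
            r["weight"] = 1.0
        adjusted.append(r)
    return adjusted

if __name__ == "__main__":
    data = [{"cd_nm": 88.0, "slope": 1.2, "model_err_nm": 2.4, "spec_nm": 1.5},
            {"cd_nm": 90.0, "slope": 2.3, "model_err_nm": 0.6, "spec_nm": 1.5}]
    for r in adjust_calibration_data(data):
        print(r["cd_nm"], r["weight"])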
Proceedings of SPIE, the International Society for Optical Engineering | 2007
Mohamed Bahnas; Mohamed Al-Imam; James Word
Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar in achieving successful mask tape-out in cutting-edge low-k1 lithography. It facilitates detecting printing failures before a costly and time-consuming mask tape-out and wafer print occur. The role of the OPC verification step is critical in the early production phases of a new process development, since various layout patterns will be suspected of failing or causing performance degradation and, in turn, need to be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development there is much less probability of detecting failures, but the OPC verification step still acts as the last line of defense for all of the implemented RET work. In a recent publication, the optimum approach for responding to these detected failures was addressed, and a solution was proposed to repair these defects in an automated methodology fully integrated with and compatible with the main RET/OPC flow. In this paper the authors present further work on and optimizations of this repair flow. An automated analysis methodology for the root causes of the defects, and a classification covering all possible causes, will be discussed. This automated analysis incorporates the learning from previously highlighted causes and any new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e. OPC knob) can easily be selected for each classified defect location, instead of applying all approaches at all locations. This helps cut down the runtime of the OPC repair processing and reduces the number of iterations needed to reach zero defects. An output report of the existing causes of defects and how the tool handled them is generated. The report will help with further learning and facilitate the enhancement of the main OPC recipe. Accordingly, the main OPC recipe can be made more robust, converging faster and probably in fewer iterations. This knowledge feedback loop is one of the fruitful benefits of the automatic OPC repair flow.
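A hedged sketch of the classification-to-knob idea (the paper's classifier and knob names are not given here, so every rule and label below is an assumption): a rule-based mapping from a verification defect's signature to the repair action to apply at that location, so only the relevant correction is re-run instead of all of them.

def classify_defect(defect):
    """Assign a coarse root-cause class from simple defect attributes (illustrative rules)."""
    if defect["type"] == "bridge" and defect["space_nm"] < 70:
        return "insufficient_space_bias"
    if defect["type"] == "pinch" and defect["slope"] < 1.0:
        return "low_image_contrast"
    return "unclassified"

REPAIR_KNOB = {
    "insufficient_space_bias": "increase local negative bias / re-fragment the edge",
    "low_image_contrast": "add or resize an assist feature",
    "unclassified": "flag for OPC engineer review",
}

if __name__ == "__main__":
    defects = [{"type": "bridge", "space_nm": 62, "slope": 1.8},
               {"type": "pinch", "space_nm": 120, "slope": 0.7}]
    for d in defects:
        cause = classify_defect(d)
        print(cause, "->", REPAIR_KNOB[cause])

Emitting the per-defect cause and chosen action as a report is what closes the feedback loop the abstract describes, since recurring causes can then be folded back into the main OPC recipe.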