Publications


Featured research published by Mark A. Lavin.


Computer Vision, Graphics, and Image Processing | 1986

Fast Hough transform: A hierarchical approach

Hungwen Li; Mark A. Lavin; Ronald J. Le Master

We have developed a fast algorithm for the Hough transform that can be incorporated into the solutions to many problems in computer vision such as line detection, plane detection, segmentation, and motion estimation. The fast Hough transform (FHT) algorithm assumes that image space features “vote” for sets of points lying on hyperplanes in the parameter space. It recursively divides the parameter space into hypercubes from low to high resolution and performs the Hough transform only on the hypercubes with votes exceeding a selected threshold. The decision on whether a hypercube receives a vote from a hyperplane depends on whether the hyperplane intersects the hypercube. This hierarchical approach leads to a significant reduction of both computation and storage. Due to the hyperplane formulation of the problem and the hierarchical representation of the hypercube, the computation in the FHT is incremental and does not require multiplication, which further contributes to efficiency.
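The following is a minimal Python sketch of the coarse-to-fine idea only: for 2-D line detection in the slope/intercept parameterization c = y - m*x, each image point (x, y) votes for a hyperplane in (m, c) space, and a parameter-space box is subdivided only if enough hyperplanes intersect it. The parameter bounds, thresholds, and function names are illustrative assumptions; the incremental, multiplication-free bookkeeping of the actual FHT is not reproduced here.

```python
# Illustrative sketch of the hierarchical (coarse-to-fine) idea behind the
# fast Hough transform, for 2-D line detection with the parameterization
# c = y - m*x: each image point (x, y) votes for the hyperplane
# m*x + c = y in (m, c) parameter space.  Bounds, thresholds, and names
# are illustrative choices, not the authors' implementation.

def plane_hits_box(normal, offset, center, half):
    """True if the hyperplane normal . p = offset intersects the
    axis-aligned box given by its center and half-extents."""
    dist = sum(n * c for n, c in zip(normal, center)) - offset
    reach = sum(abs(n) * h for n, h in zip(normal, half))
    return abs(dist) <= reach

def fast_hough(points, center, half, min_votes, depth, max_depth, out):
    """Recursively subdivide parameter-space boxes that receive enough votes."""
    votes = [(x, y) for (x, y) in points
             if plane_hits_box((x, 1.0), y, center, half)]
    if len(votes) < min_votes:
        return                            # prune: too few features vote here
    if depth == max_depth:
        out.append((center, len(votes)))  # accept a fine-resolution peak
        return
    # split the box into 2^d children (d = 2 here) and recurse on each
    for sm in (-0.5, 0.5):
        for sc in (-0.5, 0.5):
            child_center = (center[0] + sm * half[0], center[1] + sc * half[1])
            child_half = (half[0] / 2.0, half[1] / 2.0)
            fast_hough(votes, child_center, child_half,
                       min_votes, depth + 1, max_depth, out)

# Example: points roughly on the line y = 2x + 1
pts = [(x, 2 * x + 1) for x in range(10)]
peaks = []
fast_hough(pts, center=(0.0, 0.0), half=(4.0, 16.0),
           min_votes=8, depth=0, max_depth=6, out=peaks)
print(peaks[:3])   # boxes near (m, c) = (2, 1)
```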


IBM Journal of Research and Development | 1980

A geometric modeling system for automated mechanical assembly

Michael A. Wesley; Tomás Lozano-Pérez; Lawrence Isaac Lieberman; Mark A. Lavin; David D. Grossman

Very high level languages for describing mechanical assembly require a representation of the geometric and physical properties of 3-D objects including parts, tools, and the assembler itself. This paper describes a geometric modeling system that generates a data base in which objects and assemblies are represented by nodes in a graph structure. The edges of the graph represent relationships among objects such as part-of, attachment, constraint, and assembly. The nodes also store positional relationships between objects and physical properties such as material type. The user designs objects by combining positive and negative parameterized primitive volumes, for example, cubes and cones, which are represented internally as polyhedra. The data base is built by invoking a procedural representation of the primitive volumes, which generates vertex, edge, and surface lists of instances of the volumes. Several applications in the automatic assembly domain have been implemented using the geometric modeling system as a basis.
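The sketch below illustrates, in plain Python, the kind of graph database the paper describes: nodes for objects carrying positional and physical properties, and typed edges for relationships such as part-of, attachment, and constraint. The class and field names are illustrative assumptions, not the system's actual interface.

```python
# Minimal sketch of an assembly graph: object nodes with physical and
# positional properties, typed relationship edges, and polyhedral geometry
# produced by instancing parameterized primitive volumes.  Names are
# illustrative, not the modeling system's real API.
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    name: str
    material: str = "steel"
    # 4x4 homogeneous transform giving the object's position; identity here
    transform: tuple = ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))
    # polyhedral boundary, e.g. vertex and face lists generated when a
    # primitive volume (cube, cone, ...) is instanced
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)

@dataclass
class Relation:
    kind: str          # "part-of", "attachment", "constraint", "assembly"
    source: ObjectNode
    target: ObjectNode

class AssemblyGraph:
    def __init__(self):
        self.nodes, self.edges = [], []

    def add(self, node):
        self.nodes.append(node)
        return node

    def relate(self, kind, source, target):
        self.edges.append(Relation(kind, source, target))

# Example: a bolt that is part of a bracket sub-assembly
graph = AssemblyGraph()
bracket = graph.add(ObjectNode("bracket", material="aluminum"))
bolt = graph.add(ObjectNode("bolt"))
graph.relate("part-of", bolt, bracket)
```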


IBM Journal of Research and Development | 2001

TCAD development for lithography resolution enhancement

Lars W. Liebmann; Scott M. Mansfield; Alfred K. K. Wong; Mark A. Lavin; William C. Leipold; Timothy G. Dunham

Advances in lithography have contributed significantly to the progress of integrated-circuit technology. While nonoptical next-generation lithography (NGL) solutions are being developed, optical lithography continues to be the workhorse for high-throughput very-large-scale integration (VLSI) lithography. Extending optical lithography to the resolution levels necessary to support today’s aggressive product road maps increasingly requires the use of resolution-enhancement techniques. This paper presents an overview of several resolution-enhancement techniques being developed and implemented in IBM for its leading-edge CMOS logic and memory products.
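The abstract does not state the underlying scaling relation, but the standard Rayleigh criterion is the usual way to frame it: the minimum printable half-pitch scales with wavelength over numerical aperture, and resolution-enhancement techniques amount to imaging at a smaller effective k1. This is general background, not a formula taken from the paper.

```latex
% Standard Rayleigh scaling (background, not from the paper):
% resolution-enhancement techniques lower the achievable process factor k_1.
R_{\min} = k_1 \, \frac{\lambda}{\mathrm{NA}}
```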


Design and Process Integration for Microelectronic Manufacturing Conference | 2005

Integrating DfM components into a cohesive design-to-silicon solution (Invited Paper)

Lars W. Liebmann; Dan Maynard; Kevin W. McCullen; Nakgeuon Seong; Ed Buturla; Mark A. Lavin; Jason D. Hibbeler

Two primary tracks of DfM, one originating from physical design characterization, the other from low-k1 lithography, are described. Examples of specific DfM efforts are given, and potentially conflicting layout optimization goals are pointed out. The need for an integrated DfM solution that ties together currently parallel DfM efforts of increasing sophistication and layout impact is identified, and a novel DfM-enabling design flow is introduced.


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2000

EDA in IBM: past, present, and future

John A. Darringer; Evan E. Davidson; David J. Hathaway; Bernd Koenemann; Mark A. Lavin; Joseph Morrell; Khalid Rahmat; Wolfgang Roesner; Erich C. Schanzenbach; Gustavo E. Tellez; Louise H. Trevillyan

Throughout its history, from the early four-circuit gate-array chips of the late 1960s to today's billion-transistor multichip module, IBM has invested in tools to support its leading-edge technology and high-performance product development. The combination of demanding designs and close cooperation among product, technology, and tool development has given rise to many innovations in the electronic design automation (EDA) area and provided IBM with a significant competitive advantage. This paper highlights IBM's contributions over the last four decades and presents a view of the future, where the best methods of multimillion-gate ASIC and gigahertz microprocessor design are converged to enable highly productive system-on-a-chip designs that include widely diverse hardware and software components.


International Conference on Computer-Aided Design | 2004

Backend CAD flows for "restrictive design rules"

Mark A. Lavin; Fook-Luen Heng; Gregory A. Northrop

To meet the challenges of deep-subwavelength technologies (particularly 130 nm and beyond), lithography has come to rely increasingly on data processes such as shape fill, optical proximity correction, and RETs like altPSM. For emerging technologies (65 nm and beyond), the computational cost and complexity of these techniques are themselves becoming bottlenecks in the design-to-silicon flow. This has motivated the recent calls for restrictive design rules such as fixed width/pitch/orientation of gate-forming polysilicon features. We have been exploring how design might take advantage of these restrictions, and we present some preliminary ideas for reducing the computational cost throughout the back end of the design flow, through the post-tapeout data processes, while improving quality of results: the reliability of OPC/RET algorithms and the accuracy of models of manufactured products. We also believe that the underlying technology, including simulation and analysis, may be applicable to a variety of approaches to design for manufacturability (DFM).
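As a concrete, purely illustrative picture of what such a restriction means for tooling, the sketch below checks a hypothetical fixed-width, fixed-pitch, vertical-only rule for gate polysilicon rectangles. The rule values and function names are assumptions, not rules from the paper.

```python
# Illustrative check of a "restrictive design rule": gate-forming polysilicon
# must be vertical, of a single allowed width, and on a fixed pitch grid.
# Rectangles are (x0, y0, x1, y1) in nm; rule values are hypothetical.
ALLOWED_WIDTH = 50      # nm, single allowed gate width
ALLOWED_PITCH = 200     # nm, single allowed gate-to-gate pitch

def check_gate_shapes(rects):
    """Return a list of rule violations for vertically oriented gates."""
    violations = []
    xs = sorted(r[0] for r in rects)
    for i, (x0, y0, x1, y1) in enumerate(sorted(rects)):
        width, height = x1 - x0, y1 - y0
        if width >= height:
            violations.append((i, "orientation: gate must be vertical"))
        if width != ALLOWED_WIDTH:
            violations.append((i, f"width {width} != {ALLOWED_WIDTH}"))
    for a, b in zip(xs, xs[1:]):
        if (b - a) % ALLOWED_PITCH != 0:
            violations.append(("pitch", f"spacing {b - a} off the {ALLOWED_PITCH} grid"))
    return violations

# Two gates on pitch, one off pitch and too wide
gates = [(0, 0, 50, 500), (200, 0, 250, 500), (430, 0, 490, 500)]
print(check_gate_shapes(gates))
```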


14th Annual BACUS Symposium on Photomask Technology and Management | 1994

Optical proximity correction: a first look at manufacturability

Lars W. Liebmann; Brian J. Grenon; Mark A. Lavin; Thomas Zell

The feasibility of large-scale optical proximity correction with a focus on mask manufacturability is demonstrated on the support and logic gates of a leading-edge 64 Mb DRAM chip. Analysis of post-reactive-ion-etch SEM data of the 500-600 nm, DUV-exposed gates indicates two major contributors to across-chip line width variation: first-order proximity, that is, the minimum spacing to the nearest neighboring structure, and local area density, or pattern loading. The data presented show a very long-range (approximately 1 mm) impact of pattern density on post-etch line widths, favoring optical proximity correction approaches that are not based on biasing patterns to compensate for these effects. In this project, pattern-density-induced effects were alleviated by homogenizing the pattern loading across the chip to approximately 50% instead of biasing the gate structures to compensate for pattern density differences. Proximity-induced effects were compensated for with a one-dimensional, single-parameter (distance to nearest neighbor), four-bucket proximity correction routine with a strong focus on mask manufacturability. Even though the unbiased 64 Mb DRAM gate level challenges mask makers with 480 MB of MEBES data, the optical-proximity-corrected mask posed no substantial post-processing, writing, or inspection problems in IBM's Burlington, Vermont mask house. A very significant 80% reduction in post-etch across-chip line width variation was achieved with this corrected mask.
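A minimal sketch of what a one-dimensional, four-bucket correction of this kind could look like is shown below; the bucket boundaries, bias magnitudes, and the direction of the bias are hypothetical, not the values used in the paper.

```python
# Sketch of a one-dimensional, single-parameter, "four bucket" proximity
# correction: each line gets an edge bias chosen by which spacing bucket its
# distance to the nearest neighbor falls into.  All values are hypothetical.
BUCKETS = [              # (max nearest-neighbor space in nm, edge bias in nm)
    (400,   0),          # dense: no bias
    (700,  10),
    (1200, 20),
    (float("inf"), 30),  # isolated: largest bias
]

def bias_for_space(space_nm):
    """Return the edge bias for a given distance to the nearest neighbor."""
    for max_space, bias in BUCKETS:
        if space_nm <= max_space:
            return bias
    return 0

def correct_linewidths(widths, spaces):
    """Apply the bucketed bias to each nominal line width (both edges move)."""
    return [w + 2 * bias_for_space(s) for w, s in zip(widths, spaces)]

print(correct_linewidths([550, 550, 550], [350, 800, 5000]))
# -> [550, 590, 610]: isolated lines are biased more than dense ones
```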


Optical Microlithography XVIII | 2005

The problem of optimal placement of sub-resolution assist features (SRAF)

Maharaj Mukherjee; Scott M. Mansfield; Lars W. Liebmann; Alexey Lvov; Evanthia Papadopoulou; Mark A. Lavin; Zengqin Zhao

In this paper, we present a formulation of the sub-resolution assist feature (SRAF) placement problem as a geometric optimization problem. We present three independent geometric methodologies that use this formulation to optimize SRAF placement under mask and lithographic process constraints. Traditional rules-based methodologies are mainly one-dimensional in nature; though apparently very simple, they have proven inadequate for complex two-dimensional layouts. The methodologies presented in this paper, on the other hand, are inherently two-dimensional: they attempt to maximize SRAF coverage on real, complex designs while minimizing mask-rule and lithographic violations.
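For contrast with the two-dimensional formulation described above, the sketch below shows the traditional one-dimensional, rules-based style of SRAF insertion: a scatter bar is centered in any gap between main-feature edges that is wide enough to hold one. All dimensions and names are hypothetical.

```python
# Minimal sketch of one-dimensional, rules-based SRAF (scatter bar) insertion,
# shown only for contrast with the paper's 2-D geometric optimization.
# All dimensions are hypothetical.
SRAF_WIDTH = 40          # nm
MIN_MAIN_TO_SRAF = 100   # nm, keep-away distance from main features
ONE_BAR_SPACE = 2 * MIN_MAIN_TO_SRAF + SRAF_WIDTH   # smallest gap fitting a bar

def place_srafs_1d(edges):
    """edges: sorted x positions of main-feature edges bounding each gap.
    Returns (x0, x1) intervals for single centered scatter bars."""
    srafs = []
    for left, right in zip(edges[:-1], edges[1:]):
        gap = right - left
        if gap >= ONE_BAR_SPACE:
            mid = (left + right) / 2.0
            srafs.append((mid - SRAF_WIDTH / 2.0, mid + SRAF_WIDTH / 2.0))
    return srafs

# Gaps of 150 nm (too small) and 600 nm (fits one centered bar)
print(place_srafs_1d([0, 150, 750]))   # -> [(430.0, 470.0)]
```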


International Conference on Pattern Recognition | 1990

An object-oriented language for image and vision execution (OLIVE)

Myron Flickner; Mark A. Lavin; Sujata Das

The object-oriented language for image and vision environments (OLIVE), which is intended to make it easier to develop efficient, portable applications, is presented. OLIVE's principal object types, called images and loci (abstractions of point sets and geometric entities), and their corresponding operations, including the use of loci as generalized indexes for images, are defined. Several examples of OLIVE for typical image processing and machine vision tasks are presented. Issues concerning the implementation of OLIVE, including a hardware architecture that simplifies the implementation while enhancing its performance, are discussed.
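The sketch below illustrates the locus-as-generalized-index idea in plain Python; the class names and methods are illustrative assumptions and are not OLIVE's actual syntax.

```python
# Illustration of the image/locus idea: a locus is an abstraction of a point
# set, and it can be used as a generalized index to read or write the pixels
# of an image.  Names and methods are illustrative, not OLIVE's interface.
class Locus:
    """A set of (row, col) points, e.g. a region of interest or a contour."""
    def __init__(self, points):
        self.points = list(points)

    @classmethod
    def rectangle(cls, r0, c0, r1, c1):
        return cls((r, c) for r in range(r0, r1) for c in range(c0, c1))

class Image:
    def __init__(self, rows, cols, value=0):
        self.pixels = [[value] * cols for _ in range(rows)]

    def __getitem__(self, locus):
        # indexing an image with a locus yields the pixel values on that locus
        return [self.pixels[r][c] for r, c in locus.points]

    def __setitem__(self, locus, value):
        for r, c in locus.points:
            self.pixels[r][c] = value

img = Image(8, 8)
roi = Locus.rectangle(2, 2, 5, 5)   # a 3x3 region used as an index
img[roi] = 255                      # write through the locus
print(sum(img[roi]))                # -> 2295 (= 9 * 255)
```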


Design and Process Integration for Microelectronic Manufacturing Conference | 2004

Merits of cellwise model-based OPC

Puneet Gupta; Fook-Luen Heng; Mark A. Lavin

One of the most compute-intensive data-prep operations at the 90 nm PC level is model-based optical proximity correction (MBOPC). The running time and output data size are growing unacceptably, particularly for ASICs and designs containing large macros built out of library cells (books). The reason for this growth is that the region of interest for MBOPC is approximately 600 nm, which means that most library cells “see” interactions with adjacent books in the same row and also in adjacent rows. In this paper, we investigate the merits of doing cellwise MBOPC. In its simplest form, the approach is to perform data prep for each cell once per cell definition rather than once per placement. By inspection, this will reduce the computation time and output data size by a factor of P/D, where P is the number of book placements (hundreds to millions) and D is the number of book definitions. Our preliminary findings indicate that there is negligible difference between the nominal CD of cellwise-corrected cells and chipwise-corrected cells. We present our findings in terms of average CD and contact coverage, as well as runtime reduction.
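As a hypothetical worked example of the P/D factor (the specific numbers below are illustrative, chosen within the ranges the abstract mentions, not results from the paper):

```latex
% Hypothetical illustration of the cellwise-correction speedup factor P/D:
% correcting once per book definition instead of once per placement.
\frac{P}{D} \;=\; \frac{10^{6}\ \text{placements}}{10^{3}\ \text{definitions}} \;=\; 10^{3}
```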
