Keywords: DFM, DFY, OPC, DRC, unified data model, lithography

Editorial Feature Tab: Methods & Tools – DFM/DFY Models

@head: True DFM/DFY Solutions Require More Than OPC & DRC

@deck: Lithographic-aware placement and routing engines coupled with statistical analysis engines need a unified data model to ensure that all tools in the design flow have real-time access to the same design data.

@text: The core problem underlying the vast majority of today’s design for manufacturability (DFM) or design for yield (DFY) issues revolves around IC features that are now smaller than the wavelength of the light used to create them (see Figure 1). This is akin to trying to paint a 1/4-inch wide line using a 1-inch diameter paintbrush. Device manufacturers currently address this dilemma by post-processing a GDSII file with a variety of resolution enhancement techniques (RETs), such as optical proximity correction (OPC) and phase shift mask (PSM). In the case of OPC, for example, the tool modifies the GDSII file by augmenting existing features or adding new ones – known as sub-resolution assist features (SRAF) – to obtain better printability.
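
To put rough numbers on this gap, lithographers often use the Rayleigh criterion, which states that the minimum printable half-pitch is approximately k1 × λ / NA, where λ is the exposure wavelength, NA is the numerical aperture of the projection optics and k1 is a process-dependent factor. As a purely illustrative example: with 193-nm illumination, an NA of 0.85 and a k1 of about 0.4, the minimum half-pitch works out to roughly 0.4 × 193 nm / 0.85 ≈ 91 nm; printing features for the 90-nm and 65-nm nodes therefore means pushing k1 toward its theoretical limit of 0.25 and relying heavily on RET.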

The problem is that – in the context of the final GDSII files and photomasks – every structure in the design is affected by other structures in close proximity. For instance, two geometric shapes that are isolated from one another in the GDSII file and photomask will print in a certain way. But if the same shapes are located near each other, the interaction of the light used to create these shapes will modify how each one prints, often in non-intuitive ways.

In order to employ an effective DFM/DFY solution, IC designers must address manufacturing or yield problems caused by catastrophic or parametric failures that are either systematic (feature-driven) or statistical (random) in nature (see Figure 2). Unfortunately, existing design flows do not adequately address these DFM or DFY problems for the 90-nm and below technology nodes.

What is required is a full RTL-to-GDSII design flow in which all of the design and analysis engines are DFM/DFY-aware. In particular, a true DFM/DFY solution should feature lithographic-aware placement and routing engines coupled with statistical analysis engines. Such a solution also should include a unified data model that provides all of the tools in the flow – from synthesis to placement and routing, timing, extraction, power and signal integrity analysis – immediate and concurrent access to exactly the same data.

The combination of lithographic-aware placement and routing engines will not only minimize the need for resolution enhancement techniques (RET) such as post-layout OPC, but also increase the effectiveness of any such OPC that is used. These engines also can mark portions of the layout where OPC is not required and pass this information to the downstream OPC tools, thereby preventing unnecessary OPC. Since OPC is minimized or anticipated in advance during the design process, its impact on timing and area will be minimal, and correct-by-construction design closure will be achieved.

In the past, the design and manufacturing worlds have been treated as separate, distinct entities. Until now, designers have been shielded from the intricacies of the fabrication process by the use of “design rules” and “recommended rules” provided by the foundry. In earlier technology nodes, if designers – and their tools – rigorously met these rules, they could safely assume that the chip could be manufactured; any yield problems were considered the foundry’s responsibility and were addressed by improving the capabilities of the fabrication process or by bringing that process under tighter control. In the case of today’s ultra-deep submicron technologies, however, these rules no longer reflect the underlying physics of the fabrication process. This means that even if designers meticulously follow all of the rules provided by the foundry, the chips may still suffer unacceptable yields.

Limitations Associated with Design Rules

Design rules are becoming much more complex with every new technology generation. At the 130-nm node, for example, the design rules were relatively few and simple. Design rules started to proliferate and become significantly more complicated at the 90-nm node, and they have become extremely complicated at the 65-nm node; for example, even a simple end-of-line rule now has a plethora of parameters (see Figure 3).

The end result is that the number and complexity of these design rules are spiraling out of control, and employing them consumes huge amounts of memory and requires excessive run times. Furthermore, as previously noted, these rules no longer capture the underlying physics of what is actually happening.

As was previously discussed, current design flows are based on the use of design rules and recommended rules to generate initial GDSII files, which are then post-processed via a variety of resolution enhancement techniques (RET) such as OPC and PSM.

The problem is that RET takes place after layout (place-and-route), which is too late in the design flow. When the input to the RET tools is poor – as it is with existing flows – data sizes and run times explode, causing mask-generation costs to skyrocket. Furthermore, in some cases it is simply not possible to apply the amount of RET required (for example, structures that need to be added) to the initial design; this forces a time-consuming physical design iteration to create room for the RET, which in turn alters the performance characteristics of the design.

Within any IC fabrication process, there will be unavoidable process variations that lead to fluctuations in device geometry, behavior and performance. If the performance falls outside the specification, it is classified as a form of yield failure known as parametric yield loss.

In the case of conventional design methodologies, such variations are addressed by defining worst-case conditions and then ensuring that the performance will meet the specification under any condition. This worst-case approach is facing serious challenges at tighter design nodes. The only real solution available for conventional design flows is to guard-band the design by including excessive safety margins in the specification. However, this makes it harder to successfully complete the design and leaves an unacceptable amount of performance “on the table.”

Summary of the Limitations

The limitations associated with traditional DFM/DFY can be summarized by returning to the original matrix of manufacturing and yield problems (see Figure 4).

In the systematic-catastrophic category, the problem is that the design rules used by the layout (place-and-route) tools don’t have the ability to account for complex lithographic interactions and effects; that is, the fact that placing a component or track near another component or track may negatively affect the printing of both structures. Similarly, analysis cannot account for lithographic interactions and effects in the systematic-parametric category; in this case, the fact that placing a component or track near another component or track may affect the properties and timing of both structures.

Similarly, in the statistical-catastrophic category, recommended rules – such as adding redundant vias – cannot account for lithographic interactions and effects. The result is that adding a particular via may create a lithographically unfriendly situation that actually reduces yield: the exact opposite of what was intended. Finally, in the statistical-parametric category, conventional analysis tools cannot account for statistical effects, so the design has to be created using worst-case scenarios, which sacrifices performance and yield.

Requirements for True DFM/DFY

The central point that has been largely missed by conventional DFM/DFY approaches is that, in both of these terms, the “D” stands for design. That is, DFM/DFY implies analysis, prevention, correction and verification during the design phase; it does not imply post-GDSII fixes like OPC.

Ideally, manufacturability and yield considerations should be brought forward all the way into the synthesis stage in the design flow. Conventional synthesis engines perform their selections and optimizations based on the timing, area and power characteristics of the various cells in the library coupled with the design constraints provided by the designer. If cell libraries also were characterized in terms of yield, synthesis engines could perform trade-offs in terms of timing, area, power and yield to produce optimum performance with better yield.
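
As a purely illustrative sketch – the cell names, numbers and weighting scheme below are invented for this example and do not represent any particular vendor's cost function – yield-aware cell selection during synthesis might look something like the following Python fragment, in which yield simply becomes another term in the optimization cost:

from dataclasses import dataclass

@dataclass
class CellVariant:
    name: str
    delay_ns: float     # propagation delay
    area_um2: float     # cell area
    power_uw: float     # dynamic power at nominal activity
    yield_score: float  # 0..1; higher means more lithography-friendly

def cost(cell, weights):
    # Weighted cost, lower is better; yield enters as a penalty (1 - yield_score).
    return (weights["timing"] * cell.delay_ns
            + weights["area"] * cell.area_um2
            + weights["power"] * cell.power_uw
            + weights["yield"] * (1.0 - cell.yield_score))

# Hypothetical NAND2 drive-strength variants characterized for yield as well as
# the usual timing, area and power figures.
variants = [
    CellVariant("NAND2_X1", 0.12, 1.1, 0.8, 0.97),
    CellVariant("NAND2_X2", 0.09, 1.6, 1.3, 0.99),
    CellVariant("NAND2_X4", 0.07, 2.8, 2.4, 0.95),
]

weights = {"timing": 10.0, "area": 1.0, "power": 1.0, "yield": 20.0}
best = min(variants, key=lambda c: cost(c, weights))
print("selected cell:", best.name)   # the variant with the best overall trade-off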

There are a number of key technology enablers required for yield-aware synthesis to achieve its goal, including accurate yield models and analysis, concurrent optimization and the use of a unified data model. Each cell should be associated with an accurate yield model in relation to other cells in the library to ensure that the yield estimation during the subsequent design optimization is reliable. Concurrent optimization should be available to facilitate continuous trade-offs between competing design objectives. Meanwhile, a unified data model is essential to ensure the real-time availability of the most up-to-date information required by the concurrent optimization algorithms.
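
The sketch below is only a minimal illustration of the unified-data-model idea; the class and method names are invented and do not represent any particular tool's API. The point is simply that every engine operates on one in-memory model, so a change made by one engine is immediately visible to all the others:

class UnifiedDesignModel:
    """One in-memory design database shared by every engine in the flow."""
    def __init__(self):
        self._cells = {}       # cell name -> properties (placement, timing, ...)
        self._listeners = []   # engines that want to be told about changes

    def register(self, listener):
        self._listeners.append(listener)

    def update_cell(self, name, **props):
        self._cells.setdefault(name, {}).update(props)
        for listener in self._listeners:
            listener.on_change(name, props)   # the change is visible immediately

    def cell(self, name):
        return self._cells.get(name, {})

class TimingEngine:
    """Toy stand-in for an incremental timing analyzer."""
    def __init__(self, model):
        model.register(self)

    def on_change(self, name, props):
        # A real engine would incrementally re-analyze only the affected paths.
        print("timing: re-evaluating paths through", name, "after update of", list(props))

model = UnifiedDesignModel()
TimingEngine(model)
model.update_cell("U42", x=10.5, y=3.0)   # a placement move triggers re-timing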

The printability of cells has to be addressed by a more intelligent, manufacturing- and yield-aware placement engine that has knowledge of the limitations and requirements of the downstream OPC. Embedding a printability (lithographic) analysis capability in the placement engine allows it to recognize patterns that must be avoided and to identify locations where extra space must be reserved for downstream OPC. Because such analysis needs to be performed many times on the fly, its algorithms must be extremely efficient in terms of run time and memory usage, and should employ physics-based models wherever possible.
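
Again purely as a sketch, and with an invented "forbidden pitch" rule standing in for the physics-based lithographic models a real placer would use, an on-the-fly printability screen inside a placement engine might take a form such as:

# An invented "forbidden pitch" range standing in for physics-based litho models.
FORBIDDEN_PITCH_NM = (180, 220)

def printable(x_left_nm, x_right_nm):
    # Return False if two neighboring features land on a pitch that prints poorly.
    pitch = x_right_nm - x_left_nm
    lo, hi = FORBIDDEN_PITCH_NM
    return not (lo <= pitch <= hi)

def legalize(x_left_nm, x_right_nm, site_nm=20):
    # Nudge the right-hand feature outward, one placement site at a time,
    # until the resulting pattern is lithography-friendly, reserving the
    # freed-up space for downstream OPC.
    while not printable(x_left_nm, x_right_nm):
        x_right_nm += site_nm
    return x_right_nm

print(legalize(0, 200))   # 200 nm falls in the forbidden range, so the feature is pushed to 240 nm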

In addition to design-rule-related yield failures, there are two other major failure mechanisms that can occur in the routed portion of a design. One is failure due to defects that land at random locations on the wafer; the other is caused by printability problems, which become more pronounced as feature sizes shrink further below the wavelength of light. As stated earlier, the combination of lithographic-aware placement and routing engines will minimize the need for post-layout OPC and will increase the effectiveness of any such OPC that is required, minimizing its impact on timing and area and enabling correct-by-construction design closure.

Inevitable variations in the manufacturing process and environment also will cause chip performance to vary in what can be described as a distribution. Parametric yield loss occurs when chip performance drifts out of specification. In order to maximize parametric yield, a new statistical design methodology must be employed that includes placing the mean of the distribution in the middle of the specification window – a technique called design centering – and keeping the spread of the distribution within the window – known as design desensitization.
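
The effect of design centering and desensitization can be illustrated with a simple calculation. Assuming a Gaussian performance distribution and a hypothetical normalized specification window of 0.90 to 1.10 (the numbers below are invented for illustration), parametric yield is simply the fraction of the distribution that lands inside the window:

from statistics import NormalDist

def parametric_yield(mean, sigma, spec_lo, spec_hi):
    # Fraction of a Gaussian performance distribution inside the spec window.
    d = NormalDist(mean, sigma)
    return d.cdf(spec_hi) - d.cdf(spec_lo)

spec_lo, spec_hi = 0.90, 1.10   # hypothetical normalized performance window
print(parametric_yield(1.05, 0.05, spec_lo, spec_hi))   # off-center mean: ~84% yield
print(parametric_yield(1.00, 0.05, spec_lo, spec_hi))   # design centering: ~95% yield
print(parametric_yield(1.00, 0.03, spec_lo, spec_hi))   # plus desensitization: ~99.9% yield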

New analysis tools will employ statistical methods using these parametric models and extraction. For example, a statistical static timing analysis (SSTA) can be used to calculate the statistical timing associated with each path and node in the design; these timings are no longer represented by single values, but by distributions determined by the distributions of the variational parameters. Such parametric models and extraction can be used to account for both intra-die and inter-die variations, and any variation in the process or environment can be directly linked to variations in design performance.
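
As a simplified sketch of the SSTA idea – assuming independent Gaussian cell delays with the invented values below, and ignoring the correlation handling and statistical MAX operation at path-merge points that production SSTA must include – the delay of a single path can be propagated as a distribution rather than a single number:

from math import sqrt
from statistics import NormalDist

# Invented (mean, sigma) cell delays in nanoseconds along one timing path.
path = [(0.10, 0.010), (0.08, 0.012), (0.15, 0.020), (0.05, 0.008)]

mean = sum(m for m, _ in path)                 # means add along the path
sigma = sqrt(sum(s * s for _, s in path))      # variances add (independence assumed)
path_delay = NormalDist(mean, sigma)

required_time_ns = 0.45
print("path delay: mean = %.3f ns, sigma = %.3f ns" % (mean, sigma))
print("probability of meeting timing: %.3f" % path_delay.cdf(required_time_ns))
# A corner-based analysis would instead sum the worst-case (mean + 3*sigma)
# delays, predicting 0.53 ns and a timing violation, even though the
# statistical view shows this path almost always meets its 0.45-ns requirement.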

The requirements associated with a DFM/DFY design environment that can address the needs of today’s (and future) ultra-deep submicron technology nodes can be summarized by returning to the original matrix of manufacturing and yield problems (see Figure 5).

A true DFM/DFY solution, which should feature lithographic-aware placement and routing engines coupled with statistical analysis engines, also requires a unified data model that ensures all tools in the flow have real-time access to the same design data. The combination of lithographic-aware placement and routing engines will lessen the need for, and increase the effectiveness of, post-layout OPC, and will lead to the use of simpler design rules. When coupled with analysis tools that can account for timing variations caused by lithographic and statistical effects, a true DFM/DFY environment can meet the design-for-manufacturing and design-for-yield requirements of today's technology nodes as well as those of the future.

Behrooz Zahiri is the Senior Director of Business Development and Marketing, Design Implementation Business Unit, at Magma Design Automation. He has been with Magma since June 2003. The author of numerous IC industry articles, Mr. Zahiri has 14 years of experience in computer and IC design. He holds an MS in Electrical Engineering from Stanford University and a BS in Electrical Engineering and Computer Science from UC Berkeley.

++++++++++++++

Captions:

Figure 1. What you see is not always what you get.

Figure 2. Manufacturing and yield problems fall into four main categories as shown.

Figure 3. Design rules at 65 nm are extremely complicated.

Figure 4. This matrix conveys some of the limitations associated with traditional DFM/DFY approaches.

Figure 5. Here, the basic requirements associated with a true DFM/DFY approach are shown.

© 2005 Magma Design Automation, Inc.