0706EF1-Mentor_Nano.doc
Keywords: nanotechnology, DFM, yield, ROI
@head: Challenges Abound When Implementing DFM Tools in Nanometer Flows
@deck: By focusing on improving lower-level block constraints and integration, it’s possible to ease the cost and design pain of nanometer DFM development.
@text: The semiconductor industry is one of the key drivers of nanotechnology. The feature size and density advancements of integrated-circuit designs have consistently put pressure on manufacturing processes. Nanotechnology implementation hasn’t been painless for either the design or process-development sides of the industry. Designing products that achieve acceptable yields at an acceptable cost has become a difficult goal to reach. As a result, the march toward nanometer designs is forcing designers and manufacturers to consider new tools that bridge the communication gap between design and process engineering. Design-for-manufacturing (DFM) tools are now striving to enable the nanometer era in semiconductors.
The complexity of silicon chip design has risen with the addition of many tools based on DFM methodologies. This increasing complexity has put design flows in a state of confusion. Not enough information exists about how to fit the flood of DFM software into current design priorities. To make matters worse, few of the DFM tools work in a collaborative way. They remain modular in nature for the designer. The cost of implementation is therefore prohibitive in terms of both licensing and resources.
Many DFM tools provide comprehensive design analysis at various points in the design flow. These tools range from yield enhancement based on recommended rules to critical-area analysis (CAA), which hardens designs against random defects. In addition, model-based process-simulation packages increase design robustness against systematic defects. All of these tools provide metrics that designers can use to improve the manufacturability of their layouts. Each tool provides analysis to improve yield and consistency from a particular perspective.
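The random-defect analysis behind CAA tools rests on a well-known Poisson yield model: the critical area of each failure mechanism, multiplied by its defect density, gives the expected number of fatal defects. The following is a minimal sketch of that calculation; the function names and the example areas and densities are illustrative assumptions, not output from any actual CAA tool.

```python
import math

def caa_yield(critical_area_cm2, defect_density_per_cm2):
    """Random-defect-limited yield for one failure mechanism (Poisson model)."""
    fatal_defects = critical_area_cm2 * defect_density_per_cm2
    return math.exp(-fatal_defects)

def combined_yield(mechanisms):
    """Multiply the limited yields of independent mechanisms (shorts, opens, vias)."""
    return math.prod(caa_yield(a, d) for a, d in mechanisms)

# Illustrative numbers: 0.8 cm^2 of short-critical area at 0.05 defects/cm^2,
# plus 0.5 cm^2 of open-critical area at 0.02 defects/cm^2.
y = combined_yield([(0.8, 0.05), (0.5, 0.02)])
```

Reducing a layout's critical area (for example, by spreading wires) lowers the exponent directly, which is why CAA scores translate so readily into yield predictions once the defect densities are calibrated in silicon.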
Cost of Implementation
Two major issues exist for design teams that are attempting to integrate DFM tools into their flows. The first issue is that these tools cost money, time, and resources without the benefit of knowing the return on investment (ROI) that will be achieved. Secondly, many of these tools conflict with each other in terms of how to maximize yield potential for the final product. As a result, it’s nearly impossible to integrate multiple tools in the flow without increasing the number of design loops. If one could manage such a feat, however, the resulting product would be more robust and provide a lower cost of sale.
The design engineer must maintain the priorities that ensure successful chip design. The first priority will always be timing sign-off. Yield-enhancing technology can be implemented as long as there’s enough time or if the project manager has added extra time to the design cycle. Currently, there is a push to force project managers to add this time. This push comes from a fear that manufacturing won’t be able to produce the product efficiently without it. Integrating DFM tools into the design flow is therefore critical, as additional development time is very costly.
To enhance their potential ROI, DFM tools will have to integrate with current design flows without increasing the number of design loops already created by timing closure. Some may infer that this can only be accomplished by implementing these tools at the back end of the design flow (i.e., doing a full-chip analysis once all factors affecting manufacturability are in place). Used this way, the DFM tool becomes expensive, as modifications are difficult to make at that late design stage. Run times become the major issue, so accuracy and usefulness must be sacrificed to maintain feasibility.
Implementation Challenges
With creativity and ingenuity, many of the potential problems relating to DFM can be addressed at the lower-level blocks or even the leaf-cell level. The engineers who can accomplish this will need an understanding of both manufacturing and design flows. Although these engineers are few, they’re growing in number. As more integrated device manufacturers (IDMs) move toward a fabless business model, many high-quality process engineers are moving into the design world. As design engineers increase their knowledge of the processing issues that drive DFM technology, creative ways will emerge to incorporate improvements.
The remainder of the DFM technology will remain at the top level of chip design. Some tools must integrate with current-flow tools, such as the router. Some are benign enough to the timing flow to remain modular. The tools that can be integrated together with other flow components will keep the cost of implementation manageable. By combining with current design-flow tools, the number of design loops will be kept to a minimum.
At the 130-nm node, the concept of metal-density checking for chemical-mechanical-polishing (CMP) uniformity was introduced into the design flow. The tools for dummy fill and window-based density checking were confusing at first, creating fear in the design world. As the need for these checks and their usage became better understood, however, creative ideas were implemented into the flow. The final chip design-rule-check (DRC) signoff then became much easier (see Figure 1).
This flow change was easier to implement, thanks to lower-block density checking with margins built in for contextual effects at full chip. Creating layout constraints with regard to density at lower-level blocks also made routing-level block integration more stable. These creative ideas were based along the same lines as creating routing pitches to assemble standard cell macros. These types of layout constraints can make DFM implementation much less costly.
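The margined lower-block density check described above can be sketched in a few lines. This is a simplified illustration only: the 0/1 grid representation, window size, density window, and margin value are hypothetical stand-ins for what a real rule deck would specify.

```python
def density_windows(grid, win):
    """Yield (row, col, density) for each win x win tile of a 0/1 metal grid.

    Partial tiles at the edges are skipped for brevity.
    """
    rows, cols = len(grid), len(grid[0])
    for r in range(0, rows - win + 1, win):
        for c in range(0, cols - win + 1, win):
            filled = sum(grid[rr][cc]
                         for rr in range(r, r + win)
                         for cc in range(c, c + win))
            yield r, c, filled / (win * win)

def check_block(grid, win, d_min, d_max, margin):
    """Flag windows outside the tightened range [d_min+margin, d_max-margin].

    The margin reserves headroom for contextual effects at full chip,
    so a clean block stays clean after integration.
    """
    return [(r, c, d) for r, c, d in density_windows(grid, win)
            if not (d_min + margin <= d <= d_max - margin)]

# Illustrative 4x4 metal grid checked in 2x2 windows against a
# 20-80% density window with a 5% margin.
grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
violations = check_block(grid, 2, 0.2, 0.8, 0.05)
```

The key design choice is the margin: a block that passes the tightened limits cannot be pushed out of the foundry's density window by its neighbors, which is what makes the full-chip check nearly painless.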
With the complexity of today’s DFM tools, many teams must be involved in using the tools in the design flow. One problem that manifests itself is that each team only understands a part of the issue that the DFM tool is attempting to address. Successfully implementing new DFM technology will involve individuals who understand all factors of semiconductor design, silicon process, and product management. The lead person--the DFM Czar if you will--must have knowledge in each of these areas. The complexity of balancing the DFM-tool analysis is making the implementation of these new technologies difficult.
Implementation Issues
Various problems can arise when implementing DFM. Yield-enhancing tools, such as via doublers and metal-density rebalancing, can break timing closure. CAA tools can push layout-block extents past their allotted floorplan allocation. In addition, lithography-simulation tools can show that the transistor sizes used for timing signoff may not be the actual sizes that will print during the manufacturing process. A layout change to improve the score of one DFM tool may even cause an error in another DFM tool.
The DFM Czar must find ways to integrate the DFM tools so that they’ll collaborate with the timing flow as well as with each other. When multiple DFM tools conflict with each other over layout improvement, the Czar must algorithmically decide how to maximize the ROI for the DFM analysis. The designer must receive clear direction with regard to consideration of the conflicting outputs. To maximize DFM-tool ROI, the number of routing loops to achieve timing signoff must be kept under control.
This type of DFM ROI equation, which balances the results of the various tool technologies, can only be achieved with silicon data relating to yield and electrical predictability. This data is both theoretical in nature and process-technology-specific. Model-based process simulations have mathematical formulas based on physics as well as constants that must be derived for each process line and technology. The yield curves based on DFM methodologies are very difficult to generate with confidence, as they require large amounts of data. Over the next process generation, more DFM tools will find their way into multiple design flows. The information that’s needed will then be acquired. Cost models for DFM should be developed in the near future.
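One plausible form of the DFM ROI equation described above is a weighted sum of normalized tool scores, with the weights derived from silicon yield and electrical-predictability data. The sketch below is a hypothetical illustration; the tool names, scores, and weights are invented for the example and are not calibrated values.

```python
def dfm_roi(scores, weights):
    """Weighted sum of normalized DFM-tool scores (all values assumed 0..1).

    The weights encode each tool's silicon-calibrated yield sensitivity,
    letting conflicting tool outputs be traded off as a single number.
    """
    assert set(scores) == set(weights), "every tool needs a weight"
    return sum(scores[t] * weights[t] for t in scores)

# Two candidate layouts scored by three hypothetical tools.
layout_a = {"caa": 0.9, "litho": 0.6, "fill": 0.8}
layout_b = {"caa": 0.7, "litho": 0.9, "fill": 0.7}
weights  = {"caa": 0.5, "litho": 0.3, "fill": 0.2}

best = max(("A", layout_a), ("B", layout_b),
           key=lambda kv: dfm_roi(kv[1], weights))
```

The hard part, as noted above, is not the arithmetic but deriving the weights: they require yield curves that are process-technology-specific and expensive to generate with confidence.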
With Respect to Timing Flow
Several DFM tools are already known to directly affect timing analysis at the 65-nm technology node. These tools must receive special attention. After all, making timing closure more difficult than it already is will not be accepted without much debate. Among them are tools that simulate lithography and etch effects over the process window, add redundant vias to reduce random defects, and change wire routing to balance metal densities. They also include CMP models to better predict resistance and capacitance values. The older tools include metal-fill generators and via maximizers in power-grid routes.
Lithography and etch simulators can now extract the transistor parametrics (L/W) based on manufacturing parameters. They also can be used to highlight design areas in which the printed image deviates too far from the drawn image. The designer then has the capability to adjust the design so that it will be robust over the process window with regard to predictability. In addition, the advanced designer will have the opportunity to create designs that are context-independent. This is extremely important for design teams to understand--especially at the small-block level. It follows the same methodology as the density learning of 130-nm technologies. The earlier the issues can be stabilized in the design flow, the faster the final stages of design will be complete (see Figure 2).
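A simple way to picture the printed-versus-drawn check is a tolerance band applied across process-window corners: timing signoff remains valid only if every simulated gate length stays close to its drawn value at every corner. The function and the numbers below are a hypothetical sketch, not the interface of any actual lithography simulator.

```python
def gates_within_tolerance(drawn_nm, printed_by_corner_nm, tol_nm):
    """True if every printed gate length is within tol_nm of its drawn value.

    printed_by_corner_nm maps a process-window corner name to a list of
    simulated printed lengths, index-aligned with drawn_nm.
    """
    return all(abs(p - d) <= tol_nm
               for printed in printed_by_corner_nm.values()
               for d, p in zip(drawn_nm, printed))

# Two 65-nm drawn gates, simulated at a nominal and a defocus corner
# (illustrative numbers).
drawn_nm = [65.0, 65.0]
printed_nm = {"nominal": [64.0, 66.0],
              "defocus": [61.0, 67.0]}
ok_loose = gates_within_tolerance(drawn_nm, printed_nm, 5.0)
ok_tight = gates_within_tolerance(drawn_nm, printed_nm, 3.0)
```

A gate that fails only at the defocus corner is exactly the kind of context-sensitive hotspot worth fixing at the cell level, before it is instantiated thousands of times.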
By creating cell and block designs that are context-independent, the current methods of chip-level timing analysis can be maintained. This concept cannot be maintained for 100% of the DFM-tool implementation. But it’s better if more concepts can be accomplished to achieve this goal. Here, ingenuity will separate the advanced flows from the basic. EDA vendors and technology-access teams must work together to drive many of these concepts.
Transistor and via layout for DFM should follow the hierarchy methodologies that are used for timing closure. This way, little will need to be revalidated at the full-chip level. At 65-nm layout, both experiments and library designs are showing that this is possible. More ideas are still coming forward. Like the density requirements that appeared in 130-nm processes, the new DFM tools will find creative implementations to make better, more robust designs without breaking the router/timing-closure flows (see Figure 3).
DFM tools must deliver the metrics required for timing tools, such as the new Silicon Static Timing Analysis technology. They can then increase their value to design teams. Across-chip variation of transistors--separated by transistor type (core voltage, I/O voltage, etc.)--is a must for analysis. Individual transistor sizes will need to be output in formats that allow the Spice models to predict performance parameters. These will need to be adapted when Spice models advance to the point of accepting contour-based inputs.
All DFM tools will need to be summarized in a common database so that the results can be valued as a cumulative score. This collaboration among DFM tools is required to reduce loops through the design cycle. Designers need to know how each DFM score will react without having to run multiple tools again. The EDA vendor that can combine its DFM modules into one summarized output provides the greatest value. The vendor that integrates DFM with the layout tool, making it possible to create a clean layout on the fly, will achieve a better design flow.
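The common database described above can be pictured as a scoreboard: each tool reports a score per cell, and the designer reads one cumulative number instead of re-running every tool. The class below is a minimal sketch under that assumption; the class name, the equal-weight mean, and the example scores are all invented for illustration.

```python
from collections import defaultdict

class DfmScoreboard:
    """Hypothetical common store for per-cell results from several DFM tools."""

    def __init__(self):
        self._scores = defaultdict(dict)   # cell -> {tool: score}

    def report(self, tool, cell, score):
        """Record (or overwrite) one tool's 0..1 score for a cell."""
        self._scores[cell][tool] = score

    def cumulative(self, cell):
        """Cumulative score for a cell: here, the mean of all reported scores."""
        per_tool = self._scores[cell]
        return sum(per_tool.values()) / len(per_tool)

# Illustrative usage: three tools score the same cell once each.
board = DfmScoreboard()
board.report("caa", "nand2", 0.9)
board.report("litho", "nand2", 0.7)
board.report("via", "nand2", 0.8)
score = board.cumulative("nand2")
```

In a real flow, the aggregation would likely be weighted rather than a plain mean, but the structural point stands: one write per tool, one read per designer, and no repeated tool runs just to see how a score moved.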
The nanometer era has brought enormous changes to the IC design flow. Design technology teams need improved understanding of the nature of DFM rules. Advanced methods of implementing these concepts can be obtained while minimizing the cost increase of design. By focusing on improving the lower-level block constraints, one can make the full-chip DFM analysis almost painless. This methodology won’t disrupt current timing and signal-integrity analysis. Integrating modular DFM tools into one common analysis at the full-chip level will reduce the number of added loops in the design cycle.
The success of DFM technology depends on this type of creativity in DFM use models. To state that the design timing flow must be broken to accommodate DFM tools is not enough without knowing the ROI of using this technology. Generating the data required to create yield-improvement estimates is a time-consuming and costly process. This data must be generated throughout the product life cycle. Yet it’s difficult to estimate at the beginning of the product design cycle.
The best approach is to reduce the cost of implementing DFM technology tools, thereby making it easier for design flows to adopt the change. This method also promotes increased understanding between the technology teams working to make a product successful. The synergy of a collaborative design flow, which is well thought-out with minimized impact to designers, will create a winning environment and robust products in the nanometer era.
William (Bill) Graupp received a B.S. in Electrical Engineering from Drexel University in Philadelphia, PA. He has an extensive background in CMOS development and manufacturing and was on the forefront of DFM for Hewlett-Packard and Agilent Technologies. Graupp is a Technical Marketing Engineer for the Calibre Litho-Friendly Design product line at Mentor Graphics.
++++++++++++
Captions:
Figure 1: When shown within the full design context, final results help designers make informed decisions and tradeoffs.
Figure 2: Here is a sensitivity analysis of ring-oscillator performance to gate size.
Figure 3: DFM improvements should begin at the lowest level of hierarchy.