
Keywords: Methods-Tools, ESL, EDA

@head:Myths Yield to Reality in Study of ESL Methodologies

@deck:Part II: In Search of an ESL Methodology

@text:Between December 2004 and June 2005, Daya Nadamuni and I went in search of an electronic-system-level (ESL) methodology. Along the way, we disproved a few of the current ESL myths. The first myth was that the U.S. doesn't do ESL design. In our interviews, we found that ESL is alive and well in the U.S. We also discovered that SystemC is the language of choice. When we discussed today's commercial tools, the general input was that they were fine for algorithmic design, yet they didn't address the majority of the design work going on in the U.S. The importance of design methodologies then became obvious.
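
For readers who haven't worked with it, the following is a minimal SystemC sketch of the kind of behavioral C++ model we're talking about. It is purely illustrative; the Adder module, its ports, and its signal names are invented for this column rather than drawn from any interviewed company.

    // A minimal behavioral SystemC module (illustrative only).
    #include <systemc.h>
    #include <iostream>

    SC_MODULE(Adder) {
        sc_in<int>  a, b;       // input ports
        sc_out<int> sum;        // output port

        void compute() { sum.write(a.read() + b.read()); }

        SC_CTOR(Adder) {
            SC_METHOD(compute); // re-evaluate whenever an input changes
            sensitive << a << b;
        }
    };

    int sc_main(int argc, char* argv[]) {
        sc_signal<int> sig_a, sig_b, sig_sum;
        Adder adder("adder");
        adder.a(sig_a);
        adder.b(sig_b);
        adder.sum(sig_sum);

        sig_a.write(2);
        sig_b.write(3);
        sc_start(1, SC_NS);     // run the kernel long enough to settle
        std::cout << "sum = " << sig_sum.read() << std::endl; // prints 5
        return 0;
    }

The value of a model like this isn't the arithmetic; it's that the same C++ description can be simulated and refined long before any RTL exists.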

From there, we started asking architects how they began their system designs. Keep in mind that we're treating this as far more of a black-and-white issue than it actually is. Many (if not most) systems combine the following methodologies to create complete designs. We were trying to identify which design methodology was most needed. One example, and the most complete of the methodologies, is the algorithmic methodology.

EDA vendors have been producing tools for ES-level algorithmic design for nine years now. Those tools specialize in datapath design. For a complete algorithmic solution, however, some control logic must be included. When we asked, the system designers reported that they wanted very solid datapath design coupled with good-enough control-logic design. With this information, we then attacked the problem of defining the ESL methodologies.

Once we asked the right questions, we started getting answers that made sense. The system designers who are mainly addressed by today's ESL vendors start with an algorithm. This approach is very common in consumer-electronics and some datacom design work. The algorithmic methodology has a tool flow that is close to being fully addressed. These tools are popular in Europe and Japan because a lot of consumer design work is done in those regions.

In North America, the situation is quite different. When we asked those system architects how they started their designs, the answers were usually either C/C++ models that weren't algorithmic or state charts. That is where the two other methodologies (processor/memory and control logic) are defined. This answer was often followed by the complaint that the EDA vendors weren't serving their needs. To get their jobs done, the architects were forced to develop internal tools.
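
To make the distinction concrete, here is a hypothetical sketch of the kind of non-algorithmic C/C++ starting model those architects described: a state chart for a simple bus-request controller, coded directly as a state machine. The controller, its states, and its inputs are invented for illustration.

    /* Control-logic sketch (illustrative only): a state chart coded in C. */
    #include <stdio.h>

    typedef enum { IDLE, REQUEST, GRANT, TRANSFER } state_t;

    /* One step of the controller: the next state depends only on the
       current state and the input events, not on any datapath algorithm. */
    static state_t next_state(state_t current, int req, int ack, int done) {
        switch (current) {
            case IDLE:     return req  ? REQUEST : IDLE;
            case REQUEST:  return ack  ? GRANT   : REQUEST;
            case GRANT:    return TRANSFER;
            case TRANSFER: return done ? IDLE    : TRANSFER;
        }
        return IDLE;
    }

    int main(void) {
        state_t s = IDLE;
        /* Walk the controller through one request/transfer cycle. */
        s = next_state(s, 1, 0, 0);      /* IDLE -> REQUEST */
        s = next_state(s, 1, 1, 0);      /* REQUEST -> GRANT */
        s = next_state(s, 1, 1, 0);      /* GRANT -> TRANSFER */
        s = next_state(s, 0, 0, 1);      /* TRANSFER -> IDLE */
        printf("final state = %d\n", s); /* prints 0 (IDLE) */
        return 0;
    }

A model like this captures control behavior directly, which is the part of the design the architects said today's algorithm-centric tools don't serve.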

Another question was also answered. We could talk to one group at a European vendor that was fairly satisfied with today's ESL tools, then talk to another group in the same company and hear nothing but complaints. The satisfied group was doing algorithmic consumer designs, while the dissatisfied group was invariably the automotive design group.

The Methodologies

From our discussions with semiconductor companies, system companies, vendors, and academics, we now see that we’re looking at three ESL methodologies:

  • Algorithmic
  • Processor/memory
  • Control logic

To complicate things further, each of the three methodologies appears to have two sub-methodologies: platform-based design and architectural design (see Table 1). To keep the terminology straight, Gartner Dataquest defines a platform as a frozen architecture. The architectural-design sub-methodology is thus a more freeform, language-based design style, as opposed to the model-based design style of the platform-based-design sub-methodology.

Although both have their place, the anecdotal input that we're starting to get from North American designers is that platform-based design doesn't provide sufficient competitive differentiation. Comments from all regions point out that the lack of ESL models and model standards is another problem plaguing platform-based design. That issue probably explains why we aren't seeing the expected amount of reuse. We've seen 55% to 70% reuse, but it has been relatively flat for four years.

Another part of the problem is memory blocks. We seem to be stuck at an average of a little more than 30% of the die being memory. We had thought that number would be closer to 50% by now. Unfortunately, the power issue has kept down the use of large on-chip memory. This trend appears to be similar to the initial years of the register-transfer-level (RTL) methodology, when we tended to code and synthesize 40% to 60% of the design and then do the rest using gate-level design. As a result, we aren't seeing many very large designs. Once you start designing in straight RTL (that is, with no reuse of outside designware), you need 25 engineers per million gates. With a reuse methodology, it takes only five engineers per million gates.
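
To put those figures in perspective with a hypothetical example: at those rates, a ten-million-gate design would need roughly 250 engineers without reuse, but only about 50 with a reuse methodology in place.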

Power and Programmability

Two major "red bricks" (that is, the problems or challenges that must be resolved) stand in the design roadmap, in the context of the overall ITRS roadmap: power and programmability. It's common to look at power in the context of a battery-operated consumer appliance. Actually, the main power problems are reliability and performance. We're literally burning holes into some poorly designed integrated circuits (ICs). The ES level is by far the best place to attack the power problem. At the behavioral level, we can develop power-efficient algorithms, state charts, or processor/memory architectures. The design can then be partitioned into the most power-efficient hardware/software configuration.

The other issue is programmability. Once we start putting multiple processors on a system-on-a-chip (SoC), we need to figure out how to program them. Programming is primarily a sequential process. Although a programmer can program using threads, the vast majority of programmers are incapable of doing a complex design using more than four threads. Some specific applications take advantage of a massively parallel multiprocessor environment, but not too many of them exist. Let's face it: Supercomputers are almost always application-specific machines. Past efforts to develop general-purpose multiprocessor systems have run out of steam at four threads.
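
As a concrete, if simplified, illustration of that four-thread style of programming, here is a small C/C++ sketch using POSIX threads; the workload (summing a range of integers) and all names are invented for this column. Partitioning the work, creating the threads, and joining their results by hand is manageable at four threads, but the bookkeeping grows quickly as the thread count rises.

    /* Four-thread sketch (illustrative only); compile with -lpthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4

    static long long partial[NUM_THREADS];   /* one slot per worker */

    static void* worker(void* arg) {
        long t = (long)arg;                  /* this worker's index */
        long long sum = 0;
        for (long i = t; i < 1000000; i += NUM_THREADS)
            sum += i;                        /* independent slice of the work */
        partial[t] = sum;
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        long long total = 0;
        for (long t = 0; t < NUM_THREADS; ++t)
            pthread_create(&threads[t], NULL, worker, (void*)t);
        for (long t = 0; t < NUM_THREADS; ++t) {
            pthread_join(threads[t], NULL);
            total += partial[t];             /* combine the partial sums */
        }
        printf("total = %lld\n", total);     /* prints 499999500000 */
        return 0;
    }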

Very-long-instruction-word (VLIW) processors were an attempt to stretch the four-thread boundary. Unfortunately, few VLIW programmers are available because of the concurrent-programming challenge. For the past 25 years, the software-design community has made various attempts to develop a true concurrent software compiler. All of those attempts failed. The situation is different today. In the past, success would have meant being able to address a new market, which is not a bad goal. Today, the lack of a concurrent software compiler could well stand in the way of growth in the electronics market.

Solving these issues will become the basis of a completely new market segment. This segment will comprise neither EDA nor embedded-software design tools; it's something completely different. The market will be larger than the ESL and embedded-software design-tool markets combined.

Gary Smith is a Chief Analyst at Gartner Dataquest, where he is part of the Design & Engineering group and serves in the Electronic Design Automation Worldwide program. He oversees annual Market Share and Market Forecast reports; develops the Market Trends Report covering the EDA market; and provides user wants-and-needs studies. Smith is a current member of the Design TWG for the International Technology Roadmap for Semiconductors (ITRS).

Daya Nadamuni is a research vice president with the Design & Engineering group in Semiconductors. She leads the research effort for the embedded-software development tools and RTOS program and contributes to research efforts for ESL and EDA.

+++++++++++

Captions:

Table 1: The sub-methodologies of the three major ESL (electronic-system-level) development categories.