http://realworldtech.com/page.cfm?ArticleID=RWT021005084318

ISSCC 2005: The CELL Microprocessor

By:David T. Wang

Back to Basics

The fundamental task of a processor is to manage the flow of data through its computational units. However, in the past two decades, each successive generation of processors for personal computers has devoted more transistors to increasing the performance of spaghetti-like integer code. For example, it is well known that typical integer codes are branchy and that branch mispredict penalties are expensive; in an effort to minimize the impact of branch instructions, transistors were spent on highly accurate branch predictors. Beyond branch predictors, sophisticated cache hierarchies with large tag arrays and predictive cache prefetch units attempt to hide the complexity of data movement from the software and further increase the performance of single threaded applications. The same pursuit can be seen in recent proposals for extraordinarily deeply pipelined processors, which buy single threaded performance at the cost of higher power consumption and larger transistor budgets.

The fundamental idea of the CELL processor project is to reverse this trend: give up the pursuit of single threaded performance in favor of allocating additional hardware resources to parallel computation. That is, minimal resources are devoted to the execution of single threaded workloads, so that multiple DSP-like processing elements can be added to handle highly parallelizable, multimedia-type computations. This shift in focus from single threaded integer performance to multithreaded, easily parallelized multimedia performance is a theme that recurs throughout any examination of the first CELL implementation.

CELL Basics

The CELL processor is a collaboration between IBM, Sony and Toshiba. The consortium expects it to provide computing power an order of magnitude beyond what is currently available from its competitors. The group chose the International Solid-State Circuits Conference (ISSCC) 2005 as the venue to describe the basic hardware architecture of the processor and announce the first incarnation of the CELL processor family.

Members of the CELL processor family share basic building blocks, and depending on the requirements of the application, specific versions of the CELL processor can be quickly configured and manufactured to meet that need. The basic building blocks shared by members of the CELL family of processors are the following:

·  The PowerPC Processing Element (PPE)

·  The Synergistic Processing Element (SPE)

·  The L2 Cache

·  The internal Element Interconnect Bus (EIB)

·  The shared Memory Interface Controller (MIC) and

·  The FlexIO interface

Each SPE is in essence a private system-on-chip (SoC), with the processing unit connected directly to 256 KB of private Load Store (LS) memory. The PPE is a dual threaded (SMT) PowerPC processor connected to the SPEs through the EIB. The PPE and SPEs access system memory through the MIC, which is connected to two independent channels of Rambus XDR memory, providing 25.6 GB/s of memory bandwidth. The connection to I/O is made through the FlexIO interface, also provided by Rambus, delivering 44.8 GB/s of raw outbound bandwidth and 32 GB/s of raw inbound bandwidth, for a total I/O bandwidth of 76.8 GB/s. At ISSCC 2005, IBM announced that the first implementation of the CELL processor has been tested at frequencies above 4 GHz. Each SPE is capable of sustaining 4 FMADD operations per cycle; at an operating frequency of 4 GHz, the 8 SPEs thus give the CELL processor a peak throughput of 256 GFlops. Moreover, the PPE can contribute some additional compute power with its own FP and VMX units.

Processor Overview


Figure 1 - Die photo of CELL processor with block diagram overlay

Figure 1 shows the die photo of the first CELL processor implementation with 8 SPEs. The sample processor tested was able to operate at a frequency of 4 GHz with a Vdd of 1.1 V. The power consumption characteristics of the processor were not disclosed by IBM; however, estimates in the range of 50 to 80 Watts at 4 GHz and 1.1 V were given. One unconfirmed report claims that at the extreme end of the frequency/voltage/power spectrum, one sample CELL processor was observed to operate at 5.6 GHz with 1.4 V Vdd while consuming 180 W of power.

As described previously, the CELL processor with 8 SPEs operating at 4 GHz has a peak throughput of 256 GFlops. To provide the proper balance between processing power and data bandwidth, an enormously capable system interconnect and memory interface is required. For that task, the CELL processor was designed as a Rambus sandwich, with the Redwood Rambus ASIC Cell (RRAC) acting as the system interface on one end of the die and the XDR (formerly Yellowstone) high bandwidth DRAM memory interface on the other. Finally, the CELL processor has 2954 C4 contacts to the 3-2-3 organic package, and the BGA package is 42.5 mm by 42.5 mm in size. The BGA package has 1236 contacts, 506 of which are signal interconnects; the remainder are devoted to power and ground.

Logic Depth, Circuit Design, Die Size and Process Shrink


Figure 2 - Per stage circuit delay depth of 11 FO4 often left only 5~8 FO4 for logic flow

The first incarnation of the CELL processor is implemented in a 90 nm SOI process. IBM claims that while the logic complexity of each pipeline stage is roughly comparable to other processors with a per stage logic depth of 20 FO4, aggressive circuit design, efficient layout and logic simplification enabled the circuit designers of the CELL processor to reduce the per stage circuit delay to 11 FO4 throughout the entire design. The design methodology deployed for the CELL processor project provides an interesting contrast to that of other IBM processor projects, in that the first incarnation of the CELL processor makes use of a fully custom design. Moreover, the full custom design includes the use of dynamic logic circuits in critical data paths. In the first implementation of the CELL processor, dynamic logic was deployed both for area minimization and for performance, to reach the aggressive goal of 11 FO4 circuit delay per stage. Figure 2 shows that with a circuit delay depth of 11 FO4, oftentimes only 5~8 FO4 are left for inter-latch logic flow.

The use of dynamic logic raises an interesting issue, in that dynamic circuits rely on the ability of logic transistors to retain charge on a capacitive node as temporary storage. The decreasing capacitance and increasing leakage of each successive process generation mean that dynamic logic design becomes more challenging with each shrink, and dynamic circuits are reportedly even more challenging on SOI based process technologies. However, IBM's circuit design engineers believe that the use of dynamic logic will not hinder the scalability of the CELL processor down to 65 nm and below. The argument put forth is that since the CELL processor is a fully custom design, process porting with dynamic circuits is no more and no less challenging than process porting without them. That is, since a fully custom design requires the re-examination and re-optimization of transistor and circuit characteristics for each process generation, if a given set of dynamic logic circuits becomes impractical for specific functions at a given process node, that set of circuits can be replaced with static circuits as needed.

The process portability of the CELL processor design is an interesting topic because the prototype CELL processor is a large device, occupying 221 mm2 of silicon on the 90 nm process. By comparison, the IBM PPC970FX has a die size of 62 mm2 on the same process. The natural question is whether Sony will reduce the number of SPEs to 4 for the version of the CELL processor in the next generation PlayStation, or keep all 8 SPEs and wait for the 65 nm process before ramping up production of the console. Although no announcements or hints have been given, IBM's confidence regarding the process portability of the CELL design bodes well for the 8 SPE path, since process shrinks can be relied on to bring down the cost of the CELL processor at the 65 nm node and further at the 45 nm node.

Floating Point Capability

As described previously, the prototype CELL processor's claim to fame is its ability to sustain a high throughput of floating point operations. The peak rating of 256 GFlops for the prototype CELL processor is unmatched by any other device announced to date. However, the SPEs are designed for speed rather than accuracy, and the 8 floating point operations per cycle are single precision (SP) operations. Moreover, these SP operations are not fully IEEE754 compliant in terms of rounding modes; in particular, the SP FPU in the SPE only rounds to zero. In this manner, the CELL processor reveals its roots in Sony's Emotion Engine, whose single precision FPU likewise eschewed rounding mode trivialities for speed. Unlike the Emotion Engine, however, the SPE contains a double precision (DP) unit, which IBM states is fully IEEE854 compliant. This improvement represents a significant capability, as it allows the SPE to handle applications that require DP arithmetic, something the Emotion Engine could not do.

Naturally, nothing comes for free, and the cost of computation using the DP FPU is performance. Since multiple iterations through the same FPU resources are needed for each DP computation, peak DP FP throughput is substantially lower than peak SP FP throughput. The estimate given by IBM at ISSCC 2005 was that DP FP computation in the SPE suffers an approximately 10:1 throughput disadvantage compared to SP FP computation. Given this estimate, the peak DP FP throughput of an 8 SPE CELL processor is approximately 25~30 GFlops when the DP FP capability of the PPE is also taken into consideration. In comparison, Earth Simulator, the machine that previously held the title of world's fastest supercomputer, uses a variant of NEC's SX-5 CPU (0.15 um, 500 MHz) and achieves a rating of 8 GFlops per CPU. Clearly, the CELL processor contains enough compute power to present itself as a serious competitor not only in the multimedia-entertainment industry, but also in the scientific community that covets DP FP performance. That is, if the non-trivial challenges presented by the programming model of the CELL processor can be overcome, the CELL processor may be a serious competitor in applications that its predecessor, the Emotion Engine, could not cover.

SPE Overview


Figure 3 - SPE die photo with functional unit overlay

Figure 3 shows the die photo of the Synergistic (or just plain SIMD) Processing Element (SPE). The SPE is a specialized processing element dedicated to the computation of SIMD-type data streams. The SPE has 256 KB of private memory, referred to as the Load Store (LS) unit, implemented as four separate arrays of 64 KB each. The LS is a private, non-coherent address space that is separate from the system address space, and is implemented using ECC protected arrays of single ported SRAM. The LS has been optimized for high bandwidth and small cell size: the cell size is 0.99 µm2 on the 90 nm SOI process, and access latency to the LS is 6 cycles.

SPE Architecture

To minimize the use of non-computational hardware, the SPE has no hardware for data prefetch or branch prediction; these tasks are instead relegated to software. The SPE implements a subset of the VMX instruction set, and all instructions are 32 bits in length. SPE instructions operate on a unified register file with 128 registers. The registers are 128 bits wide, and most instructions treat the 128 bit operands as four separate 32 bit operands. Due to the 18 cycle branch misprediction penalty and the lack of a branch predictor, tremendous effort must be devoted to avoiding branches. The large register file is thus a necessary element in eliminating unnecessary branches via loop unrolling.


Figure 4 - SPE Organization

The SPE is an in-order processor that can issue two instructions per cycle to seven execution units across two pipelines. Typically, each instruction uses 3 source operands to produce 1 result. Operands are fetched from either the register file or the forwarding network; due to the in-order nature of the pipeline and the strict issue rules, the processor relies on the forwarding network to minimize execution bubbles. To support the dual issue pipelines, each of which can source 3 operands and produce one result per cycle, the register file has 6 read ports and 2 write ports. Register file access takes 2 cycles.

Load Store Unit

The Load Store unit is a privately addressed, non-coherent address space for the SPE. Data is moved in and out of the LS in 128 byte lines by the DMA engine. Because the LS must simultaneously support DMA transfers into the SPE, DMA transfers out of the SPE, and local accesses by the execution units, IBM expects the LS to see utilization as high as 80~90% when the SPE is running optimally. As a result, the DMA engine must schedule data transfers to avoid contention on the system bus and the LS. While software controlled data movement and the lack of a cache increase the difficulty of programming the SPE, the explicit software management also means the SPE is well suited to real time applications.


Figure 5 - Software scheduled threads overlapping computation and data streaming