08 January 2008

Multicore Processors

Several years ago IBM released the POWER4, the first mainstream dual-core microprocessor: two PowerPC-architecture cores sharing an on-chip cache and memory interface. Since then (2001), it has become commonplace for microprocessors to carry two or more cores. Most introductory articles I've read mention that multicore processors allow multiprocessing on a single die (the advantage of this is explained below), but single-core versions of most RISC chips already offered parallelism within one processor. Also, it has become commonplace for chips to blend CISC and RISC traits, with most of the architectural innovation definitely RISC-inspired. As a fairly random example, the single-core PowerPC 604 had six parallel execution units and could issue four instructions concurrently (Wikipedia).

Why Is a Single Die Better?

Microprocessors are fabricated on a silicon wafer; during CMOS lithography, each wafer is patterned with scores of chips, which are subsequently cut apart. A reduction in the area of each chip therefore has a substantial impact on the price of fabrication: more dies fit on a wafer, and a smaller die is less likely to straddle a defect. Also, one of the most important gains in semiconductor technology over the last thirty years has been the reduction in transistor size. Not only is it possible to fit millions of transistors onto a chip the size of a thumbnail, the same chip is likely to store millions of bytes of cached data in addition to the registers. As a consequence, one approach for developers has been to package two or more cores on a single die; the interface between them (for parallel processing) can be much more efficient than between two chips in separate packages, and there is the potential for plug compatibility: the new dual-core chip can be plugged directly into an earlier-model motherboard.
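The economics can be sketched with a toy calculation. All the numbers below (wafer cost, defect density, die sizes) are hypothetical, and the Poisson yield formula is a textbook simplification, not any foundry's actual model:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Approximate gross dies per wafer (ignores edge-loss correction)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies expected to be defect-free under a Poisson model."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, defects_per_mm2):
    gross = dies_per_wafer(wafer_diameter_mm, die_area_mm2)
    good = gross * poisson_yield(die_area_mm2, defects_per_mm2)
    return wafer_cost / good

# Hypothetical 300 mm wafer costing $5000, 0.002 defects per mm^2:
small = cost_per_good_die(5000, 300, 100, 0.002)   # 100 mm^2 die
large = cost_per_good_die(5000, 300, 200, 0.002)   # 200 mm^2 die
```

Under these assumed numbers, doubling the die area more than doubles the cost per good die, since the larger die both halves the count per wafer and suffers worse yield.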


On the other hand, once a developer has perfected a processor mask set, it is a fairly minor enhancement to incorporate two, four, or even sixteen cores on a single die. Furthermore, multithreading within a single core can slow a workload down compared with a single thread enjoying a monopoly on memory, ports, sockets, message queues, and other shared resources. Moreover, engineers have begun to design future cores around the assumption that several will share a die. For example, since the 1980s the tendency was to make microprocessors universal and unitary. Universal, in the sense that the processor would incorporate nearly all, or potentially all, of the functions of the computer itself (short of input, output, and power supply). Unitary, in the sense that the processor had a completely integrated system of program execution.
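The contention problem can be sketched in a few lines of Python; this is an illustration of the general point about shared resources, not anything specific to processor design. Four threads share one counter, so every increment must queue on the same lock:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # every thread queues on this one lock
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The result is correct, but the lock forces the increments to run one
# at a time: the four threads bought no speedup over a single thread.
```

The lock is what keeps the final count correct, and it is also what serializes the work; that tension is exactly the overhead the paragraph above describes.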

Now the evolutionary direction has reversed. An ancestral concept was the "bit-slice" architecture, in which the ALU was spread across separate chips of 2-4 bits each: a 32-bit system might use eight 4-bit ALU chips, each executing the instruction set on its own 4 bits of the word, with carries chained from slice to slice. It's not difficult to imagine chip designers opting for a revitalized bit-slice architecture, rather than ever more parallelism, since parallel processing yields diminishing marginal returns: the serial portion of a program limits the speedup any number of cores can deliver.
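The slice-and-carry arrangement can be sketched in Python. This is a toy model of the idea, not a simulation of any particular bit-slice chip (such as AMD's 4-bit Am2901):

```python
def slice_add(a4, b4, carry_in):
    """One 4-bit ALU slice: add two nibbles, return (sum_nibble, carry_out)."""
    total = a4 + b4 + carry_in
    return total & 0xF, total >> 4

def bitslice_add32(a, b):
    """A 32-bit add built from eight 4-bit slices chained by their carries."""
    result, carry = 0, 0
    for i in range(8):                  # eight slices, low nibble first
        nibble, carry = slice_add((a >> 4 * i) & 0xF, (b >> 4 * i) & 0xF, carry)
        result |= nibble << (4 * i)
    return result & 0xFFFFFFFF

bitslice_add32(0xFFFFFFFF, 1)   # wraps around to 0, like real 32-bit hardware
```

Each slice sees only its own 4 bits plus an incoming carry; the word-width of the whole machine is just a matter of how many slices are chained together.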

*In the Apple Macintosh world, the contemporary Motorola 68040 likewise absorbed an entire floating-point unit (FPU) onto the chip.

AMD website, "AMD and 90nm Manufacturing: Paving the Way for Tomorrow, Today" (2008)

Elmer Epistola, "Semiconductor Manufacturing," SiliconFarEast (via Wikipedia; date unknown)

Tom R. Halfhill, "The Future of Multicore Processors" (31 Dec 2007)


