Intel Larrabee architecture details revealed

With a new chip, hardware maker Intel is hoping to take graphics processing back to the x86 instruction set while still offering DirectX and OpenGL support. The chip will be offered both as a discrete chip on motherboards and as a standalone processor competing directly with the GeForce and Radeon product lines. Dubbed "Larrabee", it was unveiled at this year's SIGGRAPH conference and sports a variety of new technical features, including a fully coherent memory subsystem that allows for more efficient multi-chip implementations. Should its x86-based rendering capabilities prove viable to developers, Larrabee could serve as a cost-efficient alternative to expensive PC video cards, such as those produced by AMD and Nvidia. Among the technical details disclosed:
- A basic x86-compatible processing core derived from the original Pentium, heavily modified for Larrabee to include a 16-ALU-wide vector unit. Each core has L1 instruction and data caches plus a 256KB L2 cache, all fully coherent.
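To get a feel for what a 16-lane vector unit buys you, here is a minimal sketch of the kind of operation it performs in one instruction: a multiply-add applied across 16 single-precision floats at once. The function name and loop form are illustrative; on Larrabee this would map to a single wide vector instruction rather than 16 scalar iterations.

```c
#include <assert.h>

#define VLEN 16  /* Larrabee's vector unit is 16 single-precision lanes wide */

/* One "vector op": out[i] = a[i] * b[i] + c[i] across all 16 lanes.
   Written as a plain loop a vectorizing compiler could collapse into
   one wide instruction; vfmadd16 is an invented name for illustration. */
static void vfmadd16(const float *a, const float *b, const float *c,
                     float *out)
{
    for (int i = 0; i < VLEN; i++)
        out[i] = a[i] * b[i] + c[i];
}
```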

- A 1024-bit ring bus (512 bits in each direction) for inter-core communication. This bus will carry data between the cores and other major units on the chip, including data being shared between the cores' individual L2 cache partitions and the cache coherency traffic needed to keep track of it all.

- Very little fixed-function logic. Most stages of the traditional graphics pipeline will run as programs on Larrabee's processing cores, including primitive setup, rasterization, and back-end frame buffer blending. The major exception here is texture sampling, where Intel has chosen to use custom logic for texture decompression and filtering. Intel expects this approach to yield efficiency benefits by allowing for very fine-grained load balancing; each stage of the graphics pipeline will occupy the shaders only as long as necessary, and no custom hardware will sit idle while other stages are processed.
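Rasterization running in software means it looks like ordinary code rather than dedicated silicon. The sketch below shows a classic half-space rasterizer for one triangle; everything about it (the tiny 8x8 buffer, the coverage-only "shading", the function names) is an illustrative toy, not Intel's actual implementation, but it conveys the kind of stage Larrabee would execute on its cores.

```c
#include <assert.h>

#define W 8
#define H 8

/* Edge function: >= 0 when (px,py) lies on the inner side of the edge
   (ax,ay)->(bx,by) for a consistently wound triangle. */
static int edge(int ax, int ay, int bx, int by, int px, int py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* Mark every pixel inside the triangle. A real rasterizer would also
   interpolate attributes and invoke a pixel shader per covered pixel. */
static void raster_tri(unsigned char fb[H][W],
                       int x0, int y0, int x1, int y1, int x2, int y2)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                edge(x1, y1, x2, y2, x, y) >= 0 &&
                edge(x2, y2, x0, y0, x, y) >= 0)
                fb[y][x] = 1;  /* pixel covered */
}
```

Because the stage is just code, it can be load-balanced across cores like any other work, which is exactly the flexibility Intel is counting on.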

- DirectX and OpenGL support via tile-based deferred rendering. With Larrabee's inherent programmability, support for traditional graphics APIs will run as software layers on Larrabee, and Intel has decided to implement those renderers using a tile-based deferred rendering approach similar to the one last seen on the PC in the Kyro II chip. My sense is that tile-based deferred rendering can be very bandwidth-efficient, but may present compatibility problems at first since it's not commonly used on the PC today. Should be interesting to see how it fares in the wild.
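The core idea of a tile-based deferred renderer is a binning pass: split the screen into tiles, sort each triangle into the bins of the tiles it may touch, then render one tile at a time so the working set stays on-chip. Here is a toy sketch of that binning pass; the tile size, screen dimensions, bin limits, and names are all invented for illustration and say nothing about Intel's actual renderer.

```c
#include <assert.h>

#define TILE     16   /* pixels per tile side (illustrative)      */
#define TILES_X  4    /* a 64x64-pixel toy screen                 */
#define TILES_Y  4
#define MAX_BIN  8    /* max triangles recorded per tile bin      */

typedef struct { int minx, miny, maxx, maxy; } BBox;

static int bin_count[TILES_Y][TILES_X];
static int bins[TILES_Y][TILES_X][MAX_BIN];

/* Record triangle tri_id in every tile its bounding box overlaps.
   A later per-tile pass would rasterize only the triangles in that
   tile's bin, keeping frame buffer traffic local to the tile. */
static void bin_triangle(int tri_id, BBox box)
{
    int tx0 = box.minx / TILE, tx1 = box.maxx / TILE;
    int ty0 = box.miny / TILE, ty1 = box.maxy / TILE;
    for (int ty = ty0; ty <= ty1 && ty < TILES_Y; ty++)
        for (int tx = tx0; tx <= tx1 && tx < TILES_X; tx++)
            if (bin_count[ty][tx] < MAX_BIN)
                bins[ty][tx][bin_count[ty][tx]++] = tri_id;
}
```

The bandwidth efficiency the article mentions comes from this structure: each tile's color and depth data can live in a core's cache for the whole tile, instead of being streamed to and from memory per triangle.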

- A native C/C++ programming mode for all-software renderers. Graphics developers will have the option of bypassing APIs like OpenGL altogether and writing programs for Larrabee using Intel's C/C++-style programming model. They wouldn't get all of the built-in facilities of an existing graphics API, but they'd gain the ability to write their own custom rendering pipelines with whatever features they might wish to include at each and every stage.
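What "writing your own rendering pipeline" could look like in plain C is sketched below: the stages are just function pointers the application wires together, rather than fixed stages dictated by an API. The stage names, the int-based stand-in "vertex" type, and the structure are all hypothetical illustrations, not Intel's programming model.

```c
#include <assert.h>

typedef int vertex_t;                  /* stand-in for real vertex data */
typedef vertex_t (*stage_fn)(vertex_t);

/* A pipeline is nothing more than an ordered list of stages the
   developer chose; any stage can be replaced, reordered, or removed. */
typedef struct {
    stage_fn stages[4];
    int      nstages;
} pipeline_t;

static vertex_t run_pipeline(const pipeline_t *p, vertex_t v)
{
    for (int i = 0; i < p->nstages; i++)
        v = p->stages[i](v);
    return v;
}

/* Two example stages an application might supply. */
static vertex_t transform(vertex_t v) { return v * 2; }
static vertex_t shade(vertex_t v)     { return v + 1; }
```

This is the trade the article describes: none of a graphics API's built-in machinery, but complete freedom over what each stage does.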
