Blu-ray and Graphics Processors: Revisiting Recent Posts
My good friend and long-time graphics pundit Peter Glaskowsky sent me an enlightening (and amusing) email sequence after seeing yesterday’s AMD-vs-Nvidia graphics analysis. Peter refreshingly isn’t the type to mince words, and his expertise compels me to take his opinions seriously, so his suggestion that ‘I think you just drank too much of the AMD Kool-Aid here’ caught my eye.
Peter’s particular expertise, I’ve noticed many times in the past (and IMHO, of course), lies in the feature sets of products, and of the subsystems and systems that contain them, befitting his recent role as chief systems architect at Montalvo Systems. That emphasis was reflected in his feedback to me, all of which I agree with and, in fact, already knew. Rather than needlessly rephrasing his words, I’ll repeat them verbatim:
There are huge drawbacks to the two-chips-on-a-card thing, including very inefficient memory usage and wasted shaders. It’s more expensive to get the same level of capability from multiple GPUs, and always has been.
Interactions such as yesterday’s with Peter are always interesting to me; after all, I recently devoted an entire post to the subject. When I published yesterday’s writeup, I didn’t consciously have a particular bias towards either AMD’s multi-GPU or Nvidia’s single-GPU approach; I was serious when I asked:
Which is a lower-cost path to a given performance threshold, assuming that the application is amenable to parallel processing; two 55 nm-fabricated ICs operating in tandem, or one monster 65 nm chip?
But in retrospect, I can see that a bit of predisposition leaked into my perspective as well; the word "monster" gives it away, and it reflects my background as an employee of a high-volume commodity semiconductor supplier. As such, for example, I’m not as concerned as Peter is about inefficient memory usage. Graphics card frame buffers come in granular capacities defined by per-DRAM-chip densities, after all, and that granularity grows increasingly coarse as per-IC densities skyrocket in pace with manufacturing lithography advances (a quick sketch of the effect follows).
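To illustrate the granularity point, here’s a minimal sketch; the 256-bit bus width, x32 device width, and per-chip densities are assumptions chosen purely for illustration, not any particular card’s configuration.

```python
# Frame-buffer capacity grows in steps set by per-chip DRAM density when the
# memory bus width (and thus the chip count) is fixed, and those steps double
# with each density generation.

BUS_WIDTH_BITS = 256   # assumed GPU memory bus width
CHIP_WIDTH_BITS = 32   # assumed x32 GDDR devices

chips_needed = BUS_WIDTH_BITS // CHIP_WIDTH_BITS  # 8 chips to populate the bus

for density_mbit in (512, 1024, 2048):  # successive per-chip densities
    capacity_mbyte = chips_needed * density_mbit // 8
    print(f"{density_mbit} Mbit chips -> minimum frame buffer of {capacity_mbyte} MB")
```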
Those same advanced lithographies are enabling graphics suppliers to cost-effectively squeeze amazing numbers of shader processors onto a single sliver of silicon. As such, I’m not as concerned as Peter seems to be about "wasted shaders"…especially since GPU vendors’ enthusiasm for also tackling physics, audio, and other processing tasks suggests to me that the era in which 3D graphics alone maxes out a GPU’s capabilities is drawing to a close.
What I am concerned about, however, is the exponentially increasing yield-loss impact of a linear increase in silicon die area. That concern explains why my writeup focused on the higher per-die transistor count of Nvidia’s high-end GPUs as compared to their AMD counterparts, along with the less advanced (and therefore less silicon-area-efficient) manufacturing processes Nvidia uses for those high-end GPUs. Granted, mid-range and low-end GPUs often share a common die with their high-end peers; if some of the on-chip shaders or other circuits don’t work as intended, they’re fuse- or otherwise disabled during the IC test flow. Some CPU suppliers employ similar techniques to boost sellable silicon yield, although in all cases the cost burden of the unusable on-chip area leaves profitability in question. And such die-salvage schemes aren’t germane to this particular discussion of high-end, fully-feature-enabled GPUs, anyway.
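To make the exponential-versus-linear point concrete, here’s a minimal sketch using the classic Poisson die-yield approximation, Y = e^(−D0·A). The defect density and die areas are illustrative assumptions, not foundry or product data.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson approximation: fraction of dice with zero killer defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.4  # assumed killer-defect density in defects/cm^2 (illustrative only)

for area_cm2 in (1.0, 2.0, 4.0, 6.0):
    print(f"{area_cm2:.1f} cm^2 die -> {poisson_yield(area_cm2, D0):.1%} yield")

# Doubling the die area squares the per-die survival probability
# (e^(-D0*2A) equals (e^(-D0*A))^2), so yield loss grows far faster
# than linearly with area.
```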
That’s why I don’t automatically buy into, at least at face value, Peter’s contention that "It’s more expensive to get the same level of capability from multiple GPUs, and always has been." The inherent danger of covering any industry for a long time, as both Peter and I have done, is the tendency to keep applying old rules to current situations when the ball game has changed in the interim…sometimes drastically so. To wit, and given that AMD’s Radeon HD 4870 X2 appears to be both performance- and price-competitive with Nvidia’s best in the metrics I’ve reviewed: what’s the yield-driven cost tradeoff of a single large-die Nvidia-based graphics card versus a card built from two smaller-die AMD GPUs, especially when a lone AMD GPU that passes testing can alternatively find a home in a single-GPU card? A rough back-of-the-envelope sketch follows; I welcome your thoughts.
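Here’s a rough extension of the same sketch to the cost question itself. The wafer cost, defect density, and die areas below are hypothetical stand-ins rather than actual Nvidia or AMD figures, so treat the output as an illustration of how the comparison is structured, not as its answer.

```python
import math

WAFER_COST = 5000.0     # assumed processed-wafer cost in dollars (illustrative)
WAFER_AREA_CM2 = 500.0  # rough usable area of a 300 mm wafer, ignoring edge loss
D0 = 0.4                # assumed killer-defect density in defects/cm^2

def cost_per_good_die(area_cm2: float) -> float:
    dice_per_wafer = WAFER_AREA_CM2 / area_cm2  # ignores scribe lines and edge dice
    yield_fraction = math.exp(-D0 * area_cm2)   # Poisson yield model, as above
    return WAFER_COST / (dice_per_wafer * yield_fraction)

BIG_DIE_CM2 = 5.5    # hypothetical monolithic high-end GPU
SMALL_DIE_CM2 = 2.6  # hypothetical GPU used in pairs on a dual-GPU card

print(f"One large die per card:  ${cost_per_good_die(BIG_DIE_CM2):,.0f} of silicon")
print(f"Two small dice per card: ${2 * cost_per_good_die(SMALL_DIE_CM2):,.0f} of silicon")

# A small die also enjoys product-mix flexibility: any tested-good die that
# doesn't end up on a dual-GPU card can still sell in a single-GPU card,
# which the per-card silicon figures above don't capture.
```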
In the same ‘differing perspectives’ spirit, I encourage you to peruse the discussion that ensued in response to my Blu-ray post last week. Focus your attention in particular on the back-and-forth comments between myself and Bill Sheppard, a frequent past blog participant (by name only, with no proffered affiliation) who thankfully this time identified himself as Sun’s representative to the Blu-ray Disc Association. What strikes me in reviewing our discourse is that Sheppard’s focus seems to be on market growth, while my attention is directed at profitable market growth. Sony is in a unique situation with respect to Blu-ray: the company owns many of the technology’s fundamental patents (garnering it a revenue stream from every licensed Blu-ray market participant), it sells profitable game-console content (counterbalancing initial losses on console sales), and several movie studios reside under its corporate umbrella (providing further profitable compensation for loss-making sales of both the PS3 and standalone Blu-ray players).
Other Blu-ray system suppliers are in far less favorable straits. Fiscally challenged, too, are the semiconductor and software suppliers to the Blu-ray ecosystem, along with media manufacturers and other participants. In this context, I couldn’t help but notice that yesterday, Microsoft announced that its Xbox 360 would be the premier (but, I’ll wager, not the only) destination for high-definition titles delivered by Netflix’s Watch Instantly (aka Watch Now) service. Will HD Watch Instantly deliver image quality equivalent to Blu-ray’s? Certainly not, as Sheppard rightly points out. But will it deliver quality that’s good enough for the masses, especially once cost and convenience are factored in? I’d wager it will. And could the Blu-ray ecosystem ensure its ascendance by quickly crashing content and player prices to DVD-equivalent levels (or below, if necessary, to motivate installed-base hardware upgrades)? It certainly could…but it won’t, or at least it shouldn’t, because that would be fiscal suicide.
Sheppard’s right: we’ll know a lot more about Blu-ray’s future next February. What do you predict now about what we’ll see clearly then?