Intel and the FTC: Nvidia's Last-Gasp Relevance Opportunity?
Two weeks ago, as you already know if you’ve read my co-worker Suzanne’s excellent and timely report, the U.S. FTC reached a tentative settlement with Intel regarding various antitrust-related accusations against the company. Intel admitted no guilt and paid no fines but, among other things, agreed that going forward it would not offer customers favorable pricing in exchange for limiting or eliminating their business with Intel’s competitors, and would not retaliate against customers who give business to those competitors.
What I’d like to focus today’s writeup on, however, are the more technical aspects of the agreement that the FTC and Intel crafted. Specifically, I’m intrigued by the opening they may provide long-suffering Nvidia to resurrect itself. Before proceeding, I’ll remind you that I’m not a lawyer (nor do I play one on TV), and I’m not privy to any particular inside stories regarding any of the companies I’m about to discuss. So take what I’m about to say as nothing more than the musings of someone who’s been around the tech industry for several decades and who’s seen many successful-and-not strategies in that time.
As regular readers already know, I’ve long been fundamentally skeptical of Nvidia’s long-term business outlook. While competitor ATI Technologies successfully convinced suitor AMD to acquire it, Nvidia stubbornly remained a standalone entity. And Nvidia has also remained largely focused on the graphics business (while attempting to broaden GPUs’ relevance to encompass general-purpose co-processor functions), along with a more recent flirtation-via-acquisition with integrated ARM-based processors for mobile applications.
Yet the tech business’s foundations have shifted underneath Nvidia’s feet, to the company’s detriment, and it hasn’t executed particularly well on its aspirations, either. Integrated graphics cores within core-logic chipsets are increasingly able to adequately tackle the majority of computer users’ needs: in part because silicon capabilities have evolved faster than traditional 3-D software has evolved to harness them, in part because pixel- and polygon-performance-demanding applications haven’t expanded beyond the gaming niche, and in part because Nvidia has been largely unsuccessful in ‘mainstreaming’ its general-purpose GPU aspirations, with the possible exception of video encoding (and other processing) acceleration applications. But then again, as my recent cover story pointed out, few consumers do any editing of their captured video clips prior to uploading them to YouTube or otherwise archiving them.
Historically, the discrete-GPU decline might not have been such a problem, since Nvidia also had a once-vibrant nForce integrated-graphics chipset line. Yet AMD’s acquisition of ATI has largely shut Nvidia out of AMD processor-purchasing accounts. Nvidia’s increasingly antagonistic relationship with Intel hasn’t helped its chances with Intel’s customers, either, and its more recent inability to secure licenses for the QPI (QuickPath Interconnect) and DMI (Direct Media Interface) buses has prevented it from selling chipsets for the latest-generation Core i3/i5/i7 (Nehalem) and Pine Trail (Atom) CPU families.
With respect to the dwindling standalone graphics processor market, Nvidia is similarly absent from AMD accounts, per the above-mentioned ATI-preference factor, and tensions with Intel don’t help its chances with Intel-favoring CPU accounts here, either. Nvidia was also notably late to market with DirectX 11 API-supportive GPUs in comparison to ATI/AMD; in fact, the latter company had fully rolled out its entire first-generation DX11-cognizant GPU family before Nvidia got its first DX11-aware product out the door. And AMD’s second-generation DX11 family is reportedly coming soon.
Nvidia’s belated DX11 product portfolio is derived from the long-hyped Fermi core. Nvidia has never publicly confirmed what many in the industry have long suspected: that Fermi was never originally intended to be a graphics chip. Instead, it was architected for the high-performance computing applications that the GPU-based Tesla line has long targeted. As past writeups of mine have pointed out, prior Tesla devices contained unused video outputs, per their GeForce and Quadro heritage. Conversely, Fermi as originally designed reportedly had no display-interface circuitry (thereby requiring a separate NVIO chip); it instead contained circuits (and proportions of circuits) that were optimal for GPGPU functions but overkill for graphics.
The end results of this misstep are easy to see for anyone who’s followed the Fermi-based chips’ sequential rollouts. They’re power-hungry. They’re expensive. They offer little to no performance benefit versus ATI/AMD competitors at equivalent or lower prices. And their availability, likely a reflection of low yields in spite of feature retractions, is poor, even though they’re manufactured at the same foundry and on the same process lithography generation as the competition. Some of this multi-faceted predicament is due to Fermi’s added GPGPU-tailored circuitry versus the graphics-optimized ATI/AMD counterparts, which inflates Nvidia chips’ comparative die sizes. And some of it, as I’ve written before, results from Nvidia’s continued reliance on a monolithic-die design for its upper-tier products, versus leveraging a high-speed chip-to-chip interconnect and migrating to a multi-GPU approach at the high end (with each smaller-die GPU exponentially higher-yielding than its Fermi-based alternative), as ATI/AMD has done in its past few product-family generations.
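To see why the smaller-die approach pays off, consider the classic Poisson yield model, under which the probability of a defect-free die falls exponentially with die area. The back-of-envelope Python sketch below uses purely hypothetical numbers (a 0.5 defects-per-square-centimeter process, and a 500 square-millimeter monolithic die versus paired 250 square-millimeter dies); the specific figures are illustrative assumptions of mine, not Nvidia’s or its foundry’s actual data, but the exponential relationship is the point:

import math

# Classic Poisson yield model: the fraction of defect-free dies falls
# exponentially with die area. All numbers below are hypothetical.
def poisson_yield(defects_per_cm2, die_area_mm2):
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D = 0.5            # hypothetical defect density, defects per cm^2
big_die = 500.0    # hypothetical monolithic high-end GPU, mm^2
small_die = 250.0  # hypothetical half-size GPU used in pairs, mm^2

y_big = poisson_yield(D, big_die)      # roughly 8 percent
y_small = poisson_yield(D, small_die)  # roughly 29 percent

# A defect on the monolithic die scraps all 500 mm^2, whereas good
# half-size dies can be paired freely across wafers, so the
# usable-silicon fraction tracks the per-die yield.
print("Monolithic die yield: %.1f%%" % (100 * y_big))
print("Half-size die yield:  %.1f%%" % (100 * y_small))

Under these assumed numbers, the two-die approach more than triples the fraction of wafer silicon that ends up in sellable product, which is (in simplified form) the economics underlying ATI/AMD’s multi-GPU high-end strategy.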
Nvidia just lost the discrete graphics shipment lead to AMD. And it also seems to have just lost Apple’s business, at least in the short term. Even these and other setbacks, which clearly manifested in the company’s most recent quarterly fiscal report, wouldn’t have been so crippling had Nvidia not stuck to such a one-track graphics strategy. Granted, as I earlier mentioned, there’s the Tegra ARM-based SoC series. But the first-generation Tegra achieved no visible design wins that went into production, save for Microsoft’s Zune HD portable multimedia player and Kin cellular handset, neither of which generated much volume. Tegra 2 was formally unveiled at the 2010 CES amidst abundant claims of tablet (and other) high-volume designs to come, but those predictions haven’t yet translated into reality.
I personally saw a Tegra 2-based tablet running the Google Android O/S and app suite reasonably robustly just a few weeks ago at SIGGRAPH, so I wouldn’t be surprised if the Google-fueled tablet rumors end up being true. But as I previously wrote, the commercial viability of the Google Chrome O/S is yet to be determined. And by not integrating a baseband processor onto the silicon, as competitors such as Qualcomm (with Snapdragon) have done, Nvidia has so far been largely shut out of cellular handset accounts. Nvidia can argue all it wants regarding the claimed technical merit of decoupling the application and baseband processors, thereby freeing the former’s capability advancement from the transistor payload and other shackles of the latter. But so far, the company doesn’t seem to have convinced any potential customers of the wisdom of its ‘swimming upstream against the current’ non-integration vision.
Amidst all this gloom-and-doom talk are several rays of potential sunshine for Nvidia, gift-wrapped and delivered to the company by the FTC as a result of its recent tentative settlement with Intel. Quoting from Suzanne’s writeup, Intel agreed first to:
Maintain the PCI Express Bus interface for at least six years in a way that will not limit the performance of graphics processing chips;
Actually, one place within the proposed agreement literature says that Intel must include a PCI Express bus in its few-years-out CPUs, while elsewhere it says the bus must only be standard PCI. Regardless, this provision enables Nvidia to intimately tether a discrete GPU to Intel’s CPUs, no matter how highly integrated they might become over the six-year horizon, and without needing to obtain licenses for newer Intel bus standards such as QPI and DMI.
No matter how tempted Intel might be to follow only the letter of the law, tossing an ancient PCI 1.0 bus onto its CPU designs, the resultant pin-count explosion would be cost- and real estate-prohibitive. Therefore, expect Intel to include at minimum a single-lane (x1) PCI Express bus. Admittedly, it won’t be performance- or price-optimum, in part because the inability to leverage system memory (which sits on the other, i.e. CPU, side of the PCI Express tether) as a frame buffer will require redundant local memory directly connected to the GPU. But it’ll still toss Nvidia a potential platform-inclusion bone.
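Some quick arithmetic illustrates why that redundant local memory would be unavoidable. A PCI Express 1.x lane delivers 250 Mbytes/sec in each direction (2.5 Gtransfers/sec with 8b/10b encoding; PCIe 2.0 doubles that), while merely scanning out a mainstream desktop display mode demands more bandwidth than the link can supply. The display mode in the Python sketch below is an arbitrary example of my choosing, not a figure from the settlement or any vendor:

# Single-lane PCI Express bandwidth, per the 1.x and 2.0 specs
# (8b/10b encoding: 2.5 GT/s -> 250 MB/s, 5.0 GT/s -> 500 MB/s)
pcie1_x1 = 250e6  # bytes/sec, each direction
pcie2_x1 = 500e6  # bytes/sec, each direction

# Hypothetical but mainstream display mode: 1920x1200, 32-bit color, 60 Hz
scanout = 1920 * 1200 * 4 * 60  # roughly 553 Mbytes/sec

print("Display scanout alone: %.0f MB/s" % (scanout / 1e6))
print("PCIe 1.x x1 link:      %.0f MB/s" % (pcie1_x1 / 1e6))
print("PCIe 2.0 x1 link:      %.0f MB/s" % (pcie2_x1 / 1e6))

Refreshing the display from CPU-attached system memory would saturate the link before any rendering, texture, or GPGPU traffic even entered the picture; hence the GPU’s need for its own directly attached frame buffer memory.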
Continuing the analysis of the proposed FTC settlement, Intel will be required to:
Offer to extend Via’s x86 licensing agreement for five years beyond the current agreement, which expires in 2013
This passage is clearly meant to ensure that the x86 CPU market has at least three viable suppliers going forward…AMD, Intel and Via. However, note that longstanding industry scuttlebutt has suggested that Nvidia is interested in acquiring Via, thereby conceptually putting it on equivalent footing with Intel (with both integrated and discrete graphics capabilities, the latter exemplified by the delayed-but-not-dead Larrabee program), and AMD (which acquired graphics expertise when it bought ATI). Historically, such a potential acquisition was viewed as not beneficial to Nvidia, because Via’s x86 license was not believed to be transferable to an acquiring entity. However, check out what else Intel agreed to do per the pending FTC settlement:
Modify its intellectual property agreements with AMD, Nvidia, and Via so that those companies have more freedom to consider mergers or joint ventures with other companies, without the threat of being sued by Intel for patent infringement
This passage has been (rightly, IMHO) viewed by industry observers as squelching Intel’s attempts to sue AMD for transferring x86 manufacturing rights to GlobalFoundries, of which AMD owns a less-than-majority share. However, it would seemingly also allow Nvidia, for example, to acquire Via, thereby providing a compelling alternative to the x86 design that some believe Nvidia has long been developing internally. Whether Nvidia could turbocharge the perpetually also-ran C7 and Nano processor series into serious PC contention versus AMD and Intel competitors is unclear. But Nvidia desperately needs a beyond-graphics ‘second act’ in order to remain relevant as a semiconductor supplier, and x86 may provide the ticket.