Embedded x86: GPU Case Studies, Strengths, Shortcomings, And Additional Perspectives
Brian Dipert - June 13, 2008
The enemy of my enemy is my friend (at least temporarily)
Those were the words that went through my mind yesterday morning, when I saw that AMD was partnering with Havok for physics algorithm development on both CPUs and GPUs. Why? As I pointed out Wednesday night, Intel bought Havok last fall. So why did Intel climb in bed with its primary CPU competitor and, once Intel’s Larrabee launches, a primary GPU competitor as well? Simple. Nvidia bought Havok competitor Ageia in early February. So Intel brings AMD on board to form a united front; combine that with seeding the market with free development tools, and you’ve got a decent chance of freezing Nvidia/Ageia out. Smart moves.
By the way, for any of you who dispute that an array of Atom x86 CPU cores can create a credible shader processor array (for graphics and other massively parallel tasks), check out the following ExtremeTech links:
SwiftShader is a DirectX 9 (Shader Model 2)-based software renderer; i.e., rendering runs entirely on the CPU, so no dedicated graphics hardware (or its driver) is required. Yes, the performance is currently underwhelming, but the code is running on a quad-core CPU whose overall complexity caps clock speeds, and whose core-to-core communication scheme is substandard. How will SwiftShader run on an array of 16 or more simple (therefore fast) cores with robust core-to-core links? I daresay we’ll find out soon, and Nvidia’s recent bravado may be a pre-emptive move based on its fears of Larrabee’s robustness in this regard.
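SwiftShader’s internals aren’t public here, and real software renderers JIT-compile shader bytecode to SIMD machine code, but the basic model — a shader evaluated on the CPU once per pixel, work a GPU would spread across its shader array — can be sketched roughly like this (all names are illustrative, not SwiftShader’s API):

```python
# Minimal sketch of a software renderer's shading stage: a "pixel
# shader" written as a plain Python function, run on the CPU for every
# pixel of the framebuffer. A GPU would fan this loop out across its
# shader-processor array; Larrabee-style designs would fan it out
# across many simple x86 cores instead.

WIDTH, HEIGHT = 8, 8

def pixel_shader(x, y):
    """Toy shader: a diagonal color gradient, returned as (r, g, b)."""
    r = int(255 * x / (WIDTH - 1))
    g = int(255 * y / (HEIGHT - 1))
    return (r, g, 128)

def render():
    """Evaluate the shader serially, pixel by pixel, on one CPU core."""
    return [[pixel_shader(x, y) for x in range(WIDTH)]
            for y in range(HEIGHT)]

fb = render()
print(fb[0][0], fb[7][7])  # corner pixels: (0, 0, 128) (255, 255, 128)
```

The per-pixel loop is embarrassingly parallel, which is exactly why adding more (even simple) cores helps a software renderer far more than higher clocks on a few complex ones.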
A few months back, widespread rumors suggested that Larrabee would be a ray tracing-optimized architecture. That’s silly; although Intel regularly talks up ray tracing in various public forums (last fall’s IDF, for example, or this week’s research event), very little graphics-intensive code currently employs the technique, which is extremely accurate but also computationally intensive (the latter fundamentally explaining Intel’s interest). Rasterization dominates today, and will for some time to come. For more on the debate, see the following additional-info links:
- Ray Tracing for Gaming Explored
- Intel Researchers Consider Ray-Tracing for Mobile Devices
- NVIDIA Doubts Ray Tracing Is the Future of Games
- Crytek Bashes Intel’s Ray Tracing Plans
- Larrabee Team Is Focused On Rasterization
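The crux of the debate above is cost: in a ray tracer, every pixel spawns at least one ray, and each ray must be intersection-tested against scene geometry (millions of primitives in a real scene, tamed only by acceleration structures). A minimal sketch of that inner test, the classic ray-sphere intersection, shows the arithmetic each of those millions of rays pays for:

```python
# Why ray tracing is computationally intensive: this quadratic solve is
# the *cheapest* primitive test, and a ray tracer performs it (or a
# costlier triangle test) many times per pixel, per frame.

import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None          # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray fired down the -z axis at a unit sphere centered 5 units away:
hit = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
print(hit)  # 4.0 -- the ray enters the sphere one radius short of its center
```

Rasterization inverts the loop (per triangle, find covered pixels), which maps far more cheaply onto today’s hardware — hence its continued dominance.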
I told you Wednesday night that I’d been collecting links to interesting GPGPU applications beyond those I’ve already mentioned. Here you go!
- GPGPU drastically accelerates anti-virus software
- PyGPU - write software for the GPU in Python
- New Password Recovery Technique Uses CPU and GPU Together (Ars Technica also weighs in, and not surprisingly the technique’s feasible on FPGAs and the Cell processor, too)
- Student Maps Brain to Image Search
- Folding@home GPU2 Beta Released, Examined
- Use video RAM as swap in Linux
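The common thread in the applications above is the data-parallel pattern GPUs reward: one small “kernel” applied independently to every element of a large data set (a password hash test, a virus-signature match, a folding step). Real GPGPU code would be written in CUDA, Brook+, or shader languages, but the shape of the computation can be sketched on the CPU (names here are illustrative only):

```python
# GPGPU in miniature: a per-element kernel mapped over a data set. On a
# GPU this map runs across hundreds of shader cores at once; a thread
# pool stands in for that parallelism here, purely as illustration.

from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    """Per-element work -- stands in for a hash test, filter tap, etc."""
    return x * x + 1

def gpu_style_map(data, workers=4):
    """Apply the kernel to every element; order of results is preserved."""
    with ThreadPoolExecutor(workers) as pool:
        return list(pool.map(kernel, data))

print(gpu_style_map(range(8)))  # [1, 2, 5, 10, 17, 26, 37, 50]
```

Workloads that fit this independent-per-element shape are the ones that see the dramatic GPU speedups reported in the links above; anything with heavy cross-element dependencies fares far worse.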
For more, periodically visit GPGPU.ORG. And on that note, both specifically regarding GPGPU and more generally regarding the future of GPUs, I encourage you to peruse an excellent, recently published write-up series in ACM Queue:
- GPUs: A Closer Look
- Scalable Parallel Programming with CUDA
- Future Graphics Architectures
- A Conversation with Kurt Akeley and Pat Hanrahan
Comments, as always, are welcomed. Happy weekend, all!