Google Goes After JPEG: Good Luck With That Grandiose Gig
I thought I’d wrap up the week with a follow-up to Wednesday’s writeup. Company announcements in the tech industry rarely surprise me; in fact, companies often ‘telegraph’ their intentions so thoroughly that by the time they finally get around to unveiling their latest masterpiece, my reaction is something along the lines of “it’s about friggin’ time, eh?” But yesterday’s news coming out of Google was genuinely eyebrow-raising…and in this particular case, that’s not a good thing.
On Wednesday, I discussed the pros, cons and future fortune (or not) forecasts for Google’s WebM video codec, which is a relabeling and evolution of the VP8 technology that the company acquired when it bought On2 Technologies a bit over a year ago. Yesterday, we all learned that Google had also applied the lossy compression advancements found in WebM to the still image realm, unveiling the WebP codec (’P’ presumably standing for ‘photo’, which suggests that the ‘M’ in WebM is short for ‘movies’).
So far, Google has released decoder and converter (to-and-from BMP, JPEG, and PNG) code, which you can find here. Google claims that transcoding a random set of 1,000,000 images (mostly JPEG, plus some GIF and PNG) to WebP format resulted in a 39% reduction in file size “without perceptibly compromising visual quality.” You can see some sample results here. Even better, “We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image.”
How’d the company accomplish this bit-slimming feat of magic?
To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We applied the techniques from VP8 video intra frame coding to push the envelope in still image coding. We also adapted a very lightweight container based on RIFF. While this container format contributes a minimal overhead of only 20 bytes per image, it is extensible to allow authors to save meta-data they would like to store.
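That 20-byte overhead figure falls straight out of the RIFF framing: a ‘RIFF’ tag plus a length field, a ‘WEBP’ form type, and a single bitstream chunk header. Here’s a minimal Python sketch of how such a wrapper could be assembled (an illustration of the RIFF layout only, not the official libwebp code; it ignores RIFF’s even-byte chunk padding and uses a placeholder payload rather than a real VP8 bitstream):

```python
import struct

def webp_container(vp8_payload: bytes) -> bytes:
    """Wrap a raw VP8 bitstream in a minimal RIFF/WEBP container.

    Layout (20 bytes of overhead, matching Google's stated figure):
      bytes 0-3    'RIFF'
      bytes 4-7    little-endian size of everything after this field
      bytes 8-11   'WEBP' form type
      bytes 12-15  'VP8 ' chunk tag (note the trailing space)
      bytes 16-19  little-endian size of the payload
    """
    chunk = b"VP8 " + struct.pack("<I", len(vp8_payload)) + vp8_payload
    riff_size = len(b"WEBP") + len(chunk)
    return b"RIFF" + struct.pack("<I", riff_size) + b"WEBP" + chunk

payload = b"\x00" * 10                        # placeholder, not a real VP8 frame
container = webp_container(payload)
assert len(container) - len(payload) == 20    # the 20-byte container overhead
```

The extensibility Google mentions comes for free with RIFF: metadata simply becomes additional tagged chunks appended after the bitstream chunk, which decoders that don’t understand them can skip by their declared length.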
And why did Google bother tackling such a project? It’s all about enabling web pages to load as fast as possible:
Images and photos make up about 65% of the bytes transmitted per web page today. They can significantly slow down a user’s web experience, especially on bandwidth-constrained networks such as a mobile network. Images on the web consist primarily of lossy formats such as JPEG, and to a lesser extent lossless formats such as PNG and GIF. Our team focused on improving compression of the lossy images, which constitute the larger percentage of images on the web today.
So what’s with my skepticism? After all, JPEG is almost 20 years old; the standards committee that created it was first formed almost 25 years ago. Isn’t it time for JPEG to step aside and make room for the WebP heir apparent? Not necessarily. Look back again at Wednesday’s piece: MPEG-2 also dates from the 1990s, yet today it remains dominant from an application implementation standpoint.
MPEG-2 is in ATSC as well as numerous international digital television counterparts, such as DVB and ISDB-T. It’s the sole video format for DVD, as well as one of three sanctioned formats for Blu-ray. HDV and XDCAM camcorders use it, as do PVRs and innumerable embedded and other designs. JPEG is even more wildly popular than MPEG-2, for similar reasons. Bottom line: it’s ‘good enough’, it’s pervasive, it’s cost-effective, and its implementation treadmill therefore rolls smoothly onward and upward.
WebP isn’t the first upstart to attempt to knock JPEG off the photography pedestal. Even the standards committee-sanctioned JPEG 2000 and JPEG XR successors were unsuccessful in tangibly blunting JPEG’s momentum. The former wavelet-based supposed descendant achieved at least a modicum of success, ironically forming the foundation of the video codec employed by the digital cinema industry. And the latter has an interesting history; it started out in 2006 as Microsoft’s Windows Media Photo codec, a still image derivative of Windows Media Video (which as I discussed on Wednesday, eventually became standardized by SMPTE as VC-1). Sound familiar? Microsoft renamed it HD Photo a few months later, and eventually took it to JPEG for industry standardization.
Granted, Google has a higher likelihood of establishing a beachhead of success with WebP as compared to competitor Microsoft’s past effort with Windows Media Photo. Google has already revealed that it is working on integrating support for WebP within its Chrome web browser; I also anticipate that the company will shortly expand support to the Android and Chrome operating systems, as well as its Picasa online photo service and other image-rich Google properties.
But WebP’s decode scratchpad memory and processing horsepower needs are unknown at this point, both absolutely and relative to JPEG. What good is a somewhat faster page load time if, as a tradeoff, a mobile device’s battery drains much faster than before? And more generally, given JPEG’s deeply entrenched state, I’m anticipating that WebP’s adoption will be limited and slow in coming. Apparently Google agrees; Maximum PC’s coverage of the announcement contains a telling quote:
“The challenges are tremendous,” said Google’s Richard Rabbat. “We foresee it’s going to be a very long conversation.”