Software Development: When Does It Make Sense To Discard Legacy (And Disregard Leading-Edge) Reinforcement?
Speaking of Apple… the other night, after getting my Power Mac G5 tuned up, I fired up the copy of OpenOffice v3.0.1 that I'd (seemingly successfully) installed a few days before, after first copying the installation file from my MacBook Air. I was surprised to encounter an error message indicating that OpenOffice wouldn't run because my system's processor wasn't supported. Wasn't OpenOffice 3 a Universal binary? Apparently not. And the 'gold' PowerPC-only English-language OpenOffice release for OS X is still the archaic v2.4.0, which runs under the X11 window system rather than enjoying OpenOffice 3's OS X-native status. Granted, I could settle for an obsolete OpenOffice v3 release candidate from mid-January. Or maybe I should brush up on the French I learned in high school, or pick up German, Japanese, Macedonian (??), or Swedish?
Some background: back in early June 2005, when Apple announced that it was planning to migrate its computer product line from PowerPC to Intel CPUs, the company also released a dynamic binary translation technology called Rosetta (based on Transitive's QuickTransit) that would enable users to run most legacy PowerPC-only binaries on the new hardware. Going forward, the Xcode development environment would optionally support the creation of Universal binaries, which bundle both Intel and PowerPC versions of a program in a single file. And here's a nearly four-year-later update: the upcoming OS 10.6 (aka 'Snow Leopard') will reportedly be Intel-only, as suggested by the developer builds currently streaming out of the company under NDA.
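For readers who haven't built one: a Universal binary is simply one file containing a complete compiled image (a 'slice') per architecture, and Apple's GCC defines per-architecture macros so a single source file can behave appropriately in each slice. Here's a minimal sketch in C; the -arch flags, lipo tool, and architecture macros are standard Apple toolchain fare, while the program itself is purely illustrative:

    /* universal.c: report which architecture slice is executing.
       Build a Universal binary with Apple's GCC:
         gcc -arch ppc -arch i386 -o universal universal.c
       Inspect the resulting slices with: lipo -info universal  */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__ppc__) || defined(__ppc64__)
        printf("Running the PowerPC slice\n");
    #elif defined(__i386__) || defined(__x86_64__)
        printf("Running the Intel slice\n");
    #else
        printf("Unknown architecture\n");
    #endif
        return 0;
    }

Run the same file on a Power Mac and on an Intel Mac, and the loader picks the matching slice automatically; that's all 'Universal' means, and it's why a missing PowerPC slice (as in my OpenOffice 3 experience) yields a flat refusal to launch rather than degraded behavior.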
My OpenOffice experience prompted me to research which other current OS X applications weren't Universal, and why. Emulators such as CodeWeavers' CrossOver Mac (and its Wine foundation) are on the list, as are virtualization environments like Parallels, VirtualBox, and VMware Fusion. That the virtualizers are Intel-only isn't at all surprising. And I suppose that an emulator's instruction set compatibility requirement also simplifies the development effort and boosts the performance of the result, especially if it's application-specific and API-centric in its focus (as with Wine). With that said, Microsoft's now-retired Virtual PC for Mac (and Connectix, the company Microsoft acquired to get the emulation technology) shows that a common CPU spanning the emulation chasm isn't an absolute necessity, as does the earlier-mentioned QuickTransit… although, come to think of it, for endian and other reasons, the final-generation Virtual PC 7 was PowerPC G5-only…
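As a concrete reminder of what those 'endian reasons' entail: the same four bytes in memory decode to different integers on x86 (little-endian) and PowerPC (big-endian), so an emulator crossing that divide must byte-swap essentially every guest memory access. A small illustrative C program, runnable on either architecture:

    /* endian.c: why an x86-on-PowerPC emulator must byte-swap.
       The same in-memory bytes decode differently per host. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t value = 0x11223344;
        uint8_t *bytes = (uint8_t *)&value;

        /* Little-endian (x86) stores the low byte first: 0x44.
           Big-endian (PowerPC) stores the high byte first: 0x11. */
        if (bytes[0] == 0x44)
            printf("Little-endian host (x86-style byte order)\n");
        else
            printf("Big-endian host (PowerPC-style byte order)\n");
        return 0;
    }

Multiply that swap overhead across every load and store a guest OS performs, and the appeal of an instruction-set-compatible (or at least endian-assisted) host becomes obvious.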
However, I was very surprised to find out that several of the applications included in Adobe's Creative Suite for OS X are also Intel-only. Check out the scathing critiques by John Gruber and Scott Stevenson. The primary justification I can come up with for this decision involves simplifying support and minimizing the requisite costs. Unlike a longstanding Apple-tailored program like Photoshop, Encore and Soundbooth are relatively new to the Mac, and Premiere recently returned to the Mac after many years away. I can therefore see why Adobe might decide to focus its limited resources on the Intel-only future in these cases. Ironically, however, and speaking of 'future', Adobe's also 'on the outs' with the Mac user community due to its continued lack of 64-bit application support on OS X, even though it's already embraced the 64-bit inevitability on Windows.
To expand on this topic, let's look beyond Macs to other Apple widgets. Longtime developer and popular blogger Erica Sadun recently published an interesting series of articles.
The series documents her inadvertent compilation of one of her iPhone and iPod touch applications only for the latest iteration of the handhelds’ firmware, and her subsequent research into what percentage of the user community her program would therefore be unable to target.
In reading through the data she obtained from several knowledgeable industry representatives, I kept in mind that the iPhone and iPod touch aren't over-the-air updateable. Instead, you need to tether them to a computer running iTunes in order to transfer and execute a flash memory-based firmware upgrade, thereby creating awareness and convenience barriers to timely revision installs. Still, the statistics gave me pause. According to Headlight Software, a more encompassing firmware-version compile could grow the accessible market for your application by 50%. And even more dramatically, Medialets suggests that by focusing your attention only on the latest firmware, v2.2.1, you'd be chopping out 90% of your potential customer base.
Erica presents several thought-provoking counterpoints to a knee-jerk broader-is-better conclusion. By focusing only on a newer firmware release or two, for example, you don't have to struggle with any bugs that might be present in earlier revisions. You're also able to gain access to operating system 'hooks' that didn't initially exist. Indicative of this latter line of reasoning are the more than 1,000 new APIs that Apple announced yesterday it will be providing with the SDK for the upcoming iPhone and iPod touch firmware v3.
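Worth noting for completeness: developers who do want to span OS or firmware versions aren't necessarily forced to forgo the newer hooks entirely. A standard pattern on Apple's platforms is to weak-link against the newer symbols and probe for their presence at runtime. A minimal C sketch of the idea, where HypotheticalNewAPI is a made-up stand-in for any function introduced in a later release:

    /* weakcheck.c: the weak-linking pattern for spanning OS versions.
       HypotheticalNewAPI is purely hypothetical; real Apple headers
       decorate such later-arriving symbols to the same effect. */
    #include <stdio.h>

    extern void HypotheticalNewAPI(void) __attribute__((weak_import));

    int main(void)
    {
        if (HypotheticalNewAPI != NULL) {
            /* Symbol resolved: we're on a release that provides it. */
            HypotheticalNewAPI();
        } else {
            /* Older release: fall back to the long-supported path. */
            printf("New API unavailable; using legacy behavior\n");
        }
        return 0;
    }

The tradeoff, of course, is that every such runtime branch is one more code path to test, which circles right back to Erica's point about the support cost of breadth.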
Arguably, this narrower-versus-broader version span is at the core of both the asset and the challenge that Microsoft and its development partners face versus Apple. The most dramatic hardware-support constriction that I can recall Microsoft ever making with Windows was when Windows 3.1 dropped real-mode support and therefore required at minimum a 286-class CPU. That decision aside, the operating system still runs (to one degree or another) a vast time- and version-span of applications and interacts with an enormous diversity of hardware, attributes which are both a notable strength and a substantial development and support burden.
The prevalence of 'blue screen' blame misdirected at Microsoft, when a third-party driver is actually at fault, is indicative of this strategy's downside. Conversely, Apple has consciously removed support for legacy hardware with each operating system and internally developed application suite up-rev. I'm blocked from installing 'Leopard' OS 10.5 on my 'Quicksilver' Power Mac G4, for example, unless I employ a risky hack workaround, because the computer doesn't have an 867 MHz or faster CPU (even though it includes two 800 MHz PowerPC G4s). Practically speaking, I can't even install OS 10.4 on it unless I'm willing to dispense with support for its Adaptec SCSI controller. Mozilla has already dropped OS 10.3 support in Firefox v3. And looking forward, as I've already mentioned, the likely accurate rumors suggest that none of my PowerPC-based gear will be capable of an OS 10.6 upgrade.
The narrower-versus-broader tradeoff isn't, of course, restricted solely to the hardware I've already mentioned. It's a concern that I suspect many of you have faced at some point in your professional careers. I'm thinking, for example, of those of you versed in the Windows Embedded versions, both those based on Windows NT and Windows CE cores, and targeting both native and .NET-intermediary compilations. Java developers (speaking of virtual machines) have similar targeting struggles. And given the diversity of Linux distros and kernel iterations, that OS's advocates also have to make conceptually comparable tough decisions. So I'll wrap up this particular writeup with a multi-part question for my readers:
What version focus decisions (both forwards- and backwards-looking) have you had to make, what was your ultimate determination, and what factor(s) swayed you to go in one particular direction?