ARM vs incumbent microprocessor architectures
Robert Cravotta, Embedded Insights Inc - November 13, 2012
Parts one, two, and three of this series offer a brief overview of the processor architecture ecosystem, identify and map the processing sweet-spot spectrum of mainstream processing architectures, and cover the issues addressed by low-power, small-data-width processors. This part discusses how incumbent microprocessor architectures can compete with ARM-based processors.
The explosion of ARM-based processors contained in mobile devices has caused some people to ask whether ARM will displace other microprocessor architectures in other markets. The incumbent microprocessor architectures, however, have a secret weapon that is analogous to the 8-bit microcontrollers: domain knowledge that is embedded in the architecture and ecosystem of the incumbent architecture.
Consider that specific variants of a microprocessor architecture will include features—developed, tested, and refined over the years—that make those variants especially well suited to the target application’s specific requirements. Also consider the body of software that serves the given market. A strong incumbent microprocessor architecture, much like the 8-bit microcontrollers, is surrounded by a strong and mature ecosystem of developers, tools, operating systems, and middleware that provides a buffer for the incumbent to respond to a challenger.
A specific example of incumbent advantage can be seen in the current battle to determine which microprocessor architecture will ultimately own the tablet space. The ARM architecture currently has the apparent incumbent advantage because many tablet designs treat the device as a large smartphone, and the ARM architecture has many man-years of design knowledge tied up in the hardware and software to support the smartphone market. If the tablet space stays anchored to the smartphone model, the ARM architecture is well positioned. There are tablet products based on other microprocessors, however, that may define tablets differently. For example, if Microsoft can redefine the tablet market to leverage the ecosystem for its Windows OS, the market could be completely different from today’s tablet market.
By some estimates, vendors have introduced more than 200 processor architectures over the past few decades. Most of those have disappeared or have been absorbed into other architectures. The few dozen architectures that currently provide developers with the tools and means to create today’s applications encompass complex ecosystems of processors and development tools, along with domain-specific engineering and software support. Would the developer market be better served if there were even fewer architectures from which to choose?
The massive churn in the processor market is a testament to the complexity and difficulty of figuring out the correct way to serve that market. The uncertainty is not an artifact of the past but remains very much a part of today’s technology. One indicator of that uncertainty is the lingering question of whether 8-bit is dead.
I have recently learned that some companies are quietly exploring ways to increase raw processing performance significantly by reducing the data word size in certain DSP applications. Part of the challenge is determining an acceptable trade-off between the problems that the short word size might introduce and the benefits of the resultant higher performance at lower power. There are some DSPs available today that support 8×8 MACs (multiply-accumulates) within a larger execution engine. In short, is an 8-bit DSP in our future? You never know from what direction the next best idea will come. If we had fewer processor architectures to choose from, there would be fewer opportunities for crazy ideas like an 8-bit DSP to bubble up.
Many commenters argue that if we had fewer architectural choices, software code would be easier to maintain because a larger base of developers would be able to access, use, and maintain it. Would a common architectural base improve the transferability of existing domain expertise and, more important, the development of new domain expertise?
Based on what I see large semiconductor companies doing, I suspect fewer architectural choices would lead to slower innovation because there would be only enough resources within the development support ecosystem to address the engineering issues of the largest volume applications. That could negatively affect efforts toward discovering emergent applications that would otherwise replace the current large-volume applications.
Like the different races of Middle-earth, each available processor architecture encompasses its own unique domain culture or development ecosystem that enables it to perform some tasks better than the alternatives. Most designs already use multiple processors, and the wide variety of processor offerings lets developers pick and use best-in-class devices and software in their designs. The successful rise of a single architecture to rule them all may be the key to unlocking developer productivity, but it could also become a shackle that enforces conformity and limits the direction and opportunity for disruptive innovation.
Be careful what you wish for; you might get it.
Robert Cravotta is principal analyst and cofounder at Embedded Insights. He covered embedded processors as a technical editor at EDN, and before that he worked in the aerospace industry on electronics and controls for pathfinding projects such as fully autonomous vehicles, space and aircraft power management systems, and space-shuttle payloads, as well as building automation systems. He received a master’s degree in engineering management from California State University Northridge and a bachelor’s degree in computer-science engineering from the University of California, Los Angeles.