Robust Design: Good, Fast, Cheap – pick two
Robert Cravotta - February 10, 2010
Reading Battar’s response to the introduction post for this series suggested that it is worth exploring the popular expression “good, fast, and cheap – pick two” in the context of robust design principles. The basis for this expression is that it is not possible to globally maximize or minimize all three of these attributes in the same design. Nor does this relationship apply only to engineering. For example, Jacob Cass applied it to Pricing Freelance Work.
There are a few problems with this form of the expression, but the concept of picking (n-1) from (n) choices to optimize is a common trade-off relationship. With regard to embedded processors, the “three P’s” – performance, power, and price – capture the essence of the expression, but with a focus on the value to the end user.
One problem is that this expression implies the end user is interested in the extremes of these trade-offs. The focus is on realizing the full potential of an approach, and robustness is assumed. That assumption becomes extremely dangerous as you push further beyond the proven capabilities of real designs that can survive in the real world.
The danger lies not in the complexity of delivering the robustness, but in our inexperience with it, because our ability to accommodate that complexity changes over time. For example, I would not want the fastest processor possible if powering it would take the output of an entire star. Someday, however, that amount of energy might be readily accessible (though not while we have only the energy from a single star to power everything on our planet). The fact that harnessing the full output of a star to power a future processor might not be absurd points out that there is a context to the trade-offs designers make. This is the relevant point to remember in robust design principles.
The danger is underestimating the “distance” of our target thresholds from the well-understood threshold points. Moore’s law implicitly captures this concept by observing that the number of transistors in a given area doubles over a constant time interval. This rate is really driven by our ability to adjust to and maintain a minimum level of robustness at each new threshold for these new devices. The fact that Moore’s law observed a constant-time relationship that has stood the test of time, versus a linear or worse relationship, suggests the processor industry has found a good-enough equilibrium between pushing design and manufacturing thresholds and the offsetting complexity of verifying, validating, and maintaining the robustness of the new approaches.
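The constant-time doubling relationship can be expressed as N(t) = N₀ · 2^(t/T), where T is the doubling period. Here is a minimal sketch of that growth curve; the starting count and two-year doubling period are illustrative assumptions, not historical data:

```python
# Sketch of Moore's-law-style growth: the count doubles every fixed
# period T, regardless of the starting point. The numbers below are
# hypothetical, chosen only to illustrate the exponential relationship.

def transistor_count(n0: float, years: float, doubling_period: float = 2.0) -> float:
    """Return the projected count after `years`, doubling every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# With a 2-year doubling period, 10 years yields 2**5 = 32x growth.
print(transistor_count(1_000_000, 10))  # 32000000.0
```

The key observation in the paragraph above is that T has stayed roughly constant: each process generation absorbs the added verification and validation burden fast enough to keep the doubling interval from stretching.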
Robust design principles are the tools and applied lessons learned when designers are pushing the threshold of a system’s performance, power, and/or price beyond the tried and tested thresholds of previous designs.
The four categories of robust design principles I propose – fault-tolerance, sandbox, patch-it, and disposable (which does not mean cheap) – provide context-relevant tools and approaches for capturing and adding to our understanding when we push system thresholds beyond our comfort points, while maintaining a system that can better survive what the real world will throw at it.
I encourage you to read all of the posts in the robust design series; maybe they will inspire you to share your observations. I would love to consolidate different perspectives and lessons learned with regard to robust design practices here. I suspect there are some valuable lessons to be gleaned from comparing such stories.
To make following this series easier (especially as multiple series overlap each other), I am including the index below to previous posts, both for this and the guest post channels.
Guest posts:
none – be the first, contact me if you are interested
Posts made here:
2010, February 4 : Robust Design