For error analysis, it’s hip to be squared
I’m going to deviate from my normal coverage of modular instruments and alternative testing techniques, and focus on an associated design and test issue: error analysis.
Almost all analog designs require an error analysis. Components are not ideal; each one varies within its specifications, and those variations add up in interesting ways.
Was Huey Lewis and the News' smash hit "Hip to Be Square" subliminally talking about error analysis? We may never know.
While in school, all my projects assumed ideal component values. After all, if I was creating a single unit of some kind, I didn’t worry about the statistical distribution as if I had built thousands of the same design. I only had to get one to work.
A summer job at Hewlett-Packard taught me to look at the range of values of key components when specifying a design. I'm sure almost all EDN readers are familiar with this: choose components such that every combination of variability within the key specifications still allows the entire design to work, as long as each actual value stays within its specified range.
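That worst-case approach can be sketched as checking every tolerance corner. Here is a minimal illustration, assuming a hypothetical resistor divider with two nominally equal 10 kΩ resistors at ±1% tolerance (the part values and tolerance are my own example, not from the article):

```python
from itertools import product

# Hypothetical example: a two-resistor divider, nominal R1 = R2 = 10 kOhm,
# each with a ±1% tolerance. Evaluate the divider ratio at every
# tolerance corner to find the worst-case spread.
nominal = 10_000.0
tol = 0.01
corners = [nominal * (1 - tol), nominal * (1 + tol)]

ratios = [r2 / (r1 + r2) for r1, r2 in product(corners, repeat=2)]

print(min(ratios), max(ratios))  # worst-case spread around the nominal 0.5
```

With only two parts there are four corners to check; with many components the corner count explodes, which is part of why a statistical treatment becomes attractive.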
After school I took a permanent job at HP, with significant precision analog design. Now I was deep into error analysis including initial values, temperature drift, and time drift. Though Huey Lewis and the News hadn’t formed yet, that’s when I learned that it is hip to be squared.
What do I mean by that?
Basically, we choose components, each around a nominal value, to create a design. Lots of components. To statistically compute the error band of the entire design, I squared the possible error due to each component, summed the squares, and took the square root of the total. That was the error limit of the entire design. This is also known as RSS: Root of the Sum of the Squares. You can use RSS as a verb too: "I RSS'd the errors together."
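The RSS calculation itself is one line of math. Here is a minimal sketch, with illustrative per-component error contributions I made up for the example (they are not from the article), compared against the simple worst-case sum:

```python
import math

# Hypothetical per-component error contributions, e.g. in percent of
# full scale. These values are illustrative only.
errors = [0.1, 0.05, 0.2, 0.02]

# Worst case: every error hits its limit in the same direction.
worst_case = sum(errors)

# RSS: square each error, sum the squares, take the square root.
rss = math.sqrt(sum(e ** 2 for e in errors))

print(f"worst case: {worst_case:.2f}")
print(f"RSS:        {rss:.2f}")
```

Note that the RSS result is always smaller than the straight sum (unless all but one error is zero), since it treats the component errors as independent rather than all conspiring in the same direction.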
Whoa! Why didn’t I just add up all the errors?