Suppose there is some parameter, be it voltage, current, frequency, temperature, time, humidity, pressure, or whatever, that we will call X and that we somehow need to measure. The measurement process will be at least somewhat imperfect. If we call the measurement result Y, we would like Y to be an ideal representation of X, which is to say simply Y = X.
However, we always end up with three basic error sources that may be described as follows:
The "percent of reading" and the "percent of full scale" terms are graphically obvious, but I have often seen the quantization error term get overlooked. When we digitize some measurement result, we cannot know how close we are to crossing the threshold to the next digit value, going up or going down. The adjacent digit, one count up or one count down, might be the true value rather than the digit you can see, and there is no way to avoid that uncertainty.
Think of it as arising from the maximum resolution. In assigning your error budget, you mustn't miss that one.
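As a sketch of how the three terms combine into a worst-case error budget, consider the following. The instrument specifications and the function name here are made-up illustrations, not taken from any real datasheet:

```python
def worst_case_error(reading, full_scale, counts,
                     pct_of_reading, pct_of_full_scale):
    """Worst-case measurement uncertainty from the three error terms.

    reading           -- the displayed value Y
    full_scale        -- the selected range's full-scale value
    counts            -- digitizer counts on this range (sets the LSB)
    pct_of_reading    -- "percent of reading" spec, e.g. 0.05 for 0.05 %
    pct_of_full_scale -- "percent of full scale" spec
    """
    lsb = full_scale / counts  # one quantization step (maximum resolution)
    err_reading = abs(reading) * pct_of_reading / 100.0
    err_full_scale = full_scale * pct_of_full_scale / 100.0
    err_quantization = lsb  # the unavoidable one-count uncertainty
    return err_reading + err_full_scale + err_quantization

# Hypothetical 5.000 V reading on a 10 V, 20,000-count range,
# specified at 0.05 % of reading plus 0.02 % of full scale:
print(worst_case_error(5.0, 10.0, 20000, 0.05, 0.02))
```

Note that the quantization term depends only on the range and the count resolution, so on a coarse range it can dominate the budget for small readings.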
Sometimes I think it's smoke and mirrors time when I look at the measurement accuracy specifications of various test instruments. Contributors to "percent of reading" are sometimes quoted in entirely separate places on the data sheet. Ditto for the "percent of full scale" contributors. It can be quite a chore to get the true specifications of each error term properly tallied, and quantization is sometimes not even mentioned.