Measure of politics: Polls

October 04, 2012

In measuring choice, pollsters face two issues that also plague engineers and scientists: the interplay of statistical and systematic uncertainties. The statistics are easy; in a random sample, the uncertainty can be calculated directly from the size of that sample. The systematics are tough but not impossible; a systematic uncertainty is an estimate of how well you don’t know something.

For an easy back-of-the-envelope bluffer’s guide, use the fact that the statistical uncertainty on a count of N events is the square root of that count, sqrt(N); the fractional uncertainty is then sqrt(N)/N = 1/sqrt(N). Most polls quote a “margin of error” of 3-4%, which means their sample is about 1,000 people (i.e., 1/sqrt(1000) ≈ 0.032, or 3.2%).
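If you’d rather let a computer do the bluffing, the rule is a one-liner in Python (a minimal sketch; the function name is just for illustration):

```python
import math

def margin_of_error(n):
    """Back-of-the-envelope 1/sqrt(N) margin of error for a sample of n respondents."""
    return 1.0 / math.sqrt(n)

print(f"{margin_of_error(1000):.1%}")  # ~3.2% for a 1,000-person poll
```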

Proper measurement analysis distinguishes statistical and systematic uncertainties, but of course pollsters have enough trouble conveying their results to the mathematically illiterate masses without complicating things by quoting two uncertainties. In a political poll, for example, if the poll quotes Candidate A at 37% and B at 43% with uncertainties “±3% (stat) ±4% (syst),” Jack bottle-of-merlot would have no idea what to think. So instead pollsters just quote the statistical uncertainty, or, if they have a handle on the systematic uncertainty, they add the two together like the sides of a right triangle: total uncertainty = sqrt(stat² + syst²), good old root-sum-of-squares.
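For the curious, here’s root-sum-of-squares as a quick Python sketch, using the ±3%/±4% numbers above:

```python
import math

def total_uncertainty(stat, syst):
    """Combine statistical and systematic uncertainties in quadrature (root-sum-of-squares)."""
    return math.sqrt(stat**2 + syst**2)

print(f"{total_uncertainty(0.03, 0.04):.0%}")  # sqrt(3%² + 4%²) = 5%
```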

When polls quote their uncertainty as a “margin of error,” they give the impression of a hard bound when it’s really a 68% confidence interval. The area under a Gaussian distribution beyond one standard deviation is 32% of the total probability, which means that a random fluctuation exceeds the quoted margin in about 1 of every 3 polls. Nice and tidy, but still neglecting the bias of systematic uncertainty.


Figure 1. A Gaussian distribution showing the 68% / 1σ confidence interval.
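If you want to check the 1-in-3 claim yourself, the Gaussian tail probability follows from the error function; a minimal Python sketch:

```python
import math

def prob_outside(n_sigma):
    """Probability a Gaussian fluctuation lands more than n_sigma standard deviations from the mean."""
    return 1.0 - math.erf(n_sigma / math.sqrt(2))

print(f"{prob_outside(1):.0%}")  # ~32%, i.e., roughly 1 poll in 3
```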

Systematic uncertainties are altogether trickier. In the last few weeks, one side of the mud-slinger-of-choice race has complained about poll results. If the mud-slingers actually care about accuracy, they’d do well to revisit their prob-n-stats class (and we can only hope that they ever took one).

The key sources of systematic bias in opinion polling are:

•    How the sample is chosen
Voluntary or “pull” polls, like online polls that are open to anyone who wants to take them, suffer at least an additional 10% uncertainty on top of the sqrt(N) statistical uncertainty. People who willfully pursue having their opinions sampled tend to come from the outliers of a distribution. If a radio hack has been on a rampage or a celebrity draws attention to an online poll, forget the results. Think Major League Baseball’s All-Star starting lineup.

•    Bayesian trouble or insight?
Say you have two political parties plus independents: if the electorate is 43% Plutocrats, 41% Pinkos, and 16% independents, you might be tempted to partition the sample accordingly. It’s an easy choice to defend; trimming the likelihood of random fluctuations within the sample sounds like a good way to beat back uncertainty, right?

Including prior knowledge of a distribution in the calculation of probabilities is the heart of Bayes’ Theorem. The cost is that you’re trading easily calculated statistical uncertainty for hard-to-estimate systematic uncertainty: in cherry-picking the sample, the ability to measure the uncertainty is also reduced.

In a random sample, the different parties are sampled in the appropriate proportions anyway, and the uncertainty from possible fluctuations goes into the easily calculated statistical column rather than the imprecise systematic column (see the quick simulation below).

Remember how your college chemistry TA returned your lab reports if you didn’t include error bars on your graphs? A measurement with poorly understood uncertainty is a poorly understood measurement.
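To see the point, here’s a toy simulation in Python (the party splits are the hypothetical numbers from the example above; this is a sketch, not anyone’s production polling code). Repeated random polls of 1,000 voters recover the 43% Plutocrat share with a spread that matches the binomial statistical uncertainty, sqrt(p(1−p)/N) ≈ 1.6%:

```python
import random

# Hypothetical electorate from the example: 43% Plutocrats, 41% Pinkos, 16% independents.
electorate = ["plutocrat"] * 43 + ["pinko"] * 41 + ["independent"] * 16

def poll(n):
    """Simple random sample of n voters; returns the Plutocrat share."""
    sample = random.choices(electorate, k=n)
    return sample.count("plutocrat") / n

# Run many simulated polls and measure the spread of results.
results = [poll(1000) for _ in range(2000)]
mean = sum(results) / len(results)
spread = (sum((r - mean) ** 2 for r in results) / len(results)) ** 0.5
print(f"mean = {mean:.1%}, spread = {spread:.1%}")  # ~43.0%, ~1.6%
```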

•    The phrasing of a question
Here’s a useful tool. Authentic pollsters ask concise binary questions and then randomly vary the order in which the choices are presented. For example, randomly alternate “Would you vote for A or B?” with “Would you vote for B or A?”

Another use of this tool is that multiple phrasings of a question can help determine the systematic uncertainty. If the difference in results for two or more phrasings is statistically significant, that difference is a measure of the systematic error; a quick significance check is sketched below.
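One standard way to test whether two phrasings differ significantly is a two-proportion z-test; here’s a Python sketch with hypothetical numbers (the test itself is my choice of tool, not something the pollsters necessarily use):

```python
import math

def phrasing_z(p1, n1, p2, n2):
    """Two-proportion z statistic for one question asked with two different phrasings."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: phrasing 1 gives 44% of 500 respondents, phrasing 2 gives 52% of 500.
z = phrasing_z(0.44, 500, 0.52, 500)
print(f"z = {z:.1f}")  # |z| > ~2 suggests a real phrasing effect, not a statistical fluke
```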

The power of statistics is in keeping things random. As soon as a human injects bias, things get messy. As an example, it can be shown that Monte Carlo techniques – using a random sample – are more efficient than planned sampling for applications from multivariable integration to circuit simulation.
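For a taste of the Monte Carlo idea, here’s a toy integral estimated from random samples (a sketch only):

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] by averaging f at uniform random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integrate(lambda x: x * x, 0.0, 1.0))  # ~0.333; exact answer is 1/3
```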

(The author, Ransom Stephens, is a big fan of representative democracy but, in a contradiction typical of these humans, believes that voters should have to prove that they’ve passed a one-year course in calculus before being admitted to the polls.)


