Since it is not feasible to question the entire voting population in a political poll, statisticians have developed formulas to estimate population-wide results from small, representative sample groups. The margin of error in a poll represents the probable difference in percentages that would occur between two randomly selected sample groups of the same size, recruited in the same manner. It is labeled “+/- x” because the percentage that would be found if the entire population were polled would typically fall within that many points above or below the reported figure, disregarding other factors such as question order, question wording and shifting opinions. The smaller the margin of error, the more precise the poll, and most pollsters strive to keep it below 4 percentage points. Pollsters often use weighting, a method of counteracting the unequal selection probabilities that sometimes occur in polling, to keep results accurate. Pollsters trying to push an agenda may manipulate weighting to lend more credibility to their preferred results.
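The relationship between sample size and margin of error described above can be sketched with the standard formula for a proportion. This is a minimal illustration, not taken from the article; the sample size of 600 and the 95 percent critical value of 1.96 are assumptions chosen for the example.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a polled proportion.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: critical value (1.96 corresponds to 95% confidence).
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of about 600 respondents keeps the margin of error
# near the +/- 4 point threshold mentioned above.
print(round(100 * margin_of_error(600), 1))  # prints 4.0
```

Note how the margin shrinks only with the square root of the sample size, which is why pollsters accept modest samples rather than chasing ever-larger ones.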
Why Are Polls Weighted?
As of 2007, telephone polls had a typical response rate of about 20 percent, down from 50 to 70 percent 40 years prior. Caller ID and cell phones are credited with the decline in response rates, and shrinking pools of respondents make it more likely that samples will include disproportionate numbers of respondents from certain demographic groups. Some pollsters include party affiliation on questionnaires and use weighting to adjust the results based on previously gathered election data. Weighting is also used to account for multiple adults in households, since in many polls each household has equal odds of being contacted regardless of how many adults live there.
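The household adjustment above can be made concrete. Assuming one adult is interviewed per contacted household (a common design, though not stated in the article), a respondent's chance of selection is proportional to the household's phone lines divided by its adults, and the weight is the inverse of that probability:

```python
def selection_weight(adults_in_household, phone_lines=1):
    """Base weight for a respondent reached by landline dialing.

    With one adult interviewed per contacted household, selection
    probability is proportional to lines / adults, so the weight
    (its inverse, up to a constant) is adults / lines.
    """
    return adults_in_household / phone_lines

# An adult in a two-adult, one-line household counts double relative
# to a single adult, who is reached on every call to the household.
print(selection_weight(2, 1))  # prints 2.0
print(selection_weight(1, 2))  # prints 0.5
```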
Benefits of Weighting
The National Council on Public Polls (NCPP) says weighting makes polls more accurate when there is a clear relationship between the weighting variable and the survey data. For example, raw data collected during an election phone poll in a state may show one candidate leading, but if closer analysis reveals that 90 percent of respondents resided in rural areas of the state and only 10 percent resided in major cities, the data would likely be non-representative. The pollster should recognize that rural households are more likely than urban households to have residential landlines, and that rural and urban populations vote differently. The pollster would then weight the results from the urban respondents to make the poll representative of the entire state. NCPP says it is essential for polls to be weighted to account for households with multiple landlines (or cell phones, if cell phone dialing is used in the poll) and multiple adult residents. Polls that do not account for these unequal probabilities will always be inaccurate. Demographic weighting used in online polling can help eliminate some of the bias that occurs when pools of respondents are limited to those with internet access, but the NCPP says demographic weighting alone is not sufficient in online polling. Many internet pollsters develop complex formulas that attempt to make their results representative of populations. Some have solid track records of using these weightings to accurately predict results, while others are criticized for conducting biased and inaccurate polls.
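The rural/urban correction above is a post-stratification weight: each group's weight is its population share divided by its sample share. The sketch below uses the article's 90/10 sample split, but the population shares and candidate-support figures are hypothetical numbers invented for the example:

```python
def poststratify(sample_shares, population_shares, group_results):
    """Reweight per-group results so each group counts according to
    its population share rather than its share of the sample."""
    total = 0.0
    for group, pop_share in population_shares.items():
        weight = pop_share / sample_shares[group]  # per-respondent weight
        total += group_results[group] * sample_shares[group] * weight
    return total

sample = {"rural": 0.90, "urban": 0.10}      # who actually answered
population = {"rural": 0.40, "urban": 0.60}  # hypothetical state makeup
support = {"rural": 0.55, "urban": 0.35}     # share backing Candidate A

raw = sum(support[g] * sample[g] for g in sample)     # 0.53
weighted = poststratify(sample, population, support)  # 0.43
print(round(raw, 2), round(weighted, 2))
```

With these assumed numbers, the unweighted poll shows Candidate A at 53 percent, while the weighted estimate drops to 43 percent once the underrepresented urban respondents are counted at their true share of the state.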
When Weighting Misleads
Humphrey Taylor, chairman of The Harris Poll, says pollsters often consider polling to be as much art as science. Which groups are weighted, and the degree to which they are weighted, are not standardized and are left to each pollster's discretion. Internet polling companies, such as The Harris Poll, use proprietary formulas to process the raw data they collect from online respondents. The only way for poll readers to judge whether a pollster's weighting method is sound is to observe the organization's track record: a pollster that consistently predicts accurate results is likely using a balanced and carefully calculated weighting method. Some pollsters misuse weighting in a way that makes results inaccurate and misleads those who read the poll. The NCPP advises against weighting based on party affiliation, since affiliation is always changing. For example, applying weights to a 2012 poll based on stated party affiliations in 2008 exit polling would be inaccurate, since many conservatives now associate with the Tea Party and reject the Republican label, and some left-leaning demographic groups are becoming increasingly disenfranchised from the Democratic Party. The NCPP also discourages weighting for variables that are irrelevant to the data being collected. For example, differences in the voting preferences of males versus females may be relevant in some elections but not others; when weighting for gender is applied across the board in all polls, inaccuracies will increase in those polls that were superfluously weighted.
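The discretion described above is easy to see in miniature: the same raw per-party results produce different toplines depending on which party mix the pollster weights to. All of the support figures and both party mixes below are hypothetical, invented only to show the mechanism.

```python
def weighted_topline(support_by_party, party_mix):
    """Combine per-party support using the pollster's chosen party mix."""
    return sum(support_by_party[p] * party_mix[p] for p in party_mix)

# Hypothetical raw data: support for one candidate by party ID.
support = {"Rep": 0.10, "Dem": 0.90, "Ind": 0.45}

# Two hypothetical weighting targets for the same poll.
mix_stale = {"Rep": 0.32, "Dem": 0.39, "Ind": 0.29}  # old exit-poll mix
mix_now   = {"Rep": 0.28, "Dem": 0.34, "Ind": 0.38}  # shifted affiliations

print(weighted_topline(support, mix_stale))
print(weighted_topline(support, mix_now))
```

With these made-up numbers the stale mix reports the candidate nearly a point higher than the current one, which is exactly the kind of silent shift the NCPP warns about when party-affiliation weights go unrefreshed.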
The NCPP states, “The rules of weighting are simple. Always weight for the different probabilities of selecting the sample and be cautious about weighting for other things.” Ideally, polls should be conducted with random samples, contacted via random landline and cell phone dialing, since this method encompasses the largest segment of the population. When these and other NCPP standards are adhered to, there is usually no need for weighting beyond accounting for differences in selection probability. Any poll that is conducted online, or described as having been weighted for non-demographic variables, should be considered with caution. Research the polling organization's track record, and consult aggregate compilations of similar poll results, to reach an unbiased conclusion.
O’Brien, M. Gallup Poll Sees Growing Support for Third Party in GOP, Tea Party. The Hill. Accessed October 11, 2011.
Singh, V. Young People Becoming Disenfranchised from Democratic Party. Newstimes. Accessed October 11, 2011.
Taylor, H. The Case For Publishing (Some) Online Polls. Polling Report. Accessed October 11, 2011.
National Council on Public Polls. Internet Polls. Accessed October 11, 2011.
National Council on Public Polls. The Good and Bad of Weighting the Data. Accessed October 11, 2011.
Decoding Science. One article at a time.