Let's Take A Closer Look

Explaining complicated subject matter simply since 1986

I’ve asked hundreds of people if they’d rather eat a bowl of sand or a bowl of gravel. Every single one said they wouldn’t eat either one, just like you did. “Okay, okay,” I say, and ask them (and you) to imagine a situation where you are forced to eat one or the other – now which one would you pick? This time, everyone grudgingly chooses sand (you, too). Not because anyone likes the idea of eating sand, but because it’s the lesser of two evils. Gravel, most people reason, would break my teeth, so that leaves sand, which I could maybe choke down if I had to. I have long used this experiment to illustrate the concept of forced choice, a type of question, seen in many surveys, that insists you choose A or B.

Advocates say forcing people to choose provides statisticians and analysts with better measures of commitment

Opponents say they unfairly force people to choose between two options they would never consider in the first place. The folly of forced-choice questions is this: you are not always enthusiastic about the option you pick. You chose sand not because you wanted it, but because it was the least objectionable of two very unappetizing choices.

Too often, these forced choices are reported as “preferences” when they are most definitely not

If we wanted to know about eating preferences, we would ask an open-ended question, such as “What are some of the things you most like to eat?” Almost every food you can think of gets mentioned first by someone – pizza, ice cream, tacos, Oregon blackberries, wienerschnitzel, roti, doubles, lutefisk. Two things I’ve never once heard are sand and gravel. When forced to make a choice between two things they don’t want, people reluctantly choose the one they dislike least. That’s not the same thing as “preferring” something, is it?

In 1846, a wagon train attempted to take a supposedly shorter route over the mountains to California

They eventually became trapped by heavy snowfall a mile high in the Sierra Nevada and were snowbound for the entire winter. When their food ran out and they had eaten all their dead pack animals, the Donner Party had only two choices: become cannibals or starve to death. How’s that for two unappealing choices?

Political polls

The earliest polls asked only one question – do you plan to vote for Candidate A or Candidate B? Brevity was once the big difference between polls and surveys, but modern polls continue to pile on additional questions. Some are demographic questions like age, gender, ethnicity, education, and the like. They are used to discover such things as “Candidate B outperformed Candidate A two-to-one among female college graduates.”

Political polls are also used to gauge public opinion so politicians can adjust their messages to take better aim at their targets. Remember, they’re politicians, and what they want more than anything else is to get elected.

Three famous presidential election polls

In 1948, a Gallup poll predicted Thomas E. Dewey would win the presidential election by between 5 and 15 percentage points. Such was their confidence in the polls that predicted a Dewey victory that the Chicago Daily Tribune printed its famous “DEWEY DEFEATS TRUMAN” banner headline before all the votes were counted. Grinning from ear to ear, Harry S. Truman, who won by 4.4 percentage points, held the paper up high for all to see.


In 1952, UNIVAC, one of the first commercial computers, correctly predicted a presidential election landslide for Eisenhower after sampling only one percent of the voting population. CBS, covering the election, initially dismissed UNIVAC’s prediction as a huge mistake.

In 2016, 101 of 104 polls predicted a popular-vote victory for Hillary Clinton, which she did win, but many overestimated the size of her lead, and some forecasters gave her as much as a 99% chance of winning the presidency. Donald Trump’s Electoral College victory was a shock.

How can polls be so off-target?

Pew Research says such errors are easy to understand once you know that an increasingly wide variety of methodologies is being reported via the mainstream media and other channels.

  • CNN describes polling as the blind leading the blind.
  • Forbes says polls are not to be trusted.
  • TheConversation.com says polling is what mathematicians might call a “black art,” a tongue-in-cheek way of saying it lacks the precision of pure mathematics.

It is only logical that different ways of polling for different reasons will produce polls whose findings are divided and even contradictory. Remember, pollsters are trying to predict the future – always a challenging task. The most sensible way of dealing with polls is to take the average of a number of reputable ones.

Pollsters say they’ve learned lessons from 2016 that will be applied to make this year’s election polls more accurate 

Some of these involve sampling issues. An obvious one is that not everyone who answers an opinion poll will actually vote. To start with, one in four eligible adults in the US is not a registered voter, yet many of them were included in 2016 polling data. Add to that the fact that, in the same year, 40% of registered voters did not vote at all; lots of them had been polled, too. Some polled voters changed their minds. Turnout also varies widely by group: 71% of voters 65 and over voted in the last presidential election, while only 46% of 18-to-29-year-olds did; 65% of whites voted, but only 47% of Hispanics. Weather, health, and access to transportation affect voting as well. Statisticians have their hands full trying to adjust for all these and other factors.

FiveThirtyEight, in its article How to Read Polls in 2020, warns that not all polls are the same. Its #1 recommendation is to check pollsters’ track records, looking for the ones with long-standing records of accuracy. Three places to check those records are FiveThirtyEight’s Pollster Ratings, the American Association for Public Opinion Research’s Transparency Initiative, and the Roper Center for Public Opinion Research. FiveThirtyEight also says people should consider the source, because “partisan groups and the campaigns themselves want to make their candidate look good.”

The American Association for Public Opinion Research conducted a study titled An Evaluation of 2016 Election Polls. In it, they said there was widespread consensus that the polls failed, but that it is a mistake to conclude all polling is untrustworthy. They cautioned that surveys need to be well designed and rigorously executed to produce accurate information. It is easy to conclude from 2016’s example that many of the polls were not well designed OR rigorously executed.

Polls are only indicators of how someone might vote. 

Scientists call this social desirability bias: survey respondents tend to give answers they think others will view favorably. A study of the 2016 election said “members of both parties are likely to conceal support for the opposing party’s candidates.” So some Democrats said they would vote for Clinton but voted for Trump instead, just as some Republicans voted for Clinton after telling pollsters they’d vote for Trump.

An unusually high share of voters, one in four, wanted someone other than Clinton or Trump and refused to choose between sand and gravel. You may not remember the other candidates (Colin Powell, Faith Spotted Eagle, Rocky De La Fuente, and others), but people voted for them.

Every choice has a consequence and every consequence influences results 

The problems with political polling are the same as the problems with any kind of surveying. Data scientists know every survey can go wrong in three different places:

  1. How the data are collected.
  2. How the data are corrected.
  3. How the data are interpreted.

Most surveys and polls are flawed on at least one of those three dimensions. Many are defective on two. Some, like the polls in the 2016 US presidential election, mangled all three.

1. Pressured to provide information quickly and cheaply, pollsters take shortcuts and cut corners when they collect the data. These errors are mostly mechanical.

2. Once the often-flawed data have been collected, statisticians “correct” the numbers by a process they call weighting, where responses are subjectively adjusted to conform to a set of expectations. Think of it as the voices of subgroups (students, women, retirees, etc.) being amplified or muted. There is much evidence that weighting schemes are often deeply flawed, incorrectly magnifying some findings and diminishing others.
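The amplifying-and-muting idea can be made concrete. Below is a minimal sketch of the simplest form of weighting (post-stratification) with entirely made-up numbers: each subgroup's responses are scaled by how far its share of the sample drifts from its share of the population. The group names and percentages are illustrative assumptions, not data from any real poll.

```python
# A minimal sketch of post-stratification weighting. All numbers are invented
# for illustration; real pollsters use many more subgroups and fancier schemes.

sample_share = {"students": 0.10, "workers": 0.55, "retirees": 0.35}      # share of respondents
population_share = {"students": 0.20, "workers": 0.60, "retirees": 0.20}  # census-style benchmark
support_in_group = {"students": 0.62, "workers": 0.48, "retirees": 0.41}  # share backing Candidate A

# Each group's voice is amplified or muted by population_share / sample_share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

raw = sum(sample_share[g] * support_in_group[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support_in_group[g] for g in sample_share)

print(f"raw support:      {raw:.1%}")
print(f"weighted support: {weighted:.1%}")
```

Here the under-sampled students (high support) get amplified and the over-sampled retirees (low support) get muted, so the weighted figure lands above the raw one. The catch the article points out: if the benchmark shares are wrong, the "correction" bakes the error in.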

3. Interpretations are the most personal of these three sources of error. As every psychologist knows, human beings generally make up their minds without carefully considering alternatives. Then two other things happen: they ignore evidence that conflicts with the positions they’ve taken and they seek out evidence that supports their beliefs. Reporters believed Clinton would win, so that was the story they told.

Some political polls have ulterior motives

Polling is frequently employed in political races to obtain immediate feedback on issues and hot buttons so candidates can adjust their messages accordingly. Some polls masquerade as research when their real goal is to sway opinion. Many media polls are just ways to find headlines and create stories they think will appeal to viewers and advertisers. This time around, be particularly wary of interweb polls that exist only to collect your contact information and sell you something.

The current presidential polls

Here’s a table for you, organized alphabetically by pollster, and here’s how I read it. Polls favoring Trump vary from NPR’s and Pew’s high of 44% to the New York Times’ low of 36%. CNN says Biden is the choice of 55% while Reuters says only 46% favor him.

All five of these well-regarded polls put Biden ahead, but by different margins

CNN and the NYT say the gap between Trump and Biden is 14 percentage points, while NPR and Reuters say it’s only 8. The NYT tells us that 14% of voters say they will not vote for either one, while Pew says only 2% are undecided. Big differences like these indicate we should look at the averages of many polls to get a more accurate picture than any single poll can provide. Charles Franklin, director of The Marquette Law School Poll, said the most important lesson from the 2016 election is to “put less weight on what might happen in the unknown future and put more weight on where things stand today.”

As the barrage of poll-driven headlines commences

Take a few minutes and see for yourselves if the story line created by pollsters is believable. That way, you can be better informed than most as the election unfolds. Look for links to the study sources and check who was polled and how. Browse the demographic and party-affiliation data to form your own judgment of a poll’s accuracy. Choose a few accurate ones, like those in the table above, and average the findings. My example gives us these figures: 41% for Trump, 51% for Biden, and 6% for neither one.
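As a rough sketch of that averaging step, here is the arithmetic using only figures stated in the text or derivable from the gaps it reports (CNN’s and the NYT’s 14-point gaps and NPR’s and Reuters’ 8-point gaps pin down the unstated numbers). Pew’s Biden figure is not given anywhere, so it is simply left out rather than guessed.

```python
# Averaging a handful of polls, using only figures the text states or implies.
# CNN's Trump number = 55 - 14; NYT's Biden = 36 + 14; NPR's Biden = 44 + 8;
# Reuters' Trump = 46 - 8. Pew's Biden figure is unknown and omitted.

trump = {"CNN": 55 - 14, "NPR": 44, "NYT": 36, "Pew": 44, "Reuters": 46 - 8}
biden = {"CNN": 55, "NPR": 44 + 8, "NYT": 36 + 14, "Reuters": 46}

avg_trump = sum(trump.values()) / len(trump)
avg_biden = sum(biden.values()) / len(biden)

print(f"Trump average: {avg_trump:.1f}%")  # 40.6%
print(f"Biden average: {avg_biden:.1f}%")  # 50.8%
```

The point is not the decimals; it is that a simple average of several reputable polls smooths out any single pollster’s quirks and lands close to the rounded figures above.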

Skeptics know that polls are only barometers of what some people say now. Elections are won and lost by those who vote – not by those who were polled.

Vote Early and Vote Often 

This sort-of-humorous-but-not-really quote has been attributed to notorious election-swayers Al Capone, Mayor Richard Daley, and Mayor Big Bill Thompson. They are all Chicagoans, so make of that what you will.
