Here's how to read election news like a scientist

Tom Steyer (L), Sen. Elizabeth Warren (D-MA), Sen. Bernie Sanders (I-VT) and former South Bend, Indiana Mayor Pete Buttigieg (R) listen as former Vice President Joe Biden (C) speaks during the Democratic presidential primary debate at Drake University on Jan. 14, 2020 in Des Moines, Iowa. (Image credit: Scott Olson/Getty Images)

To understand politics, it helps to think like a scientist.

Campaign coverage of the upcoming presidential election is everywhere, with various polls showing this or that candidate on top. There are national approval ratings, local approval ratings, polls about primary candidates, polls about issues, polls about electability. All of these numbers add up to a cacophony of information that can be difficult to make heads or tails of. In that way, scientists say, the polls are a lot like the data a researcher might collect: Individual polls mostly aren't useful on their own, without context. But taken together and approached thoughtfully, they can add up to the sort of information a scientist would find useful.

"There are plenty of methodologically sound political polls that closely resemble methods used in scientific contexts, but there are also some quite poorly designed — and/or purposefully biased — political surveys out there," said Sara Burke, a research psychologist and expert in intergroup biases at Syracuse University. "The best of the best in political polling do a good job with the tools available and maintain — and attempt to communicate — a clear understanding of the limitations that still exist in their methods."

In other words, whether a poll is valuable or interesting depends a lot on how it was conducted and how it's presented.

Often, these polls are presented as "Here are some percentages," according to Jillian Scudder, an astrophysicist studying galaxies at Oberlin College in Ohio. "So you might do a political poll, you might say, 'We did a poll in this state, and we got these numbers,' and you might put that in the news. When I do statistics and I come up with a percent, that percent comes with a lot of other numbers," Scudder told Live Science.

Scudder's work involves statistical tests that look a lot like polling, she said. She might have millions of data points describing galaxies and want to figure out how they're behaving, but it would be a waste of time to go through each one individually. So she'll take smaller samples of her data and study them, using statistical methods similar to the ones pollsters use, to draw conclusions about the whole population of galaxies.

But for that research to work, and for it to have any meaning to other scientists, the numbers must come with data that gives them context, she said.

"Was this a sample of 100 [data points]? Was this a sample of 1,000? Was this a sample of 1 million? How much do changes in sample size change the result? If I go from 1,000 to 10,000, do the percentages change, or are they pretty robust? Things like that," Scudder said.

Polls, similarly, are much more useful when you know how many people were sampled, how consistent the results are with other polls, and how exactly the polls were done, said Chris Schatschneider, an educational psychologist and expert in statistics and research design at Florida State University.

In Schatschneider's own research, he said, he uses statistics to separate "signal" from "noise" — to determine whether the result of an experiment likely tells you something meaningful about how the world works or might be the result of random chance. He also thinks carefully about precisely what questions a particular set of data can answer, and what questions it can't.
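One common way to frame that signal-or-noise question is to ask how often pure chance would produce a result as extreme as the one observed. The Python sketch below does this for an invented poll; it illustrates the general idea, not Schatschneider's actual methods:

```python
import random

random.seed(0)

# Invented poll: candidate A gets 52% among 500 respondents. Could a 52/48
# split arise by chance even if the electorate were really split 50/50?
n, observed_share = 500, 0.52

# Simulate many polls of a truly 50/50 population and count how often chance
# alone yields a share at least as far from 50% as the one observed.
trials = 10_000
extreme = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5)
    >= abs(observed_share - 0.5)
    for _ in range(trials)
)
print(f"Share of simulated tied races that look this lopsided: {extreme / trials:.1%}")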

Those statistical methods are different from the ones pollsters use, he said. But it's important to ask similar questions when hearing polling data in the news: How big was the sample size? Who exactly was sampled? What exactly did the pollsters ask? All of that context can tell you whether a poll is meaningful, in a way that a few floating numbers next to, say, a candidate's name can't.

It's also important to understand the methods a pollster used, he said.

For example, many polls involve "stratified sampling." That means that if a particular group (college students, for example) is underrepresented in a poll's sample compared with the general population, pollsters will weight the responses of the college students they did reach more heavily. This can be a legitimate technique in principle, Schatschneider said. But it can also skew results, when a tiny group of surveyed people ends up standing in for thousands. He gave an example: The New York Times reported in 2016 that a single 19-year-old black man who supported Donald Trump in that year's election was wildly skewing poll results because of this kind of weighting, leading to news stories suggesting that Trump was much more popular with black voters than was actually the case.
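Here's what that skew looks like in miniature. The numbers below are invented, and the adjustment step shown is the post-stratification weighting described above: scaling each group's responses up or down to match its share of the population.

```python
# Hypothetical poll of 1,000 people in which a group making up 10% of the
# electorate supplied only 2 respondents, both of whom back the candidate.
# Each tuple: (respondents in sample, share of population, support rate)
groups = [
    (998, 0.90, 0.45),  # everyone else: 45% support the candidate
    (2,   0.10, 1.00),  # the tiny group: 2 of 2 support the candidate
]

# Raw result: just count respondents.
unweighted = sum(n * support for n, _, support in groups) / sum(
    n for n, _, _ in groups
)

# Weighted result: each group counts according to its population share.
weighted = sum(share * support for _, share, support in groups)

print(f"Unweighted support: {unweighted:.1%}")  # 45.1%
print(f"Weighted support:   {weighted:.1%}")    # 50.5%
```

Two respondents move the headline number by more than 5 points, which is the same mechanism behind the 2016 example above.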

The reality, Schatschneider said, is that unless it's your full-time job, you probably don't have time to evaluate polls individually this way to determine which ones are scientific and which are less so. Most people are better off not paying too much attention to news about individual polls, which can be misleading, and should instead look at averages of recent polls, like the ones RealClearPolitics publishes, he said.

Scientists do something similar with research data when they average together the results of multiple studies in bigger papers called "meta-analyses," Schatschneider said. If anything, he said, an average of polls is more trustworthy than a meta-analysis, because polls tend to get released whether or not their results are interesting, whereas scientific papers are biased toward interesting results, which are easier to get published.
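The averaging itself is simple enough to sketch. The Python below uses invented poll numbers; real aggregators such as RealClearPolitics apply their own rules about which polls to include and how to combine them:

```python
from statistics import mean, stdev

# Invented results from five recent polls of the same race:
# (candidate's share of support, number of people surveyed)
polls = [(0.48, 800), (0.51, 1200), (0.47, 600), (0.50, 1000), (0.49, 900)]

shares = [share for share, _ in polls]

# The simple average, the kind of topline an aggregator publishes.
print(f"Simple average: {mean(shares):.1%}")

# One common refinement: weight each poll by its sample size.
total_n = sum(n for _, n in polls)
weighted = sum(share * n for share, n in polls) / total_n
print(f"Sample-size-weighted average: {weighted:.1%}")

# The spread across polls: a large scatter is a warning sign.
print(f"Spread (standard deviation): {stdev(shares):.1%}")
```

That last line is the kind of consistency check Scudder describes next: polls that cluster tightly around the average are more reassuring than polls that are all over the place.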

Election forecasts based on huge groups of polls can also be interesting and useful, Scudder said. But unlike in scientific research, where methods and raw numbers are published, forecasters often don't show their work, keeping their models in a proprietary black box.

Generally, Scudder said, she would deem a group of polls trustworthy and interesting if they all point in the same direction, and less meaningful if they're all over the place — suggesting problems in the data collection.

And just because the findings fit a trend doesn't make them accurate. With any dataset, Scudder said, you also have to know how to interpret the results.

"You do have to be careful that the statistical test you're using is answering the question that you want to answer," she said.

In science, that might mean figuring out whether a dataset rules an idea out entirely (say, that all stars are made of cheese) or simply fails to prove it (all stars might still be made of cheese, but we haven't seen the cheese yet).

When it comes to political polls, the questions are different. But understanding what they mean is just as important. An approval rating isn't a measure of how people plan to vote. Asking people who they like during a primary doesn't necessarily tell you how they'll feel during a general election. Asking who they plan to vote for in February doesn't predict how they'll vote in November, Schatschneider said.

In that way, Schatschneider said, polling is a lot like taking a patient's temperature. It's a perfectly scientific enterprise, he said. But it's important for people following polls to be clear on what exactly they mean.

Rafi Letzter
Staff Writer
Rafi joined Live Science in 2017. He has a bachelor's degree in journalism from Northwestern University's Medill School of Journalism. You can find his past science reporting at Inverse, Business Insider and Popular Science, and his past photojournalism on the Flash90 wire service and in the pages of The Courier Post of southern New Jersey.