
In 2016, the polls were more right than wrong.

The final national surveys had Hillary Clinton ahead of Donald Trump by an average of 3 points; Clinton ended up winning the popular vote by 2 points.

But some state polling missed the mark badly, especially in Midwest and Rust Belt states.

Trump narrowly carried Michigan, Pennsylvania and Wisconsin, where some long-respected state polls had shown Clinton leading.

In the end, Trump won the election by a combined margin of about 77,000 votes in the three states that put him over the top in the Electoral College.

One of the most prominent explanations for 2016’s polling whiffs was that these state polls included a larger share of college-educated voters than that group’s actual share of the voting population.

As a result, they underrepresented voters with lower education levels.

But last fall, NBC News’ state polling partner — Marist College’s Institute for Public Opinion — conducted an experiment with Edison Research by polling Kentucky’s competitive gubernatorial race and reached a different conclusion: Education appears to be one piece of a bigger geographic puzzle in how pollsters build their samples.

And that geographic lesson, plus a finding that enhanced sample lists skew toward more upscale urban voters, convinced Marist to change how it conducts its polls going forward.

“You need to keep geography in mind,” Barbara Carvalho, director of Marist’s polling, said about her biggest takeaway from the Kentucky experiment.

“We do have to pay more attention to education,” Edison Research co-founder Joe Lenski told NBC News. “But it’s way more complicated than just that.”

An overrepresentation of higher-educated voters isn’t a new phenomenon for pollsters in the modern era.

Studies show that college graduates are simply more likely to complete a survey than their less educated peers.

“People with higher levels of education are more likely to participate in a poll — to answer the phone, to fill out a survey. Most national pollsters knew about this and had a protocol in place. But that practice was not being done by most organizations doing state polling” in 2016, says Courtney Kennedy, director of survey research at the Pew Research Center.

But before the last few elections, an overrepresentation of college grads didn’t necessarily doom a survey to be ideologically skewed.

That’s because a voter’s education level wasn’t strongly correlated with their partisan identity.

In fact, between the mid-1990s and the 2008 election, college-educated whites were equally likely to support Democrats and Republicans.

The Obama era saw that trend begin to change, however, and Trump’s election blew it apart.

In 2016, college graduates voted for Hillary Clinton by a 10-point margin, 52 percent to 42 percent.

Voters without a degree backed Trump by a 7-point margin.

Whites without a degree showed the most stunning gap; those voters — the same group that state pollsters were accused of missing — broke for Trump by nearly 40 points.

Advocates of education weighting in political polling say that partisan divide makes some statistical tinkering on the back end necessary.

But the first challenge, which those same advocates acknowledge, is figuring out what the ideal proportions of voters at each level of education should be.

Different sets of demographic data, such as the Census and presidential election exit polls, peg the college-educated voting population at significantly different levels.
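To make that challenge concrete, here is a minimal sketch in Python of how education weighting typically works: each respondent is weighted by their education group’s target share of the population divided by that group’s share of the completed interviews. The sample, the vote choices and both sets of target shares below are invented for illustration; the point is that the same raw interviews produce different weighted margins depending on whether a pollster anchors to Census-style or exit-poll-style education targets.

```python
from collections import Counter

def education_weights(sample, targets):
    # weight = group's target share of the population
    #          / group's share of the completed interviews
    counts = Counter(r["educ"] for r in sample)
    n = len(sample)
    return [targets[r["educ"]] * n / counts[r["educ"]] for r in sample]

def weighted_margin(sample, weights):
    # Candidate A's weighted vote share minus candidate B's.
    a = sum(w for r, w in zip(sample, weights) if r["vote"] == "A")
    b = sum(w for r, w in zip(sample, weights) if r["vote"] == "B")
    return (a - b) / sum(weights)

# Toy sample of 100 interviews in which college graduates are
# overrepresented (60 of 100 completes).
sample = (
    [{"educ": "college", "vote": "A"}] * 31
    + [{"educ": "college", "vote": "B"}] * 29
    + [{"educ": "no_college", "vote": "A"}] * 17
    + [{"educ": "no_college", "vote": "B"}] * 23
)

# Two plausible but conflicting targets for the college-educated share,
# standing in for the Census-versus-exit-poll disagreement.
for label, targets in [
    ("census-style   ", {"college": 0.35, "no_college": 0.65}),
    ("exit-poll-style", {"college": 0.50, "no_college": 0.50}),
]:
    w = education_weights(sample, targets)
    print(label, f"A-B margin: {weighted_margin(sample, w):+.3f}")
```

Run as written, the two sets of targets move the toy sample’s unweighted 4-point deficit for candidate A by different amounts; the choice of target is itself a modeling decision.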

But even if pollsters could pick the perfect proportion, what if the white non-college voters they do reach — and base their weighted results upon — aren’t actually representative of the group as a whole?

In other words, a white man without a college degree who lives in an affluent suburb of a small city may be politically very different from a white man without a college degree in a poorer, more rural part of the same state.

And there’s good reason to believe that too few of the latter, and too many of the former, may be making their way into many pollsters’ samples.

Beyond education, how pollsters select their sample from the pool of telephone numbers is another issue at play.

Not all survey samples are created equal, even in scientific random-digit-dial (RDD) polls conducted with live interviewers.

Pollsters make judgment calls about how to draw their samples, including whether to use enhanced lists (listed phone numbers and known billing addresses) to increase the likelihood of reaching a person rather than a non-working number or a business.

But there’s a potential problem with that, too, Lenski points out.

“The people who are more easily matched [to outside data] are less transient, they have more economic activity, they have more magazine subscriptions,” he says. “That can produce a bias in terms of the people you’re most likely to reach in your sample.”

In other words, a pollster who uses this kind of data matching may be much more likely to interview the white man without a degree in the well-to-do suburb than the one in the rural, poorer place.

So what’s the alternative?

It’s definitely more time-consuming, but pollsters can always return to one of the industry’s oldest, time-tested methods: pure random-digit dialing, without enhanced lists, which gives every voter with a phone a known chance of being reached.
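As a rough sketch of what that looks like mechanically (assuming a simplified view of phone numbering, with made-up exchange blocks), pure RDD draws the final digits of each number uniformly at random within known working area-code/exchange blocks, so unlisted and listed numbers are equally likely to be selected:

```python
import random

def rdd_sample(blocks, n, seed=2020):
    # Draw the last four digits uniformly at random within known
    # working area-code/exchange blocks, so listed and unlisted
    # numbers have the same chance of selection.
    rng = random.Random(seed)
    numbers = set()
    while len(numbers) < n:
        area_code, exchange = rng.choice(blocks)
        line = rng.randrange(10000)  # 0000-9999, all equally likely
        numbers.add(f"({area_code}) {exchange}-{line:04d}")
    return sorted(numbers)

# 502 and 859 are real Kentucky area codes; the exchanges are made up.
blocks = [("502", "555"), ("859", "555"), ("502", "867")]
for number in rdd_sample(blocks, 5):
    print(number)
```

The cost is visible even in the sketch: many of the generated numbers will be non-working or belong to businesses, so pure RDD burns more interviewer time per completed interview than dialing a pre-matched list.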

Faced with these ongoing debates, Marist used Kentucky’s competitive gubernatorial contest last fall to test some of these theories.

The race pitted incumbent Republican Gov. Matt Bevin against a well-known Democrat and son of a former governor, Andy Beshear.

The findings from the experiment: The pre-election poll results that most closely matched the final vote totals in that contest came 1) when the samples were drawn to account for geography (metropolitan versus non-metropolitan regions), and 2) when they incorporated random-digit dialing without an enhanced sample list.

Indeed, the final Kentucky pre-election poll that incorporated both of these adjustments showed a tie, 47 percent to 47 percent.

The other treatments in the experiment used enhanced sample lists and did not account for urban/suburban/exurban/rural differences.

Beshear ended up winning the contest last November by 5,100 votes out of 1.4 million cast.

NBC News and Marist did not release the results of the Kentucky polling experiment until now.

Kentucky was chosen as the subject for this experiment given how competitive the race was, given the state’s geographical and educational divides, and given how Bevin’s win in 2015 — a precursor to Trump’s in 2016 — contradicted the public and private polls.

While Marist’s Carvalho cautions that the experiment was conducted in just one state during the 2019 off-year election, she said it informed how Marist will conduct its polls going forward, including the state surveys it runs for NBC News.

One, it will incorporate geographic checks — for urban, suburban, exurban and rural areas — in drawing representative statewide surveys.

And two, it will limit the use of enhanced samples.
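Here is a minimal sketch of what the first of those changes might look like in practice, with made-up statewide benchmarks standing in for real ones: compare the sample’s regional mix against the known distribution of the state’s voters and flag any category that drifts past a tolerance, so interviewing effort can be redirected toward underrepresented areas.

```python
from collections import Counter

def geographic_check(sample_regions, benchmarks, tolerance=0.03):
    # Compare the sample's regional mix against population benchmarks
    # and report any category that drifts beyond the tolerance.
    counts = Counter(sample_regions)
    n = len(sample_regions)
    flags = {}
    for region, target in benchmarks.items():
        share = counts.get(region, 0) / n
        if abs(share - target) > tolerance:
            flags[region] = {"sample": round(share, 3), "target": target}
    return flags

# Made-up statewide benchmarks, for illustration only.
benchmarks = {"urban": 0.30, "suburban": 0.35, "exurban": 0.15, "rural": 0.20}

# An urban-heavy, rural-light batch of 100 interviews.
sample_regions = (
    ["urban"] * 40 + ["suburban"] * 35 + ["exurban"] * 13 + ["rural"] * 12
)

print(geographic_check(sample_regions, benchmarks))
```

Run as written, the check flags the urban overage and the rural shortfall, the same imbalance the Kentucky experiment suggested enhanced lists tend to produce.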

“The weight-by-education fix post-2016, if applied in 2020, may result in pollsters just fighting the last war, and missing the new realities of 2020,” Carvalho says of Marist’s experiment.

 

Source: NBC News

 
