
Trump polls wrong again? Why experts are worried about “herding.”

Early voting patterns are interesting but potentially misleading.
Photo: Yasuyoshi Chiba/AFP/Getty Images

With less than a week to go until Election Day, polls show an incredibly close race between Donald Trump and Kamala Harris, both nationally and in the seven battleground states. Lately it’s been hard to find polls that show otherwise. And that has led to the suspicion, as often happens in the home stretch, that pollsters are “herding,” that is, nudging their numbers as close as possible to those of other pollsters, as Nate Silver has suggested.

Silver has been focused on herding for some time now. When he ran FiveThirtyEight, he offered an explanation for why some pollsters herd:

A further complication is “herding,” or the tendency of polls to produce results very similar to other polls, especially toward the end of a campaign. A methodologically inferior pollster can publish superficially good results by manipulating its polls to match those of the stronger pollsters. Left to its own devices, without stronger polls to guide it, it might not do as well. When we looked at Senate polls from 2006 to 2013, we found that methodologically poor pollsters improved their accuracy by about two points when there were also good polls in the field.

In other words, no one wants to experience the embarrassment of releasing a final preelection poll that turns out to be a complete outlier. Perhaps that’s why some of the worst outliers are actually produced by “stronger pollsters” that don’t have to worry about their reputations for accuracy, like the New York Times-Siena operation, which Silver singles out for the honesty of its data. Times-Siena had Joe Biden ahead among likely voters by nine points in its final national poll in mid-October 2020 (Biden won the national popular vote by 4.5 points). Worse still, a late-October Times-Siena poll showed Biden leading by 11 points in Wisconsin, the tipping-point state he actually carried by 0.7 percent.

This recurring phenomenon raises something of a philosophical question: When polls prove misleading, which is the bigger culprit, the high-quality pollster that publishes an outlier or the low-quality pollsters that “herd” in the same direction? That’s hard to say. An underlying question is how this herding occurs, assuming pollsters don’t simply look around and force their numbers to match everyone else’s. Earlier this week, political scientist Josh Clinton explained how the decisions all pollsters must make about their samples can significantly alter their results without any overt intent to cook the numbers:

After survey data is collected, pollsters must assess whether they need to adjust, or “weight,” the data to account for the very real possibility that the people who responded to the survey differ from those who did not respond. This involves answering four questions:

1. Do respondents demographically match the electorate in terms of gender, age, education, race, etc.? (This was a problem in 2016.)

2. Do respondents match the electorate politically, after the sample is adjusted for demographic factors? (That was the problem in 2020.)

3. Which respondents will vote?

4. Should the pollster trust the data?

Clinton goes on to show that pollsters’ answers to these questions can produce differences of up to eight percentage points in the horse-race numbers. The fact that the 2024 general-election polls actually show so little variation is probably the best evidence that pollsters are answering these questions as a “herd,” even if they aren’t putting a thumb on the scale for Trump or Harris.
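To see how much room for judgment the weighting step leaves, consider a minimal Python sketch. The numbers below are illustrative assumptions, not any pollster’s real data or method: it simply shows how one defensible choice, the education mix assumed for the electorate, moves a poll’s topline margin.

```python
# Illustrative sketch of survey weighting (toy numbers, not real data):
# the same raw responses produce different topline margins depending on
# what share of the electorate each demographic group is assumed to be.

# Candidate support within each group, as measured in the raw sample.
support = {
    "college":    {"harris": 0.58, "trump": 0.42},
    "no_college": {"harris": 0.44, "trump": 0.56},
}

def topline_margin(support, electorate_mix):
    """Weight each group to its assumed share of the electorate and
    return the Harris-minus-Trump margin in percentage points."""
    harris = sum(electorate_mix[g] * support[g]["harris"] for g in support)
    trump = sum(electorate_mix[g] * support[g]["trump"] for g in support)
    return 100 * (harris - trump)

# Two defensible assumptions about the electorate's education mix:
mix_a = {"college": 0.43, "no_college": 0.57}  # more college-educated turnout
mix_b = {"college": 0.38, "no_college": 0.62}  # heavier non-college turnout

print(f"Margin under mix A: {topline_margin(support, mix_a):+.1f} pts")  # ~ +0.0
print(f"Margin under mix B: {topline_margin(support, mix_b):+.1f} pts")  # ~ -1.4
```

In this toy example a single weighting choice swings the margin by about a point and a half; stack several such choices (education, age, recalled 2020 vote, likely-voter screens) and it is easy to see where Clinton’s eight-point spread comes from.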

We won’t know whether the herd was right or wrong until after the election, but in the pollsters’ defense, they have mostly worked hard to address the problems that led to major errors in state polls in 2016 and in both state and national polls in 2020. (And they were quite accurate in 2022.) Still, as Clinton argues in a recent article, this year’s error could be similar, and more systematically shared:

The fact that so many swing-state polls are reporting similarly close margins is a problem because it raises the question of whether the races are close because of the voters or because of the pollsters. Will 2024 be as close as 2020 because our politics are stable, or will 2024 polls merely look like the 2020 results because of the decisions pollsters make? The fact that the polls appear to be more tightly clustered than we would expect in a perfect polling world raises serious questions about the second scenario.
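That “more tightly clustered than we would expect” test rests on a simple statistical fact: even perfectly honest, independent polls should scatter because of sampling error alone. The Python sketch below illustrates the intuition; the tied race, sample size, and poll count are assumptions chosen for the example, not anyone’s actual calculation.

```python
import math
import random

# Honest, independent polls of the same race still scatter because of
# sampling error; a batch of published polls clustered much more tightly
# than this baseline is what raises the herding suspicion.

TRUE_HARRIS_SHARE = 0.50  # assume a genuinely tied two-way race
N_RESPONDENTS = 800       # a typical state-poll sample size
N_POLLS = 20

# Margin = 2p - 1, so its standard deviation from sampling alone
# is 2 * sqrt(p(1-p)/n), expressed here in percentage points.
sd_margin = 200 * math.sqrt(TRUE_HARRIS_SHARE * (1 - TRUE_HARRIS_SHARE) / N_RESPONDENTS)
print(f"Expected margin sd from sampling error alone: {sd_margin:.1f} pts")  # ~3.5

# Simulate N_POLLS unbiased polls and measure the spread of their margins.
random.seed(42)
margins = []
for _ in range(N_POLLS):
    harris = sum(random.random() < TRUE_HARRIS_SHARE for _ in range(N_RESPONDENTS))
    margins.append(100 * (2 * harris / N_RESPONDENTS - 1))

print(f"Spread across {N_POLLS} honest polls: {max(margins) - min(margins):.1f} pts")
```

Under these assumptions, sampling error alone implies a margin standard deviation of roughly 3.5 points, so twenty honest polls should span ten points or more; a set of published polls packed within a point or two of one another is tighter than chance plausibly allows.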

Polls aside, anxious pundits and supporters of both presidential candidates are understandably looking for signs that the deadlocked race might break one way or the other at the last minute. Some in both camps are obsessed with the fool’s gold of early-voting data; given the massive uncertainties over who these early voters are and whether their “banked” votes would otherwise have been cast later, early voting lets you “prove” whatever you want. Others obsess over subjective signs of “enthusiasm,” which can matter, but only insofar as it extends beyond voters already certain to vote and proves contagious (an “unenthusiastic” vote counts just as much as an “enthusiastic” one). A more relevant factor is the volume and effectiveness of last-minute advertising and voter-mobilization efforts, but the former tend to cancel each other out, and the latter are generally too subtle to weigh with any degree of certainty.

Finally, some observers place emphasis on late trends related to the country’s objective situation, particularly improvements in macroeconomic data. There are two problems with this approach: first, perceptions of the economy tend to be entrenched long before Election Day, and second, current voter perceptions of all sorts of phenomena have relatively little to do with objective evidence. Large swaths of the electorate believe, against all evidence, for example, that the economy is terrible and deteriorating, that we are in the midst of a nationwide crime wave, and that millions of undocumented immigrants are pouring into heartland communities to commit crimes and vote illegally. This is not an environment in which many voters will anxiously check statistics to see how America is doing.

If the polls turn out to be way off, we will almost certainly see a wave of post-election recriminations, with angry or frustrated people arguing that we should throw out objective indicators of how an election is going and instead rely on vibes, “gut feelings,” and our own prejudices. I hope that doesn’t happen. As imperfect as polls are (and, for that matter, economic indicators and crime or immigration statistics), they are much better than relying on cynical partisan hype, spin, and disinformation, all of which tend to reinforce themselves once they are given credibility. And as we already know, and may be reminded on November 5, it’s a short jump from rejecting polls to rejecting actual election results. Down that road lies another January 6, or something worse.
