US Polling Was Quietly Still Bad in 2024 - After nearly a decade of fine-tuning, the industry still hasn’t figured out how to reach enough Donald Trump supporters.

Polling Was Quietly Still Bad in 2024
The Atlantic (archive.ph)
By Marc Novicoff
2025-05-19 16:36:45 GMT

Pollsters seemed to finally get it right in 2024. After years of bad misses, they said the presidential election would be close, and it was.

In fact, the industry did not solve its problems last year. In 2016, pollsters famously underestimated Donald Trump by about 3.2 points on average. In 2024, after eight years of introspection, they underestimated Trump by … 2.9 points. Many of the most accurate pollsters last year were partisan Republican outfits; many of the least accurate were rigorous university polls run by political scientists.

Polls can’t be perfect; after all, they come with a margin of error. But they should not be missing in the same direction over and over. And chances are the problem extends beyond election polling to opinion surveys more generally. When Trump dismisses his low approval ratings as “fake polls,” he might just have a point.

For years, the media have been covering the travails of the polling industry, always with the premise that next time might be different. That premise is getting harder and harder to accept.

Polling used to be simple. You picked up the phone and dialed random digits. People answered their landlines and took your survey. Then you published the results. In 2000, nearly every national pollster used this methodology, known as random-digit dialing, and their average error was about two points. In subsequent elections, they got even closer, and the error, small as it was, shifted from overestimating Bush in 2000 to underestimating him in 2004—a good sign that the error was random.

Then came the Great Polling Miss of 2016. National polls actually came pretty close to predicting the final popular-vote total, but at the state level, particularly in swing states, they missed badly, feeding into the narrative that Hillary Clinton’s win was inevitable.

The 2016 miss was widely blamed on education polarization. College graduates preferred Clinton and were more likely to respond to polls. So, going forward, most pollsters began adjusting, or “weighting,” their results to counteract the underrepresentation of non-college-educated voters. In 2018, the polls nailed the midterms, and pollsters rejoiced.

That reaction turned out to be premature. The 2020 election went even worse for the polling industry than 2016 had. On average, pollsters had underestimated Trump again, this time by four points. Joe Biden won, but by a much slimmer margin than had been predicted.

This sent pollsters searching for a solution yet again. If weighting by education didn’t work, then there must be something specific about Trump voters—even Trump voters with a college degree—that made them less likely to answer a poll. So, many pollsters figured, the best way to solve this would be weighting by whether the respondent had previously voted for Trump, or identified as a Republican. This was a controversial move in polling circles. The proportion of the electorate that is Democratic or Republican, or Trump-voting, changes from election to election; that’s why polls exist in the first place. Could such elaborate modeling turn polls into something more like predictions than surveys?

“This is where some of the art and science get a little mixed up,” Michael Bailey, a Georgetown professor who studies polling, told me. If you weight a sample to be 30 percent Republican, 30 percent Democrat, and 40 percent independent—because that’s roughly how people self-identify when asked—you are making an assumption about how the three groups will behave, not merely matching a poll to population demographics such as age, gender, and education.
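To see how much those assumptions can matter, consider a minimal sketch of party-ID weighting in Python. The 30/30/40 targets come from Bailey's example above; the toy sample, the function names, and every other number are illustrative assumptions, not any pollster's actual data or code.

```python
# Minimal sketch of party-ID weighting. Each respondent gets a weight of
# target_share / sample_share for their party, so the weighted sample
# matches an assumed partisan mix of the electorate.

# Assumed electorate: 30% Republican, 30% Democrat, 40% independent.
TARGETS = {"R": 0.30, "D": 0.30, "I": 0.40}

def party_weights(respondents):
    """respondents: list of (party, candidate) tuples."""
    n = len(respondents)
    counts = {}
    for party, _ in respondents:
        counts[party] = counts.get(party, 0) + 1
    return [TARGETS[party] / (counts[party] / n) for party, _ in respondents]

def weighted_margin(respondents, cand_a, cand_b):
    """Weighted (cand_a minus cand_b) margin, in percentage points."""
    w = party_weights(respondents)
    a = sum(wi for wi, (_, c) in zip(w, respondents) if c == cand_a)
    b = sum(wi for wi, (_, c) in zip(w, respondents) if c == cand_b)
    return 100 * (a - b) / sum(w)

# A toy sample that over-represents Democrats (50 D, 25 R, 25 I).
sample = ([("D", "Harris")] * 50 + [("R", "Trump")] * 25
          + [("I", "Trump")] * 13 + [("I", "Harris")] * 12)
print(weighted_margin(sample, "Harris", "Trump"))  # about -1.6: Trump +1.6
```

Raw, this toy sample has Harris up by 24 points; weighted, it shows Trump narrowly ahead. The entire swing comes from the assumed partisan mix, which is exactly the judgment call Bailey is describing.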

These assumptions vary from pollster to pollster, often reflecting their unconscious biases. And for most pollsters, these biases seem to point in the same direction: underestimating Trump and overestimating his opponent. “Most pollsters, like most other people in the expert class, are probably not huge fans of Trump,” the election-forecasting expert Nate Silver told me. This personal dislike may not seem to matter much—after all, this should be a science—but every decision about weighting is a judgment call. Will suburban women show up to vote in 2024? Will young men? What about people who voted for Trump in 2020? All three of these respondent groups have a different weight in an adjusted sample, and the weight that a pollster chooses reflects what the pollster, not the respondents, thinks about the election. Some pollsters might even adjust their weights after the fact if they see a result they find hard to believe. The problem is that sometimes, things that are hard to believe happen, such as Latino voters moving 16 points to the right.

This dynamic might explain a curious exception to the trend last year. Overall, most polls missed yet again: The average error was a three-point underestimate of Trump, roughly the same as in 2016. But Republican-aligned pollsters did better. In fact, according to Silver’s model (others have similar results), four of the five most accurate pollsters in 2024, and seven of the top 10, were right-leaning firms—not because their methods were different, but because their biases were.

The most basic problem in 2024 was the same as in 2016: nonresponse bias, the name for the error that is introduced by the fact that people who take polls are different from those who don’t.

A pollster can weight their way out of this problem if the difference between those who respond and those who don’t is an observable demographic characteristic, such as age or gender. If the difference is not easily observable, and it’s correlated with how people vote, then the problem becomes extremely difficult to surmount.

Take the fact that Trump voters tend to be, on average, less trusting of institutions and less engaged with politics. Even if you perfectly sample the right proportion of men, the right proportions of each age group and education level, and even the right proportion of past Trump voters, you will still pick up the most engaged and trusting voters within each of those groups—who else would spend 10 minutes filling out a poll?—and such people were less likely to vote for Trump in 2024. So after all that weighting and modeling, you still wind up with an underestimate of Trump. (This probably explains why pollsters did quite well in 2018 and 2022: disengaged voters tend to turn out less during midterm elections.)
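A small simulation shows why the weighting fails. Assume, purely for illustration, that every demographic group splits the same way between high-trust and low-trust voters, so weighting on demographics is a no-op by construction, and the only thing separating respondents from non-respondents is trust, which the pollster cannot observe. All the probabilities below are made up.

```python
import random

random.seed(0)

# Toy electorate: half high-trust, half low-trust, identically distributed
# across every demographic cell. Low-trust voters lean Trump and almost
# never answer polls; high-trust voters lean Harris and often do.
def make_voter():
    high_trust = random.random() < 0.5
    votes_trump = random.random() < (0.40 if high_trust else 0.60)
    responds = random.random() < (0.10 if high_trust else 0.02)
    return high_trust, votes_trump, responds

population = [make_voter() for _ in range(200_000)]

true_share = sum(v for _, v, _ in population) / len(population)
respondents = [v for _, v, r in population if r]
polled_share = sum(respondents) / len(respondents)

print(f"true Trump share:   {true_share:.1%}")    # about 50%
print(f"polled Trump share: {polled_share:.1%}")  # about 43%
```

With these made-up numbers, the poll lands roughly seven points below Trump's true support, and no weighting on age, gender, or education can repair it, because within every demographic cell the respondents are the wrong respondents.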

This problem almost certainly afflicts presidential-approval polls too, though there’s no election to test their accuracy against. Low-trust voters who don’t answer polls don’t suddenly transform into reliable respondents once the election’s over. According to Nate Silver’s Silver Bulletin poll aggregator, Trump’s approval is currently six percentage points underwater. But if those approval polls are plagued by the same nonresponse bias as election surveys were last year—which could well be the case—then he’s only about three points underwater. That might not seem like a big difference, but it would make Trump’s approval rating historically pedestrian, in line with where Gerald Ford was at roughly this point in his presidency, rather than historically low.

Jason Barabas, a Dartmouth College political scientist, knows something about nonresponse bias. Last year, he directed the new Dartmouth Poll, described by the college as “an initiative aimed at establishing best practices for polling in New Hampshire.” Barabas and his students mailed out more than 100,000 postcards across New Hampshire, each with a unique code to complete a poll online. This method is not cheap, but it delivers randomness, like old-school random-digit dialing.

The Dartmouth Poll also applied all the latest statistical techniques. It was weighted on gender, age, education, partisanship, county, and congressional district, and then fed through a turnout model based on even more of the respondent’s biographical details. The methodology was set beforehand, in keeping with scientific best practices, so that Barabas and his research assistant couldn’t mess with the weights after the fact to get a result that fit with their expectations. They also experimented with ways to increase response rates: Some respondents were motivated by the chance to win $250, some were sent reminders to respond, and some received a version of the poll that was framed in terms of “issues” rather than the upcoming election.

In the end, none of it mattered. Dartmouth’s polling was a disaster. Its final survey showed Kamala Harris up by 28 points in New Hampshire. That was wrong by an order of magnitude; she would win the state by 2.8 points the next day. A six-figure budget, sophisticated methods, the integrity to preregister them, and the bravery to release an outlier poll anyway—all that, only to produce what appears to have been the most inaccurate poll of the entire 2024 cycle, and one of the worst results in American polling history.

Barabas isn’t totally sure what happened. But he and his students do have one theory: their poll’s name. Trust in higher education is polarized on political lines. Under this theory, Trump-voting New Hampshirites saw a postcard from Dartmouth, an Ivy League school with a mostly liberal faculty and student body, and didn’t respond—whereas anti-Trump voters in the state leaped at the opportunity to answer mail from their favorite institution. The Dartmouth Poll is an extreme example, but the same thing is happening basically everywhere: People who take surveys are people who have more trust in institutions, and people who have more trust in institutions are less likely to vote for Trump.

Once a pollster wraps their head around this point, their options become slim. They could pay poll respondents in order to reach people who wouldn’t otherwise be inclined to answer. The New York Times tried this in collaboration with the polling firm Ipsos, paying up to $25 to each respondent. They found that they reached more moderate voters who usually don’t answer the phone and who were more likely to vote for Trump, but said the differences were “relatively small.”

Or pollsters can get more creative with their weights. Jesse Stinebring, a co-founder of the Democratic polling firm Blue Rose Research, told me that his company asks whether respondents “believe that sometimes a child needs a good hard spanking”—a belief disproportionately held by the type of American who doesn’t respond to surveys—and uses the answer alongside the usual weights.

Bailey, the Georgetown professor, has an even more out-there proposal. Say you run a poll with a 5 percent response rate that shows Harris winning by four points, and a second poll with a 35 percent response rate that shows her winning by one point. In that situation, Bailey says, you can infer that every 10 points of response rate increases Trump’s margin by one percentage point. So if the election has a 65 percent turnout rate, that should mean a two-point Trump victory. It’s “a new way of thinking,” Bailey admitted, in a bit of an understatement. But can you blame him?
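Mechanically, Bailey's proposal amounts to a linear extrapolation in the response rate. Here is a sketch using the article's hypothetical numbers; the two-poll setup and variable names are just for illustration, not Bailey's actual model.

```python
# Bailey-style extrapolation from response rate to margin.
# Margins are Trump minus Harris, in percentage points.
r1, m1 = 5.0, -4.0   # poll with a 5% response rate: Harris +4
r2, m2 = 35.0, -1.0  # poll with a 35% response rate: Harris +1

# Each added point of response rate shifts the margin toward Trump.
slope = (m2 - m1) / (r2 - r1)  # = 0.1 points per response-rate point

# Treat a 65% turnout rate as if it were a 65% "response rate."
turnout = 65.0
predicted = m2 + slope * (turnout - r2)
print(predicted)  # 2.0, i.e., a two-point Trump victory
```

The sketch leans entirely on the assumption that the relationship stays linear all the way out to 65 percent, which is part of what makes it "a new way of thinking."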

To be clear, political polls can be valuable even if they underestimate Republicans by a few points. For example, Biden likely would have stayed in the 2024 race if polls hadn’t shown him losing to Trump by an insurmountable margin—one that was, in retrospect, almost certainly understated.

The problem is that people expect the most from polls when elections are close, but that is when polls are the least reliable, given the inevitability of error. And if the act of answering a survey, or engaging in politics at all, correlates so strongly with one side, then pollsters can only do so much.

The legendary Iowa pollster Ann Selzer has long hated the idea of baking your own assumptions into a poll, which is why she used weights for only a few variables, all demographic. For decades, this stubborn refusal to guess in advance earned her both accurate poll results and the adoration of those who study polling: In 2016, a 538 article called her “The Best Pollster in Politics.”

Selzer’s final poll of 2024 showed Harris leading in Iowa by three percentage points. Three days later, Trump would win the state by 13 points, a stunning 16-point miss.

A few weeks after the election, Selzer released an investigation into what might have gone wrong. “To cut to the chase,” she concluded, “I found nothing to illuminate the miss.” The same day the analysis was published, she retired from election polling.
 
Cope. Rich Baris, Rasmussen Reports, Robert Cahaly and the Trafalgar Group are all capable of polling accurately. The mainstream media's polling was off because they're just propagandists, and here we have The Atlantic, the biggest of all the propagandists, spewing forth some carefully constructed denial of reality once again.
 
Where did those 80 mil supposed Biden voters suddenly disappear to? Like magic they simply poofed away... I'm sure "voting harder" will save a co-opted nation that's been on the downward slope since 1913, when Bankerbergs gained control over the issuance of our fiat currency. "Voting" has always been able to save co-opted dying empires. They should have voted harder in Rome, that would have stopped the barbarians at the gates. Also...
Jason Barabas, a Dartmouth College political scientist
No wonder academia is Judeo-Communist as fuck.
 
I don't know anyone who answers unknown phone numbers anymore, the way the old mainstream pollsters used to reach out, besides old ladies in my town. Of course it's going to be pretty off. Plus mainstream polling is out of date and known for fucking up. Most won't feel comfortable telling you who they vote for unless they know you share similar beliefs and/or won't chimp out at them.
 
Gotta be honest guys, when they poll me I lie. Why you ask? Because they'll never fucking learn.

I don't know anyone who answers unknown phone numbers anymore, the way the old mainstream pollsters used to reach out, besides old ladies in my town. Of course it's going to be pretty off. Plus mainstream polling is out of date and known for fucking up. Most won't feel comfortable telling you who they vote for unless they know you share similar beliefs and/or won't chimp out at them.

That's how you get stupid bitches like Ann Selzer getting results such as Harris winning Iowa by three.

But, to my point, if you get numbers from a video game lobby, the people polling aren't going to get the answers they want, so it's right back to old bags who think their SS is going to get cut. That's why at this point they're just weaponized.
 
"They said it would be close and it was"

Close? The only positive that can be said of Kamala's campaign is that she did not lose as badly as Mondale. An election where one side clean-sweeps the swing states and flips several opposition strongholds is not a close election. It's a landslide.
It was 49.8 to 48.3. That's the difference of a couple people out of one hundred.
 
Let's be honest

The pollsters were correct; they were just afraid to say the truth. Trump was always ahead in the polls, but that wasn't allowed, right? Look what happened to Rasmussen when they started saying Trump was leading in the polls: they were dogpiled by the MSM and called GOP plants and shills by every other voice.

Every pollster just assumed a 3am ballot dump would happen and they didn't dare be the ones to ruin the plan. So yah, they massaged their data to make sure they had the right opinions to support a Harris win.

"why yes, our polls always showed that Harris was in the lead. The fact she won each swing state at 3am is just how votes are counted silly bigot' etc etc.
 
It was 49.8 to 48.3. That's the difference of a couple people out of one hundred.

- First time a Republican won the popular vote in 20 years.

- Won all the swing states.

- Created an impossible three-color map showing she didn't flip a single county.

- Made gains among all demographics, including ones from "floating piles of garbage" that I was told were definitely upset and were switching their vote.

Call it whatever you want, when things that don't ever happen actually happen, you got your ass kicked.

The pollsters were correct; they were just afraid to say the truth. Trump was always ahead in the polls, but that wasn't allowed, right? Look what happened to Rasmussen when they started saying Trump was leading in the polls: they were dogpiled by the MSM and called GOP plants and shills by every other voice.

Weren't they (Rasmussen) kicked out of 538's aggregate? You know, I haven't seen them around lately, how they doin'?
 
Many of the most accurate pollsters last year were partisan Republican outfits; many of the least accurate were rigorous university polls run by political scientists.
You're telling me that the pollsters who didn't give in to the gaslighting of the Dems & Media had the most accurate results? Yet this piece labels them partisan hacks. Meanwhile the pollsters who fell for YAAS BRAT QUEEN SUMMER were the least accurate and they get called "rigorous" lolololololololololol
If weighting by education didn’t work, then there must be something specific about Trump voters—even Trump voters with a college degree—that made them less likely to answer a poll.
We don't like being called Bigots or Deplorables.
Selzer’s final poll of 2024 showed Harris leading in Iowa by three percentage points. Three days later, Trump would win the state by 13 points, a stunning 16-point miss.
She fell for the gaslighting. Same for that one dude using his "keys".
 
Oh that’s an easy problem to solve. You just need to trick MAGA voters into opening up instead of flipping their dick out and yelling “Let’s go Brandon.” Start with softball questions like “Do you think 1st cousin marriage should be legal?” or “How much stipend should the government give you for lotto tickets?”
 
That's how you get stupid bitches like Ann Selzer getting results such as Harris winning Iowa by three.
Selzer wasn't stupid. She cherry-picked poll results from the AWFL demographic to try and force a surprise showing of support, the same way Trump has in all his elections. She was purposefully lying to skew public perception, probably for money.

The other thing that swayed this election is the Republican Party having FIVE HUNDRED lawyers on standby for election night to immediately challenge any election tomfoolery. I wonder why that's never mentioned in any of these articles.
 
I lie too. Whoever is at the door always has a political bias, and the answer they get, if I didn't manage to hide that I'm home, is whatever answer is least likely to get a chimp-out, and the fastest way to get them out of my hair. That means figuring out which view they're from, which is easy. Where I am it's always the left, and they won't leave me alone, so of course I agree as quick as I can to get them to fuck right off.

I'd also lie like a dog on any survey I can't trust the anonymity of, which is all of them. I tell people what they wanna hear on this shit, then I go actually vote or behave the way I feel is most honest to myself.

I'm not here to get cancelled, fired, and shit upon by the latest liberal bullshit or the local niggers and chinks.
 
No shit. Maybe if the left-wing zeitgeist didn’t like to threaten life ruination on the other half of the country, they might be able to get people who disagree with them to answer phone pollsters with something other than “gargle my balls”. And this doesn’t even include the fact that polls are used to impose consensus rather than observe it. Total pollster death. Total journalist death.
 