US Polling Was Quietly Still Bad in 2024 - After nearly a decade of fine-tuning, the industry still hasn’t figured out how to reach enough Donald Trump supporters.

Polling Was Quietly Still Bad in 2024
The Atlantic (archive.ph)
By Marc Novicoff
2025-05-19 16:36:45 GMT

Pollsters seemed to finally get it right in 2024. After years of bad misses, they said the presidential election would be close, and it was.

In fact, the industry did not solve its problems last year. In 2016, pollsters famously underestimated Donald Trump by about 3.2 points on average. In 2024, after eight years of introspection, they underestimated Trump by … 2.9 points. Many of the most accurate pollsters last year were partisan Republican outfits; many of the least accurate were rigorous university polls run by political scientists.

Polls can’t be perfect; after all, they come with a margin of error. But they should not be missing in the same direction over and over. And chances are the problem extends beyond election polling to opinion surveys more generally. When Trump dismisses his low approval ratings as “fake polls,” he might just have a point.

For years, the media have been covering the travails of the polling industry, always with the premise that next time might be different. That premise is getting harder and harder to accept.

Polling used to be simple. You picked up the phone and dialed random digits. People answered their landlines and took your survey. Then you published the results. In 2000, nearly every national pollster used this methodology, known as random-digit dialing, and their average error was about two points. In subsequent elections, they got even closer, and the error, small as it was, shifted from overestimating Bush in 2000 to underestimating him in 2004—a good sign that the error was random.

Then came the Great Polling Miss of 2016. National polls actually came pretty close to predicting the final popular-vote total, but at the state level, particularly in swing states, they missed badly, feeding into the narrative that Hillary Clinton’s win was inevitable.

The 2016 miss was widely blamed on education polarization. College graduates preferred Clinton and were more likely to respond to polls. So, going forward, most pollsters began adjusting, or “weighting,” their results to counteract the underrepresentation of non-college-educated voters. In 2018, the polls nailed the midterms, and pollsters rejoiced.

That reaction turned out to be premature. The 2020 election went even worse for the polling industry than 2016 had. On average, pollsters had underestimated Trump again, this time by four points. Joe Biden won, but by a much slimmer margin than had been predicted.

This sent pollsters searching for a solution yet again. If weighting by education didn’t work, then there must be something specific about Trump voters—even Trump voters with a college degree—that made them less likely to answer a poll. So, many pollsters figured, the best way to solve this would be weighting by whether the respondent had previously voted for Trump, or identified as a Republican. This was a controversial move in polling circles. The proportion of the electorate that is Democratic or Republican, or Trump-voting, changes from election to election; that’s why polls exist in the first place. Could such elaborate modeling turn polls into something more like predictions than surveys?

“This is where some of the art and science get a little mixed up,” Michael Bailey, a Georgetown professor who studies polling, told me. If you weight a sample to be 30 percent Republican, 30 percent Democrat, and 40 percent independent—because that’s roughly how people self-identify when asked—you are making an assumption about how the three groups will behave, not merely matching a poll to population demographics such as age, gender, and education.
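
What weighting by partisanship looks like mechanically is easy to show. Below is a minimal sketch in Python, using Bailey's 30/30/40 split as the target; the sample's composition and candidate preferences are hypothetical, invented here for illustration.

```python
# Minimal sketch of weighting a poll to an assumed party-ID split.
# The 30/30/40 targets echo Bailey's example; the sample is hypothetical.
from collections import Counter

# (party_id, candidate) for each respondent -- a sample that, like many
# real ones, over-represents Democrats
sample = (
    [("R", "Trump")] * 200 + [("R", "Harris")] * 20 +
    [("D", "Trump")] * 15  + [("D", "Harris")] * 300 +
    [("I", "Trump")] * 120 + [("I", "Harris")] * 130
)

targets = {"R": 0.30, "D": 0.30, "I": 0.40}  # the pollster's assumption

n = len(sample)
observed = Counter(party for party, _ in sample)
# Up- or down-weight each respondent so weighted party shares hit the targets.
weights = {p: targets[p] / (observed[p] / n) for p in targets}

trump  = sum(weights[p] for p, c in sample if c == "Trump")
harris = sum(weights[p] for p, c in sample if c == "Harris")
raw_t  = sum(1 for _, c in sample if c == "Trump")

print(f"Raw Trump margin:      {100 * (2 * raw_t - n) / n:+.1f} points")
print(f"Weighted Trump margin: {100 * (trump - harris) / (trump + harris):+.1f} points")
```

Nudge the targets a point or two and the headline margin moves with them, which is precisely the sense in which the weights encode the pollster's beliefs rather than the respondents' answers.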

These assumptions vary from pollster to pollster, often reflecting their unconscious biases. And for most pollsters, these biases seem to point in the same direction: underestimating Trump and overestimating his opponent. “Most pollsters, like most other people in the expert class, are probably not huge fans of Trump,” the election-forecasting expert Nate Silver told me. This personal dislike may not seem to matter much—after all, this should be a science—but every decision about weighting is a judgment call. Will suburban women show up to vote in 2024? Will young men? What about people who voted for Trump in 2020? All three of these respondent groups have a different weight in an adjusted sample, and the weight that a pollster chooses reflects what the pollster, not the respondents, thinks about the election. Some pollsters might even adjust their weights after the fact if they see a result they find hard to believe. The problem is that sometimes, things that are hard to believe happen, such as Latino voters moving 16 points to the right.

This dynamic might explain a curious exception to the trend last year. Overall, most polls missed yet again: The average error was a three-point underestimate of Trump, roughly the same as in 2016. But Republican-aligned pollsters did better. In fact, according to Silver’s model (others have similar results), four of the five most accurate pollsters in 2024, and seven of the top 10, were right-leaning firms—not because their methods were different, but because their biases were.

The most basic problem in 2024 was the same as in 2016: nonresponse bias, the name for the error that is introduced by the fact that people who take polls are different from those who don’t.

A pollster can weight their way out of this problem if the difference between those who respond and those who don’t is an observable demographic characteristic, such as age and gender. If the difference is not easily observable, and it’s correlated with how people vote, then the problem becomes extremely difficult to surmount.

Take the fact that Trump voters tend to be, on average, less trusting of institutions and less engaged with politics. Even if you perfectly sample the right proportion of men, the right proportions of each age group and education level, and even the right proportion of past Trump voters, you will still pick up the most engaged and trusting voters within each of those groups—who else would spend 10 minutes filling out a poll?—and such people were less likely to vote for Trump in 2024. So after all that weighting and modeling, you still wind up with an underestimate of Trump. (This probably explains why pollsters did quite well in 2018 and 2022: disengaged voters tend to turn out less during midterm elections.)
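
A toy simulation, with made-up numbers, shows why no amount of weighting on observables escapes this trap: if an unobserved trait such as institutional trust drives both the decision to respond and the vote, the respondents are unrepresentative inside every demographic cell.

```python
# Hypothetical simulation of the article's point: a sample stays biased if an
# unobserved trait (here, institutional trust) drives both response and vote.
import random

random.seed(0)

POP = 100_000
population = []
for _ in range(POP):
    trust = random.random()  # unobserved institutional trust, 0 to 1
    # Assumption: low-trust people lean Trump and rarely answer polls.
    votes_trump = random.random() < (0.65 - 0.30 * trust)
    responds = random.random() < (0.01 + 0.09 * trust)
    population.append((votes_trump, responds))

true_share = sum(v for v, _ in population) / POP
respondents = [v for v, r in population if r]
polled_share = sum(respondents) / len(respondents)

print(f"True Trump share:   {100 * true_share:.1f}%")    # ~50%
print(f"Polled Trump share: {100 * polled_share:.1f}%")  # ~46%
```

The gap of roughly four points persists no matter how the sample is weighted on observables, because trust is invisible to the pollster and correlated with both response and vote.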

This problem almost certainly afflicts presidential-approval polls too, though there’s no election to test their accuracy against. Low-trust voters who don’t answer polls don’t suddenly transform into reliable respondents once the election’s over. According to Nate Silver’s Silver Bulletin poll aggregator, Trump’s approval is currently six percentage points underwater. But if those approval polls are plagued by the same nonresponse bias as election surveys were last year—which could well be the case—then he’s at only negative 3 points. That might not seem like a big difference, but it would make Trump’s approval rating historically pedestrian, in line with where Gerald Ford was at roughly this point in his presidency, rather than historically low.
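
The arithmetic behind that adjustment is a one-liner; the assumption doing all the work is that the average 2024 election-poll miss of about three points carries over to approval surveys.

```python
# Back-of-envelope adjustment described in the article.
measured_net_approval = -6.0  # Silver Bulletin aggregate, per the article
assumed_2024_bias = 3.0       # avg. 2024 underestimate of Trump, in points

print(f"Adjusted net approval: {measured_net_approval + assumed_2024_bias:+.0f} points")
```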

Jason Barabas, a Dartmouth College political scientist, knows something about nonresponse bias. Last year, he directed the new Dartmouth Poll, described by the college as “an initiative aimed at establishing best practices for polling in New Hampshire.” Barabas and his students mailed out more than 100,000 postcards across New Hampshire, each with a unique code to complete a poll online. This method is not cheap, but it delivers randomness, like old-school random-digit dialing.

The Dartmouth Poll also applied all the latest statistical techniques. It was weighted on gender, age, education, partisanship, county, and congressional district, and then fed through a turnout model based on even more of the respondent’s biographical details. The methodology was set beforehand, in keeping with scientific best practices, so that Barabas and his research assistant couldn’t mess with the weights after the fact to get a result that fit with their expectations. They also experimented with ways to increase response rates: Some respondents were motivated by the chance to win $250, some were sent reminders to respond, and some received a version of the poll that was framed in terms of “issues” rather than the upcoming election.
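
For readers curious what weighting on half a dozen variables at once involves, the standard tool is raking (iterative proportional fitting). The sketch below uses two hypothetical variables and invented targets; it is not the Dartmouth Poll's actual code, and the real pipeline fed the weighted sample into a turnout model besides.

```python
# Minimal raking (iterative proportional fitting) sketch: adjust weights
# until the sample matches several marginal targets at once. Respondents
# and targets here are hypothetical.
respondents = [
    {"gender": "F", "college": True},  {"gender": "F", "college": True},
    {"gender": "F", "college": False}, {"gender": "M", "college": True},
    {"gender": "M", "college": True},  {"gender": "M", "college": False},
]
targets = {
    "gender":  {"F": 0.51, "M": 0.49},
    "college": {True: 0.40, False: 0.60},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # alternate over variables until the margins converge
    for var, shares in targets.items():
        total = sum(weights)
        factors = {
            level: share / (sum(w for w, r in zip(weights, respondents)
                                if r[var] == level) / total)
            for level, share in shares.items()
        }
        weights = [w * factors[r[var]] for w, r in zip(weights, respondents)]

for r, w in zip(respondents, weights):
    print(r, round(w, 3))
```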

In the end, none of it mattered. Dartmouth’s polling was a disaster. Its final survey showed Kamala Harris up by 28 points in New Hampshire. That was wrong by an order of magnitude; she would win the state by 2.8 points the next day. A six-figure budget, a sophisticated methodology, the integrity to preregister that methodology, and the bravery to release an outlier poll anyway—all that, only to produce what appears to have been the most inaccurate poll of the entire 2024 cycle, and one of the worst results in American polling history.

Barabas isn’t totally sure what happened. But he and his students do have one theory: their poll’s name. Trust in higher education is polarized on political lines. Under this theory, Trump-voting New Hampshirites saw a postcard from Dartmouth, an Ivy League school with a mostly liberal faculty and student body, and didn’t respond—whereas anti-Trump voters in the state leaped at the opportunity to answer mail from their favorite institution. The Dartmouth Poll is an extreme example, but the same thing is happening basically everywhere: People who take surveys are people who have more trust in institutions, and people who have more trust in institutions are less likely to vote for Trump.

Once a pollster wraps their head around this point, their options become slim. They could pay poll respondents in order to reach people who wouldn’t otherwise be inclined to answer. The New York Times tried this in collaboration with the polling firm Ipsos, paying up to $25 to each respondent. They found that they reached more moderate voters who usually don’t answer the phone and who were more likely to vote for Trump, but said the differences were “relatively small.”

Or pollsters can get more creative with their weights. Jesse Stinebring, a co-founder of the Democratic polling firm Blue Rose Research, told me that his company asks whether respondents “believe that sometimes a child needs a good hard spanking”—a belief disproportionately held by the type of American who doesn’t respond to surveys—and uses the answer alongside the usual weights.

Bailey, the Georgetown professor, has an even more out-there proposal. Say you run a poll with a 5 percent response rate that shows Harris winning by four points, and a second poll with a 35 percent response rate that shows her winning by one point. In that situation, Bailey says, you can infer that every 10 points of response rate increases Trump’s margin by one percentage point. So if the election has a 65 percent turnout rate, that should mean a two-point Trump victory. It’s “a new way of thinking,” Bailey admitted, in a bit of an understatement. But can you blame him?
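
Bailey's numbers are easy to check. Here is a sketch of the extrapolation, where the load-bearing assumptions are that the margin moves linearly with response rate and that turning out to vote is analogous to responding to a poll:

```python
# Sketch of the response-rate extrapolation Bailey describes, using the
# article's hypothetical polls. Margin is Trump minus Harris, in points.
polls = [(5, -4.0), (35, -1.0)]  # (response rate %, margin): Harris +4, +1

(r1, m1), (r2, m2) = polls
slope = (m2 - m1) / (r2 - r1)  # = 0.1: one Trump point per 10 points of response

turnout = 65  # treat expected turnout (%) as a 65 percent "response rate"
print(f"Projected Trump margin at {turnout}% turnout: {m1 + slope * (turnout - r1):+.1f}")
# -> Projected Trump margin at 65% turnout: +2.0
```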

To be clear, political polls can be valuable even if they underestimate Republicans by a few points. For example, Biden likely would have stayed in the 2024 race if polls hadn’t shown him losing to Trump by an insurmountable margin—one that was, in retrospect, almost certainly understated.

The problem is that people expect the most from polls when elections are close, but that is when polls are the least reliable, given the inevitability of error. And if the act of answering a survey, or engaging in politics at all, correlates so strongly with one side, then pollsters can only do so much.

The legendary Iowa pollster Ann Selzer has long hated the idea of baking your own assumptions into a poll, which is why she used weights for only a few variables, all demographic. For decades, this stubborn refusal to guess in advance earned her both accurate poll results and the adoration of those who study polling: In 2016, a 538 article called her “The Best Pollster in Politics.”

Selzer’s final poll of 2024 showed Harris leading in Iowa by three percentage points. Three days later, Trump would win the state by 13 points, a stunning 16-point miss.

A few weeks after the election, Selzer released an investigation into what might have gone wrong. “To cut to the chase,” she concluded, “I found nothing to illuminate the miss.” The same day the analysis was published, she retired from election polling.
 
the answer is simple: "partisan republican" pollsters put their thumb on the scale much less. "rigorous" polls aren't intended to deliver accurate results, they're intended to create the narrative that the democrat is winning

it's a basic concept: new competitors ("partisan republican" pollsters) try to grab market share by delivering a better product, while the established competitors ("rigorous" pollsters) stick to the grooves in the market they made over a long time. after all, the grooves worked for them for that long time
 
She fell for the gaslighting. Same for that one dude using his "keys".

Allan Lichtman. His methodology is actually brilliant and fun to play around with. Too bad he has terminal TDS and a dipshit son who doesn't have the balls to tell him to shut up.

Selzer wasn't stupid. she cherry-picked poll results from the AWFL demographic to try and force a surprise showing of support the same way Trump has in all his elections. she was purposefully lying to skew public perception. probably for money

When you ruin your reputation because "Orange Man Bad" like Selzer and Lichtman did, you're stupid.

the other thing that swayed this election is the republican party having FIVE HUNDRED lawyers on standby for election night to immediately challenge any election tomfoolery. i wonder why that's never mentioned in any of these articles

Trump's reputation isn't built on predicting elections and considering the Democrats have perfected lawfare 500 lawyers doesn't seem like enough. Marc Elias isn't so tough when someone's there to push back, is he?

No shame in that - I do the same thing, mostly because "fuck pollsters".

Brother, I'm not ashamed, if someone asks me I'll openly brag about it.

Why can't we reach these fucking retards who we hate and portray as nothing but stupid and evil? Why won't these goddam pieces of shit answer our polls? We're just from a wittle ole university uwu

Well, this is their other problem, which technically isn't their fault: If these people did try to change their methodology and reach out to people like us would you give them the time of day? I wouldn't.
 
They don’t mention that parties and candidates themselves constantly send fake “polls” designed to get someone to engage, but aren’t actually polling anything.

I don’t think this problem can be fixed. People are buried under an avalanche of spam texts, calls, and emails daily. Who responds to anything that isn’t familiar or expected?

This won’t stop companies from charging campaigns and organizations millions for doing it though.
 
It was 49.8 to 48.3. That's the difference of a couple people out of one hundred.
The problem is a lot of that margin was packed into states like California and New York which were foregone conclusions, while the electoral outlook looked miserable for Harris.
EAT A BAG OF RAZOR BLADES YOU FRAUDULENT SHITHEADS. Hillary rigged both sides of that election and still lost.
She literally rigged it to lose because that moron Robbie Mook thought the electoral victory was certain and wanted to drive up the popular vote, so ignored swing states to focus on driving up the vote in already-won states. She had the most flabbergastingly incompetent campaign I've ever seen.
 
Well, this is their other problem, which technically isn't their fault: If these people did try to change their methodology and reach out to people like us would you give them the time of day? I wouldn't.
For a long time, pollsters simply rejected answers they didn't like as not being genuine, the same way IMDB/Rotten Tomatoes in the mid-2010s deleted bad reviews of DEI-infested movies as "trolls", because they fully believed in the myths of the "modern voter" and "demographic destiny".

And when they finally realized that no, that's how people really did feel about things? After 15 years of trying to cram a false narrative down everyone's throats, first out of naivety, then out of denial? It was too late, their reputations were shot and nobody thinks they are trustworthy going forward.

So, even if they honestly don't put a thumb on the scale? So many people think they will that they just won't engage with them and they get the same useless biased results.


that moron Robbie Mook thought the electoral victory was certain and wanted to drive up the popular vote, so ignored swing states to focus on driving up the vote in already-won states. She had the most flabbergastingly incompetent campaign I've ever seen.
20 trips to California... never once set foot in a half-dozen midwest states. If you'd told a political commentator in the late 90's that in only 20 years, a mainstream Dem would make that mistake? They'd never have believed you.

She fell for the gaslighting. Same for that one dude using his "keys".
His methodology actually is not glaringly wrong, it's just that his TDS was so strong he couldn't objectively award the keys... he had everything in front of him to make the right prediction, and then vigorously talked himself out of it with personal "yeahbutts" and "that doesn't count" exceptions to what the raw numbers were telling him.

He had a piece of paper that said "2+2 = " and he wrote "5" because "I don't like even numbers", and then broke down in tears when everyone else in the room answered "4".
 
For a long time, pollsters simply rejected answers they didn't like as not being genuine, the same way IMDB/Rotten Tomatoes in the mid-2010s deleted bad reviews of DEI-infested movies as "trolls", because they fully believed in the myths of the "modern voter" and "demographic destiny".
They do the same shit with climate 'science'. Just ignore any results that don't prove their hypothesis.
20 trips to California... never once set foot in a half-dozen midwest states. If you'd told a political commentator in the late 90's that in only 20 years, a mainstream Dem would make that mistake? They'd never have believed you
Not just a mainstream Dem, the wife of Bill Clinton. Who, by the way, told her repeatedly that she needed to reach out to working-class, Rust Belt voters. But apparently she knew better. She's won so many Presidential elections.
 
She fell for the gaslighting. Same for that one dude using his "keys".

Selzer didn't fall for anything, she was outright bought by the Harris campaign. Either via money (most likely, as she was able to just retire after the election) or through social currency (offered a DC job, or "oh, you don't want to be the one who helped stop Trump" type deals).

Never ever doubt the pull of the Left, they wield enormous cultural power and of course they have the big-dollar donors. Shit, it turns out Beyonce was paid 11 MILLION for a literal 30-second speech to try to boost Harris's campaign. Oprah and others only got a pittance of a few million lol.

The Harris campaign blew thru 1.5 BILLION dollars in less than 4 months, it was the largest Leftist grift the US has ever seen. Where did all that money go? Who the fuck knows, it vanished into so many campaign "advisors", "adjusters", "contractors", and every other species of Leftist leech that could latch on.

And that's just the tip of the iceberg, the amount of money the Left's PACs raised was astounding and 90% of it vanished into the pockets of the DNC machine to feed its parasites.
 
David Dinkins was supposed to beat Rudy Giuliani because people told pollsters they were voting for Dinkins.

Then they turned around and voted for Giuliani.

The only poll that matters happens on Election Day. Everything else is just bullshit and shuffle.
 
She literally rigged it to lose because that moron Robbie Mook thought the electoral victory was certain and wanted to drive up the popular vote, so ignored swing states to focus on driving up the vote in already-won states. She had the most flabbergastingly incompetent campaign I've ever seen.
To quote someone on Hillary's campaign staff when Bill told Hillary to get out and campaign or else risk losing, "Our data disproves your anecdotes". Whoops.
 
-First time a Republican won the popular vote in 20 years.

-Won all the swing states.

-Created an impossible three-color map showing she didn't flip a single county.

-Made gains among all demographics, including ones from "floating piles of garbage" that I was told were definitely upset and were switching their vote.

Call it whatever you want, when things that don't ever happen actually happen, you got your ass kicked.
It was 226 to 312; he got nearly 40% more electoral votes than she did. It was not close by any standard that matters.
He fucking won the popular vote, something thought impossible for a Republican.
The electoral college was a sweep, but the popular vote is the real kicker.

Almost nobody expected this based on poll aggregates.
And when they finally realized that no, that's how people really did feel about things? After 15 years of trying to cram a false narrative down everyone's throats, first out of naivety, then out of denial? It was too late, their reputations were shot and nobody thinks they are trustworthy going forward.

So, even if they honestly don't put a thumb on the scale? So many people think they will that they just won't engage with them and they get the same useless biased results.
That's what made it feel so good. It was vindication. Their lies and fuck-fuck games failed in the end, and they've been impotently licking their wounds ever since.

Shitlibs now have to come to terms with the fact that they are *not* the highest forms of life to ever exist, and contend with conservatives without moronic strawmen, censorship, or gaslighting.
 
Polling Was Quietly Still Bad in 2024
Even the premise of this headline is a lie.
There was nothing "quiet" about how the polls were bad in 2024.
On election night everyone was making fun of them, tearing Ann Selzer and her PRESTIGIOUS poll a new asshole.
Everyone knew half the major polls leading up to 2024 were Democrat wishcrafting that Kamala was going to sweep the swing states.
 
Weren't they (Rasmussen) kicked out of 538's aggregate? You know, I haven't seen them around lately, how they doin'?
they are still around; they now also show how the polling sausage is made (and rage against fake polls)
 
By whose standards was it close? Don’t even like trump, but he blew the democrats out of the water
These retards must be talking about the popular vote, which was close, but ignoring that the actual metric that matters, the electoral votes, wasn't even fucking close. Just journos doing journo things
 