2020 U.S. Presidential Election - Took place November 3, 2020. Former U.S. Vice President Joe Biden assumed office January 20, 2021.

Status
Not open for further replies.
Someone reverse engineered Nate Plastic's model and reaffirmed what everyone knew: that his model is bullshit.
Link / Archive

I don't know whether that's straight bullshit or just bad modeling. It looks like he's got bizarre correlations between states, but it's not clear whether those are specific state-level correlations baked into the model or a result of them tuning the state-level results to react to national numbers.

Part of 538's problem is they deal in national polls, so instead of taking polling from 50 states to come up with a national number, they start with the national number and estimate down to state levels. If you were doing that, then your "links" between states would probably be demographic. So the state variables aren't quite independent; they're dependent on a third set of variables not seen in the file. The negative correlations this guy is seeing might just be correlations between population demographics and party voting tendencies.
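A quick toy sketch of how that works: one hidden "demographic" factor driving two states with opposite loadings manufactures a strong negative correlation between them even though there's no direct state-to-state link. All loadings and noise levels here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# One hidden demographic factor drives both states' outcomes.
# State A loads positively on it, State B negatively -- e.g. a shift
# in one demographic group that helps a party in A and hurts it in B.
factor = rng.normal(size=n_sims)
state_a = 0.8 * factor + rng.normal(scale=0.6, size=n_sims)
state_b = -0.8 * factor + rng.normal(scale=0.6, size=n_sims)

corr = np.corrcoef(state_a, state_b)[0, 1]
print(f"A-B correlation: {corr:.2f}")  # strongly negative, no direct link
```

Nothing in that setup says "Mississippi moving makes Washington move the other way" on purpose; the negative correlation just falls out of the shared factor.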

(Fun fact: in checking this, I learned New Jersey has a whiter population than Alaska.)

But if that was the case, then someone should have gone in and tweaked the model for regional consistency. Baris and Barnes have been talking about this non-stop for the last 3 weeks in the Rust Belt, saying Iowa votes to the left of Ohio, southeast Minnesota votes the same as Wisconsin, western Pennsylvania votes like east Ohio, Florida is the bellwether for Georgia, etc.

We know those correlations exist--they are objective, historical facts. You can't throw them out without a verifiable reason to explain why things changed. But nothing I've seen come out of 538 has ever tried to analyze them, or justify changing them. Those specific correlations are not reflected across the 538 simulations, or at least, they aren't reflected very well at all.
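And checking that kind of regional link is trivial once you line up historical margins side by side. The numbers below are made up purely to illustrate the shape of the check, not real election results.

```python
import numpy as np

# Toy historical margins (points, R minus D) over five elections.
# Invented numbers -- the point is only that neighbors track each other.
oh = np.array([2.0, 3.0, -5.0, 8.0, 8.0])   # "Ohio"
ia = np.array([0.0, -1.0, -10.0, 9.0, 8.0])  # "Iowa": tracks Ohio, a bit left

print(f"Ohio-Iowa correlation: {np.corrcoef(oh, ia)[0, 1]:.2f}")
print(f"Iowa votes left of Ohio by {np.mean(oh - ia):.1f} points on average")
```

If a model's simulations don't reproduce correlations like that, something in its construction is eating them.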

It's almost as if Nate Polyester generated a model out of numerical data instead of regional knowledge. This analysis covers the outcomes, not the source mechanisms. I bet 538's starting point was just raw state outcomes regressed against each other, then matched with exit polling demographics, with a proprietary weighting between the two. But since the resulting model consumes polling as input, and polling is more often divided by demographics, not states, the simulation's state outcomes are overly driven by the demographic portion.

You can't just say "white males 30-45 vote this way" and then slap it across the entire country; white males voting in California vote drastically differently than white males in rural Pennsylvania. I assume some of that was taken into account in the 538 model, since it's not pure random noise. But clearly no one with regional electoral experience went in and tried to get the numbers right.
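Toy version of that failure mode, with invented shares: collapse one demographic cell to a single national number and look at how far off it lands in each region.

```python
# Invented two-party D-shares for the same demographic cell in two regions.
actual_share = {"coastal California": 0.55, "rural Pennsylvania": 0.20}

# A naive national model collapses the cell to one number.
national_avg = sum(actual_share.values()) / len(actual_share)  # 0.375

for region, share in actual_share.items():
    err = national_avg - share
    print(f"{region}: national average misses by {err:+.3f}")
```

A 17-point miss in each direction washes out nationally but wrecks every individual state estimate that leans on the cell.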

TL;DR I think this analysis shows the 538 guys are treating states as abstract numerical entities. They did some regressions on historical data and applied some weightings, and didn't bother to apply specific knowledge to the outcome. It's the naive form of "I only trust the data" mentality: you need expertise to know when you crunched your data wrong.

I won't regurgitate Baris' multiple rants about how the polling methodology itself is what skews polls now. Theoretically, a properly constructed simulation/model would rely on proper correlations to cut down on some of the errors produced by bad polling methodology. But 538's model appears to be improperly constructed.

The fact that 538's model exposes absurdities when fed polls as input means either the model's construction is way off, or it is correct and the input polls are garbage. Given what he's doing, it can't be the case that both model and polls are being properly constructed.
 
After seeing Biden's pitiful turnout in PA, I'm not seeing him win that state. So, what's he got left at this point?

I'm sure he's got all the votes from his olds folk home. (Which just so happens to be his basement.)

iirc that's Disneyland in California. The branding for the Florida one is The Most Magical Place On Earth.

Pfff. Fake advertising at its worst. Everyone knows the most magical place on earth is Portugal.

(For those that don't get it: They legalized all drug consumption. They now "treat it as a healthcare issue.")
 
I can't find it anymore, but I just saw a tally of the early voting or mail in ballots in PA and the Dems were waaaaay ahead.

Searched but I can't find it anymore.
I've seen that too, but it had PA at 10% of 2016 total turnout when most of the states around it were at 30% or even 40%. Most of their voting is going to happen Election Day. The lead means less than it would for any other state.
 
I can't find it anymore, but I just saw a tally of the early voting or mail in ballots in PA and the Dems were waaaaay ahead.

Searched but I can't find it anymore.
But are they meeting the expected numbers? The turnout may be record-breaking, and the Dems may be far ahead, but are they meeting expectations?
 
The fact that 538's model exposes absurdities when fed polls as input means either the model's construction is way off, or it is correct and the input polls are garbage. Given what he's doing, it can't be the case that both model and polls are being properly constructed.
No, 538 figures out the result they want first and then changes the polling to get that desired result.

What bothers me about Nate isn't that his numbers are wrong; it's that he doesn't question any of it like a rational person would.

Nate is predicting Biden to get 410-420 electoral votes on average. There's ZERO chance of that happening. But instead of questioning it, he calls people stupid for not believing it. BUT TEH POLEZ!!!
 
Even the faggot cheerleader propagandists at ERB can't muster up enthusiasm for Joe. (Though I like how they gave Biden twice Trump's time for verses.)
After they made a similarly stilted rap for 2016 showing their bias for Hillary, they got so assmad about the election results they cancelled their series for a time.

Anybody wanna bet on the odds they 41% after 45 gets another term?
 
You can't just say "white males 30-45 vote this way" and then slap it across the entire country; white males voting in California vote drastically differently than white males in rural Pennsylvania. I assume some of that was taken into account in the 538 model, since it's not pure random noise. But clearly no one with regional electoral experience went in and tried to get the numbers right.

TL;DR I think this analysis shows the 538 guys are treating states as abstract numerical entities. They did some regressions on historical data and applied some weightings, and didn't bother to apply specific knowledge to the outcome.

Only way a demographics-based model could work is if you went full "Democracy (the game)" by taking into account every group a person fits into, and the weight each has in their identity per region.

And even then you'd have to adjust for programs, scandals, how much people trust what the candidate said, etc. So yeah... that's got many subjective variables in the end.
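A sketch of what that "every group, weighted per region" scoring might look like. All group names, leans, and weights below are invented for illustration.

```python
# Per-group partisan lean: -1 = solid R, +1 = solid D. Invented numbers.
GROUP_LEAN = {"union_member": 0.3, "evangelical": -0.6, "urban": 0.5}

def vote_lean(groups, region_weights):
    """Weighted average lean for one voter, given regional identity weights."""
    total = sum(region_weights[g] for g in groups)
    return sum(region_weights[g] * GROUP_LEAN[g] for g in groups) / total

# The same memberships carry different weight in different places.
rust_belt = {"union_member": 3.0, "evangelical": 1.0, "urban": 0.5}
sun_belt = {"union_member": 0.5, "evangelical": 3.0, "urban": 1.0}

voter = ["union_member", "evangelical"]
print(vote_lean(voter, rust_belt))  # slightly D: union identity dominates
print(vote_lean(voter, sun_belt))   # clearly R: church identity dominates
```

Same voter, opposite lean depending on which identity the region amplifies, and that's before you layer on programs, scandals, and trust.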
 
Antifa can only really muster in deep blue cities in solidly blue states that aren't controlled by a different Democratic faction. If they try to upset anything outside there, they will be sent packing or arrested. And if they disrupt voting in Portland or Seattle? Good, it'll only help Trump get more popular votes.



There's a fifty-fifty chance gravity will stop working in the next minute, because it either will or won't. If we extend the sample size to 10 minutes, that means our chance of having gravity in 10 minutes is 1 divided by 2 to the 10th power (1/1024), as it is a compounding probability, so after 2 minutes we only have a 25% chance of having gravity, and 3 minutes equals 12.5%, etc. So you all need to grab on to something, so you don't float away.

I is very smartz in da maffs.

For my next business venture, since 1 woman makes a baby in 9 months (about 270 days), I will arrange for 269 women to help you produce a baby in a day, maybe in some sort of baby factory-spa. Think about it: you go out drinking with the girls on Friday night, take Saturday as a spa day, and can be back to drinking Sunday morning with a baby instead of waiting 9 months. It'll be very convenient. It's amazing how nobody came up with this before me.

But that's because I'm smartz with Maffs.

- Nate Plywood, Statistician
I think we should test the Baby Factory Spa theory. I will volunteer for science.
 
ERB has been a particular disappointment to me because I have enjoyed (and still do) like 90% of their content. I will still go back and listen to the stuff from time to time.

They also made a rap battle where they virtue signaled about transgenderism via Bruce Jenner, which ended up rocking both sides of the aisle because Bruce is apparently widely hated within the "trans community." So yeah they tried to be woke but ended up upsetting the woke in the process, which is one reason why you don't try to appease that crowd.

ETA: Actually I was under the impression they went on hiatus because of various life changes (NicePeter became a father) and then they realized they missed it too much.
 
ERB has been a particular disappointment to me because I have enjoyed (and still do) like 90% of their content. I will still go back and listen to the stuff from time to time.

They also made a rap battle where they virtue signaled about transgenderism via Bruce Jenner, which ended up rocking both sides of the aisle because Bruce is apparently widely hated within the "trans community." So yeah they tried to be woke but ended up upsetting the woke.

Some things are always true like:
The Woke always descend into purity spirals.
The Woke destroy all creativity and art.
 
No, 538 figures out the result they want first and then changes the polling to get that desired result.

My theory is they tuned the model to favor the result they wanted, and then Nate publicly criticizes anyone providing polls that would push it in the opposite direction. Eventually, the polls coincidentally start showing what Nate's benefactors want to see, and the feedback loop of skewed polls plus skewed model creates bad predictions.

Nate calls this process "un-skewing", of course, because the Orwellian leftist handbook recommends you label yourself with the opposite of what you actually are.

What bothers me about Nate isn't that his numbers are wrong; it's that he doesn't question any of it like a rational person would.
But it most certainly can be the case that neither the model nor the polls are properly constructed. Which is what I believe we have here.

Exactly. If I coded software that counted 25 hours in the day, I wouldn't jump on Twitter and tell NASA they got the earth's rotation wrong. I'd be suspicious of my software.

Doing it the first way means you think you are the Wisest Man On Earth, and that's almost never the case.
 
Even the faggot cheerleader propagandists at ERB can't muster up enthusiasm for Joe. (Though I like how they gave Biden twice Trump's time for verses.)
These dudes are has-been fags. ERB, I mean.
 
These dudes are has-been fags. ERB, I mean.

They stopped being funny after 2013 or so when they started using actual celebrities instead of impersonators. Seeing Snoop Dogg, Key & Peele, and Weird Al in their videos was just plain cringe. The magic of their 2010-2011 videos was how they had the same two guys impersonating most of the celebrities and historical figures. I couldn't imagine any of the episodes being watchable now.

EDIT: I forgot to mention that satirical parody ERB channels are legit entertaining:


 