As Donald Rumsfeld said: there are known knowns (things we know), known unknowns (things we know that we don't know), and unknown unknowns (things that we don't know, but that we don't know we don't know).
After the debacle in 2016, which diverged from the historical norms, the forecasters changed their models to take into account what happened in 2016 - in particular, the polls being systematically wrong in the same direction. But they couldn't have done that before they had the information. An election analysis that relies on historical facts is thus by its nature always reactive. Fighting the last war, as it were.
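To make concrete what that post-2016 fix amounts to, here's a toy simulation with entirely made-up numbers (three imaginary states, all polling at 51 with a 3-point error). The only thing the correction does is admit that the polls can all miss in the same direction at once:

```python
import random

STATES = 3        # toy battleground states, all polling at 51%
POLL = 51.0
TOTAL_SD = 3.0    # total polling error per state, in points

def win_prob(shared_sd: float, trials: int = 200_000) -> float:
    """Chance the poll leader carries a majority of the toy states."""
    # Split the total error into a shared national miss and local noise.
    local_sd = (TOTAL_SD ** 2 - shared_sd ** 2) ** 0.5
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, shared_sd)  # same miss in every state
        carried = sum(POLL + shared + random.gauss(0, local_sd) > 50
                      for _ in range(STATES))
        wins += carried >= 2
    return wins / trials

print(f"errors independent: {win_prob(0.0):.2f}")  # ~0.69
print(f"errors correlated:  {win_prob(2.5):.2f}")  # ~0.64, fatter tails
```

Note that this only fattens the tails of the last war. It says nothing about whatever the next surprise is going to be.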
The problem with relying solely on historical data is that you can't predict things you've never seen before.
For example, take the derivatives banks had built on top of mortgages going into 2008. They were safe as long as the housing market behaved roughly as it had in the past. The housing market did, in fact, not behave as it had in the past, and everyone lost their jobs as a result.
If your rules for turning data into predictions are based on historical results, the predictions can't actually predict new situations. But if they aren't, then they're literally based on nothing and you're just making shit up out of thin air.
In other words, "forecasting" can't actually tell us anything unforeseen about the future. It can only tell us things we already know.
On some level, they realize this, and try to adjust their models. But based on what? The historical facts, or their personal feelings? It's utter garbage either way.
What actually happens is that they take their models, and if the results look unreasonable, they turn the knobs until they look fine and hope nobody complains. It's like when a CEO hires a consultancy firm to come up with reports that back the decisions he's already made.
At the end of the day, garbage in equals garbage out. In the current situation, there is no way to get good information on how people are going to vote. There is no algorithm that will fix the problem that you have no good information available. Even the betting sites can't conjure information out of nowhere. If you amplify noise, you don't get a good signal, you just get more noise.
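To put rough numbers on that (mine, not any forecaster's): suppose every poll shares the same 3-point systematic miss on top of ordinary sampling noise. Averaging more polls grinds down the noise and leaves the miss completely untouched:

```python
import random

TRUE_SUPPORT = 48.0  # what voters actually do (unknown to the modeler)
BIAS = 3.0           # systematic miss, identical in every poll
NOISE_SD = 2.0       # ordinary sampling noise, independent per poll

for n_polls in (5, 50, 500):
    avg = sum(TRUE_SUPPORT + BIAS + random.gauss(0, NOISE_SD)
              for _ in range(n_polls)) / n_polls
    print(f"{n_polls:>3} polls -> average {avg:.2f} (truth: {TRUE_SUPPORT})")
# The average converges, ever more confidently, to 51. The truth is 48.
```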
What is the reasonable conclusion to draw from this?
Nothing. There's no conclusion. All else equal, the election is 50/50. There isn't any reliable information indicating anything at all. So the prediction should still be unchanged at 50/50.
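That's just Bayes' rule with uninformative evidence. If the data is equally likely no matter who wins - a likelihood ratio of 1, i.e. pure noise - you can feed the model as much of it as you like and the posterior never moves off the prior:

```python
def posterior(prior: float, p_data_if_a: float, p_data_if_b: float) -> float:
    """One Bayesian update: P(A | data)."""
    num = prior * p_data_if_a
    return num / (num + (1 - prior) * p_data_if_b)

p = 0.5
for _ in range(10):              # ten rounds of garbage data
    p = posterior(p, 0.7, 0.7)   # equally likely either way: pure noise
print(p)                         # still exactly 0.5
```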
If you had an honest election forecasting site, they would point out that the information going in is garbage, their rules are garbage, and that it thus follows that the results they get out are garbage and nobody actually knows what's going to happen. They don't and they aren't.