Digging into it, because this is my field, their methodology EXPLICITLY notes that it breaks down at higher numbers. I am literally an engineer.
Yes, it is a mathematical proof, but what is the key thing we ALWAYS CHECK??? The data set, to see if there are any outliers.
Is the mathematical proof applicable to this situation?
Now, I have not worked with Benford's law, as it is not applicable to my kind of work. I don't deal with money or anything else that would benefit from this kind of analysis.
I found this on Wikipedia as I read up on how it works.
Election data
Benford's law has been controversially used as evidence of fraud in the 2009 Iranian elections.[34] A 2011 study by the political scientists Joseph Deckert, Mikhail Myagkov, and Peter C. Ordeshook argued that Benford's law is highly problematic and misleading as a statistical indicator of election fraud.[35] The method was also criticized by Walter Mebane.[36]
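For anyone following along who hasn't seen the law itself, here is a minimal sketch of what it actually predicts. The expected frequency of leading digit d is log10(1 + 1/d), and data spanning several orders of magnitude (powers of 2 are a textbook example) tend to conform; the data set and names here are purely illustrative, not from the paper.

```python
import math

# Benford's first-digit law: the probability that the leading digit
# of a number is d (for d = 1..9) is log10(1 + 1/d).
def benford_first_digit(d):
    return math.log10(1 + 1 / d)

def leading_digit(n):
    return int(str(abs(n))[0])

# Illustrative data spanning many orders of magnitude: powers of 2
# are a classic example that conforms closely to the law.
data = [2 ** k for k in range(1, 200)]
counts = {d: 0 for d in range(1, 10)}
for n in data:
    counts[leading_digit(n)] += 1

for d in range(1, 10):
    observed = counts[d] / len(data)
    expected = benford_first_digit(d)
    print(f"digit {d}: observed {observed:.3f}, expected {expected:.3f}")
```

Note that digit 1 alone should account for about 30% of leading digits, which is why a flat or clustered digit distribution looks so striking to people running this test on election returns.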
So I did some digging on this claim and the paper that cited it.
Source (35)
This is the article, but so as not to bore you, and since this is my day off, I will not dig into more than a few things.
From the article, part of the conclusion, page 16, nine lines down:
it, in combination with what we know about the stochastic processes sufficient to yield digits in conformity with Benford’s Law, is that the Law is not universally applicable magic box into which we plug election statistics and out of which comes an assessment of an election’s legitimacy. This is not to say it is fruitless to search for special electoral contexts in which 2BL has some relevance, but our analysis suggests that the data required to validate that relevance must be richer than simple election returns
Now, they could be wrong, but I am going to trust experts with nothing to gain here over random people on the internet; trusting the latter is literally the Dunning-Kruger effect at work.
That said, I am willing to hear your explanation of why Benford's law is applicable and why it works with the current data set.
"Given these considerations, then, we let the size of ‘‘precincts’’ in our simulations take one of three values—1000, 10,000, or 20,000—with the assumption that if we are compelled to rely on observations that average more than 20,000 eligible voters, the level of aggregation is likely to be too great for a confident assessment of things using any methodology."
In other words, their conclusion is limited by issues with larger vote totals. I'd note as well that they are an outlier, with other sources extolling the virtues of Benford's law for finding electoral fraud, and the addition of this article to the Wikipedia page TODAY implies hunting for a source that disagrees. Still, a disagreeing source could see something the consensus does not.
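The aggregation point is worth making concrete. When precincts are all roughly the same size and a candidate's vote share is stable, the vote totals cluster in a narrow band, so the leading digits pile up on a few values whether or not anything fraudulent happened. This is a toy simulation under assumed numbers (precincts of ~1,000 voters, ~55% vote share); it is not the paper's actual simulation setup.

```python
import math
import random

random.seed(42)

def leading_digit(n):
    return int(str(int(n))[0])

def first_digit_freqs(values):
    """Observed frequency of each leading digit 0..9 (index 0 unused)."""
    counts = [0] * 10
    for v in values:
        if v >= 1:
            counts[leading_digit(v)] += 1
    total = sum(counts)
    return [c / total for c in counts]

benford = [0.0] + [math.log10(1 + 1 / d) for d in range(1, 10)]

# Hypothetical precincts of similar size (~1,000 voters) with a candidate
# winning roughly 55% of the vote: totals cluster around 550, so leading
# digits pile up on 4, 5, and 6 instead of following Benford's law,
# even though no fraud is simulated here.
narrow = [random.gauss(1000, 80) * random.gauss(0.55, 0.05) for _ in range(5000)]
freqs = first_digit_freqs(narrow)
print("digit 1 freq:", round(freqs[1], 3), "vs Benford", round(benford[1], 3))
print("digit 5 freq:", round(freqs[5], 3), "vs Benford", round(benford[5], 3))
```

The simulated data contains no fraud at all, yet its first-digit distribution departs wildly from Benford's law, which is exactly the kind of false positive the authors are warning about at high aggregation levels.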
Beyond that, I take issue with their simulation for other reasons. But those are mostly minor nitpicks about methodology.
Another note: on pages 255-256 they point out cases where 1BL and 2BL conflict. These ARE good indicators of the law breaking down, but they are irrelevant in this case, as 1BL and 2BL both concur.
In fact, going down to their conclusion, you have misrepresented it. They are specifically saying that 2BL is NOT a good indicator on its own, but they say nothing about 1BL having such issues. Their very premise is that 2BL is unreliable, not 1BL.
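For clarity on the 1BL/2BL distinction: the second-digit law is just the marginal distribution of the second significant digit, obtained by summing over all possible first digits, and it is much flatter than the first-digit law. A small sketch of the math (nothing here is taken from the paper):

```python
import math

# Second-digit Benford's law (2BL): sum over every possible first
# digit k of the probability that the two leading digits are "kd".
def benford_second_digit(d):
    return sum(math.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))

for d in range(10):
    print(f"second digit {d}: expected {benford_second_digit(d):.4f}")
```

The second-digit probabilities run from about 0.120 (digit 0) down to about 0.085 (digit 9), a far smaller spread than first digits, which range from 0.301 down to 0.046. That flatness is part of why 2BL tests are statistically weaker on their own.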
And finally, the BIG methodological flaw in the comparison: their analysis assumes homogeneity in the data, and that any examination of a comparable data set will produce differing results. In other words, it assumes the result is not replicable.
The problem is that it IS replicable, and was replicated four times, with four cities.