It's an evolution of media organizations' regular practice of assessing the truthfulness of politicians' claims made during election campaigns or flashy press conferences.
I feel like the turning point came at some point between the 2016 and 2020 elections, honestly, and probably didn't have as much to do with domestic US political motives as people tend to assume. Those are involved, but mostly indirectly, as a means to an end. It's mainly about public image and the risk of legislators using that image to do away with the laws protecting hosts from consequences for user-generated content, laws that IP trolls will look for any excuse to challenge.
Content- and bot-farm-driven misinformation had been a problem on social media for years, but up to that point it had been allowed to grow unchecked because it was a net positive for companies like Facebook and Twitter. It drove engagement by being sensational in a way that factually correct content could rarely manage, which drove advertising revenue, which made the happy numbers go up on their balance sheets. Considering that social media companies were just starting to make IPOs and to transition from funds-hemorrhaging tech start-ups to serious investment opportunities in the mid-2010s, it's kind of hard to blame them from the point of view of most of their leaders. They knew it would eventually blow up and cause a scandal but, like investment bankers during the mid-2000s real estate bubble, they figured it would happen after they'd taken their money, dropped out of the game, and run with it.
2016 was different for two reasons, though. On the one hand, it was the first US presidential election since 2012, back when most of the bullshit online was still being spammed by individuals in the US who either distanced themselves from it or used claims of "satire" as cover. The summer of 2016 very quickly became the high-water mark of a new age of bullshit, written internationally and disseminated by what now seems like very primitive AI. The programs involved might seem almost childishly stupid now, but they were fucking frightening compared to the basic algorithms and text-generation strings that had existed beforehand. For the first time, they were being rolled out on a large scale and utilized by organizations like the Russian and Chinese security agencies (as opposed to scammers and counterfeit resellers, who had been using them on a smaller scale since at least 2014), resulting in explosive revelations about how Facebook was handling user data and failing to address deliberately malicious use of its platform. On the other, the election ended up being won by Donald Trump, which a lot of people in Blue States didn't expect, considering how poorly he had been polling in the surveys released by major political and news organizations. People who didn't support him wanted an explanation.
In reality? His victory in 2016 probably had very little to do with interference by authoritarian states like Russia or China. We don't know exactly what the reasoning behind their behavior was, since most documents are obviously still sealed. The overall impression, though, paints a picture of an awkward, fumbling mess, probably handled by new internal working groups testing what level of influence they could actually have and trying to justify their existence to higher-ups who'd served in the KGB and MSS during the last years of the Cold War and thought the whole idea was fucking idiotic. The English in known propaganda pieces is usually decent but feels vaguely off, and the messaging seems muddled, targeting both sides. At some point, a descendant of the KGB started selling Black Lives Matter Christmas sweaters with "Thug Life" knitted against a Holly Jolly background of pine trees and Glocks. We can only guess at the motive behind that bizarre decision, but what we can know with certainty is this: Somewhere in Moscow, in October of 2016, a Colonel named Vladimir Ruskovich-Vodkavski saw that shit cross his desk for approval, glanced over at the wall of medals he'd earned for dropping depth charges on the Halibut and shooting dogs at Chernobyl, reached down for his service revolver, then gave a sigh of resignation and decided that he could repaint the mural of Lenin behind his desk with brain matter some other time.
That helped liquor sales near the Lubyanka, I'm sure, but it didn't help anyone win any elections anywhere on Earth. For talking heads who didn't understand how election cycles work in two-party-dominant states, or the difference between polling numbers in the general population and the population that actually voted, though, it meshed neatly with the Trump campaign's Russia scandal. That meant story after story about how Col. Ruskovich-Vodkavski had colluded with Facebook to ensure the death of democracy and the rise of a new, fascist global order, along with a pressing need for social media companies to do something to separate themselves from both hyperbolic news stories about a second Holocaust targeting Muslims, and the more realistic risk of the Trump administration becoming unpopular after doing stupid shit. Ironically, considering that the actual goal of the limited 2016 interference campaign seemed to be spreading distrust in the elections process, it probably also netted the Colonel a new medal with the doge meme on it. One that he had to put in the desk and never glance at, because something in the eyes of that Shiba Inu brought back memories from a warm late-spring day in a village far to the north of Kyiv, near the border with the Byelorussian SSR. A place that reeked only of blooming flowers, not of ashes, where the hundred-year-old houses and the even older walls of an Ultra-Orthodox synagogue were being bulldozed and prepared for interment under the blood-stained soil, under the Cesium-tainted soil, of what he'd grown up calling "Little Russia" in his childhood bedroom on the fourth floor of a housing block, studying the fall of the Black Army during those cold, dark Murmansk winters. Where the puppies came up whimpering for food, their eyes so full of joy at the sight of the first humans they'd seen since the days after the night when a corner of the reactor had reached prompt criticality, a sound that only they'd heard here, and then only as a distant pop. Those puppies, there were four in all...
Anyway, saying that they were doing something to address misinformation was the obvious solution. Not, like,
actually doing anything, because misinformation was both too lucrative and too hard to actually deal with. Just saying that they were. The same AI innovations that had led to the problem in the first place presented an obvious solution: automatically flagging posts that involved keywords associated with conspiracy theories or known, widespread falsehoods, and linking articles by political fact-checkers (or, in the case of YouTube, sometimes just Wikipedia, which probably seemed more neutral). Twitter/X decided to allow community notes instead, effectively outsourcing decisions about what and how to fact-check. Overall, though, the outcome was the same. A simple, easy-to-ignore link would be affixed to anything controversial, so it could be allowed to remain on the site without the site itself being blamed for inaccuracies. Unless something rose to the level of libel, or was proven to be the direct creation of a content farm, it would still be there to drive engagement. The COVID-19 pandemic, claims about the 2020 election, and the January 6th, 2021 riots provided a perfect opportunity to very visibly expand the scope of fact-checking programs, but it would have realistically been scaled up in advance of the 2024 election anyway, along with more serious and less public efforts to avoid something like a Cambridge Analytica redux.
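To give a sense of just how shallow that kind of flagging can be, here's a toy sketch of the keyword-match-and-attach-a-link approach described above. Everything here is made up for illustration (the keyword list, the fact-check URLs, the function name); no platform publishes its real system, and real ones are presumably fancier, but the basic shape is the point:

```python
# Toy sketch of keyword-based misinformation flagging.
# Keywords and fact-check URLs are invented for illustration only.

FLAGGED_KEYWORDS = {
    "rigged election": "https://example.org/fact-check/election-security",
    "5g causes": "https://example.org/fact-check/5g-health",
}


def attach_fact_check(post_text: str):
    """Return a fact-check link to display under the post, or None.

    Note what this does NOT do: the post is never removed or altered.
    The link is 'easy to ignore' by design, so engagement is preserved.
    """
    lowered = post_text.lower()
    for keyword, link in FLAGGED_KEYWORDS.items():
        if keyword in lowered:
            return link
    return None


print(attach_fact_check("They said the RIGGED ELECTION was proven!"))
print(attach_fact_check("Nice weather today."))  # no match, no label
```

The post sails through either way; the only output of the whole pipeline is a hyperlink under the caption. That's the entire trick.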
Ultimately, it's just an effort to avoid shifts in public opinion that might favor legislation regulating what sort of content a website can host and under what circumstances it can do so without facing legal repercussions, because a lot of misinformation now does initially come from foreign sources that know it's factually untrue. Especially outside of election years, their endgame is usually just making money, but regulators can still use the threat of outside interference in domestic politics to promote bills that are ultimately intended to serve the needs of IP-protection firms rather than the general public. As soon as public opinion turned against its fact-checkers, Facebook started looking for alternate options, which says a lot about how much of a fuck social media companies actually give.
Getting your news from your Facebook or Twitter feed is fucking stupid, whether you're on the Right or the Left, and fact-checking doesn't help. Especially if the misinformation is coming from the Left, but that's a complicated topic that deserves a lot more time than I have for it right now.