Anyway, he intends to bust three myths. Or rather, he says "I'm going to bust", pretends to forget the rest of his line, and leaves a long "comedy" pause. This joke recurs throughout the video. The myths are:
- The problem is fake news
- TikTok Teens will save the world
- Social media holds politicians accountable
For the first segment he talks about how societies need a lot of information to function ("how many swords do we have?", "how many barbarians are at the gate?", "how quickly can we make a lot more swords?") and how, on paper, liberal democracies with strong human rights protections should perform better than dictatorships, because dictators end up surrounded by yes-men and so don't get the information they need. Whereas in the past there were few sources of information (e.g. newspapers), social media now means anyone can share their perspective. That sounds good, since we're exposed to more information and more viewpoints, but it ends up as information overload, to which there are two common responses. One is learned helplessness, where you give up trying to evaluate information and just go off vibes; the other is selective exposure, where you retreat into sources that already match your worldview.
> This response is really good for platforms themselves. If everyone's sorted into their nice little algorithmic niche, that makes it easier to hit us with targeted ads - and we love it! Posting nice, simple content that aligns with your niche gets you that sweet, sweet clout in the form of likes and shares. I mean what do you get by posting challenging, nuanced content? I guess I wouldn't know. Selective exposure might contribute to political polarisation and maybe an increase in authoritarian thinking as people come to believe that discussion is impossible 'cause everyone's living in wildly different versions of reality. Although I should stress that from the research I've read, that remains an unproven theory. It's a thing some people worry might be happening.
He attempts a self-deprecating joke "showing self awareness", but it doesn't land because he's basically dismissing the idea that breadtube has turned a certain segment of the internet into extremists. As with a lot of his videos, the arguments he applies to the right wing also apply to his own content and fanbase.
The point is that both of these effects (learned helplessness and selective exposure) can be gamed by bad actors. He discusses the Chinese government paying people to post on social media, and Tenet Media accepting funding from the Russians. His basic argument boils down to this: "fake news" isn't really the problem, deliberate information overload is. Fans of people like Tim Pool don't much care whether something is true or false, because sharing his content fulfils a psychological need, so tackling the source matters more than trying to debunk individual claims - ergo the problem is not fake news.
He then moves on to the second myth, talking about what social media has actually achieved.
As an actor, he's noticed that since #MeToo intimacy coordinators have become more common on sets (glossing over the fact that he's never actually worked with one, as we've mercifully not had to endure a Choob sex scene). Likewise, BLM was able to accrue millions in funding (no mention of what happened to that funding). However, growing a movement massively online can mean it's too big to organise, so it fizzles out and there's no clear leadership to hold accountable. This also defangs movements - if most people at a protest march are just there to take pictures of their funny protest signs, governments can ignore them, because they know there won't be any direct action. He counts this as "partially" busting the second myth.
The third myth, that social media holds politicians accountable: this was once thought possible, but the Arab Spring basically failed, China uses social media to control its population, and Western elections are vulnerable to disinformation campaigns run through social media. And while social media may help you humiliate someone, that's not the same thing as holding them accountable, because accountability is really about organisations. Organisations limit the amount of information they process as they grow larger, to avoid getting overwhelmed - e.g. nobody at Boeing decided to make shit planes; they cut costs and ignored whistleblowers, and in doing so dismantled the systems that fed information back to the top, resulting in a system that made shit planes with no one individual who can be held accountable for it. So social media might increase the amount of information out there, but if organisations don't process it, there's no accountability. Kony 2012 joke. Myth busted.
At this point it's the halfway mark of the video - he summarises that the things that make social media good also make it bad, which prompts the question "Now what?" He tells the viewer he's setting up a rug pull and there's a twist coming (he's incapable of making a video with a twist without doing this). He discusses middleware, which outsources the filtering of social media to third-party companies and lets people choose between filtering options like picking a browser. So if you were worried about Conservative voices being silenced, you could pick a middleware that only showed you right-wing content. Then he starts talking about his naked body:
> There are some potential privacy issues with middleware. For example, if I post nudes, and your middleware company filters nudity, do they get to see my tiddies even though I didn't sign up to them and I didn't consent to them seeing that data? That data that is my tiddies? Like presumably they have to see it in order to filter it, but do you see how that would be a problem?
I do feel for the poor content filter assistant who has to see Ollie naked. He also says middleware doesn't solve the selective exposure issue, although he'd previously hedged on whether that's necessarily an issue at all. The whole idea was proposed in a paper by Francis Fukuyama, whose work on it was funded by a private equity firm specialising in middle-market companies - and therefore the entire thing is actually just an illicit information campaign to make an equity firm more money (or is it only illicit when foreign companies try to influence government, and not when American companies try to influence the American government?).
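To make the privacy point concrete, here's a minimal sketch of how a middleware handoff might work. This is my own illustration, not anything from the video or Fukuyama's paper - the names (`NudityFilterMiddleware`, `build_feed`, the toy classifier) are invented. The structural problem shows up immediately: the third-party filter has to receive the raw post, media and all, before the user's feed is ever assembled.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Post:
    author: str
    text: str
    image_bytes: Optional[bytes] = None  # raw media, exactly as uploaded


class NudityFilterMiddleware:
    """Hypothetical third-party filter a user might subscribe to.

    To decide whether a post contains nudity it has to receive the full
    post - including media from authors who never agreed to share
    anything with this particular company.
    """

    def allows(self, post: Post) -> bool:
        return not self._looks_like_nudity(post)

    def _looks_like_nudity(self, post: Post) -> bool:
        # Stand-in for a real classifier; the privacy issue is that
        # post.image_bytes gets inspected here, off-platform.
        return post.image_bytes is not None and len(post.image_bytes) > 0


def build_feed(posts: List[Post], middleware: NudityFilterMiddleware) -> List[Post]:
    # The platform hands every candidate post to the user's chosen
    # middleware, so filtering happens before anything reaches the user.
    return [p for p in posts if middleware.allows(p)]
```

The design point worth noticing is that `build_feed` can't filter without disclosure: there's no way for the middleware to say yes or no without being shown the content first.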
Social media is an outgrowth of surveillance capitalism (making money off your browsing history and other data to build targeted ads). This is leaking into the real world with the Internet of Things - e.g. a car insurance company could track your driving habits and adjust your premium in real time, or even send you behavioural nudges (a notification that if you keep driving as fast as you are, they'll increase your premium). The most profitable way of predicting behaviour for things like targeted ads is to modify that behaviour, and politicians and governments can therefore do this too (Cambridge Analytica, Chinese social credit scores), using the same technology Instagram uses to nudge you into buying things you don't need.
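As a toy illustration of that insurance example (the figures and function names here are made up for the sketch, not taken from the video), a single telemetry reading feeds directly into both the price you pay and the nudge you're sent:

```python
from typing import Optional, Tuple

BASE_MONTHLY_PREMIUM = 60.0  # arbitrary figure, purely for illustration


def reprice(premium: float, speed_kmh: float, limit_kmh: float) -> Tuple[float, Optional[str]]:
    """Adjust a premium from one telemetry reading and maybe emit a behavioural nudge."""
    if speed_kmh <= limit_kmh:
        return premium, None
    # Surcharge scales with how far over the limit the driver is.
    surcharge = 0.5 * (speed_kmh - limit_kmh)
    nudge = (
        f"You're doing {speed_kmh:.0f} km/h in a {limit_kmh:.0f} zone. "
        f"Keep this up and your premium rises to £{premium + surcharge:.2f}."
    )
    return premium + surcharge, nudge


new_premium, nudge = reprice(BASE_MONTHLY_PREMIUM, speed_kmh=92, limit_kmh=70)
# new_premium -> 71.0; nudge is the notification pushed to the driver's phone
```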
These actions (behavioural nudging, constant surveillance, waiving rights through ToS) are anti-democratic and corrosive to society. He ties this in with cryptocurrency's ambition to replace banks and governments with the blockchain, and with AI. Shoshana Zuboff proposes legislation to outlaw all these forms of technology, but that will be very difficult considering how many huge companies and governments rely on this tech. Ollie argues he could still do YouTube without all the tracking data, like we did in 2005, and it's not even an anti-capitalist argument, because you could do capitalism without this tech (in fact, surveillance capitalism arguably represents a break from free-market ideology into something more centrally controlled). Whole sections of the government and the economy are effectively an illicit influence campaign - which runs counter to the principles of democracy itself.
Although maybe that's not such a bad thing (he does shifty eyes here to show his character becoming villainous). If democracy is just a way of gathering and processing information, why not replace it with computers that know what everybody wants, condition people into wanting things, and solve their problems before they even know they have them? Replace democracy with a system where the best, most technically able people make the decisions rather than the unwashed mob? Ollie is very smart but has lost every election he's voted in - he lives in England, and have you seen the people who run that place? He should be in charge! He could do a better job with his eyes closed! And if people wouldn't listen to him, he could use tech to make them listen. Cut back to Ollie's character being non-villainous, explaining how common that sort of thinking is becoming in the tech world. Tech can enable authoritarians, and while democracy can be frustrating, people with lots of knowledge don't have the right to tell others what to do - they should use that knowledge to help others instead. Democracy isn't optimised for producing the best decisions or the best leaders, but it's the system with the most respect for everyone's equal humanity.