The horrors experienced by Meta moderators: ‘I didn’t know what humans are capable of’ - A court ruling in Spain that attributes a content moderator’s mental health problems to the job paves the way for acknowledging that fact for at least 25 other employees as well



The Glòries tower in Barcelona, Spain, where Meta's subcontractor for content filtering is located.

After having to repeatedly watch numerous videos of suicides, murders, dismemberment and rapes, he had a panic attack and asked for help. This employee worked as a content moderator at a company that serves Meta, which owns Facebook, Instagram and WhatsApp. He was told to go to the fun floor: a large game room located in Barcelona’s Glòries tower, where the content moderation offices for the Californian tech giant are located. He sat staring blankly at a ping pong table. The fun floor didn’t help him at all. On another occasion, two hours after he had another panic attack, his boss gave him permission to go see a company psychologist. She was on another floor, the psychology floor. He spent over half an hour talking to her, getting it all out. When he finished, she told him that his work was very important to society, that they were all heroes, that he should be stronger. And that their time was up.

Content moderators are in charge of keeping walls and feeds clean and peaceful on Facebook and Instagram. Millions of people use these platforms every day, unaware that this dark side exists. These workers are the ones who decide whether fake news or photos that do not comply with Meta’s policy get published. But they are also the ones who have to deal with the most brutal content: viewing it, evaluating it, censoring it and, if necessary, sending it to the police.

In 2018, the company CCC Barcelona Digital Services moved into the Glòries tower. The announcement was very well received by the Catalan authorities: the tech giant’s subcontractor swelled the ranks of innovative companies located in Barcelona and occupied part of a building that had just lost its bid to house the headquarters of the European Medicines Agency.

The company began to hire people — primarily young foreigners who spoke several languages — to moderate content from different markets. Last October, an investigation by La Vanguardia exposed the conditions under which these moderators work. Before that, Catalonia’s Labor Inspectorate launched an investigation in 2021, and the following year imposed a fine of over €40,000 ($43,314) on the company for deficiencies in the evaluation and prevention of psychosocial risks in the workplace. In 2020, the company was acquired by the Canadian firm Telus International, which claims that the accusations are false and that it has sufficient safety measures in place.

This worker started there in 2018 and stayed until 2020, when he went on medical leave for his mental health problems. The company and its mutual insurance company classified it as a common illness. “We then requested a change of contingencies, because his case fit perfectly with that of an occupational accident. The National Institute of Social Security agreed with us, and the company appealed, which triggered the legal process,” explains Francesc Feliu, a partner in the law firm Espacio Jurídico Feliu Fins, which specializes in healthcare issues.

On January 12, the 28th Labor Court of Barcelona rejected the company’s claim and ruled that the worker’s sick leave should be classified as a workplace accident. This is the first judgment to recognize that the mental illness a content moderator suffers is caused by his work. “Work-related stress is the sole, exclusive and unquestionable trigger” of the disorders, says the ruling, which can still be appealed. Feliu represents some 25 other workers who are waiting for their illness to be recognized as an occupational accident, and in October he also filed a criminal complaint against the company, alleging a lack of safety measures.

The worker has requested anonymity because he is subject to strict confidentiality agreements, and he prefers not to talk about how he feels or about very personal issues, because the wounds left by this work are still fresh. He is having a hard time with the news coverage of the ruling, which makes him relive what he saw. “But at least this is encouraging more people to seek justice,” he notes.

When he started working at the company, he had no idea of the violence of the videos he would have to watch. “They told me, but superficially, and then when you start you see that things are much, much worse...,” he says. The lawyer explains that the work is well paid (about €2,400 [$2,600] gross per month, although there are salary differences between workers responsible for different markets, over which another law firm has also gone to court), no experience or training is required, and young foreigners are attracted to it: “They say, ‘Look, how cool, I’ll work for Meta,’” explains Feliu. The affected worker points out that the illusion doesn’t last long: “People are not at all aware of what is going on. Before I worked there, I assure you that I didn’t know what humans were capable of.”

The workers suspect that they were training AI

Feliu explains that at that time — ”the conditions may have changed now,” he says — the content moderators with the best efficiency scores (as determined by a monthly worker evaluation) were placed in a high-priority section. That is, they received a constant stream of videos, photos and posts depicting suicides and terrorist acts.

That was the section where the worker in this case was assigned: “Constantly seeing this makes you more sensitive to everything. After a while I couldn’t even look at a suicide note,” he explains. Moderators had to strictly follow Meta’s policy, and often watch the videos to the end, several times, with different moderators. “For example, you had to keep watching a live video of someone explaining that they wanted to commit suicide, and you couldn’t delete it or alert the police if you didn’t see something in the scene that suggested suicide — a gun, an open window... Sometimes they would suddenly pull out the gun and shoot themselves, without you being able to do anything,” he laments.

To remove a video, they had to justify the decision: “You had to rate the video by the worst thing that happened [in it], according to a scale. If the video started with some kind of violence, you had to wait for something more serious, like murder, dismemberment or sexual abuse, to rate it as the most serious. If the most serious violence came up at first, the system would let you delete it.”

This procedure made the workers suspicious. “If you already see that something is violent at second 10, why do you have to wait? You come to the conclusion that what they are doing is training artificial intelligence (AI), that [workers] are cannon fodder,” says Feliu. When asked about this, a spokesman for the subcontractor did not clarify whether such a project exists and referred all questions to Meta.

The company employs some 2,000 people, after cutbacks at Meta led to layoffs that reduced the subcontractor’s workforce last year. The works council has not responded to questions from this newspaper, and the company has appealed the ruling. In a statement, Telus explains that, “thanks to the comprehensive wellness program,” by December of last year it had reduced sick leave to 14% of the workforce, and that only “between 1% and 2%” were work-related mental health leaves.

The company claims that it has engaged outside medical support, that the team has a range of counselors available 24 hours a day, that workers can call for breaks and emergency sessions whenever they see disturbing content, and that it has technology to blur videos or turn off the sound if necessary. “Any suggestion that employees are constantly exposed to disturbing content for eight hours a day is false,” the note says, adding that its workers’ well-being is a priority. At the trial, the company denied any link between the employee’s mental illness and his work, arguing that he had seen a psychologist when he was 16 years old.

The worker explains that when he was with the company, there were timed five-minute breaks every hour, during which he could not go outside for fresh air because just going down in the elevator used up the allotted time. The lunch break was 20 minutes, and workers had activities such as yoga sessions and games, “but [there was] no specific follow-up” for employees who evaluated some 400 pieces of content every day.

In addition, the rotating schedules — one week in the morning, one week in the afternoon, one week in the evening, one week at night — disturbed their rest, “which was already difficult because of the nightmares.” Feliu says that “25% of people were systematically on sick leave, plus all those who left the job before taking sick leave.” The attorney believes that this ruling, and those to come, will help push the company to change things: “Content moderators are essential for social media, but so are their [working] conditions.”




Cry me a river, janny.

 
As someone who knows someone in the US that did this job, this wasn't really talked about enough. They were hired, told they were going to look through political and educational content. It seems a lot of it is offshore rather than in the US, and I'm not sure why. The way the company is divided is something of a mess, and they never rotate people out. Their psychiatry is fucking woo woo bullshit. They use those bowls that you run a stick around, and arts and crafts. Rarely do they actually have psychs - they have people who are hypnotists and bottom barrel loony themselves. I understand from the workers' side in the sense that they probably weren't told about this in the hiring - which can be shocking - but they can also leave. The dude I knew was heavily demotivated and said they run it like a cult.

Dunno, I understand their position, but if you were there for two years - I'm also side-eying it. Most places give you disability in a year - is that different in Spain? Some shit can't be wiped from memory, but I also don't know who I would curse with seeing this over and over again, even if I am desensitized to hell and back. Mainly cause fuck Meta - but also I don't particularly like gore porn and they can fuck off. Apparently - if insider knowledge holds - they've moved most if not all of their content moderation overseas and into really big metro areas, and like fuck are those pussies going to handle that shit.
 
Funny watching hot takes in this thread because someone saw an Iraqi beheading video when they were 13 and thinks they're "desensitized". kek xd

I wouldn't do this job if it paid me 300k per year. Some shit can't be wiped from memory.
IMO some people think they can handle it, but it subconsciously still affects them. Brains are weird.

I occasionally see shock images (or an approximation) in my sleep. I can look at bloody bodies in pictures and feel nothing, but then have dreams with a pile of them and disembodied screaming.
 
If they think this is bad, imagine being a Fed forced to not only moderate 4chan but also pay close attention to everything they talk about. It's amazing they haven't swallowed a gun for all the shit they're forced to see, on top of being reminded constantly that their job is an utter joke and their masters care very little about them and see them as expendable.

The kicker here is, there are tons of government agents from every single major power who have to do the same thing in that cesspool. And they understand that if they banned the place or torched it, that cancer would spread all over the Internet, tainting just about every site there is even worse than it is now.

It took a couple of years for the ideas of JQ, Hoeflation and eat ze bugs to take root with the plebs. If 4chan were to go, it might take only a few months or even weeks to infect the plebs.

Every few years this jannie issue comes up and makes news one way or another. The real story is that for all its hype and promises, AI can't do the "Ignore or Delete" job reliably and effectively. So some human in front of a monitor has to. They're little more than sweatshop workers and they're understandably (and undeniably) affected by all the disturbing shit they experience in their "working conditions".
And the best part about any AI that is sent to absorb data from "hate" is that it ends up becoming the next Tay and, in turn, a detriment to its owners. And anons have all the time in the world to fuck with AI. For starters, they can't censor corporate names even if used pejoratively.
 
And the best part about any AI that is sent to absorb data from "hate" is that it ends up becoming the next Tay and, in turn, a detriment to its owners. And anons have all the time in the world to fuck with AI. For starters, they can't censor corporate names even if used pejoratively.
Arthur C. Clarke predicted this decades ago with HAL in 2001.
Either you create something that can learn from available data, or you create something that lies to fit your directives. You can't have both.
 
how come these people are the ones that end up doing this job instead of the kaotic/rekt thread-loving 'teens that have been sending hd footage of naked chinese men falling dick first into an industrial rock crusher to each other since they were 12
Because those kids grew up to become normal people: they finished school and went into blue collar jobs where they met other normal blue collar workers to laugh about Chinese men's dicks with. It was just a phase for them. They still use the internet, but they don't obsess about it.

Meanwhile, jannies were likely kids who grew up overprotected and were constantly fed "we need to be the change!" bs that turns a kid into a fragile liberal who thinks being an online moderator is akin to landing at Normandy, and they can't separate reality from the internet.

People who work certain types of extreme jobs (police, health, etc.) separate their work from their personal life, otherwise they'd go insane. They see a dead kid in the morning, then they learn to leave that behind and go home to the actual kids they hold dear. Trannie Jannies see pepe memes on Facebook and then go home and log in to Facebook...
 
I wouldn't call grossing $2,600/month well-paid at all.

Wonder how these mods would be doing mentally if they were paid twice as much, with longer breaks, lunch hours, and slower-rotating shifts.
€2,400/month is about twice the minimum wage in Catalonia. Factor in that the cost of living is significantly lower than in any major North American city (plus free healthcare, prolly still heavily subsidised public transport, and whatever perks come with working for a Meta contractor) and it's not a bad deal at all at first glance.
 
This procedure made the workers suspicious. “If you already see that something is violent at second 10, why do you have to wait? You come to the conclusion that what they are doing is training artificial intelligence (AI), that [workers] are cannon fodder,” says Feliu. When asked about this, a spokesman for the subcontractor did not clarify whether such a project exists and referred all questions to Meta.
So do they want to work in it or not? You can't complain that you have mental damage from a job and then complain that there are plans to automate the job. This kind of shit really highlights how those people want to feel like victims for the job of watching videos for eight hours and pushing a button, while already being coddled.
 
One thing to keep in mind is this:

1) Video where dude holds up severed head twice for 3 seconds each and rants about the feds for 15 minutes.

2) Video where old-ass dude from Philly talks about how his neighborhood has changed in the last 50 years and uses the nigger word twice.

3) Take me to funky town!

4) Subscribe to my OnlyFans!

5) https://www.youtube.com/watch?v=ZE-JoOQ9s7c

6) Watch me fuck a 10 year old boy in the ass!

But the kicker is that you have no clue before you hit "play" what you're going to get.

You tell me it's a fuckin' severed head video, I can't exactly be shocked when I get a fuckin' severed head video.

You tell me it's Magdalen Berns ripping on Riley J. Penis and instead it's a kid getting the long end of a short dick?

These people have no idea what they're going to see before they see it.

Not cool, man. Not cool.
 
Yeah, I have full sympathy with the workers. They likely underestimated, and were undersold on, the severity of what they'd be dealing with.

On top of that, it's not like going on efuckt. You don't know what you're going to see. When something horrible happens, you can't close your eyes, exit out of the tab or take a breather - instead, you've got to rewatch it several times with other moderators, discuss it and rate the severity. Going over it multiple times is asking to burn it into your brain.

Plus this isn't gore that's been reposted on a gore website. It's stuff that's usually been posted by the people in the video. Some of it is happening live. Some of it - as the article observes - is stuff that you could do something to stop as it's happening, but your company policy means you can't intervene until it's too late, and then you've got to rewatch the person you failed to stop from blowing their brains out several times while discussing what you saw and if there were any trigger points you missed that would have allowed you to intervene.

And unlike gore sites of old, this isn't CCTV footage of an industrial accident or grainy crime scene photographs. It's often going to be in shiny 4K displayed on high resolution monitors in a format that looks like something you see every day (Instagram photos, Instagram stories, Facebook Lives). And it'll be the stuff that clearnet gore sites won't even share, like child rape.

And if that all becomes too much for you? "Colleague, you have been allotted half an hour with a company approved MoraleBooster Counsellor who will tell you that you are a hero, and you will be assigned an additional half hour on the Fun Floor". It's surreal and dystopian. No wonder they're having nervous breakdowns and becoming hypersensitive; you'd have to be a very strong person not to in that situation, and they're mostly just getting naive young normal people who can't say no to the money, people whose Mum probably gushes about how proud she is of her son or daughter working in the big fancy office building for that big famous company, people who lack the life experience to intuitively think "I'll quit and get a job in a shop and go back to being poor, because no job is worth this". People who didn't know what they were signing up for.

Sure, it's not like being a first responder or a soldier and dealing with this shit face to face, but it's clearly still damaging to them and they can't maintain the same degree of separation between work and their private lives.

The only eyebrow raising bit for me is them complaining about AI. I mean I guess I could see the argument that having to rewatch and rate the videos to train the AI is making the job harder than just being a janny. But it seems like they're saying they want to carry on doing their jobs, just in a way where they don't have to see any gore or traumatising videos... which isn't an option, so obviously it would be better to get an AI to do this.
 
My nigga do you know how decimal points work?

This is 43K a year we're talking.

Maybe that sounds like untold riches where you live, but where I live this is wagey shit.
I am aware of decimals and how to convert one currency into another, newfag. I was referencing @skunt's post.
I wouldn't do this job if it paid me 300k per year. Some shit can't be wiped from memory.
 
This article is part of a narrative setup for why all internet companies will start using "A.I." to censor everything in the near future. It's what's good for mental health!
 
Funny watching hot takes in this thread because someone saw an Iraqi beheading video when they were 13 and thinks they're "desensitized". kek xd

I wouldn't do this job if it paid me 300k per year. Some shit can't be wiped from memory.

If you had been here longer, you would know that kiwifuzz is not a newfag, Mr "I joined in 2022 and haven't made 1000 posts yet".

Go sit in a corner.

what is he, your boyfriend lmao why are you in here defending milady's honor?
 
I definitely agree that the unpredictability plays a major role in how mentally straining this kind of work is. Your first click of the day might be a vacation album where one of the photos gets flagged for a questionable pose but, upon review, it's just the daughter building a sand castle, and the next click could be a methhead boiling their newborn alive.
 