CN TikTok Explicitly Bans Misgendering, Deadnaming, Misogyny, and Promotion of ‘Conversion Therapy’ - Has Douyin (Chinese Market TikTok) done the same?

Strengthening our policies to promote safety, security, and well-being on TikTok

By Cormac Keenan, Head of Trust and Safety
Today we're announcing updates to our Community Guidelines to further support the well-being of our community and the integrity of our platform. Transparency with our community is important to us, and these updates clarify or expand upon the types of behavior and content we will remove from our platform or make ineligible for recommendation in the For You feed. We routinely strengthen our safeguards so that TikTok can continue to bring people together to create, connect, and enjoy community-powered entertainment long-term.

Building a safe and secure entertainment platform
At TikTok, we believe people should be able to express themselves creatively and be entertained in a safe, secure, and welcoming environment. Our Community Guidelines support that by establishing a set of norms so that people understand what kinds of content to create on our platform and viewers know what to report to us. Our policies are designed to foster an experience that prioritizes safety, inclusion, and authenticity. They take into account emerging trends or threats observed across the internet and on our platform. We also listen to feedback from our community, our APAC Safety Advisory Council, and other experts in areas like digital safety and security, content moderation, health and well-being, and adolescent development.
Some of the main updates we're announcing today and implementing over the next few weeks include:
  • Strengthening our dangerous acts and challenges policy. We continue to enact the stricter approach we previously announced to help prevent such content - including suicide hoaxes - from spreading on our platform. This previously sat within our suicide and self-harm policies, but will now be highlighted in a separate policy category with more detail so it's even easier for our community to familiarize themselves with these guidelines.
  • Broadening our approach to eating disorders. While we already remove content that promotes eating disorders, we'll start to also remove the promotion of disordered eating. We're making this change, in consultation with eating disorders experts, researchers, and physicians, as we understand that people can struggle with unhealthy eating patterns and behavior without having an eating disorder diagnosis. Our aim is to acknowledge more symptoms, such as overexercise or short-term fasting, that are frequently under-recognized signs of a potential problem. This is an incredibly nuanced area that's difficult to consistently get right, and we're working to train our teams to remain alert to a broader scope of content.
  • Adding clarity on the types of hateful ideologies prohibited on our platform. This includes deadnaming, misgendering, or misogyny as well as content that supports or promotes conversion therapy programs. Though these ideologies have long been prohibited on TikTok, we've heard from creators and civil society organizations that it's important to be explicit in our Community Guidelines.
  • Expanding our policy to protect the security, integrity, availability, and reliability of our platform. This includes prohibiting unauthorized access to TikTok, as well as TikTok content, accounts, systems, or data, and prohibiting the use of TikTok to perpetrate criminal activity. In addition to educating our community on ways to spot, avoid, and report suspicious activity, we're opening state-of-the-art cyber incident monitoring and investigative response centers in Washington DC, Dublin, and Singapore this year. TikTok's Fusion Center operations enable follow-the-sun threat monitoring and intelligence gathering, as we continue working with industry-leading experts to test and enhance our defenses.
Additionally, our community can find more information about the content categories ineligible for recommendation into For You feeds. While the ability to discover new ideas, creators, and interests is part of what makes our platform unique, content in someone’s For You feed may come from a creator they haven’t chosen to follow or relate to an interest they haven’t previously engaged with. That’s why when we come across content that may not be appropriate for a general audience, which includes everyone from teens to great-great-grandparents, we do our best to remove it from our recommendation system.
Every member of our community will be prompted to read our updated guidelines when they open our app in the coming weeks.
Staying accountable to our community
The strength of a policy lies in its enforceability. Our Community Guidelines apply to everyone and all content on TikTok, and we strive to be consistent and equitable in our enforcement. We use a combination of technology and people to identify and remove violations of our Community Guidelines, and we will continue training our automated systems and safety teams to uphold our policies.
To hold ourselves accountable to our community, NGOs, and others, we release Community Guidelines Enforcement Reports quarterly. Our most recent report, published today, shows that over 91 million violative videos were removed during Q3 2021, which is around 1% of all videos uploaded. Of those videos, 95.1% were removed before a user reported them, 88.8% before the video received any views, and 93.9% within 24 hours of being posted. We continue to expand our system that detects and removes certain categories of violations at upload – including adult nudity and sexual activities, minor safety, and illegal activities and regulated goods. As a result, the volume of automated removals has increased, which improves the overall safety of our platform and enables our team to focus more time on reviewing contextual or nuanced content, such as hate speech, bullying and harassment, and misinformation.
We've made significant strides to improve our policies and enforcement, including our efficacy, speed, and consistency, though we recognize there's no finish line when it comes to keeping people safe. We're driven by our passion to help everyone have a good and enriching experience on TikTok.

Source
Archive
 
Broadening our approach to eating disorders.
While we already remove content that promotes eating disorders, we'll start to also remove the promotion of disordered eating. We're making this change, in consultation with eating disorders experts, researchers, and physicians, as we understand that people can struggle with unhealthy eating patterns and behavior without having an eating disorder diagnosis. Our aim is to acknowledge more symptoms, such as overexercise or short-term fasting, that are frequently under-recognized signs of a potential problem. This is an incredibly nuanced area that's difficult to consistently get right, and we're working to train our teams to remain alert to a broader scope of content.
Nikocado's account getting nuked soon then?
 
Death Fats are gonna Death Roll in their structurally unsafe bath tub when they read this.
 
In addition to educating our community on ways to spot, avoid, and report suspicious activity, we're opening state-of-the-art cyber incident monitoring and investigative response centers in Washington DC, Dublin, and Singapore this year. TikTok's Fusion Center operations enable follow-the-sun threat monitoring and intelligence gathering, as we continue working with industry-leading experts to test and enhance our defenses.
Excuse me, what?

Edit: Seriously, this is more concerning than the deadnaming stuff. The more I read up on it, the more it reads as if they're trying to find ways to put a stop to any oppositional videos before they start trending. This means if you're trying to share protest videos and the like, well, no one will get to see it.

Of course, the deadnaming stuff is bad too. This means you won't see any differing views about troons. It also means you can't expose troon criminals by dropping their real names (Think of not being allowed to warn others about Jonathan Yaniv back when all platforms came out with the same rule at the same time and removed any accounts with his name).
 
I've been seeing this language of "conversion therapy" being used more often to silence discussion around trans issues. Basically if your kid questions their gender for any reason whatsoever (they could be gay, trauma from sexual abuse, uncomfortable about changes during puberty, etc) then if you seek therapy to solve any underlying issues instead of immediately taking the child to a gender affirming therapist to put them on puberty blockers and hormones, you are supporting conversion therapy and you will kill your child. And this is now normie language in Mom circles.
 
I've been seeing this language of "conversion therapy" being used more often to silence discussion around trans issues. Basically if your kid questions their gender for ANY reason whatsoever (they could be gay, trauma from sexual abuse, uncomfortable about changes during puberty, etc) then if you seek therapy to solve any underlying issues instead of immediately taking the child to a gender affirming therapist to put them on puberty blockers and hormones, you are a BIGOT supporting conversion therapy and you will kill your child. And this is now normie language in Mom circles.
Take your meds schizo
 
Lord have mercy, if you call this journalism art...
Null had a segment on this type of shitty artwork. From what I recall, it's not just that the artist is shit; it's drawn this way deliberately to make fatties feel better.


On this matter, I believe that TikTok was designed to promote degeneracy and accelerate the weakening of the West.

I, for one, welcome President Xi.
 
I can't wait for this ugly era of art to die.

It looks like it was drawn by a six year old who doesn't understand anatomy yet. And I'm pretty sure that she-hulk in the middle is a troon.

The weird part is that this isn't the first time this type of shit art has been around. Check out the late 80's into the early 90's era, the shit was very similar.
 
Wow, I guess the Left really has won the culture war.

With conservative opinions being pretty much outlawed on 99% of social media and 90% of the MSM, the next generation will grow up pretty far left in viewpoint, as that's all they're going to be exposed to. With the push towards progressive propaganda, the Overton window is going to go out the window!


Oh well, at least I got mine. I do feel sorry for the kids that will have to tread the dangerous, pedophile infested waters of a progressive dominated society.

Bring on the leftist dystopia! I'd like to see the end of Western Civilization before I croak, should make for good laughs.
 
Edit: Seriously, this is more concerning than the deadnaming stuff. The more I read up on it, the more it reads as if they're trying to find ways to put a stop to any oppositional videos before they start trending. This means if you're trying to share protest videos and the like, well, no one will get to see it.

Of course, the deadnaming stuff is bad too. This means you won't see any differing views about troons. It also means you can't expose troon criminals by dropping their real names (Think of not being allowed to warn others about Jonathan Yaniv back when all platforms came out with the same rule at the same time and removed any accounts with his name).
Same way the Chinese social media algorithms work. Any time a bad story threatens to trend (such as schoolchildren getting sick from an illegal waste dump by the local government, etc.), it gets caught, the sharers are temporarily shadowbanned, and the full trending feed is rinsed with random "hot" gossip items. This is very effective at scaring concerned users off from complaining and memory-holing the entire incident.

This way no user can claim that they are being silenced ("you weren't banned, liar!"), and it creates the illusion that their views never reach anyone simply because nobody finds them important or cares. Eventually the user abandons the account because it's unusable from the constant shadowbanning, or gives in and only posts inoffensive cat memes.
 