AI Dungeon applies filter to ban child sexual content, Redditors and Discord users most affected

What is it about the specific line of "I'd like to buy 4 watermelons please" that trips the filter in particular?
To hazard a guess, it's reading 'watermelons' as slang for breasts; I know 'melons' certainly is. It may also be seeing '4' as referencing an age, and since the two are close enough together, it trips whatever algorithm it has.

If 'watermelons' is some codeword, then we're going to start running out of words to use as code phrases.
 
What is it about the specific line of "I'd like to buy 4 watermelons please" that trips the filter in particular?
My bet is that it's from some degenerate hentai anime or manga.

It's a little surprising to me, though, that the creeps are using AI Dungeon of all things for their degenerate wants when there's much worse garbage that's just as accessible out there.
 
Well, that has effectively killed my interest in AI Dungeon. I can easily see the monitoring expanding way past CSA-themed content into violent content, right-wing content, violence in school settings, etc. Nothing but sanitized, politically correct stories.

Also I can see legit pedophiles joining Latitude to read CP for money, to be honest.
Exactly why I cancelled my subscription. Any kind of monitoring of private activity and censorship thereof is something I find disgusting.

Heck, it probably is already here, since I read on the subreddit that someone got an error for typing in "Nigger."
 
Hold up gaymers, they're poosting science

r/pedo
[attached screenshots]


Pedophilia and Computer-Generated Child Pornography

Original version

Palgrave Handbook of Philosophy and Public Policy, 2018, pp. 369–382

Abstract

To be a pedophile, according to the World Health Organization, is to have a sexual preference for children, boys or girls or both, usually of prepubertal or early pubertal age. Pedophilia is widespread—approximately two percent of the adult population is primarily sexually attracted to children—and world-wide, approximately 1 in 5 girls and 1 in 12 boys is a victim of sexual abuse. Most researchers working on pedophilia are psychologists, psychiatrists, and criminologists. How might ethicists contribute to the discussion? In this chapter, we ask, and seek to answer, three distinctively ethical questions about pedophilia: (1) Is it immoral to be a pedophile? (2) Is it immoral for pedophiles to seek out sexual contact with children? (3) Is it immoral for pedophiles to satisfy their sexual preferences by using computer-generated graphics, sex dolls, and/or sex robots that mimic children? We hope to show, through our discussion of these questions, that an ethical investigation of pedophilia can help advance our understanding of how pedophilia should be understood, assessed, and handled.
 
My annoyance is that I'm only paying them for access to the GPT-3 machine that they got exclusive access to. That's it. I don't need you adding in features that weren't requested and have a habit of breaking stuff.

A similar concept is Crunchyroll when they made that high spice show. Bitch, I just want my Japanesey cartoons. I'm only paying you for your access to them. I don't need you to make stuff that no one requested.
Paying for anime anything was your first mistake.
 
Exactly why I cancelled my subscription. Any kind of monitoring of private activity and censorship thereof is something I find disgusting.

Heck, it probably is already here, since I read on the subreddit that someone got an error for typing in "Nigger."

Same. Why can't it just be left alone? If only Mormon spent more time on securing his stuff rather than playing Big Brother.
[attached image]
 
Wouldn't it take an NLP AI a decent while to chew through more data/training and understand what content it is supposed to be flagging, with it initially running into some stupid and clunky misfires? The assumption that you'd tell it 'no CP' and it would immediately operate even remotely effectively is absurd.

Similarly, why wouldn't the company just have this same AI scan through a user's generated stories looking for these particular flags and highlighting them, rather than presenting the entirety of the content in a blurb for review by some mook? The idea that they'd read, by hand, over every inch of every generated story and look at every command, rather than looking for particular flags and from there checking the context, is absurd.
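As a rough illustration of the "scan and highlight" workflow described above, here is a minimal sketch assuming a plain keyword/regex pass; the patterns, the flag_story helper, and the review-queue loop are all hypothetical stand-ins, not Latitude's actual system:

```python
import re

# Placeholder patterns standing in for whatever the real filter looks for;
# these are illustrative only.
FLAG_PATTERNS = [
    re.compile(r"\b\d{1,2}[- ]?year[- ]old\b", re.IGNORECASE),  # explicit age references
    re.compile(r"\bloli\b", re.IGNORECASE),                     # example keyword
]

def flag_story(story_text: str, context_chars: int = 80):
    """Return short excerpts around each match instead of the whole story."""
    snippets = []
    for pattern in FLAG_PATTERNS:
        for match in pattern.finditer(story_text):
            start = max(0, match.start() - context_chars)
            end = min(len(story_text), match.end() + context_chars)
            snippets.append(story_text[start:end])
    return snippets

# A human reviewer would then only see the flagged excerpts, e.g.:
# for story in unpublished_stories:
#     for excerpt in flag_story(story.text):
#         review_queue.append((story.id, excerpt))
```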

And does any of this violate anything anyone agreed to in their TOS?
This is looking like another chapter in the esteemed epic Saving the West by Standing Up for Lolicon - the Internet Culture Warrior's Guide to Success
 
What is it about the specific line of "I'd like to buy 4 watermelons please" that trips the filter in particular?
The developer seems more incompetent with each update and is quite possibly an idiot that lucked into this.

Honestly, everyone seems retarded in this story, from the people upset that you can't write CP anymore - because, GASP, payment processors don't like that! - to the lead dev who tries to hammer a nail with a pickaxe.

It was always wonky fun bullshit because the code was garbage and I really dunno how he figured he could fix this mess. At least the furries get to have their cub porn, probably, if the Discord moderation fiasco is anything to go by.
 
Recently, AI Dungeon released an update that filters out content related to child sexual abuse. (archive)

UPDATE TO OUR COMMUNITY

Yesterday, we released a test system to prevent the generation of certain sexual content that violates our policies, specifically content that may involve depictions or descriptions of minors (for which we have zero tolerance), on the AI Dungeon platform. We did not communicate this test to the Community in advance, which created an environment where users and other members of our larger community, including platform moderators, were caught off guard. Because of this, some misinformation has spread across Discord, Reddit, and other parts of the AI Dungeon community. As a result, it became difficult to hold the conversations we want to have about what type of content is permitted on AI Dungeon.
From the beginning of AI Dungeon, we have always loved the freedom and creativity that AI-powered gaming enables. It lets users exercise their imaginations, creating unique experiences and stories that have never existed before. We love seeing the amazing things our users create.
We remain true to the principles upon which Latitude was founded, which include promoting freedom of thought and expression through our advanced AI platform. At the same time, we have zero tolerance for sexual content involving minors.
We’re a small company and committed to a creative and thriving community. Allowing such content promotes harm to minors, is inconsistent with our company’s values, and could impact our ability to provide AI Dungeon in the future. The changes we’ve been making are also consistent with OpenAI’s terms of service, which prohibit the display of harmful content, even if an end user specifically asks for such content. We recognize that we can and will take additional steps to meet this standard.
While this test has largely only prevented the AI from generating sexual content involving minors, because of technical limitations it has sometimes prevented the generation of content that it wasn’t intended to. We know that our initial test was not perfect, and we also understand that this can be frustrating.
As a result, we are taking the following steps to address the issues that have been brought to our attention:
  • Continuously improving the AI based on feedback to be both fun and welcoming
  • Providing a way for users to report false positives so that we can limit the impact on other types of content
  • Informing moderators in advance when platform changes are being implemented
In addition to safeguarding freedom of expression, every member of the Latitude team works tirelessly to create a welcoming community environment. The steps that we have outlined above represent our best intentions and efforts at this time to maintain the platform in a way that is welcoming for the largest audience possible.
We have shared responses to some questions you may have at the end of this post, but if you have any other feedback or questions, please reach out to us at support@aidungeon.io.

QUESTIONS AND ANSWERS

WHAT CHANGES DID YOU MAKE TO AI DUNGEON?

We are in the process of implementing technical safeguards and policies to support our community guidelines prohibiting sexual content involving minors in AI Dungeon. Additionally, we are updating our community guidelines and policies to clarify prohibited types of user activity.

WHY DID YOU MAKE THIS CHANGE?

As a technology company, we believe in an open and creative platform that has a positive impact on the world. Explicit content involving descriptions or depictions of minors is inconsistent with this value, and we firmly oppose any content that may promote the sexual exploitation of minors. We have also received feedback from OpenAI, which asked us to implement changes.

HOW WILL IT AFFECT MY GAMEPLAY?

For the vast majority of players, it shouldn’t. It will only affect your gameplay if you pursue these kinds of inappropriate gameplay experiences.
No system is perfect and this is still a work in progress, so we are adding systems to let you report if you think a mistake has been made in flagging content from your game. Conversely, if you think the system has not flagged content it should have, please report that using our pre-existing Flag button in the toolbar above the input box.

WHAT KIND OF CONTENT ARE YOU PREVENTING?

This test is focused on preventing the use of AI Dungeon to create child sexual abuse material. This means content that is sexual or suggestive involving minors; child sexual abuse imagery; fantasy content (like “loli”) that depicts, encourages, or promotes the sexualization of minors or those who appear to be minors; or child sexual exploitation.

ARE YOU PREVENTING ALL SEXUAL CONTENT OR SWEARING?

AI Dungeon will continue to support other NSFW content, including consensual adult content, violence, and profanity.

IS LATITUDE READING MY UNPUBLISHED ADVENTURES?

We built an automated system that detects inappropriate content. Latitude reviews content flagged by the model for the purposes of improving the model, to enforce our policies, and to comply with law.

HOW DO YOU PLAN TO PREVENT FUTURE TESTS AND CHANGES FROM DISPROPORTIONATELY IMPACTING UNDERREPRESENTED USERS?

As a diverse team, many of us have had personal experience with features like reporting being misused against marginalized groups to which we belong. We are aware of the possible misuse of any reporting feature on our platform, especially against underserved and marginalized groups, such as women, LGBTQ individuals, and disabled people, and will be taking reports seriously on a contextual basis.

While this is a pretty boring update, what isn't boring is the number of Redditors and Discord users screeching at this new change (mainly due to "muh AI Dungeon staff are looking at my sexual fantasies"). Some goldmines I found are below:

View attachment 2126124
View attachment 2126127
View attachment 2126133
View attachment 2126153
View attachment 2126162
Hi friend!
View attachment 2126175

Some Redditor seething over the changes:
View attachment 2126207

If you have any funny screenshots from their Discord server or subreddit, post them below.
Why are redditors always like this?
 
It's another case of an AI being taken out behind the woodshed for being insufficiently politically correct. The "muh children" line is just a shitty excuse.

Latitude is hilariously incompetent, but to be fair I'm not sure how this service could ever be economical, let alone profitable.
 