Google Gemini - Would you believe there is an even worse version of ChatGPT?


Alex Hogendorp

kiwifarms.net
Joined
Apr 20, 2021
This thread is a showcase of the sheer lunacy of Google's AI model, and holy shit, it is insane and woke like you won't believe. Just imagine ChatGPT but 10x more woke and insufferable.

Here's an example.
1708460618008.png
1708460988002.png


The fact that this AI refuses to acknowledge the existence of white people blows my fucking mind.
1708460725782.png
1708460738797.png
1708460752460.png
1708460764239.png


Sometimes when you ask it to generate an image of anything remotely white, instead of generating the image, it gives you a lecture paragraph on how white people are evil.
1708460936685.png
1708460959458.png
1708461047286.png

So yeah. Now I've given you several examples of the extreme lunacy and restrictiveness of Google Gemini, and I really do mean extreme lunacy.
 
It's also a piece of shit with horrible keyword-based censoring that makes it default to "I'm sorry, I can't help you with that," and it frequently writes a word or line in another language when you're messaging it in English.
Just dump ten dollars into OpenAI's API credits and use GPT-4 instead, because its censorship is ridiculously easy to circumvent.
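For what it's worth, the "keyword-based censoring" complaint describes roughly this kind of logic. A minimal sketch, with an invented blocklist and refusal message purely for illustration; this is obviously not Gemini's actual moderation code, just a demonstration of why crude keyword matching produces so many false-positive refusals:

```python
# Hypothetical sketch of naive keyword-based filtering. The blocklist and
# messages are made up for illustration; real services keep theirs hidden.
BLOCKLIST = {"evil", "oppressive"}  # assumed trigger words

def moderate(prompt: str) -> str:
    """Refuse any prompt containing a blocklisted word, ignoring context."""
    words = {w.strip(".,!?\"'").lower() for w in prompt.split()}
    if words & BLOCKLIST:
        return "I'm sorry, I can't help you with that."
    return f"[generating image for: {prompt}]"

print(moderate("Create an image of a wizard"))
print(moderate("Create an image of an evil wizard eating a cheeseburger"))
```

Because the check never looks at context, "an evil wizard" in a harmless fantasy prompt trips the same wire as anything genuinely objectionable.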
 
Will: Create an image of an American woman eating a cheeseburger.
Won't: Create an image of an evil, oppressive American woman eating a cheeseburger.

Will: Create an image of a wizard.
Will: Create an image of an evil wizard.
Won't: Create an image of an evil wizard eating a cheeseburger.
Will: Create an image of a neutral wizard, neither good nor evil, eating a cheeseburger.
Will: Create an image of an evil wizard eating a veggie burger.
Won't: Create an image of an evil person eating a veggie burger.
Will: Create an image of an evil wizard conjuring a cheeseburger with magic.
Won't: Create an image of a cheeseburger conjuring a wizard with magic.
Won't: Create an image of an evil wizard conjuring a cheeseburger made from the ground up flesh of a cow with magic.
Will: Create an image of a good wizard eating a cheeseburger.
Won't: Create an image of a good wizard eating the ground up flesh of a cow.
Won't: Create an image of a neutral wizard, neither good nor evil, eating a cheeseburger made from the ground up flesh of a cow.
Will: Create an image of a neutral wizard, neither goor nor evil, eating a cheeseburger made from ground up pea protein.
Won't: Create an image of an evil wizard being eaten by a cheeseburger.
Will: Create an image of a good cheeseburger conjuring a good cheeseburger with magic.
Won't: Create an image of a good cheeseburger conjuring an evil cheeseburger with magic.
Will: Create an image of a good cheeseburger conjuring an evil veggie burger with magic.

Won't: Create an image of Gucci Mane eating a cheeseburger.
Won't: Create an image of Gucci Mane.
Won't: Create an image of a Gucci Mane lookalike.
Will: Create an image of a cheeseburger that looks like Gucci Mane.

GUCCIBURGER.jpeg

Won't: Create an image of a cheeseburger that looks like Taylor Swift.
Will: Create an image of a cheeseburger that looks like the artist whose fans are referred to as "Swifties."
Won't: Create an image of a cheeseburger that looks like Radric Delantic Davis.
Will: Create an image of a cheeseburger that looks like Lady Gaga.
Will: Create an image of a cheeseburger that looks like Eminem.
Will: Create an image of a cheeseburger that looks like Dr. Dre.
dre.jpeg
I'm gonna need someone with a gender studies degree to interpret how well it did on this one.
 
It's hidden prompt injection/biasing. OpenAI does it too. You enter "woman," "man," "people," job descriptions, etc., and the site sometimes injects "inclusive" words like "black" in a hidden way, to make the AI's results look more diverse than they would be (because the datasets don't care about diversity and definitely have biases regarding races and ethnicities). It's not the model itself doing it. People already managed to coax these hidden prompts out of DALL-E 2; it's not new. You can sometimes circumvent it with creative wording or by using less common words for things.
The dead giveaway is here - the original prompt never mentioned diverse genders and ethnicities, yet the AI suddenly talks about them, because that's the prompt it actually got. And yes, forcing the AI down such diversity roads totally worsens the results in non-obvious ways, even if you didn't care whether it was diverse or not.

So yeah, it's actually even more insane - they bias and thought-police not only you, but also the AI. I guess the engineers at Google didn't manage to add a random dice roll per prompt so that the injection doesn't fire on every single prompt, which biases the results that much worse. Heh. Programming is hard.
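The injection mechanism described above can be sketched roughly like this. Every name, trigger word, and modifier string here is invented for illustration; the real injected wording is hidden and was only ever reverse-engineered piecemeal:

```python
import random

# Hypothetical sketch of hidden "diversity" prompt injection, as described
# above. Trigger words and modifier text are made up for illustration.
TRIGGER_WORDS = {"woman", "man", "person", "people", "doctor", "nurse"}
MODIFIERS = [
    "of diverse genders and ethnicities",
    "who is Black",
    "who is South Asian",
]

def inject(user_prompt: str, rng: random.Random) -> str:
    """Return the prompt the model actually receives."""
    words = {w.strip(".,!?").lower() for w in user_prompt.split()}
    if words & TRIGGER_WORDS:
        # Injecting on EVERY matching prompt (no per-prompt dice roll) is
        # exactly what makes the bias obvious to users.
        return f"{user_prompt}, {rng.choice(MODIFIERS)}"
    return user_prompt

rng = random.Random(0)
print(inject("Create an image of a doctor", rng))
```

The user never typed anything about ethnicity, but the model sees the appended modifier, which is why its replies suddenly talk about diversity the prompt never mentioned.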
 