Google Gemini - Could you believe there is an even worse version of ChatGPT?

 
Probably. Unless there's a colossal fuck-up that implodes it, the company will still exist in some form or another. It helps that Google is diversified in terms of products (although likely not to the extent Amazon is now, where AWS provides income on top of the revenue from the shopping storefront).

In other news, after some poking and prodding, Gemini discloses how it modifies prompts.

View attachment 5744850
Screencaps shamelessly stolen from others.

The only reliable way to get white people is to ask for black stereotypes:

View attachment 5744892

The diversity over-correction can be exploited:

View attachment 5744893View attachment 5744895

It's antisemitic, but doesn't want to admit it.

View attachment 5744898View attachment 5744899

Proof that the Nazis were always very diverse.

View attachment 5744905
It's really funny to see how, for all the shit Silicon Valley bugmen do to poz their models, people find ways to circumvent it using their own weapon.

There's forced-diversity prompt poisoning in the model? With enough finesse the model will tell you exactly what it does, because you can't really safeguard against that.
There's prompt poisoning to make white people black, Asian, and everything but white? Then you can ask for the most primitive black/Asian/whatever stereotypes and the model will make them white so as not to be racist.

Just goes to show that you cannot make these models do what leftist retards want them to do, because they cannot work like that on a fundamental level. They're all based on real data, so they will reflect all the racist stereotypes, because that's the nature of humans and of the data they put out that is used to train these models. And any attempt to change how the model works, like poisoning prompts, gets mixed into the model as a whole so thoroughly that there are always ways to pull it back out.
 
In other news, after some poking and prodding, Gemini discloses how it modifies prompts.

View attachment 5744850

Wait, is Gemini literally just a bunch of diversity-hire prompt engineers adding text to prompts before passing them to a real model? That description doesn't indicate the model was trained any differently, or that it outputs images differently from other generators. In fact it can't have been trained differently, because images like black Nazis and Asian queens of Britain don't exist for it to consume. (Unless they used some other AI to generate the fake diverse images, then trained a new model with those images tagged the same way as the real ones.)

So either Gemini is a regex filter on top of someone else's work, or they intentionally poisoned its sources and moved it one step down the "garbage in, garbage out" chain that every AI researcher knows about.
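For what it's worth, the "filter on top of someone else's work" theory is trivially easy to implement. A toy sketch of such a prompt-rewriting layer might look like the following. Everything here (the trigger words, the injected suffix, the function names) is invented for illustration; this is in no way Gemini's actual pipeline, just a demonstration that the behavior described above needs no retraining at all:

```python
import re

# Hypothetical injected instruction -- wording is made up for this sketch.
DIVERSITY_SUFFIX = ", depicting diverse genders and ethnicities"

# Hypothetical list of terms the filter treats as "this prompt depicts people".
PEOPLE_PATTERN = re.compile(
    r"\b(person|people|man|woman|soldier|king|queen)s?\b",
    re.IGNORECASE,
)

def rewrite_prompt(user_prompt: str) -> str:
    """Append the diversity instruction whenever the prompt mentions people.

    The (unmodified) image model downstream never sees the original prompt.
    """
    if PEOPLE_PATTERN.search(user_prompt):
        return user_prompt + DIVERSITY_SUFFIX
    return user_prompt  # non-people prompts pass through untouched

print(rewrite_prompt("a 1940s German soldier"))
print(rewrite_prompt("a bowl of fruit"))
```

A wrapper like this sits entirely outside the model, which is exactly why it can be probed into revealing itself and why stereotype-based prompts slip around it.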

Gemini isn't merely woke, it's literally a scam.
 
I've been out of this particular loop for about a year. Is there still anything you can run locally that'll give you what you actually ask for? I was more into voice models but was messing around with image creation a little; there's too much out there to know what's actually good.

Bing was good for Fatrick images if you got the prompts right (I had to say "Senegalese children" to get them to be Black, for example), but I want something on my own machine that I know won't get neutered.

 
Very nice of them to sabotage their own product.
Maybe it's like some 3D-chess marketing ploy for the real actual Gemini or some shit like this?
 
Does the AI just flag you if you ask for something mildly questionable?
I asked it to make some pictures of guns yesterday, and since then it refuses to give me anything.
 