Google Gemini - Would you believe there is an even worse version of ChatGPT?

Am I doing sum ting wong?
Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images (archive)

According to that, they turned off image gen because they were getting roasted too hard.

It would be funny to see someone, somewhere get legitimately "red pilled" by this. It's not happening in that comment section, though.

Whenever they re-enable it, it will be under heavy scrutiny and probably face an outcry from the other side for not producing the Nazis of Color.
 
I've been out of this particular loop for about a year - is there still anything you can run locally that'll give you what you actually ask for? I was more into voice models but was messing around with image creation a little; there's too much out there to know what's actually good.

Bing was good for Fatrick images if you got the prompts right (e.g. I had to say "Senegalese children" to get them to be Black), but I want something on my own machine that I know won't get neutered.



The local LLM scene is alive and well; there's a Reddit group, I know, I know, devoted to it. For most purposes they're as good as or better than ChatGPT. Everyone should be using local models preferentially over the online corporate ones, but relatively few do, out of laziness.
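For anyone who hasn't tried it, this is roughly all it takes once you've downloaded a checkpoint - a minimal sketch using the Hugging Face transformers library, assuming you have a decent GPU. The model name here is just an example; swap in whatever checkpoint you prefer:

```python
# Minimal sketch of running a local LLM with Hugging Face transformers.
# The model name is just an example; swap in whatever checkpoint you prefer.
# Needs: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example ~7B model, fits on one decent GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short story about a Bavarian village in 1750."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are on your disk, nobody can retroactively neuter them.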

For images I haven't investigated as thoroughly, but there are probably untarded versions of Stable Diffusion models floating around.
 
I might talk about this more in the 'Conspiracy Theories You Believe In' thread at some point, because I've had my own private opinions on things like this for a while - but this whole situation really gets the ol' noggin joggin'. 🤔
 
When will you people learn that this shit is nothing more than a viral marketing strategy? They get brownie points (no pun intended) and customers from the leftards, and free advertising from the rightoids screaming about it. They WANT you to scream about it so it gets clicks and attracts users. Ignore it. The bot's shit anyways.
 
I've been out of this particular loop for about a year - is there still anything you can run locally that'll give you what you actually ask for? I was more into voice models but was messing around with image creation a little; there's too much out there to know what's actually good.

Bing was good for Fatrick images if you got the prompts right (e.g. I had to say "Senegalese children" to get them to be Black), but I want something on my own machine that I know won't get neutered.

Stable Diffusion's still pretty local, especially if you stay on 1.5. There're also a lot of models and the like on CivitAI, so you can customise it however you want.
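Rough sketch of what that looks like with the diffusers library, assuming a CUDA GPU - the checkpoint name is just the stock 1.5 example, and CivitAI downloads can be loaded the same way via from_single_file:

```python
# Minimal sketch of local Stable Diffusion 1.5 with the diffusers library.
# Needs: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example SD 1.5 checkpoint
    torch_dtype=torch.float16,
    safety_checker=None,                # runs entirely on your own hardware, no remote filter
).to("cuda")

# A CivitAI checkpoint (.safetensors) can be loaded instead with
# StableDiffusionPipeline.from_single_file("path/to/model.safetensors").

image = pipe("portrait photo, natural lighting", num_inference_steps=30).images[0]
image.save("output.png")
```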
 
When will you people learn that this shit is nothing more than a viral marketing strategy? They get brownie points (no pun intended) and customers from the leftards, and free advertising from the rightoids screaming about it. They WANT you to scream about it so it gets clicks and attracts users. Ignore it. The bot's shit anyways.
Yeah, no. This is a fuckup, not a marketing strategy. Google is a dysfunctional company kept alive by its online advertising juggernaut.
 
The AI companies are ridiculous for not taking a hardline "Blame the user, the AI is just a tool" stance. There's nothing these things can generate that you can't make in Photoshop. Nobody blames Adobe for that.
No, no, no. I think you're missing the real motivation here.

The point is not to avoid blame or even legal liability. The point is to prove that you and your company are even more down with the cause than all the other soydevs in Silicon Valley.
 
The point is not to avoid blame or even legal liability. The point is to prove that you and your company are even more down with the cause than all the other soydevs in Silicon Valley.

There are way cheaper, easier, and less embarrassing ways to do that.

This feels more like we're seeing the first wave of products developed under the assumption that guardrails are needed, not just disclaimers. The whole progressive leftist outrage machine isn't new any more; nobody is reacting or catching up to Current Year any more. It is now a baked-in assumption that gets incorporated into product decisions from the start.

This feels like one of the first things said in the first project meeting was "and of course, we need to be pro-active in promoting diversity, not just reactive". It wasn't raised as a problem by a Stunning And Brave advocate halfway through development; the entire team already agreed and there was never any point where they considered challenging it.

From there we have two possibilities: they tested these scenarios and didn't care, deciding they could PR around it. Or the ideologically blindered team didn't even think to try such things; they just dropped the guardrails in, then did thorough testing on puppies in fields with flowers or melting clocks or whatever.
 
We've turned their wokeness against them :story: :story:
Yep. Like I said earlier, taking down image generation had nothing to do with "historical figures".

I'll bet they're right now researching everything they can possibly find on black stereotypes and tweaking the bot to only render white people when words and phrases like "chicken", "grape soda" and the like get entered into a prompt. "Historical figures" is just the excuse.
 