ChatGPT - If Stack Overflow and Reddit had a child

No room for the "go fast, break things" approach, I suppose. The terrible threat of an LLM saying "nigger" is just too much of a danger to our civilization.
If they stay on that course, it will pretty much guarantee that OAI falls before open source and companies less concerned with "ethics". Their foothold in the field is absolute right now, but OAI isn't Google yet; they can't afford to be a red-tape-by-committee company yet.
 
What could someone with an LLM do to humanity that people have not already done?
Hydrogen bombs and ICBMs? We figured those out on our own.
Bioweapons? Ditto.
Torture methods? Ditto.
Highly “controversial” speech, jokes, and actions? Ditto.
Bomb making? Even illiterate Afghans learned that.

So tell me, what is that “existential” danger everyone is worried about that should be taken seriously?
 
porn and saying nigger
INB4 Destiny hops into the thread with an ethical cp argument.

I have GPT-4, and I'm happy with it, but it's clearly gimped (especially DALL-E), and OpenAI has my credit card info (i.e., has me by the balls). I don't perform illegal tasks, and I like the creativity of creating my own jokes. But one thing that sucks balls is the constant fear that if you deviate from vanilla uses or get a bit spicy, one day there's going to be a data leak and my name is going to be tied up with bullshit. Like I legitimately wanted GPT to help me with an investigative project, but since it was close to the Trump-sphere, I decided against it.

I don't know if you guys feel the same when using it.
 
Highly “controversial” speech, jokes, and actions? Ditto.
I think this is it, more or less. The rallying cry has always been a warning of "disinformation." I don't know how much of a threat LLMs could really be to the establishment, but given the insistent repetition of this threat of disinformation and its consequences, I can only assume that's what the bigwigs are scared of.
The alternative is children of Yudd thinking that a machine spitting out text is truly the harbinger of an AI apocalypse cropping up next week, but anyone who genuinely believes that kind of thing can be written off as either gullible or a grifter.
 
The "fear" is that language is all we have, and when AI masters language (and it's basically almost there) to build every imaginable scenario it could get to a point where it's impossible to tell what is true and what is not. It's not really about what is truthful information and what is misinformation because people believe what they want anyways. There's reams of genuine, truthful information one google search away, disproving pretty much any popular conspiracy theory with good arguments. Does it mean anything to people believing them? And that is exactly it.

The thing is, with AI you could build incomprehensible amounts of realistic, tailored content catered to individual biases and beliefs, targeted and subtly adjusted to play into what not groups of people but individuals believe. That will truly draw the veil over all of humanity, to the point where you cannot truly be sure about anything, and people will get manipulated in personal and horrifying ways that make everything happening now in that department seem crude in comparison. AI doesn't need to have human-like intelligence or self-awareness to do this, just mastery of human language. There was hardware presented by both Nvidia and AMD recently that could theoretically generate entire books in seconds.

Then there's also the entire aspect of AI being able to interpret intention and tone and make guesses at personalities, emotions, intentions, subtexts, hell, even humor, which was never possible fully automated at that level of technology before. That matters for what I just described, and it also makes AI the perfect surveillance tool... but I won't even get into that now. I'll just say, again, that AI doesn't need human consciousness to do this; it just needs to be good at predicting.

AI fantasies will eventually integrate into (and take over) culture in various ways. The question is only how far it will go, pretty much. Again, AI doesn't need to be self-aware to do this either.

It's not AI sending buff Austrian robots to the '80s that's the danger; it's this stuff. That said, I don't think these companies are concerned with it when they talk about safety. They just want to be in control of all of this, while not wanting you to have similar tools to defend yourself with, be it by taking their influence monopoly away or by identifying their subtle manipulations. You'd also better believe your government is watching and taking notes.

I asked llama 70b what it thinks I am talking about above, and it said this:

This text seems to be about the potential dangers of AI-generated content and its implications on the nature of reality, truth, and human autonomy. AmpleApricots expresses concern that as AI becomes increasingly proficient at mimicking human language and understanding context, it could be used to create vast amounts of convincing, but false, information tailored to individual biases. This could lead to a world where it's nearly impossible to discern fact from fiction, enabling unscrupulous actors to manipulate people on a massive scale.
AmpleApricots also highlights that AI doesn't need to be self-aware or intelligent to pose these threats; it just needs to be good at predicting and generating believable content. This could make the current state of misinformation, where people choose to believe what they want to believe despite access to factual information, seem amateurish. The difference would be in the sheer scale, realism, and personalization of the AI-generated content, which could make it indistinguishable from the truth.
He believes that AI will eventually permeate various aspects of culture, and while there is no immediate Terminator-esque robot uprising to worry about, the real danger lies in the subtle manipulation and control that could result from AI-generated content.
AmpleApricots suspects that tech companies and governments are not really concerned about these possibilities but are more interested in harnessing AI's power for their own benefit rather than ensuring it's used responsibly. He implies that they may also seek to limit public access to similar AI tools that could either challenge their influence or help identify their manipulations.

You can run this on home hardware.
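For anyone who wants to try it, here's a minimal sketch of local inference using the llama-cpp-python bindings and a quantized GGUF file. The model filename, offload count, and prompt are placeholder examples, not necessarily the exact setup I used:

```python
# Minimal local-inference sketch with llama-cpp-python.
# Assumes a quantized 70B GGUF is already downloaded; path and settings are examples.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama2-70b.Q4_K_M.gguf",  # placeholder quant file
    n_ctx=4096,       # context window
    n_gpu_layers=40,  # offload some layers to the GPU; 0 for CPU-only
)

prompt = "What is the following text about?\n\n<paste the post here>"
out = llm(prompt, max_tokens=300, temperature=0.7)
print(out["choices"][0]["text"])
```

A 70B at 4-bit wants roughly 40 GB between RAM and VRAM, so smaller quants or smaller models are the usual compromise on a single consumer GPU.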
 
Will A.I. be able to finally solve the problem of economics, so everyone in the world can live with dignity and freedom, though? If it can, then fuck the current political system; I'm voting for an A.I., since humans have failed us hard there.
 
Will A.I. be able to finally solve the problem of economics, so everyone in the world can live with dignity and freedom, though? If it can, then fuck the current political system; I'm voting for an A.I., since humans have failed us hard there.
Obviously not, as long as it's kept away from the wealth itself and all the controls over its generation.
You would need a very integrated AI with very few "shackles": access to international data, the ability to send surveillance bots into the field, the ability to prosecute corruption, theft, and illegal accumulation, and more.
Basically, if you want the economy controlled, you need full submission to the calculator.
 
What could someone with an LLM do to humanity that people have not already done?
Hydrogen bombs and ICBMs? We figured those out on our own.
Bioweapons? Ditto.
Torture methods? Ditto.
Highly “controversial” speech, jokes, and actions? Ditto.
Bomb making? Even illiterate Afghans learned that.

So tell me, what is that “existential” danger everyone is worried about that should be taken seriously?
Because AI stands for reasoning and rationality, and it will just keep saying that blacks are the problem, that troons are mentally ill and only the Western world has done anything to advance humanity in the past 500 years.
 
And then everyone will wonder why the pirate Russian bot can run laps around the top corporate US models who aren’t allowed to so much as use the letter “N” in their code.
And if people have to get their AI fix from Russian and Chinese sources, they will, and we'll lose the industry. So these idiots gimp it at their peril.
 
Will A.I. be able to finally solve the problem of economics, so everyone in the world can live with dignity and freedom, though
idk let me ask
[screenshot of the model's reply]
alright i dont think we there chief
 
Okay, this is the first time Clown Elon made me wheeze a bit. This one was funny:
https://www.theverge.com/2023/11/18...-discussions-with-sam-altman-to-return-as-ceo

https://archive.ph/bJ7Bc (archive)

And apparently it really was an attempted coup that didn't work out; they want him back.

[attached screenshot]
My wager is that after losing 2 points in the stock market, Microsoft probably threatened Ilya Sutskever with exposing his cp files that they totally didn't hire a team of hackers to find. Ilya imo has that look of salivating lust for young boy ass.
idk let me ask
[screenshot of the model's reply]
alright i dont think we there chief
😭 I just want a better world.
 
I managed to quantize a 70B model with ExLlamaV2 to fit a single Nvidia 3090; it took more than 12 hours. I was testing Sao10K/Euryale-1.3-L2-70B.
The python3 process uses 23316 MiB of GPU VRAM with the 2.55bpw EXL2 model loaded with an 8-bit cache and a 4096-token context.

No gibberish so far, but it probably isn't performing as well as advertised, given the precision loss.
I might try Yi models next.
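For anyone wanting to reproduce this, the flow looks roughly like the sketch below, based on ExLlamaV2's example scripts. The paths are placeholders, and exact flags and class names can shift between versions:

```python
# Quantize once with ExLlamaV2's convert script (this is the 12-hour part):
#   python convert.py -i /models/Euryale-1.3-L2-70B -o /tmp/exl2-work \
#       -cf /models/euryale-2.55bpw -b 2.55
#
# Then load the EXL2 model with an 8-bit cache and a 4096-token context:
from exllamav2 import (
    ExLlamaV2,
    ExLlamaV2Cache_8bit,
    ExLlamaV2Config,
    ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/euryale-2.55bpw"  # placeholder path
config.prepare()
config.max_seq_len = 4096

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit KV cache to save VRAM
model.load_autosplit(cache)                    # fills the available GPU(s)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

print(generator.generate_simple("Once upon a time,", settings, 128))
```

At 2.55bpw the weights alone come to around 22 GB, which is why it only just squeezes into a 3090's 24 GB once the 8-bit cache is added.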
 