AI Hysteria Thread - You get replaced by AI, and you get replaced by AI, and YOU get replaced by AI!

California legislature passes controversial “kill switch” AI safety bill (archive)
As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deep fakes or misinformation.

In announcing the legislative passage Wednesday, bill sponsor and state senator Scott Wiener cited support from AI industry luminaries such as Geoffrey Hinton and Yoshua Bengio (who both last year also signed a statement warning of a "risk of extinction" from fast-developing AI tech).

Meanwhile, OpenAI and Anthropic have "voluntarily" signed deals with the US government to allow early access to their models, through the US AI Safety Institute. They will also collaborate with the UK AI Safety Institute:
Through the deal, the US AI Safety Institute will "receive access to major new models from each company prior to and following their public release." This will ensure that public safety won't depend exclusively on how the companies "evaluate capabilities and safety risks, as well as methods to mitigate those risks," NIST said, but also on collaborative research with the US government.
 
Trevor Moore called it years ago

Anyways, are we allowed to make fun of people that are fearmongering and having absolute meltdowns over anything that even slightly looks like a robot? If not, can someone make an "AI Fear Mongers" thread?
 
I think AI concerns are a bit overblown. We can't even get them to safely drive a car. While I'm sure some companies will try because they think it will save them money, it's going to eventually blow up in their faces, because we are far, far away from an AI sophisticated enough to handle all the things that come up in life that we don't think about. You would need to program it with a lifetime of experiences and moral codes; otherwise it will come up with solutions to problems that inadvertently harm people or otherwise have disastrous effects. There may be situations where an AI would solve a task by stealing and damaging someone's personal property, or even directly injuring a person. The moment that happens, the company that switched to AI has severe liability issues to deal with. No one wants an AI nanny that, when asked to quiet a crying baby, picks it up by the head and puts it in a sealed box.
 
This animation is from a channel that's relatively neutral towards AI and goes into some detail about its nature and controversies around it. However, it shows what people think AI could do if it reaches singularity and becomes ASI.
AI is capable of fluid intelligence. In simulations it can come up with surprising solutions to problems, like exploiting map geometry to escape the map and win at tag. However, it depends on the model. ChatGPT can analyze patterns and simulate "desired" outputs, but it doesn't think about why those patterns exist. Its responses have a very particular style which is easy to spot once you see them enough times. When you talk to models like it, they eventually run out of context and "forget" what started the conversation in the first place (see the sketch below). Even programs like Stable Diffusion, despite getting better at lighting and anatomy, tend to forget context exists. They use the same subject-in-the-middle composition most of the time, and make silly mistakes like giving a solid hair strand a fine, split shadow, or using a line of clouds in the background that looks copy-pasted. Much like text models, image and video generating models are trained on inputs that they just don't understand, because they don't exist in the reality those inputs are based on. They're designed to specialize in a few tasks at most; they don't have generalized intelligence like even animals do. Companies will try to get around this by making all kinds of AI, like watchdogs that report you to the police if you show "suicidal intent."

Basically, artificial intelligence isn't dangerous because it's particularly smart; it's dangerous because it's talented. It doesn't have to think about the patterns it's learning from, it just has to be good enough at analyzing them to be useful to whoever's keeping it. Maybe that's you, or Rajeesh Ramalamadingdong, but it can also be Meta, Google, or any evil government or NGO you can think of.
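A minimal sketch of why that "forgetting" happens, under the assumption that the prompt is rebuilt every turn from recent history and trimmed to a fixed context budget (the token counter and the budget number here are made up for illustration, not any particular model's real limits):

```python
# Rough sketch: chat history gets trimmed to a fixed token budget each turn,
# so the oldest messages silently fall out of the prompt ("forgotten").

def count_tokens(text: str) -> int:
    # crude stand-in for a real tokenizer: roughly 4 characters per token
    return max(1, len(text) // 4)

def build_prompt(history: list[str], budget: int = 4096) -> list[str]:
    """Keep the most recent messages that fit the budget; older ones are dropped."""
    kept, used = [], 0
    for message in reversed(history):          # newest first
        cost = count_tokens(message)
        if used + cost > budget:
            break                              # everything older than this is gone
        kept.append(message)
        used += cost
    return list(reversed(kept))                # restore chronological order

history = [f"message {i}: " + "blah " * 200 for i in range(100)]
prompt = build_prompt(history)
print(f"kept {len(prompt)} of {len(history)} messages")  # the start of the chat is missing
```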
 
I guess this is a good thread for this.

Billionaire venture capitalists Marc Andreessen and Ben Horowitz have a podcast together. On the July 16 episode, they talked about regulation of AI and other topics. They are supporting Trump because of the Biden administration's tax policies and approach to regulating tech including AI.

https://x.com/tsarnick/status/1813393267679240647
They claim someone in the Biden White House believes that "AI" can be classified/regulated out of existence.

@GradeAGobbleDeeGook I heard about this because the UFO people are talking about it. They believe the "entire areas of physics" vanishing is referencing not merely nuclear weapons, but anti-gravity research that was going strong in the 50s but completely dead (publicly) by around 1960. You could also compare this to Phil Zimmermann's Pretty Good Privacy (PGP) being considered a "munition". The NSA is known to hire lots of mathematicians in order to develop cryptography-related math that may never see the light of day.

What would trigger a "classification"? The "transformer" architecture underpinning the big, scary LLMs like GPT-4 has only been around since 2017. If those are already hitting a plateau, a different approach based on different mathematical models and/or hardware could be the key to advancing further and creating what could be considered sentient "artificial general intelligence". And it's at that point that the government would definitely want to step in and snuff it out of existence, or keep it all for themselves. Hardware regulations are already being considered, US export restrictions already attempt to prevent China from getting powerful AI hardware, and big companies are already ready to roll over because of "muh ethics". A Congressman will play a clip of Terminator 2 on the House floor to warn about the dangers of sentient AI, or the White House will simply issue an executive order and that will be that.
Math is already regulated. There are copyrighted formulas (look at No Man's Sky and the lawsuit threat they got), there are "classified" physics (explosives and certain nuclear physics stuff are only taught to people that are cleared), and lastly, to end it on a BossmanJack-related meme: certain patterns are illegal to play in gambling (counting cards, number guesses in betting and so on...); they are classified, but if you have applied them you will not be able to withdraw the winnings. It's unironically rigged by regulating math.
 
The people I work for seem to firmly believe that my position (sales basically) will never be able to be done by AI because it takes a personal touch to empathize and overcome objections in a natural way or whatever, but I've heard some of these AI voices and they sound a million times better than me so if I can find a way to outsource my own job to AI I'm going all in.
 
Except """AI""" (Machine Learning) isn't the cause of any of these problems. Government and corporations are. Machine Learning models like Stable Diffusion, Transformer-based text models like GPT, and others are incredibly powerful tools that can augment human creativity, productivity, and efficiency.
Except they're not those tools, because the models themselves are already hyper pozzed from their inception onwards by those corps. The only hope is that we reach some plateau with the algorithms where they don't get better by throwing more computing power at them, while the computing power you do need gets inexpensive, so that small-fry groups and maybe even home users can grow their own models from scratch.
But as of now, the actual hardware tools necessary for that are still economically unattainable for the proles playing along at home.
Something like the 90s home computer revolution needs to happen, where, for a short time, people, mostly unwittingly, had faster machines under their home desks than their local university had in its basement to run weather sims on. Sadly, not many took advantage of that, and I lack the understanding of AI algorithms to know if such a singularity moment could happen again in, say, 5 to 15 years.
 
The people I work for seem to firmly believe that my position (sales basically) will never be able to be done by AI because it takes a personal touch to empathize and overcome objections in a natural way or whatever, but I've heard some of these AI voices and they sound a million times better than me so if I can find a way to outsource my own job to AI I'm going all in.
Is an AI going to go "wait a minute, they sound stressed, I should ask them how they're doing. Oh, they just had a baby die, maybe I shouldn't try to sell them baby clothes"?
 
Is an AI going to go "wait a minute, they sound stressed, I should ask them how they're doing. Oh, they just had a baby die, maybe I shouldn't try to sell them baby clothes"?
Do Indians care about that? They probably populated call centers with them to lower the expectations of the public to AI levels already.
 
Huh? Care to simplify?
No Man's Sky had an algorithm for generating planets or something; a mathematician demanded to see the formula they had used since he had some copyright related to space and computer programs. Nothing happened, but it's still pretty retarded that it can happen.

Yeah, but that'd be like if, in chess, people were not allowed to think about their moves.
It is. It's pretty much illegal to "gamble" with logic. Even if it's a small chance, it's effective enough for casinos and betting places to ban it.
 
[image attachment]

AGI incoming boyos? The times are changing...
 
o1 uses a concept called "Chain-of-Thought". I demonstrated it in the ChatGPT thread with another LLM months ago. Basically, you make the LLM think about its response by "drafting" it, "reading" it back and improving on it, acting as a critic of the output it itself wrote. The response will get more "accurate", and it allows LLMs to get things right they wouldn't on the first turn: basically self-correct and open up avenues of "thinking" about the problem it would not consider on a simple pass/direct generation. There is a plethora of ways to improve on this, even using several different LLMs in this chain (Chain of Experts). It's computationally very expensive because you basically process the same prompt several times, but I assume OpenAI optimized the training for CoT thinking, and I'm guessing they probably use a group of pretty small LLMs with differing specializations (not to be confused with Mixture-of-Experts models). That's also what I meant when I said that we haven't even scratched the surface in gains and optimization yet.
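Roughly, the loop looks like this. Just a sketch, not whatever OpenAI actually runs; `llm` here stands in for any completion call you have on hand (a local llama, an API wrapper, whatever):

```python
# Minimal draft -> critique -> revise loop in the spirit of Chain-of-Thought
# self-correction. `llm` is any function that takes a prompt string and
# returns the model's completion; nothing here is a real OpenAI pipeline.
from typing import Callable

def answer_with_self_critique(task: str, llm: Callable[[str], str], rounds: int = 2) -> str:
    draft = llm(f"Task: {task}\nWrite a first attempt at an answer.")
    for _ in range(rounds):
        # Have the model act as a critic of its own draft.
        critique = llm(
            f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
            "Act as a strict reviewer: list concrete mistakes, omissions or "
            "anachronisms in this draft. Do not rewrite it yet."
        )
        # Then have it rewrite the draft using the reviewer notes.
        draft = llm(
            f"Task: {task}\n\nDraft answer:\n{draft}\n\nReviewer notes:\n{critique}\n\n"
            "Rewrite the answer, fixing every issue the reviewer raised."
        )
    return draft  # each round re-processes the whole prompt, hence the extra cost and latency
```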
 
o1 uses a concept called "Chain-of-Thought". I demonstrated it in the ChatGPT thread with another LLM months ago. Basically, you make the LLM think about its response by "drafting" it, "reading" it back and improving on it, acting as a critic of the output it itself wrote. The response will get more "accurate", and it allows LLMs to get things right they wouldn't on the first turn: basically self-correct and open up avenues of "thinking" about the problem it would not consider on a simple pass/direct generation. There is a plethora of ways to improve on this, even using several different LLMs in this chain (Chain of Experts). It's computationally very expensive because you basically process the same prompt several times, but I assume OpenAI optimized the training for CoT thinking, and I'm guessing they probably use a group of pretty small LLMs with differing specializations (not to be confused with Mixture-of-Experts models). That's also what I meant when I said that we haven't even scratched the surface in gains and optimization yet.
Can you demonstrate?
 
Can you demonstrate?
Yes, here. It was more a demonstration of how CoT can be used for improved creativity and more natural language, but in one of the chains you can see self-correcting behavior. The task was to write alarmist BBS messages about the AIDS epidemic, set in the late 80s. In one draft, the LLM (it was llama 3 70b) used Unicode emojis, which did not exist then. By pretending to be a specialist on computer history and critiquing its own draft, it self-corrected by advising itself to remove these emojis in the final version, which it did. It's a good example of a model not reliably being able to zero-shot a thing, but getting it right with CoT.

You'll also see that these CoTs are basically walls of text, which makes them computationally expensive (this is why o1 responses are so slow). They do work, though.
 