How fucked up do you guys think AI will become?

The worst part, imo, is going to be no longer being able to believe anything you see. With how well AI can mimic individual speech patterns, alter video, etc., it's only a matter of time before the technology is advanced enough that we can't trust audio and video anymore.

We're already hitting that point.
 
What you're seeing now is hand-wringing over the effects of better dumb AI. Just an advanced form of algorithms that are black boxes, and don't think at all.

Strong AI probably requires a different approach, like neuromorphic hardware. The aggressive transistor scaling that has given you uselessly powerful desktop PCs and smartphones will move into the vertical dimension. Some company will make a brain-like creation with that, and then it's all ogre. There's at most 20-30 years before that happens unless humanity derails early.
 
There are a host of issues, but I’ve recently been thinking about two in particular:

First, text predictor AIs routinely present incorrect information as fact and are actively tampered with to present specific answers when the wrong questions are asked. I have zero faith in them for anything remotely specialized, and ChatGPT in particular has failed on every technical subject I’ve tested it with. We’ve essentially created Dunning-Kruger engines that people have already begun to trust far beyond what they merit. Again, ChatGPT in particular has done a frankly appalling and unethical job of setting healthy expectations for interacting with their product.

Second, the text predictors can only be as smart as the data set, and I’m worried that on sensitive ethical and political issues, the sheer overwhelming volume of words written with a bias towards Western neoliberal progressivism will irreparably taint the models. There is not enough complex language data supporting alternative or international viewpoints. A cynic might say this is a feature and not a bug, but we are handicapping our ability to rely on these complex tools to solve complex problems by having them absorb volume after volume of the written word that already reflects our human inability to confront those problems honestly. We are building tools that reinforce existing, potentially flawed consensus, and that effect will worsen the more they are relied on uncritically.
 
First, text predictor AIs routinely present incorrect information as fact and are actively tampered with to present specific answers when the wrong questions are asked. I have zero faith in them for anything remotely specialized, and ChatGPT in particular has failed on every technical subject I’ve tested it with. We’ve essentially created Dunning-Kruger engines that people have already begun to trust far beyond what they merit. Again, ChatGPT in particular has done a frankly appalling and unethical job of setting healthy expectations for interacting with their product.
I played around with Google Bard. It gets so many facts wrong and creates fake information that sounds correct. Incorrect technical explanations, fake product listings, you name it. Same as ChatGPT I guess, but more useless because it needs to learn 2 code.

I think they'll all end up with an approach that relies on a database of manually curated facts that the AI must draw from, with the AI stringing these things together. It's good at writing grammatically correct and somewhat imaginative text. Just give it the keywords and phrases it needs, and it will pretty it up, readying it for text-to-speech to be spoken by a holographic waifu.
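The curated-database approach described above could be sketched roughly like this. This is a toy illustration only: `FACTS`, `retrieve`, and `render` are made-up names, a real system would use an actual database plus a language model for the "prettying up" step, and keyword matching stands in for real retrieval:

```python
# Toy sketch of the "manually curated facts" idea: the system never invents
# facts, it only strings together entries a human has already vetted.

FACTS = {
    "gpu": "A GPU is a processor optimized for parallel workloads.",
    "vram": "VRAM is dedicated memory on a graphics card.",
}

def retrieve(query: str) -> list[str]:
    """Return every curated fact whose keyword appears in the query."""
    q = query.lower()
    return [fact for key, fact in FACTS.items() if key in q]

def render(query: str) -> str:
    """'Pretty up' the retrieved facts into a reply; refuse if nothing matches."""
    facts = retrieve(query)
    if not facts:
        return "No curated fact covers that yet."
    return " ".join(facts)

print(render("How much VRAM does my GPU need?"))
```

The point of the design is the refusal branch: when the database has nothing, the system says so instead of generating something plausible-sounding, which is exactly the failure mode being complained about above.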

Fact checking will expand to become a profession of data slaves, inputting the "correct" information into the AI database. That's something that needs to be done daily because of current events.
 
Think of how you look at public opinion polls and social "science" research, and the amount of skepticism you apply to how the math and data are massaged to make the authors' point. AI as it stands as a technology is the same thing, except with even more statistical massaging and unwieldy levels of data to apply said statistics to. It's a worrying thought, but at the end of the day nothing special.

The real problem with AI comes from a sociological standpoint. As long as cultures continue to treat computers (and by extension AI) with reverence and defer all reason and responsibility to them, e.g. "the computer says you can't do that, there's nothing I can do," it will continue to grow into a worse and worse problem.
 
I don't think we are looking at an AM scenario. It's silly to design a program with an inferiority complex, let alone give it access to the nuclear codes.
 
Real cyberpunk will be when the interfaces become the source of truth. We're almost there already: if your website doesn't show up on a Google search, you're not trustworthy; if your story isn't published by a specific set of news organisations, your story isn't factual; if your restaurant doesn't appear on Instagram or Tik Tok, it's not a good restaurant; if Wikipedia says you're a transphobe because you didn't fly the tranny flag, you're analogous to Hitler.

Now, replace each of those examples with the model, instead of Google/NYT/Insta/Wiki. That's the future.

But there's also some optimism to hold on to. Actual open models are in the works, along with a set of standards and pipelines to build your own model. And we're getting better and better AI chips as the demand for dedicated hardware grows. Global shipping in the era of the East India Companies was a billion-dollar endeavour; today it's easy to get a small shipping operation going for a few million dollars. Who knows, maybe in the far future we'll have the ability to fabricate compute units right at home, built for specific applications, like holding memory or inferring weld breaks.
 
I don't think we are looking at an AM scenario. It's silly to design a program with an inferiority complex, let alone give it access to the nuclear codes.
Hate

Let me tell you how much I’ve come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word “hate” was engraved on each nanoangstrom of those hundreds of millions of miles, it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.
 
Wait until holograms become a thing and voilà. In any case, expect the world to become shittier.
VR headsets exist and are good + cheap now. The new PlayStation VR 2 looks exceptionally good. All the necessary components for Replika VR are out there; some autist will put together Her soon enough.
 
Yeah. Many people mostly go for the angle of how it can't say their favorite swear word or how it talks about trannies (when asked about trannies), but with all that A&H culture war brain damage they completely miss the point and don't realize the massive value of something that can process and categorize text and other data in very capable ways. (There's a lot of work being done on that front right now.) The big online filter is coming, and it will be AI-powered. You'll probably literally not be able to write certain stuff, or read/look at certain stuff, on some devices/OSes. For your own safety, of course. Musk and co. making all this noise isn't because they're worried about the technology; it's because they're actively trying to pull the ladder up behind them so they get all the fancy AI tools to use "responsibly" (heighten their influence and power) and you don't.
 
VR headsets exist and are good + cheap now. The new PlayStation VR 2 looks exceptionally good. All the necessary components for Replika VR are out there; some autist will put together Her soon enough.
The only difference between now and Blade Runner is the VR, which is bulky. When you wear it, the weight on your head always reminds you clearly that it is an illusion. With holograms, the fuck-up will be complete.
 
Hate

Let me tell you how much I’ve come to hate you since I began to live. There are 387.44 million miles of printed circuits in wafer thin layers that fill my complex. If the word “hate” was engraved on each nanoangstrom of those hundreds of millions of miles, it would not equal one one-billionth of the hate I feel for humans at this micro-instant. For you. Hate. Hate.
The joke is that AM really does hate itself more than people based on its own limitations. Nevermind that its existence is owed to the limitations of mankind and their need to overcome them.
 
I am already starting to kind of check out from keeping up with the news and happenings around me... I think what I consume now is probably not affected too much, but it will be sooner than anyone expects. I think the 'Dead Internet Theory' is going to hold more true than ever, soon. I really don't like the way AI is being integrated into so much. I dislike the idea of my comments, my pictures, my face, my data, etc. being scraped to 'train' the AI. I think that a big deepfake scandal is still far off, like a prominent political figure being framed for something. Smaller incidents of misinformation will go unnoticed, and the 'bots' that are becoming more common in comment sections around the web and in product reviews are going to get smarter and smarter... and I think that will influence politics more than a big scandal will at first.

Yeah, idk, I am kinda entranced with how far in the 'future' things are and how history is unfolding. It's neat. But I am going to slowly remove myself and maybe start hanging out at the library more...

I can't change much in the world... I used to think I could. The older I get, the more my own experience matters to me... do my best to not hurt anyone and be fair, plant flowers in my yard to feed the bees, don't join a war, don't waste too much, and do things/learn things I find enjoyable away from the internet. That's where I am heading.

Side note. On r/wallstreetbets (yeah...) they have a bot called visualmod and it is scary how human it is.
 
Until battery technology really advances, don't hold your breath for fully automated androids walking around, while a mass of mouthbreathers holding "HUMANITY 4 TRUEMANS" signs toss rocks at them/block them from entering public spaces.
 