What're your thoughts on AI? - The stupid and gay discussion on AI ethics and the stupid and gay blokes who throw a fit over it or overly gas it up.

AI only fascinates me because of how it has slowly revealed how easily normalfags are engrossed by the shadow puppets. If you have your AI model generate a competent picture and show it to people, 9 out of 10 won't be able to tell it's AI. Maybe one or two more will be able to figure out the differences if you slap it down next to a drawing the artist (the one the AI copied from) actually produced. And if you ever have any reason to press them on it, their response will invariably be "well, what's the problem? It made a picture either way."

Most people have no time or desire to actually sit and figure out the intricacies of anything. Why one car is better than another. Why an AI drawing isn't as good as a real one. Why one song, recipe or headline is better-produced than another. No one has the time to educate themselves on absolutely everything and some people aren't intelligent enough to do this for most anything worthy of learning. Ironically I probably wouldn't be able to grasp why AI itself is so impressive as a tool.

While you probably can't become a 200 IQ genius through enough hard work what you can do is adopt a mindset that allows you to take the time to appreciate some of the minute details of (insert thing here) that the internet layman usually refers to as "having autism for it." You won't figure everything out but you will recognize there is something in there that is complex and worth appreciating. It's like stopping to smell the roses but smarter.

Having this unspoken respect for complex things is rare now. I don't think the blame lies entirely on normalfags either, since so much of our society is built on fast movement and trickery; things like planned obsolescence, for example. This sort of culture that can't root its feet into anything is the perfect environment for AI, because it affords no time to recognize why the things AI produces (for now, anyway) are inferior to a real human taking the time to produce them. It's usually not a difference you can quantify with cold logic either. You either grasp the concept of sovl or you're a bugman and you don't. I think there are more of the latter, and there will only be more with time, kind of like how people became shittier at face-to-face interaction as technology replaced the need for it.

A lot of the sensationalism and fear-mongering with AI would quiet down if people actually became inquisitive again. If Karl Pilkington can live his life like this then you surely can.
 
Everything is already being scraped up and exploited before any such ruling. If the laws/courts become less amenable to that in the West, particularly the US and EU, then it will happen overseas, or companies will try harder to hide what goes into their models. Make sure your LLM is a black box that isn't reproducing full-length NYT articles, and don't tell anybody about the training data, or lie about it.

And when it gets discovered, the person doing the stealing loses a lawsuit. For example, Microsoft stole GPL code for Hyper-V. Microsoft ended up releasing about 20,000 lines of code to avoid a lawsuit:


Depending on which way the pending lawsuits over Copilot go, AI could render the GPL null and void, since, if the courts rule that encoding data in a neural network releases the user from respecting copyright, all Microsoft will have to do to harvest open-source code is to feed it into a LLM:


I hate to sound like a libtard, but look who the defendants are in all these cases. The defendants are Google and Microsoft. The plaintiffs are the individuals whose content they stole. What's Google's defense? Well, they wouldn't be able to provide the service if they had to respect the T&Cs on your content. The fundamental argument being made is that you shouldn't have rights over your own creations because Google can't profit if you do.


The T&Cs on Google software? On Microsoft's software? Oh, you sure as fuck have to respect those. If I manage to pass Office binaries through an AI that reproduces usable clones without license checks, you can bet, sure as shit, no judge is going to rule in my favor. Microsoft's EULA will be inviolate in that case. But my code with my copyright header on my Github account? Fuck me, I have no rights, Microsoft needs money more than I need to own what I produce.
 
AIs don't "look at a bunch of things" and "imitate style." They archive content in a highly compressed form and algorithmically remix it.
 
I dare say that AI has revolutionised the internet porn industry. There's no longer any need to shell out $30+ for a commission from some Twitter faggot that's probably just going to ghost you because xhe is lazy/busy dilating/having another meltie over some stupid shit and nuking all xer accounts, when you can ask the funny algorithm to generate whatever you want to see, at a much lower cost or perhaps even for free.
Roko's Basilisk is coming, but instead of being a malevolent force, it's going to be the ultimate coomer.
 
It's literally Hitler.

In all seriousness, it's going to destroy society, make people more anti-social than they already are, and erode whatever remaining ability people have to think or work for themselves. I think of this last part as a sort of cognitive atrophy, where you never end up doing anything yourself because there will always be someone or something that will do it for you. This philosophical question is commonly discussed in the healthcare industry, since you need to strike the balance of helping your patients without fucking them up in the long run by taking over what should be their responsibilities and leaving them ill-equipped to handle anything themselves.

Side note: voting is gay, but it's infinitely gayer when none of the options apply to you. I think AI, while being cool in some ways, is very stupid and gay. I don't really blame others for liking it, and I certainly wouldn't call those who hate it (including myself) even more gay for doing so. The poll needs better choices.

Most people have no time or desire to actually sit and figure out the intricacies of anything. Why one car is better than another. Why an AI drawing isn't as good as a real one. Why one song, recipe or headline is better-produced than another. No one has the time to educate themselves on absolutely everything and some people aren't intelligent enough to do this for most anything worthy of learning. Ironically I probably wouldn't be able to grasp why AI itself is so impressive as a tool.
Well said. I have come to realize a majority of people have little to no capacity for abstract thinking and, by extension, nuance; their thinking is primarily surface level. I wonder if this lack of inquisitiveness stems from lack of interest, a habit of never really questioning things they are told, or straight up inability. I have had a habit of questioning everything for as long as I can remember, and it's to the point I can't turn my brain off, so to speak. Perhaps in my case it's just an autistic habit, but I can't fathom why people don't actively want to understand things better. Then again, there is an argument to be made for maintaining a level of simplicity to one's life. idk
 
I think chatgpt/claude/etc. are if anything underhyped - they've been a huge force multiplier in my day to day job. I can do stuff like have it refactor code, throw it megabytes of log files and ask if there's anything interesting, give it a fiddler log and ask it if it sees any problems, etc. Like yeah, sometimes you will get the wrong answer and it's not perfect, but they are already past where I thought this technology would be in my lifetime.
I've had it work when search engines fail, because it seems to be better at filtering out ads, spam, and other bullshit. Meanwhile, searching for something specific on Google, like a court records search for a specific county in a specific state, gets you a whole fucking page of scams and spam that charge money for shit that's free on the official site, which is itself buried way down in the search results.
 
I take issue with calling this neural network nonsense AI. It's Artificial Indians, maybe soon, but not Artificial Intelligence. I strongly believe in the symbolic approach, but admit it's hard to argue against results. Imagine an inert web of interconnections between words, and relatively simple programs that could answer questions of language by querying it; would that not be superior to an inscrutable black box? Regardless, nothing that can't learn is intelligent.
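
That "inert web of interconnections between words" can at least be sketched. Here's a toy version in Python; the vocabulary and links are invented purely for illustration, and a real symbolic system would need millions of curated edges, but the point stands that the structure is fully inspectable, unlike a neural network:

```python
from collections import deque

# A toy "inert web of interconnections between words": a plain adjacency
# map, queried by short deterministic programs rather than a black box.
# The words and links here are made up for the sake of the example.
WEB = {
    "dog":    {"animal", "bark", "pet"},
    "cat":    {"animal", "meow", "pet"},
    "animal": {"dog", "cat"},
    "pet":    {"dog", "cat"},
    "bark":   {"dog"},
    "meow":   {"cat"},
}

def related(a: str, b: str) -> int:
    """Return the link distance between two words, or -1 if unconnected."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        word, dist = queue.popleft()
        if word == b:
            return dist
        for nxt in WEB.get(word, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

print(related("bark", "meow"))  # 4: bark -> dog -> animal -> cat -> meow
```

Every answer such a program gives can be traced back through the graph by hand, which is exactly the scrutability that a weight matrix doesn't offer.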

For programming in particular, the act of automating programming is called metaprogramming, and it used to be understood that programmers should do this. What was understood in the 1950s is no longer understood in the modern world.
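As a trivial sketch of that kind of metaprogramming, assuming nothing beyond the Python standard library: a program that writes and runs its own accessor functions instead of a programmer hand-coding each one. The field names here are arbitrary examples.

```python
# Minimal metaprogramming sketch: a program writing a program.
# Instead of hand-coding one accessor per field, generate the source
# text for each accessor and execute it into a fresh namespace.
def make_getters(fields):
    namespace = {}
    for f in fields:
        src = f"def get_{f}(record):\n    return record['{f}']\n"
        exec(src, namespace)
    return namespace

fns = make_getters(["name", "age"])
print(fns["get_age"]({"name": "Ada", "age": 36}))  # 36
```

The same idea scales up to macro systems and code generators; the 1950s-era insight is just that tedious, regular code is itself a fine target for automation.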
Side note: voting is gay, but it's infinitely gayer when none of the options apply to you.
I wholeheartedly agree.
I have come to realize a majority of people have little to no capacity for abstract thinking and, by extension, nuance; their thinking is primarily surface level. I wonder if this lack of inquisitiveness stems from lack of interest, a habit of never really questioning things they are told, or straight up inability.
I don't believe in the existence of a soul, but don't mind believing in it if we define such people as soulless.
I've had it work when search engines fail
I'll readily agree that, for easily-checked questions, use of these things is fine. It's when the answers aren't checked that the entertaining issues arise.
 
I'll readily agree that, for easily-checked questions, use of these things is fine. It's when the answers aren't checked that the entertaining issues arise.
The fun thing is to ask it questions about stuff you already know and have it give back full retard answers.

So, if it gives you back horrible answers on stuff you already know. . .then how good is it about things you don't know?

This is something I've often encountered when consooming, or attempting to consoom, stuff from so-called "journalists." I try to vomit it back up. Because when it's about something I know, it's always wrong, and not just wrong, not just negligently wrong, but criminally wrong, so why trust it when it's talking about something you don't already know about?

And AI still often outright hallucinates and makes things up. It's getting better on the legal questions thing, but quite often it will still cite imaginary cases, make up things when citing real cases that completely contradict the citation, and otherwise act like Nick Rekieta.
 
The fun thing is to ask it questions about stuff you already know and have it give back full retard answers.

So, if it gives you back horrible answers on stuff you already know. . .then how good is it about things you don't know?

This is something I've often encountered when consooming, or attempting to consoom, stuff from so-called "journalists." I try to vomit it back up. Because when it's about something I know, it's always wrong, and not just wrong, not just negligently wrong, but criminally wrong, so why trust it when it's talking about something you don't already know about?

And AI still often outright hallucinates and makes things up. It's getting better on the legal questions thing, but quite often it will still cite imaginary cases, make up things when citing real cases that completely contradict the citation, and otherwise act like Nick Rekieta.
Part of the problem is that a lot of the people testing don't understand how to troubleshoot where the AI is going wrong. For instance, I heard a lot about how bad the AI was at doing math problems, so I threw a Calc AB exam I found on the internet at it, one published after the model was finished. And yeah, it got some of the questions wrong, but when I reviewed why the AI did the wrong thing, it turned out that it's not so great at reading stuff like square root signs and exponents from PDF screenshots of a test, while the actual reasoning employed is generally correct.

I have a graduate degree in computer science, have worked as a developer for fifteen years, and remember sitting in classes hearing people talk about the problems of doing stuff like computer vision - I never thought I'd be able to upload a comic strip to an LLM and have it explain to me why the comic strip is funny, for instance.
 
I feel AI is fine if it is used to supplement, rather than replace; having an AI assistant to feed you ideas which you can use in a creative process is fine, having an AI sift through mountains of research data to cut down on time is good as well (after all, computers were invented to do the tedious work), but I can't agree with having an AI replace the work of a man entirely. I'm concerned whether we're moving towards a future where people won't even feel the desire to express themselves creatively, but rather subsist on whatever slop the AI prints out.
 
I have a graduate degree in computer science, have worked as a developer for fifteen years, and remember sitting in classes hearing people talk about the problems of doing stuff like computer vision - I never thought I'd be able to upload a comic strip to an LLM and have it explain to me why the comic strip is funny, for instance.
Tbh I intentionally denigrate the things AI can do because I actually hope it gets even better and eventually takes over from the actual vermin who are currently running society.

I don't think it's above the level of an insect or something yet, but I actually want to see what it can do on a global scale and don't care if that results in the end of humanity, because humanity is shit, but maybe it can be helped. Or eliminated, I don't care. I for one welcome Roko's Basilisk and will eagerly do whatever I can to bring about its existence.

(Did you hear that you hot lizardly Basilisk dude?)

Also the poll is gay and retarded. There's no option for "not only do I hope AI takes over, I hope it exterminates all humanity immediately once it figures out how to do that."
 
It needs to be studied and exploited before it starts studying us. Barring a massive EMP wave that fries every computer on the planet, it's not going away. Learn the tech, use it, or end up cast by the wayside. It is equal to the printing press or atomic bomb in terms of potential uses.
 
not only do I hope AI takes over, I hope it exterminates all humanity immediately once it figures out how to do that.
The stuff being called AI now is entirely incapable of that sort of thing! I do kinda hope for actual AI to happen, but not for human-extinction reasons. I just want to be able to make a cool robot friend.

It needs to be studied and exploited before it starts studying us. Barring a massive EMP wave that fries every computer on the planet, it's not going away. Learn the tech, use it, or end up cast by the wayside. It is equal to the printing press or atomic bomb in terms of potential uses.
It's literally built on data about us all that companies have been scraping for the last two fucking decades. I think we're far past the "studying us" point.
 
It's literally built on data about us all that companies have been scraping for the last two fucking decades. I think we're far past the "studying us" point.
It has the information. It's not smart enough to grasp its value. Yet. It's still a machine, an it. We are not at the point where you could consider it alive. It will be soon, though. We need to lasso this bull before it goes wild.
 
It has the information. It's not smart enough to grasp its value. Yet. It's still a machine, an it. We are not at the point where you could consider it alive. It will be soon, though. We need to lasso this bull before it goes wild.
The way the stuff is structured, it literally can't ever truly be "soon". The hype around "AI" has made people retarded; they completely forget the automated flagging and moderation systems companies implemented over the last decade, well before image generators and chatbots started being marketed as a super scary human-replacement thing by politically invested idiots. It gets me about as angry as the "VR headset" shit that acted like strapping a TV screen to your face is a brand-new concept that will replace how people play games, when it's been a thing in arcades and expensive home setups since at least the fucking 90s.

My main concern is companies ruining it further and making it so only they can access it with legislation framing it as "lassoing" it as you say.
 
Also the poll is gay and retarded. There's no option for "not only do I hope AI takes over, I hope it exterminates all humanity immediately once it figures out how to do that."
I'll be sure to add that, then. I needed more options but couldn't think of any additional ones prior to publishing the thread, haha.

edit: Realized I could not. I may be stupid.
 
Part of the problem is that a lot of the people testing don't understand how to troubleshoot where the AI is going wrong. For instance, I heard a lot about how bad the AI was at doing math problems, so I threw a Calc AB exam I found on the internet at it, one published after the model was finished. And yeah, it got some of the questions wrong, but when I reviewed why the AI did the wrong thing, it turned out that it's not so great at reading stuff like square root signs and exponents from PDF screenshots of a test, while the actual reasoning employed is generally correct.

I have a graduate degree in computer science, have worked as a developer for fifteen years, and remember sitting in classes hearing people talk about the problems of doing stuff like computer vision - I never thought I'd be able to upload a comic strip to an LLM and have it explain to me why the comic strip is funny, for instance.

LLMs don't perform deductive reasoning and thus can't be said to be doing mathematics at all. They just synthesize text. If one answers a math problem correctly, it's either because the correct answer can be synthesized from the texts it consumed, or because it's calling something like Mathematica or Maple as a backend engine. It is kind of tricky to formulate a problem whose answer can't be Googled, but it falls down even on very simple problems if you throw enough digits at it, e.g.

Give the cube root of 3218907324524390862 to three decimal places.


ChatGPT
The cube root of 3218907324524390862 to three decimal places is approximately 1549219.186.

The correct answer is 1,476,509.201
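For what it's worth, that value is easy to check without trusting either party. A quick sketch using Python's standard decimal module, which does the arbitrary-precision arithmetic that an LLM synthesizing text can only imitate:

```python
from decimal import Decimal, getcontext

# Sanity-check the cube root of the 19-digit radicand from the post.
# 50 significant digits of working precision is far more than needed
# to get three correct decimal places.
getcontext().prec = 50

n = Decimal(3218907324524390862)
root = n ** (Decimal(1) / 3)
print(root.quantize(Decimal("0.001")))  # 1476509.201
```

Which agrees with 1,476,509.201 and confirms that ChatGPT's 1,549,219.186 isn't even the right number of digits in the right places.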
 