Feedback: Thou shalt not make a machine in the image of man's mind - New forum rule


mindlessobserver (True & Honest Fan, joined Jul 18, 2017)
A critical line was crossed today. A user of the forum didn't present their own thoughts or arguments. They prompted an AI with a query and then posted that machine's response verbatim.

Post in thread 'US Politics General 2' https://kiwifarms.net/threads/us-politics-general-2.210076/post-20849293

Beyond being pure autism, for which the user is currently being rightly cyberbullied, this is, I fear, the opening salvo of the AI scourge on the forum. We are already beset with AI-generated images and videos. Now will come the AI-generated posts and thread content.

AI-generated content needs to be restricted at a minimum. If it cannot be outright banned, it needs to be clearly labeled, and users who utilize these tools without disclosure need to face punitive action from moderators if they are found out. There is an insidious danger inherent to a technology that allows someone to subcontract their own creativity and reason to an algorithm.

But the true danger is not what it does to the person generating the content. It is in how the people interacting with it may be deceived into thinking it is organic.
 
It's becoming a problem in a number of places.

I'll talk about my issue with it at work. I put the effort in, and have both superiors and subordinates. Those above me are like hawks. You have to scrutinize their every message, pick through the open meaning and what they may be hinting at. I have to send them things back that are detailed, concise, to the point and in direct consideration of the concerns they have or issues they might have missed. So a lot of work goes into making sure what I say in communication is efficient, but also tailored to respond in a way that is specific to the person and the queries they have. I am not complaining about that; that is the way it has been, and it is something that covers everybody's ass. My issue is when people below me aren't doing the same.

What is pissing me off is when I have to delegate down to get concerns above me addressed. Like everyone, I can't be everywhere at once and I don't know everything I need to. So I'll send out an email with specific requests to people, relevant to a problem or situation. Within five minutes I get a 2,000-word email back, where the paragraphs follow the exact same formula and language. Numbered points that don't actually answer my question but affirm it, with exactly three assurances that they share my concern, and then a suggestion that I look at a generic Google-linked article. The pattern repeats like this 3 or 4 times over 2,000 words, and the language is made to sound like it's answering a question without ever addressing the specifics of what I was asking about or had raised a concern over. On top of this, there isn't a single spelling or grammatical error across those 2,000 words, and these are people working in the mining industry. All of this 5 minutes after I sent the email, on issues that should take two days to a week to get an answer back to me.

It legitimately makes me want to put my fist through something (and I am usually calm), because it means that on top of now having to press the person and more closely supervise their stupid ass, I know this cunt was disrespectful, lazy and dismissive of not just me, but of the work. These are issues of safety that, if left unaddressed, can potentially cost lives, cause injuries and land me in court for negligence. ChatGPT hasn't made my life easier, it has made it more stressful, and I hate with a passion when I realize someone is using it to communicate with me.

Anyone caught using it on here shouldn't just get a mocking; they should get big red words under their avatar which say "This User is a Fucking Retard". If you're too lazy to put in the basic effort to have your own thoughts and speak/type your own words, you're a retard. This isn't fart-huffing intellectualism either (like we got from the Skeptics); someone who has typed the response "U R A FAggiT" has put more effort, thought and quality into discussion on this forum than anyone who copy-pastes a response from a robot that was most likely programmed by a team of Silicon Valley toddlers that included one or more troons. Copy-pasting from a Troonbot should earn a person a badge of shame.
 
This forum post expresses strong concerns about the use of AI-generated content in online discussions. The author reports that a forum user posted a response generated entirely by an AI without adding their own thoughts or arguments. This act, according to the author, marks a significant shift in online discourse—one they see as potentially harmful.

The author frames this as the beginning of a broader issue, where AI-generated content could become widespread on the forum. They reference the already common presence of AI-generated images and videos, warning that text-based contributions will likely follow, potentially altering the nature of discussion.

To counteract this, they advocate for at least some form of restriction on AI-generated posts. Ideally, they suggest an outright ban, but at a minimum, AI-assisted content should be clearly labeled. They also propose punitive measures for users who fail to disclose AI-generated contributions.

The post articulates two primary concerns. The first is about personal intellectual responsibility—using AI to generate arguments effectively means outsourcing one’s reasoning to an algorithm. The second, and more pressing issue in their view, is the deceptive nature of AI-generated content. Readers might engage with such posts believing them to be genuine, human-created arguments when, in fact, they are machine-generated. This deception, they argue, poses a significant danger to the integrity of discourse.

Overall, the post is a reaction against what the author perceives as a growing "AI scourge," emphasizing the need for human-authored, organic engagement in discussions.
 
ChatGPT hasn't made my life easier, it has made it more stressful, and I hate with a passion when I realize someone is using it to communicate with me
Heard a very similar story from a fellow employee. Her higher-up kept putting the team's instructions through ChatGPT, so instead of receiving concise bullet points, they had to read through paragraphs of info from a bot that didn't know what it was talking about. She said it got so bad that they would have to go to his office and ask in person instead because the emails were useless. This wasted both his and their time. If you can't write well, just use bullet points. Even if you can write well, most people prefer a blunt, fluff-free explanation over an essay.
 
I remember there was a user who would post summaries of Ian and Anisa's unwatchable podcast, and it was so clearly AI, but they refused to label it as such and wouldn't even admit it unless you pressed them on it.

"Yeah, robots love Anisa!"

It irked me a lot.
 
What would an AI trained entirely on Kiwi Farms posts be like?

russell.jpg
 
It's becoming a problem in a number of places.
A colleague at my wife's work left. One of the others put out this really nice farewell letter. Later, someone else was leaving and she was struggling to know what to write. I jokingly say to ask an AI to write something. She does, job title and everything. It spits out the exact same farewell letter the last person wrote, 1:1. I know a guy at work who had ChatGPT write his wife's Valentine's Day card message. I bet people are having AI write their wedding vows now. The troubling thing is how little it bothers them.

I think if all someone is posting is undisclosed AI content, then it's no different than a bot posting here, and something should be done. Even if it is disclosed, it's so low-effort and meaningless. There are obviously exceptions, like AI content that's obviously AI and funny, like songs or something.
 
give it another decade and AI chatbots will be better posters than actual humans

then we can finally replace all the Gold Redditors and Blue Checkmarks with digital homunculi that aren't completely insufferable pieces of shit
 
My question is: what is even the point of sharing a 2,000-word essay you didn't write? The information in it is useless and nobody even reads it anyway. It's easy to tell an AI-generated paragraph at a glance, and I just stop reading instantly and move on. So really, all ChatGPT has done is increase the volume of worthless writing distributed throughout the Internet and into real life. What was the point? Businesses are now relying on people who don't write or read anything? This shit can't last forever.
 
I have to send them things back that are detailed, concise, to the point and in direct consideration of the concerns
Her higher-up kept putting the team's instructions through ChatGPT, so instead of receiving concise bullet points, they had to read through paragraphs of info
This is wild to me, because as a mid-level employee of an email-heavy firm, I spent a lot of time looking at paragraphs I'd just typed, nuking whatever extra I could, and turning sentences into bullet points/fragments.

It seemed to be appreciated by senior management (who didn't have time for everyone's eleventeen-paragraph explanations and just wanted distilled info), by peers (who didn't care about your stuff as much as you do), and by junior employees (who just wanted to know what they were supposed to do).

I never once had someone say "I wish you'd sent me two more paragraphs of added color commentary".
 