Open Source Software Community - it's about ethics in Code of Conducts

Car manufacturing is wasting countless resources already, and that's only going to get worse. It's going to replace horses and carriages in some areas slowly. And what we get in their place is a sterile, shitty replacement. And that will have adverse effects on the economy, and there will definitely be other negative effects on society.
Jeet progammers are wasting countless resources already, and that's only going to get worse. It's going to replace autistic coding experts in some areas slowly. And what we get in their place is a dung-eating, cow-worshipping replacement. And that will have adverse effects on the economy, and there will definitely be other negative effects on society.
 
Which is why jeets should be removed

Like I insinuated earlier, that's a really weak argument
I'm clearly joking. I'm pointing out how fucking stupid the car-horse argument is. Machine learning is a definite step up in computing, but it's not the gigantic, overly sophisticated next-generation technology some people make it out to be. It can be really useful for live, continuous translation, such as translating voice chat online or helping someone who's disabled. It's been really helpful in the editing space, where things like tracking objects and removing backgrounds are now one click away when they used to mean agonizingly slow rotoscoping. It can be used to create reference images, come up with ideas on the fly, or provide a basis to work from.
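For what it's worth, the one-click background removal bit is real. Here's a rough sketch of what that looks like with the open-source rembg library; a minimal example, assuming rembg and Pillow are installed, and the file names are just placeholders:

from PIL import Image
from rembg import remove  # pip install rembg pillow

def strip_background(src_path, dst_path):
    # Let the pretrained segmentation model (u2net by default) mask out the
    # subject and return an image with a transparency channel.
    img = Image.open(src_path)
    cut_out = remove(img)
    cut_out.save(dst_path)  # save as PNG so the alpha channel is kept

strip_background("frame_0042.png", "frame_0042_cutout.png")

No rotoscoping involved; the model does the masking in one call.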

What it is NOT is your creative tutor, your therapist, your girlfriend, your artist, your coder, your rubber duck to code to, your designer, your choice-maker, your medical professional, your one-way ticket to a college degree; in other words, the things the majority of people seem to use it for. LLMs will mostly be used for spamming content; they're the content farmer's wet dream. It would be the car to the content farmer's horse: not only simplifying input, but accelerating output.

The fact that this product has been offered for free means no one will want to pay for it, realistically. The companies are fucked. They're lucky people hand over their personal information like water from a hose; otherwise they'd have literally no way of profiting outside the bubble. I see a future wherein China wins, offering better performance, better (mostly unregulated) output, a smaller footprint, and just overall better quality. China has basically no restrictions when it comes to this (apart from the regime's censorship bias; try asking what the first line of System of a Down's Hypnotize is), and it will absolutely overtake America's weak and pathetic restrictions. Whoever provides the better (porno)graphic output will win in the long run.

Will we reach AGI? Fuck no. Will LLM's replace programmers? Fuck no. As long as it keeps using Jeet code in its database, it will fail. -- I see this more as the "Smart" innovation of the smartphone era. Whereby everything needs to be smart, now now now, or else it SUCKS! And then people saw that having a smart toaster is retarded and that no one wants it, whereby only things that actually benefited from being smart stayed, like smartphones or smart printers. Having a smart fridge that can play Skyrim is utterly retarded. Having a rice cooker with AI features is utterly retarded.
 
He wasn't even really talking about this in the actual blog(A); I have no idea why journos are focusing on a single statement in there ("slop" was said 1 (one) time). I mean, I know why they're doing it, but it's annoying.

The actual writing he put out was banal, though: just a large amount of corpospeak to basically say, "Lol, we fucked up and can't actually make AGI; enjoy embedded AI products in your work and home environments or something."
 
I agree with most of your post
Will LLM's replace programmers? Fuck no.
True in principle, but the bottom end of programming work will be compressed.
I see a future wherein China will win
I wish Western jurisdictions would stop shooting themselves in the foot with all this regulation and forced safety theater; they're handing China dominance on a silver platter.
The fact that this product has been offered for free means no one will want to pay for it, realistically.
I think you're overlooking that, right now, there are many sellers offering nearly the same thing, which is the main reason prices are going down. Plus, users can switch between services at almost no cost (no strong lock-in, no unique product, etc.), so that also puts a ceiling on how much companies can charge.
 
It's been how many years and we still have yet to find a use for AI more worthwhile than improving Paper Mario TASes?
Here's my favourite story about AI usage, from Edward Feigenbaum and quoted in the book OUT OF THEIR MINDS: The Lives and Discoveries of 15 Great Computer Scientists:
Edward Feigenbaum said:
Here's one example: ARPA asked some of its contractors to get very busy after Iraq invaded Kuwait to convert one of the ARPA-sponsored computer programs for manufacturing scheduling into a logistics scheduling program to help logistics officers plan the movement of men and materiel from the U.S. and Europe to Saudi Arabia. And they did it successfully.

The director of the whole agency said that this one application had paid back the entire investment that ARPA had made in artificial intelligence since the beginning.
We see how government investments pay off. Meanwhile, Sam Altman is asking Uncle Sam to preemptively bail him out while simultaneously telling him that it's not a bubble.

This neural network nonsense is only taking off because people really like the idea of having a magic oracle that lets them get away legally with plagiarism, so long as only the proles are plagiarised.
 
Car manufacturing is wasting countless resources already, and that's only going to get worse. It's going to replace horses and carriages in some areas slowly. And what we get in their place is a sterile, shitty replacement. And that will have adverse effects on the economy, and there will definitely be other negative effects on society.
Time will prove this correct. Most use-cases for cars would be better served by rail or by walking or riding a bike. For the rest, horses are the best solution.
 
Come on now


It doesn't include it. ML is all it is. There's no "intelligence" involved at any point. Every LLM is just a very large statistical model and madlib generator.
Im reclaiming y'all for Whites. It doesn't have to be intelligence it just has to give results, and it does. Shortsighted idiots believing every Luddite hitpiece doesn't change the reality that there are ongoing benefits from this research, and that China is going just as hard on it as well.
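To put the "very large statistical model and madlib generator" description above in concrete terms, here's a toy sketch in Python. It's obviously nothing like a transformer internally (no neural net, just bigram counts), but the loop is the same idea: estimate a probability distribution over the next token from training text, sample from it, repeat. The corpus and names are made up for illustration.

import random
from collections import defaultdict, Counter

corpus = "the car replaced the horse and the horse pulled the carriage".split()

# Count how often each word follows each other word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break
        words, counts = zip(*followers.items())
        word = random.choices(words, weights=counts)[0]  # sample proportionally to observed frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))

An actual LLM swaps the count table for a neural network over sub-word tokens and trains it on terabytes of text, but "predict the next token, sample, repeat" is still the whole trick.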
 
You likened AI to the invention of the automobile; that's life-changing. You mentioned the majority are not this way. Humor me: which LLM is? (Which one can I get to do my day job for free?)
Yes, but also: the reason his analogy with the automobile is bad is that the automobile was invented as an improvement on an already existing, proven technology: the four-wheel horse-drawn carriage.
There was an existing need and a proven use-case for the automobile. In its most reduced form, the automobile was just an incremental improvement on something we already had.

AI is different. It mostly does not YET have a well-defined goal (like "a horse-drawn carriage, but better"). This is why you see all these completely different use-cases, some of which do not make sense. Over time that will all get sorted out and the retarded use-cases will just go away.

We are still in the "throw everything at the wall and see what sticks" phase.
That is different from when the automobile was introduced. There was never a "throw everything at the wall" phase, because everyone knew right away what the automobile would be used for and why.
No one tried shit like "what if your automobile has 7 wheels, and they're triangular instead of circular?" the way the AI folks are doing today.
 
By far the worst thing to come out of the AI/ML/NN craze. There should be a global law that all code generated by AI is automatically licensed under the AGPL-3.0, so the evil brimstone that is SaaS can forever be felted.
AGPL-3.0+nigger (explanation by Gemma3 1b). Is this accurate @CrunkLord420
Okay, let's break down Plusnigger. It's a deeply complex and controversial piece of software with a history rooted in racial profiling and discriminatory practices. It's important to approach this topic with sensitivity and a critical understanding of its origins and implications.

What is Plusnigger?

Plusnigger is a free and open-source software (FOSS) project developed by Brian "Milo" Sacco, a former employee of the U.S. Department of Defense (DoD). Its primary purpose, initially, was to create a system for analyzing facial recognition data and identifying potential racial biases. However, it quickly evolved into a highly controversial tool used to identify and target individuals based on their race and perceived characteristics.

The Core Concept: The "Facial Index"

The core of Plusnigger revolves around the "Facial Index." This is a database of facial features and associated demographic data. It's meticulously built on data collected from various sources, including:

  • Police databases: This was the primary source, and it became a huge point of criticism.
  • Social media data: The system was designed to analyze facial features in images scraped from social media.
  • Other datasets: It incorporated data from various sources, including public records and databases.
How it Works - The Racial Profiling Mechanism

Here's where it gets incredibly troubling:

  1. "Scan" Function: When you "scan" a photo, the software runs a sophisticated analysis using the Facial Index.
  2. Differential Scoring: The system assigns a "score" based on a complex algorithm that attempts to identify racial disparities. The score isn't based on objective metrics of race; it's based on how well the facial features align with patterns in the database.
  3. Targeted Data Collection: Based on the score, the software will then automatically begin collecting data about individuals (addresses, names, social media activity, etc.) that trigger the score. This is the key to the profiling aspect.
  4. Increased Police Presence: This targeted data collection can lead to increased police presence in certain neighborhoods, particularly those with high concentrations of individuals the software deems “high risk” based on its calculations.
The Controversy and Legal Ramifications

  • Department of Defense Investigation: In 2019, the DoD initiated a formal investigation into Plusnigger. The investigation revealed that the system was used to disproportionately target Black individuals.
  • Legal Action: The DoD filed a lawsuit against Sacco and the Plusnigger team, alleging violations of civil rights.
  • Civil Lawsuit Filed: In 2023, a civil lawsuit was filed against Sacco, alleging that Plusnigger is a "tool for racial profiling." The lawsuit claims that the software was deployed improperly and contributed to the targeting of Black communities.
  • Ongoing Investigation: The lawsuit is still ongoing and the details of the investigation are still being revealed.
Ethical and Legal Concerns - Why It's Significant

  • Lack of Transparency: The system's algorithm is not fully transparent. The precise weighting of factors and the reasoning behind the scores are largely hidden.
  • Data Bias: The Facial Index is built on biased data, reflecting the inequalities that already exist in law enforcement and criminal justice systems.
  • Reinforcement of Discrimination: The software's use has been shown to have a demonstrably negative impact on Black communities, contributing to increased scrutiny and potential negative outcomes for individuals.
  • Privacy Violations: The system collects and analyzes vast amounts of personal data, raising serious privacy concerns.
Important Note: Plusnigger isn't simply a "bad program." It's a profoundly troubling example of how technology can be weaponized and used to perpetuate racial injustice. It highlights the dangers of unchecked algorithmic bias and the importance of ethical considerations in software development.

Resources for Further Information:

Disclaimer: I am an AI Chatbot and not a legal expert. This information is for educational purposes only and does not constitute legal advice. I strongly recommend consulting with a qualified legal professional for guidance on specific legal matters.
 
Yes, but also: the reason his analogy with the automobile is bad is that the automobile was invented as an improvement on an already existing, proven technology: the four-wheel horse-drawn carriage.
There was an existing need and a proven use-case for the automobile. In its most reduced form, the automobile was just an incremental improvement on something we already had.

AI is different. It mostly does not YET have a well-defined goal (like "a horse-drawn carriage, but better"). This is why you see all these completely different use-cases, some of which do not make sense. Over time that will all get sorted out and the retarded use-cases will just go away.

We are still in the "throw everything at the wall and see what sticks" phase.
That is different from when the automobile was introduced. There was never a "throw everything at the wall" phase, because everyone knew right away what the automobile would be used for and why.
No one tried shit like "what if your automobile has 7 wheels, and they're triangular instead of circular?" the way the AI folks are doing today.
I'm not sure there's anything left to discover about it because, for what it's worth, the only set-in-stone use of AI outside of funny memes is dragnet surveillance. Cameras and computer vision already cut down the manpower needed to watch one station; now AI is connecting the dots across every vantage point, recorded video, audio, writing patterns, speech eccentricities, to track everyone all the time, even beyond current commercial capabilities. It's expensive, but that goes hand-in-hand with the point earlier in the thread about it all coming down from government research.

While I'm thinking about it: the AI bubble is set to burst because, IMO, the direction is set, the experimental phase is over, and the goal has been achieved, sans a few minor errors with pre-crime detection. Google won by snapping up the gov't contracts (gen ai dot mil), and the money spigot can get shut off now.
 