ChatGPT - If Stack Overflow and Reddit had a child

Problem is I don't want to self-host it and expose my machine to the internet, and I'm too lazy to pay $2 for a VPS. Sucks.
I think even a pre-trained model is definitely going to set you back more than $2 (USD?) / mo. in VPS costs, but is it less the cost itself and more that you can't be fucked to get it running? Perhaps you could get some help here...

Anyway, I've been kind of late to the LLM game for anything besides what Google already forces on me (where I've noticed definite flaws, like mixing up multiple titles in a game series when giving advice on them) and trying to get other LLMs to say the nigger word and give me pointers on manufacturing illicit drugs. But since DeepSeek has been in the news lately, I figured I'd try more serious queries and actually see what such models are really good for.

As someone familiar enough with machine learning to have deployed these sorts of tools in Python and R on my own, and who has read a lot about the history of AI from well before the current boom, I have no doubt that we are in the middle of a lot of hype, and that the hype now seems to be ebbing. But when that ebb has finished, I think more of a residue of good non-hype stuff will be left behind than was the case in the '80s with expert systems. What I mean is basically: expert systems are still used today. If you're running a Debian system, for example, you can do sudo apt install clips and have the means to create expert systems at your fingertips, courtesy of NASA's own open-source contribution to the field. Expert systems still quietly inhabit important niches in the software world where their somewhat brittle requirements can be satisfied. When the hype behind LLMs and other forms of deep learning has faded, they will remain more widespread than expert systems ever became, because, despite their own drawbacks, they are more robust: they can learn from data, even if huge amounts of it have to be thrown at them.
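If you've never seen one, a CLIPS rule is just a pattern plus an action. A toy sketch (the rule and fact names here are made up):

```
; fires once for each (user <name>) fact in working memory
(defrule greet
   (user ?name)
   =>
   (printout t "Hello, " ?name crlf))
```

Load that in the clips shell, do (assert (user alice)), then (run), and the inference engine matches the fact against the pattern and prints the greeting. That rigid fact-matching is also exactly where the brittleness comes from: no facts in the right shape, no output.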

So what are LLMs really good for, in my opinion? They're really good at doing more than a regular search engine query can, as long as you know how to vet the results. Not long ago I saw someone suggest that LLMs could replace K-12 human teachers outright. Thinking back on that part of my life, I'd like to believe it in many instances, but it's a terrible idea. Whenever an LLM tells you something, you need to know enough about the subject matter not to treat the response like it was handed down from the top of Mt. Sinai. If you can't at least hypothetically imagine looking at what an LLM says and thinking "this makes no sense at all," then it's not time to use it. Also consider how often an LLM will be cucked or refuse to help you with piracy and so forth; in many such cases you'd be better off just using DuckDuckGo or Yandex and not relying on any sort of LLM. Even then, DeepSeek threw me a bone when I asked it a question that wasn't super leading or optimized to get the desired answer:
[Screenshot: DeepSeek response, "Into the Unknown"]
Item #2 gave the correct answer with very little foreknowledge in the query. And that's good.

Still, I remain completely unwilling to trust LLMs with generating code. The output might be really good well over half the time, maybe more like 80%+, but that type of automation can lull you into complacency, and that has already had some evidently pretty disastrous effects. I remember linking a video, probably not in this thread but maybe in the main programming thread, by the German physicist Sabine Hossenfelder, and she just really nails down why LLMs can't be trusted to write good code, supported by studies on the matter. Here it is again:
[embedded video]
So anyway, in conclusion: LLMs? Still very valuable. The resource requirements they have are still kind of sketchy, though. Hopefully that inspires the construction of more nuclear power plants, which we should have had years or even decades ago regardless. Just remember that AI has gained increasingly human qualities over the years, and one of them is that you can't ever fully trust humans, or the machines that stand in for them.
 
I think even a pre-trained model is definitely going to set you back more than $2 (USD?) / mo. in VPS costs, but is it less the cost itself and more that you can't be fucked to get it running? Perhaps you could get some help here...
You misunderstood what I'm trying to do. I don't need a server to run an LLM natively, but to run a reverse proxy and route API requests to an actual LLM provider that I have keys for.
It doesn't need a lot of resources; anything that can run one Docker instance is enough.
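The whole thing is basically one config file. A minimal sketch of what I mean (assuming an OpenAI-compatible upstream; the key string and port are placeholders):

```
# forward /v1/* to the provider and inject the API key server-side,
# so clients of the proxy never see the real credential
server {
    listen 8080;

    location /v1/ {
        proxy_pass https://api.openai.com/v1/;
        proxy_set_header Host api.openai.com;
        proxy_set_header Authorization "Bearer YOUR_KEY_HERE";
        proxy_ssl_server_name on;  # send SNI on the upstream TLS handshake
    }
}
```

nginx is just the example here; any reverse proxy that can rewrite the Authorization header does the job.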
 
You misunderstood what I'm trying to do. I don't need a server to run an LLM natively, but to run a reverse proxy and route API requests to an actual LLM provider that I have keys for.
It doesn't need a lot of resources; anything that can run one Docker instance is enough.
Well, in that case, I'm pretty good at running extant Dockerfiles. But any resources on writing and testing them?
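From what I've seen, a Dockerfile for a proxy like yours could be as short as this (a hypothetical sketch, assuming an nginx-style config like the one above):

```
# minimal sketch: package a reverse-proxy config into an image
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 8080
```

and smoke-tested locally with docker build -t llm-proxy . followed by docker run -p 8080:8080 llm-proxy and a curl against http://localhost:8080. But I'd still like something more systematic.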
 
Oh dear god, the dreaded Kiwifarms proxy arises! In all seriousness, the idea of a proxy has left a sour taste in my mouth. Some shitters on 4chan couldn't let anyone have a free ride and got a pretty nice and generous public proxy shut down for (((reasons))) by doing something called "spitefagging". After a while some Discord servers cropped up making some people do a goy dance just to write their gooner coom slop, once they found out how to scrape for unprotected API keys.
 
Some shitters on 4chan couldn't let anyone have a free ride and got a pretty nice and generous public proxy shut down for (((reasons))) by doing something called "spitefagging".
This has been happening for years now.
After a while some Discord servers cropped up making some people do a goy dance just to write their gooner coom slop, once they found out how to scrape for unprotected API keys.
It's always the most unhinged schizos gatekeeping access to their circlejerks; there's nothing of value to gain by interacting with them.
And putting scraped keys in a proxy is asking for trouble: most who did so were just attentionwhoring, and they scattered when Microsoft started suing.
 
is it morally wrong to have sexual conversations with an AI?
Don't know about the ethics, but it's really pathetic and even sad how some people have to rely on a glorified Markov chain for a facsimile of love.
human psyche's vulnerability to artificial intimacy
While I don't think we'll ever get replicant-tier sexbots, we might get actual Matrix-level VR thanks to Neuralink and AI. We can already interface with the brain, and AI can generate Doom in real time; go further with that and you have VR that's practically indistinguishable from reality, except you have near-total control. While I can imagine a ton of ways to use it, like walking on Mars based on data from probes, guess what most people will use it for? That's right: porn, and who knows how deranged it might get.

It also raises some nightmare scenarios, like: what if the simulation crashes with you in it and you can't exit? You can't pull the cord out of your head, since you're essentially disconnected from your body while you're plugged in, so what do you do? Slowly die IRL? Maybe you don't even need a crash: some people might die in the simulation because they don't want to disconnect to eat, move a bit to avoid atrophy, or keep up other basic self-care routines. And those who are paralyzed IRL: why would they want to leave a world where they can walk or, hell, even fly, just to go back to the wheelchair? At some point you're gonna have people demanding to be put in pods with life support so they don't have to come back to reality ever again.
Frankly, after DeepSeek, any G20 country can make its own LLM.
I gave DeepSeek and Claude the same ethical dilemma and asked them for the best option:
These AIs are so basic and naïve, they don't even consider that the stowaway is lying and probably a criminal on the run.
Latest sama post
"Our mission is to ensure that AGI (Artificial General Intelligence) benefits all of humanity."

LMAO, sure thing pedoman. Why not make your model FOSS with an MIT license like DeepSeek then? How is a closed, opaque model only a few can afford going to benefit all of humanity? More like it's going to make society more stratified than it ever was.
I've seen unions now recommend using ChatGPT to write job applications.
Link to that?
I have way too many keys to use by myself.
PM me one.
 
Legal complications await if OpenAI tries to shake off control by the nonprofit that owns the rapidly growing tech company
archive

OpenAI, the tech company that created the popular ChatGPT chatbot, is at a crossroads.

It began as a nonprofit dedicated to developing artificial intelligence systems smarter than humans. Since its founding, OpenAI has boasted that it was upholding its nonprofit goal – “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

Now, its tune has changed. OpenAI’s leadership is reportedly taking steps to transform it into a for-profit company. If that happens, the nonprofit would lose control.

We are law professors who specialize in nonprofits. As we explained in an earlier article, all charities must devote their assets to their legal purposes. If OpenAI hoped to have a quickie divorce from its charitable obligations, it is now learning how costly that could be.

Proposing divorce from charitable vows

OpenAI began in 2015 as a scientific research nonprofit. Four years later, its board decided that achieving its lofty goals required more than gifts and grants.

It reorganized to accommodate and attract private investment. As a result, the company known as OpenAI is neither a single nonprofit nor for-profit company; it is a set of interlocking entities, including the for-profit subsidiary that conducts its operations.


As a whole, OpenAI is ultimately required to advance the nonprofit’s purposes. These purposes represent the promises OpenAI made to the public when it was founded.

OpenAI’s nonprofit certificate of incorporation states the purposes: “to provide funding for research, development and distribution of technology related to artificial intelligence,” thus producing technology that “will benefit the public.”

For almost a decade, OpenAI proclaimed its commitment to safe scientific development, making it a higher priority than earning profits. For example, OpenAI warned investors that “the Company may never make a profit,” and that “it would be wise to view an investment … in the spirit of a donation.”

Nevertheless, if profits were made, investors like Microsoft were entitled to receive them – collecting up to 100 times their investment, before the nonprofit parent could take any share of those gains.

Having second thoughts

CEO Sam Altman and his colleagues have apparently had second thoughts about their vows.

According to widespread media reports citing unnamed sources, they want to restructure and remove the nonprofit parent from its controlling perch, turning the rest of the company into a benefit company – a type of for-profit enterprise with some public-interest goals.

Those media reports relay that the US$6.6 billion in recent investments in OpenAI is conditioned on OpenAI converting to a for-profit company within two years.

If this conversion fails, OpenAI must return that money. Investors also want to remove any caps on their investment returns. Altman himself now reportedly wants to own a piece of the company.

Not moving so fast

Changing a nonprofit to a for-profit company, known as a “conversion,” often requires only a board vote and filing forms with state regulators.

Delaware, where OpenAI was established, would regulate at least this filing process. When a nonprofit has significant operations in another state, then that state also has authority to regulate the conversion. In this case, that would be California, where OpenAI is headquartered and holds most of its assets.

The state attorneys general for Delaware and California are reviewing the proposed restructure. Delaware Attorney General Kathy Jennings has asked for more information about how the nonprofit’s rights would be protected if OpenAI carries out its reported restructuring plans.

And California Attorney General Rob Bonta announced that his office is “committed to protecting charitable assets for their intended purpose.”

With OpenAI valued at $157 billion, the nonprofit’s fair share could make it the wealthiest foundation in the United States. The Bill and Melinda Gates Foundation, currently the largest U.S.-based foundation, holds $75.2 billion in assets. Harvard’s endowment, the largest for a U.S. university, has about $53.2 billion in its coffers.

State regulators, which supervise the transformation of nonprofits into for-profits, oversaw many conversions of health insurers and hospitals in the 1990s. For example, the California attorney general oversaw Blue Cross Blue Shield of California’s payments of $3.2 billion to establish two new health care foundations when it converted. Critics subsequently pointed out that investors in such for-profits made even greater gains a short time later.

We believe that if the nonprofit gets its fair share of OpenAI, those health care transactions would pale in comparison to the scale of an OpenAI conversion.

Estimating what’s at stake

To be sure, the nonprofit would not be entitled to receive the full $157 billion in a conversion of OpenAI. So what is the nonprofit entitled to?

First, the nonprofit parent has a right to the value of its share of ownership of the for-profit operations. That value would include the value of the properties owned by the for-profit subsidiary, such as ChatGPT.

The nonprofit should also be compensated for giving up its control over the whole OpenAI enterprise. Typically, investment bankers assess the value of control somewhere between 20% and 40% of the value of the company.

What could be the hardest part of this process would be estimating the value of the right to OpenAI’s future profits. Under the current arrangement, investors first get 100 times their investments in OpenAI before the nonprofit receives any share of the profits.

Microsoft has invested $13 billion in OpenAI so far. But let’s assume a total investment of $10 billion: under the 100-times cap, OpenAI would need to make $1 trillion ($10 billion × 100) before the nonprofit would get its piece of the pie. Since only 10 companies have made over 100 times the amount invested in them in the past decade, this is a high bar.

Protecting the nonprofit

Although state attorneys general have the leading role in protecting the nonprofit, they do not have to do this work alone.

First and foremost, the existing nonprofit’s board members are legally required to protect the nonprofit and its purposes. This puts them roughly on the same team as the state attorneys general.

The board could decide that OpenAI’s charitable purposes are best protected by spinning off the for-profit company. Even so, it must ensure that the nonprofit is adequately compensated.

Other authorities may play a role. For example, if a state attorney general can’t broker an agreement with OpenAI, they might farm out the work. Either a state attorney general or a court could authorize a third party, such as another charity, to bring a claim to protect the nonprofit assets.

In addition, the IRS could step in.

The tax collection and enforcement agency has the responsibility to ensure that assets held by a nonprofit tax-exempt organization remain within the charitable sector. In this case, that would mean ensuring that OpenAI’s nonprofit gets what it’s owed.

As an added wrinkle, billionaire entrepreneur Elon Musk has already weighed in as a former board member. He filed a lawsuit in February 2024 to protect OpenAI’s charitable commitments, withdrew it, and refiled it. Now, he has brought Microsoft into the fray, arguing that Microsoft’s partnership with OpenAI is allowing the two companies to bypass antitrust laws.

We’re certain that if OpenAI does embark on a journey to for-profit status, that road will be long and bumpy. In determining the value of OpenAI’s assets and who owns them, the state regulators and the nonprofit board are empowered to fully protect the nonprofit – not OpenAI’s CEO, its employees, the for-profit company itself, or any investor.
 
While I don't think we'll ever get replicant-tier sexbots, we might get actual Matrix-level VR thanks to Neuralink and AI. We can already interface with the brain, and AI can generate Doom in real time; go further with that and you have VR that's practically indistinguishable from reality, except you have near-total control. While I can imagine a ton of ways to use it, like walking on Mars based on data from probes, guess what most people will use it for? That's right: porn, and who knows how deranged it might get.

I don't think neural implants will be used in the future beyond medical applications. The brain processes information at a huge bandwidth, and even if a device could read neural signals, brain patterns vary between people. It would degrade over time due to the brain being corrosive, and scar tissue would form around it. Also, once the implant is in, how would you maintain it? The hardware might become outdated in a few years.

It's more likely that people will start wireheading—stimulating the reward centers in their brains with electrical currents—as a replacement for masturbation or drugs. They wouldn't develop a chemical tolerance to this, so they would neglect basic needs until their synapses become numb. But most will be too scared to take up this addiction, sticking to digital girlfriends and locally generated porn.
 
why LLMs can't be trusted to write good code, supported by studies on the matter
Any studies on if Indians can be trusted to write good code? I have a strong feeling the German lady won't admit "Bob and vagene" Indians aren't quality coders.

And that's the thing. Yeah, AI can't replace quality, but a lot of jobs aren't hiring based on quality. Instead of hiring 20 Indians and making the manager fix their code, you can get an AI to give you the same results in seconds that take Indians hours.

Think about it like self-checkout vs. cashiers, except we exclusively hire stinky, rapey Indians as cashiers.
 
Any studies on if Indians can be trusted to write good code? I have a strong feeling the German lady won't admit "Bob and vagene" Indians aren't quality coders.
I'm not sure. She does take a strong, unpopular stand on certain issues.
And that's the thing. Yeah, AI can't replace quality, but a lot of jobs aren't hiring based on quality. Instead of hiring 20 Indians and making the manager fix their code, you can get an AI to give you the same results in seconds that take Indians hours.
That's a good point.
 
That's a good point.
Except you can only say that about what I said because this is Kiwifarms. Twitter, Reddit, any corporate job, most social media, and any place with boomers would call it a racist point to make, and therefore wrong.

That's really the point about AI. It's like the suburbs: yes, there's a fuckload of things wrong with it, but that's because it's not a solution for what they state it's for. The suburbs were a solution to non-whites, just like AI is the solution for Indians.

You can start an AI Menace thread on Kiwifarms right now and it will never catch up to the Indian Menace thread, and those are the only people allowed to code anymore. Look at Luigi: literally a master's with internships in CS with a heavy focus on AI, and he was forced to work for some small car company's data website. Meanwhile I know Indians who straight up just lied about having a degree and got better computer jobs right out of high school.

Anyone in the industry knows it's straight up getting garbage output from the shitload of third-worlders and trying to make it work. What's the fucking difference between that and AI?
 
Except you can only say that about what I said because this is Kiwifarms.
Depends. It's increasingly acceptable to be RAYCISS depending on the circumstances, unlike in the recent past at least.
 
The brain processes information at a huge bandwidth
I read somewhere the cerebellum's bandwidth is comparable to a 4G connection at full blast, so not a lot.
It would degrade over time due to the brain being corrosive, and scar tissue would form around it. Also, once the implant is in, how would you maintain it? The hardware might become outdated in a few years.
But most will be too scared to take up this addiction
Consider the sheer number of people getting their genitals removed and tell me with a straight face they won't opt for something like this if it allows them to live in lalaland 24/7. Hell, anyone shooting up H or similar drugs is already destroying themselves even more for what's essentially just a chemical bump.

On the other hand, I remember a paper about how the brain and thought patterns could be manipulated with superconducting magnets; IIRC they even gave a guy temporary amnesia. If we ever figure out room-temperature superconductors, you could scale this down to a helmet full of those magnets, disconnecting you from your body and taking you to whatever VR sim you want.

However, either case still has the problems I mentioned. If this tech did happen, implants or magnets, you would still have the problem of what happens to those who just don't want to leave the simulation. Rich people, I assume, could afford a care facility of sorts where people or robots wash their asses, hook up feeding tubes, and watch that they don't get too atrophied. But the vast majority of such VR addicts are just gonna die slowly in their beds or couches; you're gonna have firefighters breaking into homes to find people who died while hooked up to the simulation because they wouldn't log out for even a second.
Any studies on if Indians can be trusted to write good code?
I knew an Indian who couldn't even figure out Ubuntu, the kiddie distro...
just like AI is the solution for Indians.
Ha, hahahaha, they are already using AI to cheat at interviews. How do I know? Because they are openly bragging about it...
Anyone in the industry knows it's straight up getting garbage output from the shitload of third-worlders
Indians are in a category of their own, below other third-world devs. I knew Brazilians working at consulting companies like Accenture who would dread the day they had to work with the functional retards from the Bangalore branch.
 
Not an LLM enthusiast, but I have played with ChatGPT and DeepSeek off and on over the past week. ChatGPT is good at bullshitting stories, and DeepSeek is terrible after three prompts, at which point it gets very repetitive.

At one point I started up another "story" where I told it to tell me a story and then kept telling it to continue, and it started spewing a Linux GRUB bootloader error at me in an infinite loop; I had to end the generation manually. I told it to continue the story again and it did the same thing.

I've never mentioned anything technical on the account, so I'm not sure what was up with that, but I found it interesting.
 
I read somewhere the cerebellum's bandwidth is comparable to a 4G connection at full blast, so not a lot.

Wetware uses neural coding instead of digital or analog signals, which makes comparisons to hardware difficult. But even if we disregard that, the cerebellum is just one part of the human brain. The brain contains ~86 billion neurons, each forming ~1,000 or more synapses (on the order of 100 trillion synapses in total). Neurons fire ~10-100 times per second, which works out to ~860 billion to 8.6 trillion action potentials per second.
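The multiplication, for anyone who wants to check it (rough Python, reusing the same rounded estimates):

```
# back-of-the-envelope arithmetic from the estimates above
neurons = 86e9                           # ~86 billion neurons
synapses = neurons * 1_000               # ~1,000 each -> ~8.6e13, order of 100 trillion
low, high = neurons * 10, neurons * 100  # ~10-100 Hz firing rates
print(f"~{synapses:.1e} synapses")                                # ~8.6e+13
print(f"~{low:.1e} to ~{high:.1e} action potentials per second")  # 8.6e+11 to 8.6e+12
```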

Consider the sheer number of people getting their genitals removed and tell me with a straight face they won't opt for something like this if it allows them to live in lalaland 24/7. Hell, anyone shooting up H or similar drugs is already destroying themselves even more for what's essentially just a chemical bump.

You're right. I was being a little optimistic.

On the other hand, I remember a paper about how the brain and thought patterns could be manipulated with superconducting magnets; IIRC they even gave a guy temporary amnesia. If we ever figure out room-temperature superconductors, you could scale this down to a helmet full of those magnets, disconnecting you from your body and taking you to whatever VR sim you want.

That's quite a leap, but I don't know enough about non-invasive brain stimulation to say anything on the topic.
 
Don't know about the ethics, but it's really pathetic and even sad how some people have to rely on a glorified Markov chain for a facsimile of love.
It's just technological progression. Back then, people either stayed single and drank themselves to death, or hooked up with someone out of sheer desperation, possibly fucking up their offspring. It's simply a harsh truth that not everything has a happy ending, so you might as well make the time more enjoyable till your liver gives out.

you're gonna have firefighters breaking into homes to find people who died while hooked up to the simulation because they wouldn't log out for even a second.
Again, technological progression, but for natural selection.
 