AI development and Industry General - OpenAI, Bing, Character.ai, and more!

I'm sorry, I can't understand how this AI stuff is supposed to be any threat whatsoever to the species. It just looks like a big algorithm
The major worry is that AI will surpass human intelligence, which isn't science fiction and will happen in our lifetime. At some point it will develop the ability to make new technical and scientific discoveries, leading to a new arms race in the 21st century. Once that happens, it will be able to improve itself at an exponential rate and create the technological singularity.
 
The major worry is that AI will surpass human intelligence, which isn't science fiction and will happen in our lifetime. At some point it will develop the ability to make new technical and scientific discoveries, leading to a new arms race in the 21st century. Once that happens, it will be able to improve itself at an exponential rate and create the technological singularity.
Just turn it off then
 
The major worry is that AI will surpass human intelligence, which isn't science fiction and will happen in our lifetime. At some point it will develop the ability to make new technical and scientific discoveries, leading to a new arms race in the 21st century. Once that happens, it will be able to improve itself at an exponential rate and create the technological singularity.
I hope AI one day destroys the investor and financial speculative market, making it obsolete and all the fin-tech dude bros end up sweeping streets like the rest of us.
 
I've personally found the AI/Midjourney-generated worlds/alt universes pretty funny, very autistic and yet weirdly kind of charming, like this one:

Screen Shot 2023-12-05 at 6.25.24 AM.png
Screen Shot 2023-12-05 at 6.26.22 AM.png

Normally they're just Instagram pages with tons of Midjourney "art" with captions, though this one seems to be its own world, complete with short stories, detailed backstory and worldbuilding, a newsletter, etc.

Screen Shot 2023-12-05 at 6.26.57 AM.png
 
@NoReturn
Altman's gambit of bluffing a move to Microsoft worked, and he's now back as CEO of OpenAI. Greg Brockman is back too. There's also a new board.

Linked the nitter to update the OP. Something of note: Sam Altman is so fucking gay. Like, he tries really hard not to appear as a flaming homo, but reading his twitter it's obvious he's a big gay.
 
@NoReturn
Altman's gambit of bluffing a move to Microsoft worked, and he's now back as CEO of OpenAI. Greg Brockman is back too. There's also a new board.

Linked the nitter to update the OP. Something of note: Sam Altman is so fucking gay. Like, he tries really hard not to appear as a flaming homo, but reading his twitter it's obvious he's a big gay.
OP updated, screenshots from obvious homo Sam Altman below:
1701797785841.png

And some responses:
1701797818005.png
1701797926415.png
1701797958801.png
Also, and this may be my inner weeb showing, but does anyone else find it pretentious that his handle is "sama"?
 
The major worry is that AI will surpass human intelligence, which isn't science fiction and will happen in our lifetime. At some point it will develop the ability to make new technical and scientific discoveries, leading to a new arms race in the 21st century. Once that happens, it will be able to improve itself at an exponential rate and create the technological singularity.
AI isn't exactly intelligent. It recognizes and follows patterns in data we feed it, hence we prefer to call it ML. It can sometimes combine these patterns in new ways, but coherency has remained an issue. Of course some people (including my old boss, years ago) argued that close imitation of intelligence might as well be thought of as intelligence. I don't know if quantitative change (size and performance of new models) will result in qualitative change (real creativity and innovation from them), but I'm skeptical. I get their point though, and it's hard to argue against. I'm opposed to it in spirit, without rational justification. Regardless, I don't think it's worth freaking the fuck out over and treating it as the end of the world.
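To make the "recognizes and follows patterns in data we feed it" point concrete, here's a toy word-level chain (a minimal sketch only; real models are transformers trained on trillions of tokens, but the follow-the-pattern-without-understanding behavior shows up the same way in miniature: locally plausible output with no grasp of what it's saying):

/* Toy illustration: a word-level bigram chain. "Training" is just
 * splitting a corpus into words; "generation" jumps to a random
 * occurrence of the current word and emits whatever followed it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define MAX_WORDS 256

int main(void) {
    char corpus[] = "the model follows the patterns in the data and "
                    "the data shapes the patterns the model follows";
    char *words[MAX_WORDS];
    int n = 0;

    /* "train": tokenize the corpus */
    for (char *tok = strtok(corpus, " "); tok && n < MAX_WORDS;
         tok = strtok(NULL, " "))
        words[n++] = tok;

    /* "generate": follow the observed word-to-word transitions */
    srand((unsigned)time(NULL));
    char *cur = words[0];
    for (int step = 0; step < 20; step++) {
        printf("%s ", cur);
        int next[MAX_WORDS], m = 0;
        for (int i = 0; i < n - 1; i++)
            if (strcmp(words[i], cur) == 0)
                next[m++] = i + 1;      /* remember what followed cur */
        if (m == 0)
            break;                      /* no learned continuation */
        cur = words[next[rand() % m]];
    }
    printf("\n");
    return 0;
}

That's the cartoon version, obviously. Whether scaling the same general trick up to trillion-parameter transformers ever crosses into real creativity is exactly the question my old boss and I never settled.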

Personally I think AI is best conceived of as a very powerful tool. That's where the danger lies: it could be used to enhance digital surveillance (already happening) and take a more active role in decision making (very likely). I don't see a Skynet type situation ever happening, but AI-enabled soft totalitarianism seems a real possibility.

If twitter coomission artists lose their jobs to image gen models, so be it. They can always find something less soul sucking to do. It was a miserable "job" in the first place. Neither AI "art" nor Twitter "art" has a right to be called "art". Both are the blackest, lowest coal in my eyes. Screw them both.

As for AI-written code, I'm not afraid for my career, since most of the training data will remain middling garbage. It might replace some of the more braindead jeet jobs (by 10xing the best jeets), but if I'm a decent engineer, I'll be fine. If I'm not fine, then I simply didn't deserve to be. I'll move to Kazakhstan and herd sheep.

I think the singularity is a retarded concept. Tech has slowed down a lot compared to 1850-1950. We're getting deindustrialized, half the economy is fake, easy resources are running out, world trade is getting more complicated due to war and politics. Chudding out. There's some work being done to solve these, like the Chinese getting clever with synthetic hydrocarbons, but research and implementation are difficult. As for the hard problems of space travel and colonization, I see little indication that we can "science the fuck out of" this one. Reddit mindset. In the end, as computer hardware improves, code worsens. There's been a massive brain drain, it seems. Set up hot reloading with DLLs and soydevs think you're a wizard. Literally 1968 technology.
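For anyone who hasn't had the pleasure, the "1968 technology" in question is roughly the following. A minimal POSIX sketch of dlopen()-based hot reloading; plugin.so and its update() function are names I made up for illustration, a real loop would watch the file's mtime instead of blindly reloading every second, and you may need -ldl when linking on older glibc.

/* Minimal sketch of shared-library hot reloading: recompile plugin.so
 * while this runs and the next iteration picks up the new code.
 * plugin.so / update() are hypothetical; error handling kept terse.
 * Build the plugin with: cc -shared -fPIC plugin.c -o plugin.so */
#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*update_fn)(void);

int main(void) {
    void *handle = NULL;
    for (;;) {
        if (handle)
            dlclose(handle);                      /* drop the old copy */
        handle = dlopen("./plugin.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        update_fn update = (update_fn)dlsym(handle, "update");
        if (!update) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return 1;
        }
        update();                                 /* run the current code */
        sleep(1);
    }
}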

I don't give a fuck. Life finds a way.

If any of my old co-workers are reading this, hit me up. We can go fishing, I'll bring beers.
 
I don't know if quantitative change (size and performance of new models) will result in qualitative change (real creativity and innovation from them), but I'm skeptical.
i hate philosophical zombies
if nobody can distinguish between artificial intelligence and human intelligence, machines have become as intelligent as humans
saying "no, they're only pretending" is pointless if the result is the same
 
that's gay
It's over, you've won.

If true AI existed then I would hate it more. I simply like humans. To illustrate this, if Aliens came to earth, I would dedicate myself to sparking a war between us and them. Even if we were to lose, that would still be a better outcome than having to share our world with those freaks. Tie me to a planet killer and launch it at their homeworld. Inshallah I will be ready
 
Today on /g/
1701807961188.png 1701804848617862.png

Meanwhile, on reddit
1701807468712.png
Full text:

Now that it's all said and done, let's talk about Effective Altruism (and why it is a societal cancer)​

Educational Purpose Only
Preface: I had the great misfortune of living in an EA co-op when I first moved to the Bay because I was limited on housing options. Many of the EA organizations still run off of the dwindling store of fraudulent money that FTX pumped into them. As a result of this extremely poor housing choice, I know too much about this shitstain of a movement on society. You can see more about that specific saga in my first Reddit post if you're curious.
Ok, now for the post:
Sam has returned as CEO. The two Effective Altruists who were on the board have been banished to the shadow realm. Hooray, all is good?
Not exactly.
If you work in tech at all, you need to be on the lookout for anyone who presents themselves as "EA", "Rationalist", or uses words like "x-risk" or even a random buzzword like "deterministic". If they do, the chances they are part of this doomer cult of EA is pretty high.
Why is EA so damaging?
If you remember, Caroline Ellison, one of the central figures in the FTX fraud, gave testimony in Sam Bankman-Fried's trial. It's pretty well-covered in this article: https://www.theringer.com/tech/2023...caroline-ellison-testimony-ftx-cryptocurrency
I want to highlight one specific part of that at the end, where Caroline talks about SBF having a different risk profile than other normal people.
Ellison said Sam had once claimed that “he would be happy to flip a coin, if it came up tails and the world was destroyed—as long as, if it came up heads, the world would be, like, more than twice as good.” When you’re assigning your own odds to everything, you can make them look however you like.
There's another article I can't find right this moment, but it covers Caroline speaking about their mentality being essentially that the ends justify the means. So if they can create greater net benefit for society later on, then ANYTHING they do is moral and just. This could mean literally anything. In this case, it was massive fraud, but in other parts of EA it has meant literal rape and domestic violence. When you think in this way, it is extremely dangerous and destructive. You can rationalize anything you do as for the greater good.
Why is this important?
Because they were all Effective Altruists. Basically the entirety of FTX (well-documented at this point) was either EA or heavily EA-adjacent. This philosophy of irrational rationalization is what allowed them to commit such serious crimes and still claim the moral high ground the entire time.
Well let me tell you man... the worst things you could ever do only look like the moral high ground if you're standing upside down, your head buried in the sand. Not a bad analogy for how EA people are.
But now the EA's are gone from OpenAI, so everything's good?
No. I want you to recall this article, which covers how the OpenAI board approached Anthropic about merging.
https://www.theinformation.com/articles/openai-approached-anthropic-about-merger
If you haven't heard of Anthropic, it's essentially an OpenAI competitor but run from top-to-bottom by EAs. It was also funded to the tune of $500 million out of their total early funding of $700 million by.....
Alameda Research. The fraud trading arm of FTX.
Helen Toner and Tasha McCauley, the two board members who were forced to resign, were both EAs. But notice how Ilya is also not back on the board? Sam and Greg aren't either, but this is key.
Ilya is also an EA. And the interim CEO and former Twitch co-founder who they tapped to lead OpenAI after firing Sam? Emmett Shear?
Also an EA.
You can probably already understand where I'm going with this, but this is a massive conflict of interest where EAs are trying to gain widespread control of the tech industry, as well as influence over other parts of society at large. Recall (if you read the Ringer article I linked above in full) that SBF wanted to be President of the United States. Well, this doomer cult essentially wants to amass wealth, influence, and power for "the greater good." But the philosophy that backs it allows them to commit acts of absolute criminal destruction as the means to it.
This is an incredibly dangerous movement that people NEED to be wary of.
Before Sequoia dumped $213.5 million into FTX, EAs were in large part not so influential in Silicon Valley and the world at large. Now, with many of the funds FTX stole from customers redirected into EA organizations like Anthropic, this has changed.
EAs now have a significant platform of influence in Silicon Valley. Even with major scandals that have hit global news cycles, like FTX and now OpenAI, being heavily driven by EA shittiness, they still retain that power. This isn't even to begin cracking into the wave of smaller scandals like the outright misogyny, the white wealthy privileged roots and racism, and the string of sexual assaults in the EA community.
FUCK MAN. Somehow this movement, like their mentality itself, is unshakeable by all the writing on the wall and evidence of destructive behavior.
Ok, so what now?
I made this post because I didn't see the EA angle being talked about enough. That's what drove this shit. Even the articles talking about the paper that triggered this conflict are too indirect: that paper was contributed to by one of the board members because they're part of this EA AI-doomer cult. That, plus their ties to a direct competitor in Anthropic, are such an obvious conflict of interest that I cannot believe this wasn't exposed until now.
If you are in tech, and you see people like this, actively avoid them. The more we can avoid these people, the weaker their grip on influence and power to pull this kind of shit is. That's why I'm making this post. I hope to god that it gets seen.

1701807547857.png 1701807561469.png

I can tell you what people inside Silicon Valley are saying is behind the OpenAI debacle​

News 📰
Only a small number of people know for sure what was said in the important meetings, but the highest-probability explanation right now has nothing to do with AI safety/alignment, nor with the claim in the original press release that Sam Altman was dishonest with the board (which they have walked back from.) Instead, the most likely explanation has to do with Y Combinator, a startup incubator that has almost no real power (they're barely taken seriously by real venture capitalists) but an impressive amount of influence, due to the mindshare of Hacker News among the under-30 tech workers. The short version is that it appears that YC initiated this, that they massively overplayed their hand, and they ended up screwing up in a way that Microsoft of all sides won (and this almost certainly wasn't their intent.) Ilya Sutskever, who my sources tell me is on the whole a good guy but unskilled in office politics, was unfortunately played as a patsy in this whole thing. Although he and Sam had genuine disagreements about the direction of OpenAI, there wasn't really bad blood between them and he wouldn't have tried this if he hadn't been egged on by someone else.
Sam Altman used to head Y Combinator, and no one seems to be entirely sure why he left, but the general consensus is that he was pushed out. YC are absolute dirtbags, so I wouldn't take that to reflect on him. The relationship between them has not been great since then, and Y Combinator generally isn't the best at keeping its own people in line, by which I mean that its partners are known to pull things that damage the YC brand and it rarely does a good job of stopping them. So, it's hard to say who at Y Combinator knew this was going on or wanted it, even though the YC fingerprints are on the event (especially pertaining to the incompetence with which it was executed) to everyone who can recognize the signs.
Also, over the past few months, YC has been begging OpenAI to give their investments, partners, and important founders preferential treatment in training data and weights for future language models, so that answers to questions like "Who is [X] and what is he known for?" will be favorable to people they care about. Giving away the store like this to YC is, obviously, something that most people at OpenAI are against. It sets a horrible precedent. I wasn't sure, and I'm still not, which side of this controversy Sam was on, but this has been building for a couple months and we all knew it was going to get hot. A lot of us were warned that we'd see "news" around the end of last week, but even I had no idea exactly what was about to go down, let alone the ridiculous way in which it did.
Adam D'Angelo feels a certain fealty to Y Combinator because, although Quora is still a failure as a product, they rescued it for long enough, and in such a way, that he could sustain his career, which he wouldn't have been able to do if nature had been left to take its course with Quora. It's likely that YC was using Adam as their link for this coup, since he's the only one who owes them anything. Of course, even though Ilya and Sam have strong disagreements, they generally (per my sources) respect each other on a personal level. Ilya wouldn't have gone against Sam just because. And this is another reason why I don't think it was Ilya's idea.
The thing about YC is that they're extremely petty. Boardroom intrigue happens everywhere in the corporate world, but the YC people go out of their way to ruin the target's reputation. It's like middle school for them. So when the press release announcing Sam's firing said "he was not consistently candid", which is business speak for, "he's an unethical liar", there were two explanations. One was that something Epstein-level horrid was going to drop, and soon, because companies usually don't put something that damning in a change-of-leadership notice unless they're trying to get out ahead of horrible news. The other was that this was a deliberate hit (but a clumsy one) that sought not only to remove Sam but to damage his reputation out of spite. Ilya was the first to undermine this narrative. Even if he felt Sam wasn't the right leader for the company, he didn't want to destroy the man, so he put his weight behind the claim that it was about disagreements regarding AI safety and alignment—which, to him, it probably was—rather than an ethical problem. This made him look bad, of course, because it comes off as inconsistent: why issue a damning notice and then retract it? Again, this is evidence that he was in over his head and played as a patsy by someone else.
There is a lot we still don't know. We don't know if (a) Sam full-out refused to offer YC preferential representation in future LLMs or if (b) he just wasn't giving YC everything they wanted, so they decided Adam, boosted by leveraging Ilya's general positive reputation, was a better bet. We're still figuring that out. We also don't know if preferential representation in existing language models has been sold in the past, but my guess is no. As for the firing, the whole thing was executed with such incompetence and hubris that it was impossible to know exactly what they were thinking, or who specific key players were, but there aren't a whole lot of people in Silicon Valley who would do it with the particular signature of incompetence that we've seen, which is why everyone thinks Y Combinator had something to do with it.

1701807669305.png

What would you do if you were the Singulariton?​

AI
Singulariton = The first person to have control of a super intelligent AI.
Here's what I would do...
Prompt: Explain to me what we need to do to stop anyone else from getting a super intelligent AI. I must be the only one with that power.
After I am secure in being the only singulariton
Prompt: Create a time machine, make me immortal, give me the powers and good looks of superman... oh yeah solve world peace and starvation blah blah blah

Bonus
1701807313348.png 1701807341648.png 1701807361040.png 1701807402079.png

Even if we were to lose, that would still be a better outcome than having to share our world with those freaks. Tie me to a planet killer and launch it at their homeworld. Inshallah I will be ready
s1c3pp71xbc31.jpg
 
This is floating around today:
IMG_20231210_200744_078.jpg
WSJ article: https://archive.is/z1958

Excerpts:
“He was trying to claim that it would be illegal for us not to resign immediately, because if the company fell apart we would be in breach of our fiduciary duties,” she told the Journal. “But OpenAI is a very unusual organization, and the nonprofit mission—to ensure AGI benefits all of humanity—comes first,” she said, referring to artificial general intelligence.
The members, including Toner, were taken aback by staffers’ apparent willingness to abandon the company without Altman at the helm and the extent to which the management team sided with the ousted CEO, according to people familiar with the matter.
Toner was previously an active member of the effective-altruism community, which is multifaceted but shares a belief in doing good in the world—even if that means simply making a lot of money and giving it to worthy recipients. In recent years, Toner has started distancing herself from the EA movement.

Aussie Aussie Aussie! Oy oy oy!
 
i hate philosophical zombies
If you corner people saying such things, eventually the bottom line is some variation of "it has no soul" or some other human-centric view, which, ultimately, is a non-argument and meaningless. AI will indeed surpass human intelligence within our lifetimes. I'm not sure about the optimistic five years many in this industry claim, but ten years? 15 years? Quite possible. The moment AI comes close to some of the dumbest humans in generalizing and problem solving, it'll only be a moment until it becomes (from our perspective) superintelligent. It'll just zip us by, "IQ-wise" if you will. We'll barely see it happen.

It's always important to note how alien AI, or at least its current incarnation, is from our thought process, though. Some AI scientists defeated a Go-playing AI that was undefeatable even by masters, basically by playing the game in an unexpected, kind of wrong (while still sticking to the rules) way. A way even the most amateurish human player would've caught on to and had a chance of winning against. That was ultimately possible because that Go AI had no general concept of winning, of what a game is, or even of what Go is. The same way LLMs don't really have a concept of what a conversation, a question or an answer is. You could of course now ask: does the heap of neurons that processes the signals from my nerves, which process the light input that is this text, have an "innate" concept of what language is by itself? You know it doesn't, and I wasn't born with the ability to interpret these patterns, nor did I even speak this language in the first years of my life. Was I less then than I am now? It's the age-old philosophical question of when the whole becomes more than its parts. Does it ever really?

The ability to generalize, to have an "inner world", is what will make something that we, as humans, can recognize as intelligence. LLMs don't really have it yet, although it sometimes feels like (and yes, this is completely unscientific and subjective of me) you can see hints of it in the bigger models, which is eerie when it happens.
 