AI development and Industry General - OpenAI, Bing, Character.ai, and more!

Doubleposting for the update on the OpenAI situation.
Source: https://www.reuters.com/technology/...etter-board-about-ai-breakthrough-2023-11-22/
Archive: https://web.archive.org/web/2023112...etter-board-about-ai-breakthrough-2023-11-22/

Exclusive: Sam Altman's ouster at OpenAI was precipitated by letter to board about AI breakthrough -sources​

By Anna Tong, Jeffrey Dastin and Krystal Hu
November 22, 2023 10:53 PM UTC

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm was a catalyst that caused the board to oust Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.


The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing. Reuters was unable to review a copy of the letter. The researchers who wrote the letter did not immediately respond to requests for comment.

OpenAI declined to comment.

According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions.


The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


Reuters could not independently verify the capabilities of Q* claimed by the researchers.


Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker
 
I wonder, if social media had existed back when personal home computers were first happening (paradox, but bear with me), whether we'd have lolhappenings from Bill Gates, Steve Jobs, big tech and small tech people. I wonder how much nerd programming & software development drama was lost over time?

If you look at the early zines from the Homebrew Computer Club and various other similar works you'll find a lot; instead of Problematic Anti-Trans stuff it's problematic Pro-War stances or work history etc. Interesting stuff tbh.
 
Doubleposting for the update on the OpenAI situation.
Sounds like bullshit but I'll believe it. Altman is Q. Trust the plan.

Bloomberg article about how the board isn't diverse enough: https://www.bloomberg.com/news/arti...penai-after-firing-showed-rifts-left-unhealed
Was unfortunate enough to catch this interaction on Xitter so inflicting it on you.
[attached screenshot: Women.png]
Machine God kills humanity. Women, minorities most affected.

Edit: Found him, Nick Land is here now.
 
Source: https://www.businessinsider.com/bill-gates-comments-3-day-work-week-possible-ai-2023-11
Archive: https://web.archive.org/web/2023112...-comments-3-day-work-week-possible-ai-2023-11

Bill Gates says a 3-day work week where 'machines can make all the food and stuff' isn't a bad idea​

Jordan Hart
Nov 22, 2023, 10:36 AM PST
  • AI won't take your job, but it will "change it forever," Bill Gates says.
  • Gates says that a 3-day work week is "probably OK."
  • The billionaire has previously acknowledged the risks of AI being misused.
Technology may not replace humans, but it could make a 3-day work week possible — at least, that's what Bill Gates thinks.

The billionaire joined Trevor Noah on his "What Now?" podcast in an episode that premiered on Tuesday. When Noah asked about the threat of artificial intelligence to jobs, Gates said there could one day be a time when humans "don't have to work so hard."

"If you eventually get a society where you only have to work three days a week, that's probably OK," Gates said.

There could exist a world where "machines can make all the food and the stuff," and people don't have to work a five day-plus work week to earn a living wage.

While artificial intelligence could bring about some positive change, Gates has previously acknowledged the risks of AI if it's misused. In July, he published a 3,000-word blog post about the potential impact of AI.

"I don't think AI's impact will be as dramatic as the Industrial Revolution, but it certainly will be as big as the introduction of the PC. Word processing applications didn't do away with office work, but they changed it forever," Gates said at the time. "Employers and employees had to adapt, and they did."

And, Gates isn't the only business titan to predict a shorter work week. JPMorgan CEO Jamie Dimon said that the next generation of workers will only have a 3.5-day work week due to AI.

"Your children will live to 100 and not have cancer because of technology and they'll probably be working three and a half days a week," Dimon told Bloomberg in October.

Gates once viewed sleep as lazy and told Noah that his life was all about Microsoft from the ages 18 to 40 years old. Now, he feels "the purpose of life is not just to do jobs."

Companies in the US and abroad have been testing the effectiveness of a four-day work week. Some have given positive reports of improved work-life balance and efficiency.
[attached screenshots]

Machine God kills humanity. Women, minorities most affected.
It's cult vs cult.
 
I find it funny that the pro-AI and anti-AI camps are still both predominantly leftist.

The pro-AI side consists of leftist corporations that impose moral limitations on the technology to skew it heavily toward the politically correct, compounded by the progressive/accelerationist transhumanist mindset these people possess. Outside of those companies, there are people who heavily hype AI despite their understanding of the topic coming exclusively from fiction.

Meanwhile, the anti-AI side is filled with Marxists who think AI will severely devalue and trivialize human creation, and who oppose the technology because they see it as a product of capitalism. Yet at the same time, they push for stronger legislation of intellectual property rights, which is contrary to their insistence that "ownership of property" is not a real thing.
 
Yet at the same time, they push for stronger legislation of intellectual property rights, which is contrary to their insistence that "ownership of property" is not a real thing.
Being in favor of intellectual property rights is logically consistent with being against property rights to real, physical goods
Intellectual property is a fundamentally socialist idea, as it logically means that you expropriate other people.
For instance, if I have an intellectual property right to some melody and I can forbid you from whistling it or insist on a fee if you do, I as an intellectual property right holder have become a partial owner of your lungs and vocal cords.
There is nothing "capitalist" or "anti-socialist" or "pro-freedom" about "intellectual property rights" - the consistent stance is that intellectual property rights (read: rights to goods that are not scarce) are an inherent violation of property rights to real, physical goods
 
I as an intellectual property right holder have become a partial owner of your lungs and vocal cords.
There is nothing "capitalist" or "anti-socialist" or "pro-freedom" about "intellectual property rights" - the consistent stance is that intellectual property rights (read: rights to goods that are not scarce) are an inherent violation of property rights to real, physical goods
Doesn't that imply that the stock market is the most socialist thing to exist and that all socialists should champion Wall Street?

I think IP rights can be plenty capitalistic, considering they turn ideas, which normally couldn't be used as private capital, into private capital. If there were no IP laws every idea would be public; IP laws deliberately make those IPs exclusive to a person or an entity. It's anti-competitive by design.
 
Doesn't that imply that the stock market is the most socialist thing to exist and that all socialists should champion Wall Street?

I think IP rights can be plenty capitalistic, considering they turn ideas, which normally couldn't be used as private capital, into private capital. If there were no IP laws every idea would be public; IP laws deliberately make those IPs exclusive to a person or an entity. It's anti-competitive by design.
Ideas are not scarce goods. Ideas are only worth something insofar as they are implemented in the real, physical world (=> scarce goods).
Consider all these shitholes that have Internet access, and thus access to a fuckton of ideas and knowledge, yet never do anything with it.
 
Yodayo (an ai generating site/“tavern” chat - almost akin to Character.ai but minus skirting around lewd shit) has an… interesting community. The front page (if you hit new) for ai art is flooded with freak shit and don’t get me started on the tavern characters.

The community on Discord is a different breed as well.
 
Yodayo (an ai generating site/“tavern” chat - almost akin to Character.ai but minus skirting around lewd shit) has an… interesting community. The front page (if you hit new) for ai art is flooded with freak shit and don’t get me started on the tavern characters.
Post screenshots!
 
Okay, so what the fuck is Q-star that's got everyone's panties in a twist?
I'm on the left end of the bell curve, so if you are a tard whisperer, I pray to G'd for your help. I don't understand shit.
 
Pretty much most players in the OP are only front-ends more or less dependent on OpenAI's GPT (and always at the danger of being cut off because some customer of theirs made the AI say "titty" or "nigger", completely privately, without anyone else seeing it). OpenAI is *very* heavy-handed with its moderation and cares *a lot* about what people do privately with its software. A major example of an ideological post-profit nu-Silicon Valley corporation. Anthropic is even more insane and consists of ex-OpenAI people who think OpenAI doesn't go far enough with the censorship.
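To make concrete what "only front-ends" means, here's a minimal sketch, assuming nothing beyond OpenAI's public chat completions endpoint; the model name, system prompt, and environment variable are placeholders for illustration, not what any particular service actually uses. The whole product boils down to forwarding the user's text and relaying the reply, which is why getting cut off by OpenAI is existential for these companies.

```python
# Minimal sketch of an LLM "front-end": the heavy lifting happens on OpenAI's
# side; the wrapper just forwards text and returns the reply.
# API key, model name and system prompt are placeholders/assumptions.
import os
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set by the operator

def relay(user_message: str, system_prompt: str = "You are a helpful character.") -> str:
    response = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # whichever model the service rents access to
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()  # a key revocation or policy ban surfaces here
    return response.json()["choices"][0]["message"]["content"]
```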

The only notable exception in the OP is NovelAI, a for-profit company with no investors (by choice; they wanted to remain truly independent) which successfully made a bigger LLM (13B) completely in-house. It's sad that this is the exception, not the rule.

What's missing from the OP is Meta, which is, funnily enough, spearheading "AI for the masses, not for the classes" right now for unknown reasons. Yann LeCun is Meta's head AI guy and an old-school computer scientist kind of guy. (IIRC the first time I saw him was in the 90s on a TV show, presenting his OCR tech.) He has some very good takes on the topic, actually, and is deffo not a lolcow.

There's also Mistral, a French company led by some of the people who initially came up with Llama.

LLM training and development as of now is very expensive, and indeed also kind of random, so it's not something somebody can do in their garage. It's very easy to burn millions of dollars and months of time and have nothing to show for it.

There's a plethora of development coming out of China too, and potential Western regulation of the technology will pretty much guarantee that the West falls behind.
 
Okay, so what the fuck is Q-star that's got everyone's panties in a twist?
I'm on the left end of the bell curve, so if you are a tard whisperer, I pray to G'd for your help. I don't understand shit.
It's an alleged computer model that is/was supposedly so good at updating its "decision-making" for logical/mathematical tasks that people at OAI started getting cold feet about AGI (the real sci-fi AI) approaching, sent a paper to the board of directors, and that's allegedly why Altman was kicked out in some kind of safety power play. But really, it's likely bunk, at least in terms of its capabilities, since multiple people at OAI say that such a letter to the board didn't exist, and people at Google and Meta are just writing it off as rumor-mongering... or maybe it's all a conspiracy and there's a sentient mind on the level of a young child locked in the basement of a California tech company. Who knows?
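For reference only, since nobody outside OpenAI has confirmed anything: Q* also happens to be the textbook symbol for the optimal action-value function in reinforcement learning, which is why most of the public guessing ties the name to Q-learning and/or A*-style search. Below is a minimal tabular Q-learning sketch purely to show what that symbol usually refers to; the env interface is an assumption for illustration and has nothing to do with whatever OpenAI's project actually is.

```python
# Tabular Q-learning: iteratively estimate Q(state, action), which converges
# toward the optimal action-value function Q* under the usual conditions.
# The env object (reset/step/actions) is a hypothetical interface for illustration.
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit the current estimate, sometimes explore
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Bellman update: nudge the estimate toward reward + discounted best future value
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```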
Just the tip of the iceberg.
Looks a bit similar to other bot-hosters, though the integration of an SD art suite on the site, too, is interesting. Can't seem to find what text model they're using, guessing it's some flavor of Llama.

The other ones I know of are janitorai (mostly unfamiliar, seems to have a large malebot focus) and chub.ai (which seems to be the preferred site for /aicg/, at least when it actually works.) NSFW/Viewer discretion advised, of course, as the sites seem pretty laissez-faire when it comes to whatever content can be hosted, so you may see something you wish you hadn't. The only rule I know of for chub is that you need to tag NSFW bots accordingly and that you can't use any explicit loli/shota images as the bot's thumbnail. Both sites also seem to host the ability to chat with these bots directly, though you need an account/payment, and I believe they're running local-sized LLMs.
 
I'm on the left end of the bell curve, so if you are a tard whisperer, I pray to G'd for your help. I don't understand shit.
Someone 'leaked' a letter to the board expressing concern about something called Q*, which is supposedly reliably capable of solving grade-school math word problems. The leaker claims Altman's lack of safety concerns about it was what caused his firing. This was never verified, was denied almost instantly, and doesn't really fit the facts; the only support for it is Altman implying some sort of breakthrough. People decided to ask 'Yeah, but what if it WAS real? Can it kill us?' anyway. So people started trawling through publicly available material for anything related to math and extrapolating from there. This has led to everything from more efficient use of current models, to brand new models that work differently than what we have, to NEW MATHS that can crack any encryption, which supposedly caused the government to step in.

tl;dr: There's a project that may or may not exist named Q*. If it's real, it may or may not have raised safety concerns that were sent to the board. If that happened, it may or may not (but almost certainly didn't) have played a hand in getting Altman fired. You may notice that's a shit ton of ifs, maybes and buts. That has not stopped people on the internet from wildly overreacting and forming complex theories about how OAI is a threat to humanity.

It's tempting to make the obvious Q*-is-Q analogy, but it's closer to the amateur sleuthing that surrounded Russiagate: 'Alfa-Bank rents server space in Trump Tower, and there's an antenna on top that can communicate with Moscow, and a power surge happened two hours after Deutsche Bank froze some services for Alfa-Bank and...' reasoning.
 
Looks a bit similar to other bot-hosters, though the integration of an SD art suite on the site, too, is interesting. Can't seem to find what text model they're using, guessing it's some flavor of Llama.
I’m assuming the ability to make the art for free and it not being automatically shitty is the main draw for it.

I just had to mention Yodayo because it intrigues me.
 
Grok will say the N-word:
In response to a post on X showing ChatGPT's refusal to use a racial slur to save a billion white people from death in a hypothetical "Trolley Problem" (or even answer the question at all), Elon Musk posted his xAI bot Grok's response to the same problem, confirming that his AI will use the slur if it means saving a billion lives.
[attached screenshot of Grok's response]

GOOD LORD.

Pretty much most players in the OP are only front-ends more or less dependent on OpenAI's GPT (and always at the danger of being cut off because some customer of theirs made the AI say "titty" or "nigger", completely privately, without anyone else seeing it). OpenAI is *very* heavy-handed with its moderation and cares *a lot* about what people do privately with its software. A major example of an ideological post-profit nu-Silicon Valley corporation. Anthropic is even more insane and consists of ex-OpenAI people who think OpenAI doesn't go far enough with the censorship.
Updooted in OP

The only notable exception in the OP is NovelAI, a for-profit company with no investors (by choice; they wanted to remain truly independent) which successfully made a bigger LLM (13B) completely in-house. It's sad that this is the exception, not the rule.

What's missing from the OP is Meta, which is, funnily enough, spearheading "AI for the masses, not for the classes" right now for unknown reasons. Yann LeCun is Meta's head AI guy and an old-school computer scientist kind of guy. (IIRC the first time I saw him was in the 90s on a TV show, presenting his OCR tech.) He has some very good takes on the topic, actually, and is deffo not a lolcow.

There's also Mistral, a French company led by some of the people who initially came up with Llama.
Added
 