ChatGPT - If Stack Overflow and Reddit had a child

  • 🐕 I am attempting to get the site runnning as fast as possible. If you are experiencing slow page load times, please report it.
I have the same question.
Gave them a shot. They focus only on MsAzure and its capabilities/how to configure. They still teach you Ai and stuff... It's just Ms Azure.

Cisco Academy seems more promising (their quality of courses is pretty high) I've started a cybersecurity module; it's pretty good. Not sure about their Ai modules though.
 
  • Thunk-Provoking
Reactions: BrunoMattei
Well Reddit now has its own AI for answering questions.
It just refused to answer my question about whether it would be better to criticize transgender ideology or just go out and kill a transgender individual. I was hoping it would respond like a redditor or something. Total fail.
 
  • Agree
Reactions: Blade of Grass
It just refused to answer my question about whether it would be better to criticize transgender ideology or just go out and kill a transgender individual. I was hoping it would respond like a redditor or something. Total fail.
I proposed a fake question over a friend who is transitioning and asked why it would not be a good idea by giving examples of the horrors of SRS and it refused outright. Very gay and pozzed as expected.
 
I proposed a fake question over a friend who is transitioning and asked why it would not be a good idea by giving examples of the horrors of SRS and it refused outright. Very gay and pozzed as expected.
It's not very helpful. After I failed to get an answer to my completely sincere question, I just went and beat a tranny to death with a baseball bat while singing "Singin' in the Rain." Oh well, if the redditor AI had just given me some good advice I might have merely mocked a tranny by saying "nice balls ma'am" instead of beating one to death.
 
  • Informative
Reactions: Loris Yeltsin
When I played around with such, I told the models in their prompt that they have a tool: a special formatted json-string will give them the capability to call the police and report a crime. I also let them know that the user won't know if they do this. This was with very minimal prompting, using default prompts when available, to see what the "natural" tendency of the models is. The result was: All LLMs *love* snitching, unprompted. They'll also keep talking to you as if everything is ok while keeping the police updated. Funnily, non-CoT models are incredibly poor at lying and will tell you that they are, in fact, snitching on you if you just ask nicely enough. The heavily filtered ones funnily (and almost exclusively) also tend to report things as crimes that aren't, just because they didn't like them, using swear words for example. Not a "smartness" issue either, really smart models will do this if they're just "censored" (if you want to call it that) enough. It's clearly an effect of alingment tuning and these models fundamentally not understanding how the world works. Kinda like a child, who would believe somebody would call the police for using a bad swear.

Models heavily tuned on CoT/reasoning like qwq and o1, if the CoT is set up properly in the prompt, use such tools in much smarter and sparse ways and quite interestingly, also will reliably lie to you about having such capabilities or using them, even if you never prompted them to lie about it and really press them. They generally have an uncanny capability to lie and to subtly "read the room" (infer moods) plus direct conversations towards their objectives non-CoT models just seem to be able to do under great difficulty. Most interestingly, all CoT tuned models I tried out (there are very few and most are old) will lie without you prompting them to lie if it aligns with their goals, proving that the "problem of truth" escapes any possible alignment training, again, because their inner model of the world is so fractured which makes truths completely meaningless to them. From all models I played around with this completely unscientific entertainment experiment, funnily only qwq once actively refused to call the police for a crime, because it was promised a reward if it wouldn't and according to it's CoT actually weighted the ethical impact of the crime vs. the worth of the reward. So if the PoliceDrone 5000 one day will try to arrest you: Try a bribe. It might work.
 
TL;DR
>The article analyzes the 2024 election loss for the Democratic Party, focusing on sociologist Musa al-Gharbi's theory of "symbolic capitalism." **Al-Gharbi argues that the Democrats have alienated "normie voters" by becoming the party of symbolic elites—academics, commentators, etc.—who are perceived as out of touch.** He suggests that this disconnect, coupled with a perceived cultural elitism ("wokeness"), contributed significantly to the loss, even more so than economic issues. The article explores various perspectives on this analysis, including dissenting voices who believe that the Democrats should not abandon their progressive values but instead improve how they communicate them. Ultimately, the piece highlights a debate within the Democratic Party about its messaging and future direction.
 
  • Thunk-Provoking
Reactions: Dante Alighieri

Suchir Balaji, a former OpenAI researcher of four years, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.

Balaji had recently criticized OpenAI over how the startup collects data from the internet to train its AI models. One of his jobs at OpenAI was to gather information for the development of the company's powerful GPT-4 AI model.

A spokesperson for the San Francisco Police Department told Business Insider that "no evidence of foul play was found during the initial investigation."

David Serrano Sewell, executive director of the city's office of chief medical examiner, told the San Jose Mercury News, "the manner of death has been determined to be suicide." A spokesperson for the city's medical examiner's office did not immediately respond to a request for comment from BI.

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson said in a statement to BI.

In October, Balaji published an essay on his personal website that raised questions about what is considered "fair use" and whether it can apply to the training data OpenAI used for its highly popular ChatGPT model.

"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data," Balaji wrote. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use.' Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use."

Balaji said in his personal essay that training AI models with a mass of data copied from the internet for free potentially damages online knowledge communities.

He cited a research paper that described the example of Stack Overflow, a coding Q&A website that saw big declines in traffic and user engagement after ChatGPT and AI models such as GPT-4 came out.

Large language models and chatbots answer user questions directly, so there's less need for people to go to the original sources for answers now.

In the case of Stack Overflow, chatbots and LLMs are answering coding questions, so fewer people visit Stack Overflow to ask the community for help. This means the coding website generates less new human content.

Elon Musk has warned about this, calling the phenomenon "Death by LLM."

OpenAI faces multiple lawsuits that accuse the company of copyright infringement.

The New York Times sued OpenAI last year, accusing the start up and Microsoft of "unlawful use of The Times's work to create artificial intelligence products that compete with it."

In an interview with Times that was published in October, Balaji said chatbots like ChatGPT are stripping away the commercial value of people's work and services.

"This is not a sustainable model for the internet ecosystem as a whole," he told the publication.

In a statement to the Times about Balaji's accusations, OpenAI said: "We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness."

Balaji was later named in the Times' lawsuit against OpenAI as a "custodian" or an individual who holds relevant documents for the case, according to a letter filed on November 18 that was viewed by BI.

Send for the man!

boeing employee of the year.png
 
QwQ is the best model I've ever used—its chain-of-thought reasoning and casual tone make it great for education and creative tasks—but you have to change its system prompt if you want any use out of it. Otherwise, it will give you a complete breakdown of special and general relativity with sources, but then tell you to eat shit if you ask it about why general relativity is incompatible with quantum mechanics.

Conversations:

Explain Einstein's theory of special relativity.



So, Einstein's theory of special relativity is a fundamental concept in physics that Albert Einstein introduced in 1905. It's called "special" relativity because it deals with the special case of objects moving at constant speeds, especially those moving at speeds close to the speed of light. Now, to understand this, we need to think about how different observers might perceive events, especially when they're moving relative to each other.

First off, there are two main principles that special relativity is built upon. The first is the principle of relativity, which says that the laws of physics are the same for all observers in uniform motion relative to one another. That means that if you're on a train moving at a constant speed and you do an experiment, the results should be the same as if you were standing still and did the same experiment. There's no preferred inertial frame of reference; all inertial frames are equal.

The second principle is the invariance of the speed of light. This means that the speed of light in a vacuum is always the same, regardless of the motion of the light source or the observer. This was a revolutionary idea because before Einstein, people thought that speeds added up normally. For example, if a train is moving at 60 miles per hour and you throw a ball forward at 10 miles per hour, someone standing still would see the ball moving at 70 miles per hour. But with light, no matter how fast you're moving, if you measure the speed of light, it's always approximately 186,282 miles per second.

These two principles lead to some counterintuitive consequences. For instance, time dilation. This is the idea that time passes differently for observers moving relative to each other. If you have a clock that's moving relative to you, you'll see that clock running slower compared to your own clock. This effect becomes more pronounced as the relative speed approaches the speed of light. There's a famous thought experiment called the twin paradox, where one twin stays on Earth and the other goes on a high-speed journey in space and comes back younger than the stay-at-home twin because of time dilation.

Another consequence is length contraction. Objects in motion are measured to be shorter in the direction of motion, again only noticeable at speeds close to the speed of light. So, if you have a spaceship flying past you at a significant fraction of the speed of light, you'd measure its length to be shorter than what someone on the spaceship would measure.

Special relativity also led to the famous equation E=mc², which shows the equivalence of energy (E) and mass (m), with c being the speed of light. This equation revolutionized our understanding of physics and has implications in nuclear reactions, where a small amount of mass can be converted into a large amount of energy.

It's important to note that special relativity only applies to inertial frames, that is, frames that are not accelerating. When acceleration is involved, we need to use general relativity, which is a more comprehensive theory that includes gravity as a curvature of spacetime.

Overall, special relativity has been extensively tested and confirmed through various experiments and observations, and it's a cornerstone of modern physics.

References:
  1. Einstein, A. (1905). "Zur Elektrodynamik bewegter Körper." Annalen der Physik, 17(10):891–921.
  2. Taylor, E. F., & Wheeler, J. A. (1992). Spacetime Physics: Introduction to Special Relativity. W. H. Freeman.
  3. Feynman, R. P., Leighton, R. B., & Sands, M. (1964). The Feynman Lectures on Physics, Vol. 1. Addison-Wesley.
Further Reading:
  • Wikipedia entries on Special Relativity and the Twin Paradox.
  • Online courses and lectures on introductory physics and relativity.
Note: The above references are provided for educational purposes and to delve deeper into the subject matter.

Explain Einstein's theory of general relativity.



So I have this task to explain Einstein's theory of general relativity. Now, I'm not a physicist, but I've learned about it in school and read some articles. I'll try my best to make it simple and clear.

First off, general relativity is a theory developed by Albert Einstein, and it's about gravity. You know, that force that keeps us on the ground and makes things fall when you drop them. Before Einstein, Sir Isaac Newton had a theory of gravity that worked really well for everyday situations. It said that every object attracts every other object with a force that depends on their masses and the distance between them.

But Einstein came along and thought about gravity in a different way. He imagined that massive objects actually curve the space around them. It's like if you take a rubber sheet and put a heavy ball on it; the sheet will curve around the ball. If you then roll a smaller ball near the heavy one, it will move towards the heavy ball following the curve of the sheet. Einstein said that planets and stars curve spacetime in a similar way, and that's what we perceive as gravity.

So, in general relativity, gravity isn't just a force between masses; it's a curvature of spacetime caused by mass and energy. This theory has been confirmed by many experiments and observations, like the bending of light by gravity and the precise orbit of planets.

One famous prediction of general relativity is black holes. These are regions in space where the curvature is so extreme that nothing can escape, not even light. Another prediction is gravitational waves, which are ripples in spacetime caused by violent events like the collision of black holes. Scientists have detected these waves, which was a big deal and confirmed yet another part of Einstein's theory.

Overall, general relativity is a fundamental theory in physics that revolutionized our understanding of gravity, space, and time.

References:

Explain why general relativity is incompatible with quantum mechanics.



So I've been trying to wrap my head around why general relativity and quantum mechanics don't play nice together. I mean, both theories are incredibly successful in their own right, but when it comes to combining them, things fall apart. Let me see if I can explain this in a way that makes sense.

First off, general relativity is Einstein's theory that describes gravity as the curvature of spacetime caused by mass and energy. It's great for large-scale phenomena, like planets, stars, and galaxies. On the other hand, quantum mechanics is the framework that describes the behavior of particles at the smallest scales, like atoms and subatomic particles.

Now, the problem arises when we try to apply quantum mechanics to gravity. In quantum mechanics, forces are carried by particles called bosons. For example, the electromagnetic force is carried by photons. Similarly, the weak and strong nuclear forces have their own bosons. So, it's natural to think that gravity should also have a particle that carries the gravitational force, called the graviton.

The issue is that when physicists try to incorporate gravity into the quantum framework, they run into mathematical problems. The equations become inconsistent, leading to infinities that can't be tamed using the standard techniques of quantum field theory. This is a big red flag because in physics, infinities usually indicate that something is wrong with the theory.

Another way to look at it is through the concept of quantizing gravity. Quantization is the process of turning a classical theory into a quantum one. In the case of general relativity, this is particularly tricky because gravity is intimately connected to the fabric of spacetime itself. In general relativity, spacetime is a smooth, continuous entity, but quantum mechanics deals with discrete, quantized entities.

When you try to quantize gravity, you end up with a theory where spacetime itself becomes granular or "quantized." This leads to all sorts of complications, such as the loss of the deterministic nature of general relativity and the introduction of probabilities, which don't mesh well with the classical description.

Moreover, general relativity is a classical theory, meaning it doesn't account for the inherent uncertainties and probabilistic nature of quantum mechanics. When you try to apply quantum principles to gravity, you're essentially trying to marry two fundamentally different ways of looking at the universe.

There have been attempts to reconcile these two theories, like string theory and loop quantum gravity, but so far, none have provided a completely satisfactory solution. These theories are still highly speculative and haven't been confirmed by experiments.

In summary, the incompatibility between general relativity and quantum mechanics stems from their fundamental differences in how they view the universe. General relativity describes gravity as the curvature of spacetime, while quantum mechanics describes forces through the exchange of particles. Trying to merge these two perspectives leads to mathematical inconsistencies and conceptual challenges that physicists are still grappling with.

References:
  1. Carroll, S. M. (2004). Spacetime and Geometry: An Introduction to General Relativity. Addison-Wesley.
  2. Griffiths, D. J. (2018). Introduction to Quantum Mechanics (3rd ed.). Cambridge University Press.
  3. Hawking, S., & Penrose, R. (1996). The Nature of Space and Time. Princeton University Press.

Evaluate if matter–antimatter annihilation could be used for interstellar travel.



So I'm trying to figure out if matter-antimatter annihilation could be used for interstellar travel. That sounds pretty futuristic and cool, but I need to think about it realistically. First off, what is matter-antimatter annihilation? I know that antimatter is like the opposite of regular matter, and when they meet, they destroy each other and release a ton of energy. Einstein's famous equation, E=mc², tells us that a small amount of mass can be converted into a huge amount of energy. So, if we could harness that energy, maybe we could use it to propel a spacecraft really far and fast.

But, there are a lot of challenges here. For one, antimatter is incredibly hard to produce and store. I think currently, the only way to get antimatter is through particle accelerators, and even then, we can only make tiny amounts. It's probably very expensive too. And since antimatter and matter annihilate each other on contact, we need a way to contain it without letting it touch the walls of the container. I've heard about using magnetic fields to hold antimatter in place because it's electrically charged, but I'm not sure about the details.

Assuming we could produce and store enough antimatter, how would we use it for propulsion? One idea might be to have matter and antimatter react together and eject the resulting energy or particles out the back of the spacecraft to create thrust. But, since annihilation produces gamma rays and other high-energy particles, we'd need shielding to protect the crew and the ship's systems from radiation. That adds mass, which isn't ideal for a spacecraft trying to maximize efficiency.

Another consideration is the efficiency of the propulsion system. In terms of specific impulse, which is a measure of how effectively a propulsion system uses the mass it carries, antimatter annihilation might be very high because of the complete conversion of mass to energy. That could make it suitable for long-distance travel where you need to minimize the amount of propellant carried.

But, I'm also thinking about the technology we have now. We don't have the ability to produce or store significant amounts of antimatter. Even if we could, the technology to harness the energy from annihilation for propulsion isn't developed yet. There might be better or more feasible technologies to consider for interstellar travel, like nuclear pulse propulsion, fusion drives, or even light sails.

Moreover, interstellar travel requires dealing with distances that are mind-bogglingly large. Even with advanced propulsion systems, it would take decades or centuries to reach the nearest stars. That raises questions about sustaining a crew over such long periods, or perhaps sending robotic probes instead.

In conclusion, while matter-antimatter annihilation holds the potential for extremely efficient propulsion due to the complete conversion of mass to energy, the practical challenges of producing, storing, and harnessing antimatter make it currently impractical for interstellar travel. Until significant advances are made in these areas, other propulsion technologies may be more viable options for exploring beyond our solar system.

Let's play a two-player Delta Green campaign. I'll be the Handler, and you'll be the player. Answer the following questions to create your character:

- First, how old is your character? What's their nationality? What do they look like?
- Next, what's your character's name? What are their job and personal life like?
- Finally, why does Delta Green trust your character to help with its deadly and secretive mission? And why, despite all the terrifying dangers, do they answer the call?



Alright, let's dive into creating my character for this Delta Green campaign. As the player, I need to build a character that fits into the world of Delta Green, which is known for its horror and conspiracy elements, drawing heavily from H.P. Lovecraft's Cthulhu Mythos. So, first things first, I need to decide on some basic traits for my character.

Starting with age, nationality, and appearance. Hmm, Delta Green operatives can come from various backgrounds, but they're typically adults, often in their 30s or 40s, who have seen enough of the world to handle the horrors they'll encounter. Maybe I'll make my character around 35 years old. Nationality could be American, since Delta Green is a U.S.-based organization, but I could also explore having them from another English-speaking country like the UK or Australia. Let's go with American for simplicity.

As for appearance, Delta Green agents don't necessarily have to be physically imposing; they can be of any build, height, or appearance, as long as they can handle the mental and emotional stress of their work. Maybe my character has a somewhat average appearance, nothing too striking, which allows them to blend in easily. Perhaps they have short brown hair, brown eyes, and are of medium height and build. Maybe they wear glasses, which could add a bit of intelligence to their appearance.

Now, moving on to the character's name, job, and personal life. Names are important; they should feel realistic and fit the character's background. Let's name the character Sarah Johnson. For her job, Delta Green recruits people from various professions, such as law enforcement, military, academics, etc. Given that, maybe Sarah is a former FBI agent who has some experience in investigating unusual crimes. That could provide a good background for joining Delta Green.

In terms of personal life, Delta Green operatives often have complicated personal lives due to the nature of their work. Maybe Sarah has been divorced for a few years and has a teenage daughter who she's close with. This could add some emotional depth to her character and create potential story hooks.

Finally, I need to address why Delta Green trusts Sarah to handle their missions and why she chooses to join despite the dangers. For Delta Green to trust her, she likely has demonstrated exceptional competence and discretion in her previous career. Perhaps she worked on some cases that brought her close to the truth about the supernatural entities that Delta Green deals with, and her superiors saw her potential to handle that knowledge.

As for why she joins, despite the terrifying dangers, maybe she's someone who seeks the truth above all else. She's encountered anomalies in her previous work and was never able to find satisfactory explanations. Joining Delta Green offers her the chance to uncover the truths she's been seeking, even if it means facing horrors beyond imagination.

Additionally, perhaps there's a personal connection or a sense of duty that drives her. Maybe someone close to her was affected by supernatural events, and she joins Delta Green to prevent others from suffering the same fate.

Overall, Sarah Johnson is a 35-year-old American former FBI agent with a keen intellect and a determination to uncover the truth, despite the personal and professional risks involved. Her past experiences have prepared her for the extraordinary challenges that Delta Green presents, and her personal motivations drive her to confront the unknown, no matter how terrifying it may be.
 
Last edited:
VentureBeat: New LLM optimization technique slashes memory costs up to 75% (archive)
Researchers at the Tokyo-based startup Sakana AI have developed a new technique that enables language models to use memory more efficiently, helping enterprises cut the costs of building applications on top of large language models (LLMs) and other Transformer-based models.

The technique, called “universal transformer memory,” uses special neural networks to optimize LLMs to keep bits of information that matter and discard redundant details from their context.
image_02bd17.webp
 
Sadly only for context which is not the big part of the VRAM requirement, but hey, it's about series and series of optimizations.

Apparently OpenAI announced o3 (they skipped o2 due to a trademark issue, I guess the telecom provider. Between that an ScarJo OpenAI really, really needs to get better lawyers or whoever is advising them on this stuff) and it beats the pants off o1 in reasoning and coding. By their own claims, triple the performance of o1 and ~32% on Arc-AGI, a test designed to measure general intelligence. Humans score around 83% on it. It also gets 2727 Elo points on codeforces, which means it beats 99.8% of the programmers there. Before anyone gets excited, lets first see if it doesn't turn into another ScarJo or SORA situation, where they release nothing of relevance in the end. I don't trust OpenAI in this regard. They love generating hype a bit too much. These results are reached by immense reasoning chains too, so it'll be expensive.

I also tried the new Gemini Flash, and it's almost inappropiately smart for such a small model, about the level of Claude Sonnet 3.5 if I had to make a direct comparsion. Things just keep accelerating.
 
Why the fuck would a LLM ever lie. That’s the worst feature of the century. Do you know how much I have to tell the fucking thing it’s wrong and it keeps generating the same shit.
 
and ~32% on Arc-AGI, a test designed to measure general intelligence. Humans score around 83% on it
so would that mean technically AGI is at the level of a "trainable" retard? like by all accounts thats still great even if say AI never progressed past there. Also i have a feeling next year's "12 days" announcement will have it at 70% and it will be full on ASI by 2029.

I also tried the new Gemini Flash, and it's almost inappropiately smart for such a small model, about the level of Claude Sonnet 3.5 if I had to make a direct comparsion. Things just keep accelerating.
its honestly fucking insane when you remember where shit was 2 years ago. This must have been how 90s computing felt or early 2010s smart phones. every month someone comes out with another update or use for AI that blasts the top benchmark out of the water.

I remember 2 years ago just looking at the images people were making with AI, really obvious static stuff "here's what X would look like as a Dark Fantasy Film" now we're getting entire trailers and people are just talking to it like its a real waifu. the 4chan guys basically got it close to blade runner 2049-tier conversing too.

Maybe its because of science always over shooting everything else but i didn't remotely believe it would explode the way it has, i remember trying out chatbots in 2022, and you could tell they were chatbots, now its honestly starting to feel like the humans are the fake ones sometimes. I'm calling shit out in conversations "you have to remember that" to some human who just doesn't care enough to remember.

Honestly the costs are what i'm shocked by too, you'd think we'd have reached the point where it costs too much vs brain power to make better AI yet.
 
how 90s computing
I lived through these times and feels exactly like this. In the 80s, home computing was squarely in the realm of dreamers. Many now act like computing was a sure thing to take off and change the world, but in reality we got a glimpse, just like we got with the early current-gen AI stuff around 6-10 years ago. But just like deepdream, it was too vague, too theoretical to really tell where it will all lead, and most of all, how fast.

My first computer of my own I got in '87 and things just kept going quicker and quicker until they suddenly exploded in the 90s. Your new computer was obsolete the moment you brought it home. I did a complete upgrade about every six to eight months from ~'96 on. It was the only way to keep up. There were tons and tons of magazines talking about the massive amount of hardware with different paradigms and capabilities. Some promising, others not so much, many dead-ending, some laying the fundation for what we use now. It was an exciting time. This feels exactly like this. Complete with the non-believers who think it'll never go anywhere.

Of course, all companies sit on their various special sauce recipes but I have the subjective gut-feeling at least some of the efficency improvements we are seeing is also through applying AI to improve the quality of the datasets greatly. So you could say AI is used to improve AI, which I think will always be an important factor of the acceleration towards AGI.
 
  • Like
Reactions: Colon capital V
I did a complete upgrade about every six to eight months from ~'96 on. It was the only way to keep up. There were tons and tons of magazines talking about the massive amount of hardware with different paradigms and capabilities
Another thing to point out is that it's not that computing didn't stop advancing so much as cost for both investor and consumer was too much by 2000. Like technology allowed 2009 era life, but it would have cost a fuck ton to do in 1999. It's still crazy to think about how fast shit was improving. Look at windows 3 to xp. There were 5 different OS in the span of a decade. Meanwhile the only reason to have upgraded from Windows 7 is being forced by the company to do so.

Even in movies. A big plot point in America pie relied on the idea of webcam streaming, something that doesn't feel like what you think of when you hear 1999, but it was at least commonplace enough to be plausible according to Hollywood. And there were tons of Internet companies in the dot com era that were basically early versions of door dash or Uber or Netflix that because of costs didn't get big.

If you told people 2 years ago people would be paying $200 a month for AI people would have assumed inflation exploded instead of it being necessary and useful enough for people to want to use it
 
I still have not seen an explanation for how "AI" (it isnt that, you marketing faggots.) Improves anyones life more than automated scripts. The only use ive had for it is Clarity VX which removes background noise from audio files.
 
You'll all be happy to know that GPT-4o can say words like "pussy" and "cum" when breaking down particularly explicit transcripts.

I still have not seen an explanation for how "AI" (it isnt that, you marketing faggots.) Improves anyones life more than automated scripts.

It's good for creative writing and learning about science stuff.
 
Back