"Mad at the Internet" - a/k/a My Psychotherapy Sessions

It’s an attainable dream for many, and if you can do it and maintain the divide between bijou tiny house and trailer park life, good luck to you.
There is probably more dignity in living in a trailer park than living in a Ford Transit Van or a shoebox in the middle of nowhere in Idaho that barely has enough room to lie down in. At least the hicks in the trailer park aren't pretending like they're being cool and trendy while living in what amounts to abject poverty for Instagram likes.
 
I will not be doing the meme!
Yes, I don't know if these people love Apple more than Google.
People can tell it's a monopoly, and Epic won't be fined.

Three years after Fortnite-maker Epic Games sued Apple and Google for allegedly running illegal app store monopolies, Epic has a win. The jury in Epic v. Google has just delivered its verdict — and it found that Google turned its Google Play app store and Google Play Billing service into an illegal monopoly.
After just a few hours of deliberation, the jury unanimously answered yes to every question put before them — that Google has monopoly power in the Android app distribution markets and in-app billing services markets, that Google did anticompetitive things in those markets, and that Epic was injured by that behavior. They decided Google has an illegal tie between its Google Play app store and its Google Play Billing payment services, too, and that its distribution agreement ...
Mind you, we don’t know what Epic has won quite yet — that’s up to Judge James Donato, who’ll decide what the appropriate remedies might be. Epic never sued for monetary damages; it wants the court to tell Google that every app developer has total freedom to introduce its own app stores and its own billing systems on Android, and we don’t yet know how or even whether the judge might grant those wishes. Both parties will meet with Judge Donato in the second week of January to discuss potential remedies.

Judge Donato has already stated that he will not grant Epic’s additional request for an anti-circumvention provision “just to be sure Google can’t reintroduce the same problems through some alternative creative solution,” as Epic lead attorney Gary Bornstein put it on November 28th.

“We don’t do don’t-break-the-law injunctions... if you have a problem, you can come back,” Donato said. He also said he did not intend to decide what percentage fee Google should charge for its products.

Although Epic didn’t sue for damages, Epic Games CEO Tim Sweeney suggested Epic stood to make hundreds of millions or even billions of dollars if it doesn’t have to pay Google’s fee.
In other gaming news, E3 is DEAD DEAD.
Pretty much Nintendo killed it.

“Any one of these major companies can create an individual showcase … [and] also partner with other industry events to showcase the breadth of games,” he said. “That’s exciting for our industry, and it means it’s an opportunity for them to explore how to engage new audiences in different ways.”
 
As we approach the end of the year, will Null's prediction about Ralph dying come true, or will Jersh take another L from the Ralphamale?
:diddler: winnin', losin', awl of these are meer gossamer in the grander schem of the universe. Mah sawl is eternal, ay am enlightened, nothing is important aside from the spirit and the next life
 
MATI By The Numbers

I have finished the first phase of my transcription project, which was just to get everything initially processed. Next is finding a place to host it, rectifying the names, getting all the files linked to a video source (that I'm not hosting), marking which speaker is Josh, and building a search engine. But here are some fun numbers:

286 episodes transcribed (note: there may be a couple of duplicates; needs further checking).
568 hours of content.

5.2 million words.

The entire Harry Potter series is only about 1 million words. Obviously some percentage of these words were not Josh, since it also transcribed videos and guests.

27 MB of text

About 1 GB of raw JSON, which contains word-by-word speaker identification and timing:
Code:
    {
      "start": 286.867,
      "end": 288.267,
      "text": "Uh, but whatever.",
      "words": [
        {
          "word": "Uh,",
          "start": 286.867,
          "end": 286.907,
          "score": 0,
          "speaker": "SPEAKER_13"
        },
        {
          "word": "but",
          "start": 287.807,
          "end": 287.907,
          "score": 0.54,
          "speaker": "SPEAKER_13"
        },
        {
          "word": "whatever.",
          "start": 287.947,
          "end": 288.267,
          "score": 0.939,
          "speaker": "SPEAKER_13"
        }
      ],
      "speaker": "SPEAKER_13"
    },
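
Once it's shared, something like this will walk one file. A minimal sketch: the filename "mati_episode.json" is a stand-in, and the top-level layout is a guess from the sample above (either a bare list of segments, or a dict with a "segments" key in the usual WhisperX style).
Code:
    import json

    # Load one episode's transcript. Placeholder filename; point it at
    # whatever the dump ends up being named.
    with open("mati_episode.json") as f:
        data = json.load(f)

    # Handle either a bare list of segments or a {"segments": [...]} dict.
    segments = data["segments"] if isinstance(data, dict) else data

    # Print each segment with its timing and diarized speaker tag.
    for seg in segments:
        speaker = seg.get("speaker", "UNKNOWN")
        print(f'[{seg["start"]:8.3f} - {seg["end"]:8.3f}] {speaker}: {seg["text"].strip()}')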

The transcribing engine I used seemed to run at about 15x real time, so about 35 hours of processing on a single GPU.

Edit: More fun facts.
N-words: 161 + plural: 23
N-per-hour: 0.32
nigga: 414 + plural: 152
troon: 134 + plural: 58
tranny: 1009 + plural: 21

Top 5 words of 5 letters or more (the real top 5 are "the, I, and, to, a", which are boring):
26919 because
24581 thats
23411 people
23307 about
21956 fucking

I attached the word list sorted by frequency if anyone wants to do more statistical analysis.
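
If anyone would rather regenerate the counts than use the attachment, the tally is basically a lowercase-and-strip-punctuation pass over the segment text. A rough sketch, not exactly what I ran (the filename is a placeholder, and the tokenization is an approximation):
Code:
    import json
    import re
    from collections import Counter

    counts = Counter()

    # Tally word frequencies across however many episode files you have.
    # "mati_episode.json" is a placeholder path.
    for path in ["mati_episode.json"]:
        with open(path) as f:
            data = json.load(f)
        segments = data["segments"] if isinstance(data, dict) else data
        for seg in segments:
            # Dropping apostrophes before tokenizing is one plausible way
            # to end up with tokens like "thats" in the top-5 list above.
            text = seg["text"].lower().replace("'", "")
            counts.update(re.findall(r"[a-z]+", text))

    # Top 5 words of 5+ letters, mirroring the list above.
    top5 = [(w, n) for w, n in counts.most_common() if len(w) >= 5][:5]
    for word, n in top5:
        print(n, word)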

Edit 2: There may be a couple of duplicates in the file list; I need to sort this out.
 


5.2 million words.
Is it enough words for our Null AI bot?
 
If the text database is uploaded, I will personally create a Null LLM. I've been wanting a project to learn fine-tuning with.
Give me a couple of days and I'll share the raw JSON privately with anyone who wants it. I need to get the filenames fixed first: I have three different naming schemes, a bunch with quotes and parentheses and other stuff I don't like, and a bunch with just a date as a name, some numeric and some spelled out. And there may be a couple of duplicates from different sources.

You'll still need to extract just the Josh parts. The speaker tagging doesn't persist between files, so in one he's SPEAKER_17 and in another he's SPEAKER_22. It also sometimes gives him two or three different speaker tags within the same file. A rough starting point is sketched below: per file, assume the tag with the most total speaking time is Josh, then pull that tag's segments. It's only a heuristic (he can hold multiple tags in one file, as noted), so spot-check the output.
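Code:
    import json
    from collections import defaultdict

    # Heuristic: in each file, guess that the speaker tag with the most
    # total speaking time is Josh. Tags don't persist across files and he
    # may hold 2-3 tags in one file, so the result needs manual checking.
    with open("mati_episode.json") as f:  # placeholder path
        data = json.load(f)
    segments = data["segments"] if isinstance(data, dict) else data

    totals = defaultdict(float)
    for seg in segments:
        totals[seg.get("speaker", "UNKNOWN")] += seg["end"] - seg["start"]

    josh_tag = max(totals, key=totals.get)

    # Collect just that speaker's text, e.g. as raw material for fine-tuning.
    josh_lines = [seg["text"].strip() for seg in segments
                  if seg.get("speaker") == josh_tag]
    print(josh_tag, f"{totals[josh_tag]:.0f}s of speech,", len(josh_lines), "segments")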
 
N-words: 161 + plural: 23
nigga: 414 + plural: 152
troon: 134 + plural: 58
tranny: 1009 + plural: 21
This is really mean lmao
 
Isn't it great that a giant domain registrar like Epik has the power and influence to effectively deplatform a website that isn't illegal and didn't break their terms of service? Meanwhile Discord, one of the biggest chat apps in the world, regularly traffics CSAM and is fully operational 24/7. What a fucking world we live in.
 