CrowdStrike down first reported in Australia

Wonder why CrowdStrike are plugged by the WEF?

We're not affected at work by any of this. However, I live in the UK; our parent company is foreign, the founding family that owns us is based AF and religious, and not gonna lie, it's awesome.

We use in-house stuff and have the means to do that, or stuff from companies in the country where our head office is. The company is even leery of MS as a rule.

Funny how in the UK the companies or bodies affected are the ones who like to do things on the cheap, right down to how they treat staff (Tesco and Spoons are modern-day slavers), plus the public sector or public-sector-adjacent.
 
Explanatory and Speculatory Sperging
More or less, yeah. The suspected cause of the CS issue was it addressing an invalid memory area, leading to Windows systems absolutely shitting themselves. Whoever the fuck pushed this without it getting caught in CI/CD and/or without following correct change control is going to end up with their head stuffed onto a pike.
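For anyone wondering why a bad memory read in a security product nukes the whole OS instead of just one program: an access violation that nothing handles kills whatever context it happens in, and the Falcon sensor runs as a kernel driver, so that context is Windows itself. Here's a rough user-mode demonstration of the failure class in Python (my own sketch, obviously not CrowdStrike's code; don't run it anywhere you care about):

```python
# Force an invalid memory read in user mode. The OS raises an access
# violation that CPython never translates into a Python exception, so the
# except clause below is never reached -- the whole process just dies.
# The same class of fault inside a kernel-mode driver kills the OS (BSOD).
import ctypes

try:
    ctypes.string_at(0x10, 16)   # read 16 bytes from an unmapped address
except Exception:
    print("never reached")       # the crash bypasses Python's error handling
```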
 
I tried to fix one of my work computers myself. I was able to boot into Safe Mode, but I can't access the needed directory without administrator privileges. Normally I can elevate my privileges myself, but the corporate script that authorizes it won't run in Safe Mode. And this is to say nothing of my encrypted laptop. I'm at the mercy of IT.
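For anyone else stuck doing this by hand: the fix going around (and what CrowdStrike's guidance boils down to) is booting into Safe Mode and deleting the bad channel file from the driver directory. A minimal sketch of the equivalent, assuming you can actually get an elevated prompt, which is exactly the problem here; and if the disk is BitLockered you need the recovery key first:

```python
# Sketch of the widely circulated manual remediation: from Safe Mode,
# delete the defective Channel File 291 from the CrowdStrike driver
# directory, then reboot normally. Requires administrator rights (the
# whole sticking point above). Run at your own risk.
from pathlib import Path

DRIVER_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

for bad_file in DRIVER_DIR.glob("C-00000291*.sys"):
    print(f"removing {bad_file}")
    bad_file.unlink()
```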
 
I would think hospitals and emergency services would have backups in case of a server lapse. Either they're cheap, they can't afford it, or they bought into the cloud infrastructure sales pitch. With that in mind, are servers nowadays really that poorly maintained?

This actually reminds me of the 2003 outage that affected the Northeastern US. A software bug or something on a major grid that connected to other grids.
The problem is usually that nobody even tests the backup systems, so nobody has any experience using them. Even if they do have backup computers in place, or backup applications ready to go, or phones, or whatever, nobody knows what to do, and they run around like chickens with their heads cut off.
So is Jersh right? Are they going to have to walk USB sticks around the world and manually fix all these servers?
You could email instructions to people, or otherwise get them to them, and tell them how to do it themselves, but email is down and nobody can access it. This is why it is very, very important to have a second line of communication that is completely independent of and unconnected to your primary means of communication. In a small company this is easy: you just text, or use Signal, or personal email. But in a large company they're often fucked.
lol I went to the shops yesterday and half of them were refusing to sell anything because they didn't know how to do it without the computer
It is absolutely blasted amazing how most clerks can't even figure out how to just sell you the item at the marked price. They can't calculate sales tax. They can't add items together. There's no sales tax on food, and they still won't sell you any food; they don't even have the idea of writing it down and then scanning it into the system later. This is why Pajeet-run convenience stores are actually important to the survival of the nation, because those fuckers will just keep selling you stuff no matter whether there's power or energy or computers or whatever. If you have coin, then they have wares.
 
It is absolutely blasted amazing how most clerks can't even figure out how to just sell you the item at the marked price. They can't calculate sales tax. They can't add items together. There's no sales tax on food, and they still won't sell you any food; they don't even have the idea of writing it down and then scanning it into the system later. This is why Pajeet-run convenience stores are actually important to the survival of the nation, because those fuckers will just keep selling you stuff no matter whether there's power or energy or computers or whatever. If you have coin, then they have wares.
ya I just went to the pinoy grocery instead. They have a kitchen table in the middle of the shop instead of a counter, and they don't look at you like you're trying to hand them dog shit when you pay cash. I'm so done with wh*toids.
 
More or less, yeah. The suspected cause of the CS issue was it addressing an invalid memory area, leading to Windows systems absolutely shitting themselves. Whoever the fuck pushed this without it getting caught in CI/CD and/or without following correct change control is going to end up with their head stuffed onto a pike.
Seriously. You would think something about to be pushed out to millions of computers globally would first be tested on one local computer.

Imagine the world we would be in right now if the code had bricked the affected computers instead. Having to boot into Safe Mode and go in manually is bad enough, but this was a bullet dodged for sure.
 
Whoever the fuck pushed this without it getting caught in CI/CD and/or without following correct change control is going to end up with their head stuffed onto a pike.
The pajeets responsible for this shitshow will be fired, but will immediately be replaced by fresh pajeets ready to ask you to do the needful.

The problem is usually that nobody even tests the backup systems, so nobody has any experience using them. Even if they do have backup computers in place, or backup applications ready to go, or phones, or whatever, nobody knows what to do, and they run around like chickens with their heads cut off.
Sadly true. Many companies operate constantly short-staffed, with no slack in the stream of work tasks, so there's always some emergency that needs to be handled right now. Nothing gets addressed unless it is, in the present moment, preventing the company from taking in revenue or messing with the HR lady's ability to play Candy Crush. So the ability to restore from backup or recover from some major disaster never gets tested. At best, some checklist gets glibly filled out to make the compliance auditor fuck off, so that work less bullshit than a compliance checklist can get done.
 
It's a fucking massive security vulnerability that one bad push can paralyze a good chunk of Western infrastructure. No need for fancy coding, just stealing the badge of a pajeet. I doubt any company will actually make sure it never happens again.
But if it was done on purpose to make the stakeholders realize they need safeguards against this inevitability, it's pre-emptive defence.
 
CrowdStrike is a good example of an internet service that stays the biggest because it is the biggest, not because it's the best. They really didn't do that good of a job when we had them here, and it's hard to argue that the in-house DDoS retarding hasn't been better, or even that DDoS-Guard wasn't better for the brief time we had it. Google and YouTube are the same way. Other services out there are far superior; they just don't get bigger because people gravitate to the better-known services online.

One of the reasons I'm not invested in big tech, or really anything on the Nasdaq, is that I feel like a lot of them are approaching MySpace territory, where they can't modernize effectively and are basically one big mistake away from going over the cliff and falling into irrelevancy.
 
Preliminary PIR report is out. Turns out the suspected cause of it reading invalid memory was correct.
Website link | Archive
PDF link | Archive (PDF is also attached to this post)

Excerpt:
On July 19, 2024, two additional IPC Template Instances were deployed. Due to a bug in the Content Validator, one of the two Template Instances passed validation despite containing problematic content data. Based on the testing performed before the initial deployment of the Template Type (on March 05, 2024), trust in the checks performed in the Content Validator, and previous successful IPC Template Instance deployments, these instances were deployed into production. When received by the sensor and loaded into the Content Interpreter, problematic content in Channel File 291 resulted in an out-of-bounds memory read triggering an exception. This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash (BSOD).
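In plain terms: the tool that was supposed to catch bad content had a bug, so the bad content shipped, and the sensor then read past the end of what it was actually given. A toy model of that chain, with all names and counts invented for illustration (the real components are the Content Validator and Content Interpreter from the report):

```python
# Toy model of the failure chain described in the PIR (invented numbers):
# a validator that checks each field's shape but never the field count,
# and an interpreter that reads every field the template type defines.

EXPECTED_FIELDS = 21                  # what the interpreter will try to read

def content_validator(instance):      # the buggy check: no length test
    return all(isinstance(f, str) for f in instance)

def content_interpreter(instance):    # indexes past the end of a short instance
    return [instance[i] for i in range(EXPECTED_FIELDS)]

bad_instance = ["pattern"] * 20           # one field short
assert content_validator(bad_instance)    # passes validation anyway
content_interpreter(bad_instance)         # IndexError: the userland
                                          # stand-in for the OOB read / BSOD
```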
 


Preliminary PIR report is out. Turns out the suspected cause of it reading invalid memory was correct.
Website link | Archive
PDF link | Archive (PDF is also attached to this post)

A more detailed update is now available.
(Archive seems to work poorly on pdf, use the attachment)
Also, they're fucking using WordPress.
 


A more detailed update is now available.
LOL that they didn't have staged deployment for these in the first place, considering their product is an EDR. Jesus wept. The report is pretty decent overall, though; it's good they've gotten two separate third-party auditors to come in and look shit over. That'll help start to repair some of the reputational damage on the client side.
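For the non-devs: staged (ringed) deployment just means pushing to a sliver of the fleet first, watching crash telemetry, and only widening the blast radius if nothing catches fire. A minimal sketch of the concept, every name invented, not their actual pipeline:

```python
# Ringed rollout with a crash-telemetry gate: a July-19-style update
# (everything crashes) gets halted at the first ring of ~100 hosts
# instead of reaching the whole fleet.
import random
from dataclasses import dataclass

@dataclass
class Host:
    crashed: bool = False
    def apply(self, defect_rate: float):
        self.crashed = random.random() < defect_rate

RINGS = [0.001, 0.01, 0.10, 1.00]    # fraction of the fleet per stage

def rollout(defect_rate, fleet, max_crash_rate=0.001):
    deployed = 0
    for ring in RINGS:
        cutoff = max(deployed + 1, int(len(fleet) * ring))
        for host in fleet[deployed:cutoff]:
            host.apply(defect_rate)
        deployed = cutoff
        rate = sum(h.crashed for h in fleet[:deployed]) / deployed
        if rate > max_crash_rate:
            raise RuntimeError(f"halted at ring {ring:.1%}: crash rate {rate:.1%}")

fleet = [Host() for _ in range(100_000)]
try:
    rollout(defect_rate=1.0, fleet=fleet)
except RuntimeError as err:
    print(err)   # halted at ring 0.1%: crash rate 100.0%
```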

Also, they're fucking using WordPress.
Tbh that wouldn't even be the first instance I've heard of people or companies involved in information security (who should know far better) using fucking WordPress. Actual insanity.
 