The Linux Thread - The Autist's OS of Choice

Is this Linux? For reference, it's a video from a T-90 tank.

Sure looks like it.

I see "... done.", which is a common Linux boot trope; "[ OK ]", which is a systemd thing, I think; and the timestamps like "[0.001234]" also look like proper Linux kernel messages. I also see "usb X-Y:", which is another Linux kernel boot trope. It might be something that's just trying to look like Linux, but I don't think that's as likely.

Edit: Yeah, this is 100% Linux. ledtrig-cpu spotted. https://github.com/torvalds/linux/blob/master/drivers/leds/trigger/ledtrig-cpu.c

 
Is this Linux? For reference, it's a video from a T-90 tank.

Looks like it. At the beginning it claims to have nine EISA slots, which is kind of interesting because that's early-90s tech and the computers in this thing would have been updated many times since then. The boot log also looks like SysV init, which is also rather obsolete; modern distros use systemd. If it isn't Linux, it's a BSD. BSD is very similar to Linux but has a different lineage.
 
Used to be true; now, IMO, not anymore. Anything that matters runs just fine in Wine/Proton. The few things that don't are probably not worth the effort, again IMO.
Except anything Adobe-related, which, in that case, might be worth the effort.
 
Hypothetically, if someone had a big server database on an M.2 NVMe drive, what would be the easiest way to have a near-live backup, so if that drive melts, another one can get up and running quickly with little data loss? Assuming you found out the motherboard had a spare NVMe slot you weren't using?
 
Hypothetically, if someone had a big server database on an M.2 NVMe drive, what would be the easiest way to have a near-live backup, so if that drive melts, another one can get up and running quickly with little data loss? Assuming you found out the motherboard had a spare NVMe slot you weren't using?
Optane’s pretty good for this stuff. High IOPS, low latency, very reliable. I’d use that rather than M.2 NAND. If I had to use NAND, I’d get some refurb U.2 drives rather than M.2 sticks, again for reliability’s sake. Stick them in RAID1 to get redundancy and performance.
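The RAID1 part of that can be done in software with mdadm if there's no hardware RAID in the picture. A minimal sketch; the device names are hypothetical examples, check lsblk for your actual ones:

```shell
# Create a two-way mirror across the two NVMe drives
# (device names are examples; verify with lsblk first)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# Watch the initial sync progress
cat /proc/mdstat

# Persist the array definition so it assembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

After that, /dev/md0 gets a filesystem and mounts like any other block device, and either drive can die without taking the data with it.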
 
Hypothetically, if someone had a big server database on an M.2 NVMe drive, what would be the easiest way to have a near-live backup, so if that drive melts, another one can get up and running quickly with little data loss? Assuming you found out the motherboard had a spare NVMe slot you weren't using?
You've asked one of those questions where you add so much information that you preclude the proper answer. You're clearly asking about the recent Farms downtime. The way you get database resiliency (simple version) is to have a master database and one or more slaves. Then if the master fails, you do a live swap: one of the slaves becomes the new master, and you take the former master offline while you fix it.

You're focused on the file system. DBs have very, very high volumes of writes to the file system, so it needs to be maximised for performance. And there are lots of things that can go wrong with a server beyond the storage. So resiliency is first achieved at the level of server redundancy.
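For a rough idea of the wiring, this is what pointing a MySQL slave at a master looks like; the host, user, and password here are made-up placeholders, and MASTER_AUTO_POSITION=1 assumes GTID replication is enabled on both servers:

```shell
# On the slave: tell it where the master is and start replicating.
# Host/user/password are hypothetical placeholders.
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='10.0.0.2',
    MASTER_USER='repl',
    MASTER_PASSWORD='example-password',
    MASTER_AUTO_POSITION=1;
  START SLAVE;"

# Check replication health (look at Slave_IO_Running / Slave_SQL_Running)
mysql -u root -p -e "SHOW SLAVE STATUS\G"
```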

Null is obviously constrained in what he can do and has to improvise a lot. Like, he put some of the site on the boot partition or something. A lot of the time he's flying by the seat of his pants.
 
Well, hopefully he will add some redundancy once he gets his hands on that server. I'd recommend getting M.2-to-U.2 adapters to run beefier high-endurance enterprise drives in a mirror, since he discovered there's an extra slot.
 
As was mentioned, sync replication is the best solution if you have to be 100% up to date, but there's a performance hit from having to wait for the write to hit the replica. For shit like a shit forum, async replication (log shipping) gets you to 99.9%. Sure, you'll lose any transactions that didn't make it into the current log and get shipped, but that's a fairly minimal loss in most cases if you're not a financial company. But it absolutely has to be a second database/server. It can be a warm standby importing logs in real time, or a cold standby requiring a switchover: import all current logs and start up.
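In MySQL terms, the log-shipping side can be as simple as replaying shipped binlogs on the standby with mysqlbinlog; the binlog file names below are examples:

```shell
# On the warm standby: apply binlogs shipped over from the primary.
# File names are examples; in practice these arrive via rsync/scp/cron.
mysqlbinlog binlog.000042 binlog.000043 | mysql -u root -p
```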

I'm going on the assumption that the corruption was not something that would get stuck in a log and replicated. Replicated corruption is bad(tm). Although if you keep the logs and the base backup, you can simply play forward and stop before the troublesome logs, with minimal data loss.
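That "play forward and stop" is exactly what mysqlbinlog's stop options are for. A sketch, restoring the base backup first and then replaying up to just before the corruption; the datetime and position below are made-up examples:

```shell
# After restoring the base backup, replay binlogs only up to just
# before the corruption hit (datetime is a hypothetical example)
mysqlbinlog --stop-datetime="2024-11-02 03:10:00" binlog.000042 | mysql -u root -p

# Or stop at an exact byte position in the log instead:
mysqlbinlog --stop-position=4721 binlog.000042 | mysql -u root -p
```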

It also improves things if you can take a binary backup rather than a raw DB dump. This can be an LVM snapshot or some other mechanism: "FLUSH TABLES WITH READ LOCK", backup/snapshot the binary files, unlock, then copy the backup to storage.
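One wrinkle when scripting this: FLUSH TABLES WITH READ LOCK only holds while the client session that issued it stays open, so the snapshot has to happen inside that session. A sketch using the mysql client's `system` command; the volume group and LV names are hypothetical:

```shell
# One session: take the global read lock, snapshot the data volume
# while the lock is still held, then release it. The lock drops the
# moment the session ends, so the snapshot must happen in-session.
mysql -u root -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 10G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Then mount the snapshot and copy it off-box at leisure
mount /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ backuphost:/backups/db/
```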
 

"I love Linux as a technology, I hate what is accompanying it"

"Watch the video to the end, because I am tired of debating with fucking idiots"

"Please don't comment, you make yourself look foolish"

I swear this dude wants to be trolled.
I just don't listen to YouTubers' opinions in general at this point. Nothing personal, but I feel they aren't interesting. He does look like a super sperg, though.
 
I just don't listen to YouTubers' opinions in general at this point. Nothing personal, mind, but I just feel that they aren't interesting. He does look like a super sperg, though.
The only YouTuber opinion I ever found interesting was one about drama in the fountain pen ink community.
 
Hypothetically, if someone had a big server database on an M.2 NVMe drive, what would be the easiest way to have a near-live backup, so if that drive melts, another one can get up and running quickly with little data loss? Assuming you found out the motherboard had a spare NVMe slot you weren't using?

Optane’s pretty good for this stuff. High IOPS, low latency, very reliable. I’d use that rather than M.2 NAND. If I had to use NAND, I’d get some refurb U.2 drives rather than M.2 sticks, again for reliability’s sake. Stick them in RAID1 to get redundancy and performance.

I've had a couple of times where I've had the freedom to qualify hardware for database performance (including the go-ahead to do weird esoteric shit, even though realistically, at the end of the day, everyone usually just buys Dell or Lenovo, or sticks with a well-supported Supermicro config), and I've done some testing at home. M.2 was exciting when NVMe was getting big around 2016, but it's honestly a huge pile of shit and I don't understand why anyone would ever use it outside of desktops or laptops. You've got basically asshole flash on them most of the time, almost none of them have PLP (power-loss protection), and they all massively struggle with cooling when you actually use the drive (especially with the way cooling works in a server chassis). You might think, "I'll work around it, it's amazing they can do 500K or 1M IOPS in such a cheap package," and yeah, it might do that for like 3 seconds while it's hitting what's likely SLC cache, but they all quickly degrade to really pedestrian IOPS numbers when you use them the way a database would.

The smart thing (IMO) is to never use M.2 for anything you give a shit about on a server, and to either stick with SAS 12G or U.2 if performance is important and just eat the fact that the power draw on them is a little higher, or use enterprise SATA SSDs where you can get away with lower performance. Optane was cash money and kicked serious ass; it's sort of a shame Intel wound that down, as it legitimately did have absurd performance (and I've seen them used really successfully for cache on high-performance storage setups).

Also, if Null is using stock Oracle MySQL, I think he's leaving performance on the table (and regular-ass old MySQL is kind of retarded). I managed a few LARGE MySQL databases (all of which were 5TB+, and one over 15TB) a while back (the MySQL 5.6/5.7 era), which were getting absolutely flogged day in and day out (we're talking tens of thousands of QPS around the clock, working the piss out of the Dell R930s they ran on), and Percona's MySQL variant with XtraDB had a significant performance advantage over stock InnoDB. I'm not sure if the gap has closed in the last half-decade, but Percona's variant was always across-the-board better. The other huge advantage was xtrabackup, which let us do online backups for these databases (which previously, if you wanted not-shit performance, required us to stop a read replica and basically rsync the database files to an external filer).
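For reference, an xtrabackup online backup is roughly a two-step affair; the target directory here is just an example path:

```shell
# Take a hot backup while the server keeps serving traffic
xtrabackup --backup --target-dir=/backups/full

# Make the backup consistent (apply the copied redo log) so it
# can be restored directly afterwards
xtrabackup --prepare --target-dir=/backups/full
```

Restoring is then basically copying the prepared files back into the datadir with the server stopped.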
 
The smart thing (IMO) is to never use M.2 for anything you give a shit about on a server, and to either stick with SAS 12G or U.2 if performance is important and just eat the fact that the power draw on them is a little higher, or use enterprise SATA SSDs where you can get away with lower performance.
Depends on your budget and availability.
The server I've got (a 45Drives AV15) has no U.2 capability since it predates the adoption of the standard. (When I bought it, at least; the latest revision might have it, I don't know.)
SAS is possible, but you need to find a compatible drive, which can be a problem if you're unlucky. And with U.2 taking over, SAS drives will stop being made at some point, and prices will skyrocket when they do.
SAS can use SATA drives, though, just without the benefits of native SAS drives.
 
Depends on your budget and availability.
The server I've got (a 45Drives AV15) has no U.2 capability since it predates the adoption of the standard. (When I bought it, at least; the latest revision might have it, I don't know.)
SAS is possible, but you need to find a compatible drive, which can be a problem if you're unlucky. And with U.2 taking over, SAS drives will stop being made at some point, and prices will skyrocket when they do.
SAS can use SATA drives, though, just without the benefits of native SAS drives.
I doubt Null is using even a fraction of his PCIe lanes. He's got an EPYC 7002/7003 platform; that's a lot of lanes. He should be able to fit a U.2/U.3 HBA (Tri-Mode LSI, etc.). The issue there would probably be getting the right cables to actually plug stuff in, and I'm not sure how confident Null is with getting weird with his hardware. If his server isn't already using one, he probably doesn't have an appropriate backplane, for example. But I also get the impression he might not even have a backplane, and this is just a loose motherboard in a desktop tower case? Budget also enters into it; I'm talking about backplanes which on their own are more expensive than his whole CPU+mainboard.
 