The Linux Thread - The Autist's OS of Choice

uhhhhh how do i create an account? and for some reason my old one doesn't work (plus it's a Ring 2 identity which means people here can figure out my real name based on other info I've revealed on the farms)
it's just an identity.kde.org account, i signed up using my proton email. it's confusing since you have to go to identity.kde.org and sign up there, THEN log in to their phabricator instance using the account you set up.
this post will make them immediately kill x11 support
glowie detected. content ignored.
 
Wayland still doesn't have the ability to run a color-calibrated/managed stack. I can't believe how feature-incomplete it is.

It's like btrfs, IMO. You still can't really scrub/fsck it properly, but ship it, make it the default, it's fiiiiine.
 
I'm running a Lenovo Legion 5 with a 4060 graphics processor and even though this is the 3rd computer I've tried I STILL can't get wayland to work properly with Nvidia. The Wayland push is fucking stupid.
 
The *BSDs have their own X stuff going on, last I checked. If somebody told me it's either Wayland or nothing I'd move over there. It'd suck to fall back to the 00s in terms of software compatibility, but going back to VMs for the vidya I play once a year would not be so bad I guess, especially with the computers we have now. Their driver support also has apparently improved quite substantially lately. Many tools I use already come from there as is.
 
I'm running a Lenovo Legion 5 with a 4060 graphics processor and even though this is the 3rd computer I've tried I STILL can't get wayland to work properly with Nvidia. The Wayland push is fucking stupid.
They'll consider 'not working with nvidia's blob' as a feature not a bug, though.
 
Star Trek covered this one a few times with the holodeck. Everyone thinks of Data, the emotionless mathematical genius who can calculate at tremendous speed, when they think of AI in Trek, but they never remember the time when the holodeck was given a command to "make an opponent capable of defeating Data" and just decided to spit out a fully self-aware holographic lifeform. Or every time the holodeck generated lifelike responses to arbitrary, unpredictable events and commands, or the way it can just gin up almost everything you're asking for with a couple of sentences of instruction. Star Trek's holodeck really does seem to be a prediction of AI prompting, including all the messy "do it again but better this time" business.
That does match with how AGI most likely will be first created. Not one thinking entity like in a human, but a dozen unthinking ones that can call on each other to handle an issue they’re individually not suitable for.

User has prompted me for a maths question, feeding it into the maths-model.
User has prompted me to draw a poem, feeding it into the LLM. LLM has created a description of how the poem should look and feel, feeding it into the art generator model. Art generator has returned a picture.
User has prompted me to make a D&D campaign. LLM creates a setting. LLM asks user for actions. LLM asks maths model to roll some dice and work out what the rules say happens. LLM returns a GM description of whatever happened.

Using just GPT, a set of sensors, APIs, and DALL-E, we could plausibly create a robot that can move and talk autonomously already today. Prompt something like “help the user”, have the sensors and a reversed DALL-E (image-to-text) tell the LLM what’s happening over the API, and the robot should be able to notice a person walking into range, ask the person if it can be of service, and bring over a can of soda or something by recognising a fridge and using its hand to open the fridge door, grab a soda, close the fridge door, and then locate the person by “remembering” where they were last seen. It would be extremely slow, but that’s going to improve with time.

The Star Trek ship is a voice recognition system feeding into a chat bot that then tells the other systems what the user asked them to do. Same general idea.
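
To make that concrete, here's a toy sketch of the dispatcher idea; every name in it (math_model, art_model, llm, the keyword routing) is a hypothetical stand-in, a real system would call actual model APIs and probably let the LLM itself decide the routing:

```python
# Minimal sketch of the "several narrow models calling each other" idea.
# All of these functions are hypothetical stand-ins for real model APIs.

def math_model(prompt: str) -> str:
    # Stand-in for a dedicated calculator / dice-rolling component.
    return f"[math result for: {prompt}]"

def art_model(description: str) -> str:
    # Stand-in for an image generator fed a text description.
    return f"[image rendered from: {description}]"

def llm(prompt: str) -> str:
    # Stand-in for the general language model that writes the prose.
    return f"[LLM text for: {prompt}]"

def route(user_prompt: str) -> str:
    """Crude keyword router: pick which narrow model handles the request."""
    lowered = user_prompt.lower()
    if any(word in lowered for word in ("calculate", "roll", "dice")):
        return math_model(user_prompt)
    if any(word in lowered for word in ("draw", "picture", "paint")):
        # LLM first writes a richer description, then the art model renders it.
        return art_model(llm(f"Describe an image for: {user_prompt}"))
    return llm(user_prompt)

if __name__ == "__main__":
    for prompt in ("roll 2d6 and add 3",
                   "draw a poem about autumn",
                   "run a D&D scene in a tavern"):
        print(prompt, "->", route(prompt))
```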
 
That does match with how AGI most likely will be first created. Not one thinking entity like in a human, but a dozen unthinking ones that can call on each other to handle an issue they’re individually not suitable for.
um...
1717895297878.png
 
We are already a step further. LLMs are not the frontier anymore; the current step is multimodality. GPT-4o is a true multimodal model, for example. The speech you can see in OpenAI's demo can happen at that speed because the model actually 'hears' and 'speaks'. This all happens in the same model, with the same internal representations, inside the same neural network. The pathway from hearing to replying is in the 200 ms range, which is about as fast as a human brain. No conversion is taking place, no multiple models; it would be impossible at that speed, too much latency. I'm not sure if many people are aware of how insane that really is.

Because 4o is truly multimodal it can also "see" and generate visual imagery. The interesting thing is that because of its ability to process these different modalities, it can keep things stable between generations. You can show it a picture of a character, and then tell it to draw this character in completely different situations, and it can do it without the character changing. Yes, like a holodeck. This can already be done by that model; it does not need multiple models and complicated conversion steps to do that. It can even do things like rotating geometry because of that. The network can seamlessly convert one kind of data into another kind of data: visual into audio, audio into text, audio into visual. It does not matter which way. Just like we can.

I wrote about this in another thread, but multimodality is basically a necessary step because LLMs have a very incomplete inner world. In simple terms, because all they have is text, they do not understand the fundamental ways our world is put together, and that's how the mistakes they make happen. Their approximation of reality is completely alien to ours, and everyone who has ever played around with an LLM knows that. You see these massive amounts of text that are shoved into them and think they should turn out superintelligent, but the truth is that this gives them less information about our world than a toddler gets in its short life. For a more accurate world representation, you need more data. There are still problems to solve, but it's mainly an engineering and scaling problem at this point, IMO. People working at OpenAI seem worried, though.
 

kde has opened up submissions for what goals they should focus on for the next few years.
i have put in a task to continue X11 support https://phabricator.kde.org/T17393 but i'll require some of you boys to sign up and add your voice.
why do this?
to make the wayland troons seethe. why else?
Forgot where I saw it, I guess on Telegram, but the Kubuntu 24.04 release notes explicitly say that the Wayland login option exists but is only for testing, and they have X11 by default. Demoted by the most popular distro. So X11 is definitely here to stay for at least a good ten years of mainstream use/support.
(Just have some AI coding team fix all the bugs in ten years).

I wrote about this in another thread, but multimodality is basically a necessary step because LLMs have a very incomplete inner world. In simple terms, because all they have is text, they do not understand the fundamental ways our world is put together, and that's how the mistakes they make happen. Their approximation of reality is completely alien to ours, and everyone who has ever played around with an LLM knows that. You see these massive amounts of text that are shoved into them and think they should turn out superintelligent, but the truth is that this gives them less information about our world than a toddler gets in its short life. For a more accurate world representation, you need more data. There are still problems to solve, but it's mainly an engineering and scaling problem at this point, IMO. People working at OpenAI seem worried, though.
Definitely. Video platforms, livestreaming platforms, and any platform with access to large troves of archived or live video have a big advantage for training AIs, at least in terms of information density. We've possibly already hit, or will soon hit, the limits of training only on textual Internet data.
OpenAI totally lacking any sort of even basic iMovie-like interface means that it's their game to lose, and they're being all faggoty bitchy and coy with Sora (like they were with GPT-2, and now random online models routinely show off how they beat GPT-2 in benchmarks).
OpenAI needs to get knocked the fuck down already.


Also about NPUs and Linux:
Probably misses a fair bit, but the rough figure being used to gauge AI power (cue megahertz mythcrafting) seems to be TOPS (trillions of operations per second).
A screenshot from the Intel Lunar Lake (fabbed with TSMC) with an obese foid crudely hamhanding chips:
Screen Shot 2024-06-08 at 6.56.40 PM.png
Even on a laptop chip, the ecks eeee xD "only barely does directx 12" intel chip is significantly faster than the deditated wam/NPU trash.

Also from the same vid this literal real life soy faggot (actually is a real life soyjack) had a segment on how the cucked brits are shoehorning the AI fad into the rapesberry pi, by having a little addon board ("hat" in faggot speak) to connect an m.2 "AI accelerator" board to the rapesberry pi 5 PCIe 2.0 (yes two point zero) m.2 interface, giving it 13 TOPS.

The latest raspylesbianos 'natively supports the module', and it can currently be used for some computer vision tasks:
Screen Shot 2024-06-08 at 7.03.08 PM.png

The segment is about 30 seconds, from the 6:33 to 7:00 timestamps, leaving out the soy commentary this loser feels like people would want to hear:

So the summary is: NPUs can do a fraction of what GPUs already can, only really matter for energy efficiency, and are overall boring and gay and MBA-pushed shite.
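
For a rough sense of what a number like 13 TOPS actually buys, here's a back-of-the-envelope sketch; the per-frame cost is an assumed ballpark, not a measured figure:

```python
# Back-of-the-envelope on what a TOPS figure buys you. The per-frame cost
# below is an assumed order-of-magnitude number, not a measurement.

npu_tops = 13                      # quoted INT8 peak for the Pi AI hat
ops_per_second = npu_tops * 1e12   # TOPS = trillions of operations per second

# Assume a small vision model costing roughly 10 billion operations per frame
# (ballpark for a MobileNet-class network at modest resolution).
ops_per_frame = 10e9

peak_fps = ops_per_second / ops_per_frame
print(f"theoretical peak: {peak_fps:.0f} inferences per second")

# Real throughput is far lower than the headline number: memory bandwidth,
# host transfers over a PCIe 2.0 x1 link, and pre/post-processing all eat
# into it, which is why these things mostly matter for energy efficiency.
```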


Also, LLMs still just need a way to check themselves for things like being lazy and refusing to do simple things like running code, or not making use of data already provided,
and even a basic ability to break one task into smaller tasks, instead of just doing an extremely poor job of trying to handle a large task in one response.
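Something like the following toy loop is what that could look like; the llm() helper is a hypothetical stand-in for a real model call, and the self-check is deliberately naive:

```python
# Rough sketch of a decompose-then-verify loop. llm() is a hypothetical
# stand-in for a real model call; the self-check is deliberately naive.

def llm(prompt: str) -> str:
    return f"[model output for: {prompt}]"

def plan(task: str) -> list[str]:
    """Ask the model for a numbered plan, then split it into subtasks."""
    raw = llm(f"Break this task into short numbered steps:\n{task}")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def run(task: str) -> list[str]:
    results = []
    for step in plan(task):
        answer = llm(f"Do this step and show your work:\n{step}")
        # Cheap self-check: ask the model whether the output actually
        # completes the step before accepting it, instead of trusting
        # the first lazy response.
        verdict = llm(f"Answer yes or no: does this output complete the step?\n{answer}")
        results.append(answer if "yes" in verdict.lower() else llm(f"Redo properly: {step}"))
    return results

print(run("parse this CSV and plot the totals"))
```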
 
Forgot where I saw it, I guess on Telegram, but the Kubuntu 24.04 release notes explicitly say that the Wayland login option exists but is only for testing, and they have X11 by default. Demoted by the most popular distro. So X11 is definitely here to stay for at least a good ten years of mainstream use/support.
(Just have some AI coding team fix all the bugs in ten years).
Oh shit really? I wonder if any other major OS or DE will roll back to default X11 instead of wayland. I still feel like it would be better to rebuild X11 into something more modern, and doing so would be faster than trying to make Wayland work.
Also from the same vid this literal real life soy faggot (actually is a real life soyjack) had a segment on how the cucked brits are shoehorning the AI fad into the rapesberry pi, by having a little addon board ("hat" in faggot speak) to connect an m.2 "AI accelerator" board to the rapesberry pi 5 PCIe 2.0 (yes two point zero) m.2 interface, giving it 13 TOPS.
using an m.2 interface for processing is an interesting concept though, could that allow you to add additional processing power to a laptop (that has a second drive connector)?
 
using an m.2 interface for processing is an interesting concept though, could that allow you to add additional processing power to a laptop (that has a second drive connector)?
m.2 can be SATA, USB or PCIe. This appears to be PCIe, so yes you could put it in a PCIe m.2 slot in any hardware unless it has some odd blacklisting or something.

And the CPU on most laptops is probably almost as fast anyway.
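
If you want to check what an M.2 card actually presents itself as on Linux, something like this works, assuming the standard lspci/lsusb tools are installed:

```python
# Quick sanity check (on Linux) of whether an M.2 card shows up as a PCIe
# device or only as USB. Just shells out to the standard lspci/lsusb tools.
import subprocess

def list_devices(tool: str) -> str:
    try:
        out = subprocess.run([tool], capture_output=True, text=True, check=True)
        return out.stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return f"{tool} not available on this system\n"

print("PCIe devices:\n" + list_devices("lspci"))
print("USB devices:\n" + list_devices("lsusb"))
# An accelerator wired as PCIe will show up in lspci (often under a
# "Processing accelerators" or vendor-specific class); a USB-attached
# card will only appear in lsusb.
```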
 
Oh shit really? I wonder if any other major OS or DE will roll back to default X11 instead of wayland. I still feel like it would be better to rebuild X11 into something more modern, and doing so would be faster than trying to make Wayland work.

X11's use cases extend far beyond the desktop, so it's not going to go away if priorities at large have anything to say about it.

If the heads of the BSDs are working on their own derivative of existing X11 builds, there are going to be mutual fixes here and there that carry over into the Linux space. This is the part that the myth of zero maintenance conveniently leaves out.

In other words, the standard business for open source.
 
using an m.2 interface for processing is an interesting concept though, could that allow you to add additional processing power to a laptop (that has a second drive connector)?
You can add a GPU; in theory you could have a whole expansion chassis, if you don't mind extreme inconvenience. But anything that's actually going to fit inside and not overheat is just going to be an SSD, maybe one of these silly NPU things, or a Wi-Fi card or cellular modem (if it's Wi-Fi or cellular, those are often just USB like @davids877 says, even though they look like 'real' PCIe M.2 cards externally).
I think someone's posted an actual M.2 form factor video card in a thread on here before. But that was some kind of extremely shit, non-accelerated, VGA-only thing for servers, meant only to be connected to a KVM as a terminal if the built-in graphics/management software was broken for some bizarre reason.
 
X11's use cases extend far beyond the desktop, so it's not going to go away if priorities at large have anything to say about it.
The first time I was sitting in a plane, flipped on the power switch, and saw the original X11 background and cursor appear before the engine instrument display finished booting, I was briefly very confused. Sure, it was probably QNX or VxWorks and not Linux, but it was definitely X11.
 
X11's use cases extend far beyond the desktop, so it's not going to go away if priorities at large have anything to say about it.

If the heads of the BSDs are working on their own derivative of existing X11 builds, there are going to be mutual fixes here and there that carry over into the Linux space. This is the part that the myth of zero maintenance conveniently leaves out.

In other words, the standard business for open source.
I still feel like there's a way to divide desktop composition more granularly into smaller components: say you wanted to view pictures, videos, or websites from the command line without a proper desktop environment installed. If you could subdivide everything into a series of streams, a desktop environment could have applications transmitted like HTML to a final compositor that applies a theme (like CSS) and inserts videos and pictures, and if you needed a special function like fractional scaling or multiple monitors you could insert a step into the pipeline really easily. I feel like there's a great solution in there, but I'm hampered by my limited understanding of the subject.
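As a pure thought experiment (not how X11 or Wayland actually work), the stream-pipeline idea might look something like this toy sketch:

```python
# Toy illustration of the pipeline idea above: apps emit a declarative scene,
# the compositor applies a theme, and optional stages (fractional scaling,
# per-monitor handling, ...) can be slotted in between. Purely a thought
# experiment, not how X11 or Wayland actually work.

app_scene = [
    {"type": "window", "title": "terminal", "w": 800, "h": 600},
    {"type": "image", "path": "cat.png", "w": 640, "h": 480},
]

def apply_theme(scene, theme):
    # Analogous to CSS: attach presentation attributes without touching content.
    return [dict(node, border=theme["border"], font=theme["font"]) for node in scene]

def fractional_scale(scene, factor):
    # Optional stage, inserted only when the display actually needs it.
    return [dict(node, w=int(node["w"] * factor), h=int(node["h"] * factor)) for node in scene]

pipeline = [
    lambda scene: apply_theme(scene, {"border": "dark", "font": "monospace"}),
    lambda scene: fractional_scale(scene, 1.5),
]

frame = app_scene
for stage in pipeline:
    frame = stage(frame)
print(frame)
```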
 
The first time I was sitting in a plane, flipped on the power switch, and saw the original X11 background and cursor appear before the engine instrument display finished booting, I was briefly very confused. Sure, it was probably QNX or VxWorks and not Linux, but it was definitely X11.
If only they'd had inflight Wi-Fi back in those days... You have to imagine that the engineers for those first systems had a bit of 'xhost +' action going on for ease of debugging (authorizing clients running an application from anywhere on the network to display windows on any X terminal on board).
 
Oh shit really? I wonder if any other major OS or DE will roll back to default X11 instead of wayland. I still feel like it would be better to rebuild X11 into something more modern, and doing so would be faster than trying to make Wayland work.

using an m.2 interface for processing is an interesting concept though, could that allow you to add additional processing power to a laptop (that has a second drive connector)?
I was lazy but I got it:

Archive: https://archive.is/bGB4O

Screenshot:
Screen Shot 2024-06-08 at 8.32.21 PM.png

Turns out I was wrong, it's not even installed by default. You have to install an entirely separate package for Wayland. Indeed, "available for testing", and this is on the LTS release, so 5 years of support I think. Though for the different desktop environments, the actual Kubuntu team has in the past still only committed to supporting it for something like three years or so, until the next LTS.

Though, no idea how well it works out, but Ubuntu Pro (free on 5 desktops for personal use) claims to cover security updates for ten years for both the "main" and the "universe" repositories. 23k packages in the "universe" repositories:
Archive: https://archive.is/vkV1g

Screen Shot 2024-06-08 at 8.37.57 PM.png

At the cost of keeping the snapd Canonical spyware on. For something like a NAS that should sit untouched at some family house this seems good. Then 3-2-1 backup to it with only encrypted data.

using an m.2 interface for processing is an interesting concept though, could that allow you to add additional processing power to a laptop (that has a second drive connector)?
I mean, any kind of 'accelerator'. There are those USB accelerator deals too. Whatever you're accelerating has to get by with only PCIe 2.0 though, and expecting speed and/or performance from any ARM device that doesn't specifically have dedicated ASIC hardware for the job, like h.264 video decoding, is such a fool's errand.

x86 is the immortal emperor, until the AI makes RISC-V suck less.
 
If only they'd had inflight Wi-Fi back in those days... You have to imagine that the engineers for those first systems had a bit of 'xhost +' action going on for ease of debugging (authorizing clients running an application from anywhere on the network to display windows on any X terminal on board).
The one I saw it in was not an IP network device, it was RS-422 to the sensor pods on the engines. The only other interfaces were plain old serial or ARINC to another display device and a USB port to download the engine data.

I think it was a few years before we started seeing even Ethernet used between devices.
 
The one I saw it in was not an IP network device, it was RS-422 to the sensor pods on the engines. The only other interfaces were plain old serial or ARINC to another display device and a USB port to download the engine data.

I think it was a few years before we started seeing even Ethernet used between devices.
Sorry, I'm a retard, I missed the mention of engine instrument displays and thought you were talking about inflight entertainment.
 