SBC / Low Power boards general - Raspberry Pi and what not

I think they had trouble with SoC availability.
Quite possible. I assume the SoCs will have recovered, but a lot of other parts never went back to normal. Funnily enough, the Zero is more expensive than the Zero 2 Pro, which has the better A311D. I can only stress that I don't think this SBC and SoC are worth $100; it feels way too expensive for what it is. I just needed this specific part. You can get the same SoC in cheap TV boxes and TV sticks, but I needed the board as it is.

If anyone wasn't aware (before I started this project I personally didn't quite put two and two together), is on a budget and doesn't mind tinkering: you can get these ARM computers in TV boxes (with proper cases, decent IO and everything) and TV sticks for sometimes very cheap on AliExpress, kick their malware-infested Android off and "liberate" them to run mainline Linux. This box for example I saw on Ali for 20 eurobucks, this one for 10. (These are Allwinner, but there are also Rockchip, Amlogic etc. boxes.) Considering how they nickel-and-dime for extras like eMMC, cases and additional access ports on "enthusiast" SBCs, it might be worth it if you have a specific application scenario in mind. They mostly seem to dump cheap, low-capacity, slower DDR3 ICs into these, and of course you won't get a GPIO header, but hey, still. It's also quite possible that some SoCs have security fuses burned that won't allow overriding whatever is installed, but from my research it looks like most manufacturers don't bother.

The real luxury is running mainline Linux on them anyway. I could not overstate how garbage the vendor kernels are if I tried. I noticed the VIM1S doesn't properly recognize the 8" monitor and just sends 1080p; according to the kernel, it thinks it's some 21" Daewoo 1080p screen. I first thought maybe the EDID of the monitor is shitty and doesn't identify it properly, but in fact the EDID is fine. Then I googled a little and found various people on the Khadas forums complaining about the same thing, and the Khadas people suspiciously asking for EDIDs & resolutions in each case and promising a fix. Are they hardcoding these? Finest Chinese engineering your money can buy.
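If anyone wants to rule out their own monitor before blaming the kernel: on mainline you can read the EDID the display actually handed over straight out of sysfs and sanity-check it. A minimal sketch in Python, assuming the connector is called card0-HDMI-A-1 (check /sys/class/drm/ for the real name on your board):

```python
#!/usr/bin/env python3
"""Dump and sanity-check the EDID the kernel received from the monitor.
The connector name below is an assumption -- adjust to your board."""

from pathlib import Path

EDID_PATH = Path("/sys/class/drm/card0-HDMI-A-1/edid")  # hypothetical connector

edid = EDID_PATH.read_bytes()
block = edid[:128]  # base EDID block; extension blocks (if any) follow

header_ok = block[:8] == bytes.fromhex("00ffffffffffff00")
checksum_ok = sum(block) % 256 == 0

# Manufacturer ID: three 5-bit letters packed big-endian into bytes 8-9.
raw = int.from_bytes(block[8:10], "big")
mfg = "".join(chr(ord("A") + ((raw >> s) & 0x1F) - 1) for s in (10, 5, 0))
product = int.from_bytes(block[10:12], "little")

print("header ok:       ", header_ok)
print("checksum ok:     ", checksum_ok)
print("manufacturer:    ", mfg, " product code:", hex(product))
print("extension blocks:", block[126])
```

If the header and checksum are fine and the manufacturer ID isn't whatever dmesg claims the screen is, the EDID isn't the problem, the driver is.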


Missed a chance to try a graphics card, but I bet kernel support for that is a mess. Wonder what the power consumption is. Really doesn't feel like RISC-V is ready for prime time quite yet. If it eventually ends up in the same situation (questionable driver support and stats very similar to low-end x86 across the board), it'll just end up like ARM.
 
I spent almost the entire week doing stuff on the Radxa off and on. It's amazing what a nice and quiet, distraction-free environment it is. Without getting philosophical, there's certainly some value in less sometimes. I notice that all the time when I'm sitting down at my Amiga. Super primitive computer, but I always get productive somehow.

Emulation isn't a problem, fs-uae works absolutely fine, dosbox-staging does too. More lightweight 3D DOS games like Daggerfall and older run pretty fluidly, and System Shock in a lower resolution plays at about the same speed it did on my 486 back in the day. I say it like that on purpose because anybody else would have told you it is "virtually unplayable". Nah, that's pretty much how we played such games. Framerates hovering in the 20s and 30s were the norm. Still, I don't think it's running as efficiently as it could and there are probably improvements to be had. 800x480 kinda sucks for emulation because none of the common resolutions scale nicely to it; 1280x800 would be better. Now that I have more of an impression of what the system can do, I might actually attempt to get such a monitor at 8" (since I started planning on the casing, the 8" size is locked in now). I also hooked it up to my 2560x1600 desktop monitor and it didn't struggle with the increased amount of pixels in emulation etc. at all.
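For anyone wondering why 800x480 is so awkward: the classic resolutions just don't integer-scale into it, while 1280x800 is exactly 2x 640x400 and 4x 320x200. A little back-of-the-envelope script (nothing assumed beyond the resolutions themselves):

```python
#!/usr/bin/env python3
"""How common retro resolutions integer-scale into the two candidate panels."""

panels = {"800x480": (800, 480), "1280x800": (1280, 800)}
sources = [(320, 200), (320, 240), (640, 400), (640, 480)]

for name, (pw, ph) in panels.items():
    print(name)
    for sw, sh in sources:
        # Largest integer factor that still fits in both dimensions.
        factor = min(pw // sw, ph // sh)
        used_w, used_h = sw * factor, sh * factor
        coverage = 100 * used_w * used_h / (pw * ph)
        print(f"  {sw}x{sh}: {factor}x -> {used_w}x{used_h} ({coverage:.0f}% of panel)")
```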

If you really want to, even browsing and YouTube are fine. Some websites are slower than others; a global JavaScript ban would probably have done a lot of good, I don't know, I didn't try it much. I did try "the old web" via wiby.me in Firefox and all those pages loaded pretty much instantly. Video playback of H.265 in software via mpv at 480p and 720p is without framedrops on the 800x480 screen, with framedrops at 720p on the 2560x1600 one. There's LibreELEC, which seems to have VPU hardware acceleration going but doesn't seem to use the vendor kernel; I need to see if there's something I can salvage from that maybe. This all was with ratpoison and X, which works just fine, with GPU support. I can just repeat myself on how good it is to have proper mainline kernel support. Everything simply works without a hitch, and for all intents and purposes this is a normal, Linux-running computer. No hardcoded 1080p, no spammy dmesg in the vein of Chinglish "a Event happened!!" every five minutes for no clear reason. That's the power of mainline quality. If it isn't in mainline, 9 out of 10 times, do not bother.

So far I didn't bother with a heatsink; the SoC barely crosses 54°C, same for the RAM IC, which interestingly has a temperature sensor that works in Linux. Especially in an enclosed space a small heatsink is probably prudent though. I'll put on a somewhat bigger heatsink because I want to cover a wide enough range of temperatures. I could have one machined, but there are Pi Zero heatsinks in several sizes that might fit this well. I need to measure.
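Side note for anyone curious where those readings come from: both the SoC thermal zones and that RAM sensor show up through the standard sysfs interfaces, so a few lines are enough to dump everything the kernel sees. A quick sketch, nothing board-specific assumed:

```python
#!/usr/bin/env python3
"""Print every temperature the kernel exposes (thermal zones + hwmon)."""

from pathlib import Path

def read(p):
    try:
        return p.read_text().strip()
    except OSError:
        return None

# Thermal zones: temp is reported in millidegrees Celsius.
for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
    t = read(zone / "temp")
    if t is not None:
        print(f"{read(zone / 'type')}: {int(t) / 1000:.1f} C")

# hwmon devices (this is where a RAM IC sensor would usually show up).
for hw in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    name = read(hw / "name")
    for temp in sorted(hw.glob("temp*_input")):
        raw = read(temp)
        if raw is None:
            continue
        label = read(temp.with_name(temp.name.replace("_input", "_label")))
        print(f"{name}/{label or temp.name}: {int(raw) / 1000:.1f} C")
```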

Now, I did some actual measurements with an actual real setup that exists, with cables and everything. Mechanical keyboard with some ST MCU or other + QMK, the screen, a stand with an integrated USB hub that contains a USB-C HDMI output chip I can't seem to turn off, a card reader that does seem to turn off, and a USB 2 hub, SD card in the Radxa (eMMC works fine, but I didn't finish setting up the system yet). The Radxa is connected to power via the pins where I measure, then to the hub via its USB 3 port. Everything else is connected to the hub.

At idle this setup consumes about ~4.6W. If we disconnect the monitor (which also contains a USB touchscreen and audio circuitry) from the hub, the power consumption drops to ~1.7W. Oof, but not surprising. Monitors are expensive. It's interesting that the monitor consumes about a watt more in this configuration, but my old measurements weren't that accurate so it's also not super surprising. We're still inside the 6W envelope I wanted to stay in, with headroom to spare even under full load and without taking a lot of care to be efficient, so that's pretty good. Just like with all notebooks, the monitor is the costly part in this setup, battery-wise. There would probably have been gains possible with a panel connected via MIPI DSI, but the Radxa doesn't wire that out. I also read that with anything non-Pi it's a gamble whether it actually works, so I didn't really bother to go looking for one.

Let's also do a real test. I have one of these "USB energy packs" that claims 20,000 mAh (more like 13,500 at 5V in reality). This thing is already a few years old and I'm sure it's not at full capacity anymore either. I disconnected the computer from the power pins and hooked its OTG USB up to that, so all devices connected to it run off the battery pack. I didn't stress test, but just regularly used the computer like I plan to: run graphical emacs in X to do various things on the network and remotely, interact with AI from within emacs with my own scripts, write some scripts and such, run a few games in emulation (for a time I just let fs-uae and Speedball 2: Brutal Deluxe run in demo mode) and even watch a few shows (SD-quality Deep Space 9 mostly), and also listen to music and internet radio on my bluetooth headphones. Also some idle time, because that's also realistic usage. Usage was fairly "comfy"; I configured emacs to mostly look like the terminal version (with none of its downsides) and used a very legible bitmap font. It was fun to use it, away from the big internet. (I browsed some gopher/gemini.)

I ran this computer for 12 hours and 5 minutes now and the battery pack ticked over to 2 of 4 bars a bit ago, whatever that means. This is a wild guess, but if we assume the little IC that keeps track of the battery pack charge is fairly accurate, and that the battery keeps its voltage level until the very end because of the technology used, we can extrapolate that a 40 Ah battery of similar quality would last for ~48 hours of regular usage like I do it. Mine will probably be a bit better, but that will be compensated by the additional peripherals I will add (modem and GPS, mostly).
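Just to spell out the napkin math behind that ~48 hours, under the (generous) assumptions that the 4-bar gauge is roughly linear and that a bigger pack "of similar quality" overstates its capacity by about the same factor as this one:

```python
#!/usr/bin/env python3
"""Napkin math behind the runtime extrapolation. Assumes the pack's 4-bar
gauge is roughly linear and scales by the nominal Ah rating."""

hours_so_far = 12 + 5 / 60          # 12 h 5 min on the pack
fraction_used = 0.5                 # dropped to 2 of 4 bars
pack_nominal_ah = 20.0              # what the sticker claims
pack_real_ah_5v = 13.5              # realistic usable capacity at 5 V
target_ah = 40.0                    # hypothetical bigger pack

full_runtime = hours_so_far / fraction_used
print(f"estimated full runtime on this pack: {full_runtime:.1f} h")

# Scale by the nominal rating, i.e. assume the bigger pack fibs about its
# capacity to roughly the same degree.
print(f"extrapolated runtime on a {target_ah:.0f} Ah pack: "
      f"{full_runtime * target_ah / pack_nominal_ah:.0f} h")

# Rough cross-check: implied average draw from the realistic capacity.
avg_watts = pack_real_ah_5v * 5 * fraction_used / hours_so_far
print(f"implied average draw: {avg_watts:.1f} W")
```

Take the implied-draw figure with a grain of salt; an aged pack and a coarse 4-bar gauge can easily skew it.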

It would have been interesting to do this with e-ink. I dug some more into the topic, and while it's true that full refreshes on e-ink are rather expensive energy-wise, partial refreshes on a character-by-character basis wouldn't be, and this would probably make the battery last at least a week if controlled via SPI, only doing sparse partial updates with the controller racing to sleep. But all I can find about DIY e-ink terminals online is discouraging me from even trying. Also it would lock me fully into (black and white) text, which is not great either, especially now that I am aware that the SoC can do more than I thought. So I'll stick with a normal screen and might try out what a 1280x800 one changes.

Now all that's left is the actual design. I think I'll design a PCB around the SBC, add a GPIO connector at a 90-degree angle to the SBC and "plug" it into this PCB, which I will slot into the casing and fasten down with screws. It's a bit pricey, but it would be very robust, easy to add hardware to, and also easily upgradable with a newer SBC when/if that becomes a thing.
 
Are these exclusively the newer revision ones, or a mixed bag with some of those with the GPIO bug thrown in? The problem with the Pis always is that you get a few nasty bugs thrown in with the early revisions.

Amlogic A311D
Longpost incoming. Nope, S905Y2. The other one would've blown the power budget. While you can sometimes screw with the regulators in the device tree file and also limit the maximum clock and things like RAM clock, I didn't like my chances of taking 2 watts off that device. Didn't even try. That's at the very least 1/3 of its total power consumption from what I could gather. It's just too much. And even with the 2W off it would consume a bit more than I like, especially now that I have some more accurate screen measurements. The only thing that spoke somewhat for that maybe being possible was the S922X apparently being very overclockable, which ODROID openly advertises for the N2+ and which hints at the design not being at the "silicon limit", if you wanna call it that. Even if we said "then let it consume more and shorten the battery life somewhat", that SoC definitely needs me to add active cooling or a pretty big heatsink to the case, and that's just a no. With projects like this it's always important not to get lost in "nice-to-haves"; the main role of this system is being a text-oriented emacs machine with incredible battery life and a mechanical keyboard, able to remote-connect to other computers in various ways to do the heavy lifting for some things. The S905Y2 excels in that role. That it even does 720p movies (in software) and most old vidya fine is a welcome plus, but not a must. I also like the challenge the limitations offer and always just find this low-end computing incredibly fun. If I just wanted something powerful and battery efficient, I'd buy a MacBook.
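For reference, the clock-limiting half of that is just the generic cpufreq sysfs knobs, nothing vendor-specific (the regulator/voltage side lives in the device tree and means rebuilding it). A rough sketch, with the cap value purely as an example:

```python
#!/usr/bin/env python3
"""Cap the maximum CPU clock via the generic cpufreq sysfs interface.
Run as root. The target frequency is just an example value."""

from pathlib import Path

TARGET_KHZ = 1_200_000  # hypothetical cap, in kHz

for policy in sorted(Path("/sys/devices/system/cpu/cpufreq").glob("policy*")):
    avail = policy / "scaling_available_frequencies"
    if avail.exists():
        freqs = [int(f) for f in avail.read_text().split()]
        eligible = [f for f in freqs if f <= TARGET_KHZ]
        # Pick the highest listed OPP that doesn't exceed the cap.
        cap = max(eligible) if eligible else min(freqs)
    else:
        cap = TARGET_KHZ
    (policy / "scaling_max_freq").write_text(str(cap))
    print(f"{policy.name}: scaling_max_freq -> {cap} kHz")
```

Whether that alone buys you 2 watts is the question; the OPP voltages are fixed in the device tree, so capping the clock only gets you whatever the lower OPPs happen to be specced at.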

That said, I actually saw a video of someone running TES Oblivion on the S905Y2 radxa zero via box86/64. It looked a bit broken but he reached framerates in the 30s. That's quite crazy for a 3W SoC. That box64 sure is something. These ARMs might be pretty power efficient retro machines for Win9x/early 2000s gaming one day.

My assumption is that these ARM SoCs are designed to be as frugal as possible. That might give them overclocking headroom, especially these 12nm Amlogic ones, but it'll probably be hard to actually make them consume much less, even if we generously assume that power consumption rises steeply at the top end of the clock range, which the overclockability of the S922X doesn't really indicate. I also noticed that the S905Y2 doesn't consume much more under load than at idle, which makes me think the cpuidle drivers in Linux are probably less than perfect for these SoCs, slimming my chances of bumping that A311D down further. I might get an S922X- or A311D-based system sometime because they seem fun and the Amlogic mainline support seems to be very good as far as these things go (and from what I read, no thanks to Amlogic), but I don't think it's possible to limit them to 3 or 4 watts. They seem to need 6W at the very least, with some reports I could find talking about peaks of 10W under certain conditions. It's just not gonna happen.
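If anyone wants to check the cpuidle theory on their own board: the generic cpuidle sysfs interface shows which idle states the kernel actually has for the SoC and how much time it spends in them. A quick sketch, no board-specific paths assumed:

```python
#!/usr/bin/env python3
"""List the cpuidle states the kernel exposes for CPU0 and the cumulative
time spent in each -- a rough way to see whether the SoC ever reaches its
deeper sleep states."""

from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpuidle")
if not cpu0.exists():
    raise SystemExit("no cpuidle driver loaded for this SoC")

states = []
for state in sorted(cpu0.glob("state*")):
    name = (state / "name").read_text().strip()
    time_us = int((state / "time").read_text())   # cumulative residency, in microseconds
    usage = int((state / "usage").read_text())    # number of times entered
    states.append((name, time_us, usage))

total = sum(t for _, t, _ in states) or 1
for name, time_us, usage in states:
    print(f"{name:>12}: {usage:>10} entries, {time_us / 1e6:10.1f} s "
          f"({100 * time_us / total:.0f}% of idle time)")
```

If it only ever sits in the shallowest state and the deeper ones barely register, that would go a long way towards explaining why idle and load look the same at the wall.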
 
Are these exclusively the newer revision ones, or a mixed bag with some of those with the GPIO bug thrown in? The problem with the Pis always is that you get a few nasty bugs thrown in with the early revisions.
This, right?
Raspberry Pi RP2350 microcontroller has a bug that causes faulty pull-down behavior on resistors

I'm not sure what the revision code is on the board. You might be able to see it in CNX's photos. The chip says "RP2350A0A2". Maybe it is afflicted? They have also moved around the debug pins on the Pico 2 W.
 
This, right?
Yes.
Maybe it is afflicted?
Good question, I think the safest thing is to just wait it out until these have "naturally" sold out from the distributors, or find reports online. The Pi soured for me when I got gobbled up in the hype of the early Raspberry Pis and got one (I think it was a 1 or 2. Maybe a 3?) and that thing was basically impossible to power properly unless you had some power supply that gave slightly over 5V. That's just poor engineering. It also had some other fault with its USB chip that I don't remember clearly. The Picos are cool, especially for their price, but they have inherited the curse.

As I heard, the Pi Foundation is moving away from "educational" to focus on big, commercial customers. I think it means they'll focus on embedded and the success of the Pico. I wonder what it'll mean for their often-cited good support. They might've seen the writing on the wall re: the desktop future of ARM vs. x86 and decided to cash out. Intel's just-much-more-compatible N100 and its very aggressive pricing might have sounded the death knell for the ARM desktop before it even existed. That's Intel for you; leaving no survivors.

The RK3588 is often cited as direct competition to the N100, but it's considerably slower. It also uses a little bit less power under full load, but if it takes longer to finish the job, does it really use less overall? Maybe these alternative ISAs aren't the magic efficiency bullet they were touted as. Guess for the magic sand to be able to think, the electrons do have to sing at the right volume. Or you need control over the full stack so you can design your chips around the OS and vice versa, like Apple.
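That "does it really use less overall" bit is just energy = power x time; to spell it out with made-up numbers (both the wattages and the runtimes below are purely illustrative, not measurements of either chip):

```python
#!/usr/bin/env python3
"""Energy to finish a fixed job = average power x time. The numbers below
are purely illustrative, not measurements of either chip."""

jobs = {
    "hypothetical ARM SoC":  {"watts": 8.0,  "seconds": 150},
    "hypothetical x86 chip": {"watts": 10.0, "seconds": 110},
}

for name, j in jobs.items():
    joules = j["watts"] * j["seconds"]
    print(f"{name}: {j['watts']:.0f} W x {j['seconds']} s = {joules / 1000:.2f} kJ")
```

With those example numbers the "lower power" chip actually burns more energy for the same job, which is the whole point: perf/watt only means something measured over a finished workload.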

Apropos the RK3588: I'm not sure it was mentioned here, but the often-referenced Joshua Riek of Ubuntu Rockchip fame has given up, citing lack of funding and of support/communication from Rockchip. There was so much goodwill towards ARM as an architecture, towards ARM SoCs and ARM Holdings, but they all only thought in the now, didn't see the long-term potential and profits of a new market, and squandered it all. They should've just cut some of the kernel developers that were already doing it for free a cheque and given them whatever information they needed, and it would have 100% paid off eventually. Peanuts for them, I assure you. Now even if they beat out chips like the N100 performance- and efficiency-wise, nobody will trust their products in this market.

And RISC-V will have the same fate. Some GPL-violating Chinese vendor will not deliver us from x86.
 
RPi CM5 is out, and it includes the elusive 16 GB RAM option, although those variants will launch in 2025:
With the CM being an enterprise/industrial-focused option, I wonder what uses 16 GB has in these environments. 8 GB seems about as high as you would need to go, much less 16 GB. A ZFS-based NAS is about the only use I can think of.
 
With the CM being an enterprise/industrial-focused option, I wonder what uses 16 GB has in these environments. 8 GB seems about as high as you would need to go, much less 16 GB. A ZFS-based NAS is about the only use I can think of.
Web server? IDK. But more RAM is almost always better and there are a myriad of ways to "waste" it.

pi16gb.png

Woo, one whole PCIe lane, still officially Gen2... Dumb.
Definitely shitty, but still enough for some ~4K30 gaming and AI (he is probably using it at the non-guaranteed PCIe 3.0 x1, which I assume you could also do with CM5):



Here's Geerling's CM5 video while I'm at it:

 
I got an RPi3B+ sitting around kinda uselessly. How is the situation with Netflix on it these days? The RPi400 handles 1080p streaming in the browser quite well, but last time I tried Netflix on the RPi3 with Kodi it didn't work well (in fact, trying to stream at 1080p crashed the thing so hard I had to reinstall the OS). Does it work better these days? I'm seeing some posts on the net that make it seem so, so I wanted to see if anyone here has any experience there. I guess Xbian would be a good choice for OS then?
 
Seems like a good one.

I tried the new Qwen QwQ, a 32B coding model, today. It was the only model (including o1) that not only correctly understood an emacs elisp mode I've written, but could also make a sensible addition to it. Meanwhile some conservative Western politicians sperg out about the potential of fucking Llama of all things falling into Chinese hands by open-weighting it. As if they needed it. The West is so shortsighted in regards to China, it's unreal. To be fair, 4o is most likely in the same weight class (albeit much worse). If the models keep sizing down like this, it becomes more and more of a realistic scenario to run them on hardware like that.

That said, without drivers it'll be useless. Here's to hoping Cixin contributes to the kernel. Fat chance if you ask me but stranger things have happened...
 
I tried the new Qwen QwQ, a 32B coding model, today. It was the only model (including o1) that not only correctly understood an emacs elisp mode I've written, but could also make a sensible addition to it. Meanwhile some conservative Western politicians sperg out about the potential of fucking Llama of all things falling into Chinese hands by open-weighting it. As if they needed it. The West is so shortsighted in regards to China, it's unreal. To be fair, 4o is most likely in the same weight class (albeit much worse). If the models keep sizing down like this, it becomes more and more of a realistic scenario to run them on hardware like that.
Reuters: Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama (archive)
The Register: Meta gives nod to weaponizing Llama – but only for the good guys (archive)

I glanced at that story when it came out, and didn't notice any politicians talk about it but they are retarded anyway. I hope we keep getting open models. It looks like transformer LLMs are hitting a plateau anyway, which is great if that means uncensored/locally run and censored models end up with nearly equivalent capabilities.

What is the rule of thumb about parameter counts and RAM required? Because with the technically old RK3588 shipping with up to 32 GB, I wouldn't be surprised to see 64 GB paired with some newer SoCs. And outside of ARM land, AMD's Strix Halo should support 128 GB LPDDR5, and I would not be surprised to see 192-256 GB supported within one generation (Zen 6 "Medusa Halo").
 
This crop of models we currently have is really good at recognizing patterns, even very subtle ones. Weaponizing it, though, in an efficient manner? I'm not sure. I'm always critical when I hear this, because the reasoning you can trust current models to do, while very cool (it's just incredibly cool that you can "teach" a model to do a thing by simply requesting it in natural language, mindblowing even if you ask me), isn't really reasoning you couldn't arrive at with more conventional coding approaches, which are reams more reliable (not to mention less hardware-intensive). So I don't see the danger in current AI, at least not like that. Intelligence gathering like in the articles, maybe. I use AI through emacs and I was momentarily considering giving the AI tool access, basically the ability to call elisp functions to do things I request in natural language via the minibuffer, but then thought better of it. As optimistic about the technology as I am, even I wouldn't trust it that far, and it's not even all that useful.

transformer LLMs are hitting a plateau
I said it often on this forum and I'll gladly say it again and again: I think it's too early to say that. Just a year or two ago people were saying "you'll never be able to run a model with the smarts of GPT-4 at home. In ten years maybe. But that's a big maybe, and local models will never be nearly as smart as GPT-4. They just won't let you have such a model." Now many models are smarter than GPT-4 and you can conceivably run many of them at home, with relatively normal hardware. It's simply too early for such statements, IMO. The envelope is still getting pushed all the time; QwQ and DeepSeek's R (also Chinese) are good examples of that.

What is the rule of thumb about parameter counts and RAM required?
That really depends on the model in question, its architecture, the implementation of the inference software, context size and type of context used etc., but for a modern model you can roughly say: at FP16, around double its parameter count in gigabytes of RAM plus a few gigs; at INT8, the same number of gigabytes as it has billions of parameters, plus a few GB. So QwQ at Q8 would probably need around 40 GB of RAM. Since it's a relatively small model, you could probably stuff it into a 32 GB RAM system if you quantize it somewhat harder. With a 16 GB VRAM GPU and DDR5 RAM it'd probably run fast enough to be usable. It's hard to say what effect this has on models and opinions diverge; the more modern, smarter models use "their brains" more efficiently, so quantization seems to hit them harder than the older-gen ones, as it does the smaller ones. How well a model actually quantizes without "brain damage" is probably really dependent on the quant technique used (yes, you can totally screw this up) and the individual model.
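Spelled out as a formula, with the overhead figure purely as a ballpark guess for context/KV cache:

```python
#!/usr/bin/env python3
"""Ballpark RAM needed to hold a model's weights at a given quantization.
The overhead for context/KV cache is a rough guess and varies a lot."""

def weights_gb(params_billion: float, bits_per_param: float) -> float:
    # params * bits / 8 bytes, expressed in GB (10^9 bytes, close enough here)
    return params_billion * bits_per_param / 8

def rough_total_gb(params_billion: float, bits_per_param: float,
                   overhead_gb: float = 4.0) -> float:
    return weights_gb(params_billion, bits_per_param) + overhead_gb

for label, bits in [("FP16", 16), ("INT8/Q8", 8), ("Q4", 4)]:
    print(f"32B model at {label}: ~{rough_total_gb(32, bits):.0f} GB")
```

That lands at roughly 36 GB for a 32B model at Q8 and ~20 GB at Q4, which is where the "around 40 GB, or a 32 GB system if you quantize harder" guess comes from.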

That's what I meant in the GPU thread about being careful with hardware investments. They might become outdated very quickly and you might not need them later on. It's all a huge flashback to the 90s, in a way.
 
The only way to truly compare N100 and N150 is to have them use identical components, especially memory. A DDR4 vs. DDR5 comparison isn't useful, as this bumbling failure of a review by leepspvideo found out:


Maybe it's better than the ~6% clock bump (3.6/3.4) somehow, but probably not. Using the fastest memory is crucial for a single-channel turd chip.


Crosspost:
After the embarrassing failure of Qualcomm to launch the Snapdragon X Elite Dev Kit, the "world's first" mini-PC with the chip has been revealed.


GEEKOM is a Chinese brand (how could you tell?!). Some of their mini-PCs are expensive, but this could be a sign that the X Elite is going to be whored out for cheap.

Only about 720,000 Qualcomm Snapdragon X laptops sold since launch — under 0.8% of the total number of PCs shipped in Q3, or less than 1 out of every 125 devices (archive)
 
Using the fastest memory
Also important for everything with an integrated GPU: every single bit of additional memory performance translates directly into better GPU performance, in my experience. Single channel is brutal tho. I think Intel just knows its customers for these SoCs. Just because a chip supports dual-channel configurations doesn't really mean it's used. Many a low-end craptop has slow, single-channel RAM even if the SoC could do better. Not something you can change as a customer, usually. Often not something you can even find out before buying the system. There's also RAPL; Intel allows you to configure these SoCs so that they internally manage themselves to stay inside a specific power envelope in watts. You can set this up in the firmware and even decide whether you let the OS control it or not. The idea is that you can sacrifice some performance to get the perfect thermal design in your device. Most low-end manufacturers don't bother with setting this up at all, glue a small metal shim to the SoC, add a tiny fan if they're feeling generous, and just let thermal throttling take care of it. That makes these SoCs also perform worse than they need to, with the added bonus of murdering the (usually nearby) battery by letting the system always run hot as hell.
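If you want to see what a manufacturer actually configured (or didn't), the Linux powercap interface exposes the RAPL limits and the live package power. A quick sketch, assuming the usual intel-rapl:0 package domain exists on the machine:

```python
#!/usr/bin/env python3
"""Peek at the RAPL package power limits and measure actual package power
draw by sampling the energy counter. Uses the standard powercap sysfs
interface; reading energy_uj needs root on most systems."""

import time
from pathlib import Path

pkg = Path("/sys/class/powercap/intel-rapl:0")
if not pkg.exists():
    raise SystemExit("no RAPL powercap domain found")

print("domain:", (pkg / "name").read_text().strip())
for c in sorted(pkg.glob("constraint_*_power_limit_uw")):
    label = (pkg / c.name.replace("power_limit_uw", "name")).read_text().strip()
    print(f"  {label} limit: {int(c.read_text()) / 1e6:.1f} W")

# Sample the cumulative energy counter (microjoules) over one second.
e0 = int((pkg / "energy_uj").read_text())
time.sleep(1.0)
e1 = int((pkg / "energy_uj").read_text())
print(f"package power right now: {(e1 - e0) / 1e6:.2f} W")
```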

I fired up the old Thinkpad Convertible today and that i5-7Y57 (dual-channel and yes, the RAM is actually implemented that way in this system) is actually kinda similar to the N100. A bit slower (but also significantly lower TDP, I measured it a while ago and the whole system had something insane for x86 in complete idle with screen off, I think around 2-3W or so), only two cores, but core vs. core only somewhat slower. When I did my research after it was handed to me, I found that it's a rather rare machine in that such an entry-level SoC was put into a system with relatively high-quality parts: a decent, proportional heatsink, a nice screen (but also not overwhelming for the iGPU), fast dual-channel RAM etc. They didn't do the thermal limits and it got hot as hell, but I found a nice config for it and it barely gets warm now without losing any significant amount of speed. Not all entry-level/low-end systems are created equal and it's the small things that can make or break them. It's a pity, because a lot of these low-end chips are pretty good, they just get put into absolutely shit systems.
 
I fired up the old Thinkpad Convertible today and that i5-7Y57 (dual-channel and yes, the RAM is actually implemented that way in this system) is actually kinda similar to the N100. A bit slower (but also significantly lower TDP, I measured it a while ago and the whole system had something insane for x86 in complete idle with screen off, I think around 2-3W or so), only two cores, but core vs. core only somewhat slower.
Gracemont is similar in IPC to Skylake, also adding AVX/AVX2 for the first time to an Atom core, so no surprise there. Clocks are similar but the N100 has 50% more L3 cache and 2 MB L2 cache for the single quad-core cluster, vs. 256 KB per core for the Kaby Lake-Y dual-cores. Then support for single-channel DDR4-3200/(LP)DDR5-4800 instead of dual-channel LPDDR3-1866/DDR3L-1600, so the potential to outpace it slightly there with DDR5.

I know you know all that, but it's simply to reiterate that Alder Lake-N continues a computing baseline set by 4th-10th gen quad-cores, stretching it to over a decade and encompassing hundreds of millions of sales. The systems are all around $100-200 now, and are sufficient for all "light" computing tasks.

It also means that this performance level has become predictable, boring, and we are itching for low-end x86 disruption. Bring on the Skymont cores already, or cheap 4-core Zen 5c Sonoma Valley from AMD. That being said, I might end up with more 6th/7th gen machines because of the October 14, 2025 Windows 10 support cutoff forcing them to be dumped onto the used market (or the trash bin) in a year. Fun times.

BredOS Arch Linux Arm distribution runs on Rockchip RK3588 single board computers
 