SBC / Low Power boards general - Raspberry Pi and what not

The real place where ARM shines outside of mobile is in the low power (both consumption and performance) segment, when you need a "warm body": a system that offers you the Linux ecosystem and "real CPU" performance while only consuming a handful of watts. A lot of the lower-performance ARM SBCs hover between 1-4W from idle to full blast, and my recent research has shown me that mainline support for many of them is actually quite good. (My beloved, antique Allwinner A20 Cubietruck with its 32-bit dual-core Cortex-A7 goodness is now fully mainline supported in a blobless way that'd make Stallman consider using it. Only took 8 years or so!) No matter what electricity costs in your area, there's no reason to ever feel bad about letting that run 24/7.
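To put the "no reason to feel bad" claim in numbers, here's the napkin math for what a 1-4W SBC running 24/7 costs per year. The $0.30/kWh rate is an assumption; plug in your own tariff:

```python
# Annual running cost of an always-on SBC at various power draws.
# RATE_PER_KWH is an assumed electricity price, not a quoted figure.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.30  # assumption; adjust to your local tariff

for watts in (1, 2, 4):
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    cost = kwh_per_year * RATE_PER_KWH
    print(f"{watts} W -> {kwh_per_year:.2f} kWh/year -> {cost:.2f}/year")
```

Even at the 4W top end, that's roughly 35 kWh a year, about what a single incandescent bulb burns in a month of evenings.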

I do get drawn to these alternative ISAs but honestly never end up seeing anything that impresses me in the has-an-MMU-and-will-run-a-modern-OS consumer segment. The higher-end ones have serious overlap with the low- to mid-range x86 mobile chips. Granted, the x86 chips have somewhat higher power consumption (though that gap has been closing in recent years), but they're usually also cheaper and, most of all, they work. Where's the ARM revolution in end user computing I was promised? I was told it's nothing if not inevitable. And what happened to RISC-V anyways?

I actually did fire up that Cubietruck with recent Armbian to check the support and lo and behold, even the weird wlan chip works now, but it hangs when starting X. The kernel is at fault, and I actually could find a discussion and a patch to fix it, which surprised me the most because I didn't think anyone still used this thing. Ironically, the bug is basically a result of these things having no unified firmware like x86, and it also only affects the Cubietruck, not all A20-based systems. See, that's the kind of BS you have to deal with on ARM, even if your thing is old and supposedly perfectly supported.

I kinda want to buy an expensive eink monitor and hook this thing up to it and use it as an emacs terminal. Maybe even build a luggable out of it to embarrass my wife. Somebody please tell me what a bad idea that is.
 
Where's the ARM revolution in end user computing I was promised? I was told it's nothing if not inevitable. And what happened to RISC-V anyways?
The Windows RT (ARMv7) fiasco was over a decade ago now. I believe Microsoft lost at least $1 billion on their initial ARM-based Surface tablet alone. Before Snapdragon X Elite, there was the Snapdragon 835 in 2017, Snapdragon 850, and Snapdragon 8cx/8c/7c starting in 2018, followed by Gen 2/3 refreshes.

Snapdragon X Elite is better than previous efforts, and comparable to AMD's Phoenix APU at least on paper. The OS side has also improved. But they made the usual mistake of giving it Elite pricing, and OS/drivers still weren't polished at launch. I think we'll get a post-mortem on their sales soon, and new generations have already been found on a leaked Dell roadmap.

According to Jeff Geerling in his latest video, "a lot more games are playable on ARM processors than even just a year ago." He was referring both to his experiment connecting an old AMD GPU to the Raspberry Pi 5 over PCIe 3.0 x1 and to his ARM-based Ampere workstation with an RTX 4070 Ti in it.

There was also this recently, which led to speculation about an ARM-based Steam Deck 2, but is probably more about VR headsets or generally preparing Steam to work on more ARM devices:
Valve appear to be testing ARM64 and Android support for Steam on Linux

RISC-V is arguably less relevant to consumers than anything else. It could be the basis of niche open hardware, but it can easily become more fragmented than ARM. The purpose is to allow companies the freedom to extend it however they want, without paying license fees to ARM. Which is why there will be billions of RISC-V cores in devices like hard drives and SSDs.

Back to x86, Intel and AMD are joining forces to prevent x86 fragmentation, as Intel pushes changes like x86S and AVX10 (the band-aid for broken AVX-512). I believe you can get the newer processors to idle at pretty low wattage too, particularly the mobile chips that are optimized for lower power. Both companies have their own approaches copying ARM's big.LITTLE: Intel with its E-cores and even LP E-cores, and AMD with 'C' cores.
 
(ARMv7) fiasco

In my haste to ignore everything Apple I glossed over it, but for fairness' sake you kinda have to admit that Apple pulled the ARM switch off in the desktop and laptop segment with their M(x) chips. That's the power of having complete control over the stack. And it's such a popular platform that even Linux runs on it, although don't ask me how well. From a feature matrix I could find, at least the M1 even has GPU support with that custom Apple Silicon kernel. I have no idea how deep that support goes though, and I didn't feel like digging. On the other hand: it's Apple.

Linux projects trying to add support for some platform to the kernel have a tendency to be overly optimistic to the point of omitting the nasty details, so you can rarely take the first piece of info you find at face value. They'll claim "full support" and then you dig a little and find fun things like: the kernel boots, but only with a simple framebuffer; ethernet, USB and audio not supported. Or cpuidle and downclocking not supported, encryption acceleration not supported, need to download patches from some github, etc. etc. Often these are things you need to make the particular SoC useful in any practical way, even if you're not hoping for GPU support. For Allwinner, which offers kinda the bottom-of-the-barrel ARM SoCs, I gotta say sunxi-linux is a particularly well documented project and one of the few that keep an updated, honest feature matrix that's worth something at first glance, so you don't have to dig through githubs, forums and mailing lists.

I gotta say I have to walk back my previous posts somewhat. I spent a lot of time researching this the last few days, and a lot of the older ARM SoCs are actually fully supported now. I used to have an Odroid N2 with the Amlogic S922X and that thing caused me so much frustration that I gave up on ARM, because it was *so close* to being usable even as a desktop (I had only low demands and it really ticked all the boxes) yet not quite. Fully supported now, video decoding and encoding included. I guess the real curse is them only becoming supported in mainline when they're so outdated they're not quite as interesting anymore. I'm actually thinking of rebuying this one because I remember it having very low power consumption figures, and what it does would be enough for what I had in mind. I'm not sure the N2+ (a somewhat faster version) is truly worth the 60 bucks it costs in my area though. The RK3588 is better on paper, but from all I could gather I got the impression full mainline support there will take a while. Though tbh I just wanna play around with a (somewhat faster) ARM, and I like that it's tiny and basically needs no electricity. I did the measurements myself a few years ago and the idle wattage (with screen connected!) was something insane like 1.5W or so.

usual mistake of giving it Elite pricing
Yeah I'd say. I think they hallucinated they were Apple with Apple's brand recognition for a second there. If ARM wants to make inroads into the average consumer market that x86 owns right now, the transition needs to be absolutely painless, including price. Otherwise it's just not gonna happen, IMO.
 
And it's such a popular platform that even Linux runs on it, although don't ask me how well.
It uses the power of lolcows:

Yeah I'd say. I think they hallucinated they were Apple with Apple's brand recognition for a second there. If ARM wants to make inroads into the average consumer market that x86 owns right now, the transition needs to be absolutely painless, including price. Otherwise it's just not gonna happen, IMO.
In my previous rant, I forgot to mention that there are persistent rumors of MediaTek and Nvidia teaming up to make their own PC SoC. We don't know how that will go (if it's happening), but MediaTek has been making good decisions lately and growing, while Nvidia/JENSEN plays to win, not to flail around impotently like Qualcomm has been doing.

Exclusive: MediaTek designs Arm-based chip for Microsoft's AI laptops
The MediaTek PC chip is set to launch late next year after Qualcomm's exclusive deal to supply chips for laptops expires, two of the people said. The chip is based on Arm's ready-made designs, which can significantly speed development because less design work is needed using ready-made, tested chip components.
Nvidia and AMD are working on Arm designs for Windows machines, Reuters reported last year. The Nvidia effort for its PC chip involves help from MediaTek, according to a person familiar with the matter. The MediaTek effort for a PC chip is separate from its collaboration with Nvidia, two of the people said.
Nvidia and MediaTek collaborate on 3nm AI PC CPU — chip reportedly ready for tape-out this month
The MediaTek AI processor is expected to be paired with an Nvidia GPU. The post also names Lenovo, Dell, HP, and Asus as prospective customers looking to adopt the processor in OEM hardware. The chip has also been linked to a rumored $300 price tag.
 
It uses the power of lolcows
Oh, that guy.

MediaTek and Nvidia
I'm wondering how the AI inclusion will go. Nvidia is basically the only serious contender for AI hardware right now. That said, barring some breakthroughs, the AI stuff included in this SoC will never be that super interesting, because at least currently, for capable AI you need at the very least a metric ton of (very fast) RAM. But well, that's the perspective of an AI enthusiast. There are some relatively tiny LLMs around now, even including multimodals, and they are not that terrible at simple RAG-based Q&A, and I could imagine some of them being conditioned for simple tool use, Alexa style but better, which I guess these AI accelerators will be good for. Also image recognition/manipulation/simple, safe corporate art doodles, and espionage for advertisers. Makes sense to distribute that to the mobile devices instead of trying to host it centrally. If I had to guess, Microsoft and Google etc. are probably planning to have very small models "report" to bigger cloud models/databases. Always on and not disableable, of course. I mean, they already do a lot of spying now, so this is not a big stretch. Malicious prompt injection (probably labeled "AI virus" by the media in the future because it sounds cool) will be a hoot.

---
I did some research and math on panels for the luggable (yes, this is a thing now) - this is still about low power, so it fits, maybe. An interesting result: eink is mostly NOT more power saving. Depending on use case, it might actually consume MORE power than a similar-sized bog standard LCD. Yes, that surprised me too, though I'm not sure why, because it makes sense considering how they work. Refreshing them is really expensive, even partially. Then there's no real off-the-shelf HDMI/DP-to-eink ASIC you can throw on it, so you either need to do an SPI controller thing, which is not impossible, but would need some custom code in Linux to work as a TTY and would be a really ugly hack with rendering to a virtual framebuffer first if you wanted to use anything else (like X), or throw on an FPGA that does HDMI-to-eink magic, which will probably end up eating more watts than some cheap, small IPS panel.

All these ereaders last so long because they refresh the screen exactly once on page turn, then basically turn it off and most likely suspend the SoC. Writing text on one is definitely a very different scenario. OLED *can* be more efficient, but that's attached to the big if of turning the pixels you don't use black. That means colored text on a black screen, which honestly is not something I like, even though it looks fun. (I prefer light, contrast-rich themes. Ignoring burn-in, OLEDs get really power hungry really quickly in such a scenario.) For many this all might not be news, but I honestly never really thought about it.
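The refresh-cost argument can be made concrete with a toy energy model: eink draws energy per update while an LCD draws constantly, so there's a crossover refresh rate above which the LCD wins. Both numbers below are assumptions for illustration, not measured values:

```python
# Toy model: e-ink (energy per refresh, ~0 at rest) vs. LCD (constant draw).
# Both constants are illustrative assumptions, not datasheet figures.
LCD_DRAW_W = 2.0               # assumed constant panel + backlight draw
EINK_JOULES_PER_REFRESH = 0.5  # assumed energy per (partial) refresh

def eink_avg_watts(refreshes_per_second: float) -> float:
    """Average e-ink draw; standby consumption approximated as zero."""
    return EINK_JOULES_PER_REFRESH * refreshes_per_second

# Reading = rare refreshes, typing/scrolling = frequent ones.
for rps in (0.05, 1, 4, 10):  # page turn every 20 s ... busy cursor
    w = eink_avg_watts(rps)
    verdict = "e-ink wins" if w < LCD_DRAW_W else "LCD wins"
    print(f"{rps:>5} refresh/s -> {w:.2f} W ({verdict})")
```

With these made-up numbers the crossover sits at a few refreshes per second, which is exactly the territory a text editor with a blinking cursor and scrolling output lives in.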

There are a few other panel techs (RLCD, Sharp memory LCD etc.) that are either unobtainium and/or not really practical, so it looks like bog standard LCD screens are already mostly as efficient as you can get, especially when small, keeping the number of backlight LEDs down. How boring.

I also need to find an ARM SBC that strikes the balance between power saving/usable/mainline supported. Thankfully, performance is not that important. The S922x is actually not a terrible candidate.
 
I'm wondering how the AI inclusion will go. Nvidia is basically the only serious contender for AI hardware right now, that said barring some breakthroughts this SoC included AI stuff will never be that super interesting because at least currently, for capable AI you need at the very least a metric ton of (very fast) RAM.
A MediaTek "AI PC" is likely just marketing speak for the usual 40+ TOPS machine capable of running Microsoft's Copilot+. I would expect a MediaTek-Nvidia collab to consist of off-the-shelf ARM cores, probably all "big" cores like the Dimensity 9300 has moved to, Nvidia's GPU IP, and whatever NPU MediaTek is using right now. Probably more cores than the usual 8. It's funny because MediaTek was among the first to try to stick 10-12 cores in phones (e.g. Helio X30), only to retreat from that unless I missed something.

The next big thing may be "AI workstations" using AMD's Strix Halo, with a 256-bit memory bus and up to 96 GB of LPDDR5X allocated as VRAM for LLMs and other big models. But it's possible that the performance and memory bandwidth limitations (probably around 273 GB/s max IIRC) mean it's not enough to see an advantage over consumer flagship GPUs like the RTX 4090/5090.
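A quick sanity check on why that 273 GB/s figure matters: LLM token generation is typically memory-bandwidth-bound, so a rough ceiling is bandwidth divided by model size (one full pass over the weights per token). The model sizes below are illustrative assumptions; the 4090's 1008 GB/s is its published spec:

```python
# Bandwidth-bound token rate estimate: tokens/s ~= bandwidth / model bytes.
# Ignores compute limits, KV cache traffic, and batching; ballpark only.
def max_tokens_per_second(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

STRIX_HALO_GBS = 273   # figure quoted above (IIRC)
RTX_4090_GBS = 1008    # published memory bandwidth spec

for name, size_gb in [("70B @ 4-bit (~40 GB)", 40), ("8B @ 4-bit (~5 GB)", 5)]:
    halo = max_tokens_per_second(STRIX_HALO_GBS, size_gb)
    gpu = max_tokens_per_second(RTX_4090_GBS, size_gb)
    print(f"{name}: Strix Halo ~{halo:.1f} tok/s, RTX 4090 ~{gpu:.1f} tok/s")
```

So the pitch is capacity, not speed: a 40 GB model doesn't fit in a 4090's 24 GB at all, but on Strix Halo it would only trickle out tokens at single-digit rates.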

I did some research and math on panels for the luggable (yes, this is a thing now), - this is still about low power so it fits maybe - an interesting result I got is that eink is mostly NOT more power saving. Depending on use case, it might actually consume MORE power than a similar-sized bog standard LCD. Yes, that surprised me too, even though I am not sure why because it makes sense considering how they work.
A "luggable" like this?
Or more of a "cyberdeck"?

That is interesting about the e-ink. I remember seeing vaporware demos for color and full or partial refresh video.

I think microLEDs and other technologies are expected to cut OLED power consumption while retaining the advantages. Hopefully we see those before we die.

I also need to find an ARM SBC that strikes the balance between power saving/usable/mainline supported. Thankfully, performance is not that important. The S922x is actually not a terrible candidate.
I've heard that. Benefit of Amlogic being a US company?
 
It's funny because MediaTek was among the first to try to stick 10-12 cores in phones (e.g. Helio X30), only to retreat from that unless I missed something.
I think phones and other media consumption devices mostly profit from burst speeds, with processors quickly ramping up to e.g. render a website, both to give the feeling of speed and to allow the processors to go back to sleep quickly to save on battery. I'm not sure how well such tasks parallelize because I'm not familiar with such software, but I imagine "not well enough that 12 cores have an impact over e.g. 8 cores". Mobile devices all suck anyway for more regular loads, because of battery life and thermal throttling. I don't see that changing anytime soon. That might have encouraged them to go for fewer, but more powerful cores. The ARM SoC makers' primary moneymaker is still mobile, after all.

But it's possible that the performance and memory bandwidth limitations (probably around 273 GB/s max IIRC)
96 GB is okay, but yeah, that's not fast for big LLMs. Would make it or break it depending on the price, but they're probably gonna be greedy about it, considering they're throwing words like "enterprise" around already.

A "luggable" like this?
For me a luggable is more like the "portable computers" of the 80s, where the term comes from. I guess a bit more mobile - like the toolbox computers, but less flamboyant. I've taken to multi-day traveling more lately. I went through several mobile computers in that time and settled on a ThinkPad now, but it's kinda garbage. I hate the keyboard to the point that I only use it with an external one most of the time. The battery life is kinda pathetic, the case just doesn't feel robust and gets scratches from looking at it wrong, and I take very good care of my stuff. If anything breaks, the spare parts, while available (which is not even the norm), would cost more than just buying another used one.

I don't need performance, I just need a computer that can run a terminal well. I don't even care much about browsing, because the places I go to often have questionable internet connections to begin with. I'm not even settled on media playback, because it's good to get away from internet videos and movies sometimes too. There's no computer for a guy like me. I'd be happy with the specs of a small netbook already; thing is, these are awfully put together (slow, single-channel RAM on SoCs that could handle more, awful display panels, subpar cooling, poor batteries, shitty keyboards) and, because of proprietary hardware, you don't even get features like controlling the battery charge so it doesn't kill itself when you leave it plugged in. The other end of the spectrum is overpriced gamer notebooks that can't move away from a power socket for more than ten minutes and actually still share many of the craptops' problems. In the middle you have ARM notebooks, which do have better battery life but also share their problems with the cheap netbooks, while often also being locked down to hell. Every single modern mobile computer also feels like it's meant to be disposable. I don't like that feeling. Then there are some etsy/drop-tier DIY (or close to it) projects that might address some of my concerns, but they're ridiculously overpriced and obviously aimed at webdev hipsters with more money than sense.

I thought about this a lot lately, and I think I want a mobile computer that's reliable, repairable, rugged, Linux mainline compatible (big bonus points if no blobs) and with good (12 hours; 18 would be excellent) battery life. It doesn't need to be mobile and elegant to the point that it's feather light and I can pull it out and brag with it at a local Starbucks, and it doesn't have to pass airport security (on account of the gargantuan battery it'll end up having). Just mobile enough that everything is in one nice piece I can carry around and use sitting in a forest or park (without having one eye on the battery gauge at all times), and later on maybe in an RV, should I ever be able to convince my wife. I realized I pretty much have to build it myself. I also think it might be fun.

for color and full or partial refresh video
There's a wide range of eink displays. Some only support full refresh (and need several seconds to redraw the entire screen); these are obviously not usable as a monitor, but are relatively cheap. Some do partial refresh but with significant ghosting, and some do partial refresh somewhat better. The faster the eink, the more expensive it usually gets. AFAIK the current state of the art you can actually buy (and isn't just seen in some demo) refreshes at 10-14 FPS. That's good enough for text and okayish for videos, but using a mouse cursor could already get annoying. Refreshing an eink screen rapidly kills it quickly though. The dots can only "change color" a limited number of times; eventually, they get stuck. On "professional" eink monitors (I say that as if there were many - it's basically just Dasung and Waveshare) you usually have to unlock the specific refresh modes via menu, and doing so voids the warranty. Realistically, they really are just for text.

I saw an AliExpress offer for a 2560x1600 12" eink panel with an FPGA controller board (apparently a repurposed bitcoin miner board) that gives it HDMI and refreshes at 14 FPS, for $150. I googled around a bit and think I know which github they got the FPGA code from. Finished eink screens of comparable specs cost about $650-700, so it's an absolute steal even if it's not perfect, and I bought it. It will probably not be appropriate for my project because of power consumption, but I always wanted an eink monitor.

microLEDs and other technologies
If I had a buck for every time I've heard some new display technology is gonna shift paradigms. Maybe them never quite making it is actually a side effect of bog standard LCDs already being pretty efficient and lacking big downsides, who knows. On all the low-end notebooks I owned, the screen was the most power hungry part, sometimes by far. There are also differences between the LCD screen technologies, e.g. IPS is more power hungry than the others. If you want a ton of battery time, these things start to matter. The best option here would be a MIPI DSI LCD driven directly by the SoC, but that pretty much limits the selection to the Pi and some very crappy LCD panels. MIPI is like the other exotic display techs in that way: there are no real standards in practice. HDMI is plug&play and just works, but is already a bit more power hungry.

I've heard that. Benefit of Amlogic being a US company?
It's one of the few chips that actually reaches the low idle/full tilt power consumption numbers it advertises with mainline Linux, and it seems to have otherwise good support. Second choice would've actually been Allwinner (a Chinese company), but for their SoCs the cpuidle drivers just don't seem to work, so the SoCs are always fully on, which of course makes them consume more power than necessary. That's what I meant in my earlier post about the devil being in the details with mainline support.

The rockchip drivers seem to be kind of a mess from what I could gauge. Maybe the ones for the older SoCs are better.
 
I think phones and other media consumption devices mostly profit from burst speeds with processors quickly ramping up to e.g. render a website to give both the feeling of fastness and also allow the processors to go back to sleep quickly to save on battery. I'm not sure how well such tasks would parallelize because I'm not familiar with such software, but I imagine "not well to a point that 12 cores have an impact over e.g. 8 cores". Mobile devices all suck anyways for more regular loads because battery life and because thermal throttling. I don't see that changing anytime soon. That might have encouraged them to go for less, but more powerful cores. The ARM SoCs primary moneymaker is still mobiles, after all.
The purpose of 10-12 cores in smartphones was to offer 3-4 clusters for performance/efficiency granularity, so it would be unlikely for them to all be active at the same time. What they have found is that it's better to ditch Cortex-A55/A510 cores because Cortex-A710/X4+ can be more efficient. They handle the bursty workload and power down, or can have clock speeds lowered. Additional L3 cache (of prime cores like X4) could also help. MediaTek's Dimensity 9300/9400 use all prime and big cores, as does the newly announced Oryon-based Snapdragon 8 Elite.

Spend more die area on big cores, not a third/fourth cluster. For the moment anyway. It's likely that ARM is to blame for designing bad Cortex-A5xx cores, especially in comparison to Apple's efficiency cores. ARM has made no attempt to bring the Cortex-A32/A34 to the ARMv9 era, either.

 
Meet the NUC 14 Essential (archive)
ASUS launches NUC 14 Essential Mini-PCs featuring Intel Alder Lake-N Refresh (archive)

ASUS has revealed the existence of Alder Lake-N Refresh CPUs. They are just as boring as expected, with an awful, non-future proofed naming scheme:

Intel N100 3.4 GHz -> Intel N150 3.6 GHz
Intel N200 3.7 GHz -> Intel N250 3.8 GHz
Intel N305 3.8 GHz -> Intel N355 3.9 GHz

I don't know where these clock speeds are coming from because they aren't on ARK yet, but I believe them. We want to see Skymont cores come to Atom with its gigantic IPC uplifts. But maybe these refresh chips will lower the prices of older models.

The more interesting and relevant news is Apple M4/M4 Pro-based Mac Minis, which are smaller and now starting with 16 GB of RAM instead of 8 GB:

Apple’s first Mac mini redesign in 14 years looks like a big aluminum Apple TV (archive)
 

Meet the NUC 14 Essential (archive)
ASUS launches NUC 14 Essential Mini-PCs featuring Intel Alder Lake-N Refresh (archive)
ASUS has revealed the existence of Alder Lake-N Refresh CPUs. They are just as boring as expected, with an awful, non-future proofed naming scheme:
[...]
Thank you very much for bringing these to the attention of the thread!!!

I've been using an ASRock N3700-ITX board as an always-on home server (samba, torrent, the works) for close to 8 years now and it's been a delight, but I was worried that it'd one day break, and I was unable to find anything comparable on the market, low power consumption being the most important thing for me.

The closest alternative I could think of was of course the Raspberry, but that comes with its known drawbacks.

Those new N(hah)-series processors and the systems being built around them look great. So far none with the built-in SATA ports the N3700-ITX came with, but I already have an external USB JBOD (FANTEC QB-35US3-6G) I bought for this purpose. It'll be nice to have a reason to use it. 2.5GbE is a nice feature, and an even better one on the models that come with two of those ports (firewalls and other network shenanigans).

Edit: I'm an idiot and there are also classical mobos available with onboard SATA ports, pretty much a drop-in replacement for the board I already have but with ~2.5x the compute performance and more RAM capacity. Neat!!
 
Anyone got a Flipper Zero, bros? The firmware on those devices is getting pretty elaborate, and now there are cheaper alternatives like the LilyGO T-Embed CC1101 (which still has no good firmware out).
 
Had some time to do some luggable research today. Wall of text incoming for people who might be interested in that kind of project and/or have any input:

When you start thinking about projects like this, you first need to think about what your goal really is, because it's very easy to get lost in nice-to-haves and lose sight of what you really wanted to do. I want a mobile system that works well as an almost-text-only, minimalist Linux system with very long battery life. Minimalist graphical interface via X, very lightweight software; Emacs is about the heaviest thing I wanna run. Network access does happen, but mostly on the level of ssh, APIs and server services = social media or internet browsing is not required/wanted. A little bit of multimedia would be nice to have, as in viewing PDFs, creating pixel art via lightweight programs like grafx2, and looking at pictures. Old DOS/Amiga emulation would be great. Video playback and streaming are not necessary. Most of the older SBCs top out at around 2-4 GB of RAM, which is a bit tight but not impossibly so with a lightweight customized Linux, especially if you don't wanna do any web browsing or (modern) gaming. An SBC with onboard eMMC (min. 16 GB) and a wlan chip would be ideal, as SD cards can be unreliable and surprisingly power hungry. I might want to add a USB modem later. The speed of the interfaces doesn't really matter much.

Some napkin math later and I don't think the S922X can be "it": it's too power hungry and also gets quite hot under load, and in an enclosed space that's gonna be plastic you'd need some active cooling to make it run stable under sustained loads in varying climates, which again uses power. I don't want to do the "edge of the seat" calculation many notebook manufacturers seem to do; I want more robust numbers that rather overshoot than undershoot. So I calculated with about 18 hours of runtime, 90% conversion efficiency from 3.7V cells, a margin of 20% (to account for cell aging/imperfections and misc. losses), and not idle or average power consumption, but 18 hours at full CPU load on all cores. I also consider the screen to be always on and consuming about 3-4W during that time. This limits the SBC to about 3W at full load, with a 3.7V ~40 Ah battery, which is in the ballpark of the amount of battery I'd be comfortable using (you kinda don't wanna stack batteries endlessly even if there's space). Again, napkin math; this number will increase with peripheral hardware. An upside of ARM SBCs is that you can edit their device tree files and sometimes experiment with underclocking and undervolting, which might make it possible to shave off a few mA here and there with no ill effect. Power consumption also usually rises, sometimes quite dramatically, towards the top end of the clock speeds, so there might be some measurable gains to be made there too.
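Spelling that battery sizing out (the 3.5W screen figure is just the midpoint of the 3-4W range above):

```python
# Battery sizing from the constraints above: 18 h at full load, 90% DC-DC
# efficiency, 20% margin. SCREEN_W = 3.5 is the midpoint of the assumed
# 3-4 W screen draw.
SOC_FULL_LOAD_W = 3.0   # power budget for the SBC at full load
SCREEN_W = 3.5          # always-on screen (assumed midpoint)
RUNTIME_H = 18
CONVERSION_EFF = 0.90   # conversion losses from 3.7 V cells
MARGIN = 1.20           # cell aging, imperfections, misc. losses
CELL_VOLTAGE = 3.7

wh_needed = (SOC_FULL_LOAD_W + SCREEN_W) * RUNTIME_H / CONVERSION_EFF * MARGIN
ah_needed = wh_needed / CELL_VOLTAGE
print(f"{wh_needed:.0f} Wh -> {ah_needed:.1f} Ah at {CELL_VOLTAGE} V")
```

That comes out to ~156 Wh, i.e. ~42 Ah at 3.7V, which is where the "~40 Ah" figure lands.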

Possible SoCs as of right now:

Allwinner H618 - 28 nm, 4x Cortex-A53, e.g. Orange Pi Zero 3 with 4 GB of RAM. Not fully mainline supported yet, but Sunxi has a proven track record of making it happen.
RK3566 - 22 nm, 4x Cortex-A55, e.g. Odroid-M1S with 4 GB of RAM, 64 GB eMMC, and also an m.2 slot (which would probably consume too much power). The extent of the mainline support seems to be "full"? It's slightly unclear to me right now.
Amlogic S905X3 - 12 nm, 4x Cortex-A55, e.g. Odroid-C4 with 4 GB of RAM, eMMC slottable.
Amlogic S905Y4 - 8 nm, 4x Cortex-A35, e.g. Vim1s. I fell in love with this thing and felt I could make 2 GB of RAM work before I realized it's vendor kernel only! Sadly no mainline support for that generation of Amlogic SoCs, and besides a one-liner about "an ongoing effort happening" I could not find a hint of it ever coming either. This thing idles at 0.5W!

These are low end ARM SoCs, meant for things like cheap "Android TV" boxes. Not an option: Raspberry Pi 4/5 - too power hungry, too hot (I was shocked at the power consumption numbers of the Pi 5 under full load; seems easier to just use an x86 system at that point). Pi Zero 2W - only 512 MB of RAM. Rule of thumb: if an IC comes with a metal heatsink/"lid" glued on from the factory, it's because it's hungry and gets considerably hot, to the point of straight up dying without it. I haven't fully looked at everything there is yet, and this Odroid-heavy list is that way because it's surprisingly hard to find any reliable power consumption numbers for ARM SBCs. Hardkernel are pretty much the only ones that seem to bother with making measurements and publishing them, which is odd considering usage scenarios for these are often critical in that department, I'd assume. Must be the well-known Chinese allergy re: publishing measurements of any kind. I noted the process sizes because, as far as I can see, these translate directly to idle power consumption.
These all overshoot my budget of 3W by several hundred mW (except the S905X3/C4); the one closest to fitting is unsurprisingly the Vim1s, which actually undershoots the 3W by all accounts I could find (~2.5-2.8W). I might wanna look into older SoCs, both for better mainline support and for lower power consumption.

I'm also almost tempted to attempt using the Vim1s with the vendor kernel, just because of how snugly it fits into my power consumption bracket, even though I am sure I would regret it later. They're on some ancient 4.15 kernel that's not even LTS anymore and has a list of unaddressed CVEs. Practically, it already starts with having no 2D acceleration whatsoever in X, which might be fine for lower screen resolutions but doesn't exactly feel like a good idea. Then again, this thing is no server and runs no "webapps", so the attack surface for old kernel CVEs might be limited. Famous last words?

Brings me to the screen, and some more napkin math. E-ink is most likely not an option and also conflicts with some of the goals, so we'd have to go with an LCD. The best way to save power with a normal LCD panel is to reduce its size (= fewer backlight LEDs). The screen should offer at least 80 columns and 25 lines with an 8x16 bitmap font (a bitmap font because those are always sharp and readable; 8x16 because that's pretty much the standard size of old and gives me a wide array of fonts to choose from). This font should also be comfortably readable at 50 cm without (GPU-)expensive upscaling, so the screen can't be too small or too high-resolution. All programs will run full screen without window decorations under a WM like ratpoison, so text applications will fill the screen. In your average book, a letter is about 2.5-4 mm tall.

Aliexpress has cheap 8" 800x600 panels that, according to the datasheet, typically consume about 2 W plus ~1 W for the controller board. On an 8" 800x600 panel the pixel density is ~125 PPI, so an 8x16 bitmap character would be about 3.25 mm tall. Very readable. 100 columns, 37 rows. Sadly, this is not a great screen otherwise: color reproduction will be poor and there's only one real angle you'll be able to see anything at. PDFs and other "multimedia" will be hard to look at.

Aliexpress also has 1280x800 IPS screens at 8". These come out to ~188 PPI. IPS has good color reproduction and angle stability, and the resolution (and form factor) would even be good for videos. Downside: an 8x16 character would only be ~2 mm tall. Over a prolonged period, that's eyestrain. We could double the character size (16x32), which would make them nice and readable at around ~4 mm, and then we could fit the classical 80x25.

Yes, I'm aware these are very small screens, but honestly, I don't think I'd mind much. The biggest problem would be covering the PDF requirement, which doesn't scale well to such things, but I might drop it altogether.
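For the record, the napkin math above can be reproduced with a small sketch (panel diagonals and resolutions as in the post; PPI derived from the diagonal, assuming square pixels):

```python
import math

def panel_metrics(diag_in, res_x, res_y, glyph_w=8, glyph_h=16):
    """PPI, text grid, and physical glyph height for a panel + bitmap font."""
    ppi = math.hypot(res_x, res_y) / diag_in   # pixels per inch along the diagonal
    glyph_mm = glyph_h / ppi * 25.4            # glyph height in millimeters
    return int(ppi), res_x // glyph_w, res_y // glyph_h, round(glyph_mm, 2)

# 8" 800x600 (4:3): ~125 PPI, 100x37 cells, ~3.25 mm glyphs
print(panel_metrics(8, 800, 600))
# 8" 1280x800 (16:10): ~188 PPI, 160x50 cells, ~2.15 mm glyphs
print(panel_metrics(8, 1280, 800))
```

The same function doubles as a sanity check for any other panel listing you come across.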

Not that simple a project if you don't want to end up with some useless gadget with 2 hours of battery life. What mostly makes it difficult is the spotty Linux support/documentation of many ARM SoCs. Even "good" support is often anything but, with important features just plain missing or working poorly. The older the SoC, the better the support usually gets. Android is king with these things and almost all of these SoCs have flawless Android support, but if I wanted that I'd just buy a tablet. It also needs to be stressed how inefficient these ARM SoCs are compared to, say, an Apple M1. Well, it's low-end, minimalist computing. Always fun to work with the limits.
 
S905Y4. (8nm,
Small erratum: it's also 12 nm, like the X3. The big differences are the Cortex-A55 in the X3 vs. the A35 (the A55 are actually faster but less efficient) and AV1 decoding, which will probably never get implemented in mainline Linux anyway. Apparently Amlogic's VPU work in the kernel is stale and works poorly. I dug around in mailing lists and sources and there's actually quite a bit of ongoing work for the generation the Y4 belongs to. A repeated observation: good mainline support doesn't mean good mainline support. Compared to vendor kernels, the mainline code is often quite incomplete.

Now, after doing all my research, I don't understand the popularity of this SoC. Yes, it's powerful for this category of ARM SoCs (more powerful than all the others here), but it only touches the lower end of the x86 world while consuming about as much power (or at least not less in a way that truly matters if plugged into the wall). You also need blobs to even init it, so it shouldn't draw the Stallmanite crowd either. The driver support is also kind of a mess right now and will probably always be behind some boring N100. So you end up with something like a poorly supported low-end PC that's also most likely rather expensive for what it is. Why do people love this thing so much?

experimental retards
Hi

I stumbled into these in my research too; they're actually quite good, and the support seems to hit fewer hurdles as well. But that part is hard to judge. I think they currently enjoy a lot of excitement among developers, and that can be over in a pinch.

I also figured out which SoCs you can run absolutely blobless, in a way that would make Stallman use them, with good mainline support:

Allwinner A20
Allwinner A64
Rockchip 3288/3399
i.MX 6 (the only one that is both blobless and has a serious commitment to the mainline kernel by the manufacturer, including the GPU)
Some older Amlogic (????)

Everything newer needs at the very least non-free blobs to boot. What they all have in common is that they're old (some positively ancient) and hover around average Core 2 Duo performance at best. It's as if we're not allowed blobless computers faster than that generation. Quite curious.
 
Also the driver support is kind of a mess right now
I got one years ago. Good theoretical specs, lots of PCIe lanes, RAM, etc.

But the support is crap. Say what you want about x86 but I can just click a button and have 3d support, etc etc etc etc.

The only decent support is the Raspberry Pi. And it doesn't have the performance or expansion.
 
I got one years ago. Good theoretical specs, lots of PCIe lanes, RAM, etc.

But the support is crap. Say what you want about x86 but I can just click a button and have 3d support, etc etc etc etc.

The only decent support is the Raspberry Pi. And it doesn't have the performance or expansion.
If Intel was more aggressive with Atom, it wouldn't even be a conversation. But N100/N305 must be low-margin products that have to be made on Intel nodes.

i3-1215U and its several refreshes have made it into a lot of cheap laptops. Those should be going into more mini PCs and SBCs. That chip has dual-channel memory support, 64 EUs (of 96) instead of 16-32 EUs, and two P-cores.
 
But the support is crap.
Yeah, that's pretty much the theme. "Amazing support" is what they say about the Pi, but it's honestly not always amazing for the Pi either, if you really go digging.

Something you can basically forget with most non-Pi systems (and Pi systems once in a while, apparently) is accelerated video playback in Linux in any normal way: stale code that likes to break all the time, often only subsets of H.264 supported, only working with a specific heap of software and patches, etc. With x86, video just works. And even if it didn't, low-power x86 can usually power through software decoding of at least 1080p. These small Cortex cores can't, especially if you want them to do anything else at the same time.

That's where the "multimedia" usage already dies.

Also, Linux and the Linux userland aren't optimized for these the way Android is. A lot of work went into making Android feel smooth even on very weak hardware. The GPU is used heavily. Then there are sleight-of-hand tricks like the little application-switching animations: they're not just there to look pretty but to make the system feel more responsive, hiding potential lag from the context switch and swapping into memory. Tons of stuff like that. Sadly, Android has the same problem on SBCs: no support. They drop one Android version on you (sometimes with some "interesting" Chinese background processes, I'm sure) and it's never getting updated again. There are also things like CoreELEC/LibreELEC that meet you halfway, usually with the vendor kernel and bespoke builds of Kodi, which, to be fair, might be good enough for some SoCs and work reasonably well. In general though: these things need all of their limited hardware fully utilized to make the most of them, and your average Linux distro and kernel simply doesn't do that, and won't be made to do that either.

The only interesting usage scenario for these SoCs is minimalist computing. The low-end SoCs are more interesting for that, because the high-end non-Apple SoCs don't seem that much more efficient than x86 to me, and x86 just works a lot better. If mainline supported, they're also relatively blobless compared to x86, and don't have fun things like Intel's ME (probably), if that's important to you.

Basically, if you want to buy one of these, IMO aim for the sub-$100 low-end ones, and be aware you're going to get roughly Core 2 Duo or Cherry Trail performance or lower, at a fraction of the electricity cost of those old systems. If you're happy with that - and there are usage scenarios - they're good. If you want a modern desktop experience with complex JS websites, video streaming/playback, multi-monitor setups or games, you're not going to be happy. The high-end ones are utterly uninteresting vs. low-end x86, IMO: you get all the downsides with none of the advantages. This might change one day - that day is not today, and won't be tomorrow either.

Me, I'm gonna grab an armful of them now and do some tests, since I've found out all I could online. Online reviews are absolutely useless; wiggling your mouse around in Ubuntu tells me nothing about the performance of the device.
 
So it looks like the Amlogic SoCs are the most efficient overall, maybe not that surprising considering they're around 12 nm when all the Chinese competition seems to be 22 nm+ at best. I actually ended up also getting the Vim1s because I really wanted to see if Cortex-A35 cores are really more efficient, and that's the only available SoC/SBC I could find with them. I pitched the S905Y4 (4x Cortex-A35 @ 2 GHz) in the Vim1s against the S905Y2 (4x Cortex-A53 @ 1.8 GHz) in the Radxa Zero. Amlogic has an absolutely insane number of SoCs with tiny, minor differences, and the naming schemes are super confusing.

So, turns out the S905Y2 consumes slightly more power at the lowest idle I could get (a difference of roughly 200 mW, ~0.5 W vs. ~0.7 W, but this is small enough that it might as well come down to board differences rather than SoC differences), yet actually consumes less under full load (~250 mW less, clocking in at an impressive ~1.7 W with all cores fully loaded. Yes, you read that right). Also, even though the A53 cores are clocked lower, they're slightly faster than the A35 cores. Not dramatically, but the difference is there. To really see the efficiency of the A35 cores you'd probably need to record power consumption over a longer period, but I'm going to wager a guess and assume it's not dramatic. The S905Y2 also has no integrated Ethernet, but does have USB 3.
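To put those sub-watt deltas in perspective, quick napkin math on 24/7 operation (the 0.30 EUR/kWh electricity price is an assumed example, not from the thread):

```python
def yearly_cost(watts, eur_per_kwh=0.30):
    """Energy drawn and cost of running a device 24/7 for one year."""
    kwh = watts * 24 * 365 / 1000  # watt-hours per year, converted to kWh
    return round(kwh, 1), round(kwh * eur_per_kwh, 2)

print(yearly_cost(0.5))  # ~4.4 kWh/year at 0.5 W idle
print(yearly_cost(0.7))  # ~6.1 kWh/year at 0.7 W idle
```

So the 200 mW idle difference amounts to under 2 kWh per year: real, but hardly decision-making territory.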

I also got the Orange Pi Zero 3 with the Allwinner H618. It idles at about 1.1 W and goes to 2.3 W under load, which doesn't sound like much of a difference but is actually quite big considering its A53 cores top out at 1.5 GHz. The H618 is in general quite noticeably slower, and it gets HOT; I assume it ran into thermal throttling, because it performed quite a bit worse than the Amlogics. (I didn't test it much because it quickly became clear it was the worst contender.) In my completely unscientific back-of-finger-on-SoC measurement, I'd say the Amlogics merely get warm. Small heatsink and you're good.

The nice thing on top is that the S905Y2 enjoys mainline support. The Radxa Zero I got is also better equipped than the Vim1s, with 4 GB of RAM (the maximum the SoC supports) and 32 GB of eMMC. It has the same form factor as a Pi Zero, to boot. That thing is tiny. At over 100 euros it's also really expensive for what it is; I've noticed these Amlogic boards in general are. I have fond memories of my Atom netbook, so this will be good enough for me. I'm aware it wouldn't be for many others.

I also got the e-ink screen + FPGA driver board, and the board was toast. The quality of the soldering was very poor and the HDMI receiver (ADV7611) was mangled (bent pins), so even though the screen seems to work, the computer doesn't see it. Instead of dealing with the Chinese seller and getting a similarly mangled board in a month, I ordered a new ADV7611 and will swap this one out. It was only three bucks.
 
I actually ended up also getting the vim1s because I really wanted to see if Cortex A-35 are really more efficent and that's the only available SoC/SBC I could find with them.

The funny modern SoC with Cortex-A35 and AV1 hardware decode. Good for a (slow) TV box.

Do you think ARM has made a mistake by not updating the Cortex-A35 (or Cortex-A34, Cortex-A32) for the ARMv9 era?
 
Good for a (slow) TV box.
It's actually not terrible. I tried Android on it, and even emulation of pretty advanced retro systems works well; browsing is *mostly* fine too. Honestly, I've used worse that cost more. The biggest limitation is the RAM, but that also depends on the use case. By no means "modern computer" performance though; you've gotta have a use case. I didn't quite understand why people like these for retro gaming, but if you look at the space requirements and power consumption of even the cheap ones, it can make sense to use them as a "retro console". I mean, I can watch movies and emulate on my former Ryzen APU just fine and better, but decoding a video or just running the emulator there means 10-20 W of extra power consumption at the very least, even with the iGPU and even though the Ryzen doesn't struggle with it by any metric. There are newer x86 SoCs now, but I don't think they use less power, probably quite the contrary. You could comfortably run half a dozen of these little boards in that bracket with room to spare. That's x86 for you; idle numbers alone aren't the interesting part.

Cortex-A35
I'm not as into the topic of processors as I was back when x86 wasn't the only desktop platform and 32-bit was state of the art, and I've read more about modern SoCs and CPUs in the last few weeks than in the decade before, so I don't have a strong opinion. But this thing feels kind of pointless the way it is now, and yes, mostly outdated. I'd really have to measure normal usage over a while to see whether it's actually more efficient with this software, in this usage scenario. It might be! Since these go up to 2 GHz, and power consumption usually rises steeply at the upper end of the frequency range, limiting it to 1.8 or even 1.5 GHz might bring some measurable gains (I tested peak load, after all; that's why a long-term test with "normal usage" would be interesting). But when I have an application where I'm shaving double-digit milliwatts off my design with a razor blade, using these comparatively heavy cores doesn't come to mind to begin with. It seems I'm not alone in that, considering I barely came across the A35 anywhere in the last few weeks. I do have to stress though: it's a bit slower than the lower-clocked A53 in that very similar design. Not by a lot.
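For anyone who wants to try that frequency-capping experiment: it can be done through the standard Linux cpufreq sysfs interface by writing to each core's scaling_max_freq. A minimal sketch (needs root on real hardware; the sysfs_root parameter and the 1.5 GHz value are my additions for illustration and testability):

```python
from pathlib import Path

def cap_max_freq(max_khz, sysfs_root="/sys/devices/system/cpu"):
    """Write scaling_max_freq (in kHz) for every CPU exposing a cpufreq node."""
    capped = []
    for f in sorted(Path(sysfs_root).glob("cpu[0-9]*/cpufreq/scaling_max_freq")):
        f.write_text(f"{max_khz}\n")          # kernel expects the value in kHz
        capped.append(f.parent.parent.name)   # e.g. "cpu0"
    return capped

# e.g. cap_max_freq(1_500_000)  # cap all cores at 1.5 GHz
```

Whether the governor honors the cap efficiently depends on the vendor's DVFS operating points, so measuring at the wall is still the only real answer.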

I also got a few screens to get a ballpark estimate of what they consume. With screens, power consumption seems to track size. These are all at "comfortable brightness", so it's subjective; also, I got full monitors, not bare panels, so there's that too. My measurement technique here was simpler than in previous posts, so the numbers are less accurate. On the lowest end I have an 8" 800x480 which consumes around ~1.5 W - that's the one I call the Eee PC special. On the highest end, I have a 2560x1600 13.3" I've owned for a while and use with my desktop PC (what can I say: I like small screens, sitting close to them, high PPI and sharp fonts), which consumes ~3 W. Then there's one in the middle, a 1280x800 10.1" screen that also consumes around 3 W. Resolution doesn't seem to matter much on the panel side; it's mostly the size and, with it, the number of backlight LEDs. At 100% brightness they go to about 5 W. These are all IPS panels and all look nice, with good color reproduction and pretty good (for IPS) black levels; even the 800x480, although small, looks good - very unlike the original Eee netbook's screen, which was also an inch smaller. For completeness' sake, I also measured the broken FPGA e-ink controller: it comes out at around 5-6 W, and that's without the panel connected.

I want to avoid going HiDPI on this luggable because it doesn't always work that well, and scaling is fraught with its own problems and is a full commitment. Since I won't browse the web on it, vector font clarity doesn't matter much, and PPI doesn't matter to bitmap fonts at all - they always look the same. The 1280x800 10.1" screen is in kind of an awkward place: not quite HiDPI, but dense enough that things appear small. An 8x16 font would net me 160 columns and 50 rows, with characters about 2.7 mm tall - slightly too small to read well at a normal distance. Since you can't upscale bitmap fonts fractionally, you can only double the size, which leads to the classic 80x25 with 5.4 mm characters: way too big. It'd feel cramped.

An in-between option would be a 10" screen at 1024x600. It has about the same PPI as the 8" one, so the characters would be about the same size at roughly 3.5 mm - very readable. It has more pixels (~28%) and noticeably more physical screen real estate. It also has an awkward aspect ratio of roughly 17:10, which guarantees that nothing will scale well to it, be it retro resolutions or movies. So for my specific usage scenario, I'm not actually sure it'd be any better.
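Since bitmap fonts only scale cleanly by integer factors, the realistic options per panel can be enumerated. A sketch over the two 10-inch candidates discussed (PPI approximated from diagonal and resolution, glyph sizes for an 8x16 base font):

```python
import math

def font_options(diag_in, res_x, res_y, base_w=8, base_h=16, max_scale=3):
    """For each integer scale factor: (scale, glyph height in mm, columns, rows)."""
    ppi = math.hypot(res_x, res_y) / diag_in
    return [
        (s, round(base_h * s / ppi * 25.4, 1), res_x // (base_w * s), res_y // (base_h * s))
        for s in range(1, max_scale + 1)
    ]

# 10.1" 1280x800: 1x is ~2.7 mm (too small), 2x is ~5.4 mm (too big)
print(font_options(10.1, 1280, 800))
# 10" 1024x600: 1x already lands at ~3.4 mm, close to the 8" 800x600 panel
print(font_options(10, 1024, 600))
```

The 10.1" panel's problem is visible immediately: there's no integer scale that lands in the comfortable 3-4 mm band.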

Almost can't believe it, but I'll probably design around the 8" screen. It just suits me best.
I even played back some TV shows on it and it was alright. Not exactly "home cinema" level, but I could absolutely imagine watching some of my old SD-quality TV shows and old movies on it as a distraction. Pixel art was fun too, as was EPUB reading via Emacs' nov.el. On a screen like this you of course run everything full screen. I never minded small screens to begin with, so YMMV.

I also disregarded SPI-driven e-ink, which might actually be very power efficient if you program carefully and race to sleep, but that would realistically lock me into text completely. Since the 8" is so small, I might actually be able to add a secondary SPI-controlled e-paper panel as a kind of small text display. That's a maybe though.

Radxa also has a "Pi Zero form factor" board with the Amlogic A311D (4x A73 + 2x A53, mainline supported), which is basically the S922X with an NPU added. If I could limit that SoC to ~4-5 W somehow, I could make a reasonable argument for budgeting power for it. It would also need a considerable heatsink for passive cooling. I doubt it'd be feasible considering the Odroid N2L (S922X) peaks at 6 W, but it would be quite attractive if the idle power consumption comes down with whatever magic I apply to shave off a full two watts. I think these SoCs are probably as efficient as they can be, so I don't like my chances. It'd depend on how hard they pushed the silicon.
 