SBC / Low Power boards general - Raspberry Pi and what not

Allwinner A527, T527, and A733 datasheets, user manuals, and Linux SDK released
Allwinner SoCs were some of the most popular processors for SBCs in the years 2012 to 2015 thanks to their feature set, low cost, and support from the sunxi-linux community. Since then, interest has plummeted due to the lack of interest in open-source software from Allwinner management and the constant release of new low-cost “Cortex-A7” SoCs, and Rockchip's RK3566 and RK3588 are now preferred for non-Raspberry Pi SBCs. But we first heard Allwinner say they planned to work on mainline Linux support for the Allwinner A527/T527 at the Orange Pi Developer Conference in March 2024, and this was further confirmed by Radxa with the Cubie A5E launch in January 2025. About six months have passed since then, and the lack of progress on the software front has some users doubting Allwinner’s commitment.

The release of the documentation and SDK is one step in the right direction.

 
  • Informative
Reactions: AmpleApricots
I've played around with the power saving knobs on the MiniPC. With the AMD P-State driver there's the option to turn off core boost and to set a "power consumption hint" (four options, from full performance to full powersaving, or an unsigned byte) for the EPP in active mode, in which the hardware mostly decides how to clock the SoC. I came up with a "desktop profile" where I turned off core boosting and set the hint to powersaving. I'm currently compiling llvm in gentoo with all cores under full load, the CPU sits at 60C, the case is lukewarm around the back, and the CPU fan isn't really audible. My assumption is that the most dramatic effect came from turning off core boosting, as the power consumption (and heat production) of silicon like this usually rises exponentially at the top end of the spectrum.

With Ryzenadj you also have the option to dynamically set a TDP target, similar to intel's RAPL, if your kernel offers /dev/mem. The most dramatic power savings can probably be had there with a profile of, let's say, 25W, though things probably do get noticeable in the speed department then. Most firmware for these boxes probably allows you to set this too, but it's more interesting and useful to do it on the fly in Linux, IMO. There is probably also the option to undervolt in there; I haven't played around with that yet.

I didn't get around to making proper measurements at the power socket, but the driver readings make the power savings out to be quite dramatic with these settings alone (~35W now during compiling, vs 60+ before), and from the sheer, subjective difference in heat output I'm inclined to believe it. If you're on battery with one of the new ryzens, there's probably a lot of savings to be had by playing with these knobs. Theoretically it should be possible to only boost certain cores you pin tasks to (might make sense with some games) but in practice, I'm not sure if the SoCs can up the voltage only for one or two cores like that. Maybe that's documented somewhere. I'm also pretty sure the linux scheduler is not smart enough to prefer boosted cores for certain tasks by itself. There's also quite a few knobs for the iGPU I haven't played around with yet either.

I haven't noticed any difference in perceived quickness in normal desktop interaction. For games I have a "performance" profile I turn on by hand.

In a perfect world the scheduler would know when a boost is truly warranted and advise the hardware as such, but in practice you have a spinning circle on some webshit that's drawn via a tight loop, and 6 of 8 cores will happily boost perpetually until they run into thermal limits to process that tight loop, without even making the circle spin any more smoothly. There isn't even a straightforward mechanism in linux to put a CPU-time quota on specific tasks, so knobs like these are the most helpful, in my experience.

If you want to play around with these knobs yourself, they reside at:
/sys/devices/system/cpu/cpuN/cpufreq/energy_performance_preference
and
/sys/devices/system/cpu/cpuN/cpufreq/boost

You need a recent kernel for boost toggling to work with AMD P-State. Also consider the "passive" and "guided" modes of the P-State driver, which give you more control over clocking. Be aware that they turn off energy_performance_preference.
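
If you want a starting point, my "desktop profile" boils down to roughly this (a minimal sketch, assuming amd-pstate in active mode, root, and a kernel new enough for per-policy boost; the EPP strings your kernel accepts are listed in energy_performance_available_preferences):

# "desktop profile": no boost, strongest powersaving EPP hint
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo 0 > "$cpu/cpufreq/boost"                              # disable boosting on this core
    echo power > "$cpu/cpufreq/energy_performance_preference"  # full powersaving hint
done

Swap "power" for "performance" (and re-enable boost) and you have the games profile.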

RE: Luggable, I have all the parts, will soon start putting it together.
 
NanoPi R76S dual 2.5GbE SBC and router supports up to 16GB LPDDR5, M.2 WiFi module, HDMI 2.0 video output

RP2350-PiZero – An RP2350 board with Raspberry Pi Zero form factor, mini HDMI, 16MB flash

If you're on battery with one of the new ryzens, there's probably a lot of savings to be had by playing with these knobs. Theoretically it should be possible to only boost certain cores you pin tasks to (might make sense with some games) but in practice, I'm not sure if the SoCs can up the voltage only for one or two cores like that. Maybe that's documented somewhere. I'm also pretty sure the linux scheduler is not smart enough to prefer boosted cores for certain tasks by itself. There's also quite a few knobs for the iGPU I haven't played around with yet either.
AMD's power/scheduling options could get a lot more complicated in the future. Strix Point introduced a mix of classic and dense cores, up to 4+8, in two different core complexes, with 16 MiB of L3 for the 4 "P-cores" and only 8 MiB of L3 for the 8 C-cores. This is different from the straightforward single-CCX designs of Cezanne, Rembrandt, and Phoenix.

Strix Point may have been a fallback design because a chiplet-based version or TSMC N3 wasn't ready. With Zen 6 APUs, AMD may include "Zen LP" cores, similar to Intel's LP E-cores. In the chiplet-based Medusa Point variant, two LP cores would be located in the I/O chiplet, like the two Crestmont LP E-cores in Meteor Lake's SoC tile. If your workload or background tasks only need 2 cores / 4 threads of low-clocked Zen 5/6 with less L3 cache than usual (probably 4 MiB or less), the CPU chiplet could be turned off to save even more power.

These could also come to the I/O chiplet in Zen 6 desktop chips, making the flagship technically have 26 cores, 52 threads.

[attached: two video frames from the leak]

In this leak, there are multiple designs with all three core types. Desktop and the Medusa Point/Halo chiplet APUs would be simpler and share 12-core CPU chiplets, but have the LP cores off to the side. TSMC's N2P and N2X nodes can be used interchangeably for these CPU chiplets, depending on clock speed targets.

LP cores could be a great solution for reducing idle power... if the OS uses them properly. And if they are very compact cores that can clock to e.g. 2-3 GHz while using a fraction of 1 Watt, AMD should add more than two to future I/O chiplets. Some products could get by with only LP cores. Put 8 of those in a Steam Deck 2. Steam Deck 1 uses quad-core Zen 2 at only 2.4-3.5 GHz.
 
  • Like
Reactions: AmpleApricots
If you're on battery with one of the new ryzens, there's probably a lot of savings to be had by playing with these knobs.
No laptop vendor is going to give you access to those. I would dearly love to undervolt my Ryzen Lenovo, but it just isn’t an option they give me. They don’t even let me set the fan curves; it’s basically locked to full speed at any temperature over forty degrees, which on a modern x86 laptop happens around halfway through the boot process.
 
What should I do with some leftover Raspberry Pi Zeros? I think I have two 2Ws.

>RetroPie
I have enough emulation devices and RetroPie sucks.
>PiHole
I tried it years ago and had some issues with my internet going down when something went wrong, and reworking DNS to not rely on it being up 24/7 was a pain. Unless it’s gotten way more stable since then, I’ll pass.
>VPN
I don’t really need that, I already have a VPN on my desktop and phone.

I’d like to run an ArchiveTeam Warrior, but they seem to be against running it on ARM for some reason.
 
What should I do with some leftover Raspberry Pi Zeros? I think I have two 2Ws.
I'm lazy so I haven't done this yet. But for myself I want to set up a small local file server. It can run entirely on a microSD card since I don't need to share even 1 GB of files.

Then I was thinking of setting up a dedicated Internet radio thing. It could just connect to a Bluetooth speaker, and ideally be controllable by any computer on the network through a browser.
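
For the file server part, honestly something as dumb as Python's built-in HTTP server would probably cover read-only sharing of under 1 GB (the port and path here are just example values):

python3 -m http.server 8080 --directory /home/pi/share   # then point any browser on the LAN at http://<zero-ip>:8080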
 
  • Like
Reactions: Two Dollars
What should I do with some leftover Raspberry Pi Zeros? I think I have two 2Ws.
Create a solar powered, public access wifi AP with offline zim file server (wikipedia, gutenberg, etc).

PI Zero W access point: https://braindead-security.blogspot.com/2017/06/building-rogue-captive-portal-for.html

Web service: https://kiwix.org/ OR https://internet-in-a-box.org/

Zim files: https://download.kiwix.org/zim/

If you use Kiwix, kiwix-serve is installed along with kiwix-desktop. You don't have to buy their SD cards.
 
  • Informative
Reactions: Two Dollars
What should I do with some leftover Raspberry Pi Zeros? I think I have two 2Ws.
Make a USB password safe. Mine has a web UI that I authenticate to, and when I click the button in the web UI it sends the password over USB to whatever it's connected to. I deal with a ton of different customers who sometimes send me laptops, so hooking this up to my KVM was the easy way to deal with them all.
 
Create a solar powered, public access wifi AP with offline zim file server (wikipedia, gutenberg, etc).

PI Zero W access point: https://braindead-security.blogspot.com/2017/06/building-rogue-captive-portal-for.html

Web service: https://kiwix.org/ OR https://internet-in-a-box.org/

Zim files: https://download.kiwix.org/zim/

If you use Kiwix, kiwix-serve is installed along with kiwix-desktop. You don't have to buy their SD cards.
I can feel myself going down a rabbit hole now… Kiwix needs more power than a Pi Zero 2W, but I might also have an extra Pi 4 lying around; I’ll have to check. Any preference between Kiwix and Internet in a Box, or are they too different to compare?
 
I can feel myself going down a rabbit hole now… Kiwix needs more power than a Pi Zero 2W, but I might also have an extra Pi 4 lying around; I’ll have to check. Any preference between Kiwix and Internet in a Box, or are they too different to compare?
I was just going down this rabbit hole a bit myself. IIAB has pluggable services such as nextcloud that look potentially interesting. However, Kiwix is easier, more mature, and there's a desktop version.

If you're on Linux, I can save you some googling. Install the Kiwix flatpak or "pacman -S kiwix-desktop kiwix-tools". The web service is started by going to the folder with your downloaded zim files and running "kiwix-serve -p 8080 *.zim".

I've heard that kiwix-serve will run on the pi zero, but I haven't tried it yet.
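
The whole sequence on the Pi would look roughly like this (the zim URL is a placeholder; pick whatever you want from the download directory linked above):

mkdir -p ~/zim && cd ~/zim
wget https://download.kiwix.org/zim/<category>/<your-pick>.zim   # grab whichever zim file you want
kiwix-serve -p 8080 *.zim                                        # then browse to http://<server-ip>:8080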
 
  • Feels
Reactions: Two Dollars
Are there any useful solutions that can run on an RPi 2 Model B in the current year? I've had it lying around for years without knowing what to use it for. Would it be competent enough to function as a NAS or a media server, so I could finally stop worrying about my data instead of manually backing things up onto an external HDD?
 
No laptop vendor is going to give you access to those. I would dearly love to undervolt my Ryzen Lenovo, but it just isn’t an option they give me

These are hints you give to the EPP via the P-State driver; as long as the EPP isn't turned off (there's no real reason to do that), you should be able to set at least these. My MiniPC has a similarly anemic BIOS, down to no fan curve settings, and I can manipulate all of these, including TDP via Ryzenadj. I'm not even sure the vendor can lock RAPL via firmware like with intel. I'd try it first. I think not unlocking the fan curve settings is not a vendor laziness issue, but something AMD holds over vendors now for their mobile APUs, actually. I do not remember the reasoning, but there was one. Warranty? (not end user, but AMD towards the vendor)
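
To make the Ryzenadj part concrete, capping it at roughly 25W looks like this (a sketch using the commonly documented flags, with limits in mW; needs root and /dev/mem, and check ryzenadj --help on your build):

ryzenadj --stapm-limit=25000 --fast-limit=25000 --slow-limit=25000   # sustained and boost power targets, ~25W
ryzenadj --info                                                      # read back the currently applied limits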

AMD's power/scheduling options could get a lot more complicated in the future.
It's always like this. The hardware can do all sorts of really cool things (I mean, think about it: you can tell the hardware to stay within a certain TDP and it can intelligently select the right instructions/loading of the cores etc. to do that almost perfectly, which is pretty cool in itself); it's just the software that often doesn't take proper advantage of it. It's why I also left the P-State driver in active mode and let the firmware of the ryzen decide. I don't trust myself to make better decisions than it, and I don't trust the kernel either.

That said, there has already been some work in the Linux kernel to be aware of these core layouts via topoext and SCHED_CLUSTER, at least for current ryzens (cache locality etc.); that's already in, and I'm sure it'll also work on future, more complicated layouts. The driver is also aware of "preferred cores", meaning the cores reported by the hardware as being able to boost the highest/best. Whether that is actually used intelligently, though, I have no idea.

AMD P-State is already aware of AMD's "C" cores vs. normal full cores. How well that works in practice I have no idea either. I know that until very recently, the advice in linux for intel's efficiency cores was to turn them off. I can imagine how to improve load balancing so these cores don't cripple performance, but I can't really imagine how a SoC or scheduler can make intelligent decisions about where a task belongs without any hints from userspace, even though the AMD driver claims to do exactly that via hardware feedback to the OS scheduler and intelligent classification happening there (X86_FEATURE_WORKLOAD_CLASS, AMD_HFI). Personally, I think I'd just pin things like ntpd to these cores and feel really clever and good about myself.
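
Something like this, purely as an illustration, assuming the C cores happen to enumerate as CPUs 4-11 on a given part (check lscpu --extended or the sysfs topology first):

taskset -cp 4-11 "$(pidof ntpd)"   # move a running ntpd onto the assumed C cores
systemctl edit ntpd.service        # or make it stick: add CPUAffinity=4-11 under [Service]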

A lot of ARM SoCs sport these big.LITTLE-style configurations too, and I'm pretty sure it's completely ignored there by whatever kernel happens to work on those boards.

Re: the discussion about what to do with anemic ARM SBCs: try to live a terminal-based life on them. No web browser, just things like ssh and emacs. Lightweight epub and pdf readers. Be astonished how much shit you get done all of a sudden. Then build a ridiculous case around them with a carrier board and a battery that lasts days and embarrass your wife in public.
 
These are hints you give to the EPP via the P-State driver; as long as the EPP isn't turned off (there's no real reason to do that), you should be able to set at least these. My MiniPC has a similarly anemic BIOS, down to no fan curve settings, and I can manipulate all of these, including TDP via Ryzenadj. I'm not even sure the vendor can lock RAPL via firmware like with intel. I'd try it first.
Ugh, I hate having to use weird third-party scripts to configure things that really should just be available to me from the vendor.
I think not unlocking the fan curve settings is not a vendor laziness issue, but something AMD holds over vendors now for their mobile APUs, actually. I do not remember the reasoning, but there was one. Warranty? (not end user, but AMD towards the vendor)
An absurd decision, in that case. Apple laptops not only last a lot longer and come with higher-quality screens and trackpads, they also don't have fans that go into jet-turbine mode immediately during boot.
 
I've been playing around some more with the Mini PC and also tried llama.cpp. I haven't tested a lot yet, but models up to 24B are quite usable, which I didn't expect. Mistral Small 24B at 8-bit runs at ~5 tokens/second generation speed on an empty context, slowing down to around ~4 at 10k tokens of context, with around 52 tokens/second prompt processing speed (more heavily quantized models wouldn't be any faster; in fact, they might even end up slower, depending).

This is on the GPU via generic vulkan support, because I'd have to compile like 40 packages for HIP (I'm on gentoo) and I didn't want to go through all that. I didn't even try to look into the NPU; it has 10 TOPS and was definitely not made for LLMs.

CPU-only inference isn't much slower for generation (3-4 t/s), but prompt processing speed falls off a cliff and is about ten times slower at ~5 t/s. That almost doesn't feel right, but some googling told me it might be.
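
For reference, this is roughly the setup; the model filename is a placeholder, and the cmake flag is what the current llama.cpp docs describe for the generic Vulkan backend, so double-check it against your checkout:

cmake -B build -DGGML_VULKAN=ON && cmake --build build -j        # build with the Vulkan backend
./build/bin/llama-bench -m Mistral-Small-24B-Q8_0.gguf -ngl 99   # all layers offloaded to the iGPU; -ngl 0 for CPU-only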

Not terrible, all things considered, even though the Macs are quite a bit faster. Just two years ago this would've been mind-blowing. I remember people saying that we'd never run GPT-4 (the first one) performance-level models at home, "maybe in twenty years" if that. Hehe.

Nowadays, at least for me, it makes a lot more sense to use one of the various and very cheap APIs and their huge models. Still, it's quite cool on such a small machine and definitely in a usable performance bracket if you wanted it.
 