Radxa has made at least three of those Pi Zero clones now. I think they had trouble with SoC availability.
> I think they had trouble with SoC availability.

Quite possible, I assume the SoCs will have recovered but a lot of other parts never went back to normal. Funnily enough, the Zero is more expensive than the Zero 2 Pro, which has the better A311D. I can only stress that I don't think this SBC and SoC are worth $100; it feels way too expensive for what it is. I just needed this specific part. You can get the same SoC in cheap TV boxes and TV sticks, but I needed the board as it is.
> I spent almost the entire week doing stuff on the Radxa off and on.

Please specify the chip somewhere when you longpost. I guess you meant Radxa ZERO 2 Pro with Amlogic A311D.
Are these exclusively the newer revision ones, or a mixed bag with some of those with the GPIO bug thrown in? The problem with the Pis has always been that you get a few nasty bugs thrown in with the early revisions.
> Amlogic A311D

Longpost incoming. Nope, S905Y2. The other one would've blown the power budget. While you can sometimes screw with the regulators in the device tree file and also limit the maximum clock and things like RAM clock, I didn't like my chances of taking 2 watts off that device. Didn't even try. That's at the very least 1/3 of its total power consumption from what I could gather. It's just too much. And even with the 2 W off it would consume a bit more than I like, especially now that I have some more accurate screen measurements.

The only thing that spoke somewhat for that maybe being possible was the S922X apparently being very overclockable, which Odroid openly advertises for the N2+, hinting at the design not being at the "silicon limit", if you want to call it that. But even if we said "then let it consume more and shorten the battery life somewhat", that SoC definitely needs me to add active cooling or a pretty big heatsink to the case, and that's just a no.

With projects like this it's always important not to get lost in "nice-to-haves". The main role of this system is being a text-oriented Emacs machine with incredible battery life and a mechanical keyboard, able to remote-connect to other computers in various ways to do the heavy lifting for some things. The S905Y2 excels in that role. That it even does 720p movies (in software) and most old vidya fine is a welcome plus, but not a must. I also like the challenge the limitations offer and always find this low-end computing incredibly fun. If I just wanted something powerful and battery efficient, I'd buy a MacBook.
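To make the power-budget reasoning concrete, here's a trivial sketch of the runtime arithmetic behind "2 W more would shorten battery life too much". All the wattage and capacity numbers are hypothetical placeholders, not measurements from the post:

```python
# Rough battery-life arithmetic: capacity divided by average draw.
# base_draw_w and extra_soc_w are made-up illustration values.

def battery_life_hours(battery_wh: float, system_w: float) -> float:
    """Runtime estimate in hours from battery capacity and average draw."""
    return battery_wh / system_w

base_draw_w = 3.0   # assumed total system draw with the S905Y2
extra_soc_w = 2.0   # the ~2 W a hungrier SoC would add on top

print(battery_life_hours(30, base_draw_w))                # 10.0 h
print(battery_life_hours(30, base_draw_w + extra_soc_w))  # 6.0 h
```

With numbers in this ballpark, 2 W extra cuts runtime by 40%, which is why shaving it off in software would matter so much.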
> are these exclusively the newer revision ones, or a mixed bag with some of those with the GPIO bug thrown in? Problem with the Pis always is that you get a few nasty bugs thrown in with the early revisions.

This, right?
> This, right?

Yes.
> Maybe it is afflicted?

Good question, I think the safest thing is to just wait it out until these have "naturally" sold out from the distributors, or find reports online. The Pi was soured on me when I got gobbled up in the hype of the early Raspberry Pis and bought one (I think it was 1 or 2. Maybe 3?) and that thing was basically impossible to power properly unless you had some power supply that gave slightly over 5 V. That's just poor engineering. It also had some other fault with its USB chip I don't remember clearly. The Picos are cool, especially for their price, but they have inherited the curse.
> RPi CM5 is out, and it includes the elusive 16 GB RAM option although those variants will launch in 2025:

With the CM being an enterprise/industrial-focused option, I wonder what uses 16 GB has in these environments. 8 GB seems about as high as you would need to go, much less 16 GB. A ZFS-based NAS is about the only use I can think of.
> RPi CM5 is out, and it includes the elusive 16 GB RAM option although those variants will launch in 2025:

Woo, one whole PCIe lane, still officially Gen 2... Dumb.
Raspberry Pi CM5 launched with Broadcom BCM2712 SoC, up to 16GB LPDDR4 ECC memory
A first look at the Raspberry Pi Development Kit for CM5
> With the CM being a enterprise/industrial focused option. I wonder what uses 16gb has in these environments. 8gb seems about as high as you would need to go, much less 16gb. A ZFS based NAS is about the only use I can think of.

Web server? IDK. But more RAM is almost always better and there are a myriad of ways to "waste" it.
> Woo, one whole PCIe lane, still officially Gen2... Dumb.

Definitely shitty, but still enough for some ~4K30 gaming and AI (he is probably using it at the non-guaranteed PCIe 3.0 x1, which I assume you could also do with CM5):
Seems like a good one.
Reuters: Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama (archive)

I tried the new Qwen QwQ, a 32b coding model, today. It was the only model (including o1) that not only correctly understood an Emacs elisp mode I've written, but could also make a sensible addition to it. Meanwhile some conservative Western politicians sperg out about the potential of fucking Llama of all things falling into Chinese hands by open-weighting it. As if they needed it. The West is so shortsighted in regards to China, it's unreal. To be fair, 4o is most likely in the same weight class (albeit much worse). If models keep sizing down like this, it becomes more and more of a realistic scenario to run them on hardware like that.
This crop of models we currently have is really good at recognizing patterns, even very subtle ones. Weaponizing it in an efficient manner, though? I'm not sure. I'm always critical when I hear this, because the reasoning you can trust current models to do, while very cool (it's just incredibly cool that you can "teach" a model to do a thing by simply requesting it in natural language, mindblowing even, if you ask me), isn't really reasoning you couldn't arrive at with more conventional coding approaches, which are far more reliable (not to mention less hardware intensive). So I don't see the danger in current AI, at least not like that. Intelligence gathering like in the article, maybe. I use AI through Emacs and I was momentarily considering giving the AI tool access, basically the ability to call elisp functions to do things I request in natural language via the minibuffer, but then thought better of it. As optimistic about the technology as I am, even I wouldn't trust it that far, and it's not even all that useful.
> transformer LLMs are hitting a plateau

I said it often on this forum and I'll gladly say it again and again: I think it's too early to say that. Just a year or two ago people were saying "you'll never be able to run a model with the smarts of GPT-4 at home. In ten years, maybe. But that's a big maybe, and local models will never be nearly as smart as GPT-4. They just won't let you have such a model." Now many models are smarter than GPT-4 and you can conceivably run many of them at home, with relatively normal hardware. It's simply too early for such statements, IMO. The envelope is still getting pushed all the time; QwQ and DeepSeek's R (also Chinese) are good examples of that.
> What is the rule of thumb about parameter counts and RAM required?

That really depends on the model in question, its architecture, the implementation of the inference software, context size and type of context used, etc., but for a modern model you can roughly say: at FP16, around double its parameter count in gigabytes of RAM plus a few gigs; at INT8, the same number of gigabytes as parameters plus a few. So QwQ at Q8 would probably need around 40 GB of RAM. Since it's a relatively small model, you could probably stuff it into a 32 GB RAM system if you quantize it somewhat harder. With a 16 GB VRAM GPU and DDR5 RAM it'd probably run fast enough to be usable. It's hard to say what effect this has on models and opinions diverge, but the more modern, smarter models use "their brains" more efficiently, so quantization seems to hit them harder than the older-gen ones, as it does the smaller ones. How well a model actually quantizes without "brain damage" is probably really dependent on the quant technique used (yes, you can totally screw this up) and the individual model.
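The rule of thumb above can be sketched as bytes-per-parameter times parameter count plus overhead. This is only a ballpark estimator; the bytes-per-parameter table follows standard quantization widths, and the overhead constant for context/KV cache is a made-up placeholder:

```python
# Ballpark memory footprint for a dense transformer, per the rule of
# thumb: parameter count x bytes per parameter, plus a few GB overhead
# (context/KV cache). overhead_gb is an assumed placeholder value.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "q4": 0.5}

def est_ram_gb(params_billions: float, quant: str, overhead_gb: float = 4.0) -> float:
    """Rough RAM/VRAM estimate in GB."""
    return params_billions * BYTES_PER_PARAM[quant] + overhead_gb

print(est_ram_gb(32, "int8"))  # 36.0 -> close to the ~40 GB figure for QwQ at Q8
print(est_ram_gb(32, "q4"))    # 20.0 -> why harder quantization fits a 32 GB box
```

Real memory use varies with the inference engine and context length, so treat the output as a lower bound rather than a guarantee.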
After the embarrassing failure of Qualcomm to launch the Snapdragon X Elite Dev Kit, the "world's first" mini-PC with the chip has been revealed.
GEEKOM is a Chinese brand (how could you tell?!). Some of their mini-PCs are expensive, but this could be a sign that the X Elite is going to be whored out for cheap.
Only about 720,000 Qualcomm Snapdragon X laptops sold since launch — under 0.8% of the total number of PCs shipped in Q3, or less than 1 out of every 125 devices (archive)
> Using the fastest memory

Also important: for everything with an integrated GPU, every single bit of additional memory performance translates directly to better GPU performance in my experience. Single channel is brutal, though.

I think Intel just knows its customers for these SoCs. Just because a chip supports dual-channel configurations doesn't really mean it's used. Many a low-end craptop has slow, single-channel RAM even if the SoC could do better. Not something you can change as a customer, usually. Often not something you can even find out before buying the system.

There's also RAPL; Intel allows you to configure your SoCs in a way where they do internal optimization of what instructions to call when etc., so they stay inside a specific thermal envelope in watts. You can set this up in the firmware and even decide whether you let the OS control it or not. The idea is that you can sacrifice some performance to get the perfect thermal design in your device. Most low-end manufacturers don't bother with setting this up at all, glue a small metal shim to the SoC, add a tiny fan if they're feeling generous, and just let thermal throttling take care of it. That makes these SoCs perform more poorly than they need to, with the added bonus of murdering the (usually close-by) battery by letting the system always run hot as hell.
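To put numbers on why single channel is brutal for an iGPU, here's the standard theoretical-peak-bandwidth arithmetic (transfer rate times 8 bytes per 64-bit channel times channel count). The DDR speed grades are the ones discussed in this thread:

```python
# Theoretical peak memory bandwidth: MT/s x 8 bytes per 64-bit channel
# x number of channels. Shows how a second channel can beat a faster
# single-channel configuration outright.

def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s (decimal) for DDR-style 64-bit channels."""
    return mt_per_s * channels * bus_bytes / 1000

print(peak_bandwidth_gbs(4800, 1))  # 38.4 GB/s - single-channel DDR5-4800 (N100 max)
print(peak_bandwidth_gbs(3200, 2))  # 51.2 GB/s - dual-channel DDR4-3200
```

Real-world throughput lands below these peaks, but the ratio between configurations holds, which is why the iGPU feels the difference directly.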
> I fired up the old Thinkpad Convertible today and that i5-7Y57 (dual-channel, and yes, the RAM is actually implemented that way in this system) is actually kinda similar to the N100. A bit slower (but also significantly lower TDP; I measured it a while ago and the whole system drew something insane for x86 in complete idle with the screen off, I think around 2-3 W or so), only two cores, but core vs. core only somewhat slower.

Gracemont is similar in IPC to Skylake, also adding AVX/AVX2 for the first time to an Atom core, so no surprise there. Clocks are similar, but the N100 has 50% more L3 cache and 2 MB of L2 cache for its single quad-core cluster, vs. 256 KB per core for the Kaby Lake-Y dual-cores. Then there's support for single-channel DDR4-3200/(LP)DDR5-4800 instead of dual-channel LPDDR3-1866/DDR3L-1600, so the potential to outpace it slightly there with DDR5.