GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Apparently the main issue with Nvidia's cables is that the big-brain engineers forgot to add a way to tell the user when the cable is actually plugged completely into the card. If the cable isn't totally inserted, it melts like butter in a frying pan. When tech jesus did a demo on how to plug it in, the cable SOMETIMES makes a click that tells you it's fully inserted, and sometimes it doesn't click even when it is fully inserted, so there's no actual way to know unless you re-seat it several times until you hear the click. Worst of all, on the FE editions you have to push very hard to make it click.
PCI-SIG are the ones to blame, although it was Nvidia's choice to move to something untested that's capable of supporting 600+ watts.

Whatever the real reason for these failures, it's enough of an issue that PCI-SIG is accepting Engineering Change Requests (ECR) for the connector until December 6th. It's not clear whether this will result in an actual modification to the plug, or what that modification might be, though. The ECR request notes that it is "to address a system-side shroud design" and that it is "unrelated to high-power connections."
 
@AmpleApricots posting about the Fujitsu Q616
I realize this sort of thing is highly variable, but what kind of battery life are you getting out of that laptop? I'm sort of tempted to get one, but only if I can reasonably expect >2 hours of battery life with standard office-type use. Related: what's the charging voltage? (I'm too lazy to look it up.)
 
Kind of GPU related: is there a way to have AMD ReLive record gameplay without shitting the bed every time?
Never had issues on my Vega with normal recording; it was mostly Instant Replay that used to shit the bed very often (it would save 60 seconds of green screen), which was resolved by a DDU and reinstall.
 
I realize this sort of thing is highly variable, but what kind of battery life are you getting out of that laptop? I'm sort of tempted to get one, but only if I can reasonably expect >2 hours of battery life with standard office-type use. Related: what's the charging voltage? (I'm too lazy to look it up.)
Old-school 19V. Battery life actually wasn't that important for me, as my plan is to use the device "semi-stationary" (I do travel with it, but at my travel destination it'd stay more or less in one room, and at home it stays at the desk with occasional trips to the living room or bedroom, so I really just needed portable).

Mathematically, with a new battery and my OS setup & usage (web browsing with a script blocker, watching videos, typing stuff into a terminal), I should be able to expect anything from 6-8 hours on a full charge. I do implement a lot of tricks though, for example freezing background processes when their windows aren't focused, like on Android. It's not so much that the device uses a lot of power; it's more that it's a small device, so the battery had to be small. Fujitsu claims 10+ hours for this particular model. That might be possible with a new battery, a slim Win10 install at minimum brightness, and letting it idle for some of those hours, but I don't think it's super realistic. I usually found that Windows supports and implements the various hardware power-saving options much better, but then turns around and wastes that same power on bloated background processing, so I actually don't believe the figures would be that different. If you run a bloated Linux like Ubuntu though (which would randomly have some background process peg one CPU core to 100% for minutes for no apparent reason when I was testing whether it even works correctly), it's possible the 6 hours aren't reachable.
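A rough sketch of that freeze-when-unfocused trick, assuming an X11 session with xdotool and pgrep available; the process names and the 2-second polling interval are just examples, not the exact setup described above:

```python
#!/usr/bin/env python3
# Rough sketch: SIGSTOP the listed apps while none of their windows are focused,
# SIGCONT them when focus returns. Assumes X11 with xdotool and pgrep installed.
import os
import signal
import subprocess
import time

FREEZABLE = {"firefox", "thunderbird"}  # example process names, adjust to taste
frozen = set()

def active_pid():
    """PID owning the currently focused window, or None if it can't be read."""
    try:
        out = subprocess.run(["xdotool", "getactivewindow", "getwindowpid"],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())
    except (subprocess.CalledProcessError, ValueError):
        return None

def pids_of(name):
    out = subprocess.run(["pgrep", "-x", name], capture_output=True, text=True)
    return [int(p) for p in out.stdout.split()]

while True:
    focused = active_pid()
    for name in FREEZABLE:
        pids = pids_of(name)
        has_focus = focused in pids
        for pid in pids:
            try:
                if has_focus and pid in frozen:
                    os.kill(pid, signal.SIGCONT)   # thaw the whole app on focus
                    frozen.discard(pid)
                elif not has_focus and pid not in frozen:
                    os.kill(pid, signal.SIGSTOP)   # freeze while unfocused
                    frozen.add(pid)
            except ProcessLookupError:
                frozen.discard(pid)                # process went away meanwhile
    time.sleep(2)
```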

Be aware that a used device might have a very worn battery that has lost 10-20%+ of its initial capacity. For me it was an option to send the device back; for you it might not be.

If you want really long battery life, there's still no way around something ARM running Android/iOS. It's really a 50/50 split between the OS and the chip. These devices have marvelous runtimes (if built well, with a reasonable battery) because of tons of OS tricks, harsh temperature/voltage limiting of the SoCs, very power-saving GPUs, and the OSes relentlessly using hardware decoding for just about everything, with chips that are also very good at it. With some of the more modern ARM chips I had when I was still into ARM, like the S922, there was barely a measurable difference between idling and decoding a video under Android. All the x86 chips I have encountered are very inefficient in comparison, to the point that software decoding might sometimes actually be more power-saving with lower-resolution videos.
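If anyone wants to reproduce that idle-vs-decode comparison on a Linux laptop, a quick sketch that averages the battery discharge rate from sysfs; the BAT0 path and the microwatt unit are assumptions (some batteries only expose current_now/voltage_now):

```python
#!/usr/bin/env python3
# Average the battery discharge rate for a while; run once idling and once
# while a video plays, then compare the two numbers. Run on battery power.
import time

POWER_NOW = "/sys/class/power_supply/BAT0/power_now"   # assumed path, microwatts

def average_draw_watts(seconds=60, interval=2):
    readings = []
    deadline = time.time() + seconds
    while time.time() < deadline:
        with open(POWER_NOW) as f:
            readings.append(int(f.read().strip()) / 1_000_000)   # µW -> W
        time.sleep(interval)
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(f"average draw over 60 s: {average_draw_watts():.2f} W")
```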


Apparently the main issue with Nvidia's cables is that the big-brain engineers forgot to add a way to tell the user when the cable is actually plugged completely into the card. If the cable isn't totally inserted, it melts like butter in a frying pan. When tech jesus did a demo on how to plug it in, the cable SOMETIMES makes a click that tells you it's fully inserted, and sometimes it doesn't click even when it is fully inserted, so there's no actual way to know unless you re-seat it several times until you hear the click. Worst of all, on the FE editions you have to push very hard to make it click.
I have no idea about these cables, but I'd make GND and VCC physically connect in that order, then put shorter and narrower sense contact(s) in that need to mate before the delivery side actually delivers, and maybe optionally throw a warning via firmware when they're not connected. If properly designed, you could make it physically impossible to connect these sense wires without connecting the power properly. Modern EEs are often downright scared of mechanical solutions like this, and I do not understand why.

Then again, that's a lot of watts, and a bit of oxidation on the contacts can already cause a fire. Such things are not super easy to design.
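The sense-pin idea in toy form, purely to illustrate the logic being proposed (not how the real 12VHPWR sideband pins are specified): the supply side only enables full delivery once the shorter sense contacts also read as mated, and complains otherwise.

```python
# Toy model of the interlock idea: the supply side only enables full delivery
# once the (shorter) sense contacts also read as mated, and warns otherwise.
# Purely illustrative logic, not the real 12VHPWR sideband behaviour.
from dataclasses import dataclass

@dataclass
class ConnectorState:
    gnd_mated: bool
    vcc_mated: bool
    sense_a: bool   # shorter pins: only read True once the plug is fully seated
    sense_b: bool

def delivery_allowed(s: ConnectorState) -> bool:
    return s.gnd_mated and s.vcc_mated and s.sense_a and s.sense_b

def firmware_poll(s: ConnectorState) -> str:
    if delivery_allowed(s):
        return "OK: full power enabled"
    if s.vcc_mated and not (s.sense_a and s.sense_b):
        return "WARNING: connector not fully seated, limiting power"
    return "WARNING: no valid connection detected"

# Half-inserted plug: the long power pins touch, the short sense pins don't.
print(firmware_poll(ConnectorState(gnd_mated=True, vcc_mated=True,
                                   sense_a=False, sense_b=False)))
```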
 
I have no idea about these cables, but I'd make GND and VCC physically connect in that order, then put shorter and narrower sense contact(s) in that need to mate before the delivery side actually delivers, and maybe optionally throw a warning via firmware when they're not connected. If properly designed, you could make it physically impossible to connect these sense wires without connecting the power properly. Modern EEs are often downright scared of mechanical solutions like this, and I do not understand why.

Then again, that's a lot of watts, and a bit of oxidation on the contacts can already cause a fire. Such things are not super easy to design.
That's the solution tech jesus suggested. At least I'll give Nvidia props for not going full big-tech asshole mode and blaming the users for the issue, spinning it as people plugging it in wrong instead of them designing it like dogshit.

The lesson out of this is: if you're dead set on a 40 series, use a third-party cable that has probably already fixed the issue, and ignore Nvidia saying they'll revoke your warranty if you do.
 
On the topic of RTX4000 cards, it seems like sales for the 4080 are lackluster at best.
The 4090 (somehow) still sold out almost instantly, but the 4080s are sitting on shelves at several retailers.
This is especially interesting considering that the 4080 was generally shipped out (to resellers) in way lower numbers than the 4090.
All of this despite the fact that the 7900XTX has not even launched (or been shown in independent benchmarks) yet.



Also kinda funny: some people just blindly bought some to scalp on eBay, and are now sitting on $1,200+ GPUs nobody wants.
This makes what little 4080 sales actually happen look even sadder.

 
The 4090 (somehow) still sold out almost instantly, but the 4080s are sitting on shelves at several retailers.
The 4090 MSRP is 33% more than the 4080's, and some (not all) scenarios have it performing more than 33% better. So the 4090 is almost the better price/performance buy, or only slightly worse. The MLID video points out that some AIB 4080s were selling for $1,500.
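Roughly the break-even math, assuming the $1,599/$1,199 launch MSRPs (actual street prices obviously vary):

```python
# Quick perf-per-dollar break-even check using launch MSRPs.
msrp_4090, msrp_4080 = 1599, 1199
price_ratio = msrp_4090 / msrp_4080
print(f"4090 costs {price_ratio - 1:.0%} more than a 4080")   # ~33%
# If the 4090 is more than ~33% faster in your workload, it wins on perf/$:
for speedup in (1.25, 1.33, 1.45):
    print(f"{speedup:.2f}x the 4080's performance -> "
          f"{speedup / price_ratio:.2f}x its perf per dollar")
```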
 
The 4000 series are priced the way they are because Nvidia have a pile of 3000-series cards in the channel they haven't managed to shift yet. They can't price the 4000s lower or nobody will buy the logjam of 3000s. Meanwhile, AMD are taking advantage by slipping into the gap between the two, offering something better than the 3000s but cheaper than the 4000s. Nvidia have screwed over the market.
 
I read that the 4060 will deliver better performance than the 3070 at 3060 Ti prices. That headline is not as good as someone thinks it is. I understand how inflation and prices work, and everything is more expensive now, but still. Nvidia, make a sub-$200 card; just let us see what it is.
 
I read that the 4060 will deliver better performance than the 3070 at 3060 Ti prices. That headline is not as good as someone thinks it is. I understand how inflation and prices work, and everything is more expensive now, but still. Nvidia, make a sub-$200 card; just let us see what it is.
Also, a 4050 appeared in a laptop.


That will most likely be your sub-$200 card. Released forever from now.
 
Between that 4xxx fiasco and a ~2015 tablet PC being a perfectly useful computer in almost 2023, we seem to have really hit some sort of wall, haven't we?
Can confirm. I'm putting my tiny Celeron N4500 laptop through its paces. It gets pretty good battery life (~4 hours) thanks to passive cooling and can drive a 4K display at 60 Hz no problem. Compiling small C++ projects is fast enough, as is dealing with npm projects.
 
I basically just need an xterm machine (with the occasional switch to the browser) and the occasional movie playing, and the Q616 is downright overpowered. I can get a pretty solid 6-7ish hours in my scenario (with some battery life to spare; it's not good to run it down to 0%) and can confirm that now.

The only thing that stings a little about the old Skylake architecture is the poor video decoding support, and just a generation later (Kaby Lake) that was resolved; it's more or less the only really tangible difference between these generations, at least in my usage scenario. Then again, it doesn't seriously matter for my use case either, and it can still decode x265 10-bit 1080p at normal bitrates (just in software), it merely eats 1-2W more on average for it. With a power socket always nearby that's meaningless, although it would've been interesting to see whether Kaby Lake is more energy-efficient in video decoding in general. This is high-level complaining though, as x264 doesn't seem to be going anywhere and the chip *can* hardware-decode x265, just only 8-bit. The stuff I download is rarely ever x265 to begin with.
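An easy way to see what the iGPU will actually hardware-decode on Linux is to look at the VA-API profile list; a small sketch assuming vainfo is installed (on Skylake you'd expect HEVC Main to show up, but not Main10):

```python
#!/usr/bin/env python3
# Print the HEVC/VP9 decode profiles VA-API exposes, assuming vainfo is installed.
import subprocess

out = subprocess.run(["vainfo"], capture_output=True, text=True)
decode_profiles = [line.strip() for line in out.stdout.splitlines()
                   if "VAEntrypointVLD" in line]      # VLD = full decode
hevc_vp9 = [p for p in decode_profiles if "HEVC" in p or "VP9" in p]
print("\n".join(hevc_vp9) or "no HEVC/VP9 decode profiles exposed")
```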

Before I bought this one I was looking at a Lenovo ThinkPad tablet with such a Kaby Lake and a better screen. Just like the HP, though, that one felt too big, and the going price for a unit that wasn't absolutely trashed was more than double (350 and up to 400ish), which at the end of the day is just too expensive for such an old, low-end processor IMO. (For about double that money you could get a recent iPad new, to give an extreme example; the math just doesn't add up there, and I'm not sure what these commercial sellers are thinking.)

These ThinkPad tablets also looked flimsy as hell, to be entirely honest. The IBM days are definitely over there.
 
Looking up GPUs to put in my new build, and uhh, holy shit, I'm gonna have to re-do the wiring and put in industrial outlets if I want the newest ones, ain't I?

Back in my day having a 750W PSU was considered ludicrous overkill reserved for SLI/CrossFire and multiple spinny hard-drives.
 
Looking up GPUs to put in my new build, and uhh, holy shit, I'm gonna have to re-do the wiring and put in industrial outlets if I want the newest ones, ain't I?

Back in my day having a 750W PSU was considered ludicrous overkill reserved for SLI/CrossFire and multiple spinny hard-drives.
Minimum recommendation for the 7900 XT is 750W, 800W for 7900 XTX. I bet 750W is fine if you don't go crazy on the CPU.

I'm interested to see if those gain any traction at all in machine learning. It would also be interesting to see their sales do better than at least the 4080's. That seems like a slam dunk, but you never know with the eternal #2.
 
So I made some further attempts with the Q616. First of all, there's a hybrid GPU driver for older Intel GPUs on Linux that allows some limited VP9 hardware decoding support. It's not terribly interesting performance-wise, though.

I'm a Linux user and even the most normie distribution setups are kind of ass with a touchscreen in "tablet mode". I tried to finagle a lightweight touchscreen keyboard into my setup, and while workable, the results weren't exactly amazing. I then stumbled across android-x86 and copied its ISO directly to a USB stick I had lying around, planning to boot directly off it and expecting absolutely nothing to work, especially after people noted you should use it in VMs. It actually booted fine, supports all the important hardware, and is nicely usable with many F-Droid apps. Quite interesting. If you just want to retire to bed to consume some media and browse the net without having to juggle a keyboard, this might be useful as a dual-boot option for some of these lower-end devices (with your main partition encrypted and safely stowed away and this on an extra partition you can wipe at will). That said, I have my doubts about the power efficiency of Android on x86.
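For reference, "copied its ISO directly to a USB stick" is just a raw image write, dd-style; a minimal sketch where /dev/sdX is a placeholder and writing to the wrong device will wipe it:

```python
#!/usr/bin/env python3
# Raw image write, the Python equivalent of `dd if=image.iso of=/dev/sdX bs=4M`.
# DOUBLE-CHECK the target device: this overwrites it completely. Needs root.
import os
import sys

def write_image(iso_path, device, chunk=4 * 1024 * 1024):
    with open(iso_path, "rb") as src, open(device, "wb") as dst:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
        dst.flush()
        os.fsync(dst.fileno())   # make sure it all hit the stick before unplugging

if __name__ == "__main__":
    write_image(sys.argv[1], sys.argv[2])   # e.g. android-x86.iso /dev/sdX
```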

Maybe some of you find this interesting. It is a thing. Be aware though that the Android version is quite old.
 
I then stumbled across android-x86 and copied its ISO directly to a USB stick I had lying around, planning to boot directly off it and expecting absolutely nothing to work, especially after people noted you should use it in VMs. It actually booted fine, supports all the important hardware, and is nicely usable with many F-Droid apps. Quite interesting. If you just want to retire to bed to consume some media and browse the net without having to juggle a keyboard, this might be useful as a dual-boot option for some of these lower-end devices (with your main partition encrypted and safely stowed away and this on an extra partition you can wipe at will). That said, I have my doubts about the power efficiency of Android on x86.
Have you tried 'Bliss OS'? I've only used it within a VM, but it's a bit more up to date in terms of Android base version.
 
No, until a few hours ago I didn't even have using Android without some sort of ARM emulator contraption on the radar. But thanks for the advice; I'll definitely give it a look. While I would never use Android as a "serious OS", it is good as a "finger-operated media consumption" OS, which is what prompted this. Even though this mobile Skylake is incredibly low-end by x86 standards, the cores and GPU are still a lot faster than a lot of the ARM SoCs out there, and its peripherals like the WLAN adapter and SSD are better, so the experience was very snappy. Battery is draining like crazy though, as I guessed, and my guess is you'd not get more than 4-5ish hours out of it.

(Actually I managed to talk a friend with an M1 into running the 7z benchmark, and while that's not incredibly representative, his M1 was only about three times as fast as this Skylake, which was honestly less than what I'd expected from that 20W TDP beast, especially after all of Apple's marketing. I bet the tight integration with an OS that's 100% optimized for that hardware alone makes quite a bit of difference in perceived speed, though.)
 
I would love to see a Threadripper size MCGPU with HBM thrown as close to the dies as physically possible.

Inb4 someone links me a datacenter GPU that does this.
Well, GDDR6 plus a chiplet GPU is better progress than increasingly large monolithic dies. The Fermi-level housefires that are the 4090s have convinced me that monolithic dies aren't profitable nor the wise choice going forward. AMD is way ahead of where Nvidia is on this as well, having had experience with chiplets since 2015 now. The recent GN video about the AMD cards is really quite interesting.
 
I want to ask about something that was big in computer tech about a decade ago that I've not seen or heard about since.

Bathtub Graphs.

Anyone remember these? For those who missed it or forgot: a bathtub graph was a graph of failure rates over time. You had an initial spike to account for errors in transport or the like, a low, flat line for most of the graph, and a slower rise at the end representing failures due to wear, making the shape of a bathtub.

I remember people obsessing over these back in the day whenever they went to buy anything, so much so that they even appeared on store pages, IIRC. I never paid them much attention, because a bathtub graph isn't going to tell you much about a consumer-level part under normal conditions until it's obsolete, which I'm guessing is why no one talks about them now.
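For anyone who never saw one, the shape falls out of adding a decreasing early-failure rate, a constant random-failure rate, and an increasing wear-out rate; a toy sketch with made-up numbers:

```python
# Toy bathtub curve: total failure rate = early-failure term that tapers off
# + constant random-failure floor + wear-out term that ramps up late in life.
# All parameters are made up purely for illustration.
def bathtub(t_years):
    infant  = 0.025 / (t_years ** 0.5)          # decreasing hazard (early failures)
    random  = 0.02                               # constant hazard (random failures)
    wearout = 0.25 * (t_years / 10.0) ** 4       # increasing hazard (wear-out)
    return infant + random + wearout

for years in (0.05, 0.5, 1, 3, 5, 8, 10):
    print(f"{years:>5} y: failure rate ~ {bathtub(years):.3f}")
```

Printed over a few years, that dips to a flat floor in the middle and climbs again at the end, i.e. the bathtub.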

Minimum recommendation for the 7900 XT is 750W, 800W for 7900 XTX. I bet 750W is fine if you don't go crazy on the CPU.
Part of me wonders if this is just to cover power spikes, or if they really are this demanding under sustained load.
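Probably a mix of both, with spike headroom doing most of the work. A back-of-the-envelope sketch: the 355 W board power for the XTX is the official figure, but the ~2x transient multiplier and the CPU/rest-of-system wattages are illustrative guesses:

```python
# Back-of-the-envelope PSU sizing; numbers other than the 355 W board power are
# illustrative guesses, and real transients depend heavily on the specific card.
GPU_BOARD_POWER = 355        # W, RX 7900 XTX typical board power
GPU_TRANSIENT_FACTOR = 2.0   # assumed millisecond-scale spike multiplier
CPU_POWER = 150              # W, assumed gaming load on a higher-end CPU
REST_OF_SYSTEM = 75          # W, assumed board/RAM/drives/fans

sustained = GPU_BOARD_POWER + CPU_POWER + REST_OF_SYSTEM
spike = GPU_BOARD_POWER * GPU_TRANSIENT_FACTOR + CPU_POWER + REST_OF_SYSTEM
print(f"sustained draw ~ {sustained} W, brief worst-case spike ~ {spike:.0f} W")
print(f"an 800 W unit leaves ~{1 - sustained / 800:.0%} sustained headroom")
```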
 