GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

V-Cache doesn't uniformly speed things up. It improves only memory-bound tasks. I've seen it boost memory-bound tasks by 1.5x or more. Compute-bound tasks will see a 0% speedup. I have no intuition about whether or not consumer applications really will benefit much.
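Rough intuition for how those mix, as a back-of-the-envelope sketch (assuming the 1.5x applies only to the memory-bound fraction of runtime, Amdahl-style; the numbers are purely illustrative):

```python
# Amdahl-style estimate: only the memory-bound fraction f of runtime
# gets the 1.5x cache speedup; the compute-bound rest is unchanged.
def overall_speedup(f, mem_speedup=1.5):
    return 1.0 / ((1.0 - f) + f / mem_speedup)

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{f:.0%} memory-bound -> {overall_speedup(f):.2f}x overall")
# 0% -> 1.00x, 50% -> 1.20x, 100% -> 1.50x
```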
I meant +30% average in games compared to the +15% average they delivered last time. And they would do this by improving aspects of the V-Cache other than the capacity, since it's still going to be tripling from 32 MB to 96 MB.

With Milan-X according to Microsoft:
  • Up to 80% higher performance for CFD workloads
  • Up to 60% higher performance for EDA RTL simulation workloads
  • Up to 50% higher performance for explicit finite element analysis workloads
But in many cases 0% or a small regression from lower voltage and clock speeds.
 
  • Informative
Reactions: Brain Problems
I meant +30% average in games compared to the +15% average they delivered last time. And they would do this by improving aspects of the V-Cache other than the capacity, since it's still going to be tripling from 32 MB to 96 MB.

With Milan-X according to Microsoft:
  • Up to 80% higher performance for CFD workloads
  • Up to 60% higher performance for EDA RTL simulation workloads
  • Up to 50% higher performance for explicit finite element analysis workloads
But in many cases 0% or a small regression from lower voltage and clock speeds.

Note the "up to." Those workloads all do lots of sparse linear algebra, which has tons of indirect array lookups, and you see gains that large only when the problem is sized just right, so that moving from 256 MB of L3 to 768 MB makes a significant difference in how much of the working set sits in cache instead of RAM. If the problem is too small, it already fits entirely in L3, so the V-Cache makes no difference; if it's too big, the extra L3 barely dents it and the benefit is much smaller. IDK how much that matters for games, real benchmarks would tell you more, and I'm skeptical it does, just because consoles are the lead platforms. More details here.
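If you've never looked at one, this is roughly what the hot loop of a sparse matrix-vector multiply (CSR format) looks like; a minimal illustrative sketch, not code from any particular solver:

```python
# CSR sparse matrix-vector multiply. x[col_idx[k]] is the indirect
# lookup: it hops around x unpredictably, so speed is dominated by
# whether x and the index arrays fit in cache.
def spmv(row_ptr, col_idx, vals, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]  # indirect array lookup
    return y

# [[2, 0], [1, 3]] times [1, 1] -> [2, 4]
print(spmv([0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0], [1.0, 1.0]))
```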


It's a very nice enhancement to the chip, just be aware that those are marketing numbers, meaning they're bullshit. Even those "averages" are taken from hand-selected software in hand-selected situations.
 
  • Informative
Reactions: Brain Problems
Who gives half a fuck about wattage in a CPU when it's still well under a 200 W TDP? I'm running a fucking Ryzen 7 2700X overclocked to 4.5 GHz with a 3060 on a 430 W PSU, with a 15% OC on the GPU. My system is rock solid.

The days of worrying about power draw are long gone. Every new RAM generation comes with an expensive new generation of chipsets; that's par for the course. AMD will charge a premium because that's literally what always happens when you have a limited supply of fresh-off-the-assembly-line parts.

It's a great time to be buying and building and selling PCs again. It was a long 2 years in the wilderness.

@Tom Nook's Gloryhole you're never gonna futureproof anything, but buying AM4 right now is just silly. If you really want to buy AM4, just wait until Black Friday; you're gonna see crazy deals around Christmas and the holiday season.
I somehow like running at low power just as much as I like overclocking. I only got into it with my 8700K PC, which I delidded; after I got bored with a 5 GHz OC that didn't really help me in my daily tasks, I just ran a heavy undervolt, with very low temps and power usage.
I kinda miss the times when I needed a new PC to play the games I really wanted to. The last 5-10 years have been pathetic for me, I could barely finish a few titles. I remember obsessively playing RPGs and MMOs for so long that it negatively affected my job (bad, bad, bad); thank God gaming is so much worse these days that I prefer to sperg out on a forum like this than play anything.
 
It's always valid to be concerned about power draw. Not so much for PSU capacity or even component cooling, but for heat dissipation into the room the PC is in. An extra 100 W into an average-sized PC/office room can make it feel a decent amount warmer than other areas of the house.

My wife and I have our PCs in the same room. An extra 100+ watts added to each one can ratchet the temp of the room up.
 
Note the "up to." Those workloads all do lots of sparse linear algebra, which has tons of indirect array lookups, and you see gains that large only when the problem is sized just right, so that moving from 256 MB of L3 to 768 MB makes a significant difference in how much of the working set sits in cache instead of RAM. If the problem is too small, it already fits entirely in L3, so the V-Cache makes no difference; if it's too big, the extra L3 barely dents it and the benefit is much smaller. IDK how much that matters for games, real benchmarks would tell you more, and I'm skeptical it does, just because consoles are the lead platforms. More details here.


It's a very nice enhancement to the chip, just be aware that those are marketing numbers, meaning they're bullshit. Even those "averages" are taken from hand-selected software in hand-selected situations.
I don't think the 768 MB per socket number actually matters for determining performance increases, because it's split up into 8 CCDs. What matters is the L3 cache that 1-8 cores can access, which is 96 MB. This is the same between the 8-core 5800X3D (96 MB) and 64-core Milan-X with 768 MB.

A workload on a 64-core Epyc CPU that gets a +80% performance increase from V-Cache should also get +80% on a 5800X3D when compared to 5800X. It's just that most consoomers aren't running those types of workloads, and if the workload scales to 64 cores, then the 5800X3D will be about 1/8th as fast, give or take based on factors like frequency, DDR4 memory channels, etc.

If the cores on one CCD are able to directly benefit from the cache on other CCDs, that's news to me.

For gaming, you can look at the reviews. The 5800X3D is about 10-20% faster than the 5800X at lower resolutions, dependent on the games tested since there is a lot of variation.

Techpowerup: +11.9% (720p), +8.3% (1080p), +7.2% (1440p), +1.3% (4K)
Techspot: +22.6% (1080p)
Tom's: +22-28% (1080p), +6-11% (1440p)
 
  • Informative
Reactions: The Ugly One
I somehow like running at low power just as much as I like overclocking. I only got into it with my 8700K PC, which I delidded; after I got bored with a 5 GHz OC that didn't really help me in my daily tasks, I just ran a heavy undervolt, with very low temps and power usage.
I kinda miss the times when I needed a new PC to play the games I really wanted to. The last 5-10 years have been pathetic for me, I could barely finish a few titles. I remember obsessively playing RPGs and MMOs for so long that it negatively affected my job (bad, bad, bad); thank God gaming is so much worse these days that I prefer to sperg out on a forum like this than play anything.
Same, there was a time when I HAD to upgrade to play most of what I wanted; Crysis and, I remember, Dead Space were a couple of big ones I wanted to run as well as I possibly could. These days it's like, eh, my current rig runs it all well enough. Rust, I guess, is something I would like to run at higher settings, but given the nature of its development, plenty of people with $3-4K rigs report FPS problems in the game as well.

I didn't mean to disparage more power-efficient designs, lord knows heat is the enemy of electronics and anything that runs hotter and with more watts will tend to break faster -- I was just saying, as an old fag, that we're a long way away from the days of an FX series or some shit (Pentium D as well) that you could cook an egg on.
 
  • Feels
Reactions: AgendaPoster
I don't think the 768 MB per socket number actually matters for determining performance increases, because it's split up into 8 CCDs. What matters is the L3 cache that 1-8 cores can access, which is 96 MB. This is the same between the 8-core 5800X3D (96 MB) and 64-core Milan-X with 768 MB.

For workloads like sparse linear algebra, it does. In most products that do this sort of work, each core is assigned an individual process (or in some cases, thread) with its own memory pool and nearly identically sized pieces of data. So whether it's 1 giant pool of L3 cache or 64 individual pools, you'd get a similar gain. You pretty much just care about memory per core and bandwidth per core; the additional structure beyond that doesn't have much of an impact.
 
With rising energy prices and the topic of power draw in this thread, what do you guys think of building a PC to be as power efficient as possible?

This is hypothetical rather than practical; I know you could just buy a laptop. If it were to happen, though, what would you go for? A 12100 CPU and a 6400 GPU? Go AM5 and rely on integrated graphics?
 
With rising energy prices and the topic of power draw in this thread, what do you guys think of building a PC to be as power efficient as possible?

This is hypothetical rather than practical; I know you could just buy a laptop. If it were to happen, though, what would you go for? A 12100 CPU and a 6400 GPU? Go AM5 and rely on integrated graphics?
12100 for single core, maybe. Would still rather get a 5600X for energy efficiency.
 
  • Informative
Reactions: Judge Dredd
12100 for single core, maybe. Would still rather get a 5600X for energy efficiency.
If I were doing a fresh build right now using AM4 shit, do you think the 5600X would cut the mustard until I decide to switch to AM5 in God knows when? I've read in this thread about its power consumption, but I'm unsure if it is a good long-term choice; power gobbling has never been something I really considered until recently. Also, what direction do you go if you wanted to get a similarly power efficient GPU? Do they exist, or are they all decent, or what? Please help me, I am retarded.

I'm starting to lick my lips in anticipation of fall hardware deals, and the rumours of used GPUs flooding the market in mid-September are getting louder.
 
  • Agree
Reactions: Judge Dredd
My wife and I have our PCs in the same room. An extra 100+ watts added to each one can ratchet the temp of the room up.
It's like having an extra person or two in the room, and I think everyone who has ever had a party at home knows what that does to the temperature of a smaller room; on top of that, most computers are designed to purge their excess heat straight into the room's air.

Intel also had the buggy Atoms a while back that would just fail eventually because of some internal aging process. I don't remember the specifics, but the particular SoCs affected could be revived by an external pull-up resistor. Intel has a bit of a history of such problems.

I also really like the idea of low-energy systems, and electricity in my parts is expensive enough (soon to be even more expensive) that the power draw of a regularly used computer is actually noticeable on the bill. My desktop system (4650G Pro) currently manages 25 W (measured at the socket, including the monitor, which is supplied via USB) while doing stuff like writing this post, which is pretty good for a full system with a dedicated GPU and everything. Naturally, a power consumption this low also gives you a system that is completely silent in normal desktop operation. I accept some little fan noise under heavy loads; in normal OS operation I wouldn't put up with it anymore. Those times are just over. It's also nice to have a computer that's silent and produces no noticeable warmth when browsing, watching videos or doing some light gaming.

The trick is mostly to use solid-state storage and a low-TDP SoC with an iGPU, then undervolt what you can. dGPUs are not really optimized for low power usage and will happily burn several times what the entire rest of the system consumes just to e.g. play back a video (and the video won't be played back "better" or anything). If you want to go further than that, you need to go ARM or notebooks, which are built for low consumption. There are big caveats with both, and I personally find a proper x86 desktop with exchangeable standard parts endlessly more useful.

In theory a more performant, higher-TDP CPU might actually consume less by doing the same tasks at lower clocks and quicker (= rushing back to sleep); in practice software is often shit, and the more oomph the CPU offers, the more it'll be utilized, even if there's no subjective improvement in operation. Most OSes' CPU governors are also not smart enough, or just don't have enough information, to make good decisions at most turns, and the school of thought of designing your OS to be power efficient is also kinda new. Windows is actually a lot better here, and I say this as a ~20-year Linux user. For power consumption, Linux sucks.
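A toy version of that race-to-idle argument (wattages and timings invented purely for illustration):

```python
# Energy = power x time over a fixed 1-second interval: active burst
# plus idle remainder. The faster chip burns more while active but
# gets back to the low idle state sooner.
def joules(active_w, idle_w, busy_s, interval_s=1.0):
    return active_w * busy_s + idle_w * (interval_s - busy_s)

slow = joules(active_w=15, idle_w=2.0, busy_s=0.8)  # low-TDP chip
fast = joules(active_w=45, idle_w=2.0, busy_s=0.2)  # high-TDP chip
print(slow, fast)  # 12.4 vs 10.6 J: the "hungrier" chip wins here
```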

Modern processors allow you to set TDP envelope targets, at the cost of performance. This is sort of the brutal method, but it always works, as the decisions about how to hit the target are mostly made in the firmware/microcode and not by the OS. There is some value in setting up different TDP profiles for different workloads, but I haven't experimented with that much yet. (Again, theoretically this should not be necessary; in practice all software sucks.) Mind you, this is usually meant to regulate how much heat the chip produces, not its power consumption; the power consumption is the side effect.
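On Linux with an Intel chip you can poke the package limit through the powercap sysfs interface; a minimal sketch (needs root, the intel-rapl path varies per system, and AMD needs different tooling):

```python
# Read/set the long-term package power limit (PL1) via the Linux
# powercap interface. Values are in microwatts.
LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def get_limit_w():
    with open(LIMIT) as f:
        return int(f.read()) / 1_000_000

def set_limit_w(watts):
    with open(LIMIT, "w") as f:
        f.write(str(int(watts * 1_000_000)))

print(get_limit_w())
# set_limit_w(35)  # e.g. clamp the package to a 35 W envelope
```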

It's a complicated topic you could write books about, really, especially since the hardware has become so complex. For example, I have a cheap N4020 notebook. It does most of the things I want from it fairly well at a very low power consumption; only more demanding DosBox emulation, with the higher-end DOS games, was sometimes a bit slow. Eventually I figured out that by disabling the second CPU core (not really disabling, just putting it offline in Linux so it doesn't get any tasks anymore and can go into deep sleep), DosBox would actually run quicker, since the whole SoC wouldn't heat up as much with the second core off, allowing the remaining core to stay at boost states almost indefinitely. Things like that are just really hard to account for as a general rule.
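For reference, offlining a core the way described above is just a sysfs write on Linux; a minimal sketch (needs root; cpu0 usually can't be taken offline):

```python
# Take a core offline so the scheduler stops using it and it can
# drop into deep sleep; write "1" to bring it back.
def set_core_online(n, online=True):
    with open(f"/sys/devices/system/cpu/cpu{n}/online", "w") as f:
        f.write("1" if online else "0")

set_core_online(1, online=False)  # park the second core
```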
 
Those workloads all do lots of sparse linear algebra, which has tons of indirect array lookups,
Good game engines are like wood chippers, not lathes; keeping them fed has always been a problem, and a larger cache has always helped.
But from the benchmarks I have seen, the largest performance benefit of having a 5800X3D for games is when running the game at 480p... which makes sense.
 
If I were doing a fresh build right now using AM4 shit, do you think the 5600X would cut the mustard until I decide to switch to AM5 in God knows when? I've read in this thread about its power consumption, but I'm unsure if it is a good long-term choice; power gobbling has never been something I really considered until recently. Also, what direction do you go if you wanted to get a similarly power efficient GPU? Do they exist, or are they all decent, or what? Please help me, I am retarded.

I'm starting to lick my lips in anticipation of fall hardware deals, and the rumours of used GPUs flooding the market in mid-September are getting louder.
AM4 CPUs are going to be fine for quite some time unless you're an enthusiast who always wants the best. It's going to be many years still until games push CPUs extensively.

Used GPUs have been flooding the market for months. They should get increasingly cheaper, but the floor is much higher than ever before as well. These YouTube whores saying "3090s for $200!" are clickbait trash.
 
With rising energy prices and the topic of power draw in this thread, what do you guys think of building a PC to be as power efficient as possible?

This is hypothetical rather than practical; I know you could just buy a laptop. If it were to happen, though, what would you go for? A 12100 CPU and a 6400 GPU? Go AM5 and rely on integrated graphics?
1. Your wallet should not be killed if you go for, say, an i5-13400 and an RX 6600 XT. It will be idling most of the time. Is it really that much power?
2. 75 W is a nice cutoff for discrete cards. We should see more options there because of Intel.
3. We can only guess which particular APUs will come to AM5 desktops, but the ones that do should be very good at 1080p, as I discussed in an earlier post. All they have to do is beat the Xbox Series S.
4. You could limit your CPU or GPU to use less power. See AMD's numbers for the 7950X at different TDPs. Intel makes a 12900T with 16 cores at a 35 W base TDP, but you can do that yourself.
5. If you stop playing vidya games, or at least new ones, you could get away with using a 15 W Atom mini PC. Weak mobile CPUs are all anyone needs to shitpoast.
 
Also, what direction do you go if you wanted to get a similarly power efficient GPU?
Hard to say. The reason I suggested a 6400 is that it only gets power from the PCIe slot, so there's a hard cap on how much it can draw. Getting a similar card and underclocking it could potentially end up the same, but that's why I asked, as I don't know enough.

It will be idling most of the time. Is it really that much power?
It depends how you measure. I tend to measure these things in lightbulbs (remember when they were rated 40 W, 60 W, and 100 W?). But still, a computer that idles at 45 W is drawing three times more power than a laptop or a tiny ARM PC.
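Quick math on what idle draw costs over a year (the per-kWh price is a placeholder, plug in your own rate):

```python
# Watts -> kWh per year -> cost, for an always-on machine.
def yearly(watts, price_per_kwh=0.30):
    kwh = watts * 24 * 365 / 1000
    return round(kwh), round(kwh * price_per_kwh)

print(yearly(45))   # ~(394 kWh, 118) per year at 0.30/kWh
print(yearly(145))  # +100 W of idle draw -> ~(1270 kWh, 381)
```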
 
Hard to say. The reason I suggested a 6400 is that it only gets power from the PCIe slot, so there's a hard cap on how much it can draw. Getting a similar card and underclocking it could potentially end up the same, but that's why I asked, as I don't know enough.
Slot-powered cards were always interesting because they said something about architectural improvements: each generation was limited to the same 75 W slot power envelope as the last gen, and the gen before that, and the gen before that, and so on...
 
FOR THOSE PEOPLE WHO DID NOT BELIEVE THAT THE X670 MOTHERBOARDS WERE GOING TO BE EXPENSIVE.... SUCKS TO BE YOU. AS STATED, IT IS DOUBLE WHAT I PAID IN 2019, AND THAT WAS AN MSI A-PRO MODEL FOR $125.00.

I.... once again... WAS FUCKING RIGHT.


670 pricing.jpg


I knew about the costs rising because I DID THE RESEARCH INTO IT.

And yes, there are things I surely do not know about, but when it comes to the business side of this industry, I KNOW how it works.

That is why I HATE LISA SU. I saw this coming YEARS AGO. She is not your friend and neither is AMD.

I told you this is where AMD is going to make their money. Not on the CPU side BUT the chipset side, which they have complete and full control over.

This is not the motherboard companies' fault, as AMD sets the price on this and that is that.

AND IF YOU THINK THE "B" SERIES ARE GOING TO BE CHEAP??? LET ME SELL YOU A PIECE OF SWAMP LAND AS WELL.

WHAT RETARD AT AMD DECIDED TO PRICE IT THAT HIGH when we are in a global recession WITH ENERGY AND FOOD PRICES INCREASING EVERY MONTH???

Well, it looks like I'll go Intel, as....
I can reuse my high-end DDR4 memory.
I can reuse my PSU.

I just need to buy a motherboard and a CPU and I'm fine.

This is the last straw for me with AMD. I've completely lost any loyalty to the company.

I will go back to buying whatever gives the best bang for the buck.
 
FOR THOSE PEOPLE WHO DID NOT BELIEVE THAT THE X670 MOTHERBOARDS WERE GOING TO BE EXPENSIVE.... SUCKS TO BE YOU. AS STATED, IT IS DOUBLE WHAT I PAID IN 2019, AND THAT WAS AN MSI A-PRO MODEL FOR $125.00.

I.... once again... WAS FUCKING RIGHT.


670 pricing.jpg

I knew about the costs rising because I DID THE RESEARCH INTO IT.

And yes, there are things I surely do not know about, but when it comes to the business side of this industry, I KNOW how it works.

That is why I HATE LISA SU. I saw this coming YEARS AGO. She is not your friend and neither is AMD.

I told you this is where AMD is going to make their money. Not on the CPU side BUT the chipset side, which they have complete and full control over.

This is not the motherboard companies' fault, as AMD sets the price on this and that is that.

AND IF YOU THINK THE "B" SERIES ARE GOING TO BE CHEAP??? LET ME SELL YOU A PIECE OF SWAMP LAND AS WELL.

WHAT RETARD AT AMD DECIDED TO PRICE IT THAT HIGH when we are in a global recession WITH ENERGY AND FOOD PRICES INCREASING EVERY MONTH???

Well, it looks like I'll go Intel, as....
I can reuse my high-end DDR4 memory.
I can reuse my PSU.

I just need to buy a motherboard and a CPU and I'm fine.

This is the last straw for me with AMD. I've completely lost any loyalty to the company.

I will go back to buying whatever gives the best bang for the buck.
Those are RGB high-end bullshit. Look, one even has an IPS touch screen. Don't buy anything with "Xtreme" in the name unless you are brain damaged from energy drinks. It's also targeted at early adopters.

Inflation and other shit will make the cheaper boards more expensive than they were 3-5 years ago, that is a given, but I would not be surprised if a B650 is under €150 a year from now.
 
  • Winner
Reactions: Judge Dredd
Question: Through a unique fluke I got a 5800X completely free.

Currently running a 5600X. I was thinking about upgrading it in a few years to a 5800X3D.

Should I bother, or just use the base 5800X instead?
 