GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Honestly I feel like my next upgrade is going to have to be an APU (whenever one comes out with performance higher than a 1060), but considering motherboard prices have been rising that doesn't seem likely either.
 
Why not wait for a while? It doesn't seem like you're in a hurry, and none of the three have launched their full product stacks yet. Plus, Yurope prices aren't analogous to US prices; sometimes AMD is just as expensive. Now, you may be interested in N32, the 7700 XT that hasn't been released yet.
  • AMD is about a generation behind Nvidia in RT, so their 7000 series is roughly on par with Nvidia's 3000 series.
  • I don't think you'd stand to benefit much from a GPU upgrade in the scenarios you gave.
  • FSR will see wider adoption since it's used on consoles, but Nvidia pays studios a lot of money to make sure their tech is used on PCs.
Well, if even some random person on the Kiwifarms, of all places, is also telling me to wait, then that's probably the best option. The thing that was worrying me is all these videos about 8 GB of VRAM starting to become obsolete, but I played RE4R (one of the two games I'm actually excited to play this year, the other being Tekken 8) with the DLSS mod at 1440p and I have yet to dip below 60 fps with almost everything at max, except for RT, so I really haven't been affected by the lack of VRAM.

As far as AMD goes, I have yet to see a single Radeon GPU more expensive than its Nvidia counterpart, and considering how conniving and shameless Nvidia has become, when my 2070S starts to really struggle, and if the status quo is the same by then, I'll probably upgrade to Radeon. Hopefully by the time of the Nvidia 6000 series, AMD will have gotten the RT shit sorted out, so lazy developers who slap it on games to take care of lights and shadows don't hurt our GPUs too much.
 
Honestly I feel like my next upgrade is going to have to be an APU (whenever one comes out with performance higher than a 1060), but considering motherboard prices have been rising that doesn't seem likely either.
It's a good choice for a small form factor system. As a budget choice, we'll have to see later on in the life of the AM5 socket. There was a rumor that both Rembrandt and Phoenix desktop APUs would launch, which would be a promising start since Rembrandt could be the cheap one. Strix Point rumors are also good, and it should be faster than a 1060 even if Phoenix isn't.

You could use a supposedly sub-$100 A620 motherboard for an APU build and you won't lose anything that matters. Prices on every component could drop years from now, as we saw with AM4.

If this is just GPU doomerism, prices will recover. APUs will eventually take over 1080p gaming, but discrete GPUs are here to stay for now. There will be $150-250 GPUs that smash APUs.

Intel should make their own desktop APUs too. All advice given is null and void if China invades Taiwan.

The reviews of the 7800x3d are up and it looks like it performs about as well as expected. Nothing major but the power efficiency is nice.


https://www.techpowerup.com/review/amd-ryzen-7-7800x3d/
I don't even understand why the 7800X3D has a 120W TDP.
 
Steam survey showing a huge influx of ex-mining cards now in the hands of gamers.
steam.png


Rumours are saying Nvidia's response (at least for the 4070) is to force the AIBs to accept razor-thin margins.
 
Steam survey showing a huge influx of ex-mining cards now in the hands of gamers.
View attachment 4992480

Rumours are saying Nvidia's response (at least for the 4070) is to force the AIBs to accept razor-thin margins.
I saw this a couple of days ago. That month-to-month jump for the RTX 3060 is almost unbelievable.

AMDead?
 
Steam survey showing a huge influx of ex-mining cards now in the hands of gamers.
View attachment 4992480

Rumours are saying Nvidia's response (at least for the 4070) is to force the AIBs to accept razor-thin margins.
That's really interesting and really good; maybe raytracing will go beyond the equivalent of "shiny water in Morrowind" in a year or two.
 
I wouldn't blame them. PC "enthusiasts" are kind of retarded. They'll gladly pay obscene amounts of money for anything. They don't actually research the equipment they need; instead they base purchasing decisions on still-shot pixel hunts at Digital Foundry or 50-game averages with a shit ton of outliers.

They don't actually care when companies screw them over. They bitch and moan about it, but then the pay piggies line right back up. They cry for Intel to "save" them with cheap GPUs, but you know damn well they'll never seriously consider buying one. They'll have one bad driver experience and then it'll be 10 years of "Intel drivers kicked my puppy that one time".

In fact, they always do this. They'll write off an entire company if they had a single bad experience 5 years ago. Oh, unless it's a company they love. Then it's time to move the goalposts instead of acknowledging that every company can have issues.
 
The reviews of the 7800x3d are up and it looks like it performs about as well as expected. Nothing major but the power efficiency is nice.

I think it's funny that in the post-Dennard era, taking advantage of a new lithography to volt a chip as high as possible and suck down eye-watering amounts of energy to eke out tiny IPC uplift is being branded as "efficiency." Meanwhile, Apple's taking innovative approaches to what's actually on the die and how it's structured to keep power consumption to sane levels.
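Some napkin math on why that clock-chasing gets so expensive once Dennard scaling is gone: dynamic power goes roughly as C·V²·f, and the last few hundred MHz usually need a voltage bump on top of the frequency bump. The numbers below are illustrative assumptions, not measurements from any particular chip.

```python
# Napkin math: dynamic power scales roughly as C * V^2 * f, so buying a little
# more frequency with a little more voltage costs disproportionately more power.
# All numbers here are illustrative assumptions, not measurements.

def relative_dynamic_power(freq_scale: float, volt_scale: float) -> float:
    # P_new / P_old = (V_new / V_old)^2 * (f_new / f_old)
    return volt_scale ** 2 * freq_scale

# ~10% more clock that needs ~10% more voltage:
print(relative_dynamic_power(1.10, 1.10))  # ~1.33x the power for ~10% more frequency
```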

That's really interesting and really good; maybe raytracing will go beyond the equivalent of "shiny water in Morrowind" in a year or two.

The problem with raytracing is that there isn't much left for it to improve on. When I saw raytracing demos back in the '00s, rasterizers couldn't do dynamic shadows and reflections at all. Now they can do basically everything, so the only obvious advantage raytracing has is that it's much easier to code up accurate shadows and reflections, but still at a huge performance hit, and rasterization techniques keep advancing. I'm not sure how the rasterization vs. raytracing arms race will shake out. If having accurate shadows in a game means you need a 2 kW power supply, it's going to remain a parlor trick.
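To make the "easier to code up" point concrete, here's a toy sketch of a shadow query done both ways. The scene data and numbers are made up for illustration; this isn't real engine code.

```python
# Toy shadow query, two ways. The ray version is a handful of lines and is
# correct by construction; the shadow-map version is cheap per pixel but relies
# on a pre-rendered light-space depth buffer plus bias/resolution fiddling.

import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    # Standard ray/sphere intersection test (direction assumed normalized).
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0
    return t > 1e-4  # hit in front of the shading point

def in_shadow_raytraced(point, light_pos, spheres):
    # Fire one shadow ray toward the light and test the actual geometry.
    # (Ignores hits beyond the light for brevity.) Cost scales with scene size.
    direction = light_pos - point
    direction = direction / np.linalg.norm(direction)
    return any(ray_hits_sphere(point, direction, c, r) for c, r in spheres)

def in_shadow_shadow_map(depth_map, light_space_xy, light_space_depth):
    # Compare the point's depth in light space against a depth map rendered from
    # the light. Fast lookup, but accuracy depends on map resolution and a
    # hand-tuned bias to avoid shadow acne.
    x, y = light_space_xy
    return light_space_depth > depth_map[y, x] + 0.005

# One occluding sphere sitting between the shading point and the light:
spheres = [(np.array([0.0, 1.0, 0.0]), 0.5)]
print(in_shadow_raytraced(np.array([0.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0]), spheres))  # True
```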
 
I think it's funny that in the post-Dennard era, taking advantage of a new lithography to volt a chip as high as possible and suck down eye-watering amounts of energy to eke out tiny IPC uplift is being branded as "efficiency."
Huh? Some of these cpus are still extremely low wattage for what they do. That 7800X3D running at about 50w for top end gaming performance is pretty incredible. There's nothing eye-watering about it.

The only time modern cpus get anywhere near that is when you decide to go full OC/PBO and max out all the threads for work. Then they blow Apple shit out of the water. Yes, I hated the initial AM5 release where they just ran everything maxed out and said "nah, 95C is fine", but then it turns out you can turn on eco mode in the BIOS and have almost no real performance drop while cutting the power draw and temps significantly.

Some stuff like the 13900K and whatnot do still suck down a decent amount of power while gaming, though.
 
Huh? Some of these cpus are still extremely low wattage for what they do. That 7800X3D running at about 50w for top end gaming performance is pretty incredible. There's nothing eye-watering about it.

It's running at 50W because it's barely doing anything. Saturate the chip, and then tell me what the numbers are. The touted efficiency of AMD chips is nothing more than the fact that TSMC's N5 has higher density than N7 or Intel 7. I mean, good on AMD for spinning off GloFo and going to TSMC, but that was a good business decision.

The way AMD has been taking advantage of TSMC's manufacturing advantage over Intel wins zero prizes for efficiency. They're taking the same old, bloated x86_64 core designs, bloating them up even more, and driving the IPC up. That's not efficient design. We're at the point now in chip design where every additional stage added to a branch predictor, and every watt burned to push the IPC up, has insanely diminishing returns.

Let me illustrate: this is the Cinebench efficiency rating (which is more meaningful than gaming "frames per watt," since Cinebench actually keeps the CPU fairly busy):
1680972912181.png

What you are seeing here is that the rated multithread efficiency of a chip fabricated on TSMC's N5 node is barely ahead of an Intel chip fabbed on Intel 7 and another AMD chip fabbed on TSMC N7. In fact, the 7800X3D is barely ahead of the 7700, so it doesn't look like saving yourself some round trips to the DIMMs is really saving a meaningful amount of power.
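For anyone who hasn't seen how that rating works: as far as I can tell it's just points per watt, the benchmark score divided by the power drawn during the run. Quick sketch with made-up numbers (not the values from the chart) to show why a denser node doesn't automatically buy a better rating if the extra performance is bought with extra power:

```python
# Efficiency rating as points per watt: score divided by power during the run.
# The two "chips" below are hypothetical examples, not data from the chart.

def points_per_watt(score: float, watts: float) -> float:
    return score / watts

newer_node_chip = points_per_watt(score=30_000, watts=140)  # hypothetical N5-class part
older_node_chip = points_per_watt(score=24_000, watts=115)  # hypothetical N7 / Intel 7-class part

print(round(newer_node_chip), round(older_node_chip))  # ~214 vs ~209 points/W: barely apart
```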

If AMD had efficient designs, all of the chips at the top of TechPowerUp's chart would be from AMD, and zero Intel chips would be anywhere in the running. Intel's core engineers are hobbled by their inferior manufacturing, so they've been forced to find new efficiencies just to keep the company alive. The fact that we see so many of those big.LITTLE designs in the top half of the efficiency chart shows just how wasteful x86 cores had gotten for parallel workloads.

This is from the COMSOL physics chart. A $200 i5 rated at 65W fabbed on Intel 7 should not be anywhere close to a Ryzen 7 fabbed on TSMC N5. Yeah, it's 15% slower. Point is, it shouldn't be. It should be getting its shit pushed in.

1680974459398.png

The only time modern cpus get anywhere near that is when you decide to go full OC/PBO and max out all the threads for work. Then they blow Apple shit out of the water.

There isn't an x86-based design in the same league as aarch64 when it comes to performance per watt, and there can't ever be, because an x86 chip will always take a hit to decode CISC instructions. And then Apple silicon is the best of the aarch64 designs out there, since you don't have to pay nearly as much energy to move data.
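Here's a toy illustration of the decode point. The encodings are invented, nothing like real x86 or ARM machine code; the thing to notice is the serial dependency: a variable-length stream has to be walked instruction by instruction before you even know where the next one starts, while fixed-width boundaries are known up front.

```python
# Toy decode comparison with invented encodings (not real x86 or ARM).

def boundaries_fixed_width(code: bytes, width: int = 4):
    # Fixed-width ISA: every instruction boundary is a multiple of the width,
    # so a wide decoder can start on many instructions in parallel.
    return list(range(0, len(code), width))

def boundaries_variable_length(code: bytes):
    # Variable-length ISA: the length of each instruction depends on its leading
    # byte, so finding instruction N means decoding 0..N-1 first (or speculating
    # and throwing work away).
    toy_lengths = {0x0F: 3, 0x66: 6}  # invented prefix -> length rules
    offsets, i = [], 0
    while i < len(code):
        offsets.append(i)
        i += toy_lengths.get(code[i], 2)  # invented default length of 2 bytes
    return offsets

stream = bytes([0x0F, 1, 2, 0x66, 0, 0, 0, 0, 0, 0x90, 0])
print(boundaries_fixed_width(stream))      # [0, 4, 8]
print(boundaries_variable_length(stream))  # [0, 3, 9]
```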

Yes, I hated the initial AM5 release where they just ran everything maxed out and said "nah, 95C is fine", but then it turns out you can turn on eco mode in the BIOS and have almost no real performance drop while cutting the power draw and temps significantly.

Right, because volting the shit out of a CPU so you can win online dick-measuring contests, which is what AMD is doing, isn't very efficient. Great for selling chips, though. Any time I see some manufacturer bragging about the "efficiency" of a chip that draws enough power to heat a jacuzzi, I just roll my eyes.

My criticisms apply just as much to Intel's i9-13900K, FWIW. It consumes more power than my entire laundry room to get something like a 10% uplift over the high-end AMD chips.
 
It's running at 50W because it's barely doing anything.
What are you talking about? The 50W is its average draw during gaming. You know, the reason people buy this specific chip? I don't care about saturating the entire chip to 100% with "make work" because that's not what this chip is aimed at.
Screenshot 2023-04-08 140033.png

As for overall efficiency: okay, two Intel CPUs are higher. So what? Those two CPUs are still going to do less work overall. It's cool that you have an autistic obsession with efficiency alone, but most people want good efficiency while also getting MORE total work done. In multithreaded, no Intel CPUs are higher; in fact, wow, that 7900 lays waste to all. Those are also at stock settings, when I just said you can lose almost no actual performance and drop the energy usage a lot. You just go into the BIOS. So scary. Yes, they maxed everything out in the presentation. No, it does not reflect poorly on the chip if someone doesn't realize you can easily make it much more efficient.
Screenshot 2023-04-08 140104.png
 
This is one of those slap-fights I was promised.

Once you start talking about the 35-65W range, which even the 170W Zen 4 CPUs perform well at, I don't care that it uses more power than ARM. You're still only looking at an incandescent bulb's worth of power.

AMD could do more on the efficiency front. Their new APUs should turn out better since they're monolithic and don't have the chiplet penalty. There will be 65W desktop APUs on AM5 eventually, with good enough graphics performance to not need a GPU for 1080p gaming.

Limit a 7900X to 65W, and you have a chart-topping 7900.

What Apple is doing with memory will eventually be copied by AMD and Intel. They are taking Apple seriously, but products that are coming out today were probably planned 5 years ago and set in stone 2 years ago.
 
This is one of those slap-fights I was promised.

Once you start talking about the 35-65W range, which even the 170W Zen 4 CPUs perform well at, I don't care that it uses more power than ARM. You're still only looking at an incandescent bulb's worth of power.

AMD could do more on the efficiency front. Their new APUs should turn out better since they're monolithic and don't have the chiplet penalty. There will be 65W desktop APUs on AM5 eventually, with good enough graphics performance to not need a GPU for 1080p gaming.

It's funny, because Ryzen and EPYC are almost antithetical in terms of design philosophy. What I mean is, EPYC was designed under the assumption that they would have process parity with Intel. After Xeon drove Opteron out of the market, AMD knew they had to do something different, which is how they came up with chiplets. It allowed them to put a lot more silicon in the socket at a much, much lower cost than Intel. They've just made one smart design choice after another, getting the most out of the design space. Even if Intel's foundry were keeping up, Genoa would be a better CPU than Sapphire Rapids.

Meanwhile, the design philosophy of Ryzen seems to be little more than LMAAAAOOOO TSMC RULES SUCK MY DIIIIIIICK. Intel seems to be much more in tune with how software in the 2020s is actually getting built than AMD is, probably since they develop compilers, profilers, runtime libs, and APIs for parallel programming, while to the extent AMD even produces software, it hires Indians to write shitware. If your product can make meaningful use of more than about 4 hardware threads, it's almost certainly a scalable design. E-cores pay off bigly if you're doing task-based parallelism with big work queues. This results in Alder Lake punching way above its weight for threaded programs.
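To be concrete about what I mean by task-based parallelism with big work queues, here's a minimal sketch. The worker count and the workload are arbitrary stand-ins; the point is that with far more small tasks than cores, every core, fast P-core or slow E-core, just pulls the next task as soon as it's free.

```python
# Minimal work-queue sketch: many small independent tasks feeding a shared pool.

from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id: int) -> int:
    # Stand-in for a real unit of work (a tile, a file, a compile job, ...).
    return sum(i * i for i in range(100_000)) + tile_id

if __name__ == "__main__":
    tiles = range(256)  # far more tasks than cores keeps the queue from starving anyone
    with ProcessPoolExecutor() as pool:
        # Each worker grabs the next tile the moment it finishes its current one,
        # so fast and slow cores both stay busy until the queue drains.
        results = list(pool.map(render_tile, tiles))
    print(len(results))  # 256
```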

What Apple is doing with memory will eventually be copied by AMD and Intel. They are taking Apple seriously, but products that are coming out today were probably planned 5 years ago and set in stone 2 years ago.

I think they are going to have to in the laptop market. My Ryzen-based laptop can't survive for 3 hours, even in eco mode. My M1 MB Pro goes all day.
 
Meanwhile, the design philosophy of Ryzen seems to be little more than LMAAAAOOOO TSMC RULES SUCK MY DIIIIIIICK. Intel seems to be much more in tune with how software in the 2020s is actually getting built than AMD is, probably since they develop compilers, profilers, runtime libs, and APIs for parallel programming, while to the extent AMD even produces software, it hires Indians to write shitware. If your product can make meaningful use of more than about 4 hardware threads, it's almost certainly a scalable design. E-cores pay off bigly if you're doing task-based parallelism with big work queues. This results in Alder Lake punching way above its weight for threaded programs.
And yet AMD chips are still leading the pack on the multicore efficiency chart. Intel skews much lower. Again... what are you talking about? Am I just misunderstanding what you're getting at? In general, Intel has been less efficient for a while now.
efficiency-multithread.png

No, really. I don't get this "AMD is just maxing out the power" shtick. Intel are the ones operating the space heaters.
 