GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

Both Intel and AMD have fumbled a whole generation this year, with most of the jokes now being Zen 5% and Arrow Late -5%.
 
Both Intel and AMD have fumbled a whole generation this year, with most of the jokes now being Zen 5% and Arrow Late -5%.
tbh Intel was never going to win over DIY PC gamers this gen after the Raptor Lake problems, and they knew it. They're right to focus on efficiency and IPC for their big-ticket enterprise customers rather than trying to chase X3D performance.
 
Some numbers to put on that, which I found on the internet:
Also worth noting the Core Ultra 9 285K flagship will be a bit slower than the i9-14900K, but more efficient:

Intel admits Core Ultra 9 285K will be slower than i9-14900K in gaming

Maybe that's why it's not a Core Ultra 11 295K?

Meanwhile, AMD is announcing a bunch of Epyc and AI products today:

Watch The AMD “Advancing AI 2024” Event Live Here: EPYC, Instinct, PRO Launch & Major Announcements Expected
AMD launches Ryzen AI 300 PRO series: up to 12 CPU cores and 16 RDNA3.5 CUs (Pro version of Strix Point)

AMD launches EPYC 9005 “Turin” with up to 192 Zen5c cores
At the top:
$14,813 for 192-core Zen 5c
$13,564 for 160-core Zen 5c
$13,006 for 144-core Zen 5c
$12,984 for 128-core Zen 5
$12,141 for 128-core Zen 5c

At the bottom:
EPYC 9175F = 16-core Zen 5 / 4.2-5.0 GHz / 320W / 512 MiB L3 / $4,256 - This crazy model must be using 16 chiplets with one core enabled on each, so every core gets its very own 32 MiB of L3 (see the sketch after this list). It also has the highest clocks of the whole lineup.
EPYC 9135 = 16-core Zen 5 / 3.65-4.3 GHz / 200W / 64 MiB L3 / $1,214
EPYC 9115 = 16-core Zen 5 / 2.6-4.1 GHz / 125W / 64 MiB L3 / $726
EPYC 9105 = 8-core Zen 5 / 3.6-4.1 GHz / 125W / 64 MiB L3 / $527 - Two chiplets with 4 cores enabled on each if L3 is correct.
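If the listed L3 sizes are accurate, you can back the chiplet counts out of them. A minimal back-of-the-envelope sketch, assuming the standard Zen 5 CCD layout of 8 cores and 32 MiB of L3 per chiplet (the SKU figures are just the ones from the list above):

```python
# Back-of-the-envelope chiplet math for the low-core-count Turin SKUs above.
# Assumption: a standard Zen 5 CCD carries 8 cores and 32 MiB of L3, and AMD
# only varies how many CCDs are populated and how many cores are fused off.

L3_PER_CCD_MIB = 32

skus = {
    # name: (cores, total L3 in MiB) -- figures from the list in this post
    "EPYC 9175F": (16, 512),
    "EPYC 9135": (16, 64),
    "EPYC 9115": (16, 64),
    "EPYC 9105": (8, 64),
}

for name, (cores, l3_mib) in skus.items():
    ccds = l3_mib // L3_PER_CCD_MIB   # chiplet count implied by the total L3
    active_per_ccd = cores / ccds     # cores that must be enabled on each chiplet
    print(f"{name}: ~{ccds} CCDs, ~{active_per_ccd:g} active core(s) per CCD")

# Expected output:
# EPYC 9175F: ~16 CCDs, ~1 active core(s) per CCD   <- one core per chiplet, 32 MiB each
# EPYC 9135: ~2 CCDs, ~8 active core(s) per CCD
# EPYC 9115: ~2 CCDs, ~8 active core(s) per CCD
# EPYC 9105: ~2 CCDs, ~4 active core(s) per CCD
```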

They will probably announce availability of the Instinct MI325X. This is identical to the MI300X, except with 288 GB of HBM3E instead of 192 GB of HBM3, and higher memory bandwidth.
 
Also worth noting the Core Ultra 9 285K flagship will be a bit slower than the i9-14900K, but more efficient:

I think they need a course correction. Yeah, it's good for branding to run something at 200+ FPS at 1080p and low settings, but I doubt it affects the market as much as "our chips are melting due to a firmware bug lol" or "yeah you need 8 fans and a 1000W PSU for this bad boy or your machine will explode."

I'll spare you my every-release autism about people talking about "gaming" as though it's a specific workload, or giving the impression that getting an uplift at minimal settings will translate into a general performance improvement.
 
Besides, there's no real reason to buy one. Any 3-5 year old card will do fine. Shit, an AMD card will do just fine.
I guarantee none of them can tell the difference anyway between a three-year-old card doing 80 fps at 1440p and a new card running 100 or 120 fps at the same resolution.
Or shit, telling the difference between running a game at 4K versus 1440p on your sub-30-inch monitor. I can barely tell the difference on my 80-inch TV.
Gamers have just gotten obsessed with consoooming. Pointless upgrades because of a minor 10% boost.
Though if your current system is working fine for you, I wouldn't bother. There's no reason to buy a new PC if your current one is playing all the shit you want to play fine.
Also it's worth reiterating - PS5 Pro is around a very gimped 4070 in performance. The base PS5 is around a 2070. The most common cards on Steam are 3060, 4060, 4060 ti, 1660, and 1650. A current-gen midrange card or even a current-gen low-end card is going to remain viable for new releases for a while.
Exactly, this confirms what I think. I have a 2070, and the only real substantial change would be going from playing Helldivers 2 at the lowest quality settings, at 1080p with shitty frames, to 1440p at 60+ fps. That's it, plus a couple of Unity games just running better overall. Also modded Minecraft, as that's a bottomless resource-eating pit that somehow just requires more and more with each new version.
About the prices, I think gamers(tm) have fallen into the same shit as iToddlers, buying the latest Apple product the moment it releases even though they bought a new phone less than a year ago, which has let prices get overinflated. Nvidia doesn't owe me anything and I'll just have to cope with those prices, but holy shit it hurts to see all of them getting closer to the $1k mark when before, the only cards over $1k were the 90 series.

I believe we're getting closer and closer to a point of stagnation for GPUs, given the diminishing improvements each new series brings over the previous one.
 
I believe we're getting closer and closer to a point of stagnation for GPUs, given the diminishing improvements each new series brings over the previous one.
We go through these cycles where we pump more and more power into components to eke out increasingly marginal performance gains, until it becomes too much and we course-correct toward focusing on efficiency instead, even if absolute performance largely stagnates.

We're in a situation right now on the consumer front that's untenable. These massive 3-slot GPUs that are now midrange with their overbuilt cooling, CPUs pushing power envelopes so hard that they're melting in their sockets, etc., all that shit cannot continue. This shit quite literally scares the hoes (aka the median consumer) away from buying anything when their only options are lackluster performance or a 700W+ RGB monstrosity. And it doesn't help the enthusiast either, as it necessitates expensive, overbuilt cooling and power delivery hardware on top of the already-expensive dies.

Outside of datacenters and enterprise, Nvidia, AMD, and Intel make most of their money in the mobile market. They need to reset their approach towards efficiency from time to time if they want to make money, because you can't (or really shouldn't) stick a 200W+ GPU into a laptop.
 
We go through these cycles where we pump more and more power into components to eke out increasingly marginal performance gains, until it becomes too much and we course-correct toward focusing on efficiency instead, even if absolute performance largely stagnates.

We're in a situation right now on the consumer front that's untenable. These massive 3-slot GPUs that are now midrange with their overbuilt cooling, CPUs pushing power envelopes so hard that they're melting in their sockets, etc., all that shit cannot continue. This shit quite literally scares the hoes (aka the median consumer) away from buying anything when their only options are lackluster performance or a 700W+ RGB monstrosity. And it doesn't help the enthusiast either, as it necessitates expensive, overbuilt cooling and power delivery hardware on top of the already-expensive dies.

Outside of datacenters and enterprise, Nvidia, AMD, and Intel make most of their money in the mobile market. They need to reset their approach towards efficiency from time to time if they want to make money, because you can't (or really shouldn't) stick a 200W+ GPU into a laptop.
They need to refocus on efficiency the same way that Apple's M-series processors changed the CPU market.

Imagine a 100-150W GPU with decent, attractive performance.

One of the most interesting things in gaming these past couple of years was the Steam Deck, because it showed that even a pretty puny CPU/GPU can run most titles just fine.
 
We go through these cycles where we pump more and more power into components to eek out increasingly marginal performance gains until it becomes too much and we course correct towards focusing on efficiency instead even if absolute performance largely stagnates.
It's not so much a cycle as it is the S-curve every technology (steel blast furnaces, airplanes, antibiotics, you name it) goes through:

[Attached image: the technology S-curve]

The period of rapid advance in silicon integrated circuits is over. We're butting up against fundamental, physical limits, from how small a wire can get to how much power you can really push through a box, and are now in the tail on the right-hand side. Each additional gain costs us more and more, while achieving less and less.
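For the curious, the S-curve here is just the logistic function: steep in the middle, nearly flat at both ends. A toy sketch of that shape (the parameter values are arbitrary and purely illustrative, not fitted to any real process-node data):

```python
import math

def logistic(t, ceiling=1.0, k=1.0, midpoint=0.0):
    """Classic S-curve: capability approaches `ceiling` as effort t grows."""
    return ceiling / (1.0 + math.exp(-k * (t - midpoint)))

# The marginal gain from one more unit of effort peaks mid-curve, then shrinks:
for t in range(-4, 7, 2):
    gain = logistic(t + 1) - logistic(t)
    print(f"effort={t:>3}  capability={logistic(t):.3f}  marginal gain={gain:.3f}")
```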
 
The period of rapid advance in silicon integrated circuits is over. We're butting up against fundamental, physical limits, from how small a wire can get to how much power you can really push through a box, and are now in the tail on the right-hand side. Each additional gain costs us more and more, while achieving less and less.
We should be thankful for the exponential scraps we get. Not many technologies get 10% better per dollar every year or three. It's also made it so that the cheap old stuff from several years ago is still good, like a Skylake quad-core.

[Attached image: moresir.jpg]
 
It's not so much a cycle as it is the S-curve every technology (steel blast furnaces, airplanes, antibiotics, you name it) goes through:

[Attached image: the technology S-curve]

The period of rapid advance in silicon integrated circuits is over. We're butting up against fundamental, physical limits, from how small a wire can get to how much power you can really push through a box, and are now in the tail on the right-hand side. Each additional gain costs us more and more, while achieving less and less.
Only until a new technology is discovered which restarts the trend.

...

Back to my use case of a virtual Windows VM with 8/16 of a 2696 v3 and 32 GB RAM: what would be the best GPU at the $200 price point and at the $400 price point for 3D CAD design? Or would there not be meaningful benefits over the K6000 with 12 GB, except for power efficiency?
 
Only until a new technology is discovered which restarts the trend.

...

Back to my use case of a virtual Windows VM with 8/16 of a 2696 v3 and 32 GB RAM: what would be the best GPU at the $200 price point and at the $400 price point for 3D CAD design? Or would there not be meaningful benefits over the K6000 with 12 GB, except for power efficiency?
At the $450 price point you could get a 4060 Ti 16 GB, and for just under $300 you can get a 3060 12 GB. Both would be roughly in the price class you're looking at, more performant, and more efficient. I would check benchmarks of those cards for your specific application to make sure the performance is still acceptable, but some surface-level googling shows the 3060 would be more than twice as performant while consuming around half as much power.
 
At the $450 price point you could get a 4060 Ti 16 GB, and for just under $300 you can get a 3060 12 GB. Both would be roughly in the price class you're looking at, more performant, and more efficient. I would check benchmarks of those cards for your specific application to make sure the performance is still acceptable, but some surface-level googling shows the 3060 would be more than twice as performant while consuming around half as much power.
I suppose I should get a better picture of my use case. It looks like the K6000 has been dropped for Windows 11, but since the VM uses 10 LTSC it'll be fine; if this configuration works, though, I'll need to upgrade eventually. I keep gravitating towards the Intel Arc A770 16 GB if I can get a good price for it, but until I get the motherboard in the mail and build the server it's all theoretical, so I'm at the annoying point of deciding whether I want to spend more money or not.
 
I suppose I should get a better picture of my use case. It looks like the K6000 has been dropped for Windows 11, but since the VM uses 10 LTSC it'll be fine; if this configuration works, though, I'll need to upgrade eventually. I keep gravitating towards the Intel Arc A770 16 GB if I can get a good price for it, but until I get the motherboard in the mail and build the server it's all theoretical, so I'm at the annoying point of deciding whether I want to spend more money or not.
I'd also consider grabbing an RX 6800. Under $400 and has 16 GB of VRAM, while also being about 50% faster than the A770.
 
Only until a new technology is discovered which restarts the trend.
Analog computing can do application-specific computations far faster and more efficiently than the digital equivalent, at the cost of precision and reproducibility. This older form of computing will likely see a lot of use once digital computing becomes more expensive due to supply chain fragility and geopolitical tensions.
 
Analog computing can do application-specific computations far faster and more efficiently than the digital equivalent, at the cost of precision and reproducibility. This older form of computing will likely see a lot of use once digital computing becomes more expensive due to supply chain fragility and geopolitical tensions.
That's where lots of the large-scale AI improvements will probably come from, but for personal computers I'm seeing stagnation, albeit the kind where you still need to buy new hardware because of security issues.
 
That's where lots of the large-scale AI improvements will probably come from, but for personal computers I'm seeing stagnation, albeit the kind where you still need to buy new hardware because of security issues.
Yep :c
And those who can't buy new hardware will either reduce their online time or join the botnet, I guess.
Though it remains to be seen how the fallout from the Siege of Spruce Pine will affect all the incentive structures in place. If new hardware can no longer be made at a price even close to what it was before due to the mine closure, will it incentivize the hardware companies to work on security mitigations at all? One can only hope.
 
Personal computers need more power the way cars do: they don't. Frankly, PC technology could be frozen in 2006 eternally, and we'd barely notice.
God no. Remember the huge laptops we used to lug around?

I seriously think we’re reaching an inflection point though.

Graphics are GOOD ENOUGH, and the problem is that graphics that keep getting better also mean games that keep getting more expensive. One hundred million dollars is like the low-end baseline for a AAA game these days.

Shit is unsustainable.

I reckon AI will be the next big thing in gaming (PS6 and on). And they'll probably push that to the GPU as well.

The period of rapid advance in silicon integrated circuits is over. We're butting up against fundamental, physical limits
Yup. Moore’s law is basically over. One day, sooner rather than later, the fab game will just end. Not because it’s not possible to go smaller, but because it’s just too expensive.

Everything after that will be tiny, incremental improvements because new design rules allow transistors to be packed 1% denser or whatever.

Outside of gaming, I fear what that will mean to society as a whole.

Microelectronics that got 2X faster, smaller, or cheaper every few years have led to enormous productivity and economic growth.

Everything you made, whether cars, microwaves, TVs, or refrigerators, rode that wave as well. And now it's basically over.

For the first time in over 50 years, we can't expect twice the performance, or twice the performance per dollar, every few years.

The 2030s may look drastically different from every decade preceding them.
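To put rough numbers on what the end of that compounding means, here is a toy comparison, assuming (purely for illustration) a 2x improvement every two years versus a 1% improvement every two years:

```python
# Toy compounding comparison: the old cadence (roughly 2x every couple of
# years) versus the "1% denser design rules" world described above.
# Both cadences are illustrative assumptions, not measured figures.

years = 10
steps = years / 2  # one improvement step every ~2 years

moore_style = 2.0 ** steps    # doubling each step
incremental = 1.01 ** steps   # 1% gain each step

print(f"Over {years} years:")
print(f"  2x every 2 years -> ~{moore_style:.0f}x overall")   # ~32x
print(f"  1% every 2 years -> ~{incremental:.2f}x overall")   # ~1.05x
```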
 