GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

It's not the worst idea. AMD got rid of its foundry, which was a trash heap of incompetence. Even Intel's most recent fuckup is downstream of its foundry problems. 13th and 14th gen were rushed to market because Intel 4 was behind schedule, which in turn was due to them being way behind the curve on EUV, which goes back to Brian Krzanich redirecting funds from R&D into diversity.
It might have worked a decade ago, but now it’s too late.

The problem is that the old processes, the kind of mature nodes where TSMC earns most of its money, were designed specifically to make Intel CPUs.

You can’t just click a button and make a 22nm foundry make memory chips or ASICs or whatever.

And besides the tools, there's a huuuge software and design stack.

And their new processes just aren't delivering yet. The "five nodes in four years" plan turned out to be a sham, as they cancelled one node outright and also scrapped a process meant for integrated on-die memory.

For anyone who thinks that Intel has a super super duper chance at being a real player, I’d encourage you to check out comments from Intel employees at places like this:


It’s doom and fucking gloom in there.

Reasonable.
I don't know, I kinda doubt it.

First of all, the fabs likely have the same problem the rest of Intel shares: politicking and a fucked-up internal culture.

Secondly, many of the talented people could tell which way the wind was blowing and left.

Thirdly, who's going to be their customer? Their old nodes aren't set up for contract work, and even if they were, there are plenty of other competitors with a proven track record.

Their leading nodes aren't there yet, and on top of that, they NEED Intel's chip business to survive.

And they need money. A shit ton of money in capex just to stay competitive.

Not saying that they’re definitely doomed, but just saying it’s not exactly looking great.

If you're placing an order for 20,000 laptops that will be used in something like the school system, you will think about repair costs that won't be covered by warranty.
Meh… If it’s a cheap ass Chromebook for a few hundred dollars, it really wouldn’t take much before just buying a new one is cheaper than fiddling with repairs.

And it comes from public budgets, so…
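Back-of-the-envelope with completely made-up but plausible numbers (a $250 Chromebook, a $60 replacement part, an hour of a technician's time at $70):

```latex
\text{repair} \approx \underbrace{\$60}_{\text{part}} + \underbrace{1\,\text{h} \times \$70/\text{h}}_{\text{labor}} = \$130
\qquad \text{vs.} \qquad \text{replace} \approx \$250
```

Add shipping and the kid sitting without a machine for a week, and the gap closes fast.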

I agree with Susanna. Repairable laptops are a nice idea in principle.

In reality, computers have become disposable commodities. Few people fix their fridges, washing machines, or TVs; you just buy a new one.

Same thing with laptops. You buy one for around $1,000 and get a new one when it stops working, or after 3-4 years anyway.

It’s capitalism baby.
 
The big problem with power-saving functionality in modern computers isn't the silicon; even x86 mobile SoCs are pretty great at it. It's mostly that the software, from the UEFI firmware on up, takes little advantage of it, when it isn't outright ignoring or disabling the functionality to score a better burst benchmark. Writing software in energy-efficient ways is just not in the vocabulary of the modern programmer, and there's only so much the hardware architecture can do.

As a developer, I write platform-neutral C++ code. The energy efficiency of its execution is wholly dependent on the platform. It runs far, far more efficiently on an M2 than on any other platform I've tested, simply because the M2 never blasts up to 5 GHz. It's really that simple. I have basically zero control over clock speed in my code. There are only so many ways to write a loop over an array or a hashtable lookup.
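Just to show how little there is to tune, here's the kind of code I mean (a throwaway sketch, names made up). Nothing in it can ask for a lower clock; whether it sips power or torches it is entirely the platform's call.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// The canonical array loop: the compiler picks the instructions,
// the CPU and OS pick the frequency. The source code has no say.
int64_t sum(const std::vector<int32_t>& values) {
    int64_t total = 0;
    for (int32_t v : values) total += v;
    return total;
}

// The canonical hashtable lookup: same story.
int price_of(const std::unordered_map<std::string, int>& prices,
             const std::string& item) {
    auto it = prices.find(item);
    return it != prices.end() ? it->second : -1;
}

int main() {
    std::vector<int32_t> data(1'000'000, 3);
    std::unordered_map<std::string, int> prices{{"gpu", 700}, {"cpu", 300}};
    std::cout << sum(data) << ' ' << price_of(prices, "gpu") << '\n';
}
```

Run it on an M2 and on a desktop x86 chip that boosts to 5 GHz and the joules consumed will differ enormously, and not one character of the source is responsible for that.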

The #1 thing a hardware architecture can do to be efficient is not clock as high as physically possible. The chart below is from Ivy Bridge, but all that's different on 5nm and 7nm is that the curve is shifted to the right.

[Attached chart: Ivy Bridge frequency vs. voltage/power curve]
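Rough first-order CMOS scaling makes the shape of that curve obvious. This is generic textbook math, not values read off the chart:

```latex
P_{\text{dyn}} \approx C_{\text{eff}}\, V^{2} f
\qquad\Rightarrow\qquad
E_{\text{per op}} = \frac{P_{\text{dyn}}}{f} \approx C_{\text{eff}}\, V^{2}
% Near the top of the V/f curve, V has to rise roughly in step with f,
% so power grows roughly as f^3 while throughput only grows as f,
% and energy per operation climbs roughly as f^2.
```

Which is why simply refusing to boost buys so much efficiency for so little perceived performance.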


At the end of the day, that's the bulk of what the various big.LITTLE designs are doing. They move entire tasks to low-speed cores so they're not uselessly exploding up to 5 GHz and gobbling up energy with no real impact on user experience. Apple pushes this further by keeping the big-core clocks low and using on-package memory.
 
This isn't a thread for slap-fights. Ignore the title.
I hate it when I try to start slap-fights and all I get back is politeness. Maybe I should "pretend" to be retarded.

A Sketchy Report Suggests That Intel Plans To Offload Its Sub-3nm Processes On TSMC, And An Industry Analyst Contends That It’s All Downhill From Here
https://irrationalanalysis.substack.com/i/148320728/intel-special-update (archive)

This is some analyst doom-and-gloom speculation about Intel. I saw the Irrational Analysis Substack mentioned in an Asianometry video.

An actual piece of news: AMD is attempting to reunify graphics and compute-oriented architectures. But it doesn't appear to be happening imminently:

AMD announces unified UDNA GPU architecture — bringing RDNA and CDNA together to take on Nvidia's CUDA ecosystem
JH: So, one of the things we want to do is ...we made some mistakes with the RDNA side; each time we change the memory hierarchy, the subsystem, it has to reset the matrix on the optimizations. I don't want to do that.

So, going forward, we’re thinking about not just RDNA 5, RDNA 6, RDNA 7, but UDNA 6 and UDNA 7. We plan the next three generations because once we get the optimizations, I don't want to have to change the memory hierarchy, and then we lose a lot of optimizations. So, we're kind of forcing that issue about full forward and backward compatibility. We do that on Xbox today; it’s very doable but requires advanced planning. It’s a lot more work to do, but that’s the direction we’re going.
 
I hate it when I try to start slap-fights and all I get back is politeness. Maybe I should "pretend" to be retarded.

A Sketchy Report Suggests That Intel Plans To Offload Its Sub-3nm Processes On TSMC, And An Industry Analyst Contends That It’s All Downhill From Here
https://irrationalanalysis.substack.com/i/148320728/intel-special-update (archive)

This is some analyst doom-and-gloom speculation about Intel. I saw the Irrational Analysis Substack mentioned in an Asianometry video.

An actual piece of news: AMD is attempting to reunify graphics and compute-oriented architectures. But it doesn't appear to be happening imminently:

All I can say is that if Intel pulls it off, they will have successfully overcome 10 years of investing in women and negroes instead of technology, and if they become just another TSMC client, I'm not sure what we'll see. Gaudi was a good idea crippled by their foundry. Alder Lake was a good idea, but without a real follow-up for 2 successive generations due to their foundry. Interesting ideas on Meteor Lake...once again held back by the foundry. Xeon is conceptually several years behind where it should be because Sapphire Rapids was 4 years late...foundry again! Ponte Vecchio kind of sucked; nobody I know who has access to it has anything good to say about it. Can't even blame women or Jews, since Brian Krzanich was neither.
 
Can't even blame women or Jews, since Brian Krzanich was neither.
Shush, women and Jews are great at computers. See Grace Hopper, who invented programming languages, or Ada Lovelace, who was the first programmer (for a theoretical machine, but still), or all the IBM secretaries who did the actual work making the mainframes run. And Israel seems to be the only part of Intel doing any actual work.

Direct your complaints at the yanks, that’s where it belongs.
 
An actual piece of news: AMD is attempting to reunify graphics and compute-oriented architectures. But it doesn't appear to be happening imminently:
About fucking time, my goodness. Hopefully they don't screw it up this time around.

Speaking of AMD GPUs, I did end up getting the AMD Firepro S7150x2 I mentioned earlier to work somewhat. It didn't boot at first, but I followed the instructions here and it worked: https://community.amd.com/t5/graphi...oot-the-server-supermicro-x9srg-f/td-p/112753
I was using a Supermicro X9-something-or-other motherboard, so I am unsure if the same options will be available on consumer-grade motherboards. I don't have the means to cool it properly yet, so I haven't done any serious testing on it other than playing around with lightweight LLMs via llama.cpp a bit using the Vulkan backend. The OS I used for this test was OpenSUSE Linux, so I have absolutely no idea if this will work at all on Windows. Needless to say, it does work if you want a cheap headless Vulkan/OpenCL/SYCL accelerator; it only cost me $32 on eBay.
The only downside is that this thing draws about 265W of power under full load. Once I find my BIOS programmer, I may try to modify the VBIOS to lower the power limit to something more reasonable so that I can fit 3+ of these cards in a single system.
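For anyone trying the same thing, a quick sanity check that the card actually shows up to Vulkan only takes a few lines. This is a minimal sketch assuming the Vulkan loader and headers are installed; build with something like g++ list_gpus.cpp -lvulkan.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>
#include <vulkan/vulkan.h>

int main() {
    // A bare instance is enough just to enumerate devices; no extensions or layers.
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "list_gpus";
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::cerr << "vkCreateInstance failed\n";
        return 1;
    }

    // Ask how many physical devices the loader sees, then fetch and name them.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props{};
        vkGetPhysicalDeviceProperties(dev, &props);
        std::cout << props.deviceName << '\n';  // the S7150x2 should appear as two devices
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

If both GPUs are listed there, llama.cpp's Vulkan backend should be able to see them too.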
 
Shush, women and Jews are great at computers. See Grace Hopper, who invented programming languages, or Ada Lovelace, who was the first programmer (for a theoretical machine, but still), or all the IBM secretaries who did the actual work making the mainframes run. And Israel seems to be the only part of Intel doing any actual work.

Direct your complaints at the yanks, that’s where it belongs.
Nice try, Jensen. You thought you could be sneaky like that, but I am not buying any of your overpriced Lovelace or Hopper cards. Old is gold!
 
PS5 Pro is official. High $700 price tag without a disc drive (they sell an external one). 45% faster GPU from higher compute unit count and faster memory, 2-3x faster raytracing, and PSSR AI-driven upscaling, possibly using a large NPU on the die.

AMD's Kraken/Krackan Point (small version of Strix Point) is confirmed (German) coming in 2025.
 
Shush, women and Jews are great at computers. See Grace Hopper, who invented programming languages, or Ada Lovelace, who was the first programmer (for a theoretical machine, but still), or all the IBM secretaries who did the actual work making the mainframes run. And Israel seems to be the only part of Intel doing any actual work.

Direct your complaints at the yanks, that’s where it belongs.
Yeah they're great and made amazing contributions to computer science and engineering, but I like to picture Dijkstra teabagging them all under his massive Dutch cojones, followed by Alan Turing and my homie G Muhammad Ibn Musa Al-Khwarizmi
 
PS5 Pro is official. High $700 price tag without a disc drive (they sell an external one). 45% faster GPU from higher compute unit count and faster memory, 2-3x faster raytracing, and PSSR AI-driven upscaling, possibly using a large NPU on the die.

AMD's Kraken/Krackan Point (small version of Strix Point) is confirmed (German) coming in 2025.
Who's going to buy the PS5 Pro? Even my roommate who's obsessed with Sony sold his PS5 within a month of buying it because it just doesn't have any exclusives.

And is Kraken Point what the Z2E will be based on?
 
And is Kraken Point what the Z2E will be based on?
Z1 Extreme was simply the top Phoenix 8-core with a different power curve.
Z1 non-Extreme was Phoenix2 aka Little Phoenix, based on a different die.

So it's likely that Z2 Extreme is the 12-core Strix Point with the full 16 CUs, and if there is a non-Extreme, it would be Kraken Point.

Little Phoenix = 2x Zen 4 + 4x Zen 4c cores, 4 CUs RDNA3, no NPU (Phoenix has one)
Kraken Point = 4x Zen 5 + 4x Zen 5c cores, 8 CUs RDNA3.5, 40+ TOPS NPU
 
Don’t know if it has been posted here before, but remember Intel’s Royal Core/Beast Lake project? That was originally headed by Jim Keller and aimed at delivering a new core with 2X IPC at lower power levels than Raptor Cove?

Well that whole project got canned recently, and apparently much of the team left Intel to start a new RISC V startup.

The resumes of the four founders are pretty impressive and who knows who else they’ve recruited.

[Attached images: resumes of the four founders]

BEST OF LUCK RISC V CHADS!
 
Don’t know if it has been posted here before, but remember Intel’s Royal Core/Beast Lake project? That was originally headed by Jim Keller and aimed at delivering a new core with 2X IPC at lower power levels than Raptor Cove?

Well that whole project got canned recently, and apparently much of the team left Intel to start a new RISC V startup.

The resumes of the four founders are pretty impressive and who knows who else they’ve recruited.
We're talking about leaked/rumored products getting cancelled, so it's important to take it with plenty of salt. But yeah, I did hear about that one.

Pat abruptly pulled the plug on the project early this year. That's right. It was Pat apparently who decided to kill the Royal Core project that a team in Oregon was working on because... well, and this is what the person told me Pat said... "Intel doesn't need high-performance cores anymore" if all CPUs are going to do is connect GPUs.

Now, before you say that's entirely insane, consider that this is basically how Nvidia operates right now, and they're doing great. Nvidia doesn't need their Grace CPU cores to be competitive with Zen 5 in IPC. They just need the throughput to tie their whole package together, and that is what Pat was thinking. Royal Core was costing a lot of money, and for laptops, I think Pat assumed they could just brute force their way to efficiency with the most expensive nodes and advanced packaging. And then in server, they didn't need their cores to be good anymore, they're just going to be roads that connect AI chips.

However, here is where I think this is one of the dumbest decisions ever made in history. It's not that this argument doesn't hold water. It's that it's too late to make that argument for Intel. Pat has missed enough boats that I'm worried that in killing this Royal Core project, he has sunk the last ship before a big storm hits Intel. I mean, think about it. Pat bet on GPU a few years late, and then right when Arc came out there was a GPU crash and nobody cared. And during that time, he should have been betting on AI early, but he didn't, he kept pouring money into Arc. And now, in some last-ditch attempt to get the money to accelerate AI development, it sounds like he's basically gutted Intel's best CPU team that was built by Jim Keller. Which is horrible, because if you've already missed the AI training boat, then what you should try to do, at least in my opinion, is have the most performant cores on the market so that when the apps are built, after all of this training is done, at least then you have the most desirable CPUs when there is a wave of upgrades. But now it seems like Pat's killed that as well.

If you want public evidence of what I'm talking about, have you noticed recent headlines stating that there's a bunch of Intel CPU architects that have left Intel in Oregon? Who do you think those people were? Where do you think the Royal Core CPU design team was? It was there, and they left because they want to make high-performance cores, and they knew that they were going to be stuck making, like parts of an SoC, or some IP block for Panther Lake, and probably just a bunch of E-core products because Intel doesn't need high-performance cores anymore, and they're sick of it. They wanted to move past E-cores, not become stuck in just making E-cores... There were plans by this team to combine E- and P-cores eventually, and to only make high-performance cores at Intel again. And what this would be is really ultra big cores that can do more than just act like a single core.

Some aspects of Royal Core could survive, but previous leaks about Intel can't be trusted since the situation is so chaotic there.
 
As for Intel and chaos, it’s so fucking wild that they have more employees than AMD and Nvidia put together.

Don't know what they're all doing, but I could totally believe the rumors about teams at Intel having five times the headcount of equivalent teams at AMD.

Their marketing department is superb though.

I chuckled when someone asked if 5N4Y was doable, and the answer was: “Five nodes in four years? Yeah, marketing could totally pull that off!”
 
There are only so many ways to write a loop over an array or a hashtable lookup.
I think you mean there are only so many good ways to write those things. Having worked with a lot of junior developers in the past twenty years, I've come to the conclusion that the number of bad ways to do something is infinite.
 
Some aspects of Royal Core could survive, but previous leaks about Intel can't be trusted since the situation is so chaotic there.

Arc and AI are almost completely unconnected. I have to say this over and over again, but gaming GPUs are just not relevant to the AI market, meaning the people who actually make money on AI. AMD is now at a 20% market share in AI GPUs without giving two shits about how well ROCm works on gamer cards. Intel did pour money into AI in the form of both Ponte Vecchio and Gaudi. Falcon Shores is still planned.

One of Intel's biggest problems with its AI products against NVIDIA is availability, which again comes back to their foundry. Ponte Vecchio had some things going for it, but by the time it was actually available, both the H100 and MI300 were out with much faster HBM3 memory. Same issue with Gaudi 2... a day late and a dollar short.

The second problem they've had is software. NVIDIA has been building the CUDA ecosystem for over a decade now. You can mark the day Intel actually started getting serious about software with its acquisition of Codeplay in 2022. But that is really, really recent. AMD was working out the problems with HIP and getting software vendors on board well before then.

Also, with Intel putting inferencing hardware on server CPUs as of SPR and ER, I'm not sure what this guy's even on about. Not all servers are going to have GPUs in them in the future, as much as NVIDIA would like that to be true. A $35,000 accelerator just doesn't make sense if inferencing is anything less than 75% of the workload of a server.

Contrarian take on the canceled tech by someone who claims to have been on the team:
[Attached screenshot: contrarian take from someone claiming to have been on the team]
 
Shush, women and Jews are great at computers. See Grace Hopper, who invented programming languages, or Ada Lovelace, who was the first programmer (for a theoretical machine, but still), or all the IBM secretaries who did the actual work making the mainframes run. And Israel seems to be the only part of Intel doing any actual work.
I can't let a mention of Ms. Lovelace go by without plugging this fun little book about her and Babbage:
[Attached image: the book's cover]

Two-thirds comic, one-third footnotes, it's wildly over the top and filled with historical trivia. Did you know that as a child Ada Lovelace tried to design wings in the manner of Icarus, based on her observations of the wing types of birds? And that she was tied down as a child and forbidden poetry lest it excite her mind too much? I would love to be like her in some ways, but reading the book I realised I was far more like Babbage! :biggrin:

I think I'll go and read it again right now, actually. It's really quite astonishing, the vision these two people had of something that didn't even exist yet. It's like they observed the platonic form of the essence of computing. Way ahead of their time!
 
Arc and AI are almost completely unconnected. I have to say this over and over again, but gaming GPUs are just not relevant to the AI market, meaning the people who actually make money on AI. AMD is now at a 20% market share in AI GPUs without giving two shits about how well ROCm works on gamer cards. Intel did pour money into AI in the form of both Ponte Vecchio and Gaudi. Falcon Shores is still planned.
He mentioned Arc because it's an example of a horrible Intel decision based on timing, and it diverted resources into a black hole. Arc gaming GPUs could have worked, but because of delays they launched at just about the worst time imaginable, missing the GPU shortage and historically high pricing during the plandemic. The AXG group as a whole lost billions, and that included professional cards.

"And during that time, he should have been betting on AI early, but he didn't, [he] kept pouring money into Arc." I suppose that's debatable, needs to be elaborated on. Ponte Vecchio existed, only to flop for being a day late and dollar short, and maybe spending employee time and money on other things including Arc was part of the problem.

Beginning the retirement process of Ponte Vecchio before Falcon Shores arrives will give Intel more resources to focus on Falcon Shores and accelerating its production after debut. This should help Intel catch up to AMD and Nvidia, but it will still be behind the development curve since AMD and Nvidia are also underway, building next-generation HPC hardware.

Also, with Intel putting inferencing hardware on server CPUs as of SPR and ER, I'm not sure what this guy's even on about. Not all servers are going to have GPUs in them in the future, as much as NVIDIA would like that to be true. A $35,000 accelerator just doesn't make sense if inferencing is anything less than 75% of the workload of a server.
I think the market is busy ignoring the AI accelerators in Xeon Scalable. I believe AMD was also working on a similar idea for Epyc, swapping core chiplets for relatively powerful NPU chiplets, but it ended up being shitcanned.
 
"And during that time, he should have been betting on AI early, but he didn't, he kept pouring money into Arc." I suppose that's debatable and needs to be elaborated on. Ponte Vecchio existed, only to flop for being a day late and a dollar short, and maybe spending employee time and money on other things, including Arc, was part of the problem.

I misread that part. He's saying Pat put money into [gaming] GPUs rather than AI. Maybe true, but IMO PVC's problems weren't money. They were the foundry and the fact Intel hadn't ever made a datacenter GPU before. PVC was supposed to launch in 2021, and Sapphire Rapids was supposed to launch in 2019.

FWIW, the guys I know who have access to Aurora hate it and say it's a piece of shit.

I think the market is busy ignoring the AI accelerators in Xeon Scalable.

I'm not so sure about that. There are database applications and data analytics packages that run significantly faster on CPUs with matrix extensions than on CPUs without them. I know that for transactions, fraud detection is being done in real time on the mainframe, and IBM Z doesn't support GPUs at all.
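If anyone wants to check whether those matrix units are even present on a given Xeon before arguing about them, the CPUID bits are public. A quick sketch for GCC/Clang on x86 Linux; leaf 7, sub-leaf 0, EDX bits 22/24/25 advertise AMX-BF16, AMX-TILE and AMX-INT8:

```cpp
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    // CPUID leaf 7, sub-leaf 0: structured extended feature flags.
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        std::puts("CPUID leaf 7 not supported");
        return 1;
    }
    bool amx_bf16 = edx & (1u << 22);
    bool amx_tile = edx & (1u << 24);
    bool amx_int8 = edx & (1u << 25);
    std::printf("AMX-TILE: %d  AMX-BF16: %d  AMX-INT8: %d\n",
                amx_tile, amx_bf16, amx_int8);
    // Presence isn't the whole story: on Linux the process still has to request
    // the AMX tile state from the kernel (arch_prctl) before actually using it.
    return 0;
}
```

On Sapphire Rapids all three should come back set; on older Xeons, none will.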
 