While mulling over the 4070 for my PC build, I noticed the 12 gigs felt odd considering there's a 16 gig option for the 4060 Ti. Personally debating a 4070, if only for the AI processing abilities; even if it's minimal, I still want to make art and stuff.
I think it's because memory ended up being more expensive than NVIDIA expected. I'm pretty sure the 4060 has 4x2 GB GDDR6 chips and the 4060 Ti has 4x4 GB GDDR6 chips, while the 4070 has 6x2 GB GDDR6X chips. Dollars to donuts, going with 6x4 GB GDDR6X chips was too expensive to price the card competitively. AMD isn't using GDDR6X at all in their consumer cards. Going with GDDR6X for the 4090 and 4070 may have been a mistake on NVIDIA's part.
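Just to put rough numbers on that chip math, here's a quick sketch. The configurations are the ones above; the per-chip prices are made-up placeholders purely to illustrate how the cost scales, since the real contract pricing isn't public:

```python
# VRAM total is just chip count x per-chip capacity; memory cost scales the same way.
CONFIGS = {
    "4060 (4 x 2 GB GDDR6)": (4, 2, "GDDR6"),
    "4060 Ti (4 x 4 GB GDDR6)": (4, 4, "GDDR6"),
    "4070 (6 x 2 GB GDDR6X)": (6, 2, "GDDR6X"),
    "hypothetical 4070 (6 x 4 GB GDDR6X)": (6, 4, "GDDR6X"),
}

# Placeholder $/chip figures, invented only to show how the totals scale (assumption).
PRICE_PER_CHIP = {
    ("GDDR6", 2): 10,
    ("GDDR6", 4): 20,
    ("GDDR6X", 2): 15,
    ("GDDR6X", 4): 30,
}

for name, (chips, gb_each, kind) in CONFIGS.items():
    total_gb = chips * gb_each
    memory_cost = chips * PRICE_PER_CHIP[(kind, gb_each)]
    print(f"{name}: {total_gb} GB total, ~${memory_cost} of memory at placeholder prices")
```

Even with invented prices, doubling the per-chip capacity on the 4070's six GDDR6X chips roughly doubles the memory line item, which is the kind of jump that's hard to absorb at the card's target price.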
To be fair, I can't blame them. I haven't heard of many problems with 12th gen, so it's smart to build off it.
It's more that the Intel 4 process node (just a fancy term for the manufacturing process the chips are made on) has low yields and problems clocking high.
As a welder, I'm sure you're quite familiar with how the same type of weld is weaker or stronger depending on the skill of the welder and how well the weld was executed. A similar thing holds true of fabricating microchips. This is what a microchip looks like up close:

[image: close-up of a microchip's structures]
The more precisely and accurately those structures are etched out by the lithography process (which, like a high-grade weld, requires high-purity materials), the better they transfer current and dissipate heat according to the original engineering specification, and the larger they can be built. In other words, you can clock the chip higher and put more cores on it. Consequently, a large part of standing up a new process node is ensuring the entire supply chain and fabrication process are as free from defects and variation as possible. The longer the node is in service, the more defects are worked out of the process, the smaller you can make the features without breaking things, and the bigger & hotter you can make the chips.
For example, Intel was infamously stuck on their 14nm node for a very long time. When they launched it, the biggest, hottest chip they made was 4 cores with a max turbo of 3.8 GHz (i7-5775R). At the end of its life, they were making 8-core CPUs with a max turbo of 5.1 GHz (i9-11900K).
The credible rumor in the chip industry is that Intel scrapped plans to release Meteor Lake on the desktop due to yield issues. In other words, if they tried to make a chip with as many cores, running as fast, as an i9-14900K, they'd have to throw out so many of them due to defects that they couldn't make money on the thing.
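To put some toy numbers on the yield problem: the textbook Poisson model says the fraction of good dies is roughly exp(-defect density x die area). The defect densities and die sizes below are illustrative guesses, not Intel's actual figures:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies expected to come out defect-free (Poisson yield model)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Illustrative assumptions: a modest die vs. a big many-core die,
# on a mature node vs. a freshly stood-up one.
small_die_cm2 = 1.0
big_die_cm2 = 2.5

for node, defects in [("mature node", 0.1), ("new node", 0.5)]:
    small = poisson_yield(defects, small_die_cm2)
    big = poisson_yield(defects, big_die_cm2)
    print(f"{node}: small die yield {small:.0%}, big die yield {big:.0%}")
```

With these made-up numbers the small die drops from about 90% to 61% good parts between the mature and new node, while the big die collapses from about 78% to 29%. That's the mechanism behind "throw out so many of them that they couldn't make money" on a big, hot desktop chip.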