GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

APU might be an option, but no idea how they fare in terms of emulating recent gens.
I don't know what those gens are off the top of my head.
Sixth generation = Dreamcast, PS2, GameCube, Xbox
Seventh generation = Xbox 360, PSTriple, Wii


AMD's Phoenix/Hawk Point is probably the APU to look at, since it's in ROG Ally X, a lot of $300-500 mini PCs, and the latest desktop APUs (8700G/8600G). It seems like an 8945HS should be capable of emulating all of these including PS3/WiiU and some Switch. Xbox emulation was neglected last time I checked because nobody cares.

We may finally see Strix Point desktop APUs in Q4 2025.

But you could probably throw some trash low profile GPU into an old quad-core office PC and emulate sixth gen consoles. @Archeops can get a second opinion in the emulation thread if he needs it.
 
I don't know what those gens are off the top of my head.
same, and I haven't dabbled with emulation for years (besides switch out of curiosity, thanks to nintendo's antics causing a streisand effect).

I do remember requirements can vary wildly, depending how optimized and accurate the emulator is, so I'd say it really comes down to each specific system and use case.
otoh there should be plenty of tests and videos with all kinds of hardware.
 
A Ryzen 5700G is enough to comfortably emulate PS3. ShadStation (PS4 emulation) probably wouldn't be feasible on it, although adding a dGPU is likely all you'd need to run that too.

And obviously pretty much any new mini PC with an AMD APU is going to be able to emulate up to PS3. Ryzen is preferred for PS3 because of AVX-512 support which RPCS3 leverages heavily. If you don't care about PS3, you can try one of the Intel APUs with Xe cores but I'm less familiar with those.
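If you want to confirm a CPU actually exposes AVX-512 before buying it for RPCS3, here's a quick sketch that just lists whatever avx512* flags the Linux kernel reports (it doesn't check the exact subsets RPCS3 uses, so treat it as a rough probe; CPU-Z shows the same info on Windows):

```python
# Rough probe for AVX-512 support on Linux by parsing /proc/cpuinfo.
# Only lists whatever avx512* flags the kernel reports; it does not check
# which specific subsets an emulator like RPCS3 actually takes advantage of.

def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

avx512 = sorted(f for f in cpu_flags() if f.startswith("avx512"))
print("AVX-512 extensions:", ", ".join(avx512) if avx512 else "none reported")
```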
 
The only 8 GB 5060 Ti review at the moment. Planned obsolescence: the GPU, for $400.

I watched snippets of this, since it's long. For some reason, he doesn't test any games at all at medium textures. Since textures are the biggest VRAM hog, that's what anyone with an 8 GB GPU should be adjusting if they have frame rate problems.
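To put rough numbers on that, here's a back-of-the-envelope sketch of per-texture VRAM cost (assuming 4 bytes/texel uncompressed vs. ~1 byte/texel block-compressed, plus a full mip chain; real engines stream and pack assets differently, so treat these as ballpark):

```python
# Ballpark VRAM cost of one square texture with a full mip chain (~+1/3).
# Assumes 4 bytes/texel uncompressed (RGBA8) vs ~1 byte/texel block-compressed;
# actual games stream, atlas and compress assets differently.

def texture_mib(side, bytes_per_texel):
    base = side * side * bytes_per_texel
    return base * 4 / 3 / 2**20          # mips add roughly a third

for side in (1024, 2048, 4096):
    print(f"{side}x{side}: uncompressed ~{texture_mib(side, 4):.0f} MiB, "
          f"block-compressed ~{texture_mib(side, 1):.0f} MiB")
```

A few hundred of the 4096-class textures resident at once is how a game blows past 8 GB, which is why the texture slider moves VRAM usage far more than anything else.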
 
I watched snippets of this, since it's long. For some reason, he doesn't test any games at all at medium textures. Since textures are the biggest VRAM hog, that's what anyone with an 8 GB GPU should be adjusting if they have frame rate problems.
True, but I’d make the argument that if you are paying for a new GPU in 2025, you shouldn’t have to turn down textures if you are playing at 1080p, at least for games that are essentially 3+ years old.

Nvidia knows this is a bullshit product, hence they wouldn’t even provide samples for review. It wouldn’t be so bad if it was priced at $299 or less.
 
True, but I’d make the argument that if you are paying for a new GPU in 2025, you shouldn’t have to turn down textures if you are playing at 1080p, at least for games that are essentially 3+ years old.

Nvidia knows this is a bullshit product, hence they wouldn’t even provide samples for review. It wouldn’t be so bad if it was priced at $299 or less.
Third world e-sports GPU.
 
>$379 MSRP
>third world
>your fault for games running like shit on it

Glad our resident Jensen knob gobblers still have the most retarded takes that you could make to share with us.
 
>$379 MSRP
>third world
>your fault for games running like shit on it

Glad our resident Jensen knob gobblers still have the most retarded takes that you could make to share with us.
I said I liked ray tracing; I never said I liked Nvidia. The 9070 XT is a better purchase for 1440p. How did you make that retarded leap of logic? Anyway, the 5060 Ti 8 GB is an abomination.
 
True, but I’d make the argument that if you are paying for a new GPU in 2025, you shouldn’t have to turn down textures if you are playing at 1080p, at least for games that are essentially 3+ years old.

I would argue that if you are buying the lowest-end GPU available, to the point of halving the VRAM to save whatever scalpers will end up charging, you should not expect to be able to run the most demanding games at the most demanding settings. I would also argue that if you are technologically literate enough to be buying and installing your own GPU, you should be aware that 8 GB hasn't been enough for maximum texture settings in games for several years now.

Also, VRAM has little to do with screen resolution anymore. A 4K frame takes just ~32 MB. You'll need more than that because of the layers deferred renderers generate, but it's still not a lot. In the couple of games I just tested, dropping the pixel count by 75% (1440p to 720p) only reduced VRAM usage by about 10%. You have to drop asset fidelity, too.
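The framebuffer math, for anyone who wants it (a quick sketch assuming 4 bytes per pixel per render target; a deferred renderer keeps several such targets around):

```python
# Size of a single 4-bytes-per-pixel render target at common resolutions.
# Deferred renderers hold several of these (G-buffer layers, depth, post FX),
# so multiply by a handful; it is still small next to asset memory.

def target_mib(w, h, bytes_per_pixel=4):
    return w * h * bytes_per_pixel / 2**20

for name, (w, h) in {"720p": (1280, 720), "1440p": (2560, 1440),
                     "4K": (3840, 2160)}.items():
    print(f"{name}: ~{target_mib(w, h):.1f} MiB per target")
```

Even ten such targets at 4K is only ~320 MB, which is why resolution barely moves the VRAM needle compared to assets.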
 
That is reasonable, and 8 GB can be fine. I'd probably get a 9040 8GB or something like it. What's unreasonable is an 8 GB GPU costing $300+. And we already know Jensen can pull a rabbit out of his jacket once Samsung/Micron/Hynix have made enough 3 GB GDDR7 modules. It looks like the initial 50 series buyers will be shat in the face by Super replacements, and then again by 60 series made on a new process node (giving tangible uplifts from 40 series).
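To spell out why the 3 GB modules matter: capacity is just modules on the bus times module density (a sketch assuming the usual one module per 32-bit channel, with clamshell mounting doubling it):

```python
# VRAM capacity from bus width and per-module density, assuming one GDDR
# module per 32-bit channel (clamshell mounting doubles the module count).
# Shows why 3 GB GDDR7 turns a 128-bit card from 8 GB into 12 GB.

def vram_gb(bus_bits, module_gb, clamshell=False):
    modules = (bus_bits // 32) * (2 if clamshell else 1)
    return modules * module_gb

print(vram_gb(128, 2))                   # 8 GB  - today's 128-bit cards
print(vram_gb(128, 3))                   # 12 GB - same bus, 3 GB modules
print(vram_gb(128, 2, clamshell=True))   # 16 GB - e.g. the 5060 Ti 16 GB
```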

Nvidia doesn't owe anybody a good gaming GPU though. They will ride the AI basilisk into the sunset.
 
I watched snippets of this, since it's long. For some reason, he doesn't test any games at all at medium textures. Since textures are the biggest VRAM hog, that's what anyone with an 8 GB GPU should be adjusting if they have frame rate problems.
Dude, this is a card that is as fast as the high end from 4 years ago, which doesn't have the same problems playing at high settings. And as the video shows, the 16 GB is just faster in every situation except one game where textures don't load on the 8 GB card because of the lack of VRAM. When the 9060 XT 8 GB comes out it'll be the same POS as this, but at least they won't charge nearly $400 for it.
 
That is reasonable, and 8 GB can be fine. I'd probably get a 9040 8GB or something like it. What's unreasonable is an 8 GB GPU costing $300+. And we already know Jensen can pull a rabbit out of his jacket once Samsung/Micron/Hynix have made enough 3 GB GDDR7 modules. It looks like the initial 50 series buyers will be shat in the face by Super replacements, and then again by 60 series made on a new process node (giving tangible uplifts from 40 series).

Nvidia doesn't owe anybody a good gaming GPU though. They will ride the AI basilisk into the sunset.
Let's be honest and admit it: the 5080 and 5090 are just the 4080 Ti and 4090 Ti.
 
Let's be honest and admit it: the 5080 and 5090 are just the 4080 Ti and 4090 Ti.
5090 is more than a "4090 Ti", but only because Nvidia increased the die area by a whopping 23%. The result is still unimpressive (relatively speaking) but better than nothing.

5080 and 4080/Super have the same die area: ~378mm^2.
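Quick math on those, using the commonly reported die sizes (approximate figures):

```python
# Commonly reported die areas in mm^2 (approximate).
ad102, gb202 = 609, 750   # 4090-class vs 5090-class dies
ad103, gb203 = 379, 378   # 4080/Super vs 5080 dies

print(f"GB202 vs AD102: +{(gb202 / ad102 - 1) * 100:.0f}%")   # ~+23%
print(f"GB203 vs AD103: {(gb203 / ad103 - 1) * 100:.1f}%")    # ~0%, same size class
```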
 
5090 is more than a "4090 Ti", but only because Nvidia increased the die area by a whopping 23%. The result is still unimpressive (relatively speaking) but better than nothing.

5080 and 4080/Super have the same die area: ~378mm^2.
Okay, that makes sense. From what I understand, getting to 1-2nm is proving a nightmare even for TSMC, so this feels more like a refresh because of the lack of a new node.
 
Okay, that makes sense. From what I understand, getting to 1-2nm is proving a nightmare even for TSMC, so this feels more like a refresh because of the lack of a new node.
50 series and 40 series use the same custom TSMC N4 node ("4N"), a member of the 5nm family of nodes (Wikipedia lists the slightly tweaked "4NP" which is correct for the datacenter Blackwell products, but the consumer GPUs are using the exact same node).

TSMC's 3nm nodes such as N3P and N3X are available, being used right now, and should have plenty of capacity if Nvidia wants to use that in 2026. Probably offering typical -30% power reduction or +10-15% performance, and +70% logic density. This is TSMC's last FinFET node and will be long-lived in the industry because of that.

TSMC is moving to GAAFETs with the 2nm nodes. I think these will be available by H2 2026, with AMD apparently using N2X for 12-core Zen 6 chiplets, for example.

So if Nvidia wants to sweep this under the rug, they will aim for 3nm by early-mid 2026. Nvidia may be using N3P for next-gen "Rubin", but it's possible for datacenter and consumer to diverge.

As far as 1-2nm goes, TSMC took the conservative approach and kicked backside power delivery out of N2P, putting it in a new "A16" node that should be available in 2027. I think "A14" is the full node under 2nm, and it's tied to the use of High-NA EUV tools, and will be available in 2028. Anything beyond that is up in the air.

(Nanometers and angstroms mean nothing; these are all marketing names with no specific relation to feature size.)
 
50 series and 40 series use the same custom TSMC N4 node ("4N"), a member of the 5nm family of nodes (Wikipedia lists the slightly tweaked "4NP" which is correct for the datacenter Blackwell products, but the consumer GPUs are using the exact same node).

TSMC's 3nm nodes such as N3P and N3X are available, being used right now, and should have plenty of capacity if Nvidia wants to use that in 2026. Probably offering typical -30% power reduction or +10-15% performance, and +70% logic density. This is TSMC's last FinFET node and will be long-lived in the industry because of that.

TSMC is moving to GAAFETs with the 2nm nodes. I think these will be available by H2 2026, with AMD apparently using N2X for 12-core Zen 6 chiplets, for example.

So if Nvidia wants to sweep this under the rug, they will aim for 3nm by early-mid 2026. Nvidia may be using N3P for next-gen "Rubin", but it's possible for datacenter and consumer to diverge.

As far as 1-2nm goes, TSMC took the conservative approach and kicked backside power delivery out of N2P, putting it in a new "A16" node that should be available in 2027. I think "A14" is the full node under 2nm, and it's tied to the use of High-NA EUV tools, and will be available in 2028. Anything beyond that is up in the air.

(Nanometers and angstroms mean nothing; these are all marketing names with no specific relation to feature size.)
Is Intel's 18A node actually going to be competitive with TSMC going forward?
 
Is Intel's 18A node actually going to be competitive with TSMC going forward?
Intel's 18A and TSMC's N2 process nodes compared: Intel is faster, but TSMC is denser
Backside Power Delivery Nears Production

In the best case scenario, I think it's superior to TSMC N2P/N2X, especially due to having backside power delivery (which I am reading is actually optional for 18A). Intel will finally pick up some customers like Broadcom, but TSMC will get lots of business as usual. TSMC will introduce backside power delivery with A16, eliminating that advantage. Intel may bring in high-NA EUV earlier than TSMC, for its 14A node.

In the worst case scenario, I think the node's capabilities will be mostly the same, but delays, delays, delays are what hurt Intel. TSMC's N2 nodes are looking good, conservatively planned as usual, and won't be faced with delays. And you know what? Let's throw in another competitor, Japan's Rapidus:

Japan's Rapidus in talks with Apple, Google to mass-produce chips, Nikkei reports
TSMC aims to produce 2 nanometer products this year and although Rapidus will be two years behind that, the company believes it can make up the difference in manufacturing processes, the report said, citing Koike.
 
Dude, this is a card that is as fast as the high end from 4 years ago, which doesn't have the same problems playing at high settings,

The high end card from 4 years ago has 24 GB of VRAM. If you buy an 8 GB card and run with assets designed for a 24 GB card, do not be shocked and appalled that your game is throttled by data moving frequently over the PCIe bus from main RAM to VRAM.
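Rough numbers on that penalty, assuming the 5060 Ti's reported 128-bit 28 Gbps GDDR7 and PCIe 5.0 x8 link (ballpark figures):

```python
# Ballpark gap between on-card VRAM bandwidth and the PCIe link a game falls
# back on once VRAM overflows. Assumes a 128-bit bus at 28 Gbps GDDR7 and a
# PCIe 5.0 x8 slot; both figures are approximate.

vram_gb_s = 128 * 28 / 8                 # bus bits * Gbps per pin / 8 = ~448 GB/s
pcie_gb_s = 8 * 32 * (128 / 130) / 8     # 8 lanes * 32 GT/s * 128b/130b = ~31.5 GB/s

print(f"VRAM ~{vram_gb_s:.0f} GB/s vs PCIe ~{pcie_gb_s:.0f} GB/s "
      f"(~{vram_gb_s / pcie_gb_s:.0f}x slower once assets spill into system RAM)")
```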

and as the video shows, the 16 GB is just faster in every situation except one game where textures don't load on the 8 GB card because of the lack of VRAM

The guy who made the video didn't test at settings that fit in 8 GB of VRAM. In every single example, he cranked the overall fidelity up too high to fit into the card's memory. He should have tested lower-res textures so viewers would know what sort of settings this card needs to run well.
 