GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

The cards are obviously mislabeled: the 4060 Ti is really a 4050, the 4070 a 4060, and the 4080, eh, more like a 4070 Ti maybe. This is made quite clear by the fact that Nvidia tried to sell the 4070 Ti as a 4080.
Yup, I was thinking people would be thrilled with this series if not for the combination of shifting the GPU cores down a SKU while pushing the prices up. If the GPUs were being sold as what they actually are and priced accordingly, there wouldn't be much to complain about.
 
Something tells me the OEMs aren't paying $400 a pop for 4060 Tis, though.

OEMs do pay lower prices than consumers.


Let's put it this way: if you're going to buy a new GPU in a box to put in your computer, the 4060 Ti is a terrible buy at $400, and that's the market hardware reviews like that are aimed at.

I have no interest in buying NVIDIA's latest cash grab, but if I were, video after video of benchmarks with DLSS disabled seems entirely beside the point. The benchmarks I've seen so far for this thing have been retarded - the reviewer usually does everything with DLSS & FG disabled, and only turns them on for irrelevant 4K benchmarks where the card still struggles to stay over 45 fps.

Only simps defend gimped fucking hardware that barely competes with the last iteration just because of "MUH NVIDIA SOFTWARE SHIT"

Tensor cores are hardware. DLSS is the software that uses them. 4th gen tensor cores are significantly more powerful than 3rd gen, which is why DLSS3 is only available on 4000 series GPUs.

The point, which seems to be eluding you, is that the significantly larger, more complex 4060 Ti is not "gimped." 4060 has a lot more transistors than 3060, and those transistors are used for better inferencing, not more raw pixel throughput. DLSS2 was very successful for NVIDIA, and they're betting that the future of GPUs is ML-augmented rendering.

So turning off the inferencing to benchmark it is kind of missing the point, like disabling the battery in a hybrid vehicle and then complaining that you don't get good fuel efficiency.
 
The point, which seems to be eluding you, is that the significantly larger, more complex 4060 Ti is not "gimped." 4060 has a lot more transistors than 3060, and those transistors are used for better inferencing, not more raw pixel throughput. DLSS2 was very successful for NVIDIA, and they're betting that the future of GPUs is ML-augmented rendering.

So turning off the inferencing to benchmark it is kind of missing the point, like disabling the battery in a hybrid vehicle and then complaining that you don't get good fuel efficiency.
Whatever. Defend garbage. We can just agree to disagree on this.

This card is garbage at $400 outside of some extreme niche uses, maybe.
 
The 4060 Ti and the 7600 are garbage; the 7600 is just less garbage IMO.
Wet fart of a generation.
I will say that at least the 7600 is always better than the 6600, and at a lower initial price point.

Yes, Nvidia really set the bar THAT low. Truly the Lamborghini of GPU makers....

*Edit* What's even funnier is that not even Digital Foundry of all places can defend the 4060ti. Really says it all.
 
I will say that at least the 7600 is always better than the 6600, and at a lower initial price point.

Yes, Nvidia really set the bar THAT low. Truly the Lamborghini of GPU makers....

*Edit* What's even funnier is that not even Digital Foundry of all places can defend the 4060ti. Really says it all.
Nvidia probably doesn't care about the bad press from the 4060ti. Their stock just ballooned last week over AI stuff. Everyone there must be ecstatic.
 
I have no interest in buying NVIDIA's latest cash grab, but if I were, video after video of benchmarks with DLSS disabled seems entirely beside the point. The benchmarks I've seen so far for this thing have been retarded - the reviewer usually does everything with DLSS & FG disabled, and only turns them on for irrelevant 4K benchmarks where the card still struggles to stay over 45 fps.



Tensor cores are hardware. DLSS is the software that uses them. 4th gen tensor cores are significantly more powerful than 3rd gen, which is why DLSS3 is only available on 4000 series GPUs.

The point, which seems to be eluding you, is that the significantly larger, more complex 4060 Ti is not "gimped." 4060 has a lot more transistors than 3060, and those transistors are used for better inferencing, not more raw pixel throughput. DLSS2 was very successful for NVIDIA, and they're betting that the future of GPUs is ML-augmented rendering.

So turning off the inferencing to benchmark it is kind of missing the point, like disabling the battery in a hybrid vehicle and then complaining that you don't get good fuel efficiency.
The reason for turning off DLSS and/or frame-gen in benchmarks is somewhat complicated, as it's not just an FPS booster like overclocking a GPU.
The problem with frame-gen is that it does nothing for input latency and only improves the visual framerate.
Thus the "100 FPS" of DLSS3/FG is just not objectively the same as even DLSS2 framerates. It's as "smooth" as 100 but "feels" like 50.
After all, framerates are not just about visual clarity but also responsiveness and input latency.
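
To make that point concrete, here's a toy back-of-the-envelope sketch in Python. The 50 fps render rate and the one-frame hold-back for interpolation are assumed example numbers for illustration, not measurements of any real game or card:

```python
# Toy model: frame generation doubles the *presented* framerate, but input is
# only sampled for rendered frames, and interpolation has to hold back one
# rendered frame, so responsiveness does not improve.  Illustrative numbers only.

def frame_time_ms(fps: float) -> float:
    """Milliseconds between frames at a given framerate."""
    return 1000.0 / fps

render_fps = 50.0                      # frames the GPU actually renders (assumed)
presented_fps = 2 * render_fps         # FG inserts one generated frame per rendered frame

base_latency_ms = frame_time_ms(render_fps)  # ~20 ms per rendered frame
holdback_ms = frame_time_ms(render_fps)      # wait for the next rendered frame to interpolate

print(f"Displayed: {presented_fps:.0f} fps "
      f"({frame_time_ms(presented_fps):.0f} ms between displayed frames)")
print(f"Input-to-display delay: ~{base_latency_ms + holdback_ms:.0f} ms, "
      f"vs ~{base_latency_ms:.0f} ms at plain {render_fps:.0f} fps")
# -> looks as smooth as 100 fps, but inputs still move at (or below) the 50 fps cadence.
```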

The other problem with all of these thingies, be it DLSS, FSR or XeSS (Intel), is that they all influence visual quality, oftentimes quite dramatically.
So comparing FSR1 to DLSS2 framerates would (generally) be unfair to Nvidia, as FSR1 produces more obvious visual artifacts.
But those are visual and thus subjective, so you can't correct the objective framerates by the subjective difference in image quality and get any kind of objective test data out of that. This gets even worse if you consider that different games have different levels of artifacts even with the exact same middleware.

Not to mention that the overall use someone gets out of them also depends entirely on how much of their time is spent playing games that support them.
If you only play recent AAA games, then yeah, it's probably going to be RTX ON most of the time.
If you mainly play some 10-year-old title that does not get updates anymore, then it's basically useless.
This also cannot be objectively represented in a test/comparison since it's different for every user.


The only way to remotely objectively compare these GPUs across generations, manufacturers and games is to go to the lowest common denominator that will see all cards produce the same image quality instead of using a different software framework for each test.
Especially if said software frameworks have a bunch of highly subjective differences that are just impossible to represent on an FPS bar chart.

Also, the software is generally taken into consideration in the (generally subjective) conclusion section at the end of these reviews.
So an Nvidia GPU that costs and performs the same as an AMD one would win that comparison/test.
Hell, people generally expect AMD to be 10%-20% cheaper because of this.
It's just no excuse for underpowered hardware sold at too high a price at the end of the day.
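
To put a number on that expectation, here's a rough sketch with two made-up cards of identical raw performance (the names, prices, and framerates are placeholders, not review data):

```python
# Hypothetical fps-per-dollar comparison with a 'software premium' expectation.
# All names and numbers below are placeholders, not benchmark data.
cards = {
    "hypothetical GeForce": {"price_usd": 400, "avg_fps": 80},
    "hypothetical Radeon":  {"price_usd": 400, "avg_fps": 80},
}
expected_amd_discount = 0.15   # assumed mid-point of the 10%-20% expectation above

for name, c in cards.items():
    print(f"{name}: {c['avg_fps'] / c['price_usd']:.3f} fps per dollar")

# Same fps/$ on paper, but the expectation above says the Radeon only becomes
# the 'obvious' pick once it undercuts the GeForce by roughly that margin:
target_price = cards["hypothetical GeForce"]["price_usd"] * (1 - expected_amd_discount)
print(f"Radeon price people would actually expect: ~${target_price:.0f}")
```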

Also, the tensor cores are part of these tests every time a raytracing game is shown.
They are just highly situational and not some cheat code to fix underpowered hardware.

Lastly, the better car comparison would be comparing an ICE car to a hybrid on a cross-country trip.
Sometimes the hybrid has access to a charging network and can use its electric motor to get superior mileage. It's a decent car under those conditions.
But on many less urban roads it runs out of battery and has to fall back on its inferior backup engine, and it falls behind the ICE in mileage since it has to haul its heavy but empty battery around.
The ICE meanwhile works the same regardless of where it's driving and is thus the superior choice, unless you primarily drive in bigger cities with available chargers.
 
The reason for turning off DLSS and/or frame-gen in benchmarks is somewhat complicated, as it's not just an FPS booster like overclocking a GPU.

Turning it off means you're not reviewing the card as users are likely to use it, so it's not informative.

The problem with frame-gen is that it does nothing for input latency and only improves the visual framerate.
Thus the "100 FPS" of DLSS3/FG is just not objectively the same as even DLSS2 framerates. It's as "smooth" as 100 but "feels" like 50.

If this is true, then these gaming sites should do double-blind studies to back it up - meaning they should test and review this feature, not disable it. Since human reaction time is ~200ms (i.e. at 100 fps, 20 frames go by before what goes in your eye comes out your hand), this assumption may be wrong.
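
The parenthetical arithmetic is easy to check; here's a quick sketch (the ~200 ms reaction time is the assumed ballpark from above):

```python
# How many displayed frames elapse during a ~200 ms human reaction time.
reaction_time_ms = 200.0

for fps in (30, 60, 100, 144):
    ms_per_frame = 1000.0 / fps
    print(f"{fps:>3} fps: {reaction_time_ms / ms_per_frame:.0f} frames elapse "
          f"({ms_per_frame:.1f} ms per frame)")
# At 100 fps: 200 / 10 = 20 frames, as stated above.
```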

The other problem with all of these thingies, be it DLSS, FSR or XeSS (Intel), is that they all influence visual quality, oftentimes quite dramatically.
So comparing FSR1 to DLSS2 framerates would (generally) be unfair to Nvidia, as FSR1 produces more obvious visual artifacts.

This makes as much sense as disabling anti-aliasing and anisotropic filtering to do a review back when not every card did them equally well (or at all) because it's not "fair" to the older-style cards. The fact that FSR is so bad is a significant disadvantage for AMD cards. I've found it to basically not be usable. By contrast, I have a gaming laptop with a 3000 series GPU, and I generally use DLSS if it's available.

If you mainly play some 10-year-old title that does not get updates anymore, then it's basically useless.

None of these reviewers are sperging about which card breaks 400 fps in Counter-Strike. They're running modern games, disabling core features, and complaining about the frame rates.

The only way to remotely objectively compare these GPUs across generations, manufacturers and games is to go to the lowest common denominator that will see all cards produce the same image quality instead of using a different software framework for each test.

When compute hardware undergoes a qualitative change from one generation to the next, ignoring the new feature set to focus on last-generation benchmarks is anything but objective. If we did that in CPU world (I've been benchmarking CPUs since Haswell), we'd still be doing only single-threaded benchmarks with no SIMD instructions.

This is the kind of comparison that's meaningful:

  • Resolution is at 1080p and 1440p. Nobody is buying 3060s or 4060s for 4K.
  • The impact of DLSS is the centerpiece of the comparison.
 
Nvidia probably doesn't care about the bad press from the 4060ti. Their stock just ballooned last week over AI stuff. Everyone there must be ecstatic.
I know they don't care. I just wish supposed PC "enthusiasts" would stop gargling corporate balls.

Nvidia can do whatever. Just like I can choose not to buy their product. It's when the actual consumers start making excuses for the bullshit, or try to rationalize it... that's the issue. Now they sound like a bunch of console plebs celebrating weak hardware in favor of upscaling and shitty texture pop-in "optimization".
 
The other problem with all of these thingies, be it DLSS, FSR or XeSS (Intel), is that they all influence visual quality, oftentimes quite dramatically.
So comparing FSR1 to DLSS2 framerates would (generally) be unfair to Nvidia, as FSR1 produces more obvious visual artifacts.
But those are visual and thus subjective, so you can't correct the objective framerates by the subjective difference in image quality and get any kind of objective test data out of that. This gets even worse if you consider that different games have different levels of artifacts even with the exact same middleware.
it comes down to the game and implementation. of course there will be effects from the process, but that's offset by the increase in performance.
for me it's the same as turning down fidelity to get more frames, and if FSR allows me to use an older GPU for 1-2 years longer without much impact, it increases the value of said GPU.

also doesn't help that most comparisons are usually retarded screenshots of detail most people won't notice anyway, when FSR especially is more noticeable in motion and in how it affects shaders. oh wow, that 10px tree far away in the background is less crisp, better buy team green/red!
 
Nvidia is finally going all out with the "Be gone, gamers" attitude that they've clearly been shifting towards for a while now. First it was catering to the crypto crowd, and now it's moving onto enterprise.

Always knew granting such a scummy company an 80+% market share was going to bite PC builders in the ass someday. Now that company is openly telling them to bend over and that expecting at least lube is being "entitled". Nvidia could just stop making any and all "gaming" cards today and it'd barely affect them while leaving the whole PC crowd behind. This is the result of years upon years of simping.
 
Nvidia is finally going all out with the "Be gone, gamers" attitude that they've clearly been shifting towards for a while now. First it was catering to the crypto crowd, and now it's moving onto enterprise.

Always knew granting such a scummy company an 80+% market share was going to bite PC builders in the ass someday. Now that company is openly telling them to bend over and that expecting at least lube is being "entitled". Nvidia could just stop making any and all "gaming" cards today and it'd barely affect them while leaving the whole PC crowd behind. This is the result of years upon years of simping.
Man, I still remember when they were the underdog, with some functioning yet meh Riva cards competing with Voodoo. Barely a few years later they just assimilated 3dfx, and the rest is history basically. There was no crypto, no AI, no pink hair devs, no programmer socks devs, no "catering to a modern, diverse audience", just pure focus on Western male geeks - and we didn't even understand that at the time.
Good times.
 
You guys should watch the latest vids from HBU and GN if you haven't already, they're quite good. They're about the new focus on AI from Nvidia, full of Jensen memes and idiocy, and just general gamer misfortune + horrifying future and prices ahead.
Keynotes are generally bullshit. Companies pay to get the spot, and they turn into exec circlejerking and investor pandering. This one was fairly cringe, too. Jensen was probably hitting the coke right before.
There was no crypto, no AI, no pink hair devs, no programmer socks devs, no "catering to a modern, diverse audience", just pure focus on Western male geeks - and we didn't even understand that at the time.
Jensen has a history of being egotistical that goes back to the 3dfx purchase; it's not really news.

Anyways, ride the wave while it lasts. Nvidia doesn't have the stranglehold on AI that investors think it does. Sooner or later the competitors will catch up by working together, and us normal folk will win with them, since most of it will be standardized for interoperability.
 
AMD 7600 (the GPU, not the CPU) reviews dropped. This whole gen looks like a skip.

It doesn't look as bad as the 4060 Ti, but that's really not saying much.
>AMD did something good for the consumer; i.e. last-minute price cut, making the card no more expensive than the GPU it's replacing, the RX 6600.
>frowny face of contempt in thumbnail; largely because AMD made a mess of its launch plans, annoying reviewers like the Youtuber in question.
I know soy redditor facial expressions in thumbnails are liked by the Youtube algorithm, but come on, it's more good news than bad news for the consumer, i.e. the people watching his videos.
 
>AMD did something good for the consumer; i.e. last-minute price cut, making the card no more expensive than the GPU it's replacing, the RX 6600.
>frowny face of contempt in thumbnail; largely because AMD made a mess of its launch plans, annoying reviewers like the Youtuber in question.
RX 6600 has been on sale for more like $180-220. RX 6700 10 GB has been around $300, RX 6700 XT 12 GB around $320, and those are clearly superior options. The lack of movement in performance/$ factors more into the disappointment than the media communication stuff.

The price will get cut or be allowed to drift down to something more reasonable soon, and then people will like it more.

If we do see a reasonably priced 7600 XT 16 GB to counter the 4060 Ti 16 GB, that will get some serious attention, but that could still fall behind the 6700 XT 12 GB in most benchmarks.
 
I just can't get my new AMD card to function properly, apparently that's a common problem.

If I have anything on my second monitor, the frame rate on my main screen drops to 15 or less. It feels like playing Skyrim on a PS3.

I have done everything, using Edge, turning hardware acceleration off, changing the refresh rate to 120 or 60, turning off freesync and so on.

I just don't know anymore what to do. This shit just fucking hates more than one monitor.

I really don't want to get super mad right now because I still believe I'm a fucking retard and I'm doing something wrong. But seriously, I'm very close to understanding why AMD just doesn't fucking sell.
 
I just can't get my new AMD card to function properly, apparently that's a common problem.

If I have anything on my second monitor, the frame rate on my main screen drops to 15 or less. It feels like playing Skyrim on a PS3.

I have done everything, using Edge, turning hardware acceleration off, changing the refresh rate to 120 or 60, turning off freesync and so on.

I just don't know anymore what to do. This shit just fucking hates more than one monitor.

I really don't want to get super mad right now because I still believe I'm a fucking retard and I'm doing something wrong. But seriously, I'm very close to understanding why AMD just doesn't fucking sell.
might depend on the batch of the card, firmware, driver, port, heck, even PSU issues; none of that is limited to one vendor. that's just the fun of the PC platform.
 