GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

How about, for once, you talk about the 2015-2016 era, Pascal GPUs like the 1060 and the 1080 Ti, and games like The Witcher 3, GTA V and Metal Gear Solid V?

In order to have PS3 versions, GTA V and MGSV had to avoid doing anything that would stress 10-year-old hardware too hard. I don't talk about games like the recent Call of Duty titles for the same reason: they continue to have the PS4 as a target platform.

Witcher 3 comes out in 2015 and isn't an upgraded PS3 game, so let's look at that. The 900 series GeForces are out and are 1 year old. The 40 series was 1 year old in 2024, so let's look at some unoptimized slop from 2024, Helldivers II, which has no PS4 version.

We'll look at 1-year-old cards and 3-year-old cards. The 1-year-old cards for Witcher III are the 980 Ti (the Titan is an odd card, basically a datacenter GPU in a desktop form factor, with no analog today) and the 960; the 3-year-old cards are the 680 and the 630 (the 600 series had a different numbering system, and the 690 was a dual-chip card, so I went with the top single-chip card for the high end, and for the low end, the card with the same ratio of shader units as a 4090 vs a 4060). The 1-year-old cards for Helldivers II are the 4090 and the 4060, while the 3-year-old cards are the 3090 Ti and the 3060. Both of those generations had DLSS as a marquee feature.

Let's see how these games run at 1080p ultra with no upscaling! Unfortunately, I couldn't find 1080p ultra for Witcher III on a 630, so we're looking at both games at low settings on the bottom end.

                 3-year entry   3-year performance   1-year entry   1-year performance
                 (low)          (ultra)              (ultra)        (ultra)
Witcher III      10 fps         25 fps               33 fps         58 fps
Helldivers II    108 fps        118 fps              69 fps         158 fps

Apparently, when people say they want "properly optimized" games, they mean they want the games of 2024 to be unplayable on a 3060 at any setting.
 
1080p ultra
So you're talking about something else entirely, okay then.

Here's the thing: the only reason I can see for you going on this autistic tirade about how 3-year-old GPUs struggled to run games newer than them, while constantly refusing to acknowledge the Pascal GPUs even when directly asked, is to make a disingenuous argument that video games on PC were always unoptimized, so anyone bitching about optimization is entitled and should buy the newest RTX GPU because "technology only goes forward". You've handwaved away the optimization argument with "you don't even know what that means" without explaining what it actually means to prove the optimization point wrong, and every time you bring up examples of how games were always unoptimized, you go for really old GPUs like the 630 or the GT 9800 as if nothing past them existed. That, or you're incredibly autistic, you've hyperfixated on an argument no one made, and you keep arguing about something no one else does.

To reiterate: the argument everyone is making is that we already have good hardware that can deliver good visuals. It's the game devs that are wasting this potential and forcing everyone to keep buying newer hardware because of their own incompetence. That's what everyone's talking about, not how unoptimized games were X years ago on a GT 9800 or whatever the fuck. Modern iGPUs have more computing power than that; it doesn't matter for today's issues.

The reason I insist on you talking about the Pascal era is that I believe we reached a great equilibrium of hardware power, visual fidelity and hardware demand, and I know we did because my 1060 was still a capable GPU years past its prime. I played Snowrunner, a game from 2020, on a 1060, a GPU from 2016, at 1080p on medium settings, with great visual quality and perfectly playable framerates that, sure, weren't a rock-solid 60 fps, but never dipped below 45 fps (and I may be shooting low there); most of the time the dips weren't even noticeable, it was still a fluid experience. And it looked good without having to go for Ultra. The 1060 had the computing power to spare to not age that badly when games used it properly.
[attached: transistor count over time chart]
Transistor count kept growing year after year. Exponentially. Most graphs that represent it are linear, which is misleading. A 600 series card didn't age the same way as a 10 series card because the 10 series card had that much more power. Game tech and hardware demand didn't grow exponentially the way hardware did, therefore there is a cutoff point past which we already had a vast selection of hardware with longer longevity that could have handled at least medium 1080p@60fps gaming if games properly utilized it instead of wasting it on unoptimized factory-belt slop.
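
To see what I mean about linear charts hiding exponential growth, here's a quick throwaway sketch (the series is made up, just something that doubles every two years, not real transistor counts):

```python
# Minimal sketch: the same exponential series on linear vs. log axes.
# The values are made-up placeholders (doubling every 2 years), not real transistor counts.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2025)
counts = 1e7 * 2 ** ((years - 2000) / 2)  # hypothetical: doubles every 2 years

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, counts)
ax_lin.set_title("Linear y-axis: early years look flat")
ax_log.semilogy(years, counts)
ax_log.set_title("Log y-axis: steady growth is a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("Year")
    ax.set_ylabel("Transistors (hypothetical)")
plt.tight_layout()
plt.show()
```

On the linear plot the first fifteen years look like nothing happened; on the log plot the growth rate is obviously constant the whole time.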

Case in point: Tokyo Xtreme Racer. Released 2025, Unreal 5. Medium settings, 1080p, GTX 1070, >90 fps average, still looks good. About nine years of age difference between the game and the GPU, and it still holds up. So it can be done. Optimization means getting the best possible visual result at the lowest possible hardware usage. Not throwing Lumen and Nanite on, hopscotching whatever the fuck and shipping it as-is, which is what everyone keeps pointing out. The industry is so far up its own ass that it can't accept it's in the wrong, so it has to gaslight everyone that it's the "entitled gamers" who are wrong and should buy a new GPU. From Nvidia of course, because ray tracing is the future without which good graphics are impossible now.

Ideally, I'd like to hear from you what point you're trying to make by constantly bringing up performance comparisons that exclude anything Pascal or newer. At this rate you could bring up '90s PC gaming and it would have just as much bearing on the argument being made about what we have today.
 
While constantly refusing to acknowledge the Pascal GPUs
Pascal GPUs came out in 2016, one year after the Witcher III came out. A developer can't optimize his software for hardware that doesn't exist yet. Nobody complaining about optimization is saying, "Argh! This game that just came out probably won't run well on next year's GPUs!" Nobody says, "This game was so well optimized...I only had to wait a whole year for a GPU to come out that could handle it!"

But very well. To do an apples to apples comparison with the 40 series, which came out in 2022, we'll look at a game that came out in 2021. Hitman 3 seems like a reasonable choice to me.

                1060 or 4060   1070 or 4070   1080 or 4080
                1080p Ultra    1440p Ultra    4K Ultra
Witcher III     44 fps         56 fps         50 fps
Hitman 3        164 fps        163 fps        185 fps

I'm not looking at frame rates first and cherry-picking to make Witcher III look bad. I'm just choosing a game I recognize after googling "top games of 2021."

Seems you aren't remembering the past very well.

Transistor count kept growing year after year. Exponentially. Most graphs that represent it are linear, which is misleading.

Newer processors are actually arrays of chiplets made on multiple different processes. The only way they're able to pretend Moore's Law is still alive is by putting this massive array of chips on a package the size of my hand on the same graph with single-chip CPUs that fit on my fingertip:

[attached: transistor count chart including multi-chiplet packages]

So they're putting things on that chart that are actually much larger than you can make with a single chip, while not including multi-chip architectures of the past (which were few and far between, but still existed). If you only chart single chips, you see the exponential trend break, I believe, some time in 2017, but I don't care to make the chart right now.
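
If anyone does care to make it, a rough sketch would be something like this (the dict is deliberately left as an empty placeholder; you'd fill it with real transistor counts for single-die parts, which I'm not supplying here):

```python
# Sketch: given per-year transistor counts for single-die parts only, print the
# implied doubling time over each interval. A trend break shows up as the
# doubling time stretching out. The dict is an empty placeholder, not real data.
import math

single_die = {
    # year: transistors_per_die  -- fill in real single-die figures here
}

years = sorted(single_die)
for y0, y1 in zip(years, years[1:]):
    growth = single_die[y1] / single_die[y0]
    if growth > 1.0:
        doubling_years = (y1 - y0) / math.log2(growth)
        print(f"{y0}-{y1}: x{growth:.2f}, doubling every {doubling_years:.1f} years")
    else:
        print(f"{y0}-{y1}: x{growth:.2f}, no growth")
```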

Since Moore's Law is an offhand remark, not a well-defined law of physics, it is ultimately a matter of opinion as to whether wiring chips together through the package instead of the PCB counts. In my opinion, there's no reason PCB fabrics should be excluded if we count package fabrics. So if the EPYC 9655 counts, then multi-socket CPUs should count as well; IIRC you could put 64 POWER6 CPUs together in a coherent fabric.
 
Nobody complaining about optimization is saying, "Argh! This game that just came out probably won't run well on next year's GPUs!" Nobody says, "This game was so well optimized...I only had to wait a whole year for a GPU to come out that could handle it!"
Which is exactly what I said. I even highlighted that part in the post above in hopes your autism would be able to catch it but I guess it wasn't enough. So:

THE ARGUMENT EVERYONE IS MAKING IS THAT WE ALREADY HAVE GOOD HARDWARE THAT CAN DELIVER GOOD VISUALS. IT'S THE GAME DEVS THAT ARE WASTING THIS POTENTIAL AND FORCING EVERYONE TO KEEP BUYING NEWER HARDWARE BECAUSE OF THEIR OWN INCOMPETENCE

Is that clear enough? Or will you continue to argue about semantics that no one but you is arguing about? Or do we have to spell out that we're talking about now, and only using the past as an example of why now is wrong? The past has shown that now should be better, and we shouldn't accept its current state.
Seems you aren't remembering the past very well.
Oh I remember the past very well, because I played at 1080p at Medium on a 1060. But if you insist on doing your comparisons where you scale with the heaviest possible scenarios instead of the ones 99% of people actually played with, then be my guest. No one played Witcher 3 at 1080p on Ultra on a 1060, which I've already mentioned in the post you were replying to.
Newer processors
GPUs. We're talking about GPUs. They're still monolithic. Now you're suddenly switching to talking about CPUs. I should never have mentioned Moore's Law, since now you'll hyperfixate on how ackshually we weren't exactly doubling transistor count every year, so Moore's Law didn't apply for years. Even though that wasn't the core of my fucking argument. You'd know what the core of my argument was if you'd actually read what I wrote instead of having your autism lock onto Moore's Law, so I'm not even going to try to reiterate it.

I'm done. You're autistic. As in, properly on the fucking spectrum. Possibly with a hint of ADHD. I have no other explanation for whatever the fuck I'm witnessing here. I don't see a reason to argue about whatever it is that you're dreaming up in your mind, because you sure as hell aren't arguing about what I'm saying.
 
Newer processors are actually arrays of chiplets made on multiple different processes. The only way they're able to pretend Moore's Law is still alive is by putting this massive array of chips on a package the size of my hand on the same graph with single-chip CPUs that fit on my fingertip:
Nvidia hasn't moved to chiplets for gaming GPUs yet. I brought up how the 980 Ti and 4090 have roughly the same die size on their respective nodes, and the 4090 is ~5.5x faster (didn't look up games, just used TechPowerUp score) despite using some (much?) of the ~9.5x transistor budget for raytracing and AI instead of raster.

AMD's Zen chiplets are about 70-75 mm^2. They could make a monolithic version of a 64-core part if they wanted to, but it would have bad yields and cost more. They can cope with the downsides of these chiplets, and they are set to improve packaging with the silicon bridge technology used in Strix Halo and, apparently, all Zen 6 products (server/desktop/laptop).
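
To put a rough number on the yield point, the classic back-of-the-envelope Poisson yield model says the fraction of defect-free dies falls off exponentially with die area. The defect density and the "monolithic 64-core" area below are illustrative assumptions, not AMD's or TSMC's actual figures:

```python
# Back-of-the-envelope die yield: Poisson model, yield = exp(-area * defect_density).
# The defect density and areas below are illustrative assumptions, not real foundry data.
import math

DEFECT_DENSITY = 0.1  # defects per cm^2 (assumed)

def die_yield(area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-(area_mm2 / 100.0) * DEFECT_DENSITY)

chiplet_area = 72.0          # ~70-75 mm^2 Zen CCD, per the figure above
monolithic_area = 8 * 72.0   # naive stand-in for a monolithic 64-core die (8 CCDs' worth)

print(f"per-chiplet yield: {die_yield(chiplet_area):.1%}")    # ~93%
print(f"monolithic yield:  {die_yield(monolithic_area):.1%}") # ~56%
```

Each small die yields around 93% while the big hypothetical die yields around 56%, and since bad chiplets can be discarded individually, far less wafer area gets thrown away.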

Moore's Observation is still kind of alive for GPUs since they are massively parallel. RTX 6000 cards will probably be much better received than RTX 5000 since they will actually use a new node.
 
To reiterate: the argument everyone is making is that we already have good hardware that can deliver good visuals. It's the game devs that are wasting this potential and forcing everyone to keep buying newer hardware because of their own incompetence. That's what everyone's talking about, not how unoptimized games were X years ago on a GT 9800 or whatever the fuck. Modern iGPUs have more computing power than that; it doesn't matter for today's issues.
I can definitely tell the difference visually between games made 10 years ago and games made today. Maybe you should get your eyes checked.
 
Hitman 3 seems like a reasonable choice to me.
Doesn't that use the same engine as the previous two entries, and isn't it basically just the last one in an episodic series that started much earlier? Either way, I don't think they're comparable, since the engines those games run on utilize the CPU differently. And TW3 was basically the most demanding game of its time, bar none, while Hitman wasn't.

And in terms of performance from the 2000s, keep in mind that resolution and fidelity were actively being pushed. From 2000 to 2012 or so, games went from 800x600 as standard to 1920x1080. From the early 2010s to now, we've been stuck at that same resolution with big publishers and studios having no desire to increase it.

But the point isn't even those things. It's that games (by virtue of modern engines and coding practices) look worse or barely better and perform worse. We basically need higher and higher-end parts to render slightly better or uglier games with less fidelity.

Instead of performance overhead being used to pump better visuals, it's being leaned on to support unsustainably fast development times enabled by unoptimized one-size-fits-all engine technologies.
 
Instead of performance overhead being used to pump better visuals, it's being leaned on to support unsustainably fast development times enabled by unoptimized one-size-fits-all engine technologies.
Except that also applies to the 2000s. Games in this period typically had a turnaround time of 18 months and they certainly used off-the-shelf engine slop like Gamebryo and RenderWare.

And in terms of performance from the 2000s, keep in mind that resolution and fidelity were actively being pushed. From 2000 to 2012 or so, games went from 800x600 as standard to 1920x1080. From the early 2010s to now, we've been stuck at that same resolution with big publishers and studios having no desire to increase it.
Maybe 1920x1080 on the high-end. The GeForce 500 series struggled at 1080p and even with the then-new GTX 680 not every game would run well at 1080p (keeping in mind that 'running well' means ~70 fps).

In fact, right now feels very much like 2010, with the 500 series releasing and turning out to be just a Fermi refresh, and a new resolution that top-end cards could theoretically hit while most everyone stuck to lower resolutions.

From the early 2010s to now, we've been stuck at that same resolution with big publishers and studios having no desire to increase it.
Most big-budget games nowadays target 4K. And even putting that aside, I don't know a single person still gaming at 1080p on PC. Everyone's moved on to at least 1440p.

PC has this nice feature called a settings menu; look at all the things you can tune to balance looks and performance:

[attached: screenshot of an in-game graphics settings menu]

I was playing BF3 on an i3-2100 and a GTX 550 Ti at 1080p, with a mix of medium and high settings, at ~60 fps in 2012. It could do ultra, but not very well (obviously).
Which is kinda his point - even with midrange Turing cards, most new games run well above 60 fps as long as you tune the settings down towards medium or high.

EDIT: I think really the issue is vibes. Pascal was a sweet spot that spoiled a lot of consumers (even when it was new it was insane value compared to Maxwell). From 2020 onwards, it feels like we've been moving in the opposite direction of what the Pascal era was and even though the numbers show that performance tiers haven't changed much, it doesn't feel great to have something like the 4090/5090 hanging around that customers need to think about (almost like an inverse halo product).

As a lot of this thread shows, we're also chasing performance that doesn't really amount to much for most gamers. Black Myth Wukong looks incredible if you slow down and take in the scenery, but it's a fucking action game - how many people are going to stop and investigate the simulated lighting on foliage while they're dodgerolling around in Chink Souls?

And then you get into the absolutely abysmal state of entry-level. The 4060 is an okay card but $300 is not where entry-level needs to be. The lack of anything for people who simply don't have $300 to spend on a new card is frustrating if you're a kid or not from the first world. APUs were supposed to be the new entry-level but their pricing is just as bad.

This isn't even limited to just PC either. Consoles are also experiencing this hard. The PS5 has not sold as well as the PS4 (and most PS5 users are using it to play PS4 games) and the PS5 Pro is struggling to move inventory as most people don't care about AAA slop enough to spend $700 on a console to get slightly better looking AAA slop.
 
And then you get into the absolutely abysmal state of entry-level. The 4060 is an okay card but $300 is not where entry-level needs to be. The lack of anything for people who simply don't have $300 to spend on a new card is frustrating if you're a kid or not from the first world. APUs were supposed to be the new entry-level but their pricing is just as bad.
I see 8600G at $189 on Amazon/Newegg, 8700G at $255. Newegg is giving $5 off either with a code. 8600G without overclocking looks OK for 1080p LOW. What price makes the 8600G (which has more than enough CPU perf) attractive? 5600G prices bottomed out at around $100-120 with some rare $80 deals. There is a 5600GT successor at about $130 for AM4 but the Cezanne iGPU is still too weak. 8500G is a compromised 4 CU Phoenix2 APU and is way too high at $149, and the 8300G also has 4 CUs but hasn't seen a retail release.

As far as entry-level GPUs go, it's crazy to me that the RX 7600 is at $260+ (RX 6600 is at $200+ but is probably not in production). If AMD wanted to help the entry-level, they could make lots of RX 7600 (XT) on the old TSMC N6 node, and push those prices down to $150 (8 GB), $200 (16 GB). We'll see what happens with 9040/9050/9060 on Navi 44.

Rumors say an RTX 5050 desktop GPU will exist, with a slightly higher TDP than the 4060. I think the best you could hope for is for it to be slightly faster than the 4060 at $250... MSRP.

All prices mentioned are USD, non-Americans are further cursed. Maybe tariffs will change that, but probably not.
 
@The Ugly One, do you acknowledge that you can have two identical scenes running at wildly different framerates thanks to underlying technologies, techniques, and/or strategies involved in coding? Or do you think every scene is what it is and you just need to throw more hardware at it?
I really don't understand this fantastical world you've created where efficient code does not exist.

You can look at Battlefield 3 and Battlefield 2042 side by side. Where did the 10 years of GPU hardware improvements go?

You keep pointing to framerates of different cards of their eras to "prove" that the latest games always ran poorly. But it's a non-argument. Imagine if Doom 3 had released in 2003 looking exactly the same as Doom 1 while requiring the most expensive graphics card for an acceptable framerate. That is modern gaming.
 
THE ARGUMENT EVERYONE IS MAKING IS THAT WE ALREADY HAVE GOOD HARDWARE THAT CAN DELIVER GOOD VISUALS. IT'S THE GAME DEVS THAT ARE WASTING THIS POTENTIAL AND FORCING EVERYONE TO KEEP BUYING NEWER HARDWARE BECAUSE OF THEIR OWN INCOMPETENCE

And my argument is that the "wow factor" of a new graphics technology has little to do with its computational cost, and the reason that every new technology across every game across every studio seems less and less impressive is that we're in an era of diminishing returns. Satoru Iwata talked about this 25 years ago; he could already see that the PS2 -> PS3 jump wouldn't be as impressive as the PS1 -> PS2 jump, even if the objective compute power was another 10x or whatever.

You know who else thinks so? John Carmack. After megatexture, he realized the era of one man having a big impact on graphics with a new idea was just done forever. It's over. It's not coming back. Too much has already been done, and each marginal improvement costs more and more. It's the 80:20 rule applied to graphics.

Also, we are in an era where games run smoother on older hardware than ever before. A shocking number of marquee titles still come out for the 11-year-old PS4. A huge range of games have 7, 8, or even 9-year-old GPUs as the min spec. You can run Battlefield 2042 on a 970 at over 60 fps, and that card was 7 years old at the time. At no time in history have gamers been able to get away with not upgrading their machines for so long.

But what the fuck do I know, I'm just a big fat idiot who spent all week optimizing GPU-accelerated physics code.

Oh I remember the past very well, because I played at 1080p at Medium on a 1060. But if you insist on doing your comparisons where you scale with the heaviest possible scenarios instead of the ones 99% of people played with then be my guest. No one played Witcher 3 at 1080p on Ultra on a 1060. Which I've already mentioned in the post you were replying to.

Okay, sure, then let's look at how games ran on 1-year newer GPUs in the Era of Hyper-Optimization versus today, when everything is unoptimized slop.

ERA OF HYPER-OPTIMIZATION
Witcher 3, 1080p Medium, GTX 1060: 60 fps

ERA OF UNOPTIMIZED SLOP
Cyberpunk 2077, 1080p Medium, RTX 3060: 78 fps
Hitman 3, 1080p Medium, RTX 4060: 159 fps
Helldivers 2, 1080p Medium, RTX 4060: 87 fps

GPUs. We're talking about GPUs. They're still monolithic. Now you're suddenly switching to talking about CPUs.

I brought up CPUs because you showed a chart that had CPUs on it, and those charts always have EPYCs on the high end. GPUs haven't kept up with Moore's Law either, because cutting-edge process nodes haven't, and the reason they haven't is that, each generation, certain chip features turn out to be un-shrinkable, or at least shrink less than expected. NVIDIA uses the exact same nodes everyone else does, and can't make a die any bigger than anyone else can. They got a bump at the end there because they switched to TSMC, who's a step ahead of Samsung.

[attached chart]
 
You know who else thinks so? John Carmack.
None of these people you are invoking actually agree with you, and your statements are not logically downstream of theirs.

But what the fuck do I know, I'm just a big fat idiot who spent all week optimizing GPU-accelerated physics code.
We have no reason to believe you are any good at this.
 
I need to replace my island-of-misfit-toys spare-parts rig that I use as a stream/Steam Link/browsing/light computing machine. Basically I want to accomplish the same things, plus a little more power, in a smaller form factor so I can move it into my entertainment center. I think I've narrowed it down to a 7840HS Ryzen mini-PC. Any opinions on brands/what to avoid?
Dell Inspiron 14 2-in-1 (Open-Boxes): 14" FHD+ IPS Touch, Ryzen 5 8640HS, 8GB DDR5, 512GB SSD $289.99

Another potential option: a 6-core Hawk Point laptop. I have used laptops connected to TVs to play movies before, but never a 2-in-1. This one only has HDMI 1.4, which is not good for 4K (not sure about the DisplayPort), and it might break sooner than a mini PC (or develop a battery issue) if you're handling it a lot.
 
EDIT: I think really the issue is vibes. Pascal was a sweet spot that spoiled a lot of consumers (even when it was new it was insane value compared to Maxwell). From 2020 onwards, it feels like we've been moving in the opposite direction of what the Pascal era was and even though the numbers show that performance tiers haven't changed much, it doesn't feel great to have something like the 4090/5090 hanging around that customers need to think about (almost like an inverse halo product).
The PS3 and PS4 generations were both abnormally extended due to 1) the 2008 recession and 2) Covid, so Pascal basically had a ridiculously long glory period, as games didn't really shift. Now that we're seeing the generational jump finally happen, it's throwing people off.

It doesn't help that TSMC demand is insane, with AI, professional cards, and the switch from consoles to PC all happening at once.
 
The PS3 and PS4 generations were both abnormally extended due to 1) the 2008 recession and 2) Covid, so Pascal basically had a ridiculously long glory period, as games didn't really shift. Now that we're seeing the generational jump finally happen, it's throwing people off.

It doesn't help that TSMC demand is insane, with AI, professional cards, and the switch from consoles to PC all happening at once.

The main reason they were extended is that graphics were already starting to look so good that the next gen just didn't light people on fire. Call of Duty already looked like this running on a GTX 970, roughly the same GPU generation as the PS4, at (I believe) 1080p medium settings (and speaking of muh optimization, Infinity Ward completely rewrote the rendering pipeline for MW2019 to enable more efficient effects processing).


If you max out everything in this game at 1440p, it still only takes 5.2 GB of VRAM. The 4090 is about 21x more powerful than a GTX 970 for SIMD operations, though of course it has tensor ops as well. But 21x isn't as big as it sounds. 21x gets you only about 4.5x the surface detail in each direction, vertices and texels/layers. That extra compute gets eaten up fast: 2x or 4x the texture resolution, increase the screen resolution, tessellate the world a little more, add a few effects layers, and you're done. Back when games looked like this:

[attached: screenshot of an early 3D-era game]

then upping things 2x-4x in each direction and a few extra layers was a big fuckin' deal. We went from maybe 100,000 polygons per second and 1-2 layers on the N64 to a few million polygons per second and 8 layers on the Gamecube. It was mind-blowing. When your game looks like this (Battlefield 4, 2014):

[attached: Battlefield 4 screenshot]

...a 21x improvement in compute power buys you fuck all; all it gets you is marginal improvements, like "now the water reflects things that aren't on the screen." Because now the stuff that sticks out visually isn't how ugly and blurry everything is (because it's not), it's how lifeless and inert everything is. Cloth doesn't tear. Steel doesn't bend. Water doesn't flow and splash. A tank tearing across a muddy field does nothing at all to it. Game physics are horrible, rudimentary garbage. The problem is that, unlike everything everyone's focused on for the last 25 years, realistic physics can't be done with time-independent computations on polygons. You need time-dependent computations using polyhedra.

Light is really nice because, as far as rendering is concerned, it moves instantaneously. All light calculations are intrinsically static, meaning I don't have to keep any kind of time history to get them right. Whether I render at 10 fps or 100 fps, it doesn't change the lighting. With light, the previous frame doesn't affect the current frame.

Not so with physics. Physics need to be resolved in time as well as space, or they get completely, unavoidably fucked. I may only need 1 million polyhedra to resolve a car crashing into a concrete divider fairly realistically, but in order for it to not look completely retarded, I need to run the physics at about 1000 fps. I'm not talking engineering quality, I'm just talking, "realistic enough that the viewer doesn't think, 'wow, this looks like dog shit'." If you want realistic bubbles coming out of a deep sea diver's helmet, deforming, collapsing, and merging, then that's getting closer to 10,000 fps or even 100,000 fps just to not look like crap and have tons of terrible artifacts everywhere.
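
For anyone wondering what "running the physics at 1000 fps" actually looks like, here's a minimal sketch of the standard fixed-timestep pattern games use to decouple the simulation rate from the render rate. The stiff spring and every constant in it are illustrative stand-ins, not taken from any real engine:

```python
# Minimal fixed-timestep sketch: render at ~60 fps, but sub-step the physics at
# 1000 Hz so stiff, fast dynamics stay stable. All constants are illustrative.
PHYSICS_DT = 1.0 / 1000.0   # physics resolved at 1000 steps per simulated second
FRAME_DT = 1.0 / 60.0       # display frame time

def step_physics(state, dt):
    """Semi-implicit (symplectic) Euler step of a stiff damped spring, unit mass."""
    x, v = state
    k, c = 4.0e5, 40.0              # stiff spring constant and damping (assumed)
    v = v + (-k * x - c * v) * dt   # update velocity first...
    x = x + v * dt                  # ...then position with the new velocity
    return (x, v)

state = (0.01, 0.0)          # small initial displacement
accumulator = 0.0
for frame in range(60):      # simulate one second of display frames
    accumulator += FRAME_DT
    while accumulator >= PHYSICS_DT:   # run however many sub-steps this frame needs
        state = step_physics(state, PHYSICS_DT)
        accumulator -= PHYSICS_DT
    # (rendering would happen here, once per frame, using the latest state)

print(f"displacement after ~1 s: {state[0]:+.3e}")
# Stepping this same spring once per render frame (dt = 1/60) instead would blow
# up: the integrator is only stable while sqrt(k) * dt stays well under 2.
```

The point is that the renderer only ever sees 60 states per second, but getting those states without the simulation exploding means doing far more work per second than the renderer does.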

Well, in GPUs we're scoring about a 20x improvement every 8 years. I figure we need another 1000x-10,000x improvement in compute power to do anything meaningful in real time with the kinds of physics games currently lack, so that'd put us somewhere in the 2040s or 2050s when we hit the next major milestone in game visuals.
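
A quick sanity check of that arithmetic, using only the rough figures quoted above (the ~21x SIMD ratio, the 20x-per-8-years cadence, and the 1,000x-10,000x target), nothing measured:

```python
# Back-of-the-envelope check of the estimates above: how much per-axis detail a
# given compute multiple buys, and how long a 20x-per-8-years cadence takes to
# accumulate a 1,000x-10,000x improvement. All inputs are the rough figures quoted above.
import math

def per_axis_detail(compute_ratio: float) -> float:
    """Detail scaling in each surface direction if compute goes into 2D density."""
    return math.sqrt(compute_ratio)

def years_to_reach(target_ratio: float, gain_per_period: float = 20.0,
                   period_years: float = 8.0) -> float:
    """Years of compounding at gain_per_period every period_years to hit target_ratio."""
    return period_years * math.log(target_ratio) / math.log(gain_per_period)

print(f"21x compute -> ~{per_axis_detail(21):.1f}x detail per direction")  # ~4.6x
print(f"1,000x gap  -> ~{years_to_reach(1_000):.0f} more years")           # ~18
print(f"10,000x gap -> ~{years_to_reach(10_000):.0f} more years")          # ~25
```

Eighteen to twenty-five more years from now lands in the 2040s, pushing 2050 at the high end, so the estimate at least hangs together.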
 
I had a GeForce4 MX (which was a rebadged GeForce 2) and would have died of shock if NVIDIA had come out with a technology that allowed it to run new games at 800x600 and 60 fps when it was 2 years old. Now you have people with 7-year-old cards bitching that they can't get 120 fps at max settings and crying about the "good old days" when "everything was optimized."
People are judging how old games ran through the lens of modern hardware. Old games ran, and still do run, like absolute shit for the visuals offered lol. I'm getting around 110-144 fps in Spider-Man Remastered on my 7900 XTX. Now, how many fps do you think I'm getting in the PC port of Spider-Man 3, the game from 2007? Similar if not worse performance, despite the game looking 10x worse. And it's not just an issue that plagues low-budget games: Saints Row 3 legit dips into the mid-50s at times on my system, while GTA 5 chugs along at 120 Hz ez.
Not to say there aren't new games that run like shit for what's offered (the Saints Row reboot comes to mind), but things weren't all sunshine and rainbows back in the day.
 
@The Ugly One
Intel finally launches XeSS 2 SDK, Unity and Unreal Engine 4/5 plugins also available
Unfortunately, it seems that Intel has not released XeSS 2 as open source. The algorithms still come as precompiled libraries, and despite promises dating back even before the Arc Alchemist launch (and the first XeSS), the company has never published any open-source upscaling technology to date. Perhaps launching XeSS 1 as open-source would save Intel’s face, but it looks like there are no such plans either.
I thought I heard some dooming about XeSS recently, but I forgot what it was about. Seems it's still alive, for now.

Most humble tech CEO.
 