GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

@ZMOT As long as complete photorealism hasn't been achieved, the industry should push graphics forward using techniques such as ray tracing. Game developers will want to use it since it's easier to do lighting. Whether or not you want to buy into it is up to you, but your hand may be forced if new games start ditching rasterization entirely several years from now. If you don't care because new games suck, then you have no problem, but every new GPU, APU, and phone will have ray tracing acceleration anyway.

It won't require several orders of magnitude more performance. The AI denoising techniques used allow it to be done in real time in the first place. Rasterization and ray tracing performance will increase in parallel with new generations of GPUs until nobody cares about rasterization anymore.
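
To put a picture on the denoising part: the trick is that you only pay for something like one ray per pixel and then smear the noise out over time and space. Below is a toy sketch of the simplest ingredient, temporal accumulation; the real pipelines (SVGF, OptiX's denoiser, DLSS Ray Reconstruction) layer spatial filters, normal/depth guides and motion vectors on top, so take this as an illustration of the idea and nothing more.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy illustration only: blend each new noisy 1-spp frame into a running
// history so the variance falls without paying for more rays per pixel.
struct Pixel { float r, g, b; };

void accumulate(std::vector<Pixel>& history, const std::vector<Pixel>& noisyFrame, float alpha)
{
    // alpha ~ 0.1 keeps ~90% of the history; lower alpha = smoother but more ghosting.
    for (size_t i = 0; i < history.size(); ++i) {
        history[i].r = (1.0f - alpha) * history[i].r + alpha * noisyFrame[i].r;
        history[i].g = (1.0f - alpha) * history[i].g + alpha * noisyFrame[i].g;
        history[i].b = (1.0f - alpha) * history[i].b + alpha * noisyFrame[i].b;
    }
}

int main()
{
    const int w = 4, h = 4;
    std::vector<Pixel> history(w * h, {0, 0, 0});
    // Pretend each frame is a 1-sample-per-pixel path traced image: correct on
    // average, but very noisy per pixel.
    for (int frame = 0; frame < 64; ++frame) {
        std::vector<Pixel> noisy(w * h);
        for (auto& p : noisy) {
            float n = static_cast<float>(rand()) / RAND_MAX; // fake noisy radiance
            p = {0.5f + 0.5f * n, 0.5f + 0.5f * n, 0.5f + 0.5f * n};
        }
        accumulate(history, noisy, 0.1f);
    }
    std::printf("converged value ~%.2f (true mean is 0.75)\n", history[0].r);
    return 0;
}
```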
It should also be noted that rasterization at this point is just hacks upon hacks upon hacks upon hacks. RT could eventually create a clean slate, and in addition it will make it easier and cheaper for indies to crank out their own engine that implements things people expect, like "decent lighting" in the form of radiosity and GI. There are probably some cool things that can be done that aren't just light and reflections, and we will see what those are when developers start getting creative.
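
To show what "clean slate" means in practice, here's a toy sketch of ray traced shadowing: to find out whether a point is lit, you just trace a ray toward the light and check if anything blocks it, instead of juggling shadow maps, bias fudging, cascades and screen-space tricks. Not how any real engine is structured, obviously, just the concept.

```cpp
#include <cmath>
#include <cstdio>

// Toy sketch: is a surface point lit, or is something between it and the light?
struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the ray (origin, unit dir) hit a sphere before reaching distance maxT?
static bool hitsSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius, float maxT)
{
    Vec3 oc = sub(origin, center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0) return false;
    float t = -b - std::sqrt(disc);
    return t > 0.001f && t < maxT;
}

int main()
{
    Vec3 light   = {0, 5, 0};
    Vec3 blocker = {0, 2.5f, 0};   // a sphere sitting between the point and the light
    Vec3 shadePt = {0, 0, 0};      // the surface point we're shading

    Vec3 toLight = sub(light, shadePt);
    float dist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = {toLight.x / dist, toLight.y / dist, toLight.z / dist};

    // One shadow ray answers the question a rasterizer has to approximate.
    bool shadowed = hitsSphere(shadePt, dir, blocker, 1.0f, dist);
    std::printf("point is %s\n", shadowed ? "in shadow" : "lit");
    return 0;
}
```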

Right now RT hardware is sort of stupid to invest in, but that was also true of first-gen Transform & Lighting and the GeForce 3 with its shaders. A few games supported the latter at the time and absolutely nothing required it, putting it in the same situation as RT is now. T&L offloaded some things onto the graphics card instead of running them on the CPU, but everything was still designed to run smoothly on the CPU, and the GeForce 256 itself was barely any faster than the high-end TNT2s it replaced. They were both kind of stupid but also necessary.

What is fascinating is that Unreal Engine 5 and its new Lumen lighting system do not use or support hardware raytracing. At this point, at least.
 
Do you think we can expect to see continued growth in VRAM as RT runs its course? Going from 8GB to 24GB seems pretty radical compared to prior generations, especially this quickly. Do you think by the end of the RTX series we'll see the same 16x jump and have 128GB of VRAM as standard?

(On the consumer end, I know enterprise / professional play by their own rules)
 
I just inherited a Ryzen CPU computer from a friend who didn't want it anymore because it crashed every 3 seconds so he got something else. After some digging I found that it kept throwing a cache hierarchy error. I slightly overvolted it and it's been doing fine ever since. What's up with these things though?
 
I just inherited a Ryzen CPU computer from a friend who didn't want it anymore because it crashed every 3 seconds so he got something else. After some digging I found that it kept throwing a cache hierarchy error. I slightly overvolted it and it's been doing fine ever since. What's up with these things though?
Sounds like he either crapped out of the Great Silicon Lottery or he's running some crappy RAM and that's got something to do with it.
 
Do you think we can expect to see continued growth in VRAM as RT runs its course? Going from 8GB to 24GB seems pretty radical compared to prior generations, especially this quickly. Do you think by the end of the RTX series we'll see the same 16x jump and have 128GB of VRAM as standard?

(On the consumer end, I know enterprise / professional play by their own rules)
The RTX 3090 is basically the "Titan". Nvidia has a gaping hole in VRAM between the 12 GB 3080 Ti and the 24 GB 3090, which AMD plugged with the RX 6800 and its 16 GB of VRAM, making that amount of VRAM "mainstream".

Given that HBM3 will allow up to 64 GB per package eventually, there are no real technical limitations here. It's all about cost and necessity. We'll definitely see consumer GPUs with 32 GB. I'm not sure much more is needed, even at 8K resolution. Maybe we'll see the SSG catch on in a consumer context instead:

AMD's SSG API allows application developers to let the GPU talk directly to the 2TB of on-board SSD storage, bypassing the CPU, PCIe bus, and system interface, which can create bottlenecks. AMD says the SSG portion of the card can read data at up to 8GB/s and write data at up to 6GB/s. That's a lot slower than HBM2, but with the High-Bandwidth Cache Controller (HBCC) and thanks to the direct access, it's still over twice as fast as talking to SSDs over the PCIe bus.
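
Rough numbers on that "over twice as fast" claim, assuming a typical PCIe 3.0 x4 NVMe drive around 3.5 GB/s for the comparison (my assumption, not AMD's figure):

```cpp
#include <cstdio>

// Back-of-the-envelope comparison for streaming a large out-of-core asset.
// The 3.5 GB/s NVMe figure is an assumption about that era's drives, not a
// number from AMD; the 8 GB/s is the quoted on-board SSG read speed.
int main()
{
    const double assetGB     = 16.0;  // hypothetical scene chunk
    const double ssgReadGBs  = 8.0;   // AMD's quoted on-board read speed
    const double nvmeReadGBs = 3.5;   // typical PCIe 3.0 x4 NVMe drive (assumed)

    std::printf("on-board SSG : %.1f s\n", assetGB / ssgReadGBs);   // ~2.0 s
    std::printf("system NVMe  : %.1f s\n", assetGB / nvmeReadGBs);  // ~4.6 s
    std::printf("speedup      : %.1fx\n", ssgReadGBs / nvmeReadGBs);
    return 0;
}
```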

At some point I'd rather see discrete GPUs die in favor of super APUs with terabytes of universal memory stacked on them.
 
At some point I'd rather see discrete GPUs die in favor of super APUs with terabytes of universal memory stacked on them.
I think that is likely, but it will be part of the "you will own nothing and like it," all-PCs-are-thin-clients future.

I say that as a paranoid person who is already setting up that exact thing, only self-hosted... the things we do to ourselves, just to prevent others from doing it first...
 
I think that is likely, but it will be part of the "you will own nothing and like it," all-PCs-are-thin-clients future.

I say that as a paranoid person who is already setting up that exact thing, only self-hosted... the things we do to ourselves, just to prevent others from doing it first...
Unless you're gaymercattle, it won't necessarily be that bad. Shitposting is very processing- and bandwidth-lite, and if it ever gets that bad it'll become economically viable to build and sell hobbyist/tinkerer/secure low-spec boxes, kind of like Raspberry Pis and the like. The retro computing scene indicates there's already some appetite for it.
 
Unless you're gaymercattle, it won't necessarily be that bad. Shitposting is very processing- and bandwidth-lite, and if it ever gets that bad it'll become economically viable to build and sell hobbyist/tinkerer/secure low-spec boxes, kind of like Raspberry Pis and the like. The retro computing scene indicates there's already some appetite for it.
It goes back to Stallman's point: SaaS and IaaS are not freedom. When you are using someone else's hardware and/or software, which you have no right to audit, you have no power, and you are almost always being exploited in some way.
 
It goes back to Stallman's point: SaaS and IaaS are not freedom. When you are using someone else's hardware and/or software, which you have no right to audit, you have no power, and you are almost always being exploited in some way.
Read my post again. Raspberry Pis don't run VMs in a datacenter.
 
Ryzen 5 5600x, worth it in #thecurrentyear? Mainly for emulation/10-15 year old games.
If you're thinking about emulating anything with 3D graphics, you're going to need something that costs more than $300, because most emulators that run 3D games are unstable and prone to crashing. Intel is the worst when it comes to emulation, specifically when it comes to rendering images and objects. I'd recommend getting a 2070 or a GPU with more than 6 GB of VRAM, plus any AMD CPU that has come out in the last 5 years, and you should be fine.
 
Ryzen 5 5600x, worth it in #thecurrentyear? Mainly for emulation/10-15 year old games.
You're going to have to be more specific.

Certain emulators and even certain games work better than others. Redream is solid. Dolphin has a good reputation. PlayStation 1 emulation seems solid as well. And anything before that seems to be largely figured out.

There's a YouTube channel, ETA Prime, that reviews single-board computers and small-form-factor PCs, mostly focused on emulation. Might be worth searching the channel for computers similar to what you want and seeing how they handle emulation.
 
@ZMOT As long as complete photorealism hasn't been achieved, the industry should push graphics forward using techniques such as ray tracing. Game developers will want to use it since it's easier to do lighting. Whether or not you want to buy into it is up to you, but your hand may be forced if new games start ditching rasterization entirely several years from now. If you don't care because new games suck, then you have no problem, but every new GPU, APU, and phone will have ray tracing acceleration anyway.

It won't require several orders of magnitude more performance. The AI denoising techniques used allow it to be done in real time in the first place. Rasterization and ray tracing performance will increase in parallel with new generations of GPUs until nobody cares about rasterization anymore.
except "easier to do" means fuck all if said devs suck. there's a reason their games suck, and it's not the engine or the lighting. that's like timmy tencent trying to convince people it's better when devs (not the publishers ofc) get a bigger cut because that will totally mean better games.
or let me put it this way: web development has never been easier - yet we're buried under bloated websites and node shit that even creeps onto the desktop now. any gain raytracing brings will be squandered, not to mention a proprietary algorithm to make up for power that isn't there means a competitor's entry will be even harder and you'll always have to use said vendor. you might be happy to pay the nvidia tax on every new gpu for a feature that doesn't look more than marginally different (not even better, depending on the use case), I'm not.

It should also be noted that rasterization at this point is just hacks upon hacks upon hacks upon hacks. RT could eventually create a clean slate, and in addition it will make it easier and cheaper for indies to crank out their own engine that implements things people expect, like "decent lighting" in the form of radiosity and GI. There are probably some cool things that can be done that aren't just light and reflections, and we will see what those are when developers start getting creative.
the hack is the AI you have to use properly to not end up with a slideshow or worse output than rasterization; that argument only holds up for pure RT. most indies already lack the resources to do a proper 3d engine, and you don't really need RT for pixel platformer #2328; there is simply no way they'll suddenly be able to compete with companies that poured billions into R&D for something that only works properly with their own chipsets.

Right now RT hardware is sort of stupid to invest in, but that was also true of first-gen Transform & Lighting and the GeForce 3 with its shaders. A few games supported the latter at the time and absolutely nothing required it, putting it in the same situation as RT is now. T&L offloaded some things onto the graphics card instead of running them on the CPU, but everything was still designed to run smoothly on the CPU, and the GeForce 256 itself was barely any faster than the high-end TNT2s it replaced. They were both kind of stupid but also necessary.
and don't forget that before that we had every vendor running its own thing, and that was with several more than 2 vendors around. T&L was a logical progression, RT is "why don't we do it exactly like it works in nature, no matter the cost or point". there's a reason we still use vectors and not voxels, for example.
 
Anyone on Nvidia try the latest drivers yet? I've been on 472 for a long time because of a black flicker and was wondering if it was finally time for me to come out of my cave.
 
I was thinking of getting a decent card for compute since I do Blender on the side, and since I am tired of dealing with AMD drivers I was looking at NVIDIA GPUs. The 3060 seems good enough to get the job done and surely faster than my current Vega 56, but then I realized that the Quadro series exists, which I know fuckall about. Are they worth it? I don't want to spend boatloads of money and I don't need cutting-edge shit; I need something that doesn't take years per frame, and the AI denoiser is also a nice plus.
 