GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

The 3600X is a 65W or 70W part, I think. At stock, most high-end Intel and AMD CPUs are rated at a 95W TDP; some sit a bit above that, but those are usually in "gotta go fast!" benchmarking territory and priced as such. It's when overclocking that the power curve reaches for the sky: a 30% boost in base frequency might lead to a 150% increase in power consumption.
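That last bit is roughly the classic dynamic-power story, P ≈ C·V²·f: pushing the clock also forces the voltage up, and the voltage term is squared. Here's a back-of-the-envelope sketch with purely illustrative clock and voltage numbers (not datasheet values for any real chip):

```python
# Rough dynamic-power model: P is proportional to C * V^2 * f.
# The voltage needed to hold the overclock is a made-up illustrative figure.
base_freq_ghz = 3.8   # hypothetical stock clock
oc_freq_ghz = 4.9     # ~29% overclock
base_volt = 1.05      # hypothetical stock voltage
oc_volt = 1.45        # hypothetical voltage required to keep the overclock stable

# Relative power = (f_oc / f_base) * (V_oc / V_base)^2
relative_power = (oc_freq_ghz / base_freq_ghz) * (oc_volt / base_volt) ** 2
print(f"~{(relative_power - 1) * 100:.0f}% more power for ~29% more frequency")
# Prints roughly +146%: a ~30% clock bump really can more than double power draw.
```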
If you're doing an all-core OC, then yeah, they can still jump quite a bit. That method of boosting is pretty much dead on Zen 2. Now you just turn on PBO and it gives you the best boost for the number of cores your task likes to use. AutoOC +200MHz will also spike the power, but for me that actually lowered performance, as it applies across all cores even if you're not using them.

If you DO wanna go the all-core OC route and you're gonna use those cores, you still have to be rolling with a 3900X or higher to cross that 200W barrier. You're also getting some serious multicore work done in that case lol.


Now if we're talking Intel, yeah, that's a different story. It's still the old-school overclocking method, which, while fun, does seem wasteful nowadays. No single-core OC, so if you want that 5GHz you gotta OC the whole thing, which easily approaches 300W of power.
Ice Lake was supposed to bring the power efficiency back in line, but it clocked lower than 14nm. So they had to juice it outside of its efficiency band, and now it's in the same boat, along with having fab issues.

All in all, overclocking as we know it is sort of dead. Both AMD and Intel now give you the bleeding edge out of the box (the 9900KF boosts to 4.9GHz), and with an OC you MIGHT eke out another 200MHz for a spike in power draw. I run a custom loop so I still do it for fun, but for anyone actually concerned about TDP, take solace in knowing that you're really not missing anything. Though now we're at the point where some of these CPUs will require high-end air cooling or even watercooling to hit that out-of-the-box boost (9900KF, 3900X, 3950X, etc.).

PS: I know this thread was for GPUs, but I find CPUs much more interesting lol.
 

No problem, I find them interesting as well. I just wish Cyrix was still involved, making some weird garbage for grandma computers.

With GPUs there have been more companies involved with strange ideas over a shorter span of time, and going from being the king to being a bum could happen in the blink of an eye. It used to be a real roller coaster.

These days GPUs have stagnated quite a bit when it comes to adding features, for natural reasons; after Direct3D 11.x things looked to be pretty much done. I get the lowkey feeling that over the last few years Nvidia has scrambled to push people to aim for 4K and/or 144Hz, since with a midrange GPU 1080p has been "solved" for the last couple of years. RTX is a godsend for them precisely because it does not run that well, and if the PS5/XBSX have some degree of useful raytracing support, those features will be ported back into PC titles, creating a market for it there as well.
 
Oh no doubt. There's been a huge push to frame 1080p, especially 60 fps, as "low end". Can't sell new vidya cards if ppl are okay with just 1080p.

It's a great time to build a PC, imo. Cheap video cards, cheap ryzens, cheap ram, cheap mobos that actually work, and ever falling ssd prices.
 

Agreed, even older stuff works. A desktop computer with a five-year-old Intel CPU is fine; playing recent games in 2000 on a desktop built around a 1995 CPU would have been insanity. The first-gen i7/i5/i3 was released ten years ago, in 2009, and it can still be used. Using a ten-year-old processor in 2009, though...
Even GPUs are starting to exhibit similar behavior. The GTX 970 is five years old and performs close to a 1060, and many people still sit on 1060s/970s because both are perfectly decent 1080p cards even in recent games.

Things are better than ever: good things are cheap and they work. It's a bit boring. The only classic agony and ecstasy left is buying grey-market accessories built on unspecified VIA chipsets that probably shouldn't have left China.
 
Man, don't remind me how long ago Core i came out. Feels like just yesterday I had my trusty OC champion, the Q6600, and was looking at the 3570K as an upgrade. I was so put off by having to pay extra to OC that I went Bulldozer for cheap. Stuck with that until the R5 1600 came out.
 
So double post, but screw it cuz' nobody likes to talk about hardware apparently.

The 5500 XT is a bust. You get RX 580 performance for $200. The speculation is that it's really only there for OEMs atm, and that they're just clearing out old Polaris stock before reducing the price, hopefully. At that price point you can start looking at a 1660 Super.

Next up is the 5600 XT. Some numbers leaked out, so take them with a grain of salt:


Now ideally this would be the $200 card, maybe a bit more. Some people speculate $250, which puts it within spitting distance of the 5700. Also not a good price point to be at. Could be that when this launches, the 5500 XT will be reduced to a more sensible price.


Now for a personal review of the 5700 XT. I got a reference ASRock and it was a great card. I do watercooling for fun, so of course I threw a block on it. The major downside is that yes, the driver issues can be real if you're not lucky. As soon as I updated to the 2020 Adrenalin drivers, whenever I put the PC to sleep, all I would get upon waking it up was a black screen that required a hard reset. Changing any OC settings in Wattman or MSI Afterburner would cause my main screen to go blue, which could only be fixed by changing the screen resolution back and forth.

I thought, shit, is it the card? So I ran DDU and grabbed the original 2019 drivers I was using, and everything went back to normal. Since then I decided to go with a 1080 Ti, but I'm planning on putting the 5700 XT in my wife's PC to see if it behaves better in her rig.



On the CPU front, things are much more interesting. Ryzen 3000 dominates across the board, and 4000 has no intention of changing that. Intel is grasping at straws, trying to squeeze every last inefficient drop out of those 14nm chips.
(Chart: Intel Core X-Series family and pricing.)
Look at those TDP numbers! The prices are also something to note: when Threadripper 3 came out, Intel saw the writing on the wall and slashed prices across their HEDT segment in half.
(Chart: overclocked power consumption.)
Look at that power draw. The stuff of legends right there. I like TheFPSReview, but they only had a 3900 to compare it to. That wasn't too fair, but it did come close in single-thread tests. Other reviewers had the 3950X on hand, and while it can beat the 3950X, you have to run it OC'd to the max, where it easily draws 2x the power.
The single bit of praise I can give this CPU is that it lowered the entry cost for cutting-edge HEDT. You can buy an older Threadripper 2, but those aren't too great for gaming (cuz that's what ppl buy these for, right?). Threadripper 3 starts around $1,500 but completely mops these CPUs up. So it DOES fill a very specific niche, just bring the cooling to support it :D

Am I an AMD fanboy? For CPUs? Yeah, probably. Ryzen is the most exciting CPU line of the past... decade? It also lets me laugh at Intel a bit with the amount of grasping they're having to do. As a parting gift, here's an "insider leak" from Intel. Take it with a huge tablespoon of salt; the guy's been right sometimes, with increasing accuracy over the last few years.

 
Speaking of drivers, I just found a Matrox M3D driver disk; that was the PowerVR PCX2 card they released. The drivers actually worked, which can't be said for later Matrox drivers.

It was a good, cheap card: not as fast as a Voodoo, but it could run at higher resolutions and in 32-bit color. Running at 1024x768 and 32bpp made Unreal look incredible. It also made it unplayable.
 
Can't comment much on drivers. I don't update often. Some say this is bad, but I haven't seen it do harm to my system.

"Rays are cast towards reflective surfaces visible from the player’s camera, rather than from the player’s camera across the entire scene. This greatly reduces the number of rays that have to be cast, which increases performance."

This isn't ray tracing, but raycasting and PBR.

Raycasting is a performance hack that sort of works backwards compared to raytracing. Raytracing is essentially simulating light: starting at a light source, hitting shit in the scene and illuminating/bouncing around, possibly bouncing off away from the screen and ultimately doing a lot of work to never be seen.
Raycasting starts at the camera/eye/each pixel on the screen and travels outward toward the scene to ensure every bit of work done is actually going to be put to use.
This is late, but the statement quoted above, specifically the part where it says rays start at the light source, is wrong.

Even Chaos Group, the maker of Vray, which is the ray tracer everyone and their mother uses for architecture/interiors, states that:
"Ray tracing is a technique for generating an image by tracing paths of light from the camera through pixels in an image plane and simulating the effects of its encounters with virtual objects. To create different effects, different rays are traced."
It always starts from the camera because those are the rays that actually reach the camera and thus the ones that factor into the render. AFAIK, the ray terminates at the light source. Do note that even though ray tracing is considered more physically accurate than other types of rendering, even among ray tracers some are more accurate than others. Simulating light behavior means more than just tracing rays.
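To make that concrete, here's a minimal toy sketch (not any particular renderer's code) of the camera-first approach: one primary ray per pixel starting at the eye, and the light only gets involved after a hit is found, via a secondary ray from the hit point toward the light:

```python
import math

# Toy scene: one sphere and one point light. Everything here is made up for illustration.
WIDTH, HEIGHT = 16, 16
EYE = (0.0, 0.0, 0.0)
SPHERE_C, SPHERE_R = (0.0, 0.0, -5.0), 1.0
LIGHT = (5.0, 5.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def hit_sphere(origin, direction):
    # Distance along the ray to the sphere, or None on a miss (direction must be unit length).
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Primary ray: starts at the camera and goes through this pixel on the image plane.
        px = (x + 0.5) / WIDTH * 2 - 1
        py = 1 - (y + 0.5) / HEIGHT * 2
        direction = norm((px, py, -1.0))
        t = hit_sphere(EYE, direction)
        if t is None:
            row += "."   # ray escaped into the background
        else:
            # Only now does the light matter: shoot a ray from the hit point toward it.
            hit = tuple(e + t * d for e, d in zip(EYE, direction))
            to_light = norm(sub(LIGHT, hit))
            normal = norm(sub(hit, SPHERE_C))
            row += "#" if dot(normal, to_light) > 0 else "-"
    print(row)
```

Run it and you get a little ASCII sphere lit on the side facing the light; the point is just that every ray's journey begins at the camera, exactly as the quoted definition says.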

Basically, not all ray tracers are created equal. Some offer more physical accuracy at the cost of more render time (Maxwell Render), some offer a good mix of realism, ease of use, and tunability for advanced users (Vray), and some offer massive scalability (enough for Hollywood-tier productions) and simpler controls at the cost of more render time (Arnold).

Raycasting scales, I think, linearly with screen resolution in terms of compute; raytracing scales based on (in addition to resolution) scene complexity in terms of geometry and the properties of the stuff in the scene (reflective, semitransparent (subsurface scattering), etc.).

This part is correct. Again, it can vary from scene to scene. One model that is subdivided into millions of polygons but has little to no reflection may render faster than two shiny metal surfaces facing each other, even if those metal surfaces only have a few thousand polys making them up.
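A toy cost model for that intuition (the log-scaling assumption and all the numbers are illustrative, not from any real renderer):

```python
import math

def toy_render_cost(width, height, samples_per_pixel, triangles, avg_bounces):
    """Very rough ray-count estimate: with a BVH each ray costs ~log2(triangles) node
    tests, and every extra reflective bounce multiplies the number of rays traced."""
    primary_rays = width * height * samples_per_pixel
    total_rays = primary_rays * avg_bounces
    return total_rays * math.log2(max(triangles, 2))

# Hypothetical scene A: millions of polys, mostly diffuse, so paths die after ~2 bounces.
heavy_but_diffuse = toy_render_cost(1920, 1080, 16, 5_000_000, avg_bounces=2)

# Hypothetical scene B: two low-poly mirrors facing each other, so paths keep bouncing.
light_but_mirrored = toy_render_cost(1920, 1080, 16, 10_000, avg_bounces=12)

print(f"heavy but diffuse : {heavy_but_diffuse:.2e}")
print(f"light but mirrored: {light_but_mirrored:.2e}")
# The low-poly mirror scene comes out a few times more expensive despite ~500x fewer triangles.
```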

Also, just to add more info: blurry reflections and semitransparent stuff will render slower than sharp, mirror-like reflections and fully transparent objects. This is because the blur requires the engine to take more samples of that area. Of course, with the emergence of AI denoising, this may well become a thing of the past.
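As a rough sketch of why the blur costs more (sample counts here are made-up, not from any particular engine): a perfect mirror needs one reflection ray per hit, while a glossy surface has to average a bunch of jittered rays around the mirror direction, and that budget grows with roughness until a denoiser lets you cut it back down:

```python
import random

def reflection_samples(roughness, max_samples=64, denoised=False):
    """Illustrative sample budget: a perfect mirror (roughness 0) needs a single ray,
    blurrier surfaces need more, and denoising lets you get away with fewer."""
    n = 1 + int(roughness * (max_samples - 1))
    return max(1, n // 4) if denoised else n

def glossy_reflection(mirror_dir, roughness, trace):
    """Average several jittered reflection rays around the mirror direction (toy jitter, not GGX)."""
    n = reflection_samples(roughness)
    total = 0.0
    for _ in range(n):
        jittered = tuple(d + random.uniform(-roughness, roughness) for d in mirror_dir)
        total += trace(jittered)   # each sample is one more ray to pay for
    return total / n

for r in (0.0, 0.1, 0.5, 1.0):
    print(f"roughness {r:.1f}: {reflection_samples(r):2d} rays per hit, "
          f"{reflection_samples(r, denoised=True):2d} with denoising")
```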
 
edit: nvm, saw the highlighted bits in the quote. Ah, so raytracing still starts at the camera. That makes off-screen/non-screen-space reflections seem even more crazy.
 

Starting at the light source is what cascaded shadow maps do in deferred renderers. It's a neat technique.

I still think Nvidia is doing some low-level fuckery* with their raytracing, but I haven't looked into it; traditional raytracing and rasterization are just so different from each other.

*not as bad as what AMD pulled with their Crytek demo

@JoseRaulChupacabra I wondered where Mental Ray was in your list, looked it up on Wikipedia and :(
 
Oh no doubt. There's been a huge push to frame 1080p, especially 60 fps, as "low end". Can't sell new vidya cards if ppl are okay with just 1080p.

It's a great time to build a PC, imo. Cheap video cards, cheap ryzens, cheap ram, cheap mobos that actually work, and ever falling ssd prices.
Yep. I got a 2700X, a cheap X470 board and a used mining RX 580 for my new build, all for under $400. It really was a great time to buy in November, and still is now.
 
*not as bad as what AMD pulled with their Crytek demo

What did AMD pull with the Crytek demo?
It may be what you're talking about: I remember seeing a video about a Crytek engine demo that has a "raytraced-ish" approximation using, I think, voxel-based lighting, and it worked on any modern-ish GPU. It seemed like a clever hack to cheaply approximate lighting/reflections in a similar-ish way to raytracing, but you could scale the detail (and performance) based on the resolution of the voxel grid.
 
Nvidia drivers without the telemetry (make sure to review the readme, because you have to uninstall the DCH drivers and that's a pain in the ass)

 
What did AMD pull with the Crytek demo?
It may be what you're talking about: I remember seeing a video about a Crytek engine demo that has a "raytraced-ish" approximation using, I think, voxel-based lighting, and it worked on any modern-ish GPU. It seemed like a clever hack to cheaply approximate lighting/reflections in a similar-ish way to raytracing, but you could scale the detail (and performance) based on the resolution of the voxel grid.

They combined lots of techniques, but what really stuck out to me was that the majority of their raytracing was reflections on planar surfaces like puddles of water. That's super easy to do and was common in shooters in the late 90s on DX6-level hardware (just play the intro level of Unreal). Nvidia impressed by having accurate reflections on non-planar surfaces, like the stormtrooper.
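For reference, the old planar trick really is that simple: mirror the camera across the reflection plane, render the scene again from the mirrored camera into a texture, and slap it on the puddle or floor. A sketch of just the mirroring math (numpy, with illustrative names and numbers):

```python
import numpy as np

def reflect_point(point, plane_point, plane_normal):
    """Householder reflection of a point across a plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

def reflect_direction(direction, plane_normal):
    """Mirror a direction vector (e.g. the camera's forward vector) across the plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return direction - 2.0 * np.dot(direction, n) * n

# Toy example: a puddle on the plane y = 0, camera hovering above and looking into the scene.
plane_point = np.array([0.0, 0.0, 0.0])
plane_normal = np.array([0.0, 1.0, 0.0])
cam_pos = np.array([0.0, 2.0, 5.0])
cam_forward = np.array([0.0, -0.3, -1.0])

mirror_pos = reflect_point(cam_pos, plane_point, plane_normal)      # -> [0, -2, 5]
mirror_forward = reflect_direction(cam_forward, plane_normal)       # -> [0, 0.3, -1]
print(mirror_pos, mirror_forward)  # render the scene from this mirrored camera for the reflection
```

Add a clip plane at the water surface so geometry below it doesn't leak into the reflected render, and that's the late-90s puddle effect; it only works because the surface is flat, which is exactly why curved reflections like the stormtrooper were the impressive part.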
 
One has to wonder, as GPUs become more capable, whether some years from now we'll see Jen-Hsun Huang or Lisa Su holding up a GPU on stage, proclaiming that their chip is better because it does unbiased rendering. Not unlike some years back, when "unbiased" was used as a marketing term.
 
I asked this in the No Stupid Questions thread but I was wondering if I could get a straight answer here.

I have an Nvidia graphics card in my build. It works really well, but some games aren't well optimized for it. REmake 2, for example, is completely broken on Nvidia unless you use older graphics drivers from 2018, which isn't something I want to do for just one game. At first I was planning on doing an AMD build at some point in the future, but then I wondered if I could just buy an AMD card for my current build and switch between the two, so that I can cover my bases and not spend another $1500.

My question is, is it possible/okay to have two GPUs in one build and have them work individually? I'm not looking for Crossfire/SLI, I'm just wondering if it's a good idea to have both in the machine so that I can switch on the fly in case a game isn't optimized for one of them.
 
Can someone explain what makes a good GPU vs a bad one? In terms of cores, RAM, clock speeds, and anything that sets them apart from CPUs.

I sort of understand what they are - parallel multi-core processors that are optimized for simpler operations than a CPU - but I was a console fag growing up and never needed to learn.
 
Gamers Nexus made a video on Intel's attempt to make a GPU card. It's one of their iGPU modules on a card, which according to Intel was made for development houses, although there might be a much more powerful consumer product in the near future, according to Intel.
The average FPS in Destiny 2 on low settings was about 37 frames per second, with input lag perceived to be more than 100ms.
 