Nvidia RTX series

Doc Cassidy

Nvidia will launch its next-generation GeForce RTX 20-series graphics cards on September 20, starting with the RTX 2080 and RTX 2080 Ti, followed by the RTX 2070 in October. These will use the new Turing architecture, which boasts more cores than the previous generation Pascal architecture, along with significant updates that should deliver more performance per core. Also included are new technologies to enable real-time ray tracing in games, and deep learning Tensor cores.

There's a lot to cover, and Nvidia is rightly calling this the most significant generational upgrade to its GPU since the first CUDA cores in 2006. Turing promises better performance than existing GPUs, and has the potential to fundamentally change what we expect from graphics. Here's everything you need to know about the RTX 2080 Ti, RTX 2080, and RTX 2070, the Turing architecture, pricing, specs, and more.

Pricing and release dates for the GeForce RTX series
Nvidia has only announced three GeForce RTX models so far. We don't know when, or even if, lower tier cards will arrive. Most likely they will, but they may not show up until 2019. Here are the launch dates and prices so far:

GeForce RTX 2080 Ti Founders Edition: $1,199, September 20
GeForce RTX 2080 Ti Reference: $999, September 20?
GeForce RTX 2080 Founders Edition: $799, September 20
GeForce RTX 2080 Reference: $699, September 20?
GeForce RTX 2070 Founders Edition: $599, 'October'
GeForce RTX 2070 Reference: $499, 'October'?
It's not all good news for the RTX 20-series, as pricing for all three classes of GPU has increased substantially. Call it a lack of competition (AMD's GPUs already struggle to compete against the 10-series parts), or the cryptocurrency bubble bursting (there are reportedly a lot of 10-series graphics cards left to sell), or just plain greed. The bottom line is that launch prices on the Founders Edition cards are up to 50 percent higher than the outgoing 10-series parts.

Pre-orders are available, and while we don't generally recommend buying expensive hardware before independent reviews have been published, many places offering pre-orders are currently sold out. What's worse is we don't even know if the lower 'reference' prices will be seen at launch, or if they're merely recommendations. Based on past experience, we expect Founders Edition and factory overclocked cards priced similarly to the FE to be the main option for the first month or two.

The RTX 2070 launch date hasn't been firmly set by Nvidia yet, with only a statement of October 2018. Given the likely demand for the higher end 2080 parts, we anticipate late October. Again, prices will probably be higher for the first month or two. Then again, with Black Friday and the holiday shopping season coming up, we might get a few surprises.

GeForce RTX specifications
Nvidia unveiled many core details of the Turing architecture at SIGGRAPH, and followed up by announcing specs for the GeForce RTX graphics cards. After much speculation, we now know what to expect. Mostly.

The number of CUDA cores in each model has increased by 15-20 percent across the line, though clockspeeds have dropped slightly as well. In theoretical TFLOPS (that's trillions of floating-point operations per second), the GeForce RTX cards are 14-19 percent faster than the GTX 10-series.
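As a rough sanity check, the theoretical figure is just two floating-point operations (one fused multiply-add) per CUDA core per clock. Here's a quick sketch of that arithmetic in C++, using the published core counts and reference boost clocks (Founders Edition cards clock a bit higher, so treat the outputs as ballpark numbers):

    #include <cstdio>

    // Theoretical FP32 throughput: 2 FLOPs (a fused multiply-add) per CUDA core per clock.
    double tflops(int cuda_cores, double boost_ghz) {
        return 2.0 * cuda_cores * boost_ghz / 1000.0; // GFLOPS -> TFLOPS
    }

    int main() {
        printf("GTX 1080:    %.1f TFLOPS\n", tflops(2560, 1.733)); // ~8.9
        printf("RTX 2080:    %.1f TFLOPS\n", tflops(2944, 1.710)); // ~10.1 (+14%)
        printf("GTX 1080 Ti: %.1f TFLOPS\n", tflops(3584, 1.582)); // ~11.3
        printf("RTX 2080 Ti: %.1f TFLOPS\n", tflops(4352, 1.545)); // ~13.4 (+19%)
    }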

Nvidia equips all the new models with 14 GT/s GDDR6, improving bandwidth by anywhere from 27 percent (RTX 2080 Ti) to as much as 75 percent (RTX 2070). That's assuming there aren't any other tweaks to the memory subsystem, like the improved compression technologies and tiled rendering in Pascal.
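The bandwidth numbers fall straight out of the transfer rate and bus width (GB/s = GT/s x bus width in bits / 8). A quick check of that math, using the bus widths these cards are expected to ship with:

    #include <cstdio>

    // Memory bandwidth in GB/s = transfer rate (GT/s) * bus width (bits) / 8 bits per byte.
    double bandwidth_gbs(double gtps, int bus_bits) {
        return gtps * bus_bits / 8.0;
    }

    int main() {
        printf("GTX 1070 (8 GT/s GDDR5, 256-bit):      %.0f GB/s\n", bandwidth_gbs(8.0, 256));  // 256
        printf("RTX 2070 (14 GT/s GDDR6, 256-bit):     %.0f GB/s\n", bandwidth_gbs(14.0, 256)); // 448, +75%
        printf("GTX 1080 Ti (11 GT/s GDDR5X, 352-bit): %.0f GB/s\n", bandwidth_gbs(11.0, 352)); // 484
        printf("RTX 2080 Ti (14 GT/s GDDR6, 352-bit):  %.0f GB/s\n", bandwidth_gbs(14.0, 352)); // 616, +27%
    }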

Along with faster cores and memory, the Turing architecture adds Tensor cores for deep learning and RT cores for real-time ray tracing. Both have the potential to dramatically change what we can expect from future games in terms of graphics.

Turing architecture and performance expectations
While we have the numbers for the CUDA cores, GDDR6, Tensor cores, and RT cores, there's a lot more going on with the GeForce RTX and Turing architecture. We've provided a deep dive into the Turing architecture elsewhere, which we'll update with additional details closer to launch, but here's the short summary.

Nvidia has reworked the SMs (streaming multiprocessors) and trimmed things down from 128 CUDA cores per SM to 64 CUDA cores. The Pascal GP100 and Volta GV100 also use 64 CUDA cores per SM, so Nvidia has now standardized on a new ratio of CUDA cores per SM. Each SM now includes eight Tensor cores and an unspecified number of RT cores, plus texturing units (which we assume to be half as many as in Pascal). The SM is the fundamental building block for Turing, and can be replicated as needed.
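To see how that building block maps onto the announced core counts, here's a small illustration; the per-SM CUDA and Tensor core figures are from Nvidia, while the texture unit count is our assumption from above:

    #include <cstdio>

    // Per-SM resources for Turing as described above. The TMU count is an assumption
    // (half of the 8 per 128-core Pascal SM), not a confirmed figure.
    const int CUDA_PER_SM   = 64;
    const int TENSOR_PER_SM = 8;
    const int TMU_PER_SM    = 4;

    int main() {
        int tu102_sms = 72;                 // full TU102
        int rtx2080ti_sms = tu102_sms - 4;  // RTX 2080 Ti ships with 4 SMs disabled
        printf("RTX 2080 Ti: %d CUDA cores, %d Tensor cores, ~%d TMUs\n",
               rtx2080ti_sms * CUDA_PER_SM,     // 68 * 64 = 4352
               rtx2080ti_sms * TENSOR_PER_SM,   // 68 * 8  = 544
               rtx2080ti_sms * TMU_PER_SM);     // assumed
    }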

For traditional games, the CUDA cores are the heart of the Turing architecture. Nvidia has made at least one big change relative to Pascal, with each SM able to simultaneously issue both floating-point (FP) and integer (INT) operations—and likely Tensor and RT operations as well. Nvidia said this makes the new CUDA cores "1.5 times faster" than the previous generation.

That might be marketing, but Nvidia's preview benchmarks suggest an average performance increase of around 50 percent for the RTX 2080 over the GTX 1080. Combined with the increase in CUDA core counts and the higher bandwidth of GDDR6, in GPU-limited benchmarks it's not unreasonable to expect 50-75 percent more performance from the GeForce RTX models compared to the previous generation parts.

All Turing GPUs announced so far will be manufactured using TSMC's 12nm FinFET process. The TU102 used in the RTX 2080 Ti has 18.6 billion transistors and measures 754mm2. That's a huge chip, far larger than the GP102 used in the GTX 1080 Ti (471mm2 and 11.8 billion transistors) and only slightly smaller than the Volta GV100. While the full TU102 has up to 72 SMs and a 384-bit GDDR6 interface, the RTX 2080 Ti disables four SMs and one of the 32-bit GDDR6 channels. That leaves room for a future RTX Titan, naturally.

The TU104 trims the SM counts and memory interface by a third, giving a maximum of 48 SMs and a 256-bit interface. The RTX 2080 disables two SMs while the RTX 2070 disables 12 SMs, but both keep the full 256-bit GDDR6 configuration. Nvidia has not revealed die size or transistor count for the TU104, but it should fall in the 500-550 mm2 range, with around 12-13 billion transistors. Again, that's a substantially larger chip than the GP104 used in the GTX 1080/1070.

TSMC's 12nm process is a refinement of the existing 16nm process, perhaps more marketing than a true die shrink. Optimizations to the process technology help improve clockspeeds, chip density, and power use—the holy trinity of faster, smaller, and cooler running chips. TSMC's 12nm FinFET process is also mature at this point, with good yields, allowing Nvidia to create such large GPU designs.

Looking forward, TSMC is readying its 7nm process for full production, and we should see it in a limited fashion by the end of the year (eg, for AMD's Vega 7nm professional GPUs). Don't be surprised if late 2019 sees the introduction of a die shrink of Turing, bringing sizes down to more manageable levels.

What the RT cores and ray-tracing mean for games
Why is ray-tracing such a big deal, and what does it mean for games? We wrote this primer on ray-tracing when Microsoft unveiled its DirectX Ray Tracing (DXR) API. DXR hasn't reached the final public revision yet, but that's expected to happen around the time GeForce RTX cards begin shipping. Nvidia clearly had a lot of input on DXR, and while initial demonstrations like the Star Wars demo used a DGX Station with four GV100 GPUs to achieve 'cinematic' 24fps results, Turing is clearly what Nvidia was aiming for.
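On the API side, DXR support is exposed through DirectX 12 as a device feature that an engine queries before creating any ray tracing state. A minimal sketch of that check (error handling trimmed, written against the DXR support slated to ship with the Redstone 5 SDK):

    #include <windows.h>
    #include <d3d12.h>

    // Returns true if the adapter and driver expose hardware-accelerated DXR.
    bool SupportsRaytracing(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &options5, sizeof(options5))))
            return false;
        // Anything at or above TIER_1_0 means the acceleration structure and
        // DispatchRays APIs are available on this device.
        return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }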

Not only can a single Turing GPU run the same demonstration as the DGX-Station—which only costs $60,000 if you're wondering—but it can do so at 60fps. That's because the RT cores in Turing are roughly ten times faster for ray tracing than using compute shaders to accomplish the same work. However, doing full ray tracing for real-time games is still a bit impractical.

Nvidia instead suggests using the RT cores for hybrid rendering: traditional rasterization is used for geometry and textures, while ray tracing provides lighting and shadows, reflections, ambient occlusion, and other effects. At least 11 games have announced support for Nvidia's RTX ray tracing. Here's the current list:

Assetto Corsa Competizione from Kunos Simulazioni/505 Games
Atomic Heart from Mundfish
Battlefield V from EA/DICE
Control from Remedy Entertainment/505 Games
Enlisted from Gaijin Entertainment/Darkflow Software
Justice from NetEase
JX3 from Kingsoft
MechWarrior 5: Mercenaries from Piranha Games
Metro Exodus from 4A Games
ProjectDH from Nexon’s devCAT Studio
Shadow of the Tomb Raider from Square Enix
Several of these games should release in 2018, while others are coming in 2019. Shadow of the Tomb Raider will apparently launch without RTX effects enabled, with a post-launch patch adding the feature. Given the September 14 release date, one week before the official launch of the GeForce RTX cards, plus waiting for Windows 10 Redstone 5 and the full DXR API (currently prepping for arrival in September or October), that shouldn't be much of a concern. Getting games that support brand new hardware features within weeks of the hardware launch is still much faster than the usual rate of adoption.

How machine learning and the Tensor cores affect graphics
If you're thinking the Tensor cores are pointless when it comes to 'real' graphics work, you're wrong. Deep learning and AI are revolutionizing many industries, and games are another potential market. But how can the Tensor cores help with graphics?

Nvidia has specifically talked about DLSS, Deep Learning Super Sampling, a new AI-based anti-aliasing algorithm that can offer improved image quality compared to other AA algorithms like TAA (Temporal Anti-Aliasing). The idea is to train a neural network with high quality AA images as the 'ground truth' model—the desired result. Once trained, DLSS can provide real-time enhancements like the removal of jaggies, plus it can also combine resolution upscaling with anti-aliasing. Nvidia hasn't fully disclosed how DLSS is being implemented, but upscaling 1080p to 4k seems likely.
New Nvidia cards are coming out soon and ray tracing is the big new thing. They're also really fucking expensive holy shit.
GeForce RTX 2080 Ti Founders Edition: $1,199
GeForce RTX 2080 Ti Reference: $999
GeForce RTX 2080 Founders Edition: $799
GeForce RTX 2080 Reference: $699
GeForce RTX 2070 Founders Edition: $599
GeForce RTX 2070 Reference: $499
 
I'm excited for the new ray-tracing tech, not only to play games with but because I'm a DirectX programmer too.

But god dammit is it just me or did computer prices dip in the 2000s and now they're rising back up to costing as much as a new car again? By 2050 a new PC will cost a teacher's yearly salary at this rate.
 
If you're just gaming you're gonna want to wait for actual benchmarks before deciding to upgrade.

Ray tracing aside, if you're into deep learning with tensor cores this might be an attractive option considering nvidia's volta series cards are much more expensive.

I recently bought a 10-series card because of a massive price drop and I want to wait and see if I made a bad decision or not.
 
I'm excited for the new ray-tracing tech, not only to play games with but because I'm a DirectX programmer too.

But god dammit is it just me or did computer prices dip in the 2000s and now they're rising back up to costing as much as a new car again? By 2050 a new PC will cost a teacher's yearly salary at this rate.
All due to lack of competition or collusion. RAM makers colluded to raise DRAM prices, HDD makers kept prices high after the flood to take advantage of abnormally high demand and make $$$ off of old shit. Nvidia is jacking up GPU prices because AMD can't do shit.

It sucks, because now we FINALLY have CPU competition, but everything else costs an arm and a leg. I'm still sitting on my old 480/3570k because I can't fathom spending so much cash on a PC upgrade.
 
I'm excited for the new ray-tracing tech, not only to play games with but because I'm a DirectX programmer too.

But god dammit is it just me or did computer prices dip in the 2000s and now they're rising back up to costing as much as a new car again? By 2050 a new PC will cost a teacher's yearly salary at this rate.
They're not as bad as they were in the 90s when UNIX workstations were expensive proprietary computers that could cost as much as a nice luxury car or even a house, but stagnation and price fixing have become a thing. DRAM price fixing happened again, with prices only subsiding when governments started investigating; GPU prices got jacked up because of the mining craze, with supply being redirected to computer companies; and SSD prices only recently began to freefall. HDD prices thankfully are dipping too. Intel sat around and did little with CPUs because they were so sure they had cornered the market that they were blindsided hard by Ryzen.

$700 can get you a well performing laptop brand new and if you look on eBay you can find old business laptops or workstations for little.

It's still not as bad as smartphone prices, that's for sure.
 
Isn't ray-tracing how Wolfenstein did it?

I can't remember what the docs say specifically, but ray tracing has been a known technique for a long time; it's just that we never had the right processor designs to support it outside the CPU (so we couldn't do anything impressive with it because it was too slow). These new designs natively accelerate ray tracing and take full advantage of the technique.
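To spell out what the "known technique" actually is: you fire a ray per pixel, test it against the scene geometry, and shade the closest hit. The math for a single ray/sphere test has been textbook stuff for decades; what kills CPUs is doing millions of these per frame, plus bounces. Rough C++ sketch of the classic test:

    #include <cmath>

    struct Vec3 { double x, y, z; };
    double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Classic ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    // Returns the nearest positive hit distance, or -1 if the ray misses.
    double hit_sphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
    {
        Vec3 oc = sub(origin, center);
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return -1.0;                    // ray misses the sphere
        double t = (-b - std::sqrt(disc)) / (2.0 * a);  // nearer of the two roots
        return t > 0.0 ? t : -1.0;
    }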
 
Given the performance it's not that bad and it's a cheaper way to use raytracing when you can't afford a Quadro. I'm getting two myself.

I probably won't upgrade again until Intel's cards come out in 2020, and only if they still use the old software on them or something similar.


I'm excited for the new ray-tracing tech, not only to play games with but because I'm a DirectX programmer too.

But god dammit is it just me or did computer prices dip in the 2000s and now they're rising back up to costing as much as a new car again? By 2050 a new PC will cost a teacher's yearly salary at this rate.
You don't even want to know how much my workstation is lol
 
Any word on when benchmarks/gameplay will be released? Excited to see how games are going to utilize the new tech.

On a side note: Nvidia not releasing benchmarks has gotten me thinking that maybe these new cards won't be able to deliver high fps (60+ fps) with ray tracing enabled because of how new the technology is/how difficult real-time ray tracing actually is. If that's the case then we might get to see a lot of salt from people who pre-ordered these cards.
 
Any word on when benchmarks/gameplay will be released? Excited to see how games are going to utilize the new tech.

On a side note: Nvidia not releasing benchmarks has gotten me thinking that maybe these new cards won't be able to deliver high fps (60+ fps) with ray tracing enabled because of how new the technology is/how difficult real-time ray tracing actually is. If that's the case then we might get to see a lot of salt from people who pre-ordered these cards.
Should be just before release. There are NDAs on the cards. That's one reason why you'll see BS articles telling you that the cards are the same or worse than the 10 series. Check out Videocardz though; they'll be the first ones with benchmarks.

Also they have review embargoes too.

I should mention there's a reason why they're expensive. Check out the stats:
GTX 1080 Ti vs RTX 2080 Ti
3584 CUDA cores vs 4352 CUDA cores
11GB of GDDR5X RAM vs 11GB of GDDR6
4.84GB/s of bandwidth vs 616GB/s of bandwidth

https://videocardz.com/77369/nvidia-geforce-rtx-2080-ti-features-4352-cuda-cores
 
I smell a conspiracy with the (((part makers))) to increase the computer prices, so we have to buy terribly built Dell/HP/Acer/etc built PCs.

Still doe, imagine the FPS on that thing. Can't imagine any games in years that will need that kind of power.
 
I smell a conspiracy with the (((part makers))) to increase the computer prices, so we have to buy terribly built Dell/HP/Acer/etc built PCs.

Still doe, imagine the FPS on that thing. Can't imagine any games in years that will need that kind of power.

Should be noted this isn't going to do much for older games; the ray tracing is up to the game to do, it's just that now the game can tell the card to do it. It can create amazing lifelike graphics that previous cards can't, but older games are going to be stuck with their old algorithms and graphics frameworks. The reason DirectX 12 and Vulkan are so powerful is that they're even lower-level and do less for the programmer, so the developer can utilize the card's raw hardware even more than before. The downside to that is you need to be extremely skilled at that kind of programming. I can only imagine how many more shitty laggy games EA is going to shit out because they won't hire anyone who actually knows how to do DirectX 12/ray tracing.
 
Damn, 4 gigs to 616 gigs is a hell of a jump. Impressive.
I meant 484. The architecture inside is similar to a Quadro like last time, just with less RAM and a few things removed to make getting a Quadro a better deal for content creators (like game developers), while Titans and Ti's have everything needed to play games and not do game development.

I'm hoping that the GDDR6 will allow for better OC than GDDR5X. New PCIe should be coming out soon too, which should increase the performance of these as well. No more limited amount of power going to your card(s). Nvidia has also supposedly fixed their problems with shortages, so hopefully no need to worry about that.
Should be noted this isn't going to do much for older games; the ray tracing is up to the game to do, it's just that now the game can tell the card to do it. It can create amazing lifelike graphics that previous cards can't, but older games are going to be stuck with their old algorithms and graphics frameworks. The reason DirectX 12 and Vulkan are so powerful is that they're even lower-level and do less for the programmer, so the developer can utilize the card's raw hardware even more than before. The downside to that is you need to be extremely skilled at that kind of programming. I can only imagine how many more shitty laggy games EA is going to shit out because they won't hire anyone who actually knows how to do DirectX 12/ray tracing.
Well, before these cards companies like EA would use Enlighten; to see how that handled lighting, just look at games like Battlefield 3 and 4, Battlefront (I think), and Dying Light, which also used it I think. This should decrease some issues with lighting in games.

Also, if they can't hire someone to use Blender they won't have a need to use ray tracing. People have been using it for a while.
 
New PCIe should be coming out soon too, which should increase the performance of these as well. No more limited amount of power going to your card(s).
I thought the current gen3 PCI-e is extremely hard to saturate even if the bandwidth is halved by a companion card or incorrectly placed NVME card. I also thought the power problem was mostly an AIB partner problem where Gigabyte and a few others cut a lot of corners on the buck converter designs.
 
Current cards can only draw 375W of power max: 75W comes from the PCIe slot and 150W from each of the two 8-pin connectors. New PCIe should be getting rid of that limitation. They're also focusing on low power consumption. The less power a video card needs, the higher you can overclock it. Currently PCIe 4.0 will still only allow 75W from the slot; PCIe 5 is expected to allow more and is coming out at the same time.
 