GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

I bought one of the MeLes a couple of weeks ago to use as a HTPC; it's fine, but your point about not having expandable storage is correct.
They have a wider model, the MeLE Quieter 4C, that does fit an M.2 2280. What's puzzling to me is that the PCG02 Pro looks long enough to take a 2280 drive, but it doesn't. If you don't care about upgrading the storage, it's all good, but I was surprised that Minisforum essentially copied it with this flaw.

MeLE PCG02 Pro is 0.1781 liters. I have a ~10 liter system doing TV duty with a DVD drive in it, but if I wanted to go smaller I'd go down to the 1-2 liter territory occupied by OEM SFF PCs.
 
It's only happened twice, but it seems like half the time a new driver comes out for my GPU (RTX 3080), it takes the opportunity to fuck up all my files and I have to nuke the drivers and reinstall them all over again. From now on I'm doing custom installations only and avoiding GeForce Experience like the plague. Weird thing is, V-sync actually works as intended now, so I guess my dumbass had some bad files stowed away on my PC.
 
Since there was so much caterwauling over the 4060's bandwidth, and my 6700 XT appears to be entirely incapable of actually using 12 GB, I thought it would be useful to look at the bandwidth-to-RAM ratio. It shows how many fps you could achieve if (a) the RAM is full and (b) everything in memory is required exactly once for every frame. Obviously, this limiting condition never happens in real games, but it's still a decent indicator of how bandwidth-starved a card might be. These are just eight random cards I looked at that were positioned against each other.

Card       BW/RAM ratio (GB/s per GB)
4060 Ti    36
7600 XT    18
3070       56
6700 XT    32
4090       43
7900 XTX   40
2070       56
5700 XT    56
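
If anyone wants to sanity-check the table, here's the whole calculation as a quick Python sketch. The bandwidth and VRAM figures are my assumptions from public spec sheets, so double-check them; my 4090 number comes out to 42, so the 43 above presumably rounded from a slightly different bandwidth figure.

[code]
# fps ceiling if a full VRAM pool had to be read exactly once per frame:
# ratio = bandwidth (GB/s) / VRAM (GB). Specs below are assumptions taken
# from public spec sheets, not verified measurements.
cards = {
    # name: (memory bandwidth in GB/s, VRAM in GB)
    "4060 Ti":  (288, 8),
    "7600 XT":  (288, 16),
    "3070":     (448, 8),
    "6700 XT":  (384, 12),
    "4090":     (1008, 24),
    "7900 XTX": (960, 24),
    "2070":     (448, 8),
    "5700 XT":  (448, 8),
}

for name, (bw_gbs, vram_gb) in cards.items():
    print(f"{name:9} {bw_gbs / vram_gb:5.1f} fps ceiling")
[/code]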

What you see is that, with the exception of the 2070 and 5700 XT, which had relatively fast memory, AMD cards tend to be bandwidth-starved vs their NVIDIA counterparts. I have it on good authority from a friend who tests this sort of thing that across their entire product line, AMD's memory controllers underperform, so everything they make typically underperforms its on-paper specs by about 15%-20%. So while a lot of AMD cards have more memory, they can't use it effectively. The 7600 XT is maybe the most egregious example of this. It's a 16 GB card with less bandwidth than a previous-gen card with 12 GB that already seemed a little starved.
 
And yet, you can still find some performance uplifts from the 7600 to the 7600 XT as a result of the doubled VRAM (it's also clocked about 4% higher). Probably not enough to justify buying it over other options, including the 6700 XT which was around the same price before stock disappeared, but it will age better than the 7600. Also, texture degradation doesn't show up in benchmark charts:

The Last of Us Part I, on the other hand, does suffer a performance hit when running out of VRAM. At 1080p, the 7600 XT is 20% faster than the 7600 and a massive 58% faster when comparing 1% lows. Therefore, the increased VRAM capacity is of significant benefit here, even at 1080p.
Halo Infinite is an interesting title to include in this review because it's a great example of the difficulty in testing VRAM capacity. For instance, using the ultra quality preset requires more than 8 GB of VRAM. However, the 7600 XT is just 2% faster than the 7600. In this game, insufficient memory doesn't impact performance, but results in missing textures, a difference we've demonstrated between 8 GB and 16 GB GPUs in the past.
Now, it's time to review the 12-game average data, starting with the 1080p results. Here, the Radeon 7600 XT was, on average, 8% faster than the standard RX 7600 and 6% faster than the RTX 4060. This is roughly what we would expect to see across a range of games.
At 1440p, the 7600 XT was just 5% faster than the 7600 when comparing average frame rates, but a much more substantial 19% margin can be seen when comparing 1% lows. This is an impressive set of results for the 7600 XT, though it was also 12% slower than the 6700 XT.

Your calculation ignores Infinity Cache, although Nvidia has their own version using larger L2 caches so it might be a wash.

Asianometry's comment sections seem to gather a lot of people from the industries he talks about, HDDs in this case:
 
And yet, you can still find some performance uplifts from the 7600 to the 7600 XT as a result of the doubled VRAM

Because x8 PCIe 4.0 is only ~16 GB/s in each direction. There's a massive penalty if you have to move any data at all from main RAM to VRAM. It's easy to calculate, too: at 60 fps, you can move a maximum of 273 MB per frame before dropping frames. Real-world effects, per the article, depend on what the software is programmed to do if you run out of VRAM. At least it rarely throws an exception these days.
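
To put rough numbers on the per-frame budget (a back-of-envelope sketch; the 16 GB/s figure is the nominal one-way rate for x8 PCIe 4.0, and 128b/130b encoding shaves off a bit more in practice):

[code]
# MB of RAM-to-VRAM spillover traffic you can afford per frame before
# the transfer alone exceeds the frame time. Nominal numbers, not measured.
BUS_GBS = 16  # x8 PCIe 4.0, one direction, nominal

for fps in (30, 60, 120):
    budget_mb = BUS_GBS * 1024 / fps
    print(f"{fps:3} fps: {budget_mb:5.0f} MB per frame")
[/code]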

Your calculation ignores Infinity Cache,

It's a calculation based on accessing each byte of RAM only once, so cache doesn't come into play.

You seem to be assuming I'm saying something I'm not. All I'm saying is that what you can do with data is a function of your processor's ability to ingest it. If you have X GB of data and Y GB/s of bandwidth, then you can make at most Y/X full passes over that data per second. That applies to AI inferencing, physics simulations, database accesses, and everything else. The corollary is that if you're sustaining more than Y/X operations per second, each operation is touching less than X GB.
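
AI inferencing is the cleanest example: a memory-bound model that has to stream all of its weights once per token can't generate faster than bandwidth divided by model size. A tiny sketch of that bound, with made-up illustrative numbers:

[code]
# Upper bound on full passes over a dataset per second: Y GB/s / X GB.
# Example values below are hypothetical, purely for illustration.
def max_passes_per_sec(data_gb: float, bandwidth_gbs: float) -> float:
    return bandwidth_gbs / data_gb

weights_gb = 14.0    # e.g. a 7B-parameter model at fp16 (assumed size)
bw_gbs = 448.0       # e.g. a GPU with 448 GB/s of memory bandwidth

print(f"<= {max_passes_per_sec(weights_gb, bw_gbs):.0f} tokens/sec, best case")
[/code]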
 
I think chopping the 4060 off at the knees on bandwidth was the insult added to the injury of it already being quite expensive for what it was.
 
I think chopping the 4060 off at the knees on bandwidth was the insult added to the injury of it already being quite expensive for what it was.
But dude, you can just like, turn niggercattle upscaling on and it makes everything all good. Sure, prices are going up and you're getting less hardware on a card, but just turn on these settings that may or may not work with the game you want to play, or might even work like shit on a game that does use it.

Think about the investors. They need more profit margins! Less is more only applies to the consumer.
 
The RTX 4060 is a great card for power efficiency, actually the way to go for a lot of mid-range builds. Admitting this as an old-school, stubborn Radeon fanboy.
 

Looks like a good idea to avoid ASUS, unless you like great service where you're charged $3,000 for a repair on a $3,000 card.
Well, guess I'm going with that MSI 4070 Super then, damn. Fuck you ASUS, not paying for that shit.
 
Nobody cares. 4060 is gimped shit with a premium price tag and was panned during reviews for a reason.
Yes, nobody on Reddit cares about actual data. They just care about on-paper specs and benchmarks where a primary feature and selling point is disabled entirely. But this is a thread for enthusiasts, not retards who barely know how a computer works.

Isn’t the 4060 actually selling quite well?
 
Yes, nobody on Reddit cares about actual data. They just care about on-paper specs and benchmarks where a primary feature and selling point is disabled entirely. But this is a thread for enthusiasts, not retards who barely know how a computer works.

Isn’t the 4060 actually selling quite well?
Honestly? Wouldn't know. I never kept tabs on GPU sales because those same retards will buy whatever on credit and then rationalize it.
 
They just care about on-paper specs and benchmarks where a primary feature and selling point is disabled entirely
DLSS is just an improved version of running at a lower resolution and stretching it onto your screen. If you have to interpolate up from a lower resolution to hit a decent 1080p framerate, that doesn't count as actual 1080p.
 
DLSS is just an improved version of running at a lower resolution and stretching it onto your screen. If you have to interpolate up from a lower resolution to hit a decent 1080p framerate, that doesn't count as actual 1080p.
Which is fine. Personally I have nothing against the idea of upscaling. I just don't like it being considered a substitute for actual power under the hood because it's not always going to work well, or at all.

And then you're expected to pay as if it's the real thing.
 
DLSS is just an improved version of running at a lower resolution and stretching it onto your screen. If you have to interpolate up from a lower resolution to hit a decent 1080p framerate, that doesn't count as actual 1080p.
Traditional upsampling uses a discrete interpolation method. AI upsampling relies on a database to inject information before applying discrete and stochastic methods. Traditional upsampling does not add information to the signal, but AI upsampling does (whether that information is relevant or useful is a matter of debate). Consider the egregious example of Samsung essentially copy-pasting an image of the moon onto photographs. That rendered image certainly has more pixels of information than the original (in this case at the expense of eliminating nearly all of the original information). Whether that’s what the user wants is a matter of opinion. Regardless, valuing DLSS is not as simple as considering it a typical upscaler.
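
To make the "no information added" point concrete, here's a minimal numpy sketch of the traditional kind, plain bilinear interpolation, where every output pixel is just a weighted average of input pixels. (An illustration of classic upsampling, not of how DLSS works internally.)

[code]
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a 2D image by `factor` with bilinear interpolation.
    Every output pixel is a convex combination of existing pixels,
    so the result contains no information that wasn't in the input."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)   # fractional source rows
    xs = np.linspace(0, w - 1, w * factor)   # fractional source cols
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_upscale(img, 2))  # 8x8: smoother, but nothing new
[/code]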

I should add that in the context of signal processing, video game rendering has much less bandwidth than real-life video, raytracing be damned. Thus you'll find I'm much more optimistic about AI adding value to rendered or synthetic media like video games, artwork, electronic music, and even poetry than to video, natural audio, or prose.
 