GPUs & CPUs & Enthusiast hardware: Questions, Discussion and fanboy slap-fights - Nvidia & AMD & Intel - Separate but Equal. Intel rides in the back of the bus.

What's the price compared to the last chips? If their performance is the same using less power, seems like a nice deal to me.
They all debuted at about $50 less than the original MSRP of their last-gen counterparts. However, last-gen Ryzens have gotten price cuts. Right now on Amazon, the 9600X and 9700X are about $80 more than what you can buy a 7600X and 7700X for, respectively.

On the other hand, the 9600X is also worth talking about here because reviewers didn't focus on it, but it had pretty substantial gains over the 7600X. I feel like once the 9600X comes down in price, it's going to absolutely dominate the low-end budget PC market.
 
There's too much weird stuff surrounding the launch to lay the blame solely on dumb reviewers or gaymer fanboys. You can find unflattering comparisons to the previous Ryzen 7 7700 that eliminate the alleged perf/efficiency gains. Could be Windows problems, BIOS, memory, whatever. There's always some variation among reviews, and that's more likely to cancel out smaller gains. But AMD claimed a substantial +16% average IPC uplift, more than Zen 2 or Zen 4, and if people aren't feeling that, something is wrong.
Eh, there's something a bit screwy somewhere and I alluded to that in my post. But I maintain the lion's share of the issues are gamers not understanding they're not the centre of the Universe. Gamers Nexus were charitable enough to give it a "meh": impressive, but they basically didn't care. That kind of sums things up, imo. I care. Data centres care. Laptop manufacturers care (about the mobile version). Gamers don't. And that +16% does exist in some areas, but for the vast majority of PC gamers the GPU is the limiting factor. If your main game is Dwarf Fortress then sure. But for the mainstream, it's all about details and FPS and that's on the GPU. The CPU hasn't been the bottleneck there for a while. I predict when the 9900X / 9950X and the X3D versions appear, there'll be a bunch of "now this is the real launch!" sort of reviews.

But I think it's pretty cool. I just wish motherboards were a bit cheaper.
 
How effectively do AMD GPUs utilize 12/16 GB of VRAM compared to Nvidia?
This is an extremely complicated question and you'll get different answers in this thread. The simple answer is that extra never hurts but effectively utilizing that much VRAM usually requires pushing resolutions that the lower-tier AMD cards can't do.

Is it better than Nvidia still trying to charge a premium for an 8GB card in 2024? Yeah, probably.

Is it going to make cards like the 7600 XT significantly more future-proof than the 4060? YouTube people think so, but I highly doubt it.
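To put rough numbers on why VRAM use climbs with resolution and texture settings, here's a back-of-the-envelope sketch. The render-target count, bytes per pixel, and texture sizes below are illustrative assumptions, not measurements from any real game:

```cpp
// Back-of-the-envelope VRAM arithmetic: render-target cost scales with
// resolution, texture cost scales with the texture-quality setting.
// All figures here are illustrative assumptions, not measured data.
#include <cstdio>

int main() {
    const double mib = 1024.0 * 1024.0;

    // Assume ~6 full-resolution render targets at ~8 bytes/pixel total
    // (colour + depth + a few G-buffer layers).
    const double bytes_per_pixel = 8.0;
    const int    num_targets     = 6;
    const char*  names[]   = {"1080p", "1440p", "4K"};
    const int    widths[]  = {1920, 2560, 3840};
    const int    heights[] = {1080, 1440, 2160};
    for (int i = 0; i < 3; ++i) {
        double rt_mib = double(widths[i]) * heights[i] * bytes_per_pixel
                        * num_targets / mib;
        std::printf("%-6s render targets: ~%4.0f MiB\n", names[i], rt_mib);
    }

    // One uncompressed 4096x4096 RGBA8 texture with a full mip chain is
    // ~85 MiB; BC7 block compression cuts that by roughly 4x.
    double tex_mib = 4096.0 * 4096.0 * 4.0 * (4.0 / 3.0) / mib;
    std::printf("4096x4096 RGBA8 texture + mips: ~%.0f MiB (~%.0f MiB as BC7)\n",
                tex_mib, tex_mib / 4.0);
    return 0;
}
```

The framebuffer side barely moves the needle at 1080p; it's big texture pools at high settings and resolutions that actually fill a 12/16 GB card, which is the point above about lower-tier cards rarely getting there.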
 
I've had people legit say the 9600X is a price hike because "it's 65W TDP, it's not an X part"

Faggots seem to miss that the 5600X was also lowered to 65W, but they weren't screeching about that.
 
Faggots seem to miss that the 5600X was also lowered to 65W, but they weren't screeching about that.
A supposed difference is that you could OC the 5600X significantly, but it did very little for performance outside of a few benchmarks. I guess the 9600X/9700X are different.
 
A supposed difference is that you could OC the 5600X significantly, but it did very little for performance outside of a few benchmarks. I guess the 9600X/9700X are different.
I have not seen any 9600X OC tests yet, but the 9700X has a lot of room under the hood. You can pretty much double the power draw.
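For a rough sense of the headroom: AMD's stock socket power limit (PPT) is normally about 1.35x the rated TDP, so a 65W part actually ships at roughly 88W of package power, and raising PPT via PBO is where the extra draw comes from. Quick sketch of that arithmetic; the 1.35x factor is the usual Ryzen desktop relationship, not something specific to the 9700X:

```cpp
// AMD's stock socket power limit (PPT) is typically ~1.35x the rated TDP;
// raising PPT via PBO is where the overclocking headroom comes from.
// The TDP tiers below are the usual Ryzen desktop classes, shown for illustration.
#include <cstdio>

int main() {
    const double tdps[] = {65.0, 105.0, 170.0};
    for (double tdp : tdps) {
        std::printf("%3.0f W TDP -> ~%3.0f W stock PPT\n", tdp, tdp * 1.35);
    }
    return 0;
}
```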
 
question, if you have a Linux setup with two 1080p monitors and you like playing good games (not AAA slop that runs at 5fps on a 4090 and has a microtransaction every five minutes) and aren't too attached to all the special effects but would like to avoid highly visible limitations, what's a good target for CPU and GPU?
 
question, if you have a Linux setup with two 1080p monitors and you like playing good games (not AAA slop that runs at 5fps on a 4090 and has a microtransaction every five minutes) and aren't too attached to all the special effects but would like to avoid highly visible limitations, what's a good target for CPU and GPU?
What's your budget?

Really, almost anything will drive 1080p nowadays.

What fps goal are you looking for?
 
question, if you have a Linux setup with two 1080p monitors and you like playing good games (not AAA slop that runs at 5fps on a 4090 and has a microtransaction every five minutes) and aren't too attached to all the special effects but would like to avoid highly visible limitations, what's a good target for CPU and GPU?
5700X3D and a 6700 XT. Even these parts are probably way overkill for what you want to do, but they're about the best bang for the buck while still giving you headroom to slap in a better GPU down the line if you want more fancy ray-tracing features. AM4 is a dead-end platform, but building on it can save you like $100 and it's not like the last AM4 CPUs are going to be obsolete anytime soon.
 
Chips and Cheese: AMD’s Strix Point: Zen 5 Hits Mobile (archive)
AMD takes the opposite approach. Zen 5 seems to be designed with a strong focus on SMT. Certain parts of the core like the renamer and decoders need two threads active to achieve their full potential. AMD still emphasizes single threaded performance of course, but they’re chasing multithreaded performance with a combination of density optimized physical implementations and SMT, rather than different core architectures. That strategy does come with some advantages, like keeping AVX-512 enabled across the company’s CPU lineup. Maybe AMD is wise to twist the knife while Intel inexplicably refuses to add AVX-512 support to its E-Core line.
An article for the autisms going over the Zen 5/5c CPU cores of Strix Point, not the iGPU or NPU yet. Note the rather high cross-CCX latency:
Cache coherency operations like lock cmpxchg see higher than expected latency when crossing cluster boundaries. This core to core latency measurement usually returns numbers under 100 ns for desktop CPUs, even when crossing die boundaries. The Ryzen AI 9 HX 370 uses a monolithic setup, so higher latency here is a bit surprising.
[Attached image: strix_c2c.png, Strix Point core-to-core latency results]
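For anyone curious how those core-to-core numbers are usually generated: the standard technique is two threads pinned to different cores bouncing a shared cache line back and forth with an atomic compare-exchange (which compiles down to the lock cmpxchg the article mentions on x86). A minimal Linux sketch is below; this is not Chips and Cheese's actual harness, the core IDs and iteration count are arbitrary, and which core sits in which cluster depends on the chip:

```cpp
// Minimal core-to-core latency sketch (Linux, build with: g++ -O2 -pthread).
// Two threads pinned to different cores bounce a counter through one shared
// atomic; each increment forces a cache-line transfer from the other core,
// so total time / iterations approximates the per-hop latency.
// Core IDs and iteration count are illustrative, not taken from the article.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <thread>

static std::atomic<uint64_t> flag{0};
constexpr uint64_t kIters = 1000000;

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

// Each thread owns one parity: it waits until the counter reaches a value of
// its parity, then bumps it, handing the cache line back to the other thread.
static void bouncer(int core, uint64_t parity) {
    pin_to_core(core);
    for (uint64_t i = parity; i < kIters; i += 2) {
        uint64_t expected = i;
        while (!flag.compare_exchange_weak(expected, i + 1,
                                           std::memory_order_acq_rel))
            expected = i;  // CAS failed: the counter isn't ours yet, retry
    }
}

int main() {
    auto start = std::chrono::steady_clock::now();
    std::thread a(bouncer, 0, 0);  // first core
    std::thread b(bouncer, 8, 1);  // pick a core in the other CCX/cluster
    a.join();
    b.join();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("~%.1f ns per hop\n", double(ns) / double(kIters));
    return 0;
}
```

Real measurement tools spin on a plain load before attempting the exchange and repeat this over every core pair to build the full matrix; the stripped-down version above is just to show where numbers in the 100+ ns range come from when the line has to cross a cluster boundary.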
 
More on the Mac side of things, but rumours are increasing about a redesigned Mac mini, notable because it would be the first significant change to the body since 2010.

It's an interesting thing to see, because Mac minis are also used in a few pretty decent-sized data centres in racks of 100 to 1000, and although they wouldn't all be replaced immediately, having to redesign a rack to hold a smaller model...

Likely to be an M4 in it, and there are a few new model numbers that have appeared recently, probably ahead of an October event.
 
More on the Mac side of things, but rumours are increasing about a redesigned Mac mini, notable because it would be the first significant change to the body since 2010.
I don't have any skin in the game, but I have thought that if I buy Apple Silicon, it would be a $500-600 Mac mini. Making it smaller and presumably giving it fewer ports and less functionality makes it less attractive to me. I think many small systems would benefit from going a little larger. The dumb RPi form factor should die in favor of Pico-ITX, and mini-ITX is a good choice.

Also, Apple is allergic to RAM. They should make 12 GB the minimum instead of 8 GB.
 
I don't have any skin in the game, but I have thought that if I buy Apple Silicon, it would be a $500-600 Mac mini. Making it smaller and presumably giving it fewer ports and less functionality makes it less attractive to me. I think many small systems would benefit from going a little larger. The dumb RPi form factor should die in favor of Pico-ITX, and mini-ITX is a good choice.

Also, Apple is allergic to RAM. They should make 12 GB the minimum instead of 8 GB.
I don't know why they don't rip the band-aid off and go for 32 gigs of RAM. I've built one rig with less than 32, and I've upgraded my laptop to its max of 16 gigs of DDR4. I've found both to be pushing the edge in terms of usability. 32 gigs is a happy medium if you aren't going for 64 GB (like I did for my personal rig). There is no such thing as too much RAM.
 
Also, Apple is allergic to RAM. They should make 12 GB the minimum instead of 8 GB.
I agree, but Apple computers with low RAM generally hold up better than ones running Windows. macOS is really good at picking things to swap out, and the SSDs are fast enough that the performance hit of switching back to a swapped-out application is usually hidden behind the desktop-switching animation. An 8GB Mac mini is perfectly usable for normal home/media consumption/office tasks, while an 8GB Dell Optiplex Micro (which is still like three times the volume of a Mac mini) can barely open three browser tabs simultaneously.
 
Also, Apple is allergic to RAM. They should make 12 GB the minimum instead of 8 GB.

I don't know why they don't rip the band-aid off and go for 32 gigs of RAM.

The cost of fabricating a chip grows like the square of its area. I can't say how things work at Apple, but there could be a pretty strict limit on die space & transistor budget when designing silicon.
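A quick back-of-the-envelope shows where that scaling comes from: with a simple Poisson yield model, a bigger die means both fewer candidates per wafer and a lower yield per candidate, so the cost of a good die grows much faster than its area. Every number below (wafer cost, defect density, die sizes) is a made-up illustrative value, not anything specific to Apple or any foundry:

```cpp
// Toy die-cost model: cost_per_good_die = wafer_cost / (dies_per_wafer * yield)
// with a simple Poisson yield, yield = exp(-defect_density * die_area).
// Every constant here is an illustrative assumption, not real foundry data.
#include <cmath>
#include <cstdio>

int main() {
    const double wafer_cost      = 15000.0;  // assumed $ per 300 mm wafer
    const double wafer_area      = 70686.0;  // mm^2 of a 300 mm wafer
    const double defects_per_mm2 = 0.001;    // assumed defect density

    const double die_areas[] = {100.0, 200.0, 400.0};
    for (double area : die_areas) {
        double dies  = wafer_area / area;    // ignoring edge loss
        double yield = std::exp(-defects_per_mm2 * area);
        double cost  = wafer_cost / (dies * yield);
        std::printf("%3.0f mm^2: %4.0f dies/wafer, yield %4.1f%%, ~$%.0f per good die\n",
                    area, dies, yield * 100.0, cost);
    }
    return 0;
}
```

Quadrupling the die area in this toy model makes each good die roughly five times more expensive, and the penalty gets steeper at higher defect densities, which is where the "square of the area" rule of thumb comes from.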
 
I'm pretty sure it's because they're using 512-bit wide LPDDR5X, which isn't cheap. Apple is notoriously protective of its profit margins; when their RAM costs ten times more per chip than the RAM that goes into PCs, executives are going to push back. It's to the credit of the Apple engineers that they're sticking with the wide buses and high-quality dies; convincing corporate that this is an expense they absolutely cannot cut back on can't be easy.
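The bandwidth arithmetic shows what that width buys. A quick sketch; the bus widths below are the usual base/Pro/Max tiers and the 8533 MT/s LPDDR5X rate is an assumed figure, not a confirmed spec for any particular Mac:

```cpp
// Peak memory bandwidth for a wide LPDDR5X interface:
//   GB/s ~= (bus_width_bits / 8) bytes per transfer * transfer rate in GT/s
// The bus widths and the 8533 MT/s rate are illustrative assumptions.
#include <cstdio>

int main() {
    const double mts = 8533.0;              // assumed LPDDR5X transfer rate, MT/s
    const int bus_bits[] = {128, 256, 512};
    for (int bits : bus_bits) {
        double gbps = (bits / 8.0) * mts / 1000.0;  // bytes/transfer * GT/s
        std::printf("%3d-bit bus @ %.0f MT/s -> ~%.0f GB/s peak\n",
                    bits, mts, gbps);
    }
    return 0;
}
```

Roughly speaking, each doubling of the bus width means more LPDDR packages and more SoC edge area for the memory PHYs, which is where the cost mentioned above comes in.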
 