Apple Thread - The most overrated technology brand?

Yeah, I think it might be. There's speculation that the 2 Thunderbolt ports are limited by the M1 as well.

For a first-generation, low-power processor, though, consider that Apple's selling a 2.66-3 GHz chip (based on Anandtech's A14 benchmarks and the active core count) that outperforms any 2020 i9, with integrated graphics on par with a GeForce 1060, in the lower-end Macs, which in return get over 15 hours of battery life.

I think we can safely say 'Holy shit, what the fuck is the iMac going to have?'
 
Benchmarks on Apple's site showing 5x-6x better graphics performance? Shadow of the Tomb Raider, 1920x1080, medium graphics. If that holds up, that puts it dangerously, dangerously close to an Nvidia 1060 6 GB.
that outperforms any 2020 i9
ARM iGPU vs. an Nvidia 1060 discrete graphics card, or ARM cores vs. x86-64 cores isn't really an apples-to-apples comparison.

which in return get over 15 hours of battery life.
Pretty much every ultrabook in that price range has around that much advertised battery life, i.e. they can easily run on battery all day, so this is nothing new.

For a first-generation, low-power processor, though, consider that Apple's selling a 2.66-3 GHz chip...
I think we can safely say 'Holy shit, what the fuck is the iMac going to have?'
Unfortunately the ARM platform has its own limitations. Namely, it scales horribly with power, so there's almost nothing to gain outside the low-power segment. And considering that they're already on the 5 nm process... yeah. It was over for ARM Macs as soon as it started.

I believe Apple's going to go with Ryzen and Radeon GPUs on the Mac Pro and possibly the higher tier iMac too because of this. ARM simply isn't efficient for fast and prolonged computation, by design.
 
ARM iGPU vs. an Nvidia 1060 discrete graphics card, or ARM cores vs. x86-64 cores isn't really an apples-to-apples comparison.
Why not? They can complete the same tasks with a certain quantifiable level of performance that can be compared.

Pretty much every ultrabook in that price range has around that much advertised battery life, i.e. they can easily run on battery all day, so this is nothing new.
The MacBook Pro isn't an ultrabook, though. Now you're the one not comparing apples to apples. :)
 
The MacBook Pro isn't an ultrabook, though. Now you're the one not comparing apples to apples. :)
It is essentially an ultrabook with beefed up fans, though. M1 has a TDP of 10W.

Why not? They can complete the same tasks with a certain quantifiable level of performance that can be compared.
And MBP users get mad when they're asked why they paid €2000 for a facebook machine. 😂
 
ARM iGPU vs. an Nvidia 1060 discrete graphics card, or ARM cores vs. x86-64 cores isn't really an apples-to-apples comparison.

Pretty much every ultrabook in that price range has around that much advertised battery life, i.e. they can easily run on battery all day, so this is nothing new.

Unfortunately the ARM platform has its own limitations. Namely, it scales horribly with power, so there's almost nothing to gain outside the low-power segment. And considering that they're already on the 5 nm process... yeah. It was over for ARM Macs as soon as it started.

I believe Apple's going to go with Ryzen and Radeon GPUs on the Mac Pro and possibly the higher tier iMac too because of this. ARM simply isn't efficient for fast and prolonged computation, by design.

In fairness, no, we shouldn't be able to compare integrated graphics to an albeit older, but still well-used and popular discrete card. But the fine print says the graphics performance boost was calculated based on Shadow of the Tomb Raider running at 1920x1080 on medium settings, which, according to some quick searching, requires a 1060 to do.

And indeed, I'd say the 15-hour battery life figure is also being reasonably hit elsewhere. But a laptop that can run SoTR on medium and still get over 15 hours of battery life? That really narrows the field. While we can't take the benchmarks as gospel, we have to assume they're fairly realistic, or else Apple would get crucified by techies.

Now, we don't know what Apple will do with the bigger models. 16 GB of RAM is fine for the smaller stuff, but for the 24-inch iMac or the 16-inch MBP? Nah, that's pretty small. And a 16 GB maximum in a Mac Pro would be humiliating.

I'd say that the M1 has issues and flaws, but for all the crap they get, Apple aren't idiots. They wouldn't announce a swap to their own processors if they weren't absolutely sure they could iron those out, and maybe give Intel a kick or two.
 
In fairness, no, we shouldn't be able to compare integrated graphics to an albeit older, but still well-used and popular discrete card. But the fine print says the graphics performance boost was calculated based on Shadow of the Tomb Raider running at 1920x1080 on medium settings, which, according to some quick searching, requires a 1060 to do.
The reason I argued a desktop GPU and an integrated one can't be compared is because GPU performance depends significantly on the thermals. Yes, it might be able to "run" that game but at what fps, and more importantly for how long until it heats up?

My post was mainly concerned with the limitations of the ARM platform rather than the purported performance, though. They could make a CPU that matches a Ryzen 9 with an iGPU that matches a 2080, and it wouldn't mean much because you're locked to whatever ARM-ported games/apps there are on the walled-garden App Store. If you want to run an old app, too bad: x86 emulation performance is laughably bad (it loses to the i3 in the previous-gen MacBook Air).

OK, you have native iOS development, but no Docker, no VirtualBox, no Boot Camp, no IDEs. Seriously, who is this MacBook "Pro" for?
 
For thermals, I don't know. I can offer anecdotal evidence of Minis being quiet af, but mine's the 2014 model, which has a different fan from the 2018/M1, and the 2014's integrated graphics would... probably be able to load the main menu of SoTR, and that would be it.

As for games, Apple's been pretty vocal that if it's on the Mac App Store, it'll work natively. Emulation's harder to say; we've only got Apple's word for it.

Docker was mentioned by Apple during WWDC as coming to the M1, but the dev kits couldn't run it because the A12Z doesn't support virtualisation. VirtualBox is a bit of an unknown, although Parallels has said to expect an announcement soon for Parallels Desktop.

Boot Camp, I'm not sure about, and IDEs, I don't know enough about to say.
--
Now, we have a crapton of M1 Macs appearing on Geekbench, both the Air and the Pro, and a few Minis, and... they're ranging from 1650 to 1720 in single-core score, and 6500 to 7400 in multi-core.

The 27-inch iMac with an i9-10910 clocks in around 1250 single-core, with quite a few other models hanging around that area. For multi-core, the M1s are about 500 points short of a base Mac Pro.
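For a rough sense of those gaps, here's a quick back-of-the-envelope sketch using only the scores quoted above (individual runs vary, so treat the percentages as ballpark):

Code:
import Foundation

// Ballpark comparison using the Geekbench 5 scores quoted in this post;
// none of these are official figures, just the numbers floating around.
let m1SingleCore = 1650.0...1720.0
let m1MultiCore  = 6500.0...7400.0
let imacI9SingleCore = 1250.0   // 27" iMac, i9-10910, as quoted

// Single-core lead of the slowest and fastest M1 results over the i9 iMac.
let minLead = (m1SingleCore.lowerBound / imacI9SingleCore - 1) * 100   // ~32%
let maxLead = (m1SingleCore.upperBound / imacI9SingleCore - 1) * 100   // ~38%
print(String(format: "M1 single-core lead over the i9 iMac: %.0f%% to %.0f%%", minLead, maxLead))

// "About 500 points short of a base Mac Pro" would put that machine near 7,900 multi-core.
let impliedBaseMacProMulti = m1MultiCore.upperBound + 500
print("Implied base Mac Pro multi-core score: \(Int(impliedBaseMacProMulti))")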
 
They could make a CPU that matches a Ryzen 9 with an iGPU that matches a 2080, and it wouldn't mean much because you're locked to whatever ARM-ported games/apps there are on the walled-garden App Store. If you want to run an old app, too bad: x86 emulation performance is laughably bad (it loses to the i3 in the previous-gen MacBook Air).

OK, you have native iOS development, but no Docker, no VirtualBox, no Boot Camp, no IDEs. Seriously, who is this MacBook "Pro" for?

Macs are not limited to the App Store like iOS devices are. You're still free to just download and run stuff from the Internet or even physical media, and in some cases that's the recommended, or even only, way to install certain programs that require capabilities that aren't permitted to App Store apps (though these limitations aren't as bad as they used to be).

Regarding the lack of software: yes, it's going to be an issue at first, but that's why the Rosetta 2 compatibility layer is there. Time will tell how well it works, but if it works as well as Rosetta 1 did for running PPC programs on early Intel Macs, it'll be good enough.

As for IDEs, aside from Xcode, that developer segment they had during the last video featured Cabel Sasser of Panic, whose Nova code editor I spend several hours a day in, making a living, so I assume a Silicon version is on the way if it hasn't landed already. Homebrew is working towards a Silicon-compatible release and MacPorts works already, so there's Vim and Emacs and pretty much any other FOSS POSIX project you can think of. As for NetBeans and IDEA and the other terrible Java-based ones that some people love for some reason, some company I've never heard of called Azul has created Silicon ports of OpenJDK, so those are feasible in the future. So this problem is being solved, bit by bit.
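Incidentally, if you ever want to check whether a given tool is actually running natively or being translated, Apple documents a sysctl key for exactly that. A minimal sketch (on Intel Macs the key simply doesn't exist, so a failed lookup just means "not translated"):

Code:
import Darwin

// Returns true when the current process is being translated by Rosetta 2.
// The "sysctl.proc_translated" key reads 1 under translation, 0 when native,
// and is absent entirely on Intel Macs (sysctlbyname then returns -1).
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let status = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return status == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")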
 
The reason I argued a desktop GPU and an integrated one can't be compared is because GPU performance depends significantly on the thermals. Yes, it might be able to "run" that game but at what fps, and more importantly for how long until it heats up?
Their GPU is inspired by IMG Tech's PVR architecture in a way that is not patent infringing. PowerVR and its hardware tile-based deferred rendering was always interesting because it performed very well with very little and was cheap. It was what made the Dreamcast pretty bonkers and very inexpensive when it launched.

I dug this up from Anandtech on their last PC card, the very cheap and spec-wise puny Kyro II. It's interesting to see how their tech stacked up against other cards at the time.

Old Anandtech said:
[kyro2_fill.gif: Serious Sam fill rate chart]

One tool that the Serious Sam engine possesses is the ability to measure the actual fill rate of a card. This has been something that we previously had not been able to do, meaning that we relied simply on manufacturers' numbers. As the above shows, those numbers are extremely misleading.

Remember how we mentioned before that when overdraw is taken into account, the effective fill rate of a conventional video card is essentially the theoretical fill rate divided by the overdraw amount? Well, the Serious Sam tests show exactly this. Take the GeForce2 Ultra, for example. Theoretically, this card has a 1000 megapixel per second fill rate given its clock speed and rendering pipe. What we see in actuality, however, is that the GeForce2 Ultra is only able to fill 375 megapixels per second. This means that given the synthetic Serious Sam fill rate tests, the GeForce2 Ultra is only 37.5% effective. One can attribute this to overdraw as well as memory bandwidth limitations.


The Kyro II, on the other hand, features what many would consider a lowly 350 megapixel per second fill rate. However, when the tests are run, the Kyro II scores a fill rate that is only 22 megapixels per second less than the GeForce2 Ultra. Coming out at 352.89 megapixels per second, the Kyro II's effective fill rate matches its theoretical fill rate, something we cannot say about any other card on the market. According to the Serious Sam benchmarks, the Kyro II is actually 100% efficient.

Another cherry-picked benchmark: the $150 Kyro II vs. the $340 GF2 Ultra running at almost HD in 2001. It didn't have 230 MHz DDR memory, it had 175 MHz SDR; 15M transistors instead of 25M; and at 4 watts the core could probably run without a fan or heatsink.
[kyro2_ss2.gif: Serious Sam benchmark chart, Kyro II vs. GeForce2 Ultra]
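To put the quoted figures side by side, here's a tiny sketch that just computes the effective-vs-theoretical fill rate efficiency the article is describing (numbers taken straight from the quote above):

Code:
import Foundation

// Effective fill rate efficiency = measured / theoretical,
// using the Serious Sam figures from the Anandtech quote above.
struct Card {
    let name: String
    let theoreticalMpps: Double   // megapixels per second on paper
    let measuredMpps: Double      // megapixels per second in the Serious Sam test
}

let cards = [
    Card(name: "GeForce2 Ultra", theoreticalMpps: 1000, measuredMpps: 375),
    Card(name: "Kyro II", theoreticalMpps: 350, measuredMpps: 352.89),
]

for card in cards {
    let efficiency = card.measuredMpps / card.theoreticalMpps * 100
    print("\(card.name): \(String(format: "%.1f", efficiency))% of theoretical fill rate")
}
// GeForce2 Ultra: 37.5% of theoretical fill rate
// Kyro II: 100.8% of theoretical fill rate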


IMG kept iterating on their TBDR tech even though they never released anything on PC again, and when the time came it was the perfect fit for the iPhone. Apple then continued to license the newer versions until they suddenly had their own very nice GPU that worked the same way. It's a good fit for Apple's own OS and their own graphics API, because an alternative way of rendering using conventional APIs had its problems on the PC.
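As a toy illustration of why that rendering approach needs so little raw fill rate (my own simplification, not how any real driver works): an immediate-mode renderer shades every fragment as triangles arrive, while a TBDR resolves visibility per tile first and shades each covered pixel once.

Code:
// Toy model of overdraw: several opaque full-screen layers drawn back to front.
// An immediate-mode renderer shades a fragment for every layer at every pixel,
// while a tile-based deferred renderer resolves visibility per tile first and
// shades each covered pixel once. Purely illustrative, not a real renderer.
let width = 640, height = 480
let overdraw = 4                               // average layers of opaque geometry per pixel

let pixels = width * height
let immediateModeShades = pixels * overdraw    // shades hidden fragments too
let tbdrShades = pixels                        // only the visible surface per pixel

print("Immediate-mode fragment shades: \(immediateModeShades)")          // 1228800
print("TBDR fragment shades: \(tbdrShades)")                             // 307200
print("Work avoided: \(100 - tbdrShades * 100 / immediateModeShades)%")  // 75%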

Without the thermal constraints of being in a phone/pad, the iGPU in the new Macs could exceed (my) expectations. Mash a heatpipe on that shit and it could be pretty fast.
 
A heads-up that Apple's servers are coming back online after being hit by a crapton of shit, which took down app verification, Apple Pay and Apple Card, and iMessage and Apple Maps.

In the midst of it, the iOS, iPadOS and tvOS betas came out briefly before being pulled. They added support for the Luna and DualSense controllers.
 
Hopefully these new ARM Macs are good.
Life on my 2014 mini is suffering.
x86 was a mistake.
 