Strawman.
But I get ya. You know what you are talking about. I don't.
Currently we are at a state where a whole core of an i7-4790K is needed to emulate a 20-year-old DSP chip such as the Motorola 563xx in real time. That CPU has a lot more cores, but then things like concurrency come into it. And as you probably know, multi-core parallelism isn't the best fit for those old architectures, for the simple reason that multi-core as we know it today didn't really exist back then: emulators are trying to reproduce single-core designs, where every instruction depends on the state the previous one left behind.
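To make that concrete, here's a toy sketch (my own invention, a made-up accumulator machine - nothing like the real 563xx, and doubtless naive) showing why the core loop of an emulator is inherently serial:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Toy interpreter: every instruction is two bytes (opcode, operand).
    // Each iteration depends on the state the previous one left behind,
    // so the loop cannot be split across cores.
    struct Cpu { uint32_t pc = 0; int32_t acc = 0; };

    enum Op : uint8_t { LOAD, ADD, SUB, JMPNZ, HALT };

    void run(Cpu& cpu, const std::vector<uint8_t>& mem) {
        for (;;) {
            uint8_t op  = mem[cpu.pc];       // fetch
            uint8_t arg = mem[cpu.pc + 1];
            cpu.pc += 2;
            switch (op) {                    // decode + execute
            case LOAD:  cpu.acc = arg; break;
            case ADD:   cpu.acc += arg; break;
            case SUB:   cpu.acc -= arg; break;
            case JMPNZ: if (cpu.acc != 0) cpu.pc = arg; break;
            case HALT:  return;
            }
            // A cycle-accurate emulator also models timing here,
            // which makes every iteration more expensive still.
        }
    }

    int main() {
        Cpu cpu;
        // Count the accumulator down from 3 to 0, then halt.
        std::vector<uint8_t> mem = { LOAD, 3, SUB, 1, JMPNZ, 2, HALT, 0 };
        run(cpu, mem);
        std::printf("acc = %d\n", cpu.acc);   // prints: acc = 0
    }

A real 563xx core has to do vastly more work per emulated cycle, but the sequential shape of the loop is the same.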
Answer: Depends on the machine being emulated. Older systems relied more on trapped writes to shared memory, and less on interrupts (many older machines didn't have enough interrupts). Software on older machines was also coded in plain C or assembly, often directly against the hardware interface...
(from an answer on www.quora.com)
New optimizations are made every day, but you will still need a pretty decent CPU with fast single-core performance. It's a bit like DAWs that utilise multi-core architectures: depending on how you use the program, a single track with lots of VST plugins needs fast single-core speed rather than more cores, because the plugins in a chain run one after another. Having a hex-core chip is nice, but if its single-core performance is sub-par compared to a chip with fewer cores at a higher clock, then performance will suffer in that particular scenario. Whole arguments are had over this by better minds than me that actually program DSP, so again, take what I say with a grain of salt.
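My mental model of that, sketched out (a toy, not how any real DAW engine is actually written):

    #include <cstdio>
    #include <vector>

    struct Plugin {
        virtual void process(std::vector<float>& buffer) = 0;
        virtual ~Plugin() = default;
    };

    // Trivial example plugin: scales every sample by a fixed gain.
    struct Gain : Plugin {
        float g;
        explicit Gain(float g) : g(g) {}
        void process(std::vector<float>& buffer) override {
            for (float& s : buffer) s *= g;
        }
    };

    // One track's chain is strictly sequential: plugin N consumes
    // plugin N-1's output, so the whole chain must finish on one
    // core inside one audio buffer's deadline. Separate *tracks*
    // can go to separate cores, which is why DAWs scale across a
    // session but not within a single heavy chain.
    void processTrack(std::vector<Plugin*>& chain, std::vector<float>& buffer) {
        for (Plugin* p : chain)
            p->process(buffer);
    }

    int main() {
        Gain a(0.5f), b(2.0f);
        std::vector<Plugin*> chain = { &a, &b };
        std::vector<float> buffer(4, 1.0f);
        processTrack(chain, buffer);
        std::printf("%f\n", buffer[0]);   // 1.0 * 0.5 * 2.0 = 1.0
    }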
Some programmers of certain DAWs optimise for certain chips, with all those lovely extra instructions that can be utilised, such as AVX and whatnot. So they not only take advantage of raw DSP throughput on the chip die, which benefits VST performance, they also make better use of parallelism with regard to the GUI, so things like redrawing the screen don't hog too much runtime. And as it turns out, redrawing the GUI is quite an important feature of a DAW, one whose latency is measured in milliseconds. With an average human being able to discern discrepancies of around 10 ms in audio latency, that synchronisation between eye and ear is really important. It's real-time feedback.
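My (possibly naive) picture of what that AVX-style optimisation looks like - a hypothetical gain function that touches 8 samples per instruction instead of one at a time; my sketch, not any real DAW's code:

    #include <immintrin.h>   // AVX intrinsics; compile with -mavx (GCC/Clang)
    #include <cstdio>

    void applyGainAVX(float* samples, int count, float gain) {
        __m256 g = _mm256_set1_ps(gain);     // broadcast gain into 8 lanes
        int i = 0;
        for (; i + 8 <= count; i += 8) {     // 8 samples per multiply
            __m256 v = _mm256_loadu_ps(samples + i);
            _mm256_storeu_ps(samples + i, _mm256_mul_ps(v, g));
        }
        for (; i < count; ++i)               // scalar tail for the leftovers
            samples[i] *= gain;
    }

    int main() {
        float buf[10] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
        applyGainAVX(buf, 10, 0.5f);
        std::printf("%f %f\n", buf[0], buf[9]);   // 0.500000 0.500000
    }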
Even guitarists on stage can get thrown off by standing too far from their amp: sound in air covers roughly a foot per millisecond, so 20 feet of stage between you and your speaker is latency in the range of 20 ms. They can't play in time. Well, not as accurately. The whole feedback cycle of hearing and then transferring that to motor coordination and function gets messed up. I'm not saying anything here that is not accepted wisdom, both by guitarists and physicists. How far from your amp do you need to be before you get out of whack? 40-50 feet?
Very often it isn't possible to better that latency due to the laws of physics. But where it can be bettered, and where it can be brought into that sub-10 ms domain, well, the end user will have a much better experience. In a way, a DAW is just another piece of real-time infrastructure, like a computer game's.
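The arithmetic behind those figures is simple enough even for me: one buffer of latency is just buffer size divided by sample rate (my numbers, not any particular DAW's):

    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double rate = 48000.0;   // 48 kHz sample rate
        for (int frames : { 64, 128, 256, 512, 1024 })
            std::printf("%5d frames -> %5.1f ms\n",
                        frames, frames / rate * 1000.0);
        // 256 frames at 48 kHz is ~5.3 ms one way; a round trip
        // (in and out) doubles it, already brushing that 10 ms mark.
    }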
I say all this humbly. I am not a C++ programmer. You obviously are. Feel free to mock me or rip apart my argument.
If I might just ask one question though?
Do you think that Near didn't realise all this?
Do you not accept that, OK, maybe his code wasn't the most efficient or optimised, and there may have been better algorithms he could have used for sorting or whatever, but that in that quick-and-dirty implementation of his, he was still a true hacker?
Because that is one well-known definition of the word. And please consider what else I said: not many get to that level of being able to crack and reverse code whilst still being able to write it. That is unusual. Would you not agree? The fact he could code at all, being a reverse engineer and cracker at heart, was what made me see beauty in his code. Inefficient? I couldn't say - you are the expert here.
Without putting you on the spot, could you give some quick and easy examples, even one, of where he was severely lacking and incompetent? This is a genuine request by the way. I want to learn.
It's kind of akin to what I also talked about before, about how the best crackers have the worst websites, because they loathe the LAMP stack (as Near did - I gave examples of this), but they can hack something up in an afternoon, and then it's back to internalising mnemonics for Assembler.
The thing is, emulation, just like virtualisation, is heavy on the chip. And certain chips that lack certain instructions will be very poor at virtualisation (see VT-x) - if they can perform it at all.
https://en.wikipedia.org/wiki/X86_virtualization
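For what it's worth, this is how (as far as I understand it) a program asks an x86 chip whether it even has VT-x - the VMX bit in CPUID leaf 1, ECX bit 5 (GCC/Clang only; my sketch):

    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax, ebx, ecx, edx;
        // Leaf 1 returns feature flags; ECX bit 5 is Intel's VMX (VT-x).
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            std::puts("VT-x (VMX) supported");
        else
            std::puts("no VT-x - expect virtualisation to be slow, or impossible");
    }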
And I am sure you know this as well - it is common wisdom even among fuckwits like me.
I really would be interested to hear your arguments.
One thing I learned from hanging around with hackers and crackers: each has his own specialised field, to the point of being almost ignorant of the basics in the other's. But does the hacker/cracker scoff at his fellow cracker/hacker? No, he does not. He just laughs. Because he recognises in him something he knows he has in himself: absolute ignorance, blinded by the light, just never thought of it. Then, over beers, the hacker/cracker proceeds to educate the cracker/hacker about where he went wrong. Pens come out, not swords. Both laugh: how could I have been so stupid/ignorant? Taps on shoulders: it's OK. We are all ignorant. More beers...
If I may quote that wonderful enigma that is Near/Byuu: "We all stand on the shoulders of giants."
I really hope he didn't kill himself.
Also another thing I have found from my time with these people: they love to teach and to tease.
Is there anything you can offer a humble noob such as me, garakfan69?
If not, no matter; back to shitposting.