Has the popularity of general purpose computing hurt tech?

Anti Snigger

I understand the appeal from a business perspective: a one-size-fits-all piece of hardware is easier to mass produce, and from the consumer side, being able to reasonably run any application on an arbitrary computer is valuable. However, I'm wondering if allowing the "problem space" to be so broad results in a jack-of-all-trades, master-of-none situation. Task scheduling and caching are a fantastic example. If a CPU manufacturer had a more reliable view of how the chip would actually be used, I feel that cache utilization and branch prediction could improve drastically. Furthermore, as a developer, knowing the specifics of the hardware lets you exploit quirks of the system to further optimize a codebase.
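To make that last point concrete, here's a toy C sketch (the matrix size and the exact numbers are just illustrative, and results will vary by CPU and compiler) showing how simply knowing that memory is fetched in cache lines lets you pick a traversal order that runs far faster with zero algorithmic changes:

```c
/* Toy sketch: the same matrix sum, traversed two ways. The row-major
 * loop walks memory sequentially and stays inside cache lines; the
 * column-major loop strides across rows and misses constantly.
 * Timings will vary by machine, but the gap is usually large. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096

static double sum_row_major(const double *m) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)          /* sequential access */
        for (size_t j = 0; j < N; j++)
            s += m[i * N + j];
    return s;
}

static double sum_col_major(const double *m) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)          /* strided access */
        for (size_t i = 0; i < N; i++)
            s += m[i * N + j];
    return s;
}

int main(void) {
    double *m = malloc((size_t)N * N * sizeof *m);
    if (!m) return 1;
    for (size_t i = 0; i < (size_t)N * N; i++) m[i] = 1.0;

    clock_t t0 = clock();
    double a = sum_row_major(m);
    clock_t t1 = clock();
    double b = sum_col_major(m);
    clock_t t2 = clock();

    printf("row-major: %.2fs  col-major: %.2fs  (sums %.0f / %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, a, b);
    free(m);
    return 0;
}
```

The same idea applies to branch prediction: if you know roughly how the predictor behaves, you can arrange your data or your branches so it almost always guesses right.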

I think it may be worth considering whether we should return to more specialized hardware for some performance critical applications, rather than using kitchen sink franken-PCs.
Thoughts?
 
It probably doesn't matter terribly much because people are writing their software in JS and Python. They aren't hurting for performance that much, and whenever they do need more computational capability they seem to scale horizontally.
 
So much R&D has been poured into general-purpose processors that they have incredible performance in most tasks we run on them.

I wonder, if we were to go the route of specialized hardware for certain use cases, how many resources would be needed to make it outperform the ridiculously overpowered general-purpose hardware. Even if it required less than what was put into the general-purpose stuff, would the market for each specialized application be big enough to make it worthwhile?

In the end, we are already very successful at fulfilling our needs with current technology.
 
Even if it required less than what was put into the general-purpose stuff, would the market for each specialized application be big enough to make it worthwhile?
If I understand correctly, this exists. It's called an ASIC.
An application-specific integrated circuit (ASIC /ˈeɪsɪk/) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use, such as a chip designed to run in a digital voice recorder or a high-efficiency video codec. Application-specific standard product chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series.[1] ASIC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology, as MOS integrated circuit chips.
 
If I understand correctly, this exists. It's called an ASIC.
I guess I should clarify: I don't mean having a 1:1 ratio of hardware to software configurations, more something in between what we have now and that.
 
I guess I should clarify: I don't mean having a 1:1 ratio of hardware to software configurations, more something in between what we have now and that.
That kind of sounds like this:
An application-specific standard product or ASSP is an integrated circuit that implements a specific function that appeals to a wide market. As opposed to ASICs that combine a collection of functions and are designed by or for one customer, ASSPs are available as off-the-shelf components. ASSPs are used in all industries, from automotive to communications. As a general rule, if you can find a design in a data book, then it is probably not an ASIC, but there are some exceptions.

For example, two ICs that might or might not be considered ASICs are a controller chip for a PC and a chip for a modem. Both of these examples are specific to an application (which is typical of an ASIC) but are sold to many different system vendors (which is typical of standard parts). ASICs such as these are sometimes called application-specific standard products (ASSPs).

Examples of ASSPs are encoding/decoding chip, Ethernet network interface controller chip, etc.
Edit: maybe not, I misunderstood ASSP; I had never heard of it prior to looking at the ASIC article. But I would say an FPGA might be something along the lines of what you are considering. I think FPGA cost is too high right now, but it's coming down.
A field-programmable gate array (FPGA) is a type of integrated circuit that can be programmed or reprogrammed after manufacturing. It consists of an array of programmable logic blocks and interconnects that can be configured to perform various digital functions. FPGAs are commonly used in applications where flexibility, speed, and parallel processing capabilities are required, such as in telecommunications, automotive, aerospace, and industrial sectors.

FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration.
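For anyone who hasn't touched one, the usual mental model (my own toy sketch in C, not any vendor's actual architecture) is that each logic block is a small lookup table whose contents are the "program", and the HDL toolchain's job is to fill in thousands of these tables plus the routing between them:

```c
/* Toy model of a 4-input LUT, the basic FPGA logic element: the
 * "configuration" is just a 16-bit truth table, and evaluating the
 * block is a table lookup indexed by the four input bits.
 * Purely illustrative; real devices add carry chains, flip-flops,
 * routing fabric, and much more. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t truth_table;   /* one output bit per input combination */
} lut4;

/* Evaluate the LUT for inputs a, b, c, d (each 0 or 1). */
static int lut4_eval(const lut4 *l, int a, int b, int c, int d) {
    int index = (a << 3) | (b << 2) | (c << 1) | d;
    return (l->truth_table >> index) & 1;
}

int main(void) {
    /* "Configure" the block as a 4-input AND: only index 15 outputs 1. */
    lut4 and4 = { .truth_table = 1u << 15 };
    printf("AND(1,1,1,1) = %d\n", lut4_eval(&and4, 1, 1, 1, 1)); /* 1 */
    printf("AND(1,0,1,1) = %d\n", lut4_eval(&and4, 1, 0, 1, 1)); /* 0 */
    return 0;
}
```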
 
FPGAs are even more generalist than mass consumer hardware. They need to handle all the logic cases by making their logic gates programmable, which makes them even more expensive than what we have today. FPGAs today are made up of arrays of prefabricated blocks replicated over and over, and that array is the benchmark for how complex a configuration an FPGA can hold. Most can't go as fast (clock-speed wise) as consumer CPUs (which are mostly ASIC designs today), but they sure can reconfigure themselves to do whatever you need, like a massively parallel signal processing accelerator (most high-end Software Defined Radios do that today). Power efficiency for FPGAs is typically very poor, and usually isn't as much of a concern as logic configuration flexibility and cycle accuracy.

Programming the logic blocks takes time, so no, it's not something you can swap in and out like a pagefile whenever the task scheduler says so.

ASICs are basically custom semiconductors; they far outstrip anything else in terms of power efficiency and logic density. The problem is that making a custom chip is hard for anything high performance, especially if you need to deal with custom mask layouts/rules, RF electronics, and all the chemicals needed for the silicon doping/etching. So it pretty much means it is only worth it if you design something general purpose enough to sell billions of units to recoup your design and equipment costs.

The middle-of-the-road approach is cell libraries, where pre-designed ASIC logic elements can be placed and routed for whatever purpose you want. You lose the flexibility, power efficiency, and density of full-custom ASICs, but it is far easier and cheaper to design and validate. It is best used in designs where performance isn't the main demand, much like FPGAs.

As for why density is important, and why TSMC and Intel are going to insane lengths where even extreme UV lithography isn't enough: it's to reduce the overall size of the entire design. You want your design to be as physically small as possible so you can fabricate as many dies as possible on a silicon wafer. The smaller the design, the better the fabrication yield, because there is a smaller chance of some dust particle floating about on the assembly line destroying a whole batch of chips. But at this stage, extreme UV diffraction during manufacturing is about as likely to destroy a chip through manufacturing flaws as random quantum effects are.
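To put rough numbers on the yield point, here's a back-of-the-envelope C sketch using the simple Poisson yield model Y = exp(-A * D0). The defect density below is a made-up illustrative figure, not any real process's number, but it shows how fast the expected yield falls as die area grows:

```c
/* Back-of-the-envelope die yield vs. die area using the simple
 * Poisson model Y = exp(-A * D0). The defect density D0 below is an
 * arbitrary illustrative number, not any real fab's figure.
 * Compile with -lm. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double d0 = 0.1;                          /* defects per cm^2 (assumed) */
    double areas_cm2[] = { 0.5, 1.0, 2.0, 4.0, 8.0 };

    for (size_t i = 0; i < sizeof areas_cm2 / sizeof areas_cm2[0]; i++) {
        double a = areas_cm2[i];
        double yield = exp(-a * d0);          /* expected fraction of good dies */
        printf("die area %4.1f cm^2 -> expected yield %5.1f%%\n",
               a, 100.0 * yield);
    }
    return 0;
}
```

Shrinking the die pays off twice: you fit more candidate dies on each wafer, and a higher fraction of them come out working.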
 