Programming thread

Which makes list iteration, map iteration, tree iteration, etc. look the same. That's why I agree on the "computer science level" (you have an abstract container and want to go through its contents) and disagree on the "engineering level" (wtf, why are you trying to make 3 totally different things look the same). I like that old John Carmack quote about him preferring to be constantly aware of the full horror of what he's doing.
EDIT: sorry I'll shut up about coding now
Let's take this over here.

Yeah, seems I misunderstood what you meant. And I still disagree with making "totally different things look the same". If you don't care about the underlying container and just want to transform all elements, map is perfectly fine. If the container actually matters, of course a bespoke function has to be used. But, IME, the underlying collection container matters surprisingly rarely. BUT tbf, I work mostly in webdev crap for business applications.
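To make it concrete, here's a trivial Python sketch (made-up toy, obviously): the same transformation runs over a list, a dict, or a generator without caring which container it got.

Code:
def shout(container):
    # the transformation doesn't care what kind of container it walks
    return [s.upper() for s in map(str, container)]

print(shout([1, 2, 3]))                # list
print(shout({"a": 1, "b": 2}))         # dict (iterates over its keys)
print(shout(x * x for x in range(3)))  # generator

If the container actually matters (ordering, mutation, early exit), that's when I'd reach for a bespoke loop instead.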
 
webdev crap
I've also done a lot of webdev crap, enough PHP for two and a half nigger.


disagree with making "totally different things look the same".
I guess it's hard to discuss without a real-life context. For instance, hard disks, USB drives and network drives are quite different, but my OS abstracts them into a common filesystem, neat! But then other abstractions I see make me think of a bicycle with axles that accept anything from wheelbarrow wheels to tractor wheels.

My OP was:
In fact, higher-level languages are usually a pain to read (but easier to write which is why they're so popular)
It's a clumsy statement, and tbh I think by "higher-level" I meant (Modern) C++, Java, Python and thereabouts, and you replied about F#/ML, which I'm not familiar with. Readability is also weird because it depends somewhat on what you're really asking. For instance, with Python, readability is OK if you don't have too many nested list comprehensions, not too much ravioli code, etc., but just don't ask how it performs because no one can tell. It might seem weird to include performance in readability, but the reason you're reading the code might be to understand it well enough to make some informed big refactoring decisions, and it's fair to factor in performance if it means anything in the big picture.
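For example (a made-up snippet, not from any real codebase), these two do the same thing; the comprehension is compact, but you read the iteration order inside-out, and neither version tells you anything about how it performs on real data:

Code:
matrix = [[1, 2, 3], [4, 5, 6]]

# nested comprehension: short, but the loops read "inside out"
flat = [x * x for row in matrix for x in row if x % 2]

# the same thing as plain loops: longer, but the control flow is explicit
flat2 = []
for row in matrix:
    for x in row:
        if x % 2:
            flat2.append(x * x)

print(flat, flat2)  # [1, 9, 25] [1, 9, 25]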
 
I suck at programming and have no clue if this is real or not. Can someone more knowledgeable then me explain if this guy actually found the NSA(aka Kike) backdoor to intel CPUs or if its just a nothing burger of a video for clickbait. I can't honestly tell who is jewing who right now.

Preserve Tube

he talks slow and uses assembly
calls out everyone as glowies
this guy is on a trajectory toward something near terry davis's power level
i await his slur-slinging and 2-liter-guzzling arc

on a serious note i think this guy's kind of weird, came out of nowhere onto youtube with clickbait garbage for his get rich quick/ get smart quick books & courses...
 
note: i didn't actually watch the video because i am prodigiously lazy and the equally lazy op didn't post a local archive
I suck at programming and have no clue if this is real or not. Can someone more knowledgeable then me explain if this guy actually found the NSA(aka Kike) backdoor to intel CPUs or if its just a nothing burger of a video for clickbait. I can't honestly tell who is jewing who right now.
intel cpus (and all other modern cpu varieties) are known to have various kinds of glowing firmware inside
i have no idea exactly how many glowing backdoors they have into your computer or just how treacherous they are
finding out how the backdoors work is a solid "big if true" thing
he talks slow and uses assembly
the comment about aes makes me think this video has something to do with the (merited) fear of minorities after dark intel csprng accelerators that often crops up
came out of nowhere onto youtube with clickbait garbage for his get rich quick/ get smart quick books & courses...
personally i think he's just clickbaiting
 
I suck at programming and have no clue if this is real or not. Can someone more knowledgeable then me explain if this guy actually found the NSA(aka Kike) backdoor to intel CPUs or if its just a nothing burger of a video for clickbait.
I think he's just some kind of retard.
It looks like he's not the only one who couldn't get the Intel built-in instruction to match reference values from specs:
https://stackoverflow.com/questions...skeygenassist-si128-to-match-reference-values

The example everyone's using (the video guy, the SO guy, and the Intel spec) is the sample 128-bit key from the NIST spec: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf (page 32)

I'm pretty sure the glowies wouldn't make their secret backdoor work on the one single example key that everyone is going to try first. More likely the StackOverflow reply commenter is right and there's some kind of mixup with endianness here.
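For anyone who wants to check the spec value themselves, here's a rough Python sketch of the plain textbook FIPS-197 key schedule step for that sample key (no intrinsics or hardware involved, and the S-box is computed the slow way rather than hardcoded). It reproduces the w[4] word from the spec's key expansion example, which is the reference value people are trying to match:

Code:
def gf_mul(a, b):
    # multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def sbox(x):
    # AES S-box: multiplicative inverse (0 maps to 0), then the affine transform
    inv = 0 if x == 0 else next(c for c in range(256) if gf_mul(x, c) == 1)
    s = inv
    for i in range(1, 5):
        s ^= ((inv << i) | (inv >> (8 - i))) & 0xFF
    return s ^ 0x63

key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")   # the NIST sample key
w = [key[4 * i:4 * i + 4] for i in range(4)]              # w[0]..w[3]

rotated = w[3][1:] + w[3][:1]                             # RotWord
substituted = bytes(sbox(b) for b in rotated)             # SubWord
temp = bytes(a ^ b for a, b in zip(substituted, b"\x01\x00\x00\x00"))  # xor Rcon[1]
w4 = bytes(a ^ b for a, b in zip(w[0], temp))
print(w4.hex())  # a0fafe17, matching the spec's expansion table

If what you get from the hardware looks byte-reversed relative to that, it's probably the endianness mixup from the Stack Overflow thread, not a backdoor.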
 
I think he's just some kind of retard.
It looks like he's not the only one who couldn't get the Intel built-in instruction to match reference values from specs:
https://stackoverflow.com/questions...skeygenassist-si128-to-match-reference-values

The example everyone's using (the video guy, the SO guy, and the Intel spec) is the sample 128-bit key from the NIST spec: https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf (page 32)

I'm pretty sure the glowies wouldn't make their secret backdoor work on the one single example key that everyone is going to try first. More likely the StackOverflow reply commenter is right and there's some kind of mixup with endianness here.
wait that's somehow even more retarded than i thought
yeah no glowie would ever do something so ridiculously obvious
i also feel like backdooring a symmetric cipher in any way is incredibly brazen and easy to discover, and the nsa are not very keen on making their backdoors visible to everyone
their favorite tactic for backdooring cryptosystems seems to be releasing subtly flawed primitives (dual_ec_drbg being the classic example) and pushing them really hard as industry standards
 
I hope that at least some of you have updated your skill sets. Yikes, this is scary for those people who get left behind.

he sells a cloud/ai related course

i dunno if he's that trustworthy

EDIT: when you first visit the website it only shows that you can get a free roadmap sent to your email; the mail you receive will contain this link, which downloads the attached pdf
after that it links you to the "limited spots available" course (archive), although the pricing is walled behind a LIMITED AVAILABILITY! "strategy" phone call
so i feel like he's generating an artificial problem to later sell you the solution
 

he sells a cloud/ai related course
https://www.cloudengineeracademy.io/
i dunno if he's that trustworthy
Like many people who work in tech, he’s trying to find his way in the new world with its new realities. But one thing he’s definitely correct about is the large number of coders with older skill sets being laid off.

What if true AGI can never come into existence due to some heretofore unseen reason? An enormous amount of capital will have been spent to arrive at that answer. Do they go ahead and brand it as True AI anyways like the mobile carriers did when selling the public on the supposed benefits of 5G?
 
I remain pretty skeptical of AI replacing human coders outright. About a month or so ago, I tried helping someone on Discord out with a storefront page they just vibe coded from scratch. I am hardly a JavaScript expert but very quickly realized that the thing was broken beyond fixing. There was no reference to any backend at all (just to start with). I referred this person to fairly inexpensive services like Shopify that can make a functioning, secure online store for users. Companies and other organizations that are counting on wholesale replacement of human workers with AI have a really nasty reckoning waiting for them.
 
I remain pretty skeptical of AI replacing human coders outright. About a month or so ago, I tried helping someone on Discord out with a storefront page they just vibe coded from scratch. I am hardly a JavaScript expert but very quickly realized that the thing was broken beyond fixing. There was no reference to any backend at all (just to start with). I referred this person to fairly inexpensive services like Shopify that can make a functioning, secure online store for users. Companies and other organizations that are counting on wholesale replacement of human workers with AI have a really nasty reckoning waiting for them.
llms have that classic problem of looking really good in a demo but when you try to make them work in practice you get assraped into oblivion
we are probably exactly as close to artificial general intelligence as we were in 1960
 
Like many people who work in tech, he’s trying to find his way in the new world with its new realities. But one thing he’s definitely correct about is the large number of coders with older skill sets being laid off.
Futureproof your workplace by writing everything in obfuscated Lisp with heavy macro usage and only using self-made, in-house data formats and configs. I guess taking money from stupid, impressionable people as an industry shyster/talking head is also an option, your video being one example, but it's too mainstream.
 
AI code doesn't scale, at all. Without meticulous prompting about internal APIs and conventions, you're going to end up writing more code gluing shit together than anything else. It's been trained on a mishmash of codebases and, more importantly, on forum questions and answers where people are typically strongly encouraged to write minimal snippets.
I also take great issue with how LLMs operate on this assumption that everything is sequential in nature. Software naturally is structured like a tree, not a sequence. It's sort of like those clickbait math problem posts that go viral where the ambiguity boils down to them trying to write math on a single line.
I think LLMs have some value in natural language processing and generation, but I'm thoroughly convinced that the transformer being O(n^2) in the sequence length is a gigantic waste now, and will be looked upon in the coming decades as a laughably inefficient stopgap.
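To spell out what I mean by quadratic (a toy numpy sketch, not how any production model actually implements attention): the score matrix is n-by-n in the sequence length, so doubling the context quadruples the work and memory for that step.

Code:
import numpy as np

def toy_self_attention(x):
    # x: (n, d) token embeddings; single head, no learned projections
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                   # (n, n)  <-- the quadratic part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ x                              # (n, d)

toy_self_attention(np.random.randn(128, 64))   # builds a 128x128 score matrix
toy_self_attention(np.random.randn(256, 64))   # 2x the tokens, 4x the scores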

I would be willing to bet obscene amounts of money on OpenAI's new models just being larger and having more layers of tooling and indexing involved, rather than any actual fundamental advances. It's just them piling more of the same shit onto an increasingly tall tower. If you don't believe me, feel free to look at the shit they have open sourced.
GPT-1
GPT-2
GPT-3 (There's no code in here, I think it's just funny to post it to mock that fact)

These massive, baked in models that have to be re-tuned per task are utterly retarded. I wish people would revisit older AI tech like using self-modifying LISPs as opposed to bringing GPUs close to their melting points.
 
I think LLMs have some value in natural language processing and generation, but I'm thoroughly convinced that the transformer being O(n^2) in the sequence length is a gigantic waste now, and will be looked upon in the coming decades as a laughably inefficient stopgap.
even the biggest llm haters will admit they are pretty good for processing natural languages since nlp is bullshit and llms are very good at that
I would be willing to bet obscene amounts of money on OpenAI's new models just being larger and having more layers of tooling and indexing involved, rather than any actual fundamental advances. It's just them piling more of the same shit onto an increasingly tall tower. If you don't believe me, feel free to look at the shit they have open sourced.
yeah that's actually been their cope for quite a while: "bro just make the model 10x bigger and use 100x more data it will be 0.1x more smart and we will have agi Any Day Now"
These massive, baked in models that have to be re-tuned per task are utterly retarded. I wish people would revisit older AI tech like using self-modifying LISPs as opposed to bringing GPUs close to their melting points.
it seems many deep learning researchers these days are trying to fuse modern deep learning into traditional symbolic ai in various ways
these systems are always pretty specific though (a notable symbolic ai weakness) so they don't get huge amounts of attention (you cannot make the god-tier protein folding system do customer support)
 
yeah that's actually been their cope for quite a while: "bro just make the model 10x bigger and use 100x more data it will be 0.1x more smart and we will have agi Any Day Now"
It's not just the parameter count; they're increasingly relying on stored and indexed text samples.
Their GPT search engine, instead of having a crawler count backlinks, vectorizes the content of each page and adds metadata, which it then retrieves.
LLM indexes are fucking wild: they rely on querying an LLM legitimately 10+ times per text sample (and/or per group of samples) and embedding the normally computed data as retrieval metadata.
I shit you not, when you ask something like chatgpt or claude a question, there are easily 100 LLM generations that happen before your response is sent to you.
 
Btw, when I say text sample, I mean on average chunks of ~512 tokens with ~32 tokens of overlap, not whole documents. So the request count scales up.
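Roughly like this, by the way (a toy sketch using my ballpark numbers, not any vendor's documented defaults):

Code:
def chunk_tokens(tokens, size=512, overlap=32):
    # split a tokenized document into overlapping windows
    step = size - overlap
    return [tokens[start:start + size]
            for start in range(0, max(len(tokens) - overlap, 1), step)]

doc = list(range(1200))          # stand-in for a tokenized document
chunks = chunk_tokens(doc)
print([len(c) for c in chunks])  # [512, 512, 240] -> several index entries per document

Every one of those chunks then gets its own embedding and/or LLM calls, which is why the request count blows up.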
 
llms have that classic problem of looking really good in a demo but when you try to make them work in practice you get assraped into oblivion
we are probably exactly as close to artificial general intelligence as we were in 1960
I don't know about that. I've read an awful lot about the subject matter and its history. I read pretty much all of Mind as Machine, a comprehensive history of cognitive science in two volumes, ~1600 pages altogether. When I emailed the author, the researcher Margaret Boden, about how recursion need not gobble up more memory with each call because of the possibility of tail recursion, she was very grateful and mistook me for a doctor. (That was a very flattering moment.)
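(For anyone who hasn't seen the trick, here's a toy Python sketch of the idea. CPython doesn't actually eliminate tail calls, so this only shows the shape of the code; in Scheme, many Lisps, and the ML family the second version really does run in constant stack space.)

Code:
def sum_to(n):
    # naive recursion: each call has to wait for the next one's result,
    # so the call stack grows linearly with n
    return 0 if n == 0 else n + sum_to(n - 1)

def sum_to_tail(n, acc=0):
    # tail-recursive form: the recursive call is the very last step,
    # so nothing about the current frame needs to be remembered
    return acc if n == 0 else sum_to_tail(n - 1, acc + n)

print(sum_to(100), sum_to_tail(100))  # 5050 5050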

Point is, I know a lot about this subject and have thought much about it. AI has improved to a staggering degree and not only because of hardware speedups. LLMs can be very frustrating and wrong but they became mostly coherent over multiple paragraphs in a few short years (even if claiming cucked bullshit like how DeepSeek says China is not harvesting organs from its minorities). Anyone can install the free and open source chess engine Stockfish that plays a far better game than any human on very modest consumer hardware, whereas when Deep Blue beat Garry Kasparov, it required a supercomputer. Maybe most importantly, embodied AI (robots) have become increasingly capable of handling the highly complex and uncertain nature of our physical world.
Having said that, I do think AI is oversold. We are heading maybe not for an AI winter, like in the past, but an AI autumn. There is a hype bubble that is going to burst in a spectacular fashion (but I hope it was enough to convince the world that nuclear power should be the backbone of our power system).
Futureproof your workplace by writing everything in obfuscated Lisp with heavy macro usage and only using self-made, in-house data formats and configs.
I think Raku (formerly Perl 6) would be another fantastic option
AI code doesn't scale, at all. Without meticulous prompting about internal APIs and conventions, you're going to end up writing more code gluing shit together than anything else. It's been trained on a mishmash of codebases and, more importantly, on forum questions and answers where people are typically strongly encouraged to write minimal snippets.
LLMs' strength is pulling obscure shit from all corners of the Internet and writing small but very insightful pieces of code that can easily be inspected by a competent human user, without being an obtuse, rude piece of shit like on Stack Overflow and other Stack Exchange sites. For the foreseeable future, that is the only kind of code I will let them write for me.
I also take great issue with how LLMs operate on this assumption that everything is sequential in nature. Software naturally is structured like a tree, not a sequence.
I haven't done any deep learning yet. "Glorified Markov chain" is too extreme but do they really have no recursive structures?
I think LLMs have some value in natural language processing and generation, but I'm thoroughly convinced that the transformer being O(n^2) in the sequence length is a gigantic waste now, and will be looked upon in the coming decades as a laughably inefficient stopgap.
What type of complexity do you think the state of the art could evolve to in the future?
These massive, baked in models that have to be re-tuned per task are utterly retarded. I wish people would revisit older AI tech like using self-modifying LISPs as opposed to bringing GPUs close to their melting points.
Both this post and at least one after it mentioned the merits of symbolic vs. sub-symbolic (which includes neural networks) AI. I think you guys might be interested in the following book:
https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/0465065708 (https://annas-archive.org/md5/2260bb7a6f41254a90a908d7b75a978a)
The author speculates about fusing the two aforementioned approaches to AI. As I understand it, that's already how the most capable artificially intelligent vehicles and other robots do things.

EDIT: book came out over 10 years ago ... damn I'm old
 
I haven't done any deep learning yet. "Glorified Markov chain" is too extreme but do they really have no recursive structures?
Transformers, and the RNNs they replaced, take token sequences as their fundamental unit of input and output. When you give an LLM JSON as input, it's not """interpreting""" it as an AST or something with depth and structure, it's interpreting it as a sequence of tokens. This sounds pedantic, but it effectively erases much of the built-in structure of the data. Imagine if instead of handing you a math expression as a graph, I gave it to you as a list of nodes with linkages.
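A toy illustration (the "tokenizer" here is fake, real ones split text differently): a parser hands you a tree with the nesting made explicit, while the model only ever sees a flat stream where the nesting is implied by matching braces somewhere in the sequence.

Code:
import json

doc = '{"user": {"id": 7, "tags": ["a", "b"]}}'

# what a parser gives you: a tree, nesting made explicit
tree = json.loads(doc)
print(tree["user"]["tags"])     # ['a', 'b']

# roughly what a language model sees: a flat sequence of small pieces
fake_tokens = doc.replace("{", " { ").replace("}", " } ").split()
print(fake_tokens)              # ['{', '"user":', '{', '"id":', ...]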
What type of complexity do you think the state of the art could evolve to in the future?
Complexity isn't always a good thing. For me, I'd like to see models become more modular. People are making models that are wide as an ocean but deep as a puddle. For example, if you were trying to make an MLP to fit an equation like (x + 1)^2 - 1, you could train it on the full equation, or you could have model A do x + 1, model B do x^2, and model C do x - 1, then chain them. Obviously this toy example isn't motivating, but consider much more complex domains that have well-defined subproblems. Any problem that can be decomposed into multiple subproblems for which you can get training data ought to be considered for that fragmentation.
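Written out as plain Python functions standing in for the separately trained models (purely illustrative, nothing is actually being trained here):

Code:
def stage_a(x):   # a small model that has learned x + 1
    return x + 1

def stage_b(x):   # a small model that has learned x^2
    return x * x

def stage_c(x):   # a small model that has learned x - 1
    return x - 1

def pipeline(x):
    # chaining the specialists reproduces the full target (x + 1)^2 - 1
    return stage_c(stage_b(stage_a(x)))

print([pipeline(x) for x in range(4)])       # [0, 3, 8, 15]
print([(x + 1) ** 2 - 1 for x in range(4)])  # [0, 3, 8, 15]

Each stage is something you can train, test, and swap out on its own, which is the whole appeal.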
Both this post and at least one after it mentioned the merits of symbolic vs. sub-symbolic (which includes neural networks) AI. I think you guys might be interested in the following book:
https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/0465065708 (https://annas-archive.org/md5/2260bb7a6f41254a90a908d7b75a978a)
The author speculates about fusing the two aforementioned approaches to AI. As I understand it, that's already how the most capable artificially intelligent vehicles do things.
Looks neat, I'll check this out.
LLMs' strength is pulling obscure shit from all corners of the Internet and writing small but very insightful pieces of code that can easily be inspected by a competent human user, without being an obtuse, rude piece of shit like on Stack Overflow and other Stack Exchange sites. For the foreseeable future, that is the only kind of code I will let them write for me.
When I used copilot, I used the 5 second rule. If it took me more than 5 seconds to understand something it wanted to add, I wouldn't add it. For stuff like GPT, I only ask it concept questions, and I often obfuscate details or ask it to generate in a different language or even pseudocode. I never copy paste outside of really rapid "I need this chunk to work now" scenarios, because I want to be able to understand what I'm writing.
 