Programming thread

peter-griffin-python-meme-among-us.jpg
 
No, you can put the script anywhere in the tree, but the root can address any node, while nodes below the root can't address the root. I'm gonna guess you're addressing the wrong node but think you're addressing another one; that's why it only works when your script is attached to it but fails silently otherwise. Use % unique names in the tree for your node and address it using that.
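Something like this, as a rough sketch (Godot 4 C# assumed; the node name here is made up, and you'd tick "Access as Unique Name" on the node in the scene dock first):

C#:
// Any script in the same scene can reach a unique-named node with %,
// no matter where that script sits in the tree.
var myButton = GetNode<OptionButton>("%MyOptionButton");
myButton.Select(0); // select the first entry, just to prove we found it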
I think I figured out what my problem was. I am new to C# and I realized that I probably have to add something like

C#:
OptionButton optionButton = GetNode<OptionButton>(nodePath);
optionButton.Select(index); // index of the option to select
 
600-6005090_7190485-pink-wojak-screaming-clipart.png
>Using VS Code
>Pc starts running really fucking slow
>Pull up task manager
>Console window host using 70% of my cpu
>Try saving and closing down everything
>Its still using up all my cpu
>Look up solution online
>Saaar you have to right click then click right where it says end task to ending the task saar
>Try ending task
>Screen goes black
>MFW
 
I thought it would be a swastika but it's just some pajeet letter. Got stuck for a moment on not(), that was clever.
It's the Amogus letter
doge-among-us.png
>Using VS Code
That's why I use Vim
 
Funny enough, I wrote a method called poz my viewport.

C#:
public OptionButton _SizeOption()
{
    return GetNode<OptionButton>("%SizeOption");
}

public Vector2I pozMyViewport(int x, int y)
{
    // Center a window of the given size on the monitor
    var screenSize = DisplayServer.ScreenGetSize();

    int xPoz = (screenSize[0] - x) / 2;
    int yPoz = (screenSize[1] - y) / 2;

    return new Vector2I(xPoz, yPoz);
}

int[] xDim = { 640, 1280, 1920 };
int[] yDim = { 360, 720, 1080 };

public void adjustResolution()
{
    var s = _SizeOption().Selected;
    var newSize = new Vector2I(xDim[s], yDim[s]);

    DisplayServer.WindowSetSize(newSize);
    DisplayServer.WindowSetPosition(pozMyViewport(xDim[s], yDim[s]));
}

Typed this on my phone from memory so it might have a few errors.
 
Funny enough, I wrote a method called poz my viewport.

That's a different sort of bug-chasing, surely
 

I'm still new to C#, and I realized that it's actually similar to the way I usually write JavaScript, but way more robust.

I posted on here talking about the way I set up a JavaScript project I've been working on, and my idea was to set up objects for different purposes and then call those objects when you need them.

I should have just started writing C# right then and there, because that's literally what I've already been doing, just in a shittier language.

Still, I'm happy I learned JavaScript first. I think it's more accessible to beginners, and I'm shocked to see that almost all the shit I learned from writing JavaScript translates over to writing in C#.

I'm gonna read more about the nuances of using classes, but I figure I can embed classes inside the root node's script like this:

C#:
public partial class Play : Node2D
{
    // Nested node classes still need the partial modifier so Godot's
    // source generators can pick them up.
    public partial class Player : Sprite2D
    {
        // All the code for my player node goes here
    }
}

And it will make everything way more organized. I hate when my scripts look like a wall of text. I try to make it as meticulously organized as possible; it helps to keep things simple.
 
I really think this to be a misnomer, outside of academic discussions. The machine code is low level, as are thin layers above it, but past incredibly small languages like Forth it becomes a pissing match.

The most important things for a programmer to learn, and which most fail to do, are to learn how to learn and to learn how to think. A calculator with the ability to store short programs and therefore automate basic tasks would be better than half the shit I see people recommend. Throwing newcomers into the unacceptable hellscape of modern programming isn't the way to do it, unless someone wants to scare them off programming forever.
I hate the "every language above machine code is high level" argument. When you read C/C++ code it's easy to map the lines to assembler code. I understand compilers will generate what they want when optimising.

Having a language with pointers, manual memory management, alignment and so forth is a HUGE difference compared to a language like Java.

C is the first language you want to learn, you get to understand how the system works and it also becomes very easy to learn other languages as they all borrow very heavily from C.

Honestly, if newcomers get filtered by pointers early, it saves them time (they stop pursuing programming) and saves software from having a shit dev. Win win
 
Still, I'm happy I learned JavaScript first. I think it's more accessible to beginners, and I'm shocked to see that almost all the shit I learned from writing JavaScript translates over to writing in C#.
I think I probably talked in here earlier about how most programming languages are descendants of ALGOL and there is a ton that will carry over from one fairly common language to the next. With JavaScript, to my mind, there are (at least) two very fundamental differences that stand out: weak typing and the traditional rejection of a class-based object system in favor of a prototype-based object system inspired by Self, a language developed mainly by Sun Microsystems as a successor to Smalltalk. In the latter case, maybe someone could school me, but I'm not really convinced that prototypes are vastly better than classes. And in case you didn't know what I'm talking about already:
I've read that the class syntax that's been in JavaScript since the ES6 standard should be abandoned altogether, but fortunately I'm not currently involved enough in unswell JavaScript faggotry to evaluate these claims. What I have a bug up my ass about is the weak typing. This meme mostly portrays the situation well:
javascript-weak-typing-meme.png
A few of the examples are just due to the floating point standard. 0.1 + 0.2 != 0.3 is just as true in Python because it's inherently difficult to represent decimal numbers in pure binary. On a related note, if you want to handle currency accurately in any programming language, you need to use a dedicated decimal data type, or use integer arithmetic on cents / pennies and then convert to dollars or Euros or whatever at the last possible moment. (I even remember solving British math puzzles from well before 1970, when they finally did decimalization, where I had to convert pounds (20 shillings) and shillings (12 pence) to pence to make the algebra work. And the New York Stock Exchange once reported eighths of a dollar into the 80s, which you'll know if you watch Wall Street.)
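To put numbers on that in C#, since that's the language floating around earlier in the thread (the prices are made up for illustration):

C#:
// Binary floating point can't represent 0.1 exactly, so the sum drifts:
double d = 0.1 + 0.2;
Console.WriteLine(d == 0.3);             // False

// A base-10 decimal type keeps money-style arithmetic exact:
decimal m = 0.1m + 0.2m;
Console.WriteLine(m == 0.3m);            // True

// Or do everything in integer cents and convert at the last possible moment:
long cents = 1999 + 250;                 // $19.99 + $2.50
Console.WriteLine($"{cents / 100m:F2}"); // 22.49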

Anyway, even after that last detour, JavaScript's weak typing is total faggotry. I'm sort of indifferent between static and dynamic typing but I really hate weak typing. Python seems to exhibit weak typing in a few instances, like summing bool values and dividing by len() to come up with a proportion of which outcomes were True over how many total attempts, but if you look under the hood, it's just magic methods rather than nebulously forcing one type of data to become another. JavaScript and Perl, venerable web programming languages, both made the Faustian pact of making things a tiny bit more convenient temporarily only to fuck things up years later. That's why === (triple equal sign) exists in JavaScript and not in a good way like it exists in Ruby. It wasn't thought that JavaScript would ever be used for anything other than tasks like simple form validation in the early- to mid-90s and now we are continuing to pay the price of temporary convenience.

C is the first language you want to learn, you get to understand how the system works and it also becomes very easy to learn other languages as they all borrow very heavily from C.
Sorry, nah. The value of C and assembly language(s) isn't lost on me, but consider all of the people who want to acquire data science knowledge who are coming from social / behavioral sciences and/or a medical background, or even other scientific or mathematical backgrounds. Should they immediately have to deal with memory management, or should they be able to hit the ground running with tools like Python and/or R? It isn't the 80s or 90s anymore and they should be able to dive right into visualizing and modeling data without having to worry about malloc() and free(). Deeper understanding of the Blessed Machine can come later.
 
What's the best way to dive headfirst into learning cooooooooooding?

(I want to learn all the common C languages)
 
This looks like a lot of fun, but how does typing all of those crazy symbols work?
I've explained that here:
Someone wrote an APL mode for GNU Emacs, gnu-apl-mode, which I use. The input method uses a period as the prefix key. That may seem inconvenient, but it's not when the entire program fits on a few lines anyway.
Something else nice about how the entire program fits on a few lines occurs to me when I consider alternative implementation strategies. It may happen that an APL program can't reasonably be optimized for some superior implementation, but rewriting the program is fine when it's a single line, isn't it?
I hate the "every language above machine code is high level" argument.
Why, because it's true? I mentioned Forth as an example, which truly does eschew many abstractions, but most languages would fall into the middling category, low level enough to be a pain in the ass and not high level enough to be worth a fuck.
When you read C/C++ code it's easy to map the lines to assembler code.
It's certainly easy for one to fool himself into thinking he can do that.
I understand compilers will generate what they want when optimising.
Yes, like that.
Having a language with pointers, manual memory management, alignment and so forth is a HUGE difference compared to a language like Java.
Yes, and for almost everything it's a huge mistake.
C is the first language you want to learn, you get to understand how the system works and it also becomes very easy to learn other languages as they all borrow very heavily from C.
This is lunacy. It's good for tricking oneself into believing he knows how the system works, and it's excellent if someone wants to learn many languages that are all the same besides minor differences.
they all borrow very heavily from C
This is a lie. Many of the languages I enjoy existed beforehand. Of course, I guess this statement is true if one ignores every instance in which it's incorrect. The will to rewrite history is strong with every cult.
Honestly, if newcomers get filtered by pointers early, it saves them time (they stop pursuing programming) and saves software from having a shit dev.
While I agree the skill level for professional programming should be raised, that includes filtering the midwits who think unnecessary hardship to be good. Half the complications people have with pointers in that language are down to its unbelievably shitty syntax.
 
an APL mode for GNU Emacs
Say no more. I'm sold.

Of course, I guess this statement is true if one ignores every instance in which it's incorrect.
:winner:

unnecessary hardship
That's most programming tbf.

In my experience, it's the mechanics behind pointers (memory addressing, dereferencing, etc.) that trip most students up. It takes a certain mindset to work with, and most people simply aren't well equipped to work within that mindset. It's arguably a good filter to weed out the people who don't take the time to practice and learn this shit. You can abstract away the idea of pointers entirely, and most languages these days do, but we're starting to see the deleterious effects of permitting this sort of intellectual laziness where people don't care to learn how things work and what the best practices are by extension. If you have no clue how memory management works, memory is effectively a magic resource that appears out of thin air and manages itself.
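To make the tripping point concrete, here's a tiny sketch using C#'s unsafe pointers, purely because C# is the language already floating around this thread; the mechanics are the same idea as in C:

C#:
// Needs <AllowUnsafeBlocks>true</AllowUnsafeBlocks> in the project file.
unsafe
{
    int x = 42;
    int* p = &x;                 // address-of: p stores where x lives in memory
    *p = 99;                     // dereference: write through the pointer
    Console.WriteLine(x);        // 99 - x changed without being named
    Console.WriteLine((ulong)p); // the address itself is just a number
}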

So I guess my point is: how much of this hardship is truly unnecessary in the long run? You're still gaining valuable technical experience by working through it.
 
That's most programming tbf.
:drink:
In my experience, it's the mechanics behind pointers (memory addressing, dereferencing, etc.) that trip most students up. It takes a certain mindset to work with, and most people simply aren't well equipped to work within that mindset.
The fundamental concept here is indirection, not pointers. Someone who can understand indirection can understand pointers, unique database keys likely implemented as pointers, and references.
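A quick illustration of that in C# (names invented for the example): a reference and a lookup key are both "follow this to the real thing", the same shape as a pointer minus the raw address.

C#:
// 1. A reference: the variable holds a handle to the object, not the object itself.
var scores = new List<int>();
var alias = scores;                // both names follow the same handle
alias.Add(10);
Console.WriteLine(scores.Count);   // 1 - one object, two ways to reach it

// 2. A key: a database-style lookup is indirection through a table.
var users = new Dictionary<int, string> { [7] = "alice" };
int userId = 7;                    // a "pointer" into the table
Console.WriteLine(users[userId]);  // alice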
we're starting to see the deleterious effects of permitting this sort of intellectual laziness where people don't care to learn how things work and what the best practices are by extension
People will write Fortran in any language.
So I guess my point is: how much of this hardship is truly unnecessary in the long run?
I can't think of a single good reason for someone to learn the C language for these things over a machine code. The machine code is simpler, and really does lack training wheels. Someone who wants to learn about the machine implementation of concepts like indirection should write a simple machine code program, if he wants practical experience at all. I may not have a good perspective here, because I did all of this anyway, but I can easily envision someone just as competent who didn't need it.
 
The fundamental concept here is indirection, not pointers. Someone who can understand indirection can understand pointers, unique database keys likely implemented as pointers, and references.
Agreed. At the core of my argument is the eternal debate around the Fundamental Theorem of Software Engineering, but centered around education rather than practicality and efficiency. This further generalizes to any sort of conceptual abstraction, not limited to CS.

We see the concept of atrophy a lot in nature—use it or lose it. It's arguably a result of natural tendencies that lean towards energy efficiency. I question if the same sort of thing happens cognitively if we start abstracting too much away. Kids these days have a very fragmented understanding of how filesystems work, largely thanks to the weird compartmentalized way iOS handles inter-app userspace file sharing (which didn't even exist for many years). If that abstraction system goes tits up, or they have to deal with something less abstracted, they're basically useless at troubleshooting because they lack any of the fundamentals.

The historical approach to dealing with such a can of nuanced worms is simply never to open it—after all, humans are terrible with nuance and thinking at scale—but sadly we (mankind) can't do that here. We're soon going to be forced to have some very tough conversations about the generations of lazy, wholly incompetent retards we're raising. I say this not with an answer in mind; merely as a huge concern of mine.

People will write Fortran in any language.
:drink:

I can't think of a single good reason for someone to learn the C language for these things over a machine code. The machine code is simpler, and really does lack training wheels. Someone who wants to learn about the machine implementation of concepts like indirection should write a simple machine code program, if he wants practical experience at all. I may not have a good perspective here, because I did all of this anyway, but I can easily envision someone just as competent who didn't need it.
I agree 100%, in principle. x86 is pretty high-level, relatively speaking. The thing is, you and I aren't wired like most people. There are guys like us who naturally gravitate towards the deeper theoretical stuff, and there are guys who simply want to write better, speedier programs, who need a practical framework to apply this conceptual stuff without needing our autistic levels of patience. C (on a Unix system, as the Lord intended) serves as a remarkably solid compromise here.
 