Programming thread

If I could learn Python in a couple of nights as a kid with this tutorial years ago, anyone can. Each video is 5 minutes.
Just make sure to install Python 3 instead of Python 2, and use "print" with parentheses.
So instead of:
Code:
print "test"

Do:
Code:
print("test")
And use "input" instead of "raw_input" too.
There's also range vs. xrange and a bunch of other stuff to worry about. I'm glad this tutorial helped you, but frankly @Slav Power would be better served by more up-to-date material.
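For anyone picking up an older Python 2 tutorial with a Python 3 install, the main substitutions look roughly like this (a quick sketch, not exhaustive):
Python:
# Python 3 versions of the Python 2 idioms mentioned above
name = input("Name? ")     # Python 2's raw_input() is just input() in Python 3
print("Hello,", name)      # print is a function now, so it needs parentheses

for i in range(3):         # range() is lazy in Python 3 and replaces xrange()
    print(i)

print(5 / 2)               # 2.5; "/" is true division now, use 5 // 2 for the old floor behaviour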
 
People complain about Python being slow, but that's fine for nearly all cases where you'd want to use it. The actual worst part is the Python 2/3 thing, plus there being like 10 different versions and the clusterfuck of trying to get different stuff to work together.
Makes me wonder if the Perl method of just slowly breaking shit from 5.6 to 5.10 would've been better than the giant 2->3 leap Python went through.
 
Makes me wonder if the Perl method of just slowly breaking shit from 5.6 to 5.10 would've been better than the giant 2->3 leap Python went through.
Did they actually break anything?
I thought they kept backwards compatibility with existing code and made developers add a version directive at the top of each source file that wants to use one of the newer language versions.
So you'd only need a single installation of the Perl interpreter, and if you run a .pl script with it, it automatically reads from the script itself what language mode it wants to be executed in.

Whereas the Python guys said "Fuck it, the python3 interpreter is only gonna be able to run Python version 3 scripts", which means that if you have both older and newer .py scripts you need to install both python2 and python3 on the same system and deal with the mess of ensuring that each script is run with the right one.
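For what it's worth, Python 2 did get a weak per-file version of that idea through __future__ imports, but it only covered a handful of features and never let the python3 interpreter run old code. A rough sketch of what a transitional Python 2 file looked like:
Python:
#!/usr/bin/env python2
# A Python 2 file could opt in to some Python 3 behaviour per file,
# loosely like Perl's "use v5.10;" directive, but only for a few features.
from __future__ import print_function, division, unicode_literals

print("print is a function here, even under Python 2")
print(1 / 2)   # 0.5 thanks to the division import (0 without it)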
 
You guys know any good, simple and engaging tutorial sets for getting the hang of Python and Lua my autistic ADHD ass won't drop in a few minutes? I kinda want to learn both, Python for writing up quick dumb shit as an alternative or addition to Batch and AutoHotkey and Lua for mainly getting the hang of why ComputerCraft was such a big deal, and maybe making some cool shit with it, like automating factories and whatnot.
My suggestion is to start with Making Games with Python and PyGame
I don't know what type of learner you are: reading stuff, watching videos, or hands-on.

But learning Python and programming by making a game is the most straightforward and non-boring way. If you actually want theory, I guess "How to Think Like a Computer Scientist" or whatever it's called.

Then, when you've got the hang of Python, I'd suggest installing the LÖVE engine and porting what you made in Python to Lua that way.

My reasoning: Python has the bigger community. Lua is a nice language, but it's an embeddable language and not as straightforward to use directly as Python; the courses are worse than Python's and there are fewer libraries available.
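If you want a rough idea of the starting point that kind of book builds on, a bare-bones PyGame loop looks something like this (a minimal sketch, assuming pygame is installed; not taken from the book):
Python:
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # open a window
pygame.display.set_caption("Hello PyGame")
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():           # handle input/window events
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))                     # clear to black
    pygame.display.flip()                      # show the new frame
    clock.tick(60)                             # cap at 60 FPS

pygame.quit()
Everything else (sprites, collisions, sound) ends up being additions inside that loop.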
 
The motivation for the change in both was switching string handling to be more Unicode-friendly, both in the language and in the interpreter.

Perl devs love backwards compatibility, so what they did after changing the interpreter to use UTF-8 internally was have the interpreter guess whether a given input or output was a UTF-8 string or an ASCII string, and then set a flag on the string marking it as Unicode or not. As you can imagine, this broke a lot of shit since its guesses weren't always right, so developers had to go in and manually specify when I/O was UTF-8 or convert strings to UTF-8 themselves. This actually went fairly fine.

Meanwhile in Python land they just broke everything going from 2 to 3. There was no soft transition period. Some devs didn't want to bother rewriting their code. Others were stuck waiting for the maintainers of their dependencies to rewrite theirs so they could release a Python 3 compatible version. It sucked, and a lot of biostats stuff still uses Python 2.
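For context, the thing Python 3 actually forced on everyone was a hard split between text and bytes, with no silent guessing (a quick Python 3 sketch):
Python:
text = "naïve café"              # str: a sequence of Unicode code points
data = text.encode("utf-8")      # bytes: the encoded on-the-wire representation

print(type(text), type(data))    # <class 'str'> <class 'bytes'>
print(data.decode("utf-8"))      # explicit decode to get a str back

# Mixing the two is an error instead of a guess:
# "a" + b"b" raises TypeError in Python 3, whereas Python 2 would silently
# coerce and then sometimes blow up at runtime with UnicodeDecodeError.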
 
Makes me wonder if the Perl method of just slowly breaking shit from 5.6 to 5.10 would've been better than the giant 2->3 leap Python went through.
Python has both problems: slow changes and sudden breaking changes. There was the Python 2 to Python 3 breaking change, and then like 10 different versions of Python 3 that are all slightly different.

I think big breaking changes are fine, but you should only do them after a really long time, tell everyone YEARS in advance, and make sure the big stuff that would actually convince people to switch is done and ready to go at release, rather than trickling out 5 versions down the line.

There was a lot of drama and anger over this stuff when it happened.

Whereas the Python guys said "Fuck it, the python3 interpreter is only gonna be able to run Python version 3 scripts", which means that if you have both older and newer .py scripts you need to install both python2 and python3 on the same system and deal with the mess of ensuring that each script is run with the right one.
This would help, but I'd say the worst problem is getting packages to work together. I can't even begin to describe the nightmare of trying to get CUDA to work with TensorFlow.
 
I'm not giving ClosedAI my phone number, they can suck it.

As for programming concepts, I have an idea of them. I've tried learning a bit of C and a bit of C++, and I write up macros in AutoHotkey, so the concepts of functions, variables, order of operations and stuff like that aren't exactly alien to me. The thing is that I want to learn Python, as it seems to be the easiest and most practical thing for me to learn, so that, for example, I can write a practical CLI program that does what I want.

I think that's the best way for me to learn programming: I have a specific thing I'd like to do, maybe I've already done it in Batch or AutoHotkey, and I'd like to try and port it to something like Python, learn how it works and how it improves on what I've already done. It's easier than making something from scratch, and it's a bit more motivating when you already know what you want to make. I'd also prefer to avoid stuff like readily available packages that do everything for you, keeping dependencies to an absolute minimum.

As for the Python 2/3 discourse, it reminds me of how AutoHotkey abandoned v1 in favour of v2. v1 was a mess of two syntaxes; v2 unified them, changed some functions, and added new ones. v1 scripts aren't compatible with v2 scripts and vice versa, but it is possible to have both versions installed and working. I ended up rewriting my macros for v2, since it wasn't too hard and there isn't much sense in staying on old versions.
 
Lua is both simple and powerful. It's also really fast. I vastly prefer it to Python personally. I'd focus on Lua 5.1. You're really on the right path already: IMO it's easiest to learn a programming language by actually using it to solve real problems, even if they happen to be in a game. I was never a fan of reading programming handbooks back to back or doing "tutorials". If you try to solve specific problems you learn the most, because you actually need and want the result, so the motivation is completely different. It's like learning to speak a language: you can study and become proficient, but if you're not using it you won't get far and will probably forget half of it after a few months.

The documentation is ample and accessible, and the basics are covered in the official manual. I don't really feel like you need a lot more to get into Lua. You'll know when you want to implement more advanced concepts if you get that far. The syntax is really simple, but don't be fooled, it's a powerful language (to shoot yourself in the foot with; by that I mean fun things like a mistyped variable name quietly creating a new global, so you might wanna look for a linter).

--
I did a very minimal bare-metal Forth for a self-built Z180-based system, and while looking for something else I stumbled across ESP32 Forth for the (far more capable) microcontroller of the same name. Even though it's a built-on abstraction and not really bare metal, I had a lot of fun with just how accessible it makes the microcontroller. Just connect it via serial/USB and go; you can even save your programs directly to the onboard flash. All that work is done for you in the C kernel and exported via simple words. I've spent the last few days porting my VT/ANSI stuff and a rudimentary text editor from the Z180 to make the REPL of the ESP32 Forth more comfortable to use (yes, this all happens on the fly from inside the environment) and adding some more comfort functions.

I kinda hate to admit it because I dropped both cash and time on developing that Z180, but the ESP32 is of course far more impressive hardware-wise, even if there are a ton more peripherals and the registers are a lot more complex, so directly bitbanging things is not as simple and it's easier to add such stuff via the C kernel and link it to words. It would be very cool to build a Forth machine that's just a bunch of distributed microcontrollers dynamically reprogramming themselves (propagating code from a "master controller") according to task, kinda like what Moore thought of (yes, I'm aware of GreenArrays).

I know Forth is probably not garnering tons of interest here, but it's a cool and unique language. There's the classic Starting Forth, but a book I can also recommend is Thinking Forth; I feel it's one of those books you can universally profit from as a programmer even if you're not interested in Forth per se.

EDIT: fixed link
 
You guys know any good, simple and engaging tutorial sets for getting the hang of Python and Lua my autistic ADHD ass won't drop in a few minutes? I kinda want to learn both, Python for writing up quick dumb shit as an alternative or addition to Batch and AutoHotkey and Lua for mainly getting the hang of why ComputerCraft was such a big deal, and maybe making some cool shit with it, like automating factories and whatnot.
This one used to be good (dunno about now, they redesigned the website, hope they didn't fuck with the content).

Are you Russian? If so, take any intro to Python, literally any, then do https://stepik.org/course/217/promo and https://stepik.org/course/1547/promo
There's a Coursera version for the language-challenged with moar handholding and moar content https://www.coursera.org/specializations/data-structures-algorithms
 
You guys know any good, simple and engaging tutorial sets for getting the hang of Python and Lua my autistic ADHD ass won't drop in a few minutes? I kinda want to learn both, Python for writing up quick dumb shit as an alternative or addition to Batch and AutoHotkey and Lua for mainly getting the hang of why ComputerCraft was such a big deal, and maybe making some cool shit with it, like automating factories and whatnot.
I see https://automatetheboringstuff.com/ get suggested a lot for Python. I already had a decent grasp of programming overall before learning Python, so I just used the standard documentation. But I have heard a lot of good things about this online book, and it looks good to me at a glance. It's free to read in full online, and the video version is a paid Udemy thing iirc.
 
Memory management techniques all have trade-offs between efficiency, security, and ease of use. Languages that use garbage collection basically handle freeing allocated memory for you once it's no longer referenced, but they are inefficient compared to well-written C code where everything is done manually but in a hopefully sane way. Each is better for certain tasks, depending on resource requirements and the typical use case.
To expand on this, I would suggest that projects above a certain size and level of complexity should default to garbage collection, and only go manual memory allocation if you have a specific, articulatable reason.

When object lifetimes get too complicated, it becomes a lot harder to write manual memory management code that hugs those object lifetimes properly, keeping objects around only as long as they're needed and no longer.

So you'll see C/C++ programmers just throw their hands up and allocate everything at the toplevel (or at a much higher level than necessary).

So they don't have memory leaks, but the C program develops a memory inefficiency unique to C programs; just keeping the process memory footprint unnecessarily high the whole time.

Garbage collection trades some CPU time for a lower memory footprint. Or, if you take more risks with the memory management and try to manage it more tightly, you get huge security risks instead. Many if not most of the big headline-grabbing security bugs for most of my lifetime have been because of C programs mismanaging memory.
~ is the bitwise complement.

For example, let's take the number 88. In binary (to 8 bits), it is 01011000. To take the complement of it, we flip each bit (i.e. 01011000 -> 10100111). So if uint n = 88, ~n == 167 if I did the base conversion properly in my head.
Huh, now I just realized that I don't think I've ever specifically used that operator. Does it not flip the bits for the whole width of the uint? Which is probably like sizeof(uint) == 8 nowadays?
 
Garbage collection trades some CPU time for a lower memory footprint.
I think that for the vast majority of programs garbage collection is not (that) bad, and it's a nice compromise between focusing on the actual program logic versus keeping a record of all the allocated memory, where it was allocated, who owns it, etc.
Of course, if it's for kernel development/microcontrollers then there is nothing better than C (no, I refuse to consider Rust as an alternative, Brendan Eich got involved in it: the same man that conceived JavaScript).
 
To expand on this, I would suggest that projects above a certain size and level of complexity should default to garbage collection, and only go manual memory allocation if you have a specific, articulatable reason.

When object lifetimes get too complicated, it becomes a lot harder to write manual memory management code that hugs those object lifetimes properly, keeping objects around only as long as they're needed and no longer.

So you'll see C/C++ programmers just throw their hands up and allocate everything at the toplevel (or at a much higher level than necessary).

So they don't have memory leaks, but the C program develops a memory inefficiency unique to C programs; just keeping the process memory footprint unnecessarily high the whole time.

Garbage collection trades some CPU time for a lower memory footprint. Or, if you take more risks with the memory management and try to manage it more tightly, you get huge security risks instead. Many if not most of the big headline-grabbing security bugs for most of my lifetime have been because of C programs mismanaging memory.
These are really good points; I 100% agree. My chat client, for example, was written in Go because I would rather shoot myself than write all of it in C, as much as I love the language. I would have to worry about Windows compatibility (especially with whatever interface library I end up choosing) and handling all of that chat data without introducing vulnerabilities. I'm sure I could do it, but there's really no good reason for me to do such a thing. Rust probably would have been a decent compromise, but I wanted to enjoy the process of writing the program ;)

Huh, now I just realized that I don't think I've ever specifically used that operator. Does it not flip the bits for the whole width of the uint? Which is probably like sizeof(uint) == 8 nowadays?
Yeah, I do a lot of work with bits when writing console exploits and the like, but I can't recall ever using the operator even once. It's basically just a bitwise NOT, and I suppose there's a chance I used one of those when writing raw assembly.

It would flip every bit the type takes up; good catch. The type name was added in an edit, and I wasn't sure if writing out uint8_t would be confusing, but I wanted to make the distinction between signed and unsigned. I should fix it to be 100% clear.

Edit: Looks like I can't fix it. Oh well.
It is an important distinction though. Hopefully future readers will read up to here.
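If it helps future readers, here's the same flip at two different widths; the masks just stand in for the C type widths (a Python sketch of the arithmetic, since Python ints aren't fixed-width):
Python:
n = 88
print((~n) & 0xFF)          # 167         -> what an 8-bit unsigned type (uint8_t) gives you
print((~n) & 0xFFFFFFFF)    # 4294967207  -> what a 32-bit unsigned int gives you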
 
Yeah, I do a lot of work with bits when writing console exploits and the like, but I can't recall ever using the operator even once. It's basically just a bitwise NOT, and I suppose there's a chance I used one of those when writing raw assembly.
I think every instance I've used it has been in conjunction with a bitwise AND. So a &= ~b does the opposite of a |= b.
 
I hate to be that guy, but the ~ operator in C is confusing me. Apparently, it’s one’s complement but it’s also some kind of mask?
I can't recall ever using the operator even once.
A reasonably common use-case for ~ would be clearing a specific bit of a bit field, as the unsetBit member function does in this class:

C++:
class BitField
{
public:
    bool isBitSet  (int index) const { return (_bits & (1 << index)) != 0; }
    void setBit    (int index)       { _bits |= (1 << index); }
    void unsetBit  (int index)       { _bits &= ~(1 << index); }
    void toggleBit (int index)       { _bits ^= (1 << index); }

private:
    int _bits = 0;
};

In this example, index would be a number between 0 and 31, so the result of (1 << index) would look like this in binary:
Code:
1 << 0   ==   00000000 00000000 00000000 00000001
1 << 1   ==   00000000 00000000 00000000 00000010
1 << 2   ==   00000000 00000000 00000000 00000100
1 << 3   ==   00000000 00000000 00000000 00001000
...
1 << 31  ==   10000000 00000000 00000000 00000000
Each of these numbers would be called a "bitmask", because it is applied to the stored bit field using one of the bitwise AND/OR/XOR operators.

In the unsetBit case, the bitmask is inverted before it is applied:
Code:
  1 << 5  ==  00000000 00000000 00000000 00100000
~(1 << 5) ==  11111111 11111111 11111111 11011111
 
I'm frothing at the mouth for mdspan to actually become fully standardized. I've been wanting to make my own math library for fun, and this will make tensors a breeze.
 
Apparently, it’s one’s complement
The name is not particularly helpful in understanding what it actually is... the names "ones' complement" and "two's complement" are kind of terrible, and you'd probably be better off just learning what they are and trying to avoid thinking about whether those names actually mean anything. In particular, if you know how to do a "ones' complement", you know absolutely nothing by extension about how to do a "two's complement", and vice versa.

Knuth, Donald E. "4.1. Positional Number Systems". The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed.).
"A two's complement number is complemented with respect to a single power of 2, while a ones' complement number is complemented with respect to a long sequence of 1s."

In other words, they're completely different things, and the fact that their names are similar is more misleading than anything.
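To make Knuth's distinction concrete with the 88 example from earlier, at 32 bits (a quick Python sketch; the width is just for illustration):
Python:
N, x = 32, 88
ones_complement = (2**N - 1) - x     # complement with respect to a run of 1s: flips every bit
twos_complement = (2**N - x) % 2**N  # complement with respect to a power of two: what negation uses

print(ones_complement)               # 4294967207 == 0xFFFFFFA7, i.e. ~88 at 32 bits
print(twos_complement)               # 4294967208 == 0xFFFFFFA8, i.e. ones' complement + 1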
 