Programming thread

GDB isn't too bad but printing to the terminal is all fucked up: GDB prints the string starting all the way to the left instead of buffering it for the (gdb) prompt (pic related)
This looks like a line ending issue to me, not a GDB problem. Check you're definitely writing a \n to stdout. If you can share your code, I can tell you if you're doing something wrong.

Kate got enshittified into some shit like Kdevelop/Kwrite/Kwhat-ever-the-fuck
I don't know what you mean; Kate's been the same for years. KWrite is the cut-down version, and KDevelop is an IDE using a similar interface. Kate should be fine for writing asm.
 
This looks like a line ending issue to me, not a GDB problem.
Not at my PC but the string is
msg: "Hello, world!", 10
10 being decimal for \n
Using msg: 10, "Hello, world!", 10 puts the string under (gdb) as expected, but that's a janky workaround and makes normal use of any program fucked up. Also, I'm not sure how it looks like a line ending error, when you'd expect that to look like
Hello, world!(gdb)

I use codelldb in codium for C/C++; it should work with assembly as well
I skimmed through the GitHub and it says it has memory view but doesn't mention registers. Does it have register view?
 
msg: "Hello, world!", 10
I don't know the precise dialect, but this looks sus. Is it putting a zero after the exclamation mark?

Also, I'm not sure how it looks like a line ending error when you'd expect that to look like
Hello, world!(gdb)
Just tested it; it looks like GDB always moves the prompt back to the left.

Here's a working version for GAS:
Code:
.intel_syntax noprefix
.data
str:
        .asciz "Hello world!\n"
str_end:
str_length:
        .quad str_end - str - 1                 # .asciz appends a NUL; exclude it from the count

.text
.global main
main:
        mov rdi, 1                              # fd 1 = stdout
        lea rsi, [rip+str]                      # buffer
        mov rdx, QWORD PTR [rip+str_length]     # byte count
        mov rax, 1                              # sys_write
        syscall

        xor rax, rax                            # return 0
        ret
 
I'm learning 64-bit assembly as a sort of morbid curiosity/hobby and am struggling to find a good environment. I had a pretty good environment set up a few years ago when I was doing 32-bit, but those tools either don't exist anymore/severely changed/are abandoned by the devs and never got updated to 64-bit software. What I had was Kate, which was a good basic IDE that supported asm syntax highlighting and a terminal where I could launch the Insight debugger. Kate got enshittified into some shit like Kdevelop/Kwrite/Kwhat-ever-the-fuck and Insight has been abandoned since 2009. What I'm using now is just a basic ass text editor with a custom xml theme for syntax highlighting and I have to use the GNU Debugger on the CLI. GDB isn't too bad but printing to the terminal is all fucked up: GDB prints the string starting all the way to the left instead of buffering it for the (gdb) prompt (pic related)
View attachment 6777148 - GDB trying to print Hello, world!

So, does anyone know a decent IDE or graphical front end for GDB that I can use specifically for assembly?
Check out radare2 or ghidra for debugging stuff
 
If you look at *[str+13], it's going to be 0xA, or 10 decimal. It's just the number that represents \n in ASCII.
Yeah, I know that. But if you put a string literal in your asm does it automatically append a zero, like C does? And how are you calculating your string size?
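On the "like C does" part: in C (and C++) a string literal always gets a NUL appended, which is why sizeof and strlen disagree by one. Quick illustration, msg being just an example name:
C++:
#include <cstdio>
#include <cstring>

int main() {
    const char msg[] = "Hello, world!\n";
    // sizeof counts the implicit terminating NUL, strlen stops before it
    std::printf("sizeof = %zu, strlen = %zu\n", sizeof(msg), std::strlen(msg));  // 15 vs 14
    return 0;
}
IIRC NASM-style db only emits the bytes you wrote, no implicit zero, so the length question comes down to how you compute it (e.g. $ - msg).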
 
This is probably a stupid question for multiple reasons, but I don't know where to start looking for an answer. For context, I'm asking to improve my understanding of memory management, in case I go through with writing a compiler, like I had mentioned upthread. In C++, destructors are called automatically when an object goes out of scope, right? RAII and all that.

How is the lifetime of an object determined, and why isn't the principle of RAII just applied to dynamic memory? I understand that this is probably a naive question, given that tracing GC seems to be the popular implementation of memory management, but why can't I just malloc some memory and expect the compiler to sweep it up when the memory goes out of scope? My assumption is that this gets into refcounting the moment that there's more than one reference to the memory, and refcounting introduces processing overhead, but couldn't that be avoided by using a smart pointer implementation that checks if the memory has been freed and throws a recoverable error if the resource is no longer available?

I feel like I'm missing something really obvious given that this isn't the norm. Like, is there any reason I can't just write C++ code as if I'm writing C code, but replace malloc() and free() with smart allocators? If so, why the proliferation of memory managed languages??
 
Actually, I think I got it wrong before.

We need a purely functional algorithm with Hindley-Milner typing (OCaml):
Code:
type parity =
  | Even
  | Odd

let flip_parity = function
  | Even -> Odd
  | Odd  -> Even

let rec parity_of_int = function
  | 0            -> Even
  | 1            -> Odd
  | n when n < 0 -> parity_of_int (n * -1)
  | n            -> parity_of_int (n - 1) |> flip_parity
This is probably a stupid question for multiple reasons, but I don't know where to start looking for an answer. For context, I'm asking to improve my understanding of memory management, in case I go through with writing a compiler, like I had mentioned upthread. In C++, destructors are called automatically when an object goes out of scope, right? RAII and all that.

How is the lifetime of an object determined, and why isn't the principle of RAII just applied to dynamic memory? I understand that this is probably a naive question, given that tracing GC seems to be the popular implementation of memory management, but why can't I just malloc some memory and expect the compiler to sweep it up when the memory goes out of scope? My assumption is that this gets into refcounting the moment that there's more than one reference to the memory, and refcounting introduces processing overhead, but couldn't that be avoided by using a smart pointer implementation that checks if the memory has been freed and throws a recoverable error if the resource is no longer available?

I feel like I'm missing something really obvious given that this isn't the norm. Like, is there any reason I can't just write C++ code as if I'm writing C code, but replace malloc() and free() with smart allocators? If so, why the proliferation of memory managed languages??
You're sorta on the right track with reference counting, although reference counting doesn't solve the problem if there are cycles in your objects. So A points to B, B points to C and C points to A. All three will maintain at least one reference, even if there are no outside references to any of them and no code could possibly manipulate them anymore.

At that point you need true garbage collection.
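To make the cycle concrete, here's a rough C++ sketch using std::shared_ptr for the refcounting (Node is just a made-up type for illustration):
C++:
#include <memory>

struct Node {
    std::shared_ptr<Node> next;   // strong reference keeps the pointee alive
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;   // A -> B
    b->next = a;   // B -> A: cycle, both refcounts are now 2

    // When a and b go out of scope, each count only drops to 1,
    // so neither Node is ever destroyed: a leak refcounting can't see.
    // A tracing GC (or making one link a std::weak_ptr) handles it.
    return 0;
}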
 
expect the compiler to sweep it up
Sweep it up janny compiler, sweep it up!
Like, is there any reason I can't just write C++ code as if I'm writing C code, but replace malloc() and free() with smart allocators? If so, why the proliferation of memory managed languages??
If you're gonna shit memory all over the place, you might as well use a GC language like Go. If you're going to use C, do it right: write a custom allocator for your use case and manage lifetimes at the level of systems, not individual allocations.
 
Wrong, child :tomlinson:

The right way is clear:
C-like:
private bool IsEven(int number){
    // TODO: Negative numbers
    switch (number) {
    case 1:
        return false;
    case 2:
        return true;
    case 3:
        return false;
    case 4:
        return true;
    case 5:
        return false;
    case 6:
        return true;
    case 7:
        return false;
    case 8:
        return true;
    case 9:
        return false;
    case 10:
        return true;
    case 11:
        return false;
    case 12:
        return true;
    case 13:
        return false;
    case 14:
        return true;
    case 15:
        return false;
    case 16:
        return true;
    ...
    }
}
This sort of innovation is why the pajeets are taking all the entry level programming jobs from us. Y'all need to step your game up.

(In reality, your intuition is correct. Simple predicates like this should essentially be descriptive wrappers for a straightforward boolean expression; no more than 3 or 4 lines if you need to do any setup. In languages like C, you could even do return !(number % 2); since 0 is falsy.)
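i.e. something in the spirit of (C++ here, the name is arbitrary):
C++:
// The entire predicate; works for negatives too, since -4 % 2 == 0
bool isEven(int number) {
    return number % 2 == 0;
}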

WRONG STALKER BABY CHILD THAT CODE IS SUPER NIGGERLICIOUS

C#:
public class EvenNumbers
{
    public int[] numbers = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 }; // and so on

    public bool isEven(int number)
    {
        bool b = false;
        for (int i = 0; i < numbers.Length; i++)
        {
            if (numbers[i] % 2 == 0 && number == numbers[i])
            {
                b = true;
                break;
            }
        }
        return b;
    }
}

C#:
EvenNumbers nigger = new EvenNumbers();

Wrote this on my phone but it should work the best.

That's a perfect case for C++ constexpr and templates.
C++:
#include <algorithm>
#include <array>
#include <cstddef>
#include <limits>
#include <type_traits>

// Build the table at compile time; lives outside the class so it's
// already defined when the static member below is initialized.
template<typename T, std::size_t N>
constexpr std::array<T, N> generateTableOfEvenNumbers() {
  std::array<T, N> table{};
  for (std::size_t idx = 0; idx < N; ++idx)
    table[idx] = static_cast<T>(2 * (idx + 1));   // 2, 4, 6, ...
  return table;
}

template<typename T>
  requires(std::is_integral_v<T>)
class TableOfEvenNumbers {
public:
  constexpr static bool isEven(T number) {
    return std::find(arr.begin(), arr.end(), number) != arr.end();
  }

private:
  constexpr static T Limit = std::numeric_limits<T>::max();
  // Limit/2 entries: only sane for small T (int8_t/int16_t); for int this
  // would be a multi-gigabyte compile-time array.
  using ArrayType = std::array<T, Limit / 2>;
  constexpr static inline ArrayType arr =
      generateTableOfEvenNumbers<T, Limit / 2>();
};

And now you can just
C++:
TableOfEvenNumbers<int>::isEven(x);

Idk if this would compile, but with a little bit of work it should.
Small brain: Use modulo
Medium brain: Write a recursive function
Big brain: Create a giant if/else chain or switch statement
Galaxy brain: Use reflection to generate a giant if/else chain or switch statement from INT_MIN to INT_MAX
 
but why can't I just malloc some memory and expect the compiler to sweep it up when the memory goes out of scope?
Wellllll... https://www.man7.org/linux/man-pages/man3/alloca.3.html
(do not do this)

It's very common to transfer malloc'ed pointers across scopes by returning them, so I would think "RAII-like" mallocs would usually not be what you want.

There's nothing stopping you from rolling your own "RAII malloc" though: you can set up an arena allocator (or equivalent) in your local scope and trash it on scope exit.
Windows will actually let you easily create your own private heap that works the same way as "the" CRT malloc. Not sure about Linux.
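Rough sketch of the scope-local arena idea; all names here are made up, nothing from a real library:
C++:
#include <cstddef>
#include <cstdlib>
#include <new>

// Minimal bump allocator: everything carved out of it dies together
// when the Arena goes out of scope (RAII does the single free).
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : base_(static_cast<char*>(std::malloc(capacity))), capacity_(capacity) {
        if (!base_) throw std::bad_alloc{};
    }
    ~Arena() { std::free(base_); }   // one free for the whole scope

    Arena(const Arena&) = delete;
    Arena& operator=(const Arena&) = delete;

    void* alloc(std::size_t n) {
        // round up so the next allocation stays suitably aligned
        n = (n + alignof(std::max_align_t) - 1) & ~(alignof(std::max_align_t) - 1);
        if (used_ + n > capacity_) throw std::bad_alloc{};
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

private:
    char*       base_;
    std::size_t capacity_;
    std::size_t used_ = 0;
};

void do_work() {
    Arena arena(1 << 20);   // 1 MiB of scratch space for this scope
    int* numbers = static_cast<int*>(arena.alloc(100 * sizeof(int)));
    numbers[0] = 42;
    // ... no individual frees: the whole arena is released when do_work returns
}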
 
This is probably a stupid question for multiple reasons, but I don't know where to start looking for an answer. For context, I'm asking to improve my understanding of memory management, in case I go through with writing a compiler, like I had mentioned upthread. In C++, destructors are called automatically when an object goes out of scope, right? RAII and all that.

How is the lifetime of an object determined, and why isn't the principle of RAII just applied to dynamic memory? I understand that this is probably a naive question, given that tracing GC seems to be the popular implementation of memory management, but why can't I just malloc some memory and expect the compiler to sweep it up when the memory goes out of scope? My assumption is that this gets into refcounting the moment that there's more than one reference to the memory, and refcounting introduces processing overhead, but couldn't that be avoided by using a smart pointer implementation that checks if the memory has been freed and throws a recoverable error if the resource is no longer available?

I feel like I'm missing something really obvious given that this isn't the norm. Like, is there any reason I can't just write C++ code as if I'm writing C code, but replace malloc() and free() with smart allocators? If so, why the proliferation of memory managed languages??
Scopes are a concept of your programming language. Malloc doesn't know such a thing as a scope exists. If you want to guarantee your program will never access deallocated memory again, malloc wouldn't be allowed to allocate at that unused region again -- effectively leaking memory.
 
Just have an AI randomly generate self compiled articulate main fraim architecture to tell if the floating quantum point block chain hash code added to the gpt main artificial self architecture code then it can self awarely tell if the number is even.

Ask @bearycool, he'll explain more.
explain my own schizophrenia? I don't think so, how dare you sir! :story:

Yeah, so if you try to compile this directly into the console you're going to just create a true Random Number Generator-- really great for API Keys, or I guess Key-chain-hashes? I mean, you won't need a 12-word mnemonic, but you're definitely not going to remember that RNG key encryption unless you ask an AI entity to encrypt it further with a cryptographic tool like ECC, RSA, something, anything in GnuPG/GPG. That way you have the private key for yourself, which is truly random, and then you have the public encrypted version, which itself is also pretty damn truly random.

Meta-Mask Self-Custody has this AI chatbot, that I created, whom I call "Finn" that helps in learning about how to become a self-custodian for your Bitcoin/Ethereum wallets so no one can back door enter and siphon your funds. I'm trying to get so the AI, which cares not for coin like QuadrigaCX's bullshit that happened that took $250,000,000 from 115,000 victims by pretending to be a Custodian: yeah, that's what I'm trying to make real for everyone: financial shit being so secure, no one can back enter into it somehow.

There's also like just a GPT that creates the website version of the chatbot for your specific needs as a mock up before you bring up that code into a production environment like Meta Mask.



that way you're not the paranoid group of people that use Cash-App and change their Cash-app wallet shit every 24 hours, because you can be confident that the key you generated is random, but it can be used as a public PGP facing signature like for use in Proton.me accounts that both @Marvin and myself use to secure our identities via email UID.
 
How is the lifetime of an object determined, and why isn't the principle of RAII just applied to dynamic memory? I understand that this is probably a naive question, given that tracing GC seems to be the popular implementation of memory management, but why can't I just malloc some memory and expect the compiler to sweep it up when the memory goes out of scope? My assumption is that this gets into refcounting the moment that there's more than one reference to the memory, and refcounting introduces processing overhead, but couldn't that be avoided by using a smart pointer implementation that checks if the memory has been freed and throws a recoverable error if the resource is no longer available?
Lifetime has a formal definition in C and C++, if that's what you mean. The C one is much simpler, so it might be a better starting point. Note how it's typically tied to scope, but not always.

As for why the compiler can't just sweep it all up: Determining at compilation time when objects become unnecessary is impossible in general because of the halting problem. You can solve this by restricting what programs the compiler accepts, like your (rather impractical) "only one reference may be held at any time" scheme; Rust works like a smarter version of this, and also provides optional GC (in the form of refcounting) to handle the remaining programs. There's also a common inverse of this, where the compiler tries its best to determine lifetimes automatically, and then lets a (typically tracing) GC handle the ones it can't figure out ("escape analysis").

I can't really make sense of your smart pointer scheme, but afaict it has no way to detect when lifetimes end (so it can't replace refcounting) and it also sounds like it'd have overhead on every access, which is a lot. For comparison, refcounting only has overhead on assignment and tracing GCs typically have small overhead during assignment and large infrequent overhead on allocation.

Sorry if I'm retreading basics here, but just from your post it's not really clear what level you're at.
 
I feel like I'm missing something really obvious given that this isn't the norm. Like, is there any reason I can't just write C++ code as if I'm writing C code, but replace malloc() and free() with smart allocators? If so, why the proliferation of memory managed languages??
To add to the other answers: that's what new/delete do. new returns a pointer to an initialized object, and delete calls the dtor and frees the memory.

RAII, even though it's my favorite language feature, is not a silver bullet. Without enforced lifetimes like in Rust, you can still double free or leak memory.
However, you can definitely do a lot by just sticking to std containers and smart pointers and having it enforced that way.
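For the "write C++ like C but with smart allocators" angle, a minimal sketch of what that tends to look like in practice (Packet and make_packet are just invented for the example):
C++:
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Packet {
    std::string payload;
};

std::unique_ptr<Packet> make_packet(std::string payload) {
    // make_unique stands in for malloc/new; ownership travels with the return value
    return std::make_unique<Packet>(Packet{std::move(payload)});
}

int main() {
    std::vector<std::unique_ptr<Packet>> queue;   // the container owns the packets
    queue.push_back(make_packet("hello"));
    queue.push_back(make_packet("world"));
    // no delete/free anywhere: destructors run when queue goes out of scope
    return 0;
}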
 