Programming thread

It's pretty rare, but malloc can fail to allocate memory. If any problems do arise, malloc and co. will return a NULL pointer; be sure to check your pointer isn't NULL before proceeding with using the memory you tried to allocate.
I assumed that he knew this, but yeah, it's worth mentioning.

If you're trying to allocate a huge chunk of memory and you fail, then yeah, it's probably appropriate to try to do something graceful about that. But depending on what you're doing, in some cases the only appropriate solution is to crash and burn spectacularly.
 
If you're trying to allocate a huge chunk of memory and you fail, then yeah, it's probably appropriate to try to do something graceful about that.
Every modern OS overcommits. You can ask for a terabyte of RAM straight up and get it. Except your process doesn't have ownership of it. The OS won't assign pages to your process until you actually write to a page.

Which means you can't do anything graceful in true out-of-memory situations. It'll come when you're writing something to memory, and then bam, your program is in an undefined state and the only thing you can do is terminate execution.
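
To make that concrete, here's a minimal sketch; it assumes a 64-bit Linux box with overcommit enabled (the default heuristic may still refuse truly absurd requests, so treat the numbers as illustrative):
C:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void){
    size_t huge = (size_t)1 << 40; //ask for 1 TiB

    char *p = malloc(huge);
    //on an overcommitting system this can succeed even though the
    //machine has nowhere near 1 TiB of RAM + swap
    printf("malloc(1 TiB): %s\n", p ? "succeeded" : "failed");

    if(p){
        //pages are only backed by real memory once written to; touch
        //them all and the OOM killer eventually SIGKILLs some process
        //(likely this one) - no error return, no chance to clean up
        memset(p, 1, huge);
        free(p);
    }
    return 0;
}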
 
It's pretty rare, but malloc can fail to allocate memory. If any problems do arise, malloc and co. will return a NULL pointer; be sure to check your pointer isn't NULL before proceeding with using the memory you tried to allocate.
This is good advice in general, but IIRC in practice it depends on the OS. malloc only fails on Linux if it runs out of virtual memory space for your process, which can only really happen if you do something stupid. Windows has a quota system instead, so malloc can fail.

Every modern OS overcommits. You can ask for a terabyte of RAM straight up and get it. Except your process doesn't have ownership of it. The OS won't assign pages to your process until you actually write to a page.
Sort of. Windows will refuse to give you 1TB of RAM. It only doles out pages once they're called for, that's true, but this is just an optimization to keep the page-scrubbing thread happy. It doesn't actually overcommit - there's no OOM killer on Windows.
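
The distinction is visible in the VirtualAlloc flags; a rough sketch from memory of the Win32 API (untested, numbers illustrative):
C:
#include <windows.h>
#include <stdio.h>

int main(void){
    size_t huge = (size_t)1 << 40; //1 TiB

    //MEM_RESERVE only claims address space: cheap, typically succeeds
    void *reserved = VirtualAlloc(NULL, huge, MEM_RESERVE, PAGE_NOACCESS);
    printf("reserve 1 TiB: %s\n", reserved ? "ok" : "refused");

    //MEM_COMMIT is charged against the commit limit (RAM + pagefile)
    //immediately, so this fails up front rather than overcommitting
    void *committed = VirtualAlloc(NULL, huge,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    printf("commit 1 TiB:  %s\n", committed ? "ok" : "refused");

    if(reserved)  VirtualFree(reserved, 0, MEM_RELEASE);
    if(committed) VirtualFree(committed, 0, MEM_RELEASE);
    return 0;
}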
 
Thank you for the answers, everyone. I was asking because I wanted to try my hand at making a dynamic array, and after much frustration, it seems I have succeeded.
C:
//DemetiCa.c

#include <stdio.h>
#include <stdlib.h>

int main(){

    //use a collatz sequence to exercise dynamic memory

    int stnum = 45, count = 1; //stnum is the start of the collatz

    int *holdCol = malloc(sizeof(int));
    if(holdCol == NULL){
        printf("Memory allocation failure. Terminating program\n");
        return 1;
    }
    *holdCol = stnum;

    //basic collatz loop
    while(stnum != 1){
        if(stnum % 2 == 1)
            stnum = (stnum*3)+1;
        else
            stnum /= 2;

        //grow the array by one element; realloc into a temporary so
        //the original block isn't leaked if the call fails
        int *tmp = realloc(holdCol, (count + 1) * sizeof(int));

        if(tmp == NULL){
            //abort abort abort
            printf("Memory allocation failure. Terminating program\n");
            free(holdCol);
            return 1;
        }else{
            holdCol = tmp;
            holdCol[count++] = stnum;
        }
    }//end while loop

    //just to check if the dynamic array worked as intended
    for(int i = 0; i < count; i++)
        printf("%d\n", *(holdCol+i));

    free(holdCol);

    return 0; //always important
}
 
Aside from needing to free memory after malloc, what are some other issues with using malloc?

Memory fragmentation, dangling pointers, indexing past the end of the allocated block, and handling out-of-memory errors. Even mission-critical code often contains security vulnerabilities that trace back to memory-handling bugs in C.

One piece of advice I was given, but did not listen to for years, is that malloc/free should never appear inside production code other than an encapsulated memory manager. The thing is, once you start trying to encapsulate memory management and handle errors gracefully, you pretty quickly end up in a place where you're practically reinventing C++.
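
The usual starting point for that encapsulation is an xmalloc-style wrapper, sketched below: every allocation funnels through one place, so the failure policy (log and abort here, but it could just as well retry or purge a cache) isn't scattered across the codebase. Names follow the common convention, not any particular library:
C:
#include <stdio.h>
#include <stdlib.h>

//every allocation in the program goes through here, so the
//out-of-memory policy lives in exactly one place
void *xmalloc(size_t size){
    void *p = malloc(size);
    if(p == NULL){
        fprintf(stderr, "out of memory (%zu bytes)\n", size);
        abort(); //the "crash and burn" policy; a server might retry instead
    }
    return p;
}

void *xrealloc(void *old, size_t size){
    void *p = realloc(old, size);
    if(p == NULL){
        fprintf(stderr, "out of memory (%zu bytes)\n", size);
        abort();
    }
    return p;
}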
 
This is good advice in general, but IIRC in practice it depends on the OS.
Yeah, the C spec leaves a fair bit of room for interpretation in a number of memory-related areas like this. As such, in practice a programmer should always cover his ass.

malloc only fails on Linux if it runs out of virtual memory space for your process, which can only really happen if you do something stupid.
Not super uncommon if you are loading AI models—a problem that will only become more common. Then again, one could argue that falls under doing something stupid :)
 
Every modern OS overcommits. You can ask for a terabyte of RAM straight up and get it. Except your process doesn't have ownership of it. The OS won't assign pages to your process until you actually write to a page.

Which means you can't do anything graceful in true out-of-memory situations. It'll come when you're writing something to memory, and then bam, your program is in an undefined state and the only thing you can do is terminate execution.
Somebody already mentioned Windows, but OpenBSD doesn't overcommit either, and Linux can be easily configured not to overcommit via sysctl vm.overcommit_memory=2. Properly handling OOM is hard, and a big reason is that C/POSIX's only mechanism to signal low memory conditions is malloc returning null, at which point you're probably so low on memory that there's indeed nothing you can do anymore - unless you've prepared for the necessary contortions in advance. Linux has some opt-in cgroups stuff to inform processes that system memory is about to get scarce, and I believe Windows has a similar mechanism, but I rarely see them used in practice.
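
One flavor of "preparing in advance" that works on non-overcommitting systems is the emergency-reserve trick: allocate a rainy-day block at startup and release it the first time an allocation fails, buying just enough headroom to flush state and exit cleanly. A minimal sketch, with sizes and names purely illustrative:
C:
#include <stdio.h>
#include <stdlib.h>

static void *emergency_reserve; //released on first allocation failure

void *reserve_malloc(size_t size){
    void *p = malloc(size);
    if(p == NULL && emergency_reserve != NULL){
        //free the rainy-day block and retry once; the headroom is
        //for flushing buffers and exiting, not for soldiering on
        free(emergency_reserve);
        emergency_reserve = NULL;
        p = malloc(size);
    }
    return p;
}

int main(void){
    emergency_reserve = malloc(1 << 20); //1 MiB rainy-day fund
    //... program proper, allocating via reserve_malloc() ...
    free(emergency_reserve);
    return 0;
}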
 
Code:
int *holdCol
Is it just me (and I'm not dogging on you specifically @Chiang Kai-shek, this is a very common style), or does anyone else absolutely hate this style of declaring int variables in C?

I don't do much C nowadays, but when I do, I hate declaring pointer variables that way because I like to think in terms of "this is my variable and its type is int*".

Not "this is my pointer variable and the type of one of its elements is int".

It just messes with my thinking when I get around to doing pointer arithmetic and other fuckery.

Am I alone here?

Important caveat:
I never do this:
Code:
int *foo, *bar;
I just do multiple declarations if I need to.
 
does anyone else absolutely hate this style of declaring int variables in C?
I agree, I'm not a fan. int and * together represent the type of holdCol. When I read code with a strange coding style, I'm not above cloning it, loading it into CLion and auto-reformatting the whole damn thing so I can read it fluently.

I do the same when I see indenting conventions like GNU or Horstmann or the deeply, deeply cursed Whitesmiths:
Code:
if (condition)
    {
    doSomething();
    }
 
Aside from needing to free memory after malloc, what are some other issues with using malloc?
In addition to everyone else's good answers, I'll toss one in: the performance is unpredictable, since you don't know exactly how long malloc is going to spend rummaging through its data structures to find you a memory block. Not that it matters for most (nearly all) applications: memory allocators are typically highly optimized and field-tested, and the overhead rarely adds up to enough time to matter. But when it does matter, programmers will sometimes do "arena allocation", where they grab a big chunk of memory upfront and then parcel it out themselves one way or another.
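
To make that concrete, here's a bare-bones sketch of the idea (fixed capacity, no per-object freeing, names made up); real arena allocators add growth, alignment options, and scoping, but the core really is just a pointer bump:
C:
#include <stdlib.h>
#include <stddef.h>

typedef struct {
    char  *base; //one big upfront block
    size_t used;
    size_t cap;
} Arena;

int arena_init(Arena *a, size_t cap){
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

//each "allocation" is just a pointer bump - constant time, no
//rummaging through free lists
void *arena_alloc(Arena *a, size_t size){
    size = (size + 15) & ~(size_t)15; //keep 16-byte alignment
    if(a->used + size > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

//individual objects are never freed; the whole arena goes at once
void arena_free(Arena *a){
    free(a->base);
    a->base = NULL;
}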
 
Which means you can't do anything graceful in true out-of-memory situations. It'll come when you're writing something to memory, and then bam, your program is in an undefined state and the only thing you can do is terminate execution.
Reminds me that if you're using Linux, and especially if your hardware is limited (e.g. on an SBC), you should configure your system to allow the almighty Alt+SysRq+F combination for when one or more processes hog memory and the system starts thrashing:


(Usually such things will be off by default for security reasons)
 
Is it just me (and I'm not dogging on you specifically @Chiang Kai-shek, this is a very common style), or does anyone else absolutely hate this style of declaring int variables in C?

I don't do much C nowadays, but when I do, I hate declaring pointer variables that way because I like to think in terms of "this is my variable and its type is int*".

Not "this is my pointer variable and the type of one of its elements is int".

It just messes with my thinking when I get around to doing pointer arithmetic and other fuckery.

Am I alone here?

Important caveat:
I never do this:
Code:
int *foo, *bar;
I just do multiple declarations if I need to.
This is a pet peeve of mine too.

My three "WTF were they thinking?" gotchas in C are:
* Default fallthrough in switch statements
* In int* foo, bar; foo and bar are different types (see the snippet below)
* There's both i++ and ++i, and they behave slightly differently
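
For anyone who hasn't been bitten by the second one, a minimal demonstration (variable names made up):
C:
int main(void){
    int* foo, bar; //the * binds to foo, not to the type
    int x = 0;
    foo = &x;      //foo has type int*
    bar = 5;       //bar is a plain int, not a pointer
    return *foo + bar;
}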
 
Is it just me (and I'm not dogging on you specifically @Chiang Kai-shek, this is a very common style), or does anyone else absolutely hate this style of declaring int variables in C?

I don't do much C nowadays, but when I do, I hate declaring pointer variables that way because I like to think in terms of "this is my variable and its type is int*".

Not "this is my pointer variable and the type of one of its elements is int".

It just messes with my thinking when I get around to doing pointer arithmetic and other fuckery.

Am I alone here?

Important caveat:
I never do this:
Code:
int *foo, *bar;
I just do multiple declarations if I need to.
I prefer to do it the way you hate, but it's sort of muscle memory at this point from stuff like kernel work. In the case of your caveat, it's arguably the way I prefer that's less ambiguous, given how C parses these declarations.

But of all stylistic decisions (spaces vs tabs, opening curly braces on a new line, all the garbage in the GNU style guide, etc.), this seems the most inconsequential. Go's formatter forces the way you prefer, but I can make peace with that a lot easier than trash like rustfmt and its list of terrible decisions it shoves down your throat with the help of the compiler.
 
Go's formatter forces the way you prefer, but I can make peace with that a lot easier than trash like rustfmt and its list of terrible decisions it shoves down your throat with the help of the compiler.
Lol, Go's formatter also insists on tabs, which torches me up inside. I specifically distributed an in-company pre-commit script to my coworkers that does all the gofmt stuff, except it also transmutes the tabs into spaces.
 
There's both i++ and ++i, and they behave slightly differently
Use-and-then-increment is a common enough pattern that I could see wanting to call it out for the compiler in the days when compilers were less smart. All that stuff like
C:
array[i++] = next_val;
while (*i++ = *j++);

I imagine these days the compiler would figure it out even if you just wrote
C:
while (whatever)
{
    array[i] = next_val;
    /* more lines */
    i++;
}
 
Use-and-then-increment is a common enough pattern that I could see wanting to call it out for the compiler in the days when compilers were less smart. All that stuff like
C:
array[i++] = next_val;
while (*i++ = *j++);

I imagine these days the compiler would figure it out even if you just wrote
C:
while (whatever)
{
    array[i] = next_val;
    /* more lines */
    i++;
}
There's nothing to figure out; there's no optimization that it lets the compiler do. array[i++] = next_val; and array[i] = next_val; i++; will both compile into something that looks like

Code:
mov DWORD PTR [rbx+rax*4], ecx
inc rax
 
I just realized you could make a programming language that was based off of Pat-Posting

instead of { } you have the function content start with Stalker child and then end with Enjoy Prison

a print function could become post_on_twitter

an error could become lost_lawsuit and then have a message like No Stalker Child this method will not exist

The compiler could be called file_lawsuit

Functions could be called with something like No-You-Will foo( )


This would be incredibly retarded to make and I will never do it but the idea made me laugh
 
There's nothing to figure out; there's no optimization that it lets the compiler do. array[i++] = next_val; and array[i] = next_val; i++; will both compile into something that looks like

Code:
mov DWORD PTR [rbx+rax*4], ecx
inc rax
I think the point is just to avoid writing a separate statement to increment after. The last time you use it, you post-increment it.
 