Programming thread

Okay, so suppose your whitespace filtering fucks up somehow. I notice you write to temp.json in the same directory (theoretically attacker-controllable), which means running the parser in parallel will cause massive problems. Here's an example that uses valid JSON and gets broken results.

The broader point here is the dangers of assuming your inputs are well-formed. Why be autistic to the point of arguing against simple sanity checks?

Edit: I won the race pretty easily
View attachment 6802202
Can you post the JSON?
 
JSON:
{
 "license": "Gun in the glovebox"
}

What the fuck nigger.

I made stuff in godot with c# before.
I did the class and passed the microsoft csharp exam.
I am pretty fuckin sure I know the fundamentals. But I set up a project, and my hello nigger script doesn't fuckin run this time but it did the other times

What the fuck? I just wanna write some code.

I must have honestly forgotten the basics to setting up the project. Thats gotta be the case because the actual script is literally just printing "Hi Nigger, how are you" to the console. Its throwing an error saying csscript isnt a cmdlet
 
Last edited by a moderator:
What the fuck nigger.

I made stuff in godot with c# before.
I did the class and passed the microsoft csharp exam.
I am pretty fuckin sure I know the fundamentals. But I set up a project, and my hello nigger script doesn't fuckin run this time but it did the other times

What the fuck? I just wanna write some code.

I must have honestly forgotten the basics to setting up the project. Thats gotta be the case because the actual script is literally just printing "Hi Nigger, how are you" to the console. Its throwing an error saying csscript isnt a cmdlet
Sounds like something got excluded from PATH
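If you'd rather check from code than dig through system settings, here's a quick throwaway snippet (mine, not from the post above) that dumps every directory on PATH:
C#:
// Print each directory on PATH so you can eyeball whether the tool's
// install folder actually made it in.
foreach (var dir in (Environment.GetEnvironmentVariable("PATH") ?? "")
             .Split(System.IO.Path.PathSeparator))
    Console.WriteLine(dir);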
 
  • Like
Reactions: Hall of Cost Payer
Sounds like something got excluded from PATH
I figured it out, I think.

I think my csproj was set up wrong. Still really new at C#, and most of my experience with it was in Godot.

Edit: It turns out Visual Studio will generate the boilerplate for me. I'm such a god damn fucking idiot. :story:
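For anyone else who hits this: `dotnet new console` scaffolds the same boilerplate from the command line. Roughly what the template leaves you with (modern templates use top-level statements):
C#:
// Program.cs as scaffolded by `dotnet new console` or the VS console template.
// With top-level statements, this one line is the entire program.
Console.WriteLine("Hello, World!");
// Build and run from the project directory with: dotnet run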
 
Last edited:
Why be autistic to the point of arguing against simple sanity checks?
Parsing is hard. I know it's fun to do, and I'm not knocking the utility of implementations like we're discussing, but as this conversation makes plain: parsing is hard. Good parsers tend to be front-line products in and of themselves. (Consider FFmpeg, Firefox/Chrome, compilers, etc.)

If thou, as a mere mortal, must implement parsing, use a library, or if you're doing something genuinely novel and must have a DSL, use YACC/bison/whatthefuckever. Some declarative/functional languages have good parsing abilities too, but those aren't mere mortal languages.

The problem with parsing lies in the edge cases. Formats as big as JSON and XML are riddled with footguns. Use a library to mitigate those (and offload blame!) and you will be much happier, and your productivity will thank you for it. (You'll find fun shit this way too, like how MS Office's XML formats (used in SharePoint) were at odds with MS VS's XML formats and required special conversion to handle the quirks of both.)

If I was doing JSON-XML conversion? I'd be bouncing it off of a Postgres instance. Postgres has great import/export of both, and proper data-driven representation of your data is going to aid conversion.
 
Parsing is hard. I know it's fun to do, and I'm not knocking the utility of implementations like we're discussing, but as this conversation makes plain: parsing is hard. Good parsers tend to be front-line products in and of themselves. (Consider FFmpeg, Firefox/Chrome, compilers, etc.)

If thou, as a mere mortal, must implement parsing, use a library, or if you're doing something genuinely novel and must have a DSL, use YACC/bison/whatthefuckever. Some declarative/functional languages have good parsing abilities too, but those aren't mere mortal languages.

The problem with parsing lies in the edge cases. Formats as big as JSON and XML are riddled with footguns. Use a library to mitigate those (and offload blame!) and you will be much happier, and your productivity will thank you for it. (You'll find fun shit this way too, like how MS Office's XML formats (used in SharePoint) were at odds with MS VS's XML formats and required special conversion to handle the quirks of both.)

If I was doing JSON-XML conversion? I'd be bouncing it off of a Postgres instance. Postgres has great import/export of both, and proper data-driven representation of your data is going to aid conversion.
This is how I used to think until my epiphany the other day, and I think this kind of thinking is maybe what stops good programmers from being great programmers.

While it's efficient and easy to use other people's abstractions, you never really learn about the underlying technology and just get caught up in the layers of abstraction already built for you. If you want to live strictly in the problem space of business logic so you can iterate on a product quickly, it's probably fine to slap libraries together and have something built.

But if your problem space ever becomes the libraries, or you ever need to write your own abstraction, you won't have the requisite skills to be able to process arbitrary data streams.

That being said, not validating input is retarded and an incredibly easy way to make your software fragile and open to attackers. Anything on the edge of a system should assume it might receive empty or erroneous data. If it's some internal function call that you know you'll only be calling with data that's already been validated, then maybe it's fine.

All functions, regardless of your language's paradigm, are just some transformation on data, whether that's parameterized data or stateful object data or whatever. If you can parse a language and emit whatever, you can write whatever you want. In fact, writing a parser that validates the data as it's parsed will more than likely follow the spec more closely, because you have to keep the spec close at hand to know what is and isn't a valid sequence.

Once you've written a language parser, the skills carry over to any arbitrary data. Signals, network packets, voltages, whatever you want: the world's your oyster.
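To make the validate-as-you-parse point concrete, here's a hypothetical sketch (my names, not from any post here) that parses a JSON string literal and rejects bad escapes on the spot instead of in a separate pass:
C#:
// Hypothetical sketch: parse a JSON string literal starting at input[pos],
// validating escapes as we go rather than in a separate pass.
// Bounds checks for truncated input are omitted for brevity.
static string ParseJsonString(string input, ref int pos)
{
    if (input[pos] != '"') throw new FormatException("expected opening quote");
    pos++;
    var sb = new System.Text.StringBuilder();
    while (input[pos] != '"')
    {
        char c = input[pos++];
        if (c == '\\')
        {
            char esc = input[pos++];
            switch (esc)
            {
                case '"': case '\\': case '/': sb.Append(esc); break;
                case 'n': sb.Append('\n'); break;
                case 't': sb.Append('\t'); break;
                case 'r': sb.Append('\r'); break;
                case 'b': sb.Append('\b'); break;
                case 'f': sb.Append('\f'); break;
                case 'u': // spec says exactly four hex digits
                    sb.Append((char)Convert.ToUInt16(input.Substring(pos, 4), 16));
                    pos += 4;
                    break;
                default: throw new FormatException($"invalid escape \\{esc}");
            }
        }
        else if (c < ' ') throw new FormatException("raw control character in string");
        else sb.Append(c);
    }
    pos++; // consume the closing quote
    return sb.ToString();
}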
 
I hate when roguelike games do the ASCII bullshit.

But at the same time I'm trying to set up the project for my roguelike idea, and I also hate fucking around with making a decent-looking UI and just wanna code up the fun stuff.

If I make a C# program that runs in the terminal, then down the line how difficult would it be to implement graphics and shit later?
 
I hate when roguelike games do the ASCII bullshit.

But at the same time I'm trying to set up the project for my roguelike idea, and I also hate fucking around with making a decent-looking UI and just wanna code up the fun stuff.

If I make a C# program that runs in the terminal, then down the line how difficult would it be to implement graphics and shit later?
Not that hard, unless you do some weird coupling of logic and how it's displayed,
i.e. checking the displayed character instead of the tile type.
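A sketch of the distinction, with made-up names: the model stores a tile type, and the glyph is derived only at draw time, so game logic never looks at what's on screen.
C#:
// Hypothetical sketch: game logic branches on TileType, never on the drawn char.
enum TileType { Floor, Wall, Door }

static char Glyph(TileType t) => t switch
{
    TileType.Floor => '.',
    TileType.Wall  => '#',
    TileType.Door  => '+',
    _              => '?'
};

// Logic consults the type; only the renderer ever calls Glyph().
static bool IsWalkable(TileType t) => t != TileType.Wall;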

Edit:
I also really liked this book when learning how to program: Game Programming Patterns
 
Last edited:
  • Informative
Reactions: Safir
I hate when roguelike games do the ASCII bullshit.

But at the same time I'm trying to set up the project for my roguelike idea, and I also hate fucking around with making a decent-looking UI and just wanna code up the fun stuff.

If I make a C# program that runs in the terminal, then down the line how difficult would it be to implement graphics and shit later?
Decouple your renderer from your rendering API/interface, then you can add in whatever rendering mode you want later.

Your call to draw() or whatever will go through an underlying drawing API, behind which you can plug in whatever back-end you want. No clue about C#, but I'm pretty sure this is what object-oriented people call dependency inversion.
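Something like this shape, to sketch it (invented names, not any real engine's API):
C#:
// Game code depends only on this interface; back-ends plug in behind it.
interface IRenderer
{
    void DrawGlyph(int x, int y, char glyph);
    void Present(); // flush a frame, if the back-end buffers
}

// Today: a console back-end.
sealed class ConsoleRenderer : IRenderer
{
    public void DrawGlyph(int x, int y, char glyph)
    {
        Console.SetCursorPosition(x, y);
        Console.Write(glyph);
    }
    public void Present() { } // console output is immediate, nothing to flush
}
// Later: a SpriteRenderer backed by MonoGame/Godot implements the same
// interface, and the game loop never changes.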

Edit: Can +1 Game Programming Patterns, Robert Nystrom is a great teacher.
 
Last edited:
Edit: It turns out Visual Studio will generate the boilerplate for me. I'm such a god damn fucking idiot.
Imagine running your hand-crafted code in a virtual machine, couldn't be me
If thou, as a mere mortal, must implement parsing, use a library
Just write the damn parser
But at the same time I'm trying to set up the project for my roguelike idea, and I also hate fucking around with making a decent-looking UI and just wanna code up the fun stuff.
ASCII is easy: just make your char buffer the resolution w*h, keep the player character at the center of the screen, and do the menu UI with ncurses. Good for a learning project
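Roughly like this, to sketch it (invented names): render into a w*h view offset so the player lands dead center:
C#:
// Hypothetical sketch: draw the map into a w*h view with the player centered.
static void Render(char[,] map, int playerX, int playerY, int w, int h)
{
    for (int row = 0; row < h; row++)
    {
        for (int col = 0; col < w; col++)
        {
            // Shift map coordinates so (playerX, playerY) maps to the view center.
            int mapY = playerY - h / 2 + row;
            int mapX = playerX - w / 2 + col;
            bool inside = mapY >= 0 && mapY < map.GetLength(0)
                       && mapX >= 0 && mapX < map.GetLength(1);
            Console.Write(inside ? map[mapY, mapX] : ' ');
        }
        Console.WriteLine();
    }
}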
 
In Haskell it's just:
Code:
import Control.Applicative (Alternative (..))

-- A parser eats a prefix of the input and yields a value plus the leftover
-- string, or Nothing on failure.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \str ->
    p str >>= \(x, str') ->
    return (f x, str')

instance Applicative Parser where
  pure x = Parser $ \str -> Just (x, str)

  (Parser pf) <*> (Parser px) = Parser $ \str ->
    pf str  >>= \(f, str')  ->
    px str' >>= \(x, str'') ->
    return (f x, str'')

instance Monad Parser where
  (Parser p) >>= f = Parser $ \str ->
    p str >>= \(x, str') ->
    runParser (f x) str'

instance Alternative Parser where
  empty = Parser $ \_ -> Nothing

  -- Try the left parser; fall back to the right one on failure.
  (Parser lp) <|> (Parser rp) = Parser $ \str ->
    case lp str of
      Just res -> Just res
      Nothing  -> rp str
 
In Haskell it's just:
You think I can read that gibberish? I've been programming for nearly 20 years now and have never used a functional language to write anything. I appreciate how, in functional languages, the recursive nature of language itself is part of the semantics, but in reality it just makes a large chain of implicit dependencies that get increasingly incomprehensible and impossible to maintain the more code you add. So I would rather write that functional code in an imperative language; while it's 5x more code, it's also much easier to jump to an arbitrary point in an unknown program and understand what's happening.
 
You think I can read that gibberish? I've been programming for nearly 20 years now and have never used a functional language to write anything. I appreciate how, in functional languages, the recursive nature of language itself is part of the semantics, but in reality it just makes a large chain of implicit dependencies that get increasingly incomprehensible and impossible to maintain the more code you add. So I would rather write that functional code in an imperative language; while it's 5x more code, it's also much easier to jump to an arbitrary point in an unknown program and understand what's happening.
I think it's just a matter of familiarity.
 
checking the displayed character instead of the tile type.
Doing that is so unfathomably niggerlicious it wouldn't even occur to me to do it.
Can +1 Game Programming Patterns, Robert Nystrom is a great teacher.
I keep telling myself I should read more books in general so I'll look it up.
ASCII is easy: just make your char buffer the resolution w*h, keep the player character at the center of the screen, and do the menu UI with ncurses. Good for a learning project
Learning is the main goal. But I also use my practice projects as an excuse to try and make some of my ideas.
 
I have to say, I like a lot of things about functional programming except reading functional programs.

Maybe it's the one-letter vars or the funky symbols, but I feel numb trying to make heads or tails of anything smarter than Elixir.

Give me higher-order functions and method chaining/UFCS in an imperative language and I'm happy, though. I also wouldn't mind atoms instead of enums, but at that point I'm getting greedy.
 
Doing that is so unfathomably niggerlicious it wouldn't even occur to me to do it.
Just gayming on Steam, I've seen a lot of update log messages about fixing localization-related bugs where game logic is somehow dependent on the specific localized message. I don't know how people this niggerlicious manage to make, release, and support a fun game.
 
The point is that parsing any JSON can fail if you have two copies of this code running at the exact same time in the same directory, because both are writing to the same temp.json file, with unpredictable results.
The real kicker is that temp.json will be generated in $PWD. So suppose you have
Code:
some_dir/
some_dir/another_dir/test1.json
some_dir/yet_another_dir/test2.json
...
some_dir/one_more_dir/test1488.json
$PWD == some_dir, and you run parser_cmd another_dir/test1.{json,xml} in parallel with another thread running from the same $PWD (e.g. parser_cmd one_more_dir/test1488.{json,xml}), a common bulk-parsing setup. The temp.json will get written to some_dir/temp.json by each such thread. Given the right timings and circumstances, that's an easy segfault.

For anyone learning this stuff, see mkstemp (declared in stdlib.h on most systems; the older mktemp is itself racy, which is the whole problem here). Or better yet, if you had to do it in a separate pass like this, just malloc a buffer based on the original JSON's file size (ideally with some kind of limit, but that's its own can of worms); if you're simply trimming whitespace, the output can never have more characters than the original.
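And since half this thread is C#: the equivalent fix there is to skip the filesystem entirely and trim into one in-memory buffer (a sketch with deliberately naive escape handling, not anyone's actual code):
C#:
// Sketch: strip insignificant whitespace without ever touching a temp file.
// Output can never be longer than input, so one pre-sized buffer suffices.
static string StripWhitespace(string json)
{
    var sb = new System.Text.StringBuilder(json.Length);
    bool inString = false;
    for (int i = 0; i < json.Length; i++)
    {
        char c = json[i];
        if (c == '"' && (i == 0 || json[i - 1] != '\\')) // naive: misses \\" cases
            inString = !inString;
        if (inString || !char.IsWhiteSpace(c))
            sb.Append(c);
    }
    return sb.ToString();
}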
 
Last edited:
what stops good programmers from being great programmers
The point of writing code is to solve a problem.

if your problem space ever becomes the libraries, or you ever need to write your own abstraction
Parsing will not improve your general abstraction ability. If your problem space becomes the library, I question how valid your "problem" is. Parsing is a specialization. In the best case, where you have a grammar that is well defined in the Chomsky hierarchy, i.e. has a specific, well-defined, well-used logical representation, the work becomes an incredibly formal optimization process. This becomes less about programming and more about formal grammars, NLP, and such.

Never a bad time to learn this stuff, for sure, but the rabbit hole is dark and full of LALR.

Devoid of use case, just like the usual outcome of "I'm going to use a parser to solve this problem!" It usually indicates that you've modeled the problem domain incorrectly.

Again, there are two general modes for "this needs a parser": either I can resolve it by fucking around in a console for half an hour to mangle the data acceptably, or I reach out to a third party because the data format is more complicated than I care to deal with by hand, and that speeds problem resolution.

Am I getting paid to big-brain some local parsing solution with a theoretically unlimited budget? Prolog's DCGs are where I go. Free backtracking. https://www.amzi.com/manuals/amzi/pro/ref_dcg.htm
 
Parsing is a specialization.
Is it, though? They teach practical parser writing for shit like HTML in first-year CS. Tokenizing and ASTing isn't super hard, especially in newer languages where you don't need to allocate it all manually. It becomes a matter of reading the spec; just RTFM in another form.
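Case in point, a toy tokenizer really is just a loop and a couple of branches (hypothetical sketch; assumes the default implicit usings):
C#:
// Hypothetical sketch: tokenize a toy expression language into numbers
// and single-character operators, skipping whitespace.
static IEnumerable<string> Tokenize(string src)
{
    int i = 0;
    while (i < src.Length)
    {
        char c = src[i];
        if (char.IsWhiteSpace(c)) { i++; continue; }
        if (char.IsDigit(c))
        {
            int start = i;
            while (i < src.Length && char.IsDigit(src[i])) i++;
            yield return src[start..i];           // number literal
        }
        else
        {
            yield return src[i++].ToString();     // single-char operator/punct
        }
    }
}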
 