Programming thread

I dropped out of coding school after being left unsatisfied with the academic way of doing things, and now I'm focusing on teaching myself over the next six months so that I'm employed by January.

I want to be a backend dev specifically. I already know 80% of what's required in backend dev listings. Once I learn the last 20%, I'll build at least one well-made MERN project and maybe a Discord bot. My only weak point is React, but I'll have enough time to work with it until I get it.

Excited to do this my own way so I can hit the ground running.
 
I decided to serialize a non-trivial data structure involving nested dictionaries to JSON and dump it into a file using Python. The keys in one of those dictionaries were not of any of the accepted types, but they had a proper string representation. No problem, I thought; people on the Internet say I can just subclass the encoder class and add a small conversion hack on top.

An hour later, I'm coming to the realization that I'm just gonna have to eat shit on that one. For some reason (probably performance), dictionary keys aren't run through a method you can just override to properly convert them for the encoder. Maybe there was a better way to do it, but I wanted to do stuff later with the same structure as-is, and copy.deepcopy() felt like cheating.
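For anyone hitting the same wall: json.dumps rejects non-primitive dict keys outright, and JSONEncoder.default() is only ever consulted for unserializable *values*, never keys, so subclassing doesn't help here. The usual workaround is a recursive pre-pass that stringifies the keys before encoding. A minimal sketch (Point and stringify_keys are made-up names for illustration):

```python
import json

class Point:
    """Stand-in for a key type with a useful str() but no JSON support."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __str__(self):
        return f"({self.x},{self.y})"

def stringify_keys(obj):
    """Recursively convert dict keys to strings so json.dumps accepts them."""
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

data = {Point(1, 2): {"nested": {Point(3, 4): "value"}}}

# json.dumps(data) raises TypeError: keys must be str, int, float, bool or None
print(json.dumps(stringify_keys(data)))
# {"(1,2)": {"nested": {"(3,4)": "value"}}}
```

The original structure is left untouched, so you can keep using it as-is afterwards without a deepcopy.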

While I was reading the module source for the encoder, I also found this:
Code:
if o != o:
    text = 'NaN'  # tell the JSON encoder it's a NaN

"o", in this context, is always a float. What kind of black fucking magic is happening here?
 
"o", in this context, is always a float. What kind of black fucking magic is happening here?
That's just part of the definition of IEEE 754 floating-point numbers. A NaN is "unordered": every ordered comparison (<, <=, >, >=) involving a NaN returns false, and so does ==, while != returns true. That makes NaN the only value for which x == x is false, so that code is a valid way to identify one.
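The behaviour is easy to confirm from a Python REPL:

```python
import math

nan = float("nan")

print(nan == nan)             # False: a NaN never compares equal, even to itself
print(nan != nan)             # True: the one comparison that returns true
print(nan < 0.0, nan > 0.0)   # False False: NaN is unordered
print(math.isnan(nan))        # True: the explicit, readable way to test
```

In new code, math.isnan() says what you mean; the `o != o` trick survives in library code mostly because it needs no imports and works on any IEEE 754 float.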
 
Here's my take (because it's very similar to what I did).

1. Work at your $13/hr job, and outside of work, start a project for an hour or two a day.
2. Take one project all the way to completion (deploy it to AWS or some other cloud platform).
3. Continue building projects for 6-12 months.
4. Now apply for a higher-paying job, stating that you are employed but also have experience in your project areas.

Guaranteed you'll get a way higher-paying job. I don't have a college degree and I have a six-figure developer salary with no certs, purely by showing prowess in projects and at my old job.
Careful with NDAs; a lot of companies write contracts so that you can't develop software similar to theirs. Something about technology theft...

Also, what kind of projects would interest employers? Looking to expand my portfolio. Any specific techs or languages?
 
It's unfortunate that this is true, but unless you work as a developer, then go home and work on projects, constantly exploring new technologies in your spare time, you won't really get out of the cycle of being treated as just another code monkey.
I disagree with this one slightly. That makes sense for the first 5 years or so of your professional career. But after that you are better served learning the business your company is in beyond your role in it.

At some point, being a better coder just provides diminishing returns. You don't need to spend as much time learning new technologies because a lot of them are extensions of concepts you already understand. Learning the business and how to make a project successful/mentor others is what moves you up to architect.
 
But after that you are better served learning the business your company is in beyond your role in it.
As it turns out, in the business my favorite programming language is actually "None" and the best code is the code that I didn't need to write because {someone else already wrote it/their "problem" wasn't actually a problem/that process isn't worth automating when we need something for the summer intern to do anyway/etc}.
 
Careful with NDAs; a lot of companies write contracts so that you can't develop software similar to theirs. Something about technology theft...

Also, what kind of projects would interest employers? Looking to expand my portfolio. Any specific techs or languages?
C# has a ton of business use and is syntactically similar to Java. There is also a lot of demand for Swift and Android developers for mobile apps. Java is used a ton at companies with APIs, especially older codebases. Python is increasingly useful for writing ETL processes and for data science generally. JavaScript goes hand in hand with front-end or full-stack web development.

C# is my recommendation. A ton of jobs, a ton of free resources to learn on your own, a ton of support on Stack Overflow and the like. Basically every problem you will ever encounter as a professional dev has already happened to someone, and a solution is documented online.

To answer your question on projects: build a simple API that demonstrates dependency injection and a data and service layer, and that makes use of a repository and a database. It shows the employer you understand all of these concepts and covers 85% of anything you would be asked to do at work.
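The recommendation above is C#, but the layering itself is language-agnostic. Here's a minimal Python sketch of the shape, with an in-memory repository standing in for the database; all class and method names are made up for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    name: str

class UserRepository:
    """Data layer: the only code that knows how users are stored.
    An in-memory dict stands in for a real database here."""
    def __init__(self):
        self._rows = {}

    def add(self, user: User) -> None:
        self._rows[user.id] = user

    def get(self, user_id: int) -> Optional[User]:
        return self._rows.get(user_id)

class UserService:
    """Service layer: business rules only. The repository is injected
    rather than constructed inside the class (dependency injection),
    so a fake can be swapped in for tests."""
    def __init__(self, repo: UserRepository):
        self._repo = repo

    def register(self, user_id: int, name: str) -> User:
        if self._repo.get(user_id) is not None:
            raise ValueError("user already exists")
        user = User(user_id, name)
        self._repo.add(user)
        return user

# Wiring happens at the edge; in a real API a framework's DI container does this.
service = UserService(UserRepository())
print(service.register(1, "alice"))  # User(id=1, name='alice')
```

The point an employer looks for is that the service never touches storage directly and the repository knows nothing about business rules, so either side can be replaced independently.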
 
That's just part of the definition of IEEE 754 floating-point numbers. A NaN is "unordered": every ordered comparison (<, <=, >, >=) involving a NaN returns false, and so does ==, while != returns true. That makes NaN the only value for which x == x is false, so that code is a valid way to identify one.
The fact that high-level languages give you access to NaN and don't immediately raise an exception when one is generated is quite possibly the most retarded design choice that we now have to suffer with in the modern programming landscape.
 
The fact that high-level languages give you access to NaN and don't immediately raise an exception when one is generated is quite possibly the most retarded design choice that we now have to suffer with in the modern programming landscape.
Disagree. There are plenty of edge cases where the option to handle NaN manually is useful. Think about a method that validates whether or not a value can be converted to an integer. NaN can be an expected possibility with a defined response. I would be pissed if a NaN value automatically threw an exception.

Another scenario: you are linking two unrelated systems together. One has a GUID as its primary key; one uses a double. You need to determine whether you're getting the id or the externalId and match the two based on the value you get. NaN in this case indicates a GUID.
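The first scenario might look like this sketch; to_int and its fallback behaviour are hypothetical, just to show NaN treated as an expected case with a defined response rather than an exception:

```python
import math

def to_int(value, default=None):
    """Convert value to an int, treating NaN as an expected case
    with a defined fallback instead of letting it blow up."""
    try:
        f = float(value)
    except (TypeError, ValueError):
        return default
    if math.isnan(f):  # expected possibility, defined response
        return default
    return int(f)

print(to_int("42"))          # 42
print(to_int(float("nan")))  # None
print(to_int("abc", -1))     # -1
```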
 
Wasted two days at work in what can only be described as "CMake hell". It turned out that two libraries in our software both use Eigen, but one was built with all sorts of architecture-specific flags while the other wasn't. So it crashed in one library with certain inputs, and any fix caused it to crash in the second library. The solution was yeeting every architecture-specific compiler flag from the build.
 
Careful with NDAs; a lot of companies write contracts so that you can't develop software similar to theirs. Something about technology theft...

Also, what kind of projects would interest employers? Looking to expand my portfolio. Any specific techs or languages?
APIs, Microservices, Management Services, React sites and even a mobile application.

Use languages that companies use. Want to work at a hip, new startup company? JavaScript/TypeScript & Python. Want to work at a corporation? Java/C#. Automotive/manufacturing? C#/C/C++. Usually just research what industry you would like to work in and pick the major tech stacks that they use.

I built one semi-large project in C# and got hired for C#, even though my other massive project was in Java and Python.

C# has a ton of business use and is syntactically similar to Java. There is also a lot of demand for Swift and Android developers for mobile apps. Java is used a ton at companies with APIs, especially older codebases. Python is increasingly useful for writing ETL processes and for data science generally. JavaScript goes hand in hand with front-end or full-stack web development.

C# is my recommendation. A ton of jobs, a ton of free resources to learn on your own, a ton of support on Stack Overflow and the like. Basically every problem you will ever encounter as a professional dev has already happened to someone, and a solution is documented online.
100%. My company is currently merging everything into React/C#, but there are still some MASSIVE legacy systems running on Delphi, so yeah, C# is absolutely the language to get into if you want to write software and get paid big bucks for it.
 
Are you actually using linear algebra for anything? Disabling arch-specific flags might really kill your throughput.
All of the hard calculations are on the GPU. The linear algebra is for relatively small runtime tasks. I still use the regular compiler optimization flags.
 
All of the hard calculations are on the GPU. The linear algebra is for relatively small runtime tasks. I still use the regular compiler optimization flags.

As long as it meets your needs. Linear algebra libraries are one of the few cases where it's 100% guaranteed there are #ifdefs somewhere checking for those flags and radically changing the way the library does its calculations based on their presence or absence.
 
Hey guys, I just finished a Java course and I realized that I have no idea what to do with my knowledge. The course taught me how to code but not what to do with it. So I'm wondering: what do you guys think I should do next?
 
Disagree. There are plenty of edge cases where the option to handle NaN manually is useful. Think about a method that validates whether or not a value can be converted to an integer. NaN can be an expected possibility with a defined response. I would be pissed if a NaN value automatically threw an exception.

Another scenario: you are linking two unrelated systems together. One has a GUID as its primary key; one uses a double. You need to determine whether you're getting the id or the externalId and match the two based on the value you get. NaN in this case indicates a GUID.
These are pretty bad examples, to be honest. The first one throws away important error information: NaNs don't just pop out of the ether, so are the causes of a NaN really the same as the causes of a value not being convertible to an integer? The second one should be handled through a sum type; using NaN there is just a hack comparable to NaN boxing.

Look at pages 7-9 of Kahan's notes for a rundown of why NaNs exist; the guy designed most of the standard, and his entire page is full of FP info. I think floating point gets a bad reputation simply because most people misuse it. FPs aren't rationals (as C programmers like to pretend), they aren't real numbers (as the mathematically illiterate think), and they most certainly aren't a general number type (die, JavaScript). They exist for really specific purposes and do that job pretty well.
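The "FPs aren't rationals" point takes a few lines to demonstrate in Python:

```python
from fractions import Fraction

# Floats are binary approximations, not rationals:
print(0.1 + 0.2 == 0.3)   # False
print(Fraction(0.1))      # the rational value 0.1 actually stores:
# 3602879701896397/36028797018963968

# ...but within their design they behave predictably:
print(0.5 + 0.25 == 0.75) # True: sums of powers of two are exact
```

Values whose denominators are powers of two are represented exactly; everything else is rounded to the nearest representable double, which is where the famous 0.1 + 0.2 surprise comes from.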

Hey guys, I just finished a Java course and I realized that I have no idea what to do with my knowledge. The course taught me how to code but not what to do with it. So I'm wondering: what do you guys think I should do next?
Ultimately, programming is about making the machine do your work for you. Ever been bothered by some annoying task that should be easy to automate, like downloading a lot of stuff from a site or checking something for updates? Scratch that itch yourself; you can do it now.
 
HackerNews actually gave me something fun to read today.

[attached screenshot]
:semperfidelis:
 
Hey guys, I just finished a Java course and I realized that I have no idea what to do with my knowledge. The course taught me how to code but not what to do with it. So I'm wondering: what do you guys think I should do next?

Make your GF/parents/spouse/kids a cute program that tells them how much you love them with sound and animation.

Go extra special and make it an Android app.
 
Disagree. There are plenty of edge cases where the option to handle NaN manually is useful. Think about a method that validates whether or not a value can be converted to an integer. NaN can be an expected possibility with a defined response. I would be pissed if a NaN value automatically threw an exception.

Another scenario: you are linking two unrelated systems together. One has a GUID as its primary key; one uses a double. You need to determine whether you're getting the id or the externalId and match the two based on the value you get. NaN in this case indicates a GUID.
I dunno, sounds kinda like you should throw an exception.

Because of course you can always catch and handle the exception. Right??
 