"Do you have any recommendations when it comes to intermediate projects in Python? I'm currently building a music player in Python and want some other projects to keep me occupied after I finish it. Preferably don't recommend any projects that have to do with compilers and hardware design."

This is kind of hard for me because a music player is already more complex than 95% of the things people use Python for. A lot of people use Python for data science, mathematical models and machine learning, but those things are fairly easy to do Python-wise; it's the concepts that are really hard.
So I'm going to throw a bunch of stuff at you, and I have no idea whether you'll find it too hard or too easy.
Implement a Fourier or fast Fourier transform in a general way.
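To make that concrete, here is a bare-bones recursive radix-2 version, just a sketch that assumes the input length is a power of two; you can sanity-check it against numpy.fft.fft once you have it working.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT. Assumes len(x) is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # FFT of the even-indexed samples
    odd = fft(x[1::2])           # FFT of the odd-indexed samples
    # Combine the halves with the twiddle factors e^(-2*pi*i*k/n).
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

# A simple square-wave-ish test signal of length 8.
spectrum = fft([1, 1, 1, 1, 0, 0, 0, 0])
```

The generalisation (arbitrary lengths, inverse transform, a real-input version) is where it becomes a proper project.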
Do something with implementing rotation and reflection matrices for a 3D graphical object.
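The matrices themselves are small; a rough numpy sketch is below (the function names are just mine, and hooking it up to matplotlib or a game library for the visual part is the real project).

```python
import numpy as np

def rotation_z(theta):
    """Rotation matrix about the z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def reflection_xy():
    """Reflection across the xy plane (flips the z coordinate)."""
    return np.diag([1.0, 1.0, -1.0])

# Rows are 3D points of some object; transform them all at once.
points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 2.0]])
rotated = points @ rotation_z(np.pi / 2).T
reflected = points @ reflection_xy().T
```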
Implement a basic compression algorithm for something like images.
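Run-length encoding is about the simplest lossless scheme and works nicely on images with flat regions of colour. Real image codecs layer transforms and entropy coding on top, but a toy version is only a few lines:

```python
def rle_encode(data):
    """Run-length encode a sequence of pixel values into (value, count) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, c) for v, c in runs]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
assert rle_decode(rle_encode(row)) == row
```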
If you are feeling this stuff is easy, do fluid simulations.
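As a taste of what that involves, here is just the diffusion step of a simple grid-based solver, loosely in the style of Jos Stam's "stable fluids", done with plain iterative relaxation. A full solver adds advection and a pressure projection step; this is only a sketch of one piece.

```python
import numpy as np

def diffuse(field, diffusion_rate, dt, iterations=20):
    """One diffusion step on a 2D grid, solved by simple iterative relaxation.
    Boundary cells are left untouched to keep the sketch short."""
    a = dt * diffusion_rate * (field.shape[0] - 2) * (field.shape[1] - 2)
    result = field.copy()
    for _ in range(iterations):
        # Each interior cell relaxes toward the average of its neighbours.
        result[1:-1, 1:-1] = (
            field[1:-1, 1:-1]
            + a * (result[:-2, 1:-1] + result[2:, 1:-1]
                   + result[1:-1, :-2] + result[1:-1, 2:])
        ) / (1 + 4 * a)
    return result

density = np.zeros((64, 64))
density[32, 32] = 100.0   # a blob of dye in the middle of the grid
density = diffuse(density, diffusion_rate=0.001, dt=0.1)
```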
These might be too hard, I have no idea.
All of these are good because you can make something visual out of them, which gives you something nice to attach your code to mentally.
This might start a massive argument but I have to say it. I don't think "AI" has fundamentally developed in the last 7-8 years or so, but I see it being adopted at a rapidly accelerating rate as financial and political interests start focusing on it. I got very blackpilled over time about the powers that be and what kind of research actually gets interest and funding.
Oh jeez people might not like this one.
But my understanding, from what I have done myself and from people who have done a lot of this stuff and work within that ecosystem, is that "AI" isn't getting any more accurate. Most of the developments have widened the scope of what the "AI" tries to do rather than actually improving its accuracy. I wasn't surprised by LLMs because I had seen stuff like this about 5 years before they became popular. Even the name shows my point: Large Language Models, the emphasis is on large, not on more accurate.
To discuss accuracy, we have to think of this as a problem of diminishing returns. The standard easy answer for improving accuracy is just MOAR data. That doesn't really work, because you need orders of magnitude more data to get a very small improvement in accuracy. To put numbers on it: it is very easy to make something that is right 70-80% of the time. It becomes more difficult to get to 90%, and it is hard but very possible to get to 95%. On a model trying to beat the stock market, I got to about 98.5% accuracy before I started overfitting. Getting much further than that is basically impossible without a fundamentally new development in the technology.
We can refine the data, and this is usually what AI people do to make things work better, but refining the data is only ever going to get you fairly small improvements. Refinement is not a massive leap forward, it is a slow process of small improvements.
Where are the self-driving cars? The answer is that they are not coming and that they can't work with current technology. Everyone completely forgets about that little debacle because it was inconvenient, but "AI" failed to deliver it.
We can literally take the heart out of one person and put it in another with fairly minimal long-term complications, but a lifetime of immunosuppressants is not pleasant. My main point in mentioning this is that things are actually really difficult: over a long time period we have built a lot of stuff, but it is always worth remembering that the limitations have not gone away, we have just found clever ways to make them matter less.