Programming thread

Common Lisp has a lot of things that seem to match what I'm interested in. Lots of flexibility and power without being dependent on importing 100 libraries of functions made by other people.

You can have that with other languages too. Just don't use the libraries. But I get what you mean. I think the main "selling points" of lisp are the REPL and the "everything is a list" thing (code is data and data is code), which I otherwise only know from Forth, really. The first thing you need to get away from is the coding -> compiling -> executing thought process. It makes more sense to think of lisp images as living virtual machines you feed data into. That data can be functions or variables, and eventually you'll understand there's no real difference between the two. You can even save the state of a machine as it is, external files not required, and start it back up later. Smalltalk does that better, but it can feel even more alien if you are used to more conventional approaches. Then, since once again data is code and code is data, it is trivial to write your own language (the keyword is DSL) in lisp, specialized to solve a given problem.
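
To make the code-is-data point concrete, here's a minimal sketch of a tiny macro-built construct (with-retries is a hypothetical name; the same idea works in both Common Lisp and Emacs Lisp). The macro receives its body as plain list data and rewrites it before anything is evaluated, which is exactly what makes DSLs cheap in lisp:

Code:
(defmacro with-retries (n &rest body)
  "Run BODY up to N times, stopping at the first non-nil result."
  `(let (result)
     (dotimes (_ ,n result)
       (unless result
         (setq result (progn ,@body))))))

;; Usage (fetch-thing is a made-up function): retries the body until
;; it returns non-nil, at most 3 times.
;; (with-retries 3 (fetch-thing))

One defmacro and you have a new control-flow word that looks native to the language.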

Good advice has been given, but I cannot stress enough that you will need emacs. Emacs is basically a lisp interpreter that deals in the currency of text buffers. You can write everything in emacs. Some people will tell you emacs is bloated but, once again, it makes the most sense to not see emacs as a text editor, but as a virtual lisp machine you run programs in. Emacs has slime (although I recommend using sly), which is a "plug-in" with which you can connect it to lisp images and send data to them. So with emacs, you will basically develop your program *while* it's running, adding and changing functions and variables, with the direct ability to test them or to manipulate the lisp VM's internal state on the fly. It's completely different from programming in languages like C, and once you master it, programming in those languages will feel clumsy and slow.
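
For reference, hooking Emacs up to SBCL via sly takes very little. A minimal sketch, assuming the sly package is installed from MELPA:

Code:
;; sly falls back to this variable to decide which lisp to launch.
(setq inferior-lisp-program "sbcl")
(require 'sly)
;; M-x sly then starts SBCL and connects a REPL to the running image.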

But yes, start with emacs. It comes with a lisp dialect called elisp, which is very "batteries included" when it comes to everything text. The biggest difficulty will be wrapping your mind around some of the aged terms and workflows. I actually don't recommend starting with any "emacs distro" or adding too many packages to it, because they add a considerable amount of complexity that will be impossible to understand if you're not used to emacs. You'll see many people ricing their emacs by adding 2482384822 packages, mostly to completely change how default emacs works, but that really isn't necessary and I wouldn't recommend doing it before you actually understand bog-standard emacs workflows, which, in all honesty, are different from what you are probably used to, but not bad at all.

I recommend SBCL as your Common Lisp; it's the most batteries-included and up-to-date one in my opinion. Good luck.
 
It's unfortunately representative of the real world. I work at FAGMAN, and everyone else on my team is pajeet except for one white guy, who is a tranny. If you ever wonder why the quality of FAGMAN software keeps getting worse, its because of teams like the one I'm on.

I regularly have to review code where I'm unconvinced that my team mates even read their code. It will have all the telltale ai signs, down to weird spacing characters and emojis in the comments. And the one jeet on my team who doesn't use AI has even worse code.

Avoid any learning resources from these cohorts. But you don't need to hear that, you have the sense to do that anyway.

If Im browsing youtube and I see a jeet coding tutorial I will automatically downvote, report and if Im feeling particularly spiteful I will AI DMCA false copyright strike on an alt account.

Total Pajeet-tube Death.
 
The problem lies mostly in the modern programmer not caring about their craft. Many have been pulled in with the promise of high salaries, just doing the bare minimum to get by. Besides having low interest, this group is also less cognitively gifted (INDIANS).
At a lot of shops you can't really care, as it's deadlines, deadlines, deadlines, and the software has gotten exponentially more complex.
 
As a heterodox data point, Common Lisp writes just fine under Neovim, but make sure to have a rainbow-brackets plugin of some sort. I use https://github.com/HiPhish/rainbow-delimiters.nvim and it really helps you visually ascertain where you are in the paren-nesting hierarchy. But if you want more of an "IDE"-style experience, yeah, Emacs.
 
For Emacs the equivalent package is rainbow-delimiters; however, out of the box the colors are quite dim, so I have the following set:

Code:
(custom-set-faces
 ;; custom-set-faces was added by Custom.
 ;; If you edit it by hand, you could mess it up, so be careful.
 ;; Your init file should contain only one such instance.
 ;; If there is more than one, they won't work right.
 '(rainbow-delimiters-depth-1-face ((t (:foreground "#ff0000"))))
 '(rainbow-delimiters-depth-2-face ((t (:foreground "#ffa500"))))
 '(rainbow-delimiters-depth-3-face ((t (:foreground "#ffd700"))))
 '(rainbow-delimiters-depth-4-face ((t (:foreground "#05fb00"))))
 '(rainbow-delimiters-depth-5-face ((t (:foreground "#31d5c8"))))
 '(rainbow-delimiters-depth-6-face ((t (:foreground "#33a7c8"))))
 '(rainbow-delimiters-depth-7-face ((t (:foreground "#001eba"))))
 '(rainbow-delimiters-depth-8-face ((t (:foreground "#a538c6"))))
 '(rainbow-delimiters-depth-9-face ((t (:foreground "#d29ce3"))))
 '(rainbow-delimiters-unmatched-face ((t (:background "cyan")))))
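
If rainbow-delimiters isn't already set up, a minimal sketch for enabling it in lisp buffers (assuming the package is installed from MELPA) looks like this:

Code:
(require 'rainbow-delimiters)
;; Turn the mode on wherever you edit s-expressions.
(add-hook 'emacs-lisp-mode-hook #'rainbow-delimiters-mode)
(add-hook 'lisp-mode-hook #'rainbow-delimiters-mode)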
 
i got filtered by emacs not syntax highlighting numbers by default in c mode

even donald knuth who has syntax highlighting turned off wants his numbers colored nigga
 
If you are grading first years and they are doing Java, you will see the worst slop code you have EVER seen in your life. Comments that don't make any sort of sense and code that won't run are just par for the course. Hell, my favorite was just explaining how to install the JDK ... multiple times throughout the semester. All my experience is from before the advent of AI, when all you had were kids attempting to compile Java using Eclipse on a weird fucked-up mix of Ubuntu, Win7, and I think OS X Lion. In this post-AI world, I expect the code to be WAY worse.
lol my friend grades intro to programming courses and to be honest, their code looks a little better. the mistakes are understandable. i would never want to grade for a class that taught java just because i'm so tired of hearing students bitch about java. it's always python or rust fanboys crying about java and how complicated it is. it's a prerequisite for a reason, just shut up and learn.

i pulled out some examples since i have all their submissions saved. there's worse examples i could find but i don't have it in me right now and a majority of "bad code" i saw was a literal copy and paste, line for line, from chatgpt or claude. one interesting thing was almost NO ONE knew how to make a proper makefile. can't even google it, they got their makefiles from AI too.
{
read a b
read c
} > "$a1$b2.txt"

<TAB>rm -f *.o ...

#1/bin/bash
#!bin/bash

get_name() {
read dontcare
}

int main(char args[]. int argsv)

..
M)

./Modify.bash
;;
"")


exit 0
;;
*)
echo "ERROR"
;;
esac

printf "%s %s\n%s\n%s %s %s\n%s\n%s\n" \
"${file_name}" "$record_name" \
"$text_name" \​
"$file_record" "$file_start" "$file_end" \​
"$file_hist" \​

"$file_size" > "$new_file"​


echo "blah blah example txt here"​
 
i got filtered by emacs not syntax highlighting numbers by default in c mode

even donald knuth who has syntax highlighting turned off wants his numbers colored nigga
In Emacs everything, including syntax highlighting, is banally configurable, especially since the treesit modes were added: all it takes is creating a new face (or using an existing one) and adding one rule to the font-lock settings, as follows:

Code:
(defface font-lock-numbers
  '((t (:foreground "#ff0000")))
  "Face for number literals.")

(setq numbers-rule
      (car (treesit-font-lock-rules
            :language 'c
            :override t
            :feature 'numbers
            '((number_literal) @font-lock-numbers))))

(defun enable-numbers-font-lock ()
  ;; The feature name added here must match the :feature above
  ;; ('numbers, not the node name number_literal), or the rule
  ;; never activates.
  (add-to-list 'treesit-font-lock-feature-list '(numbers))
  (add-to-list 'treesit-font-lock-settings numbers-rule)
  ;; Recompute so the new feature takes effect in this buffer.
  (treesit-font-lock-recompute-features))

;; Auto enable treesit
(use-package c-ts-mode
  :init
  (add-hook 'c-ts-mode-hook #'enable-numbers-font-lock)
  (add-to-list 'major-mode-remap-alist '(c-mode . c-ts-mode)))
 
My mini PC has this terribly dingy microphone hardwired into its case, which the Chinese probably included to spy on me, but what the thing mostly records is the inside of the mini PC's case: its fan, various air turbulence, and electrical interference. Absolutely useless as an actual microphone. I always put off desoldering it since I didn't want to take the thing apart entirely because of the fan of the APU. Anyway, I thought yesterday: hey, entropy source. So I wrote a small C program that captures with a decimation of 1:8 (so basically, at a sample rate of 44100 Hz we effectively only capture at ~5500 Hz and throw the rest away). This I do because I don't trust ALSA not to do some software fuckery to give me a smoothed-out lower sample rate if I request it (the lower sampling rate is basically to break up possible correlation in the sampling data caused by the physical properties of the microphone), and then I pass the result through a simple von Neumann whitener. I know hashing is what the cool kids would do nowadays, but it felt intellectually more honest.
Cool idea.

Is there a particular rationale behind that choice of decimation ratio? A ratio of 1:8 brings your sample rate down to 5512.5 Hz, which lowers the upper limit of your frequency range to 2756.25 Hz (per the Nyquist-Shannon theorem). This ~3 kHz cutoff strikes me as kinda low, like it strips away a sizable chunk of useful entropy. Not trying to nitpick; just curious.
 
Yeah, but the point here is that it's worse off for it. Not purely their fault, but no external dependencies > any external dependencies.
yt-dlp already had its own little JavaScript interpreter before (written in Python, with only the features required for YouTube's challenges), and it was horribly slow compared to the usual JavaScript engines. I ran one of YouTube's JavaScript challenges in yt-dlp's interpreter and in QuickJS (a fast non-JIT interpreter), and yt-dlp's was 3 to 4 orders of magnitude slower IIRC. That's the sole reason why JavaScript YouTube downloaders like https://cobalt.tools/ were so much faster. Maybe yt-dlp will be faster now that it's using a proper JS engine. To clarify, I do not mean startup speed; that was already bad with Python, and now there might be JS JIT warm-up too.

It's funny how Neal Mohan Jeet Brahman keeps raising the bar, without any impact because the tools adapt so quickly. Makes me :optimistic: that they will give up eventually and just put the plain video stream in the HTML again.
 
Is there a particular rationale behind that choice of decimation ratio? A ratio of 1:8 brings your sample rate down to 5512.5 Hz, which lowers the upper limit of your frequency range to 2756.25 Hz (per the Nyquist-Shannon theorem). This ~3 kHz cutoff strikes me as kinda low, like it strips away a sizable chunk of useful entropy. Not trying to nitpick; just curious.
Nah, it's fine, I'm playing around myself anyway. For the avid reader: this exercise is entirely pointless; with modern CPUs and how the kernel does /dev/random now, you really don't need these entropy sources. Chances are you'll make things worse, not better.

I know you probably know this, but I'll start from the beginning anyway: a sample rate of 44 kHz means sampling about every 22 µs. The microphone's diaphragm (which physically exists and has mass) can't actually switch directions that fast; it is likely to be in a correlated/similar position if the sampling intervals are too short. So the microphone diaphragm's position at t=0 is highly predictive of its position 22 µs later. If sample A predicts sample B with 90% accuracy, then sample B provides almost zero new entropy. It's redundant. Look at these samples as "echoes" of the earlier ones. The von Neumann whitener can't help here. It can fix bias, but it cannot fix correlation.
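
For reference, the whitener itself is tiny. A minimal sketch in Emacs Lisp (the actual program is in C; von-neumann-whiten is a hypothetical name): it walks the bit stream in non-overlapping pairs, emits 0 for a 01 pair, 1 for a 10 pair, and discards 00/11 pairs, which removes bias but, as said, not correlation:

Code:
(defun von-neumann-whiten (bits)
  "Return the de-biased bits extracted from the raw list BITS."
  (let (out)
    (while (cdr bits)
      (let ((a (pop bits))
            (b (pop bits)))
        ;; Unequal pair: keep the first bit. Equal pair: discard both.
        (unless (= a b)
          (push a out))))
    (nreverse out)))

;; A heavily biased stream still yields unbiased output bits:
(von-neumann-whiten '(1 1 1 0 1 1 1 0 0 1)) ;; => (1 1 0)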

In a perfect environment, the high frequency range we discard might actually contain valuable entropy, but the case of my mini PC is no perfect environment. That range is most likely full of predictable, "rhythmic" EMI/RFI garbage: coil whine and the like. We don't want that noise, especially since it's very likely to alias into stable patterns. Von Neumann can't help us here either.

We basically want to listen to the mechanical movement of the diaphragm. While the force of air molecules hitting the mic is random and fast (white noise), the displacement of the diaphragm is "colored" (red or Brownian noise). The diaphragm acts as a mechanical integrator. It takes time for its mass to accelerate and move, so especially in this mic the physical entropy bandwidth is far lower than 20 kHz anyway.

Imagine it like taking 1000 pictures of a driving car with a high-fps camera. You get 1000 frames of the car, but the car is in practically the same spot in all of them. You haven't captured 1000 random locations; you've captured one location a thousand times.

So yes, you are correct that we sacrifice bandwidth (the Nyquist limit), but we are optimizing for statistical independence. In our physical TRNG, the "event horizon" of a new random state is determined by the mechanical relaxation time of the diaphragm, not the clock speed of the ADC. Any sample taken before the diaphragm has had time to "forget" its previous position is correlated, not random. You'd rather have 500 bits/sec of pure entropy than 44,000 bits/sec of highly correlated, deterministically biased noise.

The better question would be: why a power-of-two number? Writing this I realized it would actually make more sense to take a prime number as the decimation ratio. If we decimate through a power of two like this, we risk synchronizing our sampling with periodic signals like mains hum, USB polling, or coil whine/fan vibrations. Using a prime number as the decimation step maximizes the likelihood that our sampling intervals are coprime to the periods of possible interfering signals. Instead of just happening to sample at the same interval as one of these signals, we'd slowly "drift" through the phase of the rhythmic interference, effectively averaging it out.
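
You can see the drift effect with a toy calculation (sampled-phases is a hypothetical helper, and the period-8 interferer is made up for illustration): a power-of-two step hits a period-8 signal at the same phase forever, while a prime step walks through every phase and averages it out:

Code:
(defun sampled-phases (step period n)
  "Phases of a PERIOD-sample interferer seen by every STEP-th sample, N times."
  (let (phases)
    (dotimes (i n (nreverse phases))
      (push (mod (* i step) period) phases))))

(sampled-phases 8 8 5)   ;; => (0 0 0 0 0)  locked to one phase
(sampled-phases 13 8 5)  ;; => (0 5 2 7 4)  drifts through the whole period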

So in short: 7 would actually be better, but in reality it is probably still way too low a number (~158 µs between samples). EDIT: On consideration, 13 (~3400 Hz) or 43 (~1000 Hz) would probably be best to avoid some very common rhythmic noise you'll find in a computer.
 
The diaphragm acts as a mechanical integrator. It takes time for its mass to accelerate and move, so especially in this mic the physical entropy bandwidth is far lower than 20 kHz anyway.
This reasoning appears suspect to me. How do you square this with the reality that these transducers routinely record signals well into the ultrasonics?
 

As a newfag hobbyist trying to learn basic programming and reading your post, I can only take solace in those wiser than me:

"What's reality? I don't know. When my bird was looking at my computer monitor, I thought, 'that bird has no idea what he's looking at.' And yet, what does the bird do? Does he panic? No, he can't really panic, he just does the best he can," - Terry Davis

By the way programmer Kiwis, what is the HTML sphere looking like? Last time I actually did a programming project was programming an HTML website for an Infotech final in school. Using just notepad and Macromedia I managed to score 100%. I know web development is a jeet infested Javascript hell, but pretending that Javascript never existed and India was nuked by Pakistan in 2001...how powerful is HTML5? Can you even realistically make a pure HTML / CSS website anymore?
 
technically you can make raw html sites without js but it's uncommon.
the classic web stack (html + css + js) is still dominant and i don't see it changing much.
 