KF Math Thread - Discuss Math

Really curious to hear takes on this.
I am glad to be given the opportunity to sperg.

My personal view on this issue is that one is doing mathematics precisely when one is proving things. In particular, if one is not doing something which constitutes "proving", then one is not doing mathematics. This is not necessarily a bad thing.

Just to develop the above a bit more, mostly for myself: when it comes to problem solving, there are at least two reasonably effective methods, namely the mathematician's approach and the engineer's approach. Often they overlap in various ways, and someone may employ both at the same time, but the difference is that a mathematician will attempt to arrive at a solution through proof (thereby also proving the correctness of the solution), while the more practically minded engineer basically uses a guess-and-check methodology. Apply heuristics to generate guesses, then check if they work. Continue refining until the approximated solution is good enough for the intended application.
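The guess-and-check loop just described can be made concrete. Here's a minimal sketch in Python of the engineer's method as bisection root-finding (all names illustrative): guess the midpoint, check which half holds the answer, refine until the approximation is good enough.

```python
def bisect(f, lo, hi, tol=1e-9):
    """Guess-and-check root finding: repeatedly guess the midpoint of the
    interval, check which half contains the sign change, and refine."""
    assert f(lo) * f(hi) < 0, "need a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # the sign change (hence the root) is in the left half
        else:
            lo = mid  # the root is in the right half
    return (lo + hi) / 2

# Approximate sqrt(2) as the positive root of x^2 - 2 = 0.
root = bisect(lambda x: x * x - 2, 0, 2)
```

No proof of the answer in sight, just a guarantee that the guesses converge, which is exactly the engineer's bargain.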

As said, the two often overlap in various ways. Mathematicians frequently apply the heuristics of the engineer's approach to generate conjectures, for example. Meanwhile, engineers frequently apply the mathematician's approach to generate heuristics, or reduce the problem space. The mathematician's approach also appears in practical applications when a provable solution is for one reason or another necessary, such as in safety-critical programs and so on. I'm thinking of things like programming languages that implement some amount of formal program verification. One will also find things that are called "mathematics" but which are in many ways much closer to engineering, particularly in numerical methods and machine learning, where very few results are actually known to provably work, and when they do, it's often in super specialized scenarios that don't necessarily apply in practical situations. Conversely, though, a shocking amount of mathematical research in the area concerns methods that are genuinely useless in practice, but which are studied only because it is possible to prove things about them.

For me, as someone who is actually a working mathematician, of course I cannot say I think there is too much focus on proofs. The proofs are what I like about it. Without proofs, I am very lost. At the same time, I cannot say they help me with solving problems, unless those problems are to prove things.


Here's a topic which irks me and tangentially relates to what I've written above, in case anyone wants to chime in on it: research papers in areas like modern algebraic geometry, (higher) category theory, algebraic topology, etc. are absolutely full of statements which are simply not proven, for which there is no citation, and for which there is also no proof anywhere in the literature. These are statements which are certainly correct, but whose proofs are only contained in the heads of a few relevant experts who never felt it was important to write them down. This is played off as "folklore".

Even more ridiculous: there is a developing, fashionable field of mathematics, involving at least one Fields medalist, for which the main reference at the moment is a YouTube playlist. I'm not joking.
 
if one is not doing something which constitutes "proving", then one is not doing mathematics
I think this is a solid point. Part of my perspective is me coming from physics, where the majority of what we do is derivation as opposed to proving things. I suppose this is an applied/theory thing.
 
but I've been finding more and more with stuff that I work on or read about that the proofs themselves don't really aid me in problem solving.
I don't see any reason these two should necessarily go hand-in-hand. I mean, just to take the most famous example, Russell and Whitehead took hundreds of pages to prove 1+1=2. Of course nobody was ever expecting this to be useful in real-world arithmetic.
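Amusingly, in a modern proof assistant the same fact is a one-liner, because the formal foundations do the heavy lifting up front; e.g. in Lean:

```lean
-- 1 + 1 = 2 holds by definitional unfolding of addition on ℕ
example : 1 + 1 = 2 := rfl
```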
 
Really curious to hear takes on this.
N.b.: I asked ChatGPT:

"I've heard that Benoît Mandelbrot was less 'proof-oriented' than other mathematicians"

ChatGPT replied:

"Mandelbrot really did come at mathematics from a sideways corridor, and that flavor shows in how he worked. He wasn't 'anti-proof' in the crank sense; he simply didn’t treat proofs as the central engine of discovery. His instinct was geometric first, analytic second. He trusted patterns you could actually see—not as decoration, but as empirical data about the structure of mathematics itself.

That rubbed some mathematicians the wrong way, because the culture of pure math leans heavily toward formalism: conjecture → theorem → proof → publication. Mandelbrot didn't start there. He started by poking at messy phenomena—turbulence, price fluctuations, the jagged coast of Britain—and noticing that nature kept drawing the same shape again and again with different pens. He then built mathematical structures backward from those observations. This is closer to how a physicist or an applied mathematician thinks.

His big contribution, fractal geometry, illustrates the divide. Mathematicians had studied various fractal sets for decades before Mandelbrot. He unified them under a single geometric idea and name, pushed visual computing to map them at scale, and argued that these objects weren’t just curiosities but the native language of roughness itself. That unification was fundamentally conceptual and aesthetic. It wasn't the kind of step that requires a long chain of lemmas; it was the kind that requires you to look at a screen full of pixels and mutter 'this shape is telling me something.'

Critics sometimes portrayed him as sloppy because he didn't always chase down the full formal machinery. He expected the 'pure' folks to do that part, and they did. The proofs that the Mandelbrot set is connected, for instance, were completed by others. But the pipeline itself—empirical curiosity → visual experiment → concept → formalization—was sound. It simply violated the preferred ritual order in some departments.

The irony: the computer visualization that helped earn him that reputation also made a whole generation fall in love with the subject. He made roughness respectable. And the rough edges he left behind became invitations for others to finish the job.

If you want to see this tension in sharper relief, the history of turbulence modeling and heavy-tailed probability is a fascinating next step; Mandelbrot was early there too, and the proofs followed slowly and reluctantly."

I've also never seen a picture of Benoît Mandelbrot and Wendy's founder Dave Thomas in the same room.
 
What's your latest aha moment, where things just clicked? Any tricks to help speed up this process of brain expansion? Personally, arguing about whatever you're trying to study with someone and then sleeping on it are a great combo.

I revisited Baire's category theorem (which I remember as being unintuitive) in terms of nowhere dense sets and it just started making sense. I could even sketch out a rough proof for it off the top of my head.
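For anyone else revisiting it, the nowhere-dense formulation is:

```latex
% Baire category theorem, nowhere-dense form
\textbf{Theorem (Baire).} A nonempty complete metric space $(X, d)$ is not a
countable union of nowhere dense sets: if $X = \bigcup_{n=1}^{\infty} A_n$,
then at least one closure $\overline{A_n}$ has nonempty interior.
```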
 
N.b.: I asked ChatGPT:

"I've heard that Benoît Mandelbrot was less 'proof-oriented' than other mathematicians"

ChatGPT replied:
[wall of text]
So, are we now all supposed to go individually fact check this, to ensure it's not just BS word salad, or what? I don't understand the point of generated text posts on a forum.
 
Category theory scares me, I think I might be 2 retarded for it
I would say that Category Theory isn't that difficult (relatively speaking) if you can look past all the formalisms and abstraction. It's like noticing how an isomorphism from algebra is kinda like a homeomorphism from topology and formalizing the analogy between them. It has literally 0 uses so I wouldn't worry about not getting it.

(p.s. a monad is a monoid in the category of endofunctors)
 
Category theory is one of those subjects which is really very hard to get into on its own. It's better to study it while being supplied with many examples, or while studying something else which happens to use it. Luckily, it is possible to do this since it is actually quite pervasive in certain areas of mathematics, notably algebraic topology and homological algebra. Algebraic geometry also benefits from category theory, at least when using the formalism of schemes (which everyone should, anyway). A certain type of representation theorist also employs a good amount of category theory, I believe. If one is interested in logic, then topoi are a big deal there.

Personally, category theory got a lot easier for me when I simply got used to the way it does things, and now it's probably one of my favorite areas of mathematics. The big thing for me was letting go of the necessity of having a concrete, constructive model for things like (co)limits, adjoint functors, and so on. Of course, with that said, after seeing many, many examples, one does build a kind of mental model of what these things tend to do and mean, which plays a similar role. Limits are like subsets of products, and colimits are like quotients of unions. Adjoints are frankly too versatile to narrow down to just one concept, but there are essentially only three types of adjunctions which appear "in nature" so to speak, namely free/forgetful adjunctions, tensor/hom adjunctions, and spectrum/global functions adjunctions.
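The free/forgetful case, for example, is captured by the usual natural hom-set bijection (writing $F$ for the free functor and $U$ for the forgetful one, with free groups as the stock instance):

```latex
% Free/forgetful adjunction F \dashv U, e.g. the free group on a set X:
% group homomorphisms out of FX correspond exactly to set maps out of X.
\mathrm{Hom}_{\mathbf{Grp}}(F X,\, G) \;\cong\; \mathrm{Hom}_{\mathbf{Set}}(X,\, U G)
```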

My opinion is that everyone doing mathematics could benefit from learning some category theory, just because I think learning it changes your brain and makes you write better proofs.
 
I was doing some reading about the great Haskell Curry and came across this amusing excerpt:
The "Curry" in "Currying" is a reference to logician Haskell Curry, who used the concept extensively, but Moses Schönfinkel had the idea six years before Curry. The alternative name "Schönfinkelisation" has been proposed.
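Whatever one calls it, the idea itself is simple: turn a function of several arguments into a chain of single-argument functions. A quick sketch in Python (the function names are illustrative):

```python
def schonfinkelize(f):
    """Curry a two-argument function into a chain of one-argument functions."""
    return lambda x: lambda y: f(x, y)

def add(x, y):
    return x + y

# Partially applied: add3 remembers x=3 and waits for the second argument.
add3 = schonfinkelize(add)(3)
```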

(p.s. a monad is a monoid in the category of endofunctors)
I am convinced that category theory would be much easier for people to get into if it didn't use all of these retarded nebulous terms that sound like reproductive organs.
 
I'm sort of ashamed to admit I barely scraped by my mathematics courses in high school, to the point I wanted to try to learn it in college out of my own pocket and flunked miserably when I finally got to math 100. Saw this thread and it got me wanting to look up a math 100 test just to see what I could figure out, and eventually got to a point where I asked ChatGPT to teach me (not solve) equations. Been a fun way to kill an hour, and I apparently know more than I thought I did. So thank you, math nerds, for inspiring this dumb himbo to play with high school math. I think this is one of the only things I've ever consulted a chatbot for and it's been surprisingly fun. I just hope it's correct, but I assume extremely basic algebra is just fine for a computer to teach.
 
I am convinced that category theory would be much easier for people to get into if it didn't use all of these retarded nebulous terms that sound like reproductive organs.
Would topology be easier or harder to teach/study if all the terms weren't basically synonyms? Especially when you start chasing implications, like if a closed set can be compact. What about dense? Nowhere dense? Surely it can't be open, right? Or that you can have uncountable bounded sets but unbounded countable ones while both are infinite.
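For anyone chasing those implications, a few stock examples in $\mathbb{R}$ (usual topology) untangle the near-synonyms:

```latex
\begin{itemize}
  \item $[0,1]$: closed, bounded, hence compact (Heine--Borel); uncountable.
  \item $\mathbb{Z}$: closed, countable, unbounded, nowhere dense, not compact.
  \item $\mathbb{Q}$: countable and dense, yet neither open nor closed.
  \item The Cantor set: closed, compact, nowhere dense, and uncountable.
\end{itemize}
```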
 
I'm sort of ashamed to admit I barely scraped by my mathematics courses in high school, to the point I wanted to try to learn it in college out of my own pocket and flunked miserably when I finally got to math 100. Saw this thread and it got me wanting to look up a math 100 test just to see what I could figure out, and eventually got to a point where I asked ChatGPT to teach me (not solve) equations. Been a fun way to kill an hour, and I apparently know more than I thought I did. So thank you, math nerds, for inspiring this dumb himbo to play with high school math. I think this is one of the only things I've ever consulted a chatbot for and it's been surprisingly fun. I just hope it's correct, but I assume extremely basic algebra is just fine for a computer to teach.
There are many reasons why somebody would fuck up in math class in high school or earlier; most of them can be addressed directly and aren't usually reflective of any innate deficiency, especially if you did well in other classes or in other parts of life. If you can do basic arithmetic in IRL situations and you aren't significantly worse than your peers, then you are, at worst, perfectly average.

Speaking from experience, if you are doing this to brain train or to pursue higher education, then the only (free) way to self-study is to look up whatever the curriculum in your area is, see what parts of it you get fully or mostly, then proceed linearly or non-linearly at your own pace from there. If you are confident enough in your math skills, practice sheets for standardized tests like the SAT or your local equivalent can help you gauge where you are at. Beyond the high school level, open courseware projects like MIT OCW might be of interest to you. I haven't tried it, but I hear the infinite practice problems on sites like Khan Academy might also be helpful.

I'm just laying out all the options that usually come after each stage assuming the uncommon but not rare "spark" that some people get when retrying math might happen to you. If you want to fiddle around with what you got or you feel like stopping, then that's fine as well. You aren't going to absorb it long term if you aren't interested in the material, so it's best to master or have fun with what you really want to do. Anyone telling you otherwise is either an elitist or a retard that doesn't understand basic pedagogy.
 
It's kind of insane to realize what computers actually do underneath all the abstractions and human-readability: they just perform arithmetic and conditional logic, but at a speed so fast that we can barely fathom it without the necessary abstractions to go with it. 2GHz on a CPU doesn't sound impressive because we're so used to games and their fps, and when it's lower than 30 it's bad. But that 2GHz CPU is still running 2 billion cycles every single second per core. You get an appreciation for older tech once you realize what it's actually doing underneath it all. I got an appreciation for my calculator running at 0.0006W, doing calculations some of us wouldn't even try at the mere sight of, when all it's doing underneath is conditional mathematics. It's kind of insane to think about. We have equipment people in the 1800s could only dream about and we say "oh, that's a shitty Intel Pentium CPU, I won't bother with such a shit CPU, even for its price of $3". That shit would cost millions in the 1800s for its sheer mathematical performance alone.
 
Dear Green Reddit,

I have been memeing my friend into trying to do fractions with me for a year now to improve his skills (he hates math and is already working blue collar so this would just be for self improvement etc.). He has refused me. He has denied me the pleasure.

However, he will soon be taking an aptitude test for a blue-collar job that has a portion on polynomials and multi-step equations with fractions. He now needs help.

My long game has prevailed.

I will teach this nigga that fractions are
✨fun✨
 
Not exactly embedded, but I threw together a quick browser extension that will render math in posts on the forum. Firefox users can get it from AMO here. Chomelets can pester me in private messages to get a .zip with the unpacked extension. No git forge as of yet, but I can send source tarballs to anyone keen.

You can use this to see if it's working: $\mathbb Nigger$. $$\mathbb N^iG_Ge \mathbf R$$ It should also work in previews so you don't accidentally embarrass yourself with your shitty LaTeX skills. If you click the addon icon it'll rerun on the current page. Also $$ doesn't work across multiple lines right now.
Your extension was briefly mentioned in an article where someone tried to run Firefox with every single extension added to it.

 
I have two questions relating to astronomy, but it's really about spherical geometry. I'm not very good at putting what's in my mind's eye to the page so this is a long one, sorry.

I'm interested in converting the Sun's ecliptic coordinates (longitude and latitude) to equatorial coordinates (right ascension and declination). For illustration, the ecliptic coordinates have 0° latitude as the green line, and equatorial coordinates have 0° declination as the red line in the figure. The March equinox is the point where the circles cross that is nearest to the viewer:
[Figure: Celestial Sphere - Equatorial Coordinate System]
File: Celestial Sphere - Equatorial Coordinate System.png, https://commons.wikimedia.org/w/ind...atorial_Coordinate_System.png&oldid=852226549 (last visited April 30, 2026).

The formulas for converting between the two systems are listed below
Code:
𝛼 = arctan(cos(𝜀)tan(𝜆)), where 𝛼 is in the same quadrant as 𝜆 (1)
𝛿 = arcsin(sin(𝜀)sin(𝜆)) (2)
Where:
- 𝛼 is the right ascension.
- 𝜀 is the obliquity of the ecliptic i.e. the inclination between the two great circles above.
- 𝜆 is the ecliptic longitude. Since the Sun's latitude only varies by around one arcsecond, this one parameter fully describes the Sun's position.
- 𝛿 is the declination.

Reference:
Position of the Sun, https://en.wikipedia.org/w/index.php?title=Position_of_the_Sun&oldid=1329436423 (last visited Apr. 30, 2026).
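As a sanity check on the formulas, here's a minimal sketch in Python (function and parameter names are my own); using atan2 instead of arctan handles the "same quadrant as 𝜆" condition automatically:

```python
import math

def sun_equatorial(lam_deg, eps_deg=23.4393):
    """Convert the Sun's ecliptic longitude (latitude assumed ~0) to
    right ascension and declination, all angles in degrees."""
    lam = math.radians(lam_deg)
    eps = math.radians(eps_deg)
    # atan2(cos(eps)*sin(lam), cos(lam)) equals arctan(cos(eps)*tan(lam)),
    # but lands in the correct quadrant without manual adjustment.
    ra = math.degrees(math.atan2(math.cos(eps) * math.sin(lam), math.cos(lam)))
    dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
    return ra % 360, dec
```

At 𝜆 = 90° (June solstice) this gives 𝛼 = 90° and 𝛿 = 𝜀, as expected.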

I don't deny that these equations produce correct results, but I'm curious why they work. I will explain why I think these equations work, and if I'm wrong, please tell me, because I still have some niggling doubts. I have labeled points I am unsure about with [?]. I have also spoilered them for length.

Assume you have some position of the Sun at an angle, 𝜆, along the ecliptic from the March equinox. Imagine a plane tangent to the sphere at the March equinox. Next, draw a line along this plane that is also tangent to the sphere at the March equinox and coplanar with the ecliptic. Continue until it meets a line projected outward from the center of the sphere and passing through the Sun. The length of this line is tan(𝜆).

Draw a line along the tangent plane that is parallel to the polar axis and passes through the end of the first line. Continue until the second line meets the equatorial plane.

Finally, draw a line along the tangent plane that is coplanar with the equatorial plane and tangent to the sphere at the equinox and continue until it meets the second line and forms a right angle with it. This line will be inclined from the first by 𝜀.

The three lines form a right triangle with hypotenuse tan(𝜆), opposite side sin(𝜀)tan(𝜆), and adjacent side cos(𝜀)tan(𝜆). The adjacent side is tan(𝛼) [??], so arctan(cos(𝜀)tan(𝜆)) (equation (1)) will output the right ascension.

The main reason I put double question marks over the final point is that I would have thought that using arctan on the opposite side, sin(𝜀)tan(𝜆), would give the declination, but it does not. Why? I had assumed the second line drawn would be equivalent to the meridian of the Sun's right ascension and therefore its length would be equal to its declination, but that isn't the case?

This explanation assumes that the angle between the two faces of a spherical wedge is constant as long as it is measured on a plane that is perpendicular to the line between the two points where the great circles forming the wedge cross and the two faces [?].

Again, assume we have the Sun at some angle along the ecliptic, 𝜆. The ecliptic and the equator form a spherical wedge. The line measuring sin(𝜆) is coplanar with one face of the wedge. A line that is drawn through the point where sin(𝜆) meets the line between equinoxes and coplanar with the other face (the equatorial plane) and perpendicular to the line passing through the equinoxes will always form an angle with sin(𝜆) of 𝜀.

A third line passing through the Sun and parallel to the (equatorial) polar axis will form another right triangle. This time with hypotenuse sin(𝜆), opposite side sin(𝜀)sin(𝜆) and adjacent side cos(𝜀)sin(𝜆). The opposite side is the same line as sin(𝛿). Therefore arcsin(sin(𝜀)sin(𝜆)) will give the declination.

My question is: Equation (2) makes the assumption that the ecliptic latitude, 𝛽, is almost zero, which is true for the Sun, but what if I wanted to convert the coordinates of an object whose latitude was not zero? How would equation (2) change?
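For what it's worth, the standard general conversion found in spherical-astronomy references (which I'm quoting from memory, so verify before relying on it) keeps the ecliptic latitude 𝛽 and reduces to equations (1) and (2) when 𝛽 = 0:

```latex
% General ecliptic -> equatorial conversion for an object at latitude beta:
\sin\delta = \sin\beta\cos\varepsilon + \cos\beta\sin\varepsilon\sin\lambda
% and for right ascension (quadrant chosen from the signs, as before):
\tan\alpha = \frac{\sin\lambda\cos\varepsilon - \tan\beta\sin\varepsilon}{\cos\lambda}
```

Setting 𝛽 = 0 makes sin 𝛽 = tan 𝛽 = 0 and cos 𝛽 = 1, recovering sin 𝛿 = sin 𝜀 sin 𝜆 and tan 𝛼 = cos 𝜀 tan 𝜆.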

I'm always asking things of this thread and never giving. Here's a nifty site containing everything there is to know about the Equation of Time. It was interesting to me and I hope it will be interesting to you.
 