ZFC is an easy-to-understand, intuitive, and practical axiomatic system that does pretty much anything a mathematician might need; I never understood the sperging about other axiomatic systems.
> ZFC is an easy-to-understand, intuitive, and practical axiomatic system that does pretty much anything a mathematician might need; I never understood the sperging about other axiomatic systems.

Of course, for like 90% of what mathematicians are doing, ZFC is more than adequate, mostly because 90% of mathematics doesn't even particularly rely on the foundations at all, as long as they're reasonable. There are examples where subtleties involving the axiom of choice, the continuum hypothesis, etc. can have a non-trivial effect, but they're not usually relevant (you bringing up non-measurable sets earlier is an interesting example, since if you drop choice then you can produce models of the real numbers where there are none). There are situations (e.g. in category theory) where one needs to assume things about sufficiently large sets existing, but I've never figured out quite what the precise issue is (something about functor categories?).
> There are examples where subtleties involving the axiom of choice, the continuum hypothesis, etc. can have a non-trivial effect, but they're not usually relevant (you bringing up non-measurable sets earlier is an interesting example, since if you drop choice then you can produce models of the real numbers where there are none).

AFAIK said models assume the existence of inaccessible cardinals, which is IMO a very bold axiom. The Axiom of Choice has some nasty consequences, like the infamous Banach–Tarski paradox; however, these nasty results don't disappear with vanilla ZF, they just become undecidable, so non-measurable sets are still a PITA either way.
> There are situations (e.g. in category theory) where one needs to assume things about sufficiently large sets existing, but I've never figured out quite what the precise issue is (something about functor categories?).

Ah yes, this is necessary to define many categories, like the category of sets, without suffering from paradoxes.
> An argument for alternative foundations like type theory is that if you care about computer proof systems, then type theory is a very natural way to approach that, since it's both a possible foundation for mathematics as well as one for computer science. There are also some other ideas in this direction, with trying to have formal systems that allow one to reason about things like infinity-categories as though they're not autistic as fuck, but as far as I can tell none of them are particularly successful at that.

That seems like a reasonable argument, though I know too little to comment.
> What's a good place to start when it comes to learning the math that is involved with machine learning?

Machine learning "math" is called numerical methods, or at least that's the building blocks for "machine learning". These were my main sources when I learned the basics of it:
> There are situations (e.g. in category theory) where one needs to assume things about sufficiently large sets existing, but I've never figured out quite what the precise issue is (something about functor categories?).

For the working mathematician, just saying "okay, it's actually a class, my bad" and adding "locally small" to your categories works well enough. After all, most of the "useful" parts of CT usually involve the locally small Set (i.e. Yoneda).
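For what it's worth, local smallness is exactly what's needed to even state the Yoneda lemma, since the hom-functor has to land in Set. Written out in LaTeX, for a locally small category $\mathcal{C}$:

\[
  \mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal{C}}(-, c),\, F\bigr) \;\cong\; F(c)
  \qquad \text{for } F : \mathcal{C}^{\mathrm{op}} \to \mathbf{Set},\ c \in \mathcal{C},
\]

and $\mathrm{Hom}_{\mathcal{C}}(-, c)$ is only a functor into $\mathbf{Set}$ when every hom-set is an honest set.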
> Scientific Computing: An Introductory Survey by Michael Heath

This looks like a good book; its sections on differential equations look promising. Here's a PDF (Tor mirror) for the 2nd edition.
> This looks like a good book; its sections on differential equations look promising. Here's a PDF (Tor mirror) for the 2nd edition.

Combine that book with "Mathematical Methods for Scientists and Engineers" by Donald A. McQuarrie, and you are now very well equipped to solve most shit that will show up. The book is pretty much a recipe cookbook for solving differential equations: you try some of the given methods and hopefully one works out in your case.
> Combine that book with "Mathematical Methods for Scientists and Engineers" by Donald A. McQuarrie, and you are now very well equipped to solve most shit that will show up. The book is pretty much a recipe cookbook for solving differential equations: you try some of the given methods and hopefully one works out in your case.

McQuarrie is pretty solid, saved my ass in quantum. Those Bessel equations are a bitch.
If the content in these books (the ones I've recommended in this thread) doesn't cover the problems you're trying to solve, I would really be questioning the model/problem setup you were given in the first place, since at that point you're at borderline research-level math/physics.
> What you're referring to, I think, is an issue that comes up when talking about (locally) small categories with the usual universe construction. Let U be a universe, and let locally small categories be those with hom-sets that are U-small. If you consider functor categories for locally small categories, how large are the functor categories? We know the hom-sets are in U, but the categories themselves might not be in U (they're only locally small). You have to either define a larger universe, or assume that one exists, in order to talk about functor categories in this case.

Your reply prompted me to finally bother to think properly about how to handle size in a sensible way, and I think I have it figured out now, in part because your phrasing made me see the correct way to look at things. I suddenly see how to actually make use of that hierarchy of universes U₁, U₂, ... that's usually mentioned.
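To spell out where the bump happens (the standard argument, sketched in LaTeX): for U-locally-small $\mathcal{C}, \mathcal{D}$ and functors $F, G : \mathcal{C} \to \mathcal{D}$, a natural transformation is an element of

\[
  \mathrm{Nat}(F, G) \;\subseteq\; \prod_{c \,\in\, \mathrm{Ob}(\mathcal{C})} \mathrm{Hom}_{\mathcal{D}}(Fc, Gc),
\]

where each factor is U-small, but the product is indexed by $\mathrm{Ob}(\mathcal{C})$, which need not lie in U. So $\mathrm{Fun}(\mathcal{C}, \mathcal{D})$ is only guaranteed to be locally U-small when $\mathcal{C}$ itself is U-small; otherwise you climb to the next universe $U_2 \ni U_1$, which is exactly where the hierarchy earns its keep.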
> But seeing how CT is also about finding and abstracting over patterns, you soon hit stuff like 2/n/infinity-categories, or just large categories in general, where you have to be really careful about how you talk about objects.

This I want to ask about, though, because I don't see the relation between n-categories (1 < n ≤ ∞) and size issues. Of course, one must be more careful about how one defines smallness (e.g. because, in a moral sense, objects will only be defined up to some notion of equivalence, and that notion could be really disrespectful of set theory; a point is homotopy equivalent to a disk, and obviously they are quite different in size), but I don't see why the situation is fundamentally different from ordinary 1-category theory. Notably, one can do an entire theory of n-categories purely inside a fixed universe, so everything is small (just as one can with 1-categories). The standard approach to ∞-categories following Lurie, Joyal, etc. is basically this, since one models them with simplicial sets (and these naturally assume a fixed universe with which to form the category Set).
> but I don't see why the situation is fundamentally different from ordinary 1-category theory

> The standard approach to ∞-categories following Lurie, Joyal, etc. is basically this, since one models them with simplicial sets

Technically, it's the same problem as in a 1-category, but I think morally it spirals out, and you can see this in the limiting case, where you need a bit more subtlety than just gluing additional supersets.
I've become addicted to taking an operator T, then creating an associated operator U = exp(T), and applying it to power series or Taylor series. As long as T has a closed form for being raised to a given power, you can typically end up having one for U.

A great example is U = exp(k*D), where D is the derivative: you end up with U f(x) = f(x+k). I actually saw this operator used without explanation or justification in a quantum class (in the form of exp(p), where p is the momentum operator), and it bothered me so much that I had to do the derivation myself. Oh, quick note: make sure you understand your operator's properties, otherwise you'll just be wrong.

Another fun thing is to get a representation of your operator as an infinite matrix that multiplies the polynomial basis vectors, then take the transpose and see what that gives you.
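For reference, the derivation is just the Taylor expansion in operator form, in LaTeX (valid whenever the series actually converges to f, which is exactly where the replies below poke holes):

\[
  e^{kD} f(x)
  \;=\; \sum_{n=0}^{\infty} \frac{(kD)^n}{n!}\, f(x)
  \;=\; \sum_{n=0}^{\infty} \frac{k^n}{n!}\, f^{(n)}(x)
  \;=\; f(x+k),
\]

where the last equality is the Taylor series of f about x, evaluated at x + k.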
> I have a couple of questions about e and the natural logarithm.
> 1) I have seen, and accept, several proofs and justifications that d(a^x)/dx = ln(a)*a^x. For example, here's a screenshot from this 3Blue1Brown video where Prof. Sanderson plugs 2^t into the definition of the derivative and factors 2^t out:
> [attachment: screenshot of the derivative computation]
> I can see that the part in parentheses is the "inverse exponential" definition of ln(2) with the n parameters replaced by their reciprocals, and I accept this. My question is more philosophical: when going from dt=1 to dt=0, dy/dt goes from 2^t to ln(2)*2^t. Where does this logarithm come from? To be clear, I am not asking why the base of this logarithm is e, of all numbers, a point covered in the video. I am asking why shrinking dt to 0 causes the derivative to be scaled down proportionally by this constant. Where does this constant come from? And why is it logarithmic with the base (of the exponent) as input?
> 2) Somewhat related to question 1: is there a reason this scaling constant is also the area under 1/x? In other words, is this relationship a coincidence?
> [attachment: plot of the area under 1/x]

So, logarithms are simply the conversion of one exponential base into another. The natural log is the conversion to and from the natural base, e. By divine providence, e^x has the property that it's the eigenfunction with eigenvalue 1 of the derivative and integral operators, and it is, in effect, the natural basis for any differential equation in one form or another. If you were comfortable using 2^x, you could do all of your calculus and just have some abstract factor k that appears when doing derivatives and integrals, but by inspection, it will always be the natural log of 2.
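One compact way to see both points at once (standard calculus, written out in LaTeX): express the base-2 exponential in the natural base,

\[
  2^{t} = e^{t \ln 2}
  \quad\Longrightarrow\quad
  \frac{d}{dt}\, 2^{t} = \ln(2)\, e^{t \ln 2} = \ln(2)\, 2^{t},
\]

so the constant is logarithmic in the base because the base only enters through the exponent t ln 2. And since ln is the antiderivative of 1/x with ln(1) = 0, that same constant is literally the area under 1/x from 1 to 2, so the relationship in question 2 is not a coincidence.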
> I've become addicted to taking an operator T, then creating an associated operator U = exp(T), and applying it to power series or Taylor series. [...]

I'm currently on break, but I considered what you said about the operator U. One can devise the function h on the reals such that h(x) = exp(-1/x^2) for x > 0 and h(x) = 0 for x <= 0. This is a smooth function with all the derivatives of h at x = 0 being zero. For this, one would have U(h)(0) = 0 for any chosen k. However, h(k) > 0 for k > 0. Thus U(h)(x) != h(x + k) at x = 0.
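In symbols, since every derivative of h vanishes at the origin:

\[
  (e^{kD} h)(0) \;=\; \sum_{n=0}^{\infty} \frac{k^{n}}{n!}\, h^{(n)}(0) \;=\; 0
  \;\neq\; e^{-1/k^{2}} \;=\; h(0+k) \qquad (k > 0),
\]

so the shift identity fails for this smooth (but non-analytic) h.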
> This got me thinking about what sort of functions f would satisfy U(f)(x) = f(x + k). Clearly it worked for polynomials, but certainly not so for all smooth functions. So what about analytic functions? It doesn't, and the proof of this is below.

A quick correction to the above proof: about a week after I made this post and had moved on from the problem, I mentioned the result and the proof to a colleague of mine. When it came to the series representation I had for U_k, he pointed out potential convergence issues for certain choices of k. Now, this was something that did occur to me while working on the problem, but I erroneously thought I had it handled with the fact that U_{k+l} = U_k ∘ U_l, as in "Statement and Initial Results". However, looking at U_k more carefully, I realized the error, and I believe its subtlety is worth mentioning.
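To illustrate the kind of convergence issue meant here (my own example, not one from the original proof): take f(x) = 1/(1+x^2), which is real-analytic on all of ℝ, yet its Taylor series at 0 has radius of convergence 1 because of the poles at ±i. Then

\[
  \sum_{n=0}^{\infty} \frac{k^{n}}{n!}\, f^{(n)}(0)
  \;=\; \sum_{m=0}^{\infty} (-1)^{m} k^{2m}
  \quad\text{diverges for } |k| \ge 1,
\]

so U_k f isn't given by its series there, even though f is analytic everywhere.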
> Based and mathpilled kiwis.
> It'd be cool if there was an easy way to add LaTeX support to the forums, like with MathJax or something.

I would definitely like something like that. What I do now, if I really need the notation, is just type it up in a TeX editor and screencap the PDF.