Eliezer Schlomo Yudkowsky / LessWrong

Since hell is infinitely bad, any non-zero probability of an infinitely bad outcome is also infinitely bad (side note: this very problem is why Aleph-Null was invented, as a reason to still try to reduce your probability of going to hell even if you can't get it down to zero; something a Bayesian literally can't do unless their prior of going to hell was originally exactly zero, a prior no good Christian would hold). So no increased risk of going to hell is worth taking in order to be heroic, and leading a good life is the best thing to do. Yudkowsky is arguing against that. He may have written this as an edgy teenager or 20-something; I should check at some point.
This is not only morally broken but broken even in the probabilistic sense it pretends to. Since there's no prior knowledge of which outcomes are more likely, you can't look at it from a frequentist perspective, and since you have no knowledge, even incomplete, of the possible outcomes, there's no way to look at it from a Bayesian perspective either.

There's also no reason to think whatever "God" exists wouldn't just be pissed off by such a crude probabilistic bet-hedging strategy for tricking him and throw you in Hell for that alone, so for all you know, all your scheming just guarantees you end up in Hell.

It's like a more sophomoric and pretentious version of Pascal's Wager and broken for the same reason.
 
There's also no reason to think whatever "God" exists wouldn't just be pissed off by such a crude probabilistic bet-hedging strategy for tricking him and throw you in Hell for that alone, so for all you know, all your scheming just guarantees you end up in Hell.
Sure, I think there's a reason almost no one argues like this anymore. Eliezer's parents were obviously Jewish and not Christian, but from what I've read he had a major problem with that, presumably leading him to write lines such as "No rescuer hath the rescuer" and "no mother and no father, / only nothingness above": 1) if the world's all there is, there's nothing left if you don't save it ("only nothingness above"); 2) there's no heaven, there's no do-over, nothing supernatural is going to save you, and you're the one responsible for saving the world ("No rescuer hath the rescuer"); 3) obviously, "no mother and no father" relates to the religious parents he had, as well as other "parent figures" like science (Science?) and Feynman. "You" in this case refers to Yudkowsky, and according to the article, he thinks he's it: nothing above him, and everything below.
Edit: yes, it's silly, but so are Shabbat elevators that don't require manual operation and the attempt to put an elevated wire (an eruv) around Manhattan. People have attempted, and still attempt, to rules-lawyer God.
 
3) Obviously, "no mother and no father" relates to the religious parents he had, as well as other "parent figures" like science (Science?) and Feynman. "You" in this case refers to Yudkowsky, and according to the article, he thinks he's it: nothing above him, and everything below.
John Lennon said the same shit in "Imagine" but somehow, less insufferably than Yuddo the Clown.
 
side note, this very problem is why Aleph-Null was invented, as a reason to still try to reduce your probability of going to hell if you can't get it down to zero
At the risk of completely derailing the thread with autism: What are you referring to here? Probabilities are real numbers between 0 and 1, the cardinality of ℕ doesn't seem related to me.
 
At the risk of completely derailing the thread with autism: What are you referring to here? Probabilities are real numbers between 0 and 1, the cardinality of ℕ doesn't seem related to me.
Probabilities include 0 and 1, not just the numbers between them.
 
At the risk of completely derailing the thread with autism: What are you referring to here? Probabilities are real numbers between 0 and 1, the cardinality of ℕ doesn't seem related to me.
A prospect is defined as a probability of an outcome, so the utility of a prospect = the utility of the outcome * the probability of that outcome. The probability of the outcome can't be infinite. (Though philosophers like smuggling infinities in wherever they can: a lot of them take space and time to be infinitely divisible. Imagine that right before you throw an infinitely narrow dart (assume it can't go in sideways at all), a point is picked uniformly at random on the 1 meter by 1 meter square target spanning (-0.5, -0.5) to (0.5, 0.5). The dart has exactly zero probability of hitting that point, yet hitting it is not an impossible event, since the dart always hits somewhere on the target. Hence they take "P(whatever) exactly equals 0" and "it's impossible for whatever to happen" to be different statements.) The expected reward or punishment, however, can be infinite, and since any non-zero number (a non-zero probability) multiplied by infinity is infinity, you get a problem there. You can "fix" it in various ways; one is to assign positive and negative transfinites as utilities to the outcomes, for example a "bigger" transfinite to being in heaven and "smaller" ones to being in hell (keeping in mind that Aleph-Null is the smallest transfinite cardinal, so the "bigger" ones would be Aleph-One and up).
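The blow-up described above can be sketched numerically. This is my own illustration using IEEE-754 infinities, not anything from the thread; the function name is made up:

```python
import math

def expected_utility(probability, utility):
    """Utility of a prospect = probability of the outcome * utility of the outcome."""
    return probability * utility

# A tiny non-zero chance of an infinitely bad outcome is still infinitely bad:
print(expected_utility(1e-12, -math.inf))   # -inf
# ...and it is indistinguishable from a near-certain infinitely bad outcome:
print(expected_utility(0.999, -math.inf))   # -inf
# Only an exactly-zero probability escapes, and even then 0 * inf is undefined:
print(expected_utility(0.0, -math.inf))     # nan
```

This is exactly why a non-zero prior of hell dominates every finite consideration in the naive calculation.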
 
a lot of them take space and time to be infinitely divisible
I assume spacetime has a certain granularity, that is, neither space nor time is infinitely subdivisible, but there is a smallest unit of either. One convenience of this is that if it is so, things like Zeno's paradoxes (like Achilles/Tortoise) evaporate. I think they're still arguing about this though.
 
I assume spacetime has a certain granularity, that is, neither space nor time is infinitely subdivisible, but there is a smallest unit of either. One convenience of this is that if it is so, things like Zeno's paradoxes (like Achilles/Tortoise) evaporate. I think they're still arguing about this though.
At the risk of completely derailing the thread with autism: What are you referring to here? Probabilities are real numbers between 0 and 1, the cardinality of ℕ doesn't seem related to me.
In practice I agree with both of you: I don't think infinities of any kind (infinite precision, infinite numbers, infinite "tape size" for Turing machines, infinite time) should appear anywhere in calculations about the real world, even implicitly or smuggled in. That's just not how many mathematicians and philosophers like to think about it, but they should accept that it's impossible to be perfectly rational in the real world, and that to make a form of rationality practical you need to break recursions early and approximate like your life depends on it (maybe it does). Even clamping all the inputs to Bayes' rule (the prior, P(evidence given hypothesis), and P(evidence given not hypothesis), in one standard form) to values between epsilon and 1 - epsilon, so that everything too close to impossible or certain gets pruned out, lets you do a lot more computation at reasonable accuracy and make better decisions than if you rigidly followed what the philosophers say is "as rational as possible" under the time and storage constraints you have.
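A minimal sketch of the clamping idea, assuming a simple two-hypothesis Bayes update; the epsilon value and function names are my own illustrative choices:

```python
EPS = 1e-9  # assumed tolerance; pick to taste

def clamp(p, eps=EPS):
    """Keep a probability inside [eps, 1 - eps], never exactly 0 or 1."""
    return min(max(p, eps), 1.0 - eps)

def bayes_update(prior, p_e_given_h, p_e_given_not_h, eps=EPS):
    """Posterior P(H|E) via Bayes' rule, with all inputs clamped first."""
    prior = clamp(prior, eps)
    p_e_given_h = clamp(p_e_given_h, eps)
    p_e_given_not_h = clamp(p_e_given_not_h, eps)
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# An ordinary update passes through essentially unchanged:
print(bayes_update(0.5, 0.8, 0.2))   # ~0.8
# A dogmatic prior of exactly 1.0 would normally be frozen forever;
# clamping lets contrary evidence still nudge it below certainty:
print(bayes_update(1.0, 0.01, 0.99))
```

The design point is the second call: with an unclamped prior of exactly 1, the posterior is 1 no matter what the evidence says, which is the pathology being avoided.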
 
A prospect is defined as a probability of an outcome, so the utility of a prospect = the utility of the outcome * the probability of that outcome. The probability of the outcome can't be infinite. (Though philosophers like smuggling infinities in wherever they can: a lot of them take space and time to be infinitely divisible. Imagine that right before you throw an infinitely narrow dart (assume it can't go in sideways at all), a point is picked uniformly at random on the 1 meter by 1 meter square target spanning (-0.5, -0.5) to (0.5, 0.5). The dart has exactly zero probability of hitting that point, yet hitting it is not an impossible event, since the dart always hits somewhere on the target. Hence they take "P(whatever) exactly equals 0" and "it's impossible for whatever to happen" to be different statements.) The expected reward or punishment, however, can be infinite, and since any non-zero number (a non-zero probability) multiplied by infinity is infinity, you get a problem there. You can "fix" it in various ways; one is to assign positive and negative transfinites as utilities to the outcomes, for example a "bigger" transfinite to being in heaven and "smaller" ones to being in hell (keeping in mind that Aleph-Null is the smallest transfinite cardinal, so the "bigger" ones would be Aleph-One and up).

Philosophers are bullshit artists and conmen by trade. Proper understanding of mathematics, physics, or just science in general often helps to unravel their nonsense or expose it for the idiocy that it is. However, most people don't understand infinity, and it's not something that usual arithmetic operations apply to since it's not a number. For example if I have infinitely many numbers, such as is the case with integers, I can remove the even numbers (of which there are infinitely many) and be left with the odd numbers (of which there are also infinitely many). In other words ∞ - ∞ = ∞ if I were to write it out like a math equation. I could also take all of the integers and remove all of the numbers that aren't 42 (of which there are infinitely many) and write something like ∞ - ∞ = 42. Or take the set of integers and remove all of the integers and I get ∞ - ∞ = 0. This should suggest that subtraction isn't something that applies to infinity or that it's meaningless in our normal understanding of numbers.
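The same lesson is baked into floating-point arithmetic: "infinity minus infinity" has no single answer, so the hardware convention is to return NaN rather than pick one. A quick illustration of my own, not from the post:

```python
import math

inf = math.inf
# ∞ - ∞ could "equal" ∞, 42, or 0 depending on which sets you remove,
# so IEEE-754 refuses to choose and yields NaN (not a number):
print(inf - inf)                # nan
print(math.isnan(inf - inf))    # True
# Removing a finite amount from an infinite quantity stays infinite:
print(inf - 1_000_000)          # inf
```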

Your own example with the dart has a flaw. Since you posit that the dart must hit somewhere on the target, there isn't any point on the target where the probability of being hit is exactly 0, assuming the dart lands truly at random. Start with the simple case where we treat the entire target as one giant point (essentially you can't measure any smaller than one meter), which makes the probability of hitting that point exactly 1. Divide it into quarters and now each point has a probability of 0.25 of being hit (assuming each is equally likely), but the total probability is still 1, since the dart must land in one of those four points. Further subdivide each of those four points into quarters and the total probability remains 1 even though the probability of any one point being hit has decreased. After doing this repeatedly, the probability of hitting any one point might become vanishingly small (i.e. very close to 0), but it will never be exactly 0. Since we can look at the target and see where the dart hit, we can divide the target into some set of points based on the limits of our measurement. Perhaps future technology could increase the accuracy of our measurements, but that still doesn't reduce the probability of hitting any of those points to zero.
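The quartering argument above can be sketched in a few lines (my own illustration; the function name is made up): each subdivision multiplies the cell count by 4 and divides each cell's probability by 4, so the total stays exactly 1 while the per-cell probability shrinks toward 0 without ever reaching it.

```python
def per_cell_probability(n_subdivisions):
    """After n quarterings of the target, return (cell count, probability per cell)."""
    cells = 4 ** n_subdivisions
    p_cell = 1.0 / cells
    return cells, p_cell

for n in (0, 1, 2, 10):
    cells, p = per_cell_probability(n)
    # p shrinks, but cells * p stays 1: the dart must land in some cell.
    print(f"after {n} subdivisions: {cells} cells, p = {p}, total = {cells * p}")
```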

One could make a mathematical argument that it's not possible to express the coordinate as a real number, since the reals are not countably infinite and the law of total probability requires a countable set of events, but I think that's just playing word games: we'll eventually reach a limit in how accurately we can measure, past which it's not possible to distinguish two values (e.g. the x coordinates .0123456788 and .0123456789). Or rather, since you've already stated that there's a non-zero probability of hitting somewhere on the target, it's best to say that there is a set of disjoint points with non-zero probabilities that sum to that total probability. If this isn't the case, statisticians would very much like to know, because it would mean their formulas are wrong.

Yudkowsky, like other bullshitters, has a tendency to start from some flawed premise and assume that the rules of probability or mathematics can still be applied. It's similar to mathematical "proofs" that 1 = 2, or something along those lines. It's obviously not true, but if a person can't spot where the laws of mathematics were violated, the proof might seem convincing even though the result is clearly incorrect. Conmen rarely pick something quite so obvious, so some people might accept the arguments as reasonable or valid. They might like to invoke mathematics to appear intelligent, but when they're applying it incorrectly or violating some precondition, any conclusion they draw is worthless. I think Yudkowsky is enough of an idiot (the sort who labors under the delusion that he's highly intelligent, but is really just getting high off of his own farts) that he's conned himself as much as, or even more than, anyone else.
 
Yudkowsky, like other bullshitters, has a tendency to start from some flawed premise and assume that the rules of probability or mathematics can still be applied
I don't think Yudkowsky believes this stuff, and I absolutely don't. Back when he was working at the Singularity Institute, I think, and still wanted to create an AGI, he'd have said to wait and let the computer handle it, but now he'd probably say it's not worth thinking about.
Your own example with the dart has a flaw. Since you posit that the dart must hit somewhere on the target, there isn't any point on the target where the probability of being hit is exactly 0, assuming the dart lands truly at random. Start with the simple case where we treat the entire target as one giant point (essentially you can't measure any smaller than one meter), which makes the probability of hitting that point exactly 1. Divide it into quarters and now each point has a probability of 0.25 of being hit (assuming each is equally likely), but the total probability is still 1, since the dart must land in one of those four points. Further subdivide each of those four points into quarters and the total probability remains 1 even though the probability of any one point being hit has decreased. After doing this repeatedly, the probability of hitting any one point might become vanishingly small (i.e. very close to 0), but it will never be exactly 0. Since we can look at the target and see where the dart hit, we can divide the target into some set of points based on the limits of our measurement. Perhaps future technology could increase the accuracy of our measurements, but that still doesn't reduce the probability of hitting any of those points to zero.

One could make a mathematical argument that it's not possible to express the coordinate as a real number, since the reals are not countably infinite and the law of total probability requires a countable set of events, but I think that's just playing word games: we'll eventually reach a limit in how accurately we can measure, past which it's not possible to distinguish two values (e.g. the x coordinates .0123456788 and .0123456789). Or rather, since you've already stated that there's a non-zero probability of hitting somewhere on the target, it's best to say that there is a set of disjoint points with non-zero probabilities that sum to that total probability. If this isn't the case, statisticians would very much like to know, because it would mean their formulas are wrong.
It's not a realistic scenario, but remember the math gets even worse if the bounds on the allowed probability values are not symmetric: clamping between epsilon and 1 - epsilon makes sense, but clamping between epsilon and 1 does not. I got the dart example from https://plato.stanford.edu/entries/rationality-normative-utility/ (3.3 Counterexamples Involving Probability 0 Events) (https://archive.is/XJsM6#selection-12885.197-13029.0, section highlighted) and slightly modified it. When I read it, I think I checked somewhere whether that's a belief a reasonable number of philosophers actually hold, and I think that came up positive, but I can't remember now where I checked; possibly Skyrms or Jeffrey, like they cite. Something similar to your subdivision idea is given in https://plato.stanford.edu/entries/paradox-stpetersburg/ (3. Unrealistic Assumptions?) (https://archive.is/X0Q6Z#selection-2961.1-3315.0, section highlighted), but I think that differs from the example in the normative rationality page and from mine, because it gives a range as the target instead of an infinitely small point (a concept I find useless, but various philosophers apparently disagree).
 
I'm just saying it sounds very objectivist-y. It wouldn't surprise me if Yudkowsky was on the John Galt train.
Eliezer Yudkowsky's not EA (probably? Maybe he claims to be, but I kind of doubt he actually is), but "EA is Objectivism with a different utility function" is not an untrue statement.
 
Ayn Rand?
I don't think so. The article was about deep atheism, so I was trying to make the composition more spiritually void, almost enlightened by his own intelligence, with his H.G. Wells idea of Sciencing problems away mixed in. It's hard to do because Yudd clearly just ran his poem through Google Translate, so it only carries the literal meaning of the words. I didn't even bother enrolling in poetry, but that's not how Latin was used. To give a non-poem translation:
'The saved do not belong to the savior
nor the protectors to their lord
and not mother and not father
nothingness is supreme above all'
Yeah it's not loaded with meaning. So I did my best to tie it to Deep Atheism.
 