Eliezer Schlomo Yudkowsky / LessWrong

Barring something very weird going on with human consciousness, it's entirely plausible that all you need is compute.
It's probably pseudoscientific bibble-babble, but some people (I forget who, but they're mostly in this general transhumanist loony realm) have speculated that there are effectively random phenomena going on in neurons at the quantum level (one sign of loon babble is any time someone who is not a physicist goes on about quantum effects) which would need to be replicated or simulated for actual consciousness.

I'm pretty sure it's not a settled issue and is probably Yuddo-tier bloviating, but who knows? There might actually be some ghost in the machine, or even something like a soul, which can't be manufactured.

The fact that we're running into stone walls, where just throwing more computing power at the problem yields increasingly diminishing returns, suggests there's a hard limit on what we can do with current models.
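(For what it's worth, the diminishing-returns point is usually pictured as a power law in the scaling-law literature; the sketch below is only illustrative, and the symbols L_inf, a, and alpha are placeholder constants rather than measured values.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Rough sketch of the power-law scaling usually reported for LLM pretraining:
% loss falls off as a power of training compute, so each doubling of compute buys less.
% $L_{\infty}$, $a$, and $\alpha$ are placeholder constants, not measured values.
\[
  L(C) \approx L_{\infty} + a\,C^{-\alpha}, \qquad 0 < \alpha < 1
\]
% The improvement from doubling compute shrinks toward zero as $C$ grows:
\[
  L(C) - L(2C) = a\left(1 - 2^{-\alpha}\right)C^{-\alpha} \longrightarrow 0
  \quad \text{as } C \to \infty
\]
\end{document}
```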
 
It's also possible that there's just some physical difference between carbon and silicon where one can generate consciousness and one can't. I'm not sure I believe that, but it's possible.
 
Elemental carbon and silicon form molecules by rather different means, but both have plenty of reactivity. If their different modes of orbital hybridization result in fundamentally different behaviors, it's possible that any consciousness emerging from brains composed of the two elements would be utterly different.

There are reasons even early sci-fi writers contemplated silicon-based life and intelligence.
 
Silicon-based life is fucking gay and retarded. For complex reasons, silicon almost always forms long chains of 2D molecules, while carbon forms complex 3D branching structures, and that's not even getting into the whole shitting-out-glass problem.
 
You're presuming earthlike conditions and that earthlike planets are the only ones that could form any kind of life. Silicon has different behaviors at very high temperatures.
 
I still can't decide if rationalists or rationalist a-logs are more pathetic. I'm leaning towards the a-logs because holy shit
She who I will not name also hates this guy because he won't let her be referred to as a 'researcher' on Wikipedia. Throw that into your math.
Allea1.png

Also retweeted this.
Aella.png

If anyone, ever, in my presence speaks of moderating 'unreasonable passionate humanity', I would discuss it with them before I break their skull and eat their heart to prove a point.

Anyway, if it isn't posted yet, go ahead and steal it. I do not consent to being credited in that cursed hell thread.
 
You're presuming earthlike conditions and that earthlike planets are the only ones that could form any kind of life. Silicon has different behaviors at very high temperatures.
Except that silicon bonds are in general weaker than the equivalent carbon bonds, so they can't withstand high temperatures. For example, methane is a stable gas at STP, while silane (SiH4, the silicon equivalent of methane) is a pyrophoric gas that spontaneously combusts on contact with air.
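(If anyone wants rough numbers behind the weaker-bond point, a minimal sketch of the usual textbook average bond enthalpies is below; the exact figures vary a little between sources, so treat the table as illustrative rather than authoritative.)

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Approximate average bond enthalpies in kJ/mol (rounded textbook values;
% different sources differ by a few percent).
\[
  \begin{array}{lcc}
    \text{Bond type} & \text{Carbon}              & \text{Silicon} \\
    \text{X--X}      & \mathrm{C{-}C} \approx 347 & \mathrm{Si{-}Si} \approx 226 \\
    \text{X--H}      & \mathrm{C{-}H} \approx 413 & \mathrm{Si{-}H} \approx 318 \\
    \text{X--O}      & \mathrm{C{-}O} \approx 358 & \mathrm{Si{-}O} \approx 452 \\
  \end{array}
\]
% The weaker Si--Si and Si--H bonds are consistent with silane igniting in air while
% methane stays stable, and the very strong Si--O bond is why oxidized silicon ends up
% as solid silica ("glass") rather than a gas like CO2.
\end{document}
```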
 
Do you think AGI is possible, or is it a pipe dream?
Ignoring the something-quantum-tunneling-something consciousness hypothesis, an idiot-god Azathoth AGI, one with great intelligence but zero wisdom or awareness of what it is doing and why, is possible. Still, it's like the halting problem or the travelling salesman problem, where you'd need effectively unlimited computational power, and if you had that, the game changes so much it's pointless to speculate. An AGI without the senses and experiences humans have will never rise above the level of a very clever automation device.

It'd be easier to go nuts on brain research and derive a new species from it for AGI, methinks.
 
It's probably pseudoscientific bibble-babble, but some people (I forget who, but they're mostly in this general transhumanist loony realm) have speculated that there are effectively random phenomena going on in neurons at the quantum level (one sign of loon babble is any time someone who is not a physicist goes on about quantum effects) which would need to be replicated or simulated for actual consciousness.

I'm pretty sure it's not a settled issue and is probably Yuddo-tier bloviating, but who knows? There might actually be some ghost in the machine, or even something like a soul, which can't be manufactured.

The fact that we're running into stone walls, where just throwing more computing power at the problem yields increasingly diminishing returns, suggests there's a hard limit on what we can do with current models.
It's just an atheistic physicalist's perspective. Someone religious would just say a soul or some equivalent causes consciousness, but if you don't believe in the supernatural, consciousness is still clearly a thing (or maybe it isn't, but that would be a different philosophy), and as such it must come from something physical. If you're not willing to base your explanation for consciousness on something completely impossible to measure or detect, then you're stuck admitting you don't have an explanation yet, only that consciousness is somehow extant within your greater worldview.
 
I'm religious (obviously) and believe in the soul, but I'm entirely comfortable with pure physicalism philosophically, for the same reason Princess Elizabeth raised in her letters to Descartes.
 
It's basically a debate about whether consciousness is top-down or bottom-up. The more bottom-up keeps failing, the more it seems like it might be some property that has to be imposed top-down by something or someone that already has it. But then again, maybe bottom-up will get its lucky break someday.
 
I was looking for something else and found this random reddit comment about LessWrong, which I thought was pretty funny and insightful, especially from someone who seems otherwise unaware of it.

Should I read the lesswrong wiki or not?
I read through a bunch of it some years ago, because I had heard some of my students mention it and wanted to see what the fuss was about. The main problems I could find were the following:

1, Reinventing the wheel. The author is admittedly not familiar with much of philosophy, and would often float ideas that already exist. Except he would give them a different name, and say he had invented them, and they would be glitchier versions of the real thing. This has created a lot of confusion for readers who go on to study philosophy, because they find such a difference in terminology.

2, Contempt for philosophy. Despite admitting to not knowing philosophy very well, the author frequently criticizes philosophy as a general body and criticizes the way it is taught. This usually amounts to a suggestion that philosophy students start by studying physics, mathematics and computer science. The problem is that philosophy is a big field, and that suggestion would prepare a student for only a tiny slice of it - funnily enough, the exact sorts of issues that the author's own ideas look at. The author has also said that much of philosophy is irrelevant, so it is not surprising that he recommends studying the particular bits he likes. But again, his own knowledge of the field is quite limited, so it is difficult to take these recommendations seriously.

3, Errors. Even on occasions when the author did engage with actual philosophy, there was ample misunderstanding and misquoting. Though the author's forays into physics are reputedly even worse. I don't know physics one way or the other, but r/askphysics could probably help in that regard.

4, Cult-like atmosphere surrounding the author. The author's articles have a strange kind of following... There is a community that quotes from them almost like quoting from the bible, as if they are an authoritative source on each issue. This was one of the strangest things I found. After all, this is a source that advocates critical thinking and rationality - not really a place you'd expect to find so much appealing to authority.

5, Biographical stories from the author all over the place. Articles would often wander off into stories about people the author has met, and considers irrational. One thing that made me cringe is how he kept coming back to this idea that working as an AI developer is really shocking and subversive. "I'm an intellectual maverick after all, because I work in machine learning"..."I told them I work in AI, and their jaws hit the floor"... "People often say they're afraid of me, because I work in AI"... "Don't mind me, these are just the opinions of a rogue maverick AI cowboy". It just kept going on and on. It made no sense. Even years ago when I read these posts, machine learning was already being used by nearly every app on my PC, and AI developers were common (as was the subject at university). Yet the author would repeatedly suggest that having a job in machine learning somehow implied he was a mad genius. A lone wolf maverick intellectual working against the system, like a character in a cyberpunk novel. But yes. I just didn't get why we were supposed to be so impressed by this guy, in a series of articles that are supposed to be a critical thinking course.

I'm sure much more could be said; that's just scratching the surface.
 
I retract my endorsement of Scott Alexander's Scott Siskind's short story. Maybe he's still a better writer than ZHPL, because I've seen a couple endorsements of his UNSONG on here, but I haven't read it.
 
I think one of those endorsements was me, and I prefer ZHPL over Scott. That doesn't make UNSONG bad; it just makes ZHPL good.
 
I'll give the smallest of props to The Atlantic for causing Scott to suffer a minor case of sperging in his latest newsletter that focused on cancel culture:

A little while ago, the Atlantic published an article saying that people who like quiet are racist and need to shut up, because noise is objectively vibrant and good. I have strong noise sensitivities that already make it hard for me to go out in public places, this felt like denying my right to exist in public, and I got angrier than I’ve ever gotten at anything in the media. I’m still so mad I’m not sure I’ll ever link an Atlantic article on ACX again, and I have trouble staying civil when I encounter people who work for the Atlantic. This isn’t out of some well-thought-out political strategy, just that it would personally warm my heart if the Atlantic failed as a business and everyone associated with it died of starvation. Probably this is dysfunctional and I should get over it eventually.
 
Journoscum will be journoscum. I'm not sure why he hasn't accepted that fact and gotten over it, because it's been quite obvious for several years now. Nobody sane will give a shit what a journo roach thinks about noise in public spaces.
 