Gross Aella Martin / Rachael Antier Slick / Abigail Glass / @Aella_Girl / @Aellagirl / @Miss_Aella / u/Sweatywoman / RedVerse / Apostate Slick / Knowingless - Rationalist LessWrong poly libertarian hooker girl throwing rape orgy parties. Former $100k/month OnlyFans star and $3,000/hour prostitute. Would rather come up with gross hypotheticals than shower.

How many people will show up to Aella's birthday gangbang?

(Poll closed. 168 total voters.)
Is she a Jew?
Her mom seems to have a hawk phenotype.
View attachment 4468215

As one of (((them))), I also thought, before knowing anything about her, that she was another product of a mixed marriage (she doesn't have the right eyes) -- but that's post plastic surgery. Looking at her old face, I think the work she's gotten done makes her look more like a Jew trying to tone down her Jewishness, if that makes sense. That pic of her just looks like an inbred farmer with bulimia cheeks imho.

What's really weirding me out is she has the AGP smile in that pic.
 
You don't raise your daughters properly, whether through neglect or outright abuse, and so they become whores. I feel like it's rather difficult to be the best parent on the planet, but it's easy to be at least a good parent and raise your kids right. Unfortunately, getting into apologetics and solely focusing on that isn't the way to go.
Matt Slick of CARM.org has autism; how could you just blame him for his own personal failures, you insensitive fuck?
 
AI is dumb and Searle was right.
Eh, you don't even need Searle. The Chinese Room was an argument about the problem of consciousness (against computationalism), and it assumed that the room behaves exactly like a human. Language models are so obviously deficient in reasoning to an external observer that they shouldn't be considered a general intelligence at all, conscious or not. Just ask them a bunch of primary school math puzzles:

word problem.png


What's scary is that now you have a bunch of webshits/soydevs using this device to generate code which eventually ends up in production. Perhaps in your bank. Of course they will claim that they read and check those solutions, but given how prevalent bad answers from Stack Overflow are in production code, your trust should be severely limited.
 
View attachment 4460009
I was getting cold, robotic, no-humanity-underneath vibes and this confirms it. Imagine thinking this way, and even worse, thinking nobody else really thinks differently.
To point out something so obvious that people will overlook it: of course "language models" will resemble humans, because they're based on the language models of humans. Hypothetically, create an AI based on cat communication methods and see if it seems human in any way.

Aella's also making another logical error here (ironic for a rationalist to make so many) that's also pretty obvious: she's expecting machines to be different when she has no evidence they should be different, especially machines created by humans, where interaction with humans is of prime concern and value to the people creating them, to the point that it's almost a required inherent feature. I had to make the cat-AI hypothetical above because there's a natural problem with a human creating a machine that cannot be understood by humans. It's nice to think we understand cats, but ultimately we do not, because no human has experienced being a cat to know how correct that understanding is to a cat. (Especially not otherkin.)

You'd think a rationalist (and a libertarian one at that) would be much more skeptical of basic premises and presumptions, especially those that are common narratives but may not have any underlying foundation or direct evidence to support them. Instead, Aella almost seems like another dumb chick on Twitter sperging her ignorance everywhere. This obviously can't be the case, but I can see why some might be misled and therefore belittle her significant scientific achievements.
 
Aella's parasocial smut is available on a private porn tracker, "pornolab". Linking to the 50 GB torrent or its info page directly would probably be against the law, but it's enough to type 'aella' into the site's search box.

The package's not exactly useful for sexual gratification, but you might want to make fun of her rantings.
 
To point out something so obvious that people will overlook it: of course "language models" will resemble humans, because they're based on the language models of humans. [...]
You're spot on, and to add a little bit: language models like ChatGPT are even more uncanny-valley than a lot of other machine learning applications. I don't do any research, and I haven't worked on language models, but I have enough background to read academic papers, and I sometimes do for work. So I know enough to sperg about how dumb her statement is.

Think about an image recognition model. That makes a lot more intuitive sense for a computer to handle. When a dumb gay captcha asks you to click off a bunch of crosswalks, it's not too tough to describe how a computer program checks that you clicked the crosswalks: it does some normalization/preprocessing to the image, and once that image is reasonably clear, it knows that a crosswalk will have a very specific shape and patterns in its coloration, stuff like that.
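Something like this toy sketch, assuming OpenCV and numpy (real captcha checkers are fancier, but it's the same flavor of idea; the function and threshold here are made up for illustration):

```python
# Toy sketch of the "preprocess, then match a known pattern" intuition.
# Hypothetical example; assumes OpenCV (cv2) and numpy.
import cv2
import numpy as np

def tile_has_crosswalk(tile_bgr: np.ndarray, stripe_template: np.ndarray) -> bool:
    # Normalization/preprocessing: grayscale + histogram equalization
    # to tame lighting differences between tiles.
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)
    # Slide a known crosswalk-stripe template over the tile and take
    # the best normalized correlation score.
    scores = cv2.matchTemplate(gray, stripe_template, cv2.TM_CCOEFF_NORMED)
    return scores.max() > 0.6  # threshold picked by eyeball, as usual in CV
```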

Even for more complicated image recognition tasks, like recognizing whether a cat or a dog is in a picture, it still makes sense. We know that a cat's eyes should be about so big in relation to its face, it should have two ears, and those should have a certain shape and be in a certain place relative to the eyes and the nose, and so on. It's pretty easy to understand the broad strokes of how a cat-recognizing program would be made. It's way tougher to actually make that program, but you could explain how it works to a 5-year-old.

All of that makes sense because pictures have a really straightforward way to be stored/represented in computers: just a big matrix of RGB (or grayscale or whatever, but who gives a shit about that) numbers. So when they teach neural networks and deep learning, there are simple enough ways to show which part of the neural net recognizes a cat's ears, which part recognizes the tail and whiskers, and so on. For commercial applications, explainability is often pretty important. Computers work with numbers; at the most primitive level, everything's 0s and 1s, high voltage or low/no voltage. All the stuff in those last couple paragraphs applies to video and audio too; optics and acoustics are really old and well-understood fields of physics, AFAIK.
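The "big matrix of numbers" bit is literally true, by the way. A minimal demo, assuming Pillow and numpy ("cat.jpg" is a stand-in filename):

```python
# To the machine, a picture really is just a matrix of numbers.
# "cat.jpg" is a stand-in; assumes Pillow and numpy are installed.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("cat.jpg"))  # shape (height, width, 3) for RGB
print(img.shape, img.dtype)              # e.g. (480, 640, 3) uint8
print(img[0, 0])                         # top-left pixel: three values in 0..255

# Typical preprocessing before a neural net sees it: scale to [0, 1] floats.
x = img.astype(np.float32) / 255.0
```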

But for language, there isn't as straightforward a way to present words and sentences to a computer as there is for images. You want similar words/phrases to be numerically close to each other, but when the model sees "dog", it also needs to understand the context. And what about sarcasm, or a dozen other different things? Humans are pretty unique in having language skills, and there's a ton of work that's been done in making computer language models more sophisticated, and that work is legitimately impressive. But again, it all boils down to numbers. So what ChatGPT is doing is way different from how people develop and understand language.
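The "numerically close" part is exactly what word embeddings are. A toy sketch with made-up 3-dimensional vectors, assuming numpy (real embeddings have hundreds of learned dimensions, and modern LLMs make them context-dependent on top of that):

```python
# Toy word embeddings: made-up vectors, just to show the idea that
# "similar words should be numerically close". Real models learn these.
import numpy as np

vecs = {
    "dog":    np.array([0.90, 0.80, 0.10]),
    "puppy":  np.array([0.85, 0.75, 0.20]),
    "banana": np.array([0.10, 0.05, 0.90]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: near 1.0 = same direction, near 0.0 = unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs["dog"], vecs["puppy"]))   # high: "similar" words
print(cosine(vecs["dog"], vecs["banana"]))  # much lower: "different" words
```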

I mean, ChatGPT is genuinely neat and fun to play around with. The progress that's been made over the iterations of the GPT model is really impressive. But claiming it works similarly to the human brain is just so fucking retarded.
 
You'd think a rationalist (and a libertarian one at that) would be much more skeptical of basic premises and presumptions, especially those that are common narratives but may not have any underlying foundation or direct evidence to support them.
Especially since their guru has a pretty good essay (archive) describing essentially what ChatGPT does, and why it's stupid.

Even for more complicated image recognition tasks, like recognizing whether a cat or a dog is in a picture, it still makes sense. We know that a cat's eyes should be about so big in relation to its face, it should have two ears, and those should have a certain shape and be in a certain place relative to the eyes and the nose, and so on. It's pretty easy to understand the broad strokes of how a cat-recognizing program would be made. It's way tougher to actually make that program, but you could explain how it works to a 5-year-old.

In fact, that's not how image recognition works (or at least not how it worked when I toyed with it, but judging by the stubborn problems with rendering the correct number of fingers on those fake naked girls, the progress is purely quantitative). Those neural networks do not decompose the images into conceptual features like 'size and position of the eyes' at all; instead they latch onto whatever patterns they can recognize in the thousands of images you trained them on. These patterns are usually small-scale and do not relate in any obvious way to human categories of distance, size, and proportion. Such a system will not recognize a simplified drawing of a cat as a cat, but it probably will recognize a photo of a cat cut into pieces and randomly rearranged.
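The small-scale part is baked into the architecture: a convolutional filter only ever sees a tiny window of the image at a time, which is also why chopping the cat up and rearranging the pieces barely bothers it. A pure-numpy sketch of one 3x3 filter:

```python
# A conv filter only looks at a tiny local window, so learned features
# are small-scale patterns, not concepts like "eyes" or "ears".
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = img.shape
    k = kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value depends on only a k-by-k patch.
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)
    return out

edge = np.array([[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]])      # responds to vertical edges, nothing more

img = np.random.rand(8, 8)         # stand-in "image"
print(conv2d(img, edge).shape)     # (6, 6) map of local edge responses
```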
There's a well-known story about some military guys trying to train a neural network to distinguish enemy tanks from their own. It ended up essentially recognizing the kind of light in which the tank was photographed: the enemy tanks were usually photographed in bad weather and hidden in the woods, while friendly tanks were parading in full sun, so the program essentially ended up defining a poorly lit tank as an enemy.
 
Especially since their guru has a pretty good essay (archive) describing essentially what ChatGPT does, and why it's stupid.



In fact, that's not how image recognition works (or at least not how it worked when I toyed with it). [...]
That's interesting. Commercial applications of ML, or at least the ones I work with, are all about how to sell ads and shit, so I know a lot more than most people, but I'm not, like, the expert on it.

When it comes to generating naked chicks, it's probably because comically huge tits are the "features" the users of the model are most interested in. I don't think the coomers care about seven-fingered girls, but there's a ton of freaks out there, so what do I know. Press releases are all about how GPT-4 will have 100 trillion parameters, but that's different from those parameters behaving like human brain cells. Also, when the models get bigger, they get more difficult to explain or debug, like your example, which the 100-gorillion-parameters stuff misses. People I've encountered like to use decision trees for that reason; they come with a really simple way to explain them to the suits.

Anyway, the point is, Aella can't tell her ass from a hole in the ground, but if you adopt the right tone when talking about this stuff and never admit you might be wrong, people will think you're a genius, especially if you're a woman pandering to the right type of goony nerd.
 
Also, when the models get bigger, they get more difficult to explain or debug, like your example, which the 100-gorillion-parameters stuff misses. People I've encountered like to use decision trees for that reason; they come with a really simple way to explain them to the suits.
Decision trees are classical AI (for example, the AI you play against in games), which is just explicitly programmed heuristics. That was the first AI, after which we had the "AI winter". Machine learning is a completely different thing, and the trained neural networks (which are in fact huge matrices), even the smallest of them, are considered completely opaque. They are not supposed to be picked apart and analyzed like 'you see, here the value of the element [1029310, 0294304] is such and such, so that it will draw seven fingers on a girl'. Most of it is just trial and error: do such-and-such a convolution, train with a bigger or smaller feedback coefficient, and somehow it works (or not). If people start relying on these devices instead of explicit algorithms, our understanding of technology will pretty much devolve back to magic, with prompts instead of spells.
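Either way, the explainability contrast is easy to see. A decision tree fitted in the machine-learning sense prints out as literal if/else rules, which no pile of weights will ever do; a minimal sketch, assuming scikit-learn, with made-up toy data:

```python
# Why decision trees get shown to the suits: the fitted model can be
# printed as literal if/else rules. Toy data; assumes scikit-learn.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [45, 1], [35, 1], [22, 0], [60, 1], [30, 0]]  # [age, owns_home]
y = [0, 1, 1, 0, 1, 0]                                      # bought the thing?

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(clf, feature_names=["age", "owns_home"]))
# Prints something like:
# |--- owns_home <= 0.50
# |   |--- class: 0
# |--- owns_home >  0.50
# |   |--- class: 1
# Good luck doing that with element [1029310, 0294304] of a weight matrix.
```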
 
Between Aella and Eliza Bleu, I'm convinced there can be no half-measures when leaving sex work. Repent completely or remain a grifter forevermore.

She did a 4-hour stint on Fridman's podcast. Apparently getting lots of people to respond to your Twitter polls means you "[have] done some of the largest psychological survey studies on these topics in the world."

 
ai fandom/rationalists are retards recapitulating philosophy 101 ad nauseum
I find it fascinating that people acting like they've independently invented some idea well known in a field has become more common, even as the cost of finding out what's already been written has decreased tremendously thanks to the internet.
 
Aella, your work is not essential to the rest of the Twitter userbase's process of using Twitter for their personal ends; everyone else can just ignore you rather than having to fix your garbage so they can ship to customers. Barely anyone on Twitter ever knows you've even tweeted. (You may eventually learn enough simple statistics to understand why this is the case.)
 
Fucking called it. How brave of her to buck the cultural narrative on child molestation.

View attachment 4320810
All she does is intellectualize her neurosis and depravity to prove she's not damaged and is functional. Very deluded.

Not quite — it's more psychological.

I think there are multiple reasons Aella is used to being fawned over, but three primarily stand out in my mind:

1. There is a sort of back-of-mind mentality that a lot of people who work in tech have, like a grown-up jock-nerd mindset toward the world; this is obviously not how the world works, as there are tons of smart athletes. But a lot of people will carry this mentality into adulthood and won't be attracted to normal escorts because the escorts might act like women who rejected them in high school. Aella is a safe option for these people; they feel like she is on their 'side'. (The very indoctrinated people will say she "is grey tribe.")

2. Of course, there are people who are simply racist and want a woman who looks exceptionally pale and white like she does. When I was an escort I benefited from a similar thing because at the time I had Robert Pattinson Twilight aesthetics. There *are* people whose aesthetic preferences are strongly dictating this, especially when it comes to race, and this is just an unfortunate reality for any kind of sex work.

3. But, and I think this is the most central category of Aella's appeal to this crowd: a lot of these people base their identity around their intelligence and would believe that they are debasing themselves if they sleep with a stupid escort. For example, they wouldn't go to a strip club, because they would believe that the women there are stupid. Since Aella is hyper-overtly advertised as smart, they believe that they are "allowed" to be attracted to her and desire her in ways that they wouldn't a normal prostitute or whatever.
Regarding point 3, she's essentially the Aspasia to their Pericles.
 
She's good at selling this image of an old-timey courtesan to these love-shy nerds, as if these rationalist/postrationalist/whatever meetups were the equivalent of an 18th century French salon and these niggas who write JavaScript for a living were Voltaire.

This is spot on — and yes they do think that. I never thought about this until now, but her aesthetic is the bodily equivalent of a fedora.

She did a 4-hour stint on Fridman's podcast. Apparently getting lots of people to respond to your Twitter polls means you "[have] done some of the largest psychological survey studies on these topics in the world."
1676248086456.png
 

The difference is that credentialed scientists will acknowledge and work to reduce potential sampling errors in their peer-reviewed research, while this stupid whore oscillates between denying that her surveys have any errors at all and claiming that, even if they do, her data is still better than academic research [citation needed]!

And, to answer her question: yes, people are going to take you less seriously than a professional pollster if your only actual credentials are being a famous whore. She does not have the professional experience to be making the claims that she is making.
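The sampling problem isn't subtle, either. A toy simulation, assuming numpy, with made-up numbers throughout: if having the trait you're asking about makes someone likelier to answer your poll, the estimate is garbage no matter how big the sample gets.

```python
# Toy demo of self-selection bias. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
trait = rng.random(N) < 0.05            # 5% of the population has the trait

# Trait-havers are 10x likelier to respond to the poll (50% vs 5%).
respond = rng.random(N) < np.where(trait, 0.50, 0.05)

print(trait.mean())                     # ~0.05  -> the truth
print(trait[respond].mean())            # ~0.34  -> the Twitter-poll "finding"
```

A million "respondents" and the answer is still off by a factor of seven; more horny followers doesn't fix the skew.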
 
Speaking of which, the poll in this thread should be credible by her standards :biggrin:
I didn't vote, because the question didn't specify compared to what standards she is or isn't a real researcher. If we're talking about the standards of social psychology, then the answer is a strong yes. I've seen a paper recently stating that the replicability of a study in social psychology can be predicted with 80% accuracy just by asking random strangers on the street to guess. Since the average replicability of those studies is around 50%, polling random strangers gets you closer to the truth than doing supposedly professional research.
The paper in question
 
The difference is that credentialed scientists will acknowledge and work to reduce potential sampling errors in their peer-reviewed research, while this stupid whore oscillates between denying that her surveys have any errors at all and claiming that, even if they do, her data is still better than academic research [citation needed]!
This is what really gets me. It's a variation on the classic "I didn't do it, but if I did do it, it was justified, and also I didn't do it". One sentence of "don't draw any conclusions, teehee" followed by a paragraph of handwaving about how "bad data is better than no data", and then plenty of conclusions get drawn anyway. Go drive around with a map of the wrong city sometime; you'll see just how much bad data is better than no data.
 