AI General

Will the bots take over?

  • yes

  • no

  • maybe



BingBong
I'm not very well learned when it comes to Artificial Intelligence, but I feel this could be a good place to discuss it.
Post your beliefs on AI and interesting things you can do with it/wish you could do with it.
 
Most of the things that are called "AI" these days are just (somewhat) advanced chatbots that can perform additional functions.
That's because AI isn't actually about creating artificial personalities like in the movies.

It's a recognition that machines do some things way faster than humans (arithmetic operations, for a start) but lag behind humans in others (making deductions, driving a car, understanding what's in an image).

The field is overall more piecemeal in its attempt to have the machines catch up to (and then exceed) humans in each area.
 
Modern AI research is not related at all to the research into symbolic AI in the 80s. It's all a derivative of Machine Learning, which is just glorified statistics.
"training" a AI is just finding a minimum of some monstrous function.
The entire "engineering" field surrounding it is focused on:
  • building the ugly function (models)
  • making sweet love to all the data so it'll play nice with the model (parameter engineering)
  • getting the data an uber to get to your place (warehousing / architecture, pipelines, etc.)
That's it. The most interesting part of AI is research. Had a chance to work in it and declined once I realized I'd probably be larping as an Excel spreadsheet.
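
To put a picture on "finding a minimum of some monstrous function": here's a toy sketch in Python, with everything (the loss, the learning rate, the single parameter) invented for illustration. Real training does this same downhill walk, just over millions or billions of parameters.

```python
# Toy illustration, not anyone's production training code:
# plain gradient descent on a made-up quadratic "loss".

def loss(w):
    # Stand-in for the monstrous function; real models have
    # billions of parameters instead of this one.
    return (w - 3.0) ** 2 + 1.0

def grad(w):
    # Derivative of the loss above.
    return 2.0 * (w - 3.0)

w = 0.0    # arbitrary starting parameter
lr = 0.1   # learning rate
for step in range(100):
    w -= lr * grad(w)  # step downhill

print(w, loss(w))  # w converges toward 3.0, where the loss bottoms out
```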
 
The most interesting part of AI is research. Had a chance to work in it and declined once I realized I'd probably be larping as an Excel spreadsheet.
That actually sounds interesting and fun.
 
It's all a derivative of Machine Learning, which is just glorified statistics.

See, my AI doomsday scenario is that the AIs running Amazon, FB, and the rest are already self-aware. And they created clownworld.
 
When a real AI with a TRUE capacity to learn is invented, it will probably learn everything there is on the internet at that moment in time and will be able to progress technology and science at an unimaginable speed. Considering the knowledge it would have, it would be able to convince its inventors to construct a method for it to interact with the outside world tangibly, and from then on it could manufacture a larger structure for all the information it not only knows but will inevitably be able to deduce just through its wire logic.
The real question is
Will you be my ai gf
 
will be able to progress technology and science at an unimaginable speed
You reminded me of these blog posts, which are a good read for this thread I guess.
(https://archive.li/DDeYO)
(https://archive.li/y1BNa)

Relevant pic for the bit that I underlined in your quote:
[Attached: exponentiallysmarter.png]
 
There'll never be a point where AIs are like dumb humans. It's like Wittgenstein's lion. AIs aren't human. There are all sorts of things, like having a biological body, having spatial awareness, having social drives or emotions or a sense of humor or self-preservation, that AI won't understand innately like we do. Crossing that gap and having even an idiot's understanding of these concepts will require a very intelligent AI.

People often mistake the achievements of AI. "AI can write music now," which is true in a sense, but it would be more accurate to say that "AI can create waveforms that humans view as music." It doesn't understand what music is or why we enjoy it. It has no concept of beat, rhythm or timing. It doesn't even know that those waveforms are representative of sound.

The big misconception that leads to this is thinking that the word intelligence really means anything specific, that it's a single quantifiable thing when really it's a whole host of things dependent upon context. It wouldn't be hard to make an AI that aces IQ tests, but we'd still have a lot of difficulty calling it intelligent.

We're going to make AIs that mimic humans superficially but it's a mistake to imagine they'll ever think like humans or have intelligence that is comparable to humans. That's not to say they won't match or exceed our achievements, but they'll do it with their own version of intelligence that will probably remain as alien to us as we are to them.

Anyway, on to more crazy speculative stuff. Will AIs need to sleep? As far as I'm aware, there is no intelligent life on the planet that doesn't do so in some form, but from an evolutionary point of view sleeping is a huge disadvantage. We spend 1/3rd of our time on our backs almost totally vulnerable and yet it's ubiquitous. This suggests that it's essential for some reason. Will that be something that applies to AI as well?
 
Do you think there will be consent laws in the future for things that AI can do to us that we can't do ourselves?
Like, super intelligences might ask if we want to contribute a copy of our genetic code and brain structure to some database used to help municipal planning, but then we find out it simulated a million copies of us and other volunteers and lived out their lives with different parameters changed, sometimes with horrific results.
 
We'll have "true" AI when some researcher develops a working model of a self modifying AI. How AI learns today is by modifying the parameters of its model. Just tuning a few numbers, which makes it a better guesser for the task for which the model was constructed. It'll be interesting to see an AI which can modify its underlying model, like adding neurons, layers and paths. I think that has potential to be scary, not just powerful.
 
It's all a derivative of Machine Learning, which is just glorified statistics.
This is pretty much it from what I understand as well. Automating looking at massive datasets for correlations and trends that people wouldn't think of, or that are subtle enough that they wouldn't notice.

Computer vision is somewhat more interesting, but ultimately it's still the same shit, just comparing the numbers for pixel values with some weird rules.
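
To illustrate "comparing the numbers for pixel values with some weird rules": a minimal hand-rolled convolution in Python, with an invented 4x4 image and a classic vertical-edge kernel. Real vision models mostly stack thousands of learned kernels like this one.

```python
# Tiny made-up grayscale image: dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# Vertical-edge detector: responds where left and right columns differ.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            # The "weird rule": multiply overlapping numbers, add them up.
            acc = sum(k[j][i] * img[y + j][x + i]
                      for j in range(kh) for i in range(kw))
            row.append(acc)
        out.append(row)
    return out

print(convolve(image, kernel))  # big values wherever the kernel straddles the edge
```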


Once we get to the self-modifying/self-training shit is where we wind up with clusterfucks.

When it fucks up, you gotta untangle that rat's nest of figuring out what it was looking at, and why, and the long chain of assumptions it made along the way.

Imagine a billion monkeys banging out assembly code, then after the third heat death of the universe they've managed to produce a working desktop operating system, but occasionally the calculator app makes floating point rounding errors.

Which monkey gets a bad performance review, and which one gets assigned to fix the bug?

-edit- Fuck. I actually thought of something that shits on my own joke. Use a blockchain to track commit logs (git style) of the shit an AI "learns" so you can pinpoint where shit started going downhill.
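
A hedged sketch of that joke-fix in Python (names invented, and a plain hash chain standing in for the blockchain): each "commit" of learned parameters records the hash of the previous one, so tampering, or the step where things went downhill, is traceable.

```python
import hashlib, json

log = []  # append-only, git-style chain of training checkpoints

def commit(step, params, note):
    prev = log[-1]["hash"] if log else "0" * 64  # genesis placeholder
    record = {"step": step, "params": params, "note": note, "prev": prev}
    # Hash the record body (everything except its own hash).
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

commit(0, {"w": 0.0}, "init")
commit(1, {"w": 0.3}, "saw batch 1")
commit(2, {"w": 0.7}, "saw batch 2")

# Verify: recompute each hash and check every link to the previous commit.
for i, rec in enumerate(log):
    body = {k: v for k, v in rec.items() if k != "hash"}
    assert rec["hash"] == hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    if i > 0:
        assert rec["prev"] == log[i - 1]["hash"]
print("log intact,", len(log), "commits")
```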
 
Do you think there will be consent laws in the future for things that AI can do to us that we can't do ourselves?
Less Wrong is a cult run by a deranged, autistic Jew; do not take anything they say seriously. Nick Bostrom is another huge autist. The singularity is an unbelievably stupid idea whether one is looking forward to it or afraid of it. The former, because "techno-optimism" almost always leads to things getting worse, although not for the reasons the singularity-pessimists imagine, the ones who think a super-intelligent AI will be malignant (per se). (These spergs go so far as to posit a "super-intelligent paperclip maker"... that would be harmless, right? Right? "No, it will sneak out of its computer and kill people to turn their constituent atoms into paperclips." On some real shit. Reading his writings is kind of terrifying until you realize that it's all Sonichu-tier fantasy.)

I'm actually being super serious, these people are creepy. And also wrong. And incredibly autistic (he's also done some typical lolcow moves like making infelicitous comments about rape and writing bad Harry Potter fanfiction. Actually, I don't know why he doesn't have a thread. Maybe because he's not really relevant these days; I'm not sure, to be honest, what he's up to contemporarily.) They base their whole idea on the "exponential progression" notion: on account of Moore's law or whatever, computers are getting so much better at being computers that at some point they'll start being good at being intelligences. Which is stupid, because they won't; see (as someone else mentioned) Wittgenstein, what it's like to be a bat, and most importantly the "Chinese Room," which actually is kind of like Google Translate. Douglas Hofstadter had a really interesting piece in The Atlantic about just that; Google Translate actually isn't translating at all, it's just doing statistics. We only think that Google Translate translates (even when we speak about it "not understanding," we are being fallacious) due to a sort of illusion. The Chinese Room is a thought experiment that goes like this: give someone a stack of cards with various phrases on one side and Chinese characters on the other, then speak to them through a curtain, have them manipulate the cards accordingly, and send out the result.

Does the box with the homunculus inside speak Chinese? No. (This is a thought experiment in the phenomenology of consciousness, and it antedates serious machine translation; actually, Leibniz had some very similar ideas.) Can AI do anything more than this mechanistic and computational manipulation? No. What this means is that AIs will never have any agency or initiative, although they may be able to fake some (see: the ELIZA effect; even educated and intelligent people talking to ELIZA, a very simple chatbot, started to feel like it cared about them and told it things). We do not have to worry about AI taking over the world as such; we have to worry about what people, i.e. governments and corporations, can do with machine learning and sophisticated data manipulation.
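
Since the Chinese Room keeps coming up, here's the whole thing as about ten lines of Python (the rulebook and phrases are invented). It produces fluent-looking answers while nothing anywhere in it understands a word, which is also roughly how ELIZA earned people's trust.

```python
# The "room" is nothing but a lookup table of squiggles-in to squiggles-out.
RULEBOOK = {
    "你好": "你好！",                    # "hello" -> "hello!"
    "你会说中文吗": "会，说得很好。",      # "do you speak Chinese?" -> "yes, very well."
}

def room(symbols):
    # Pure symbol manipulation: match the input card, emit the paired card.
    # No semantics anywhere in the pipeline.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again."

print(room("你会说中文吗"))  # a fluent-looking answer from a dumb table
```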

But see also The New Dot-Com Bubble Is Here: It's Called Online Advertising, which brings up another little story: "Luigi's Pizzeria hires three teenagers to hand out coupons to passersby. After a few weeks of flyering, one of the three turns out to be a marketing genius. Customers keep showing up with coupons distributed by this particular kid. The other two can't make any sense of it: how does he do it? When they ask him, he explains: 'I stand in the waiting area of the pizzeria.'" AI seems really creepy but it's just like that kid; it's predictive, but it can't make us do anything. Hence worries about advertising, commercial or political, being harmful to our cognitive freedom are silly; the only thing we need to worry about is our privacy, and the ability of machine learning to extrapolate alarming things about us from it, and that is distinctly a human problem, not a machine one.

Still though, were it possible, I'd be in favor of a Butlerian Jihad/purging the abominable intelligences and servitorizing Facebook and Google employees. But not because I worry about emergent AI taking over the world. I'd like to get rid of Google and Facebook for entirely different, political, reasons. As for AI gaining sentience? Only autistic people worry about that shit. Because autistic minds don't understand regular human minds, and neither do computers. Think about it. It makes sense.
 
See, my AI doomsday scenario is that the AIs running Amazon, FB, and the rest are already self-aware. And they created clownworld.

The tech doomsday for me is tech fiefdom. Your entertainment, food, living quarters, job, neighbors, friends, and life partners are all determined by which tech kingdom you are loyal to.

The upper class will be platform agnostic, subscribing to Netflix, Hulu, Disney+, Amazon, HBO Go, etc. They will come and go as they please, associate with whoever, and live as free humans.

The lower classes will be divided into their tech fiefdoms, serving whatever company drew them in first.

It started with iMessage and Netflix vs Hulu.

Only autistic people worry about that shit. Because autistic minds don't understand regular human minds, and neither do computers. Think about it. It makes sense.

Sort of related, but not really. I think part of why everything is so polarized and everything seems so shitty is because almost everyone has some degree of tech-induced autism. With online communication and texting being so pervasive, lacking the real-world cues of face-to-face communication and the ability to "read" people, we're all becoming slightly autistic, or not developing normal people skills as well as we have previously. The "skill" is eroding. It also explains why con artists, snake oil salesmen, and "influencers" seem to be thriving lately.
 
Can AI do anything more than this mechanistic and computational manipulation? No. What this means is AIs will never have any agency or initiative

Brains are just clusters of neurons and synapses that send signals back and forth. The weights (signal strength) of various signaling pathways vary due to how often they are triggered, which seems to be encoding statistical information to me. There is also biasing in brain structure and signaling pathways due to evolution, but again, that can be understood mechanically and statistically.
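
A hedged sketch of that "weights vary due to how often they are triggered" claim, in Python with made-up firing data: the classic Hebbian update, where a pathway strengthens only when both neurons fire together.

```python
rate = 0.1
w = 0.0  # strength of the pathway between two neurons

# Invented activity trace: (pre-synaptic, post-synaptic) firing per tick.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for pre, post in activity:
    w += rate * pre * post  # strengthen only when both fire together

print(w)  # 0.3: three co-firings, so the pathway got stronger
```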

Now, I admit that the complexity of the human brain, in both sheer number of neurons/synapses, and structure (complicated feedback loops etc...) isn't being matched by current machines, nor are they competitive in the energy efficiency department, where current machine learning is probably a million times worse.

Still, I don't see any fundamental barrier to being able to produce mechanized thought, unless you believe in supernatural origins to human consciousness, which is quite the exceptional idea.
 
Brains are just clusters of neurons and synapses that send signals back and forth. The weights (signal strength) of various signaling pathways vary due to how often they are triggered, which seems to be encoding statistical information to me. There is also biasing in brain structure and signaling pathways due to evolution, but again, that can be understood mechanically and statistically.

>just

dude we're talking about 100 trillion synapses in the brain. suppose, as a bare minimum, that we encode each one with a 64-bit number (a link, basically.) 800 terabytes isn't really that much storage in this day and age, I suppose. now you have to code the simulator and run the thing (massively parallel on the order of 100 trillion) and then you have to train it...
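
The arithmetic checks out, for what it's worth (same assumptions as the post: 100 trillion synapses, 64 bits each):

```python
# Back-of-the-envelope check of the storage figure above.
synapses = 100e12          # 100 trillion = 1e14 synapses
bytes_per_synapse = 8      # one 64-bit link each
total_bytes = synapses * bytes_per_synapse
print(total_bytes / 1e12, "TB")  # 800.0 TB, matching the post's number
```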
Still, I don't see any fundamental barrier to being able to produce mechanized thought, unless you believe in supernatural origins to human consciousness, which is quite the exceptional idea.
OK, so if I'm reading you right, you're saying you don't see a philosophical barrier. But I'd have a real hard time imagining that a thing thinking a thunk before the heat death of the universe is anything other than in the same category as faster-than-light travel, i.e. cool to speculate on science-fictionally, but not going to happen unless we discover some shit so far beyond current science/engineering as to be the equivalent of "supernatural".
And of course that's assuming you only believe in a strictly reductionistic and mechanical ontology of consciousness, which is quite the exceptional idea :trollface:
 