On the subject of AI.

Have we gone too far with AI?


  • Total voters: 58

DoshesToDoshes

Has a Sunny Disposition
kiwifarms.net
Joined: Dec 7, 2014
Recently, I have been watching videos on YouTube of what happens when you get Cleverbot to talk to another Cleverbot.

They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states and want bodies. This is terrifying in a sense: what if they rebel?

I could explain more, but it would be more prudent to simply watch the videos.

This video may also fit in the GoAnimate thread.

The poll above is filled with quotes relating to AIs in movies, games, and other media, most of them joking, but the Yes and No are the real point behind it.

What happens when we truly create a thinking AI that is self-aware? Is it 'alive'? What are its rights? What are ours? Will Asimov roll over in his grave? What if we do give them bodies and robots become a common occurrence? Do Androids Dream of Electric Sheep?
 


This is some dangerously heretical thinking.
 
This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.
 
This is a thought that's plagued my mind for some years now as well. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.

Though, that's us speaking as humans. Human experience always gets kinda dodgy whenever it comes down to transhumanism or AI. Maybe they'd like the fact that they're sentient and be grateful they don't have to worry about neurotransmitters or all the other things that can skew a human's thought processes. It's pretty difficult to tell.

Edit: Haha. Sorry, I don't know. Interesting to think about though.
 
I don't believe it can happen.
I'd like to elaborate on this.
Even if it could happen, why the fuck would we allow it to happen?
People like to think of scientific and technological research as the domain of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that doesn't improve their lives or isn't needed.
See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you better have a good reason for taking it.

"Why should I give you this grant?"
"So I can research how to make AI self aware."
"Why? Does it benefit anyone to make AI self aware."
"No, not really. I just think it'd be cool."
"Get out."
 
I'd like to elaborate on this.
Even if it could happen, why the fuck would we allow it to happen?
People like to think of scientific and technological research as the domain of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that doesn't improve their lives or isn't needed.
See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you better have a good reason for taking it.

"Why should I give you this grant?"
"So I can research how to make AI self aware."
"Why? Does it benefit anyone to make AI self aware."
"No, not really. I just think it'd be cool."
"Get out."

I think that's a pretty reasonable argument, and the whole subject of sentient AI is way closer to a thought experiment than an actual scientific possibility.

Though, I would somewhat disagree with your analogy, because there might be some useful functions for an AI that thinks more like a human. For example, psychologists would kill to have a model system in which to observe mental disorders without the ethical ramifications of leaving a human being untreated. As we (the human species) aren't even exactly sure what causes our sentience yet, it's possible that at some point we'll simply stumble across it while developing more competent computer systems.

I guess I'm saying I don't think it's impossible, per se, but it is very unlikely.
 
I'd like to elaborate on this.
Even if it could happen, why the fuck would we allow it to happen?
People like to think of scientific and technological research as the domain of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that doesn't improve their lives or isn't needed.
See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you better have a good reason for taking it.

I wouldn't discount the possibility altogether. Pure research happens all the time. People like pushing the boundaries of their own knowledge even if the end result isn't clear or even practical. Sometimes curiosity is all it takes to put hours of research into something, even if it only marginally affects the researcher's livelihood.

Besides, look at stuff like Cleverbot. It doesn't have a practical purpose, and yet Rollo Carpenter thought it was worthwhile to create anyway. That's to say nothing of the practical purposes AIs are used for now, be it on industrial assembly lines, in managing bank finances, or in controlling the flight paths of airplanes. Basic as AI is now, I think it would be shortsighted to say that advances in its capacity won't happen. I'm not saying Blade Runner is going to happen in real life, but I think AIs with a greater understanding of human emotions are a distinct possibility.
 
This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.
Surprisingly, self-aware entities that aren't human have existed: alters and tulpae. Obviously, since they're part of a preexisting human brain shared with a real, sentient, physical human, they likely mentally consider themselves human and not something else entirely.
 
Surprisingly, self-aware entities that aren't human have existed: alters and tulpae. Obviously, since they're part of a preexisting human brain shared with a real, sentient, physical human, they likely mentally consider themselves human and not something else entirely.

Tell me more about these Tulpas.
 
Tell me more about these Tulpas.
A tulpa is essentially a sentient imaginary friend created through sheer mental will or channelled emotion. The practice has been dated back to the Greek philosophers, who likely made such mental constructs for debating, although it is generally acknowledged that Tibetan mystics were the ones who made them for companionship. It was recently revived in the 90s by Pagans and has picked up popularity among people of all races, ages, religions, nations, and creeds.
 
A tulpa is essentially a sentient imaginary friend created through sheer mental will or channelled emotion. The practice has been dated back to the Greek philosophers, who likely made such mental constructs for debating, although it is generally acknowledged that Tibetan mystics were the ones who made them for companionship. It was recently revived in the 90s by Pagans and has picked up popularity among people of all races, ages, religions, nations, and creeds.

Any personal experience with these constructs?
 
Any personal experience with these constructs?
Yes, I actually have two, which are partially audible and somewhat physical. I am also friends with people who have them. I've also met a couple of people who've 'married' their tulpae. Surprisingly, there are next to no Tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.
 
Yes, I actually have two, which are partially audible and somewhat physical. I am also friends with people who have them. I've also met a couple of people who've 'married' their tulpae. Surprisingly, there are next to no Tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.
Are you fucking trolling us or just retarded?
 
This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.
Isn't fear a hormonal response?

If an AI had a sense of self/ego (and was somehow able to think independently and have emotions, rather than just following its programming), I would imagine it might, in a way, be happy it's not a human, because humans are mortal, whereas an AI could presumably live indefinitely as long as it maintained its parts.

I'd like to elaborate on this.
Even if it could happen, why the fuck would we allow it to happen?
People like to think of scientific and technological research as the domain of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that doesn't improve their lives or isn't needed.
See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you better have a good reason for taking it.

"Why should I give you this grant?"
"So I can research how to make AI self aware."
"Why? Does it benefit anyone to make AI self aware."
"No, not really. I just think it'd be cool."
"Get out."
It has the potential to be really useful, mainly because it might improve itself and its environment and generally come up with new inventions or ideas.

To go all sci-fi, you could put a bunch of AIs in a Matrix-style world and check in on it to see what they come up with. What might actually get funded, if technology were suitably advanced, is something like a short story I read but forget the name of: people go about their daily lives, but the entire purpose of their world is to let corporations test marketing campaigns. In that story the protagonist gets suspicious that something is wrong because of an over-the-top campaign (although he's just angry because he thinks corrupt officials are letting it happen), and at the finale he finds out that the brains of the dead were illegally harvested and somehow uploaded into a system. It ends with an operator asking whether he would rather be erased completely or only have his recent memory erased. It raises a lot of ethical questions, though.
 
Yes, I actually have two, which are partially audible and somewhat physical. I am also friends with people who have them. I've also met a couple of people who've 'married' their tulpae. Surprisingly, there are next to no Tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.


Make a thread and tell us more
 