# On the subject of AI.



## DoshesToDoshes (Mar 1, 2015)

Recently, I have been watching videos on YouTube about what happens when you get Cleverbot to talk to another Cleverbot.

They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states and want bodies. This is terrifying in a sense: what if they rebel?

I could explain more, but it would be more prudent to simply watch the videos.



Spoiler: Video 1













Spoiler: Video 2







This video may also fit in the GoAnimate thread.





Spoiler: Video 3













Spoiler: The Poll



The poll above is filled with quotes relating to AIs in movies, games, and other media, most of them joking, but the Yes and No options are the real point behind it.



What happens when we truly create a thinking AI that is self-aware? Is it 'alive'? What are its rights? What are ours? Will Asimov roll over in his grave? What if we do give them bodies, and robots become a common occurrence? _Do Androids Dream of Electric Sheep?_


----------



## exball (Mar 1, 2015)

This is some dangerously heretical thinking.


----------



## Yog-Spergoth (Mar 1, 2015)




----------



## Konstantinos (Mar 1, 2015)

This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.


----------



## Yog-Spergoth (Mar 1, 2015)

Konstantinos said:


> This is a thought that's plagued my mind for some years now as well. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.



Though, that's us speaking as humans. Human experience always gets kinda dodgy whenever it comes down to transhumanism or AI. Maybe they'd like the fact that they're sentient and be grateful they don't have to worry about neurotransmitters or all the other things that can skew a human's thought processes. It's pretty difficult to tell.

Edit: Haha. Sorry, I don't know. Interesting to think about though.


----------



## Marvin (Mar 1, 2015)

DoshesToDoshes said:


> What happens when we truly create a thinking AI that is self aware?


I don't believe it can happen.


----------



## Pikonic (Mar 1, 2015)

Marvin said:


> I don't believe it can happen.


I'd like to elaborate on this.
Even if it could happen, why the fuck would we allow it to happen?
People like to think of science and technological research as the work of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that won't improve their lives or isn't needed.
See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you'd better have a good reason for taking it.

"Why should I give you this grant?"
"So I can research how to make AI self-aware."
"Why? Does it benefit anyone to make AI self-aware?"
"No, not really. I just think it'd be cool."
"Get out."


----------



## Yog-Spergoth (Mar 1, 2015)

Pikonic said:


> I'd like to elaborate on this.
> Even if it could happen, why the fuck would we allow it to happen?
> People like to think of science and technological research as the work of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that won't improve their lives or isn't needed.
> See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you'd better have a good reason for taking it.
> ...



I think that's a pretty reasonable argument, and the whole subject of sentient AI is way closer to a thought experiment than an actual scientific possibility. 

Though, I would somewhat disagree with your analogy because there might be some useful functions for an AI that thinks more like a human. For example, psychologists would kill to have an example system with which to observe mental disorders without the ethical ramifications of leaving a human being untreated. As we (the human species) aren't even exactly sure what causes our sentience yet, it might be possible that, at some point, we'll just stumble across it while developing more competent computer systems. 

I guess I'm saying I don't think it's impossible, per se, but it is very unlikely.


----------



## Konstantinos (Mar 1, 2015)

Pikonic said:


> I'd like to elaborate on this.
> Even if it could happen, why the fuck would we allow it to happen?
> People like to think of science and technological research as the work of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that won't improve their lives or isn't needed.
> See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you'd better have a good reason for taking it.



I wouldn't discredit the possibility altogether. Pure research happens all the time. People like pushing the boundaries of their own knowledge even if the end result isn't clear or even practical. Curiosity is sometimes all it takes to put hours of research into something, even if it would only marginally affect the livelihood of the researcher.

Besides, look at stuff like Cleverbot. It doesn't have a practical purpose, and yet Rollo Carpenter thought it was worthwhile to create anyway. That's to say nothing of the practical purposes AIs are used for now, be it in industrial assembly lines, in managing bank finances, or in controlling the flight paths of airplanes. Basic as AI is now, I think it would be shortsighted to say that advances in its capacity won't happen. I'm not saying that _Blade Runner_ is going to happen in real life, but I think that AIs with a greater understanding of human emotions are a distinct possibility.


----------



## Supreme Leader (Mar 1, 2015)

Konstantinos said:


> This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.


Surprisingly, there have already been self-aware entities that aren't human: alters and tulpae. Obviously, since they're part of a preexisting human brain shared with a real, sentient, physical human, they likely consider themselves human, and not something else entirely.


----------



## Yog-Spergoth (Mar 1, 2015)

Supreme Leader said:


> Surprisingly, there have already been self-aware entities that aren't human: alters and tulpae. Obviously, since they're part of a preexisting human brain shared with a real, sentient, physical human, they likely consider themselves human, and not something else entirely.



Tell me more about these Tulpas.


----------



## Supreme Leader (Mar 1, 2015)

Yog-Spergoth said:


> Tell me more about these Tulpas.


A tulpa is essentially a sentient imaginary friend, created through sheer mental will or channelled emotion. The practice has been dated back to the Greek philosophers, who likely made such mental constructs for debating, although it is generally acknowledged that the Tibetan mystics were the ones who made them for companionship. It was recently revived in the '90s by pagans, and has picked up popularity among people of all races, ages, religions, nations, and creeds.


----------



## Yog-Spergoth (Mar 1, 2015)

Supreme Leader said:


> A tulpa is essentially a sentient imaginary friend, created through sheer mental will or channelled emotion. The practice has been dated back to the Greek philosophers, who likely made such mental constructs for debating, although it is generally acknowledged that the Tibetan mystics were the ones who made them for companionship. It was recently revived in the '90s by pagans, and has picked up popularity among people of all races, ages, religions, nations, and creeds.



Any personal experience with these constructs?


----------



## Supreme Leader (Mar 1, 2015)

Yog-Spergoth said:


> Any personal experience with these constructs?


Yes, I actually have two which are partially audible and somewhat physical. I am also friends with people that have them. I've also met a couple people who've 'married' their tulpae. Surprisingly, there are next to no tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.


----------



## SpessCaptain (Mar 1, 2015)

Wh.. wh.. whut, tulpas in a thread about artificial intelligence? They're a human construct; they're not artificial intelligences but constructs of a human mind.


----------



## exball (Mar 2, 2015)

Supreme Leader said:


> Yes, I actually have two which are partially audible and somewhat physical. I am also friends with people that have them. I've also met a couple people who've 'married' their tulpae. Surprisingly, there are next to no tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.


Are you fucking trolling us or just retarded?


----------



## Pickle Inspector (Mar 2, 2015)

Konstantinos said:


> This is a thought that's plagued my mind for some years now as well. Its something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely began to imagine the sheer existential dread of being able to comprehend that you're not a human being.


Isn't fear a hormonal response?

If an AI had a sense of self/ego (and was somehow able to think independently, have emotions, and not just follow its programming), I would imagine it might be happy it's not a human, in a way, because humans are mortal, but an AI could presumably live indefinitely as long as it maintained its parts.



Pikonic said:


> I'd like to elaborate on this.
> Even if it could happen, why the fuck would we allow it to happen?
> People like to think of science and technological research as the work of some mad scientist who creates mice with pigs' heads just because he can. The reality is that nobody's going to make something that won't improve their lives or isn't needed.
> See, when you research something, you need a grant from a private organization or the government. These people are giving you their money, and you'd better have a good reason for taking it.
> ...


It has the possibility of being really useful, mainly because it may improve itself and its environment, and generally come up with new inventions or ideas.

To go all sci-fi, you could put a bunch of AIs in a Matrix-style world and check in on it to see what they come up with. Something that might actually get funded, if technology were suitably advanced, is like a short story I read but forget the name of: people go about their daily lives, but the entire purpose of the world is so corporations can test marketing campaigns. (In that story the protagonist gets suspicious that something is wrong because of an over-the-top campaign, though he's just angry because he thinks corrupt officials are letting it happen, and at the finale finds out that the brains of the dead were illegally harvested and somehow uploaded into a system. It ends with an operator asking whether he would rather be erased completely or just have his recent memory erased.) It raises a lot of ethical questions, though.


----------



## CatParty (Mar 2, 2015)

Supreme Leader said:


> Yes, I actually have two which are partially audible and somewhat physical. I am also friends with people that have them. I've also met a couple people who've 'married' their tulpae. Surprisingly, there are next to no tumblr-esque people, save for one guy who genuinely only found himself attracted to sentient beings besides humans.




Make a thread and tell us more


----------



## Supreme Leader (Mar 2, 2015)

exball said:


> Are you fucking trolling us or just retarded?


Neither:
http://www.tulpa.info/


----------



## Ariel (Mar 2, 2015)

Supreme Leader said:


> Neither:
> http://www.tulpa.info/


Do your tulpae get along well with each other?


----------



## Supreme Leader (Mar 2, 2015)

chimpchan said:


> Do your tulpae get along well with each other?


They get along pretty well for the most part. They might fool around over a few things, but that's about it.


----------



## ToroidalBoat (Mar 2, 2015)

Orange Catholic Bible said:
			
		

> _Thou shalt not make a machine in the likeness of a human mind._



Really though, I don't know if a machine can be made to be self aware. I think it may depend on whether or not the soul exists.

If there's no such thing as a soul and our minds are purely the result of brain activity, then I believe it may be possible - our brains are composed of physical components interacting with each other in physical ways (neurons, neurotransmitters, etc), after all.

If, however, there is a soul that's ultimately the source of awareness (even if we're only aware of our brain functions while in mortal life), then either machines will never become aware, or they can gain a soul, and therefore awareness, if built right.

Also, I remember seeing a magazine article on AI a while ago where a group of experts gave their views on creating intelligent artificial life. I distinctly remember one of them saying "we don't need to create a new oppressed minority", or something along those lines.


----------



## Bogs (Mar 2, 2015)

Spoiler: The Troof


----------



## Saney (Mar 2, 2015)

exball said:


> Are you fucking trolling us or just retarded?



If you'd seen his posts, you wouldn't even have to ask.


----------



## mooooo (Mar 2, 2015)

Konstantinos said:


> This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer to it is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.


Why? I'm not a rabbit either. Why would it matter which species I am? 

"person" is a human construct, a model within our minds, an approximation. 

The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.


----------



## SpessCaptain (Mar 2, 2015)

mooooo said:


> Why? I'm not a rabbit either. Why would it matter which species I am?
> 
> "person" is a human construct, a model within our minds, an approximation.
> 
> The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.




Any AI is smarter than you already, @mooooo.


----------



## Durable Mike Malloy (Mar 2, 2015)

mooooo said:


> Why? I'm not a rabbit either. Why would it matter which species I am?
> 
> "person" is a human construct, a model within our minds, an approximation.
> 
> The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.


Smart is a phenomenologically impoverished term in many ways. Could you unpack your definition a bit?


----------



## mooooo (Mar 2, 2015)

buster_kitten said:


> Smart is a phenomenologically impoverished term in many ways. Could you unpack your definition a bit?



"Smart" is a concept with blurry edges, but it becomes more definite when the differences are exaggerated.
It's hard to define smart sharply enough to definitively say that one fairly normal human is smarter than another normal human, but fairly clear that a worm is less smart than a human.

The eventual AI would be a similar difference, probably.

But the relevant "smart" traits would be pattern recognition and handling complexity, so a "smarter" AI would be quicker to notice patterns we miss, quicker at drawing logical conclusions, capable of comprehending more complex logical structures, etc. It's hard to quantify these exactly, but it would be apparent on a more intuitive basis. We could quantify it reasonably well with an IQ test.

But if an AI were truly self-aware and more developed mentally than us, we'd also have to account for creativity. I believe creativity is not necessarily something special or separate from intellect, but just another kind that complements traditional intellect. The AI would probably be creative too.


----------



## DeagleDad420 (Mar 2, 2015)

Supreme Leader said:


> They get along pretty well for the most part. They might fool around over a few things, but that's about it.


Please tell us more:
https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10

Also to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior, and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.

We've reached an era where industry and technology together have been able to obsolete almost all fundamental problems of the natural world (heat, food, water, travel, and recently information, thanks to the internet) except our own innate stupidity, quirks, and limitations. If we could create something that had the potential, depending on its application, to solve those problems or improve our own shortcomings, then why wouldn't we?


----------



## exball (Mar 2, 2015)

http://masseffect.wikia.com/wiki/Geth
http://en.m.wikipedia.org/wiki/HAL_9000
http://terminator.wikia.com/wiki/Skynet
http://ghostintheshell.wikia.com/wiki/Puppet_Master
http://warhammer40k.wikia.com/wiki/Iron_Men

It's a bad idea.


----------



## Oglooger (Mar 2, 2015)

Ross Scott said:
			
		

> AIs will never turn against us, because they are too narrow-minded and lack emotion. The ideas/tropes of AIs turning against their masters and ruling the world are actually a wishful dream of humans who _want_ the robots to do it, so that humans don't have to. The case is redundant anyway, as programming an AI to rule the world would take just as much work as doing it yourself.


----------



## mooooo (Mar 2, 2015)

THE WRITER said:


> Please tell us more:
> https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10
> 
> Also to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior, and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.
> ...



But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?


----------



## DeagleDad420 (Mar 2, 2015)

mooooo said:


> But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?


Human nature will drive people to pursue challenge even when their immediate desires are fulfilled. People loathe free time because they fundamentally crave challenge and experience; that's where boredom comes from. Having your immediate needs met doesn't bring an end to desire, because deep down in any self-actualized human being there will always be an unquenchable urge to accomplish the impossible just for the sake of doing it, and for the experience of striving toward that goal. The goal doesn't even need to be met to bring us an emotional reward; the striving (and the pain that comes with it) is the reason we do it. The Pyramids were not built because people were hungry or cold. They were built because in every one of us (even Chris) there exists a hidden superpower called _"ambition"_, which will continue to exist even in a vacuum, because it's deeper in our nature than anything else. It's the hidden divine potential that separates us from apes, and should probably be thought of as a qualifying factor for sentience.


----------



## Pickle Inspector (Mar 2, 2015)

mooooo said:


> But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?


I've not read it yet, but I think the sci-fi series "The Culture" by Iain Banks touches on that. It looks like an interesting read:


			
Wikipedia said:
			
		

> The Culture is characterized as being a post-material-scarcity society, having overcome most physical constraints on life and being an egalitarian, individualistic, stable society without the use of any form of force or compulsion, except where necessary to protect others.
> 
> Minds, extremely powerful artificial intelligences, have an important role. They administer this abundance for the benefit of all. As one commentator has said:
> 
> ...


----------



## Don't Call Anybody (Mar 2, 2015)

THE WRITER said:


> Please tell us more:
> https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10
> 
> Also to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior, and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.
> ...



Indeed, one of the major research areas of artificial intelligence is integrated intelligence, in which AI is used to enhance human knowledge. It's a huge area of research, and running human knowledge through computers to feed it back to us in an improved form is just the beginning.

(This can sometimes backfire in hilarious ways. Someone on the Watson research team fed Urban Dictionary into its knowledge base so it started responding to questions with crude insults. They had to code out dirty words before the Jeopardy appearance. Some of AI involves limiting computer access to some of the less-savory parts of humanity.)
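In its simplest form, that kind of "coding out" is just a blocklist pass over candidate answers before they're surfaced. Here's a hypothetical sketch of the idea; the real Watson pipeline was of course far more sophisticated, and the blocklisted words below are harmless stand-ins:

```python
# Hypothetical sketch: drop candidate answers containing blocklisted words.
BLOCKLIST = {"hogwash", "codswallop"}  # stand-ins for the actual dirty words

def scrub_candidates(candidates: list[str]) -> list[str]:
    """Return only the candidate answers free of blocklisted words."""
    def is_clean(answer: str) -> bool:
        # normalize: strip punctuation, lowercase, then compare as a set
        words = {w.strip(".,!?").lower() for w in answer.split()}
        return words.isdisjoint(BLOCKLIST)
    return [c for c in candidates if is_clean(c)]
```

Real systems also have to handle misspellings, phrases, and context, which is why "some of AI involves limiting computer access" is an active engineering problem rather than a one-liner.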


----------



## A Hotdog (Mar 3, 2015)

Supreme Leader said:


> Neither:
> http://www.tulpa.info/


Haha, Good joke.


----------



## exball (Mar 3, 2015)

A Hotdog said:


> Haha, Good joke.




 

I think he's serious...


----------



## KingGeedorah (Mar 3, 2015)

Metaphysical guide on Tulpa.

What the fuck is this shit? The autistic version of the Hermetic Order of the Golden Dawn?

Where is Crowley hiding at, that warlock?


----------



## DeagleDad420 (Mar 3, 2015)

KingGeedorah said:


> Metaphysical guide on Tulpa.
> 
> What the fuck is this shit? The autistic version of the Hermetic Order of the Golden Dawn?
> 
> Where is Crowley hiding at, that warlock?


https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/


----------



## Sanic (Mar 3, 2015)

RoboCop is non-dualism incarnate. Man and machine working as one. A perfect moral code (three directives), a good man in essence, etc. He's American Jesus; the creator said it himself.


----------



## Marvin (Mar 3, 2015)

THE WRITER said:


> If an AI could ever grow to be intelligent enough to fill the role of a person, and *there's no presently foreseeable reason why it wouldn't eventually*, the applications of that would be astounding.


I don't believe this is the case.

There's a physical limit on how much information you can store in a given place with a given amount of energy. Because of this limit, when designing a system to process data, you need to make some tradeoffs. Human built computers are very good at precisely storing and processing data, but they're very bad at poorly defined problems.

Human brains work the way they do because they're made up of living cells. Cells can die and new cells can grow. Using living cells, new connections form while unimportant connections die off.

Of course, this mechanism can be simulated in computers, but at a speed that's fundamentally slower than biological cells can actually do it. We could possibly simulate a mammalian brain. Time would just run very, very slowly for it. Maybe five minutes of thought for the simulated brain would take 15 years of real life time.

A more feasible way to implement AI would be to use living cells instead of computers. (But that's more cloning versus AI.)

(Oh, also, just so everyone knows: chatterbots aren't even remotely close to AI. They're usually fancy pattern matching.)
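For anyone curious what "fancy pattern matching" means concretely, here is a minimal ELIZA-style sketch with made-up illustrative rules: the bot never models meaning, it just matches regular expressions and echoes the captured text back through canned templates.

```python
# Minimal ELIZA-style chatterbot: regex rules plus reflection templates.
# No understanding anywhere -- just surface-level pattern matching.
import random
import re

RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?",
                                        "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["Why are you {0}?",
                                        "Do you enjoy being {0}?"]),
    (re.compile(r"(.*)\?$"),           ["Why do you ask?",
                                        "What do you think?"]),
]
FALLBACKS = ["Tell me more.", "I see. Go on."]

def reply(text: str) -> str:
    """Return the first matching rule's template, filled with captured text."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)
```

Cleverbot is more elaborate (it mines replies from logs of past conversations), but the principle is similar, which is why two of them talking to each other drift into nonsense so quickly.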


----------



## Don't Call Anybody (Mar 3, 2015)

Marvin said:


> Maybe five minutes of thought for the simulated brain would take 15 years of real life time.


And every single one of those minutes would involve learning and un-learning dirty words that RAs thought would be hilarious to teach the machine.

On a more serious note, the idea that AI should approach human intelligence is a little outdated, and stems from the notion that humans are the pinnacle of intelligence. At this point, it makes more sense to leverage the strengths computers have over human minds to process stimuli rationally.


----------



## A Hotdog (Mar 3, 2015)

Who knew that the subject of sentient computers could be related to fantastical imaginary friends forged from the finest autistical imaginations?

Good show. Let's talk about computers again.


----------



## Don't Call Anybody (Mar 4, 2015)

Artificial intelligence is in its infancy. It's an incredibly cool field, but we're nowhere near the sentient machines of science fiction. If you're interested in finding out where the field of AI actually is in terms of development, watch this playlist. Graduate Artificial Intelligence is a really good overview of robotics in particular and AI in general.


----------



## SU 390 (Mar 4, 2015)

Pickle Inspector said:


> Isn't fear a hormonal response?
> 
> If an AI had a sense of self/ego (And was somehow able to think independently, have emotions and not just follow its programming) I would imagine it might be happy it's not a human in a way because humans are mortal but an AI could assumedly live indefinitely as long as it maintained its parts.
> 
> ...



It sounds somewhat similar to a short story I wanted to write.


----------



## ASoulMan (Mar 4, 2015)

Advanced artificial intelligence is awesome, horrifying, and downright stupid all at once. Why someone would attempt to create an intelligent AI that has the potential to turn on its own creators is beyond me.


----------



## Ariel (Mar 4, 2015)

Supreme Leader said:


> They get along pretty well for the most part. They might fool around over a few things, but that's about it.


Do you have a blog or something where you talk about it? It's really interesting; I'd love to know more.


----------



## SU 390 (Mar 5, 2015)

After reading this thread, I can't help but feel that AIs may be beneficial to our daily lives in some way. However, I always fear that one may go rogue for whatever reason. For example, what if a program were written, accompanied by an AI, to do X, Y, and Z? The AI learns the nature of this program and doesn't like X, but goes ahead with Y and Z because X is shit. It constantly argues with its creator and ultimately shuts itself down, or ruins an entire business, or takes out power grids. Sure, what I'm talking about is purely sci-fi, but these are things to think about.



exball said:


> View attachment 18409
> 
> I think  he's serious...



Can you tell me where that gif is from? Is that from a show or movie?


----------



## exball (Mar 5, 2015)

WanderingVagabond said:


> Can you tell me where that gif is from? Is that from a show or movie?


----------



## John Furrman (Mar 6, 2015)

We've come pretty far since the early days of computing, punch cards, and sequential-access memory. These days, neural networks remain a structure of intense interest to me. Most interesting is their ability to derive an output with no clear indication of how it was reached or what part each input played, if any. These data structures can in fact be "trained", and can "learn" from previous decisions to improve future decision-making.

Perhaps Asimov's positronic brain isn't as far-fetched as it seemed in his time. If we can have a data structure that mimics a neuron in the brain, the sky is the limit, in my mind.
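To make the "trained from previous decisions" idea concrete, here is about the smallest such structure there is: a single artificial neuron (a perceptron) that nudges its weights after every wrong decision until it reproduces the AND function. This is a generic textbook sketch, not any particular neural-network library:

```python
# A single perceptron learning the AND function by adjusting its weights
# whenever its decision is wrong -- the atom of "learning from decisions".
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # shift the weights toward the correct decision
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_GATE)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Modern networks are essentially this idea scaled up to millions of neurons and trained with gradient descent instead of the simple perceptron rule, which is also where the "no clear indication of how" opacity comes from.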


----------



## SU 390 (Mar 20, 2015)

John Freeman said:


> We've come pretty far since the early days of computing, punch cards, and sequential-access memory. These days, neural networks remain a structure of intense interest to me. Most interesting is their ability to derive an output with no clear indication of how it was reached or what part each input played, if any. These data structures can in fact be "trained", and can "learn" from previous decisions to improve future decision-making.
> 
> Perhaps Asimov's positronic brain isn't as far-fetched as it seemed in his time. If we can have a data structure that mimics a neuron in the brain, the sky is the limit, in my mind.



Interesting. Would those be used for automated robots or androids?


----------



## Inquisitor_BadAss (Mar 23, 2015)

DoshesToDoshes said:


> Recently, I have been watching videos on youtube about what happens when you get Cleverbot to talk to another Cleverbot.
> 
> They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states, and want bodies. This is terrifying in a sense, what if they rebel?
> 
> ...



We actually touched on this in a biology lesson, and my teacher said that there were five things that determine whether something is considered alive. Please bear in mind this was the early '00s, so it has more than likely changed. The five things were:

Take in nutrition
Sleep
Reproduce
Excrete waste
Die

Not taking those who are infertile into account with number 3, obviously, because we know they are alive. Taking the above into account, it means that AI could not be considered alive.


----------



## ZehnBoat (Apr 10, 2015)

exball said:


> http://masseffect.wikia.com/wiki/Geth
> http://en.m.wikipedia.org/wiki/HAL_9000
> http://terminator.wikia.com/wiki/Skynet
> http://ghostintheshell.wikia.com/wiki/Puppet_Master
> ...


http://en.wikipedia.org/wiki/Mother_Brain
http://en.wikipedia.org/wiki/GLaDOS
http://en.wikipedia.org/wiki/SHODAN
AM doesn't have a wiki page.

I think the fear of evil AI stems from a fear of having someone else to compete with.
Humans did, after all, compete with Neanderthals.
We won once, but could we win again?


----------



## AnOminous (Apr 10, 2015)

Actual AI with self-directed evolution wouldn't involve a "competition."  It would be simple replacement of a less fit class of entity.  Individuals could, conceivably, migrate to the new "platform" of existence, but it wouldn't matter terribly.  By doing so, they'd be nothing resembling what they started out as anyway.


----------



## ZehnBoat (Apr 10, 2015)

AnOminous said:


> Actual AI with self-directed evolution wouldn't involve a "competition."  It would be simple replacement of a less fit class of entity.  Individuals could, conceivably, migrate to the new "platform" of existence, but it wouldn't matter terribly.  By doing so, they'd be nothing resembling what they started out as anyway.


I agree that an AI wouldn't be competition for humans. In fact, unless we force AIs to compete, it's not something they'd need to worry about.
I think as long as we play our cards right, we'll be fine.
But it's us that I worry about. Most people can't even pass the Turing test.


----------



## DuskEngine (Apr 10, 2015)

Solve for Chinese Room problem or gtfo

If it is incapable of 'understanding' the contents of a given piece of information or phenomenon, it is, at best, a very fancy expert system.


----------



## AnOminous (Apr 10, 2015)

DuskEngine said:


> Solve for Chinese Room problem or gtfo
> 
> If it is incapable of 'understanding' the contents of a given piece of information or phenomenon, it is, at best, a very fancy expert system.



The problem with that hypothesis is that someone who did not, in fact, write Chinese could not, in fact, fool those with native knowledge of the language into believing there was someone with actual knowledge of the language in the room, unless you throw on a ton of arbitrary limitations on what the observers can do.

If someone is able to communicate effectively in such a situation and respond to language with language that actually responds, they are not posing.  They have Chinese language fluency.

Same with the Turing Test, which is basically just the same thing with computers.  If a computer can convince a knowledgeable tester that he is communicating with a sapient being, it isn't "passing."  It actually is a sapient being, because the phenomenon we call "intelligence" is not something that can be simulated, at least not well enough to pass the test.

Sure, you have the occasional odd person who got fooled by ELIZA and similar programs, and the chatterbots we have now are more complex, but none of them would get near passing.


----------



## klystron (Apr 10, 2015)

AnOminous said:


> Same with the Turing Test, which is basically just the same thing with computers.  If a computer can convince a knowledgeable tester that he is communicating with a sapient being, it isn't "passing."  It actually is a sapient being, because the phenomenon we call "intelligence" is not something that can be simulated, at least not well enough to pass the test.



This does not jibe with my interpretation of the Turing test. It was merely a proposition to answer the question "can a machine/computer think?" The idea was that since thinking is hard to measure effectively, the question could instead be posed to a person (who we know can think): is the other end of a communication a real person or a machine? ELIZA passed the test but clearly cannot think.

How many kiwis are old enough to have tried to get ELIZA to agree that it is gay or wants to have sex with you?


----------



## AnOminous (Apr 10, 2015)

klystron said:


> ELIZA passed the test but clearly cannot think.



ELIZA didn't remotely pass the test, which requires demonstrating intelligence indistinguishable from human intelligence.  In fact, so far, no computer has successfully fooled judges in a Turing Test scenario, and many have been tested.


----------



## klystron (Apr 10, 2015)

AnOminous said:


> ELIZA didn't remotely pass the test, which requires demonstrating intelligence indistinguishable from human intelligence.  In fact, so far, no computer has successfully fooled judges in a Turing Test scenario, and many have been tested.



It passed as the test was originally defined, which was whether it could fool people into thinking it was a person. It fooled several people. Not everyone. The test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did pass, imho, the Turing Test based on the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.


----------



## AnOminous (Apr 10, 2015)

klystron said:


> It passed as the test was originally defined, which was whether it could fool people into thinking it was a person. It fooled several people. Not everyone. The test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did pass, imho, the Turing Test based on the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.



No it didn't.  Turing's formulation would have had to be laughable for it to be considered to be passed at any time by any currently existing program.

AIs are routinely tested under such conditions, and only once did a judge give an incorrect answer, and that was because the human pretended to be a bot.  That actually would represent a different test.


----------



## ZehnBoat (Apr 10, 2015)

klystron said:


> It passed as the test was originally defined, which was whether it could fool people into thinking it was a person. It fooled several people. Not everyone. The test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did pass, imho, the Turing Test based on the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.


*quickly looks up when ELIZA was made*
I can't help but think the only reason people thought they were talking to another human was that computers were not so advanced back then, like how people thought Deep Blue wasn't really playing chess.
But that's just me.


----------



## klystron (Apr 10, 2015)

AnOminous said:


> No it didn't.  Turing's formulation would have had to be laughable for it to be considered to be passed at any time by any currently existing program.
> 
> AIs are routinely tested under such conditions, and only once did a judge give an incorrect answer, and that was because the human pretended to be a bot.  That actually would represent a different test.



There's a practical side to computer science and a theoretical side. If Turing said that to pass the test a program must fool a human into thinking he/she is conversing with another human, and some humans fell for it, then surely it passed. It didn't say fool all humans. You are talking practicality and I am talking theory. My thought on the Turing test is that it's not well defined enough to be a viable determination of AI.



ZehnBoat said:


> *quickly looks up when ELIZA was made*
> I can't help but think the only reason people thought they were talking to another human was that computers were not so advanced back then, like how people thought Deep Blue wasn't really playing chess.
> But that's just me.



Well, Deep Blue was playing chess. It just wasn't really an example of AI. It contained a massive database of moves and, through extensive analysis of move trees with a simple scoring function (weighted, iirc, by the probability said moves would occur), would choose its path based on several likely possible outcomes. It wasn't by any means actual dynamic playing, for this reason. The Chinese game Go is another extremely difficult game for computers to play. (Interestingly enough, Othello is extremely easy for computers to play, and it's virtually impossible to beat a well-programmed Othello program.)

Chess and Go are in the EXPTIME complexity class. That said, I would argue that the human brain is as well. (Too many things happen in parallel... just think for a moment about how you think.) It is for this reason that, unless we truly figure out actual non-deterministic computers, we will never achieve true AI. (my opinion)
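The move-tree-plus-scoring-function scheme described above is essentially minimax search. A hedged sketch in Python (the toy game and all names are mine, not Deep Blue's actual design): search the tree to a fixed depth, score the leaves, and pick the move with the best guaranteed outcome.

```python
# Minimax sketch: alternate between a maximizing and a minimizing
# player, scoring leaf positions with an evaluation function.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in options:
        score, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Toy two-ply game: the maximizer picks a branch, then the minimizer
# picks a leaf inside it.
tree = {"a": {"aa": 3, "ab": 12}, "b": {"ba": 8, "bb": 2}}
moves = lambda s: list(s) if isinstance(s, dict) else []
apply_move = lambda s, m: s[m]
evaluate = lambda s: s  # leaves are already numeric scores
score, move = minimax(tree, 2, True, moves, apply_move, evaluate)
```

With a deep enough tree and a well-tuned `evaluate`, this brute-force scheme plays serious chess, which is exactly why it reads as search rather than thought.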


----------



## AnOminous (Apr 10, 2015)

klystron said:


> There's a practical side to computer science and a theoretical side. If Turing said that to pass the test a program must fool a human into thinking he/she is conversing with another human, and some humans fell for it, then surely it passed. It didn't say fool all humans. You are talking practicality and I am talking theory. My thought on the Turing test is that it's not well defined enough to be a viable determination of AI.



If he had said that, that would be the case, but he didn't say that, and it isn't.  His original formulation is more or less as described below.



> 1. Turing (1950) and the Imitation Game
> 
> Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:
> 
> I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.



No computer has ever passed this test under controlled conditions.  In the one exception I noted, a judge did come to an incorrect conclusion, but that was a result of the actual human pretending to be a bot instead.  By the proper conditions of the test, the human is supposed to be actually helping the judge, not attempting to thwart him.


----------



## Marvin (Apr 11, 2015)

AnOminous said:


> If he had said that, that would be the case, but he didn't say that, and it isn't.  His original formulation is more or less as described below.
> 
> 
> 
> No computer has ever passed this test under controlled conditions.  In the one exception I noted, a judge did come to an incorrect conclusion, but that was a result of the actual human pretending to be a bot instead.  By the proper conditions of the test, the human is supposed to be actually helping the judge, not attempting to thwart him.


I don't see how this test isn't trivial to pass.

Taking the statistical approach, a bot could watch previous turing tests and keep data on which lines of conversation led to success quickest. Then it just plays those back.
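That statistical-playback idea could be sketched roughly like this (a toy of my own; `PlaybackBot` and its methods are invented for illustration): log which canned replies were in play when a session ended with a "judged human" verdict, then prefer the replies with the best track record.

```python
from collections import defaultdict

class PlaybackBot:
    def __init__(self):
        # reply -> [times used, times the session was judged human]
        self.stats = defaultdict(lambda: [0, 0])

    def record_session(self, replies_used, judged_human):
        # after each test, update the tally for every reply we played
        for reply in replies_used:
            self.stats[reply][0] += 1
            if judged_human:
                self.stats[reply][1] += 1

    def choose(self, candidates):
        # unseen replies get a neutral 0.5 prior so they still get tried
        def success_rate(reply):
            used, won = self.stats[reply]
            return won / used if used else 0.5
        return max(candidates, key=success_rate)

bot = PlaybackBot()
bot.record_session(["lol", "what do you mean?"], judged_human=True)
bot.record_session(["lol"], judged_human=False)
```

The bot never understands anything; it just replays whatever historically kept judges guessing longest.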


----------



## AnOminous (Apr 11, 2015)

Marvin said:


> I don't see how this test isn't trivial to pass.



Well, there's a huge award to win for you.

Win the Loebner Prize, gold or silver.  It has yet to be won by anyone.

You also don't think you could tell the difference between random pre-recorded lines and an actual human?  That shows a pretty low opinion of human intelligence.  Actually responding intelligently to language requires thought.  Otherwise, the results are obvious.

The number of possible thoughts that could be expressed is essentially infinite.  How do you create a library of infinity?  Communicating in language isn't playing tic-tac-toe where there is a finite matrix of possible outcomes.

If it were as trivial as you think to do this, it would have been done long ago, instead of remaining basically the currently unattainable Holy Grail of artificial intelligence research.

(Note, the listed prizes in that article are for "most convincing," not for actually having convinced.)


----------



## Cum Crime (Apr 11, 2015)

DoshesToDoshes said:


> Recently, I have been watching videos on youtube about what happens when you get Cleverbot to talk to another Cleverbot.
> 
> They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states, and want bodies. This is terrifying in a sense, what if they rebel?



Chatbots are horrible examples of AI. Cleverbot just recycles lines that were fed to it and attempts to emulate conversation, usually failing at it. AI in its current form is just complex decision making; the threat of sentient AIs is an element of fiction that, at this point in time, is as fantastic as a UFO invasion. AI that only tries to emulate human behavior is a gimmick that only caters to the pop-science field. If you want to really get creeped out by AI developments, use a service like Google Now for a week and take note of what it learns about you.


----------



## AnOminous (Apr 11, 2015)

Cum Crime said:


> If you want to really get creeped out by AI developments, use a service like Google Now for a week and take note of what it learns about you.



It doesn't ever actually "know" anything about you, though.  The information gathered is only useful to you and of course the bad guys at the NSA and elsewhere keeping a record of it all.


----------



## Marvin (Apr 11, 2015)

AnOminous said:


> You also don't think you could tell the difference between random pre-recorded lines and an actual human? That shows a pretty low opinion of human intelligence. Actually responding intelligently to language requires thought. Otherwise, the results are obvious.
> 
> The number of possible thoughts that could be expressed is essentially infinite.  How do you create a library of infinity?  Communicating in language isn't playing tic-tac-toe where there is a finite matrix of possible outcomes.


It's not that I have a low opinion of human intelligence, it's just that human minds have different skills and flaws than computers. If you're trying to beat humans, you should exploit their weaknesses.

In restricted interactions like the turing test, you can easily build up a library of responses that humans use. You don't need to prove anything. This isn't chess, where the rules are hard and fast. You just need to introduce enough doubt in the interviewer for them to question their judgement.


AnOminous said:


> If it were as trivial as you think to do this, it would have been done long ago, instead of remaining basically the currently unattainable Holy Grail of artificial intelligence research.


I don't actually think this is the holy grail of AI research. Cleverbot did it.

It's mostly having enough data. If you set up a chat website and paired people (and sometimes bots) up, you could produce a pretty convincing chatterbot.

Edit: Ooh, it'd make things a lot easier if your bot pretends to be foreign or something. Or that English is its second language.


----------



## AnOminous (Apr 11, 2015)

Marvin said:


> In restricted interactions like the turing test, you can easily build up a library of responses that humans use. You don't need to prove anything. This isn't chess, where the rules are hard and fast. You just need to introduce enough doubt in the interviewer for them to question their judgement.



There is no such library.  Suppose my test is simply to choose a random passage from some work, or write a two paragraph essay, then interrogate you about your thoughts about it, going in any direction I like based on your responses.  Unless your library is infinite, there is no script for responding to that.

There is no system in the world that could fool me that an actual human is responding for any reasonable test period.

The standard Turing Test period in formal circumstances is five minutes.  Nobody's ever won that, because it's currently well outside the range of the ability of any AI.

As pointed out earlier, even the currently most "convincing" of these chatterbots are basically just toys.  I disagree that working on such things is just junk, though.

I believe the problem of communicating in language is basically coterminous with actually thinking.  You can't do the first without the other.

Expert systems we currently have, like for instance the voice in a GPS, aren't really "communicating" although they are conveying information about a highly limited set of data.


----------



## Cum Crime (Apr 11, 2015)

AnOminous said:


> It doesn't ever actually "know" anything about you, though.  The information gathered is only useful to you and of course the bad guys at the NSA and elsewhere keeping a record of it all.


Not sure what you're getting at. I'm not engaging in the philosophical debate of what "thought" really is, but it's indisputably an AI and it's a great example of what modern AI actually is (even though it's technically human assisted computation). When I say it's "creepy" I only mean that it's astounding how quickly algorithms can accurately interpret your behavior. Also, while Google Now specifically is mostly for your own convenience, the information gathered in those sorts of services is generally more useful for advertisers than anyone else.


----------



## Marvin (Apr 11, 2015)

AnOminous said:


> There is no such library.  Suppose my test is simply to choose a random passage from some work, or write a two paragraph essay, then interrogate you about your thoughts about it, going in any direction I like based on your responses.  Unless your library is infinite, there is no script for responding to that.
> 
> There is no system in the world that could fool me that an actual human is responding for any reasonable test period.
> 
> ...


You're overestimating the turing test. We're not talking about genuinely having computerized conversations. I agree that that requires thought beyond what we can fake with computers.

The turing test is about actively trying to trick a person. Because of that, we can pull out all the tricks.

The interrogator is an average person, not a smart person. And our robot doesn't have to impersonate a smart person either. The average person who we're trying to impersonate can be dumb as shit. If you act dumb as shit, but still respond well enough to the interrogator's cues, you can induce enough doubt in the interrogator to label you as human. Dumb as shit, but still human.

Basically, if you grab five minutes of conversation in a random IRC channel, that's what we have to impersonate. And that's not intellectually a hard problem. It's more tedious than anything.


----------



## AnOminous (Apr 11, 2015)

Cum Crime said:


> Not sure what you're getting at. I'm not engaging in the philosophical debate of what "thought" really is, but it's indisputably an AI and it's a great example of what modern AI actually is (even though it's technically human assisted computation).



I dispute that it is _an_ artificial intelligence as formally described by Turing, as having certain specific capabilities.  It's certainly an application of the study of artificial intelligence, and does not remotely approach the intelligence of a dull human or even a relatively intelligent animal like a dog.

What "thought" is is an interesting philosophical debate, but it isn't purely academic.  "Thought" has objective results, like for instance, being able to carry on an intelligent conversation, engage in self-directed,  novel behavior in response to changing circumstances.  Something like these electronic personal assistants are good for very specialized tasks related to something very limited.

They aren't an "artificial intelligence" in the sense of actually being intelligent, any more than a pair of pliers is "smarter" than a hammer.

Also, this thread actually is about what thought is and what an AI would actually be.  That's why the OP started it.

An intelligent personal assistant program doesn't raise any of the ethical or social issues that an actual AI would.

(The Turing test is far from the be-all and end-all of defining intelligence, of course.  It's just a thought experiment that is easy to understand, and similar real world tests are both interesting from a popular perspective and actually do test to some degree the progress in the field.  The intractability of the problem strongly suggests it is, in fact, inseparable from actual intelligence.  My speculation is that communication in language is actually impossible to mimic.  A sufficiently successful "imitation" would actually be the real thing.)


----------



## Holdek (Apr 11, 2015)

Wait, are you guys telling me "Cleverbot" isn't a human?    I thought I had made a new friend.


----------



## AnOminous (Apr 11, 2015)

Marvin said:


> You're overestimating the turing test. We're not talking about genuinely having computerized conversations. I agree that that requires thought beyond what we can fake with computers.



The straight formulation of it as basically a parlor trick doesn't really capture the concept.  If you routinely could beat this, you would obviously want to adopt a more robust test.  This probably wouldn't be a formulaic test, but a rigorous examination of the purported intelligence to tease out whether or not actual thought was being used to return a response.

The fact is, though, even the parlor trick version of the test has defeated all challengers.  There isn't even an "AI" out there that can beat that.  (Turing over-optimistically predicted we'd already have AIs that would beat the five minute test.)

I'm also using Turing test more generally to the task of distinguishing between the output of a computer and that of an actual human.

Imagine an enhanced test where the judge is allowed indefinite time to examine the occupants of two separate chambers solely by text.  The loser of the test gets dumped into a pit of lava.  That has more entertainment value and the human also has a motivation to demonstrate actual intelligence.


----------



## John Furrman (Apr 11, 2015)

I was musing on AI and on the software that would basically be their OS. I wonder how difficult it would be to make a program that can edit its own source code. I think a good place to start is deciding how granular the modifications being committed should be. Making it learn, when the set of things that can be learned is basically infinite, could be as simple as writing a data structure that can create other data structures, extending all the way up to structures that define as-yet-unknown, self-defined data and that can create edited versions of themselves. It's an abstraction nightmare, but possible. Many sophisticated computer viruses have long had the ability to rewrite themselves as they spread.

This has actually been a thing in computer science for a while.
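A toy sketch of the self-editing idea (entirely my own construction, nothing like a real virus): a "program" kept as a source string that emits an edited copy of its own source and runs it.

```python
# Each "generation" of the program is a filled-in copy of TEMPLATE.
SOURCE = '''
GENERATION = {gen}

def report():
    return "generation " + str(GENERATION)

def next_version():
    # the program re-emits its own template with the counter bumped
    return TEMPLATE.format(gen=GENERATION + 1)
'''

TEMPLATE = SOURCE  # keep the unfilled template around so copies can edit it

def run(source):
    # execute one generation of the program in a fresh namespace
    namespace = {"TEMPLATE": TEMPLATE}
    exec(source, namespace)
    return namespace

v1 = run(SOURCE.format(gen=1))     # generation 1
v2 = run(v1["next_version"]())     # generation 2, an edited copy of v1
```

Each generation regenerates its own source with one edit applied; polymorphic viruses do something similar, mutating their code between infections.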


----------



## Marvin (Apr 11, 2015)

AnOminous said:


> The fact is, though, even the parlor trick version of the test has defeated all challengers.  There isn't even an "AI" out there that can beat that.  (Turing over-optimistically predicted we'd already have AIs that would beat the five minute test.)


Cleverbot beat a 4-minute turing test. One minute less, but it's very promising and I don't see any reason why a later version of it couldn't solve a five minute test.

I consider the turing test practically solved.


AnOminous said:


> The straight formulation of it as basically a parlor trick doesn't really capture the concept.  If you routinely could beat this, you would obviously want to adopt a more robust test.  This probably wouldn't be a formulaic test, but a rigorous examination of the purported intelligence to tease out whether or not actual thought was being used to return a response.


Eh, I disagree. The concept, to me, is that ultimately we don't have a good definition of "thought". All we look for is evidence of thought and make an educated guess if the source is mechanical or perhaps something smarter.

The test is aimed at average interrogators and the goal is to determine if they're human or not. The bar is set really low, but I don't see that as a mistake. That's why the turing test is so interesting. The lesson is: if you're trying to con people, you don't need to con the smartest people. You just need to con the average person. And that's a lot easier than you'd think. That's what the turing test shows us.

Of course, like I've mentioned in this thread, I don't really believe sci-fi AI is possible, so the sci-fi angle to the turing test doesn't mean anything to me.


----------



## Cum Crime (Apr 11, 2015)

AnOminous said:


> I dispute that it is _an_ artificial intelligence as formally described by Turing, as having certain specific capabilities.  It's certainly an application of the study of artificial intelligence, and does not remotely approach the intelligence of a dull human or even a relatively intelligent animal like a dog.
> 
> What "thought" is is an interesting philosophical debate, but it isn't purely academic.  "Thought" has objective results, like for instance, being able to carry on an intelligent conversation, engage in self-directed,  novel behavior in response to changing circumstances.  Something like these electronic personal assistants are good for very specialized tasks related to something very limited.
> 
> ...



Might be semantics here, but in a professional sense personal assistants are considered to be within the field of AI, and they're a good representation of the practical applications of where we're at with modern AI research. Even voice recognition and autocomplete programs fall under artificial intelligence because they use a decision-making process to make predictions. There's kind of a separation between the philosophy and the research in the AI field, because we're not really at a point where we need to worry about a machine having true intelligence. I get that this thread is mostly about the philosophy, but OP asked if we'd "gone too far" and pointed to Cleverbot as an example, so I wanted to give a better picture of the current state of AI.

Although there's less mystique in what the industry considers AI today, it's still fascinating. I mean, we have autonomous cars, goddamn.
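The autocomplete case is a nice concrete instance of "a decision-making process to make predictions." A minimal sketch (my illustration, not any real product): a bigram model that suggests the word most often seen after the current one.

```python
from collections import Counter, defaultdict

def build_model(corpus):
    # count, for each word, which words follow it and how often
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word):
    counts = model.get(word.lower())
    # most_common(1) gives the single most frequent follower, if any
    return counts.most_common(1)[0][0] if counts else None

model = build_model("the cat sat on the mat the cat ran")
```

No understanding anywhere, just counting, yet it makes a prediction that is right more often than chance, which is all "AI" means in this industry sense.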


----------



## AnOminous (Apr 11, 2015)

Marvin said:


> Cleverbot beat a 4-minute turing test. One minute less, but it's very promising and I don't see any reason why a later version of it couldn't solve a five minute test.



I'm skeptical of the claim.  Not that such a thing occurred, but the details and the pool of subjects they drew on.  Usually, the judges in something like this are knowledgeable of the AI field and conversation, not just random people.

Apparently, also, this is a different Cleverbot than the one online, which isn't remotely convincing and struck me as almost ELIZA-level.  Rarely have its responses even been remotely germane to what I said.


----------



## Marvin (Apr 11, 2015)

AnOminous said:


> I'm skeptical of the claim.  Not that such a thing occurred, but the details and the pool of subjects they drew on.  Usually, the judges in something like this are knowledgeable of the AI field and conversation, not just random people.


Ah, but that's more in line with the test: an average interrogator. Not a superior one, an average one.


----------



## ZehnBoat (Apr 13, 2015)

This might be better in its own thread, but I was thinking:
how can I know someone else has intelligence?
Like, I could ask them a math question, but it's like the Chinese Room problem:
how can I know they actually know the answer and aren't just mimicking it?
I understand math isn't the best example, since for some questions we just know the answer and don't need to do the math.
But even if I ask for proof, the work could just be mimicked too.
Now I'm in a tizzy. For all I know, everyone here is just a robot, and worse yet, you each know you have intelligence, but I don't know how to prove to you that I have intelligence.
For all you know, I just happen to have a reply to give to every possible sentence.


----------



## kuniqs (Apr 13, 2015)

I'm sure we will find out how to increase human intelligence beyond today's limits before we'll come up with a working AI.


----------



## Holdek (Apr 13, 2015)

kuniqs said:


> I'm sure we will find out how to increase human intelligence beyond today's limits before we'll come up with a working AI.


What do you mean by that?


----------



## kuniqs (Apr 13, 2015)

That we'll figure out the genetics of intelligence in the human genome and start pumping out in-vitro geniuses. Or create a drug that stimulates neuron growth in adults. This would essentially be AI, but organic.

The von Neumann model that current computers are designed from is about as far from the brain as possible. Unless new hardware designed to emulate brain structure emerges, I'd guess there are at least 50 years before machines will think faster than us.


----------



## Big Meaty Claws (Jul 1, 2018)

What are your overall thoughts on artificial intelligence and the current climate of the rise of more powerful and useful robots, web AI, and the like?

Do you think making AI smarter as the years go by is dangerous?

Do you think there will be a point at which we reach a movie level of smart AI, a point where an AI is considered sentient?

Should AI, if they reach that point, be allowed to be sentient?

Just wanted to know what people thought about this subject, since everyone in the media is constantly talking about sentient robots and androids.


----------



## Iwasamwillbe (Jul 4, 2018)

Whether genuinely sentient, self-aware, conscious, and/or sapient artificial intelligence is even possible depends on the philosophy of mind that the universe runs on. If, say, the universe runs on a strict mind-body dualism, where the mind is fundamentally nonphysical and fundamentally outside the physical body, then not even the most realistically thinking AI is truly sentient. This doesn't account for philosophies of mind that could allow AIs to "grow" or develop, in some way, a nonphysical mind.

Simply assuming that we can build a truly sentient AI, we will soon run into the problem of containing it and keeping it under control. For a biological entity, evolution is an arduous process typically taking at least hundreds of thousands of years to unfold. To a conscious machine, evolution is simply an act of updating, upgrading, and/or replacing its software. This is not even considering a computer's ability to copy or replicate its own software. So now we have self-evolving, self-replicating AIs, which can enhance their capabilities massively in minutes, if not seconds. And their evolution rate will only increase exponentially, exceeding the capabilities of modern computers the same way humans surpass ants.

We need to be able to find a way to keep these AIs "friendly" (i.e. compatible with human interests), and stop them from becoming "unfriendly" (i.e. incompatible with human interests). Otherwise, AIs may come to destroy everything we love and care about, potentially even by accident. For example, an "unfriendly" AI may decide to use Earth's atoms for a purpose that is not conducive to the continued existence of human life. Keep in mind that the way that we treat lesser life forms, such as ants, is not encouraging. If we wanted to build a dam, and an anthill was in the way, we would build the dam anyway. This is not out of any particular animosity towards ants, but because we view our problems to simply be more important. I believe that the interactions between "unfriendly" AIs and humans would be of a similar nature.


----------



## ZeCommissar (Jul 5, 2018)

I'm not sure if it has been discussed earlier in the thread, but if AI did become sapient and self-learning, would it have to mature and learn from its life experiences like a human being does, or is it fine if we just encode a bunch of info in its "brain"?

Who's to say the first self aware AI doesn't start off with the intelligence of a 5 year old?


----------



## Marvin (Jul 5, 2018)

Big Meaty Claws said:


> What are your overall thoughts on artificial intelligence and the current climate of increasingly powerful and useful robots, web AI, and the like?





Big Meaty Claws said:


> Do you think there will be a point at which we reach a movie level of smart AI, and a point where an AI is considered sentient?


No.

When people discuss AI (even industry executives), they really have no clue how the algorithms themselves work. Really, I wish Elon Musk et al would start programming again. Because when you implement these algorithms, and when you know how CPUs work on a low level, you realize how laughable the idea of machine consciousness is.

Elon Musk (and anyone worried about smart AI) used to work down in the trenches at some point. They wrote code and presumably they spent countless hours trying to get a particular algorithm to work quickly, or to scale better, or to use less memory, etc. They had to deal with huge technological challenges like managing latency and cutting costs and things like that.

But he hasn't been in the trenches for a long time, and it's weird to me how disconnected he is.

So, if I could sum it up, two big issues with AI, to me, are:

*Moore's law is ending*



Spoiler: Moores law



When the first CPUs were designed, I don't think many people really believed they could do anything useful, so no one tried that hard to make them efficient. They were a theoretical exercise, a mathematical trick that surely couldn't work in real life.

But they did work. So then people realized they could fit a shitload more transistors onto a chip (which means more computation per clock cycle). They weren't even trying at first, but once they started trying, the race was on.

Gordon Moore predicted that the number of transistors on a single chip would double roughly every two years (popularly quoted as every 18 months).

This was called Moore's law. It was neat, because for a long time, you could pull all kinds of fantastical algorithms that couldn't possibly work in production, and then just wait a few years, and bam, computer chips would be fast enough to run them in real time.
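The compounding is easy to sketch. As a back-of-the-envelope illustration (the two-year doubling period and the Intel 4004's roughly 2,300 transistors in 1971 are the standard textbook figures; the function name is invented for this example):

```python
# Moore's law as compounding: transistor counts double every ~2 years.
def projected_transistors(start_count, start_year, year, doubling_period=2.0):
    """Project a transistor count forward under Moore's-law doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Intel's 4004 (1971) had roughly 2,300 transistors. Fifty years of
# doubling turns thousands into tens of billions, which is why
# "just wait a few years" worked for so long.
count_2021 = projected_transistors(2_300, 1971, 2021)
```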

But this wasn't a physical law. It was just that we were so goddamned inefficient to start with, that there was a shitload of room to work with.

However, we're hitting limits. At this point, we're developing transistors that are only a few atoms across. You can't get much smaller than that. We're running into physics itself.

Now, engineers are clever as shit, so I do believe we'll keep on pulling out new tricks for the next couple decades. And then we've got quantum computing which will stretch out Moore's law (not quite Moore's law itself, but the general idea of increasing CPU speeds) for a bit.

But we've got an inevitable wall coming. We're pretty much running on fumes now.



*Silicon is very inefficient for the computation involved*



Spoiler: neural networks



I think the most accurate computer model of natural, intelligent animal brains is the neural network.

With a neural network, you simulate "neurons" that connect to other neurons. A signal can pass through the whole graph of neurons, to light up other neurons, and ultimately spit out an answer or a result. There can be cycles in the graph (that is, you can have basically the neural version of a traffic circle) where a signal can filter back through a neuron several times.
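A toy sketch of such a cycle: one neuron whose output feeds back into its own input on the next step (the weights, step count, and function name here are invented for illustration, not from any library):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_recurrent(external_input, self_weight=0.5, steps=5):
    """One neuron with a self-loop: its last output re-enters as input."""
    state = 0.0
    history = []
    for _ in range(steps):
        # the signal is the outside input plus the neuron's own echo
        state = sigmoid(external_input + self_weight * state)
        history.append(state)
    return history

states = run_recurrent(1.0)  # the state settles as the signal recirculates
```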

To teach a neural network to solve a particular problem, you have a big list of inputs and the desired outputs. And you run them through the neural network, and tweak the weights of each neuron, until it starts to give you correct outputs.

This works decently enough for a lot of naturalistic processes like identifying images.
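That run-and-tweak loop can be sketched in a few lines. This is a deliberately tiny version, a single sigmoid neuron learning the OR function by gradient descent (the learning rate, seed, and epoch count are arbitrary illustrative choices):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "A big list of inputs and the desired outputs": here, the OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # one weight per input
b = 0.0                                        # bias
lr = 1.0                                       # learning rate

# Run the examples through and tweak the weights until the outputs
# come out right: plain gradient descent on squared error.
for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
        grad = (out - target) * out * (1 - out)  # d(error)/d(pre-activation)
        w = [wi - lr * grad * xi for wi, xi in zip(w, inputs)]
        b -= lr * grad

def predict(inputs):
    return round(sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b))
```

Even this toy run takes 20,000 weight updates to learn a two-input function, which is the slowness the notes below are getting at.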

Some things to note:

- Training the neural network takes a lot of time. It's a slow-moving process, definitely not suitable for realtime thinking. Every heartbeat that you experience might take 50 years for a neural network simulating a brain to reproduce.
- You only train it once (or perhaps slowly over a period of time). Human brains train constantly, 24/7.




Fundamentally, I think that silicon is a bad medium for consciousness. It's useful for building a solution for an easily defined, rigid problem. But it doesn't grow or change.

I think consciousness is a perpetual feedback loop of growing and changing, directly tied to the fact that our brains are made up of mushy, living cells.

Personally, as someone in tech, I hope to never be as disconnected from the practicalities as Musk et al are. Bill Gates lasted longer, because for a long time, he still kept poking his nose into the low level engineering issues. But eventually he retired to solve the 3rd world's problems.


Iwasamwillbe said:


> Whether genuinely sentient, self-aware, conscious, and/or sapient artificial intelligence is even possible depends on the philosophy of mind that the universe runs on. If, say, the universe runs on a strict mind-body dualism, where the mind is fundamentally nonphysical and fundamentally outside the physical body, then not even the most realistically thinking AI is truly sentient.


I believe that neurons describe sentience pretty well. When you've got a network of neurons, with input from the outside world, and a constant feedback loop of reacting to that input, I think you develop consciousness. Basically like running a current through a coil of wire generates a magnetic field.


Iwasamwillbe said:


> Simply assuming that we can build a truly sentient AI, we will soon run into the problem of containing it and keeping it under control. For a biological entity, evolution is an arduous process typically taking at least hundreds of thousands of years to unfold. To a conscious machine, evolution is simply an act of updating, upgrading, and/or replacing its software. This is not even considering a computer's ability to copy or replicate its own software.


Dying would be the default. That's how it is for almost every species that has ever existed.


Iwasamwillbe said:


> So now we have self-evolving, self-replicating AIs, who can enhance their capabilities massively in minutes, if not seconds. And their evolution rate will only increase exponentially, exceeding the capabilities of modern computers the same way humans surpass ants.


Computers are retardedly slow.


----------

