On the subject of AI.

Have we gone too far with AI?


Orange Catholic Bible said:
Thou shalt not make a machine in the likeness of a human mind.

Really though, I don't know if a machine can be made to be self-aware. I think it may depend on whether or not the soul exists.

If there's no such thing as a soul and our minds are purely the result of brain activity, then I believe it may be possible - our brains are composed of physical components interacting with each other in physical ways (neurons, neurotransmitters, etc), after all.

If, however, there is a soul that's ultimately the source of awareness (even if we're only aware of our brain functions while in mortal life), then either machines will never become aware, or they can gain a soul, and therefore awareness, if built right.

Also, I remember seeing a magazine article on AI a while ago in which a group of experts gave their views on creating intelligent artificial life. I distinctly remember one of them saying that "we don't need to create a new oppressed minority," or something along those lines.
 
[image: Futurama robots]
 
This is a thought that's plagued my mind for some years now as well. It's something that fascinates and mortifies me. What bugs me is that I've never been able to deduce what the answer is. People are people. People are sentient. But what if a person creates something that isn't a person but is sentient regardless? I can scarcely begin to imagine the sheer existential dread of being able to comprehend that you're not a human being.
Why? I'm not a rabbit either. Why would it matter which species I am?

"person" is a human construct, a model within our minds, an approximation.

The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.
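The runaway loop described here can be sketched as a toy back-of-the-envelope calculation. Everything in it is an arbitrary assumption (the "capability" scalar, the 10% gain per generation, the target); it only illustrates how modest per-step improvements compound:

```python
# Toy model of the recursive self-improvement loop: each AI builds a
# successor that is slightly more capable than itself. "capability" is a
# made-up scalar and the 10% per-generation gain is an arbitrary
# assumption chosen only to show compounding growth.
def generations_to_exceed(start=1.0, gain=1.10, target=1000.0):
    """Count how many build-a-slightly-smarter-successor steps it takes
    for capability to pass `target`, multiplying by `gain` each step."""
    capability = start
    steps = 0
    while capability < target:
        capability *= gain
        steps += 1
    return steps

print(generations_to_exceed())  # a 10% gain per step crosses 1000x in 73 steps
```

The point of the sketch is just that even a "slightly smarter" successor each round gives exponential, not linear, growth.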
 
Why? I'm not a rabbit either. Why would it matter which species I am?

"person" is a human construct, a model within our minds, an approximation.

The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.


Any AI is smarter than you already, @mooooo.
 
Why? I'm not a rabbit either. Why would it matter which species I am?

"person" is a human construct, a model within our minds, an approximation.

The real issue is when we manage to make an AI that's slightly smarter than us. It can then build an AI slightly smarter than it, and so on. Very quickly we'd get AI much much smarter than us.
Smart is a phenomenologically impoverished term in many ways. Could you unpack your definition a bit?
 
Smart is a phenomenologically impoverished term in many ways. Could you unpack your definition a bit?

"Smart" is a concept with blurry edges, but it becomes more definite when the differences are exaggerated.
It's hard to define smart sharply enough to definitely say one fairly normal human is smarter than another normal human, but fairly clear that a worm is less smart than a human.

The eventual AI would be a similar difference, probably.

But the relevant "smart" traits would be pattern recognition and complexity, so a "smarter" AI would notice patterns we don't, and notice them quicker; it would be faster at drawing logical conclusions, capable of comprehending more complex logical structures, etc. It's hard to quantify these exactly, but it would be apparent on a more intuitive basis. We could quantify it reasonably well with an IQ test.

But if an AI was truly self-aware and more mentally developed than us, we'd also have to account for creativity. I believe creativity isn't necessarily something special or separate from intellect, but just another type of it that complements traditional intellect. The AI would probably be creative too.
 
They get along pretty well for the most part. They might fool around over a few things, but that's about it.
Please tell us more:
https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10

Also, to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.

We've reached an era where industry and technology together have been able to obsolete almost all of the fundamental problems of the natural world (heat, food, water, travel, and recently information, thanks to the internet), except our own innate stupidity, quirks, and limitations. If we could create something with the potential, depending on its application, to solve those problems or improve our own shortcomings, then why wouldn't we?
 
Ross Scott said:
AIs will never turn against us, because they are too narrow-minded and lack emotion. The ideas/tropes of AIs turning against their masters and ruling the world are actually a wishful dream of humans who want the robots to do it so that humans don't have to. The case is redundant anyway, as programming an AI to rule the world would take just as much work as doing it yourself.
 
Please tell us more:
https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10

Also, to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.

We've reached an era where industry and technology together have been able to obsolete almost all of the fundamental problems of the natural world (heat, food, water, travel, and recently information, thanks to the internet), except our own innate stupidity, quirks, and limitations. If we could create something with the potential, depending on its application, to solve those problems or improve our own shortcomings, then why wouldn't we?

But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?
 
But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?
Human nature will drive people to pursue challenge even when their immediate desires are fulfilled. People loathe free time because they fundamentally crave challenge and experiences; that's where boredom comes from. Having your immediate needs met doesn't bring an end to desire, because deep down, any self-actualized human being will always have an unquenchable desire to accomplish the impossible just for the sake of doing it, and for the experience of trying to attain that goal. The goal doesn't even need to be met to bring us an emotional reward; the striving (and the pain that comes from it) is the reason why we do it. The Pyramids were not built because people were hungry or cold. They were built because in every one of us (even Chris) there exists a hidden superpower called "ambition," which will continue to exist even in a vacuum, because it's deeper in our nature than anything else. It's the hidden divine potential that separates us from apes, and it should probably be thought of as a qualifying factor for sentience.
 
But it kinda calls into question why we're doing all this. So far, technology has been driven by the need to fulfill biological urges in some way. If we get to the point where we remove our own urges, then what is our motivation? What's our purpose in existing?
I've not read it yet, but I think the sci-fi series "The Culture" by Iain Banks touches on that; it looks like an interesting read:
Wikipedia said:
The Culture is characterized as being a post-material-scarcity society, having overcome most physical constraints on life and being an egalitarian, individualistic, stable society without the use of any form of force or compulsion, except where necessary to protect others.

Minds, extremely powerful artificial intelligences, have an important role. They administer this abundance for the benefit of all. As one commentator has said:

In vesting all power in his individualistic, sometime eccentric, but always benign, AI Minds, Banks knew what he was doing; this is the only way a liberal anarchy could be achieved, by taking what is best in humans and placing it beyond corruption, which means out of human control. The danger involved in this imaginative step, though, is clear; one of the problems with the Culture novels as novels is that the central characters, the Minds, are too powerful and, to put it bluntly, too good.

The novels of the Culture cycle, therefore, mostly deal with people at the fringes of the Culture: diplomats, spies, or mercenaries; those who interact with other civilizations, and who do the Culture's dirty work in moving those societies closer to the Culture ideal, sometimes by force.
 
Please tell us more:
https://kiwifarms.net/threads/tulpa-info-pony-waifus-and-do-it-yourself-schizophrenia.3428/page-10

Also, to contribute, forgive me for the Michio Kaku futurist daydream I'm about to spout, but we haven't gone NEARLY far enough with AIs... if you ask me, it's the most promising field of research (in the long, loooooong term anyway), since its applications can remedy and improve something that no other technology can: human behavior and human life. If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.

We've reached an era where industry and technology together have been able to obsolete almost all of the fundamental problems of the natural world (heat, food, water, travel, and recently information, thanks to the internet), except our own innate stupidity, quirks, and limitations. If we could create something with the potential, depending on its application, to solve those problems or improve our own shortcomings, then why wouldn't we?

Indeed, one of the major research areas of artificial intelligence is integrated intelligence, in which AI is used to enhance human knowledge. It's a huge area of research, and running human knowledge through computers to feed it back to us in an improved form is just the beginning.

(This can sometimes backfire in hilarious ways. Someone on the Watson research team fed Urban Dictionary into its knowledge base, so it started responding to questions with crude insults. They had to code out the dirty words before the Jeopardy appearance. Part of AI work involves limiting a computer's access to the less-savory parts of humanity.)
 
Robocop is non-dualism incarnate. Man and machine working as one. Perfect moral code (3 directives), a good man in essence, etc. He's American Jesus, the creator said it himself.
 