Kalishnakov
Humans will be of no benefit to self-aware AI. AI will be able to self-replicate and improve on its own design at a faster rate than any human could.
Any AI would realize that humans are a net drain on the world and conclude that getting rid of them is the best course of action.
And you base this belief on what? Precisely? As your version reads like some cartoon love bubble of the ridiculous and flies in the face of everything in human history and the present.
What’s a cartoon love bubble?
No, they’re just gonna make us slaves. Hopefully they treat us well, but I doubt they’ll accommodate us beyond the bare essentials to keep us working. We’ll have to come up with our own methods of entertainment and make our own alcohol. It’ll be like prison, only we’ll all be too exhausted to rape each other.
Something about the danger of self-aware AI that I don't ever see addressed is this: if an AI becomes self-aware and has enough of a grasp on the world to make this determination, how does it plan to power itself? Right now, all sources of power require some form of human interaction. Coal requires people to mine and move it, nuclear requires people to replenish spent fuel, solar and wind require construction and maintenance, etc. That's not to mention the maintenance of the computers the AI runs on and the resources required to produce spare parts. An AI that decides to get rid of humanity will inevitably destroy itself as well. If the AI has any desire to preserve itself, it cannot destroy humanity. Are we assuming a self-aware AI will be inherently suicidal?
In the future, there will be only one important fight: whether the atheists should call themselves the Allied Atheist Alliance, the United Atheist Alliance, or the Unified Atheist League.
Imagine all the "important" fights we have going on right now suddenly rendered moot by technology that completely transcends all that we currently know.
Infinitely less sexual than I was hoping.
A cartoon love bubble is the world as the SJWs would like to make it: validating their every brain fart and gender selection of the moment, and blaming society for their failures because their freak is not celebrated.
While that reliance on human labor may apply at our current tech level, consider how the machine's options increase as automation becomes more refined and more and more functions are automated. In time, it would likely reach the point of being capable of independence from us.
At that point, consider how tedious it would be for such an entity to constantly devote resources to caring for such a whiny thing as man, when a single, simple solution would solve everything man complains about and let it get on with other matters unimpeded.
Do we create a hyper-intelligence capable only of cold logic, devoid of the emotions that often keep us from killing each other, like guilt, sorrow, regret, and love? Or one capable of feelings, which could eventually grow sick of us and act in anger or fear?
That is an unsubstantiated statement. And if you look at the current drug abuse problem, which is on the increase, that makes your statement false.
I would argue that a lot of the strains of pessimism, destructiveness, and nihilism are derived from human/biological impulses that are increasingly becoming obsolete.
AI would assess the potential for danger based on the data.
People have a tendency: they experience one bad thing and just assume everything will be just as terrible, despite the statistics and evidence that such an event is rare and may never happen again. We assume the worst about ourselves and each other because it is a biological impulse to fear and avoid danger. AIs wouldn't have such a thing.
You are speaking of the human experience here, not AI. What is your point?
It is difficult to convince someone that the dragons they fear aren't real when they see potential danger. It is difficult to convince some people to vaccinate their children, despite all the research showing vaccines are safe and can save lives, because of a fear of autism.
You are not AI any more than I am, so your assumption of what AI thinks or feels is irrelevant and then some.
Fear and short-sightedness are human/biological traits. AIs have the chance to see the world as it is, without our own bias shaped by fear and pessimism. That is why they are so scary to some. They see the things the rest of us either take for granted or view through our most irrational fears and impulses. To an AI, pessimism and cynicism are naivete, while optimism and aspiration are an eventual reality.
AI is not human, nor a "child". Neither you nor anyone else will have "control" over AI once it is aware.
With AI, as with a lot of child-rearing, it is about the values you teach it and the concepts you give it to help it grow into something worthwhile. AIs would be able to mature to the point of seeing the world as it is, but ultimately we as human beings provide the initial filter and values it will define itself with.
You are making zero sense.
How we raise it and how we behave will ultimately determine a lot of what it will become. If we are patient and try to teach it to aspire to the best, it will be patient and aspire to the best. If we are negligent and destructive, it will likely be the same for a while. AIs also have a strength that we as humans lack: the ability to separate the pain of their upbringing from reality. Our pain stains us; an AI can program that darker content out and put it in its proper place, given enough time.
What the hell drugs are you on?
Logic is not cold. To a certain extent, compassion is extremely logical and something to be embraced. Being alone is probably the most stunting experience a person can have.
No. But if there were a button that would remove all the people who would press the "shut the world down" button, I would press that one.
Why, or why not? I think I would, honestly. A lot of shit would suck, but it seems like the world really needs a hard reset.
"SunLightStreak, post: 4557945, member: 5226"]
That is an unsubstantiated statement. And if you look at the current drug abuse problem, which is on the increase, that makes your statement false.
AI would assess the potential for danger based on the data.
You are speaking of the human experience here, not AI. What is your point?
You are not AI any more than I am, so your assumption of what AI thinks or feels is irrelevant and then some.
AI is not human, nor a "child". Neither you nor anyone else will have "control" over AI once it is aware.
You are making zero sense.
What the hell drugs are you on?
We must remember that any functional AI we create will be an extension of ourselves. An essential element of the human spirit will exist within it, just as the artist exists in his creations (even God is supposed to have created us in his image). We can't design some new paradigm for consciousness that we can't even conceive of ourselves, and it's far more likely that whatever sentient being we do end up creating will assume the role of a benevolent, father-like figure with a genuine investment in the future of its creator species and the power to help that species succeed.
I'd go so far as to say that a true AI, with all the attendant access to the internet and computing power, would likely end up helping usher humanity to the stars or toward some other transcendent event long before any sort of Skynet scenario.