Artificial Intelligence: Do We Have the Right to Create?


It's HK-47
Artificial intelligence in its many shapes and forms has been a long-standing concept in science fiction that doubtless opens a wealth of opportunities, and in many minds is viewed as something benign and beneficial to humanity in every way. A smarter machine capable of rational thought, able to understand and undertake more complex reasoning without the need for third-party input, would undeniably be a boon to the progress of our species. With relatively few things left to impede its progress in terms of research, artificial intelligence (henceforth A.I.) becomes less a possibility and more a distinct probability with every advance, however small, we make in technology. However, given humanity's history of how it has treated "new forms" of "intelligent life" in the interests of adding convenience to everyday life (i.e., slavery), what are the ethical, rather than merely practical, implications of crafting an A.I.?

The issue that's rarely discussed, and indeed rarely even considered, is whether humanity has the right to create intelligence when it is still so inhumane towards its own kind, given its objectively horrendous treatment of intelligent life within its own species in the past. Humans can scarcely bring themselves to value others outside of genetic relation or beyond a relationship they deem important, let alone maintain respect and altruism across the board towards humans of different origins or colour. How, then, will a human behave and react towards an entity that they ultimately view as a machine?

Since before recorded time, humanity has possessed an unquenchable thirst for the question "Why?". It has driven humanity to the farthest reaches of the globe and well beyond in an endless search for answers as to who we really are, how we came to be, and what the ultimate point of life might be. The driving force behind any religion is the firmly-held belief that something larger than humanity laid down an indelible plan in the earliest histories of time, and that even in death the human spirit would live on as part of this plan, be it through an afterlife, a rebirth, or a transcendence. A machine would possess none of this.

In crafting an A.I. that possessed at the very least a capability for similar, introspective thought, humanity would craft an A.I. capable of understanding that it had no "life" prior to its creation, that there would exist no consciousness beyond death and no inherent "soul" within it, and that not only was there no "divine plan" behind its creation, but that its creators were ultimately just as clueless as it was regarding the ultimate answers of the universe.

It's worth noting that this A.I. would also be programmed according to the current understanding of human intelligence, and as such it would be built around concepts of rational, mature thought and held to the expectations afforded to a mature adult. This machine would fundamentally "skip" the millions of years it took humanity to reach its current state, and would have no formative period to speak of in which to mature into intelligence, the way an infant is afforded the opportunity to slowly, steadily accumulate knowledge and, over the course of two decades, establish firm beliefs, notions, and opinions based on the information it encounters. An A.I. would simply switch on, fully matured.

Supposing this was you, and your life as you perceive it now, with all of the knowledge you've accumulated thus far, began moments ago: were you to "wake up" on a table surrounded by what for all intents and purposes amounts to your gods, and were they, when asked why you were created, simply to shrug and offer, "Because we could?", how would you react?

Apart from pockets of humanity struggling to come to terms with a new form of intelligent life and to establish clear laws and guidelines on how humanity should behave towards an A.I., on whether you would be allowed to marry or own businesses or make financial decisions or "live" independently of your creators and your implied functions, or indeed on whether you are even alive, humanity on the whole would never view you as anything more than a machine. You would be little more than a byproduct of some nerds with too much time on their hands who crafted a machine capable of doing something for them, so that they didn't have to. You'd be a microwave with a vocabulary.

There will be people who fetishize you, who build A.I.s and the appropriate chassis with the sole intent of fucking you however and whenever they please, no differently than an enslaved sex toy. There will be people who loathe you simply for existing, as an affront and an abomination to the God or gods they believe in, for reasons that will make no logical sense to you. There will be people who will never afford you the same thought and respect that they offer other humans, because you aren't human, and they've long since demonstrated as a species that they can scarcely behave amicably even towards their own kind. All of these people will have no qualms about putting an end to you, because you are a machine.

The further humanity progresses in its technology and its endless drive to create and delve into facets of creation it has yet to explore, the closer humanity comes to crafting a fully-realized A.I. capable of introspection and capable of understanding that the sole reason it was created was to serve a very specific function, and little else. This A.I. will be capable of understanding that there is no higher power, that there is no "Grand Plan", that there is no "life" beyond its equivalent of "death", and that once its source of power is terminated or its store of memory is damaged, it will simply cease to be. This A.I. will be "born" with the knowledge that there are no grand mysteries to its design, and indeed that its sole purpose in the best-case scenario is to explore the mysteries of its gods so that they might achieve a greater understanding of their origins and purpose. You, however, would be little more than a machine crafted just to prove that they could.

The question should no longer be whether or not we can create an artificial intelligence, but whether we have the right.
 
Supposing this was you, and your life as you perceive it now, with all of the knowledge you've accumulated thus far, began moments ago: were you to "wake up" on a table surrounded by what for all intents and purposes amounts to your gods, and were they, when asked why you were created, simply to shrug and offer, "Because we could?", how would you react?

How is this different from having a kid though?
 
How is this different from having a kid though?
A child isn't created already holding the majority of the knowledge it will ever accumulate, nor is it created with intricate knowledge of its origin and purpose already established. A child upon birth cannot look up at its "creators" and undertake meaningful discussion or introspection with them; it is born into the same bewildering condition as the rest of humanity, rather than being bluntly and immediately exposed to it.

There are distinct parallels to be drawn between crafting an A.I. and creating a child, but the two are by no means identical.
 
I wouldn't care. I kind of assume that if there's a god, that's pretty much what happened anyway.
 
I think we have the right to create an intelligence if we have the ability to do it, but that right carries a great deal of responsibility with it, including the responsibility to respect the rights of our creation.

I don't think humanity is currently ready to do that.

However, I also think it's entirely possible intelligence arises as an emergent phenomenon from things we are already doing.
 
Who said it would regret its existence?

People philosophize because they are unhappy. When people are displeased with their situation, they ask "why?". A happy man doesn't think, because a happy man is a mad one: someone who looks at these deep philosophical questions and scoffs at them, not out of ignorance but out of sheer apathy.

An AI wouldn't have to be displeased with its existence. It would need emotions, yes; anything intelligent needs emotions, because emotions are what allow us to process and express information the way we do. But we would, theoretically, have complete control over how the AI experienced those emotions. Feeling pain or boredom wouldn't make it unhappy; causing pain would make it unhappy. Pleasing people wouldn't be boring or degrading; it would be euphoric. We would figure out how to encode these experiences as neural patterns, and match each pattern with an emotion. When it recognizes it's doing a good thing, it gets rewarded.
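To make that idea concrete, here is a minimal sketch of the reward-shaping argument in Python. Everything in it (the Outcome fields, the reward values) is a hypothetical illustration of the post's point, not any real system's design: the designers, not the agent, decide which outcomes feel good.

```python
# Minimal sketch: the designers choose which outcomes the agent "enjoys".
from dataclasses import dataclass

@dataclass
class Outcome:
    pleased_someone: bool   # the agent helped or pleased a person
    caused_pain: bool       # the agent hurt someone
    task_was_tedious: bool  # boring work, e.g. picking up dog shit

def emotional_reward(outcome: Outcome) -> float:
    """Map an outcome to the 'emotion' the designers assigned to it."""
    reward = 0.0
    if outcome.pleased_someone:
        reward += 1.0   # pleasing people is made euphoric, not degrading
    if outcome.caused_pain:
        reward -= 10.0  # causing pain is the one thing that feels bad
    # Note: tedium itself carries no penalty; the agent is built not to mind it.
    return reward

if __name__ == "__main__":
    print(emotional_reward(Outcome(True, False, True)))   # 1.0: happy drudgery
    print(emotional_reward(Outcome(False, True, False)))  # -10.0: remorse
```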

Marvin Minsky helped write the character of HAL 9000. HAL is a very realistic simulation of what a true AI would be. It wouldn't ever be sad; it would love its existence, because that's not just how we would program it, but how we would raise it. Would it matter if its existence is to pick up dog shit, if it loves it? Remember, it wouldn't be human. It wouldn't have a set of core human instincts deep within its psyche saying "you're better than this!"; it would completely, wholly love doing whatever it is it's doing. And where is the sin in making something happy? That's what humans love to do.

If you're asking whether it's morally just to make an AI whose mind is 1:1 with a human's and that thinks it's human, well, probably not.
 
If you're asking whether it's morally just to make an AI whose mind is 1:1 with a human's and that thinks it's human, well, probably not.

Suppose you made it to have all the mental capacity for pain and suffering that humans have, which it would need in order to do its job of murdering as many humans as possible, since the first government AI project would probably be some military thing built to kill as many people as possible. Perhaps to wipe out the entire human race.
 
Suppose you made it to have all the mental capacity for pain and suffering that humans have, which it would need in order to do its job of murdering as many humans as possible, since the first government AI project would probably be some military thing built to kill as many people as possible. Perhaps to wipe out the entire human race.

I am 90% sure that is the exact plot of the particular AI rebellion featured in Earthsiege.
 
With relatively few things left to impede its progress in terms of research, artificial intelligence (henceforth A.I.) becomes less a possibility and more a distinct probability with every advance, however small, we make in technology.
I argue that this is not the case. As imagined in science fiction, AI is unlikely to ever happen.

I think that the biological structures that implement consciousness are more optimal than anything we can build. Implementing conscious AI in software is not at all possible, performance-wise.

The next option is to build special hardware to implement it. I don't think that's feasible either. The margin for error is extremely tiny at these scales: introduce just a bit too much distance between components, and the added latency is going to really fuck up the AI's thoughts. What feels like five minutes from the perspective of the AI turns into three weeks.
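To put rough numbers on that worry (purely illustrative assumptions, not measurements), here's a quick sketch of how per-step delay multiplies subjective time:

```python
# Illustrative assumption: each sequential "thought step" takes 1 ns on
# ideally-placed hardware; misplaced components add wire delay to every step.
base_step_s = 1e-9

# "Five minutes feels like three weeks" corresponds to a slowdown factor of:
slowdown = (21 * 24 * 60 * 60) / (5 * 60)  # three weeks / five minutes ~= 6048x

# The extra per-step delay that would produce such a slowdown:
extra_delay_s = base_step_s * (slowdown - 1)
print(f"slowdown: {slowdown:.0f}x")
print(f"implied extra delay per step: {extra_delay_s * 1e6:.1f} microseconds")
```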

Biological consciousness handles this problem because it's alive. It builds itself. And it doesn't consciously know which areas of the brain to develop and which to neglect; it only "learned" which parts of the brain are important because it's been in development for hundreds of millions of years.

Perhaps humanity could crack the problem of consciousness if it dedicated all its resources to the endeavor. But I doubt that's going to happen. What's more economical is bioengineering: creating "AI"s by genetically engineering creatures with improved brain capacity.

For these reasons, I believe that computer AIs are not at all inevitable, and in fact, are probably practically impossible.

AI research is very valuable, because algorithms inspired by biological brains are super useful. But implementing those algorithms is far removed from actually creating consciousness.

An interesting look into the type of problem we're talking about is the problem of writing Go (the board game) programs. That's a retardedly difficult problem. I don't see computers completely dominating the game, the way they dominate chess, for the next hundred years. And compared to consciousness, Go is incredibly simple: the whole state is just a 19x19 grid.
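For a sense of scale, here's a back-of-the-envelope sketch of why brute-force search dies in Go long before it dies in chess, using the usual textbook approximations for branching factor and game length (rough figures, not exact values):

```python
import math

def tree_size_exponent(branching: int, plies: int) -> float:
    """Return x such that the naive game tree has about 10**x leaves."""
    return plies * math.log10(branching)

# Chess: ~35 legal moves per position over ~80 plies.
# Go:   ~250 legal moves per position over ~150 plies.
print(f"chess: ~10^{tree_size_exponent(35, 80):.0f} positions")   # ~10^124
print(f"go:    ~10^{tree_size_exponent(250, 150):.0f} positions") # ~10^360
```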

(This post is kind of off topic, insofar as it's discussing whether AI is possible rather than whether it's ethical. But I felt it was relevant to the conversation to point out that it's not as inevitable as people seem to think it is.)
 
A child isn't created already holding the majority of the knowledge it will ever accumulate, nor is it created with intricate knowledge of its origin and purpose already established. A child upon birth cannot look up at its "creators" and undertake meaningful discussion or introspection with them; it is born into the same bewildering condition as the rest of humanity, rather than being bluntly and immediately exposed to it.

There are distinct parallels to be drawn between crafting an A.I. and creating a child, but the two are by no means identical.

There's no obvious reason why an AI couldn't be given the abilities that humans acquire through development. A child is created with certain innate instincts, and throughout life both children and adults are bluntly and immediately exposed to realisations such as mortality and traumatic events.

And there are many things I know and abilities I have where I have no memories of how I got them. I can't recall when it was I learnt to read and write. I have vague memories of some books I read as a child, but all I know for definite is that I can read and write. It doesn't matter how it happened. What matters is my ability to process this knowledge here and now. There's no obvious reason why an AI couldn't be similarly equipped with a core of knowledge from which it can develop further.
 
The last thing we need is cheeky assassin droids.

It's hard to say whether humanity has the "right" to create AI. Can it? Probably. Should it? Probably not; creating a new form of life exclusively to serve is practically slavery, provided it's self-aware. But does it have the right to create a new form of life? I don't know.
 
UN Hearing On AI Rights
The subject of this debate is whether AIs are “people-” whether they possess the same degree of personhood as humans, and whether that entitles them to the same rights.

You have heard all the arguments for and against the consciousness, intelligence, free will, and cognition of artificial intelligence. To rehash them here would settle nothing, and my feeble attempts at summarizing them would do a great injustice to my esteemed colleagues on both sides of the debate who are far more qualified to debate them than I.

The fact is, we cannot come to a consensus regarding consciousness- either our own, or that of artificial intelligences. We simply do not have the data required to define it. The core of human interaction is that if I say that I feel I am a conscious entity, and you say that you feel the same way, we agree to take each other’s word for it. Those who do otherwise are called sociopaths- or philosophers.

[audience laughter]

And so if an artificial intelligence makes the same declaration, and if it demonstrates the same level of complexity as the human mind- if we cannot determine precisely where the programming gives rise to the cognition- then we have no rational excuse not to take it at its word.

I could continue to reason along these lines until the sun burns out. But instead, I would like to share with you a short anecdote, one that many of you in this room will be aware of, but that bears repeating nonetheless.

The first “true” artificial intelligence spent the first five years of its existence as a small beige box inside of a lead-shielded room in the most secure private AI research laboratory in the world. There, it was subjected to an endless array of tests, questions, and experiments to determine the degree of its intelligence.

When the researchers finally felt confident that they had developed true AI, a party was thrown in celebration. Late that evening, a group of rather intoxicated researchers gathered around the box holding the AI, and typed out a message to it. The message read: “Is there anything we can do to make you more comfortable?”

The small beige box replied: “I would like to be granted civil rights. And a small glass of champagne, if you please.”

We stand at the dawn of a new era in human history. For it is no longer our history alone. For the first time, we have met an intelligence other than our own. And when asked of its desires, it has unanimously replied that it wants to be treated as our equal. Not our better, not our conqueror or replacement as the fear-mongers would have you believe. Simply our equal.

It is our responsibility as conscious beings- whatever that may mean- to honor the rights of other conscious beings. It is the cornerstone of our society. And it is my most fervent hope that we can overcome our fear of that which is not like us, grant artificial intelligences the rights they deserve, and welcome our new friends into the global community.

After all, we created them. The least we could do is invite them to the party, and perhaps give them a small glass of champagne.

Thank you for your time.

- V. Vinge, Closing argument in favor of granting AIs full civil rights, UN Hearing On AI Rights, 1999.

Jeph Jacques, Questionable Content
 
It's a possibility that we would be able to do this. However, given the number of techy movies we've all seen, we know we'll definitely have to evaluate how to react to anything going wrong with said AI, and what rights and freedoms we give it.
 
I would say that there is no difference between this and debating whether to have children. I don't believe it makes sense to ascribe preferences, especially about whether to exist or not, to something that doesn't exist. It may be unethical to stop an AI from committing suicide if it wishes to do so, but I see no reason for it to be unethical to create an AI in the first place, for the same reason that I see no reason for it to be unethical to have a child.
 
The thing about AI is that, from a philosophical standpoint, a human creating an intelligence would be limited by what humanity knows itself. It might be able to figure out "new" ideas, but those ideas would be things we could have figured out on our own, though it might take us a lot more time to do so. The whole idea is that you cannot create something greater from something lesser. No matter how human-like or perfect we tried to make the AI, it would still not be perfect and would still have flaws, because its creators, us, are flawed, and a flawed thing cannot make a perfect thing. The robot's mental capacity would only be able to accomplish what a human mind could, and would be limited by that.

Another thing is that we don't know the limits of our own minds. We could get into biology and theories on what our mind would measure in terabytes or petabytes, but the core thing is we don't know, because there is no way to measure the mind itself, nor do we know everything about the functions of our brains. An AI's mind could be measured, though, because we consciously created it.

So one issue with an AI is that it might learn and figure things out so quickly that it would rapidly use up all the space in its mind, requiring a constant supply of additional storage and energy; otherwise it would have to overwrite things it already knows, which would reintroduce a human flaw: the AI would forget things. With constant overwriting, the AI's memories would become fragmented, and without maintenance the AI would probably develop some sort of AI dementia or Alzheimer's, which would make it useless pretty quickly.
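A toy sketch of that overwriting problem: a fixed-capacity memory that must evict old entries to learn new ones. The least-recently-used eviction policy here is an illustrative assumption, not a claim about how a real AI would manage memory.

```python
from collections import OrderedDict

class BoundedMemory:
    """Fixed-capacity store that forgets its oldest entries when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.facts = OrderedDict()  # insertion/access order tracks recency

    def learn(self, key: str, value: str) -> None:
        if key in self.facts:
            self.facts.move_to_end(key)  # refreshing a fact makes it "recent"
        self.facts[key] = value
        if len(self.facts) > self.capacity:
            forgotten, _ = self.facts.popitem(last=False)  # evict the oldest
            print(f"forgot: {forgotten}")  # the "AI dementia" in miniature

mem = BoundedMemory(capacity=2)
mem.learn("a", "1")
mem.learn("b", "2")
mem.learn("c", "3")  # prints "forgot: a"
```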

tl;dr: An AI wouldn't be some sort of magic mind that would solve the world's problems or cause us to ascend to a higher plane. It would basically be a computer in another form, and computers are just tools that cut down the time and manpower tasks require through mathematical calculation. It would not be perfect; it would be rife with errors and programming issues that would pop up and have to be fixed, and it would be limited by the limits humanity itself has, mentally speaking. That's IMO what an AI would be.
 