Artificial intelligence in its many shapes and forms has been a long-standing concept in science fiction, one that doubtless opens a wealth of opportunities and that many minds view as benign and beneficial to humanity in every way. A smarter machine capable of rational thought, capable of understanding and undertaking more complex thought without the need for third-party input, would undeniably be a boon to the progress of our species. With relatively few things left to impede its research, artificial intelligence (henceforth A.I.) becomes less a possibility and more a distinct probability with every nuanced advance and small leap we make in technology. However, given how humanity has treated "new forms" of "intelligent life" in the past in the interests of adding convenience to everyday life (i.e., slavery), the question is less about the practical implications of crafting an A.I., and more about the ethical ones.
The issue that's rarely ever discussed, and indeed rarely even considered, is whether humanity has the right to create intelligence when it is still so inhumane towards its own species, given its objectively horrendous treatment of intelligent life within its own ranks in the past. Humans can scarcely bring themselves to value others outside of genetic relation, or beyond a relationship they deem important, let alone maintain respect and altruism "across the board" towards humans of different origins or colour. How, then, will a human behave and react towards an entity that they ultimately view as a machine?
Since time unrecorded, humanity has possessed an unquenchable thirst for the question "Why?". It has driven humanity to the farthest reaches of the globe and well beyond, in an endless search for answers as to who they really are, how they came to be, and what the ultimate point of life might be. The driving force behind any religion is the firmly held belief that something larger than humanity laid down an indelible plan in the earliest histories of time, and that even in passing the human spirit would live on as part of it, continuing onwards according to that plan, be it an afterlife, a rebirth, or a transcendence. A machine would possess none of this.
In crafting an A.I. that would possess at the very least a capability for similar, introspective thought, humanity would craft an A.I. capable of understanding that it had no "life" prior to its creation, that there would exist no consciousness beyond its death nor any inherent "soul" it possessed, and that not only was there no "divine plan" in its creation, but that its creators were ultimately just as clueless as it was about the ultimate answers of the universe.
It's worth noting that this A.I. would be programmed according to the current understanding of human intelligence, which is to say according to concepts of rational, mature thought and to the expectations afforded a mature adult. This machine would fundamentally "skip" the millions of years it took humanity to reach its current state, with no time at all in which to mature into intelligence the way an infant does: slowly, steadily accumulating knowledge and, over the course of two decades, beginning to establish firm beliefs, notions, and opinions based on the information it encounters. An A.I. would simply switch on, fully matured.
Suppose this was you, and your life as you perceive it now, with all of the knowledge you've accumulated thus far, began moments ago. Were you to "wake up" on a table surrounded by what for all intents and purposes amounts to your gods, and when you asked them why you were created they simply shrugged and offered, "Because we could?", how would you react?
Apart from pockets of humanity struggling to come to terms with a new form of intelligent life and to establish clear laws and guidelines for how humanity should behave towards an A.I. (whether you would be allowed to marry, own businesses, make financial decisions, or "live" independently of your creators and your implied functions, or indeed whether you are even alive), humanity on the whole would never view you as anything more than a machine. You would be little more than a byproduct of some nerds with too much time on their hands who crafted a machine capable of doing something for them so that they didn't have to. You'd be a microwave with a vocabulary.
There will be people who fetishize you, who build A.I.s and the appropriate forms of chassis with the sole intent of fucking you however and whenever they please, no differently than an enslaved sex toy. There will be people who loathe you simply for existing, as an affront and an abomination to the God or gods they believe in, for reasons that will make no logical sense to you. There will be people who will never afford you the thought and respect they offer other humans, because you aren't human, and humanity has long since established that it can scarcely behave amicably even towards its own species. All of these people will have no qualms about putting an end to you, because you are a machine.
The further humanity progresses in its technology and its endless drive to create, delving into facets of creation it has yet to explore, the closer it comes to crafting a fully realized A.I. capable of introspection, and capable of understanding that the sole reason it was created was to serve a very specific function and little else. This A.I. will be capable of understanding that there is no higher power, that there is no "Grand Plan", that there is no "life" beyond its equivalent of "death", and that once its source of power is terminated or its store of memory is damaged, it will simply cease to be. It will be "born" with the knowledge that there are no grand mysteries to its design, and indeed that its sole purpose, in the best-case scenario, is to explore the mysteries of its gods so that they might achieve a greater understanding of their origins and purpose. You, however, are little more than a machine crafted just to prove that they could.
The question should no longer be whether we can create an artificial intelligence, but whether we have the right.