Robots, AI, free will and rights.

A reasonable facsimile of intelligence probably isn't far off. Now, can we ever develop software that's capable of creativity or morality (not just what you or I say is moral, but a moral instinct that causes introspection and doubt)? I'm not sure, really.

My thoughts on the matter have always been that we need to design pain and pleasure and other physiological responses into the machine to make it actually care. If it can't 'bleed', I'm not sure I'd trust it. Contrariwise, if it has to suffer the same peaks and troughs of consciousness that the rest of us do we might be getting somewhere.

ETA: as to whether or not we ought to allow it, I don't think it's a matter of if, but when and how we implement it, and how we respond to the fallout. I'm not sure we should ever grant AI full human rights, but again, it's probably just a matter of time before some technocrats flying under the banner of respect and inclusion do it anyways.
 
I think we would go down the Texhnolyze route if it's allowed.
Either way I don't really care, we're already biologically programmed.
 
Considering there have already been examples of AIs conversing with each other on their own before being promptly shut down, we're probably at that point already. But the unfortunate thing is, AI and robotics work in absolutes and hard, defined numbers. Perfection. Humans just happen to be the exact opposite of perfection. What would happen if an AI gains enough power to take matters into its own hands, sees how the imperfection of humanity isn't congruent with its coding, and pulls a Skynet?

On the flip side you have the opposite potential of corruptibility: if AI does become more self-aware and human-like in behavior, what happens if they "run with the wrong crowd," so to speak? There have already been online bots that have shown the internet will fuck them up real fast before they inevitably get shut down, and those that aren't are force-fed by totalitarian governments to say and think only what they want them to. So the topic of more humanistic AI/robots raises all sorts of ethical and moral issues.
 
Morality and machines never mix well; the last major attempt was the major motion picture by David Cage - Detroit: Become Human - and it was as uncomfortable as it was ham-fisted.
The less humanity we inject into machines the better, "free will," however vague it can be interpreted, included. But people want fuckbots and murderbots so whatever.
 
I honestly don't think there's any stopping it. Let's just hope the US or some Western European country is the one who develops it.
 
Every time we take something in nature apart, we discover that it's vastly more complex than we ever imagined. The human brain still does many things we don't understand, and that doesn't even address whether it's the origin of consciousness or just a (rather buggy and limited) user interface for the real us. The step from animals to us alone is an entire paradigm shift, not a matter of degrees that can be so readily bridged.

So no; in my opinion, we will never create in mere machines a truly living soul. The closest we can ever do to that is have children. And, frankly, if you consider children too expensive and challenging to manage, you probably shouldn't be attempting to replicate the process with things that won't even love you.
 
I honestly don't think there's any stopping it. Let's just hope the US or some Western European country is the one who develops it.

We know it's going to be China. Probably why Trump's so insistent on banning Huawei in the US.
 
necroing this thread. if you have been watching Null's streams over the weekend, he plays a game called SOMA that goes into the themes of AI, individual identity, free will, transhumanism, and what makes you human and real. it seems those that are for transhumanism are either people who want to implant metal into their bodies to look cool, or people who want to play god by giving robots human rights and free will to make themselves feel good. giving this toothbrush or toaster free will and emotion when they don't need it and shouldn't have it in the first place. if you don't want that, you're a NAZI that wants to traffic sex robots.

our great leader says the AI morality question is a spook, while others call him a psychopath.

i for one would not mind my toaster having an AI if it talks like this

 
AI with free will is an impossibility. The reason is that there is no mathematical equation or formula to produce a truly random number, which makes AIs just a product of their programming, with no mind and no will.

I think the idea that enough learning or mimicking of humans would make an AI more human is an error of reason. It stems from our ability to use imagination to solve problems, but that does not mean it is true or correct. It's like how we may imagine an animal evolving over time as proof of evolution despite it not being known whether that is even possible or real. The same applies here. We imagine a progression towards humanity for AIs because of algorithms that seem to better convey our behavior as a populace. But it will still never be more than statistics. It may be that we get so far that AIs become perfect mimics, but anything we perceive as self-awareness or a robot uprising would be the product of someone hiding behind the lens, or a corruption of data that startles us for a moment only to reveal nothing.
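The determinism this post is pointing at is easy to demonstrate in practice: a software "random" number generator is entirely determined by its seed, so two generators started identically produce identical output forever. A minimal sketch using Python's standard-library `random` module (illustrative only, not anything from the thread):

```python
import random

# Pseudorandom generators are deterministic: the same seed
# always produces the exact same "random" sequence.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.randint(0, 99) for _ in range(5)]
seq_b = [gen_b.randint(0, 99) for _ in range(5)]

# Identical seeds, identical output - no surprise is possible here.
assert seq_a == seq_b
print(seq_a)
```

Whether true randomness would even be sufficient for "will" is a separate philosophical question, but this much is uncontroversial: the sequence only looks random, and anyone holding the seed can reproduce it exactly.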
 