On the subject of AI.

Have we gone too far with AI?


this might be better on its own thread.. but i was thinking.
how can i know someone else has intelligence?
like, i could ask them a math question, but it's like the chinese room problem.
how can i know they actually know the answer and aren't just mimicking it.
i understand math isn't the best example since some questions we just know the answer and don't need to do the math.
but even if i ask for proof, the work could just be mimicking too.
now i'm in a tizzy, for all i know everyone here is just a robot, and worse yet, you each know you have intelligence but i don't know how to exactly prove to you that i have intelligence.
for all you know, i just happen to have a reply to give to every possible sentence
 
I'm sure we will find out how to increase human intelligence beyond today's limits before we come up with a working AI.
 
I'm sure we will find out how to increase human intelligence beyond today's limits before we come up with a working AI.
What do you mean by that?
 
That we'll figure out the genetics of intelligence in the human genome and start pumping out in vitro geniuses. Or create a drug that stimulates neuron growth in adults. This would essentially be AI, but organic.

The von Neumann model that current computers are designed around is about as far from the brain as possible. Unless new hardware designed to emulate brain structure emerges, I'd guess we're at least 50 years away from machines that think faster than us.
 
What are your overall thoughts on artificial intelligence and the current climate of increasingly powerful and useful robots, web AI, and the like?

Do you think making AI smarter as the years go by is dangerous?

Do you think there will be a point where we reach movie-level smart AI, a point where an AI is considered sentient?

Should AI, if it reaches that point, be allowed to be sentient?

Just wanted to know what people thought about this subject since everyone in the media is constantly talking about sentient robots and androids.
 
Whether genuinely sentient, self-aware, conscious, and/or sapient artificial intelligence is even possible depends on the philosophy of mind that the universe runs on. If, say, the universe runs on a strict mind-body dualism, where the mind is fundamentally nonphysical and fundamentally outside the physical body, then not even the most realistically thinking AI is truly sentient. This doesn't account for philosophies of mind that would allow AIs to "grow" or develop, in some way, a nonphysical mind.

Simply assuming that we can build a truly sentient AI, we will soon run into the problem of containing it and keeping it under control. For a biological entity, evolution is an arduous process typically taking at least hundreds of thousands of years to unfold. To a conscious machine, evolution is simply an act of updating, upgrading, and/or replacing its software. This is not even considering a computer's ability to copy or replicate its own software. So now we have self-evolving, self-replicating AIs that can enhance their capabilities massively in minutes, if not seconds. And their evolution rate will only increase exponentially, exceeding the capabilities of modern computers the same way humans surpass ants.

We need to find a way to keep these AIs "friendly" (i.e. compatible with human interests), and stop them from becoming "unfriendly" (i.e. incompatible with human interests). Otherwise, AIs may come to destroy everything we love and care about, potentially even by accident. For example, an "unfriendly" AI may decide to use Earth's atoms for a purpose that is not conducive to the continued existence of human life. Keep in mind that the way we treat lesser life forms, such as ants, is not encouraging. If we wanted to build a dam, and an anthill was in the way, we would build the dam anyway. This is not out of any particular animosity towards ants, but because we view our problems as simply more important. I believe that the interactions between "unfriendly" AIs and humans would be of a similar nature.
 
I'm not sure if it has been discussed earlier in the thread, but if AI did become sapient and self-learning, would it have to mature and learn from its life experiences like a human being does, or is it fine if we just encode a bunch of info in its "brain"?

Who's to say the first self-aware AI doesn't start off with the intelligence of a 5-year-old?
 
What are your overall thoughts on artificial intelligence and the current climate of increasingly powerful and useful robots, web AI, and the like?
Do you think there will be a point where we reach movie-level smart AI, a point where an AI is considered sentient?
No.

When people discuss AI (even industry executives), they really have no clue how the algorithms themselves work. Really, I wish Elon Musk et al would start programming again. Because when you implement these algorithms, and when you know how CPUs work on a low level, you realize how laughable the idea of machine consciousness is.

Elon Musk (and anyone worried about smart AI) used to work down in the trenches at some point. They wrote code and presumably they spent countless hours trying to get a particular algorithm to work quickly, or to scale better, or to use less memory, etc. They had to deal with huge technological challenges like managing latency and cutting costs and things like that.

But he hasn't been in the trenches for a long time, and it's weird to me how disconnected he is.

So, if I could sum it up, two big issues with AI, to me, are:

Moore's law is ending

When the first CPUs were designed, I don't think many people really believed they could do anything useful, so no one tried that hard to make them efficient. They were a theoretical exercise, a mathematical trick that surely couldn't work in real life.

But they did work. So then people realized they could fit a shitload more transistors onto a chip (which means more computation per clock cycle). They weren't even trying at first, but once they started trying, the race was on.

Gordon Moore predicted that we could double the number of transistors on a single chip every 18 months.

This was called Moore's law. It was neat, because for a long time, you could pull all kinds of fantastical algorithms that couldn't possibly work in production, and then just wait a few years, and bam, computer chips would be fast enough to run them in real time.
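To put rough numbers on how fast that compounds, here's a quick back-of-the-envelope calculation (using the 18-month doubling figure quoted above; Moore's original paper actually talked about roughly two years):

```python
# how "double every 18 months" compounds over time
for years in (3, 5, 10, 20):
    growth = 2 ** (years / 1.5)
    print(f"{years:2d} years -> ~{growth:,.0f}x the transistors")
# 3 years -> ~4x, 5 years -> ~10x, 10 years -> ~102x, 20 years -> ~10,321x
```

Roughly 100x per decade is why "just wait a few years" worked as an optimization strategy.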

But this wasn't a physical law. It was just that we were so goddamned inefficient to start with, that there was a shitload of room to work with.

However, we're hitting limits. At this point, we're building transistors whose features are only a few atoms across. You can't get much smaller than that. We're hitting physical limits.

Now, engineers are clever as shit, so I do believe we'll keep on pulling out new tricks for the next couple decades. And then we've got quantum computing which will stretch out Moore's law (not quite Moore's law itself, but the general idea of increasing CPU speeds) for a bit.

But we've got an inevitable wall coming. We're pretty much running on fumes now.

Silicon is very inefficient for the computation involved

I think the most accurate computer model of natural, intelligent animal brains is the neural network.

With a neural network, you simulate "neurons" that connect to other neurons. A signal can pass through the whole graph of neurons, to light up other neurons, and ultimately spit out an answer or a result. There can be cycles in the graph (that is, you can have basically the neural version of a traffic circle) where a signal can filter back through a neuron several times.

To teach a neural network to solve a particular problem, you have a big list of inputs and the desired outputs. And you run them through the neural network, and tweak the weights of each neuron, until it starts to give you correct outputs.

This works decently enough for a lot of naturalistic processes like identifying images.
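To make that concrete, here's a minimal sketch of the "tweak the weights until the outputs come out right" loop: a tiny two-layer network learning XOR with plain gradient descent. The layer sizes, learning rate, and iteration count are just illustrative choices, not anything canonical.

```python
# Minimal sketch of training a tiny neural network by gradient descent.
# Illustrative only: a 2-4-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)

# the big list of inputs and the desired outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# random starting weights for two layers of "neurons"
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # forward pass: the signal lights up the hidden neurons, then the output
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # backward pass: nudge every weight a little to shrink the error
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [0, 1, 1, 0]
```

Note there are no cycles here; the "traffic circle" version mentioned above (a recurrent network) feeds outputs back in as inputs on the next step, which makes it even slower to train.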

Some things to note:
  • Training the neural network takes a lot of time. It's a slow-moving process, definitely not suitable for realtime thinking. Every heartbeat that you experience might take 50 years for a neural network simulating a brain to compute.
  • You only train it once (or perhaps slowly over a period of time). Human brains train constantly, 24/7.

Fundamentally, I think that silicon is a bad medium for consciousness. It's useful for building a solution for an easily defined, rigid problem. But it doesn't grow or change.

I think consciousness is a perpetual feedback loop of growing and changing, directly tied to the fact that our brains are made up of mushy, living cells.

Personally, as someone in tech, I hope to never be as disconnected from the practicalities as Musk et al are. Bill Gates lasted longer, because for a long time, he still kept poking his nose into the low level engineering issues. But eventually he retired to solve the 3rd world's problems.
Whether genuinely sentient, self-aware, conscious, and/or sapient artificial intelligence is even possible depends on the philosophy of mind that the universe runs on. If, say, the universe runs on a strict mind-body dualism, where the mind is fundamentally nonphysical and fundamentally outside the physical body, then not even the most realistically thinking AI is truly sentient.
I believe that neurons describe sentience pretty well. When you've got a network of neurons, with input from the outside world, and a constant feedback loop of reacting to that input, I think you develop consciousness. Basically the way running a current through a coil of wire generates a magnetic field.
Simply assuming that we can build a truly sentient AI, we will soon run into the problem of containing it and keeping it under control. For a biological entity, evolution is an arduous process typically taking at least hundreds of thousands of years to unfold. To a conscious machine, evolution is simply an act of updating, upgrading, and/or replacing its software. This is not even considering a computer's ability to copy or replicate its own software.
Dying would be the default. That's how it is for almost every species that has ever existed.
So now we have self-evolving, self-replicating AIs that can enhance their capabilities massively in minutes, if not seconds. And their evolution rate will only increase exponentially, exceeding the capabilities of modern computers the same way humans surpass ants.
Computers are ridiculously slow.
 