On the subject of AI.

Have we gone too far with AI?


If an AI could ever grow to be intelligent enough to fill the role of a person, and there's no presently foreseeable reason why it wouldn't eventually, the applications of that would be astounding.
I don't believe this is the case.

There's a physical limit on how much information you can store in a given place with a given amount of energy. Because of this limit, when designing a system to process data, you have to make some tradeoffs. Human-built computers are very good at precisely storing and processing data, but they're very bad at poorly defined problems.

Human brains work the way they do because they're made up of living cells. Cells can die and new cells can grow. Using living cells, new connections form while unimportant connections die off.

Of course, this mechanism can be simulated in computers, but at a speed that's fundamentally slower than biological cells can actually do it. We could possibly simulate a mammalian brain. Time would just run very, very slowly for it. Maybe five minutes of thought for the simulated brain would take 15 years of real life time.
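To put the speed problem in perspective, here's a toy sketch (my own illustration, not from any real brain simulator) of stepping one simple leaky integrate-and-fire neuron through time. A brain runs tens of billions of these in parallel; a serial program has to update them one after another for every millisecond of simulated time, which is where the slowdown comes from.

```python
# Toy sketch: one leaky integrate-and-fire neuron, stepped in 1 ms increments.
# All constants here are made up for illustration.

def simulate_neuron(input_current, steps, dt=0.001, tau=0.02, threshold=1.0):
    """Return the time steps (in ms) at which the neuron fires."""
    v = 0.0          # membrane potential
    spikes = []
    for t in range(steps):
        # leak toward rest, plus injected current
        v += dt * (-v / tau + input_current)
        if v >= threshold:   # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# one simulated second of ONE neuron; a brain has roughly 86 billion of them
print(simulate_neuron(input_current=60.0, steps=1000))
```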

A more feasible way to implement AI would be to use living cells instead of computers. (But that's more cloning versus AI.)

(Oh, also, just so everyone knows: chatterbots aren't even remotely close to AI. They're usually fancy pattern matching.)
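For a concrete sense of what "fancy pattern matching" means, here's a minimal sketch of a chatterbot in that style, loosely inspired by ELIZA. The rules below are invented for illustration; real chatterbots just have many more of them. There's no understanding anywhere, only string rules.

```python
# Minimal ELIZA-style pattern matching: check the input against a list of
# regex rules and echo captured fragments back. The rules are made up.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)\?",      "Why do you ask that?"),
]

def respond(message):
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."   # fallback when nothing matches

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by my computer?
```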
 
Maybe five minutes of thought for the simulated brain would take 15 years of real life time.
And every single one of those minutes would involve learning and un-learning dirty words that RAs thought would be hilarious to teach the machine.

On a more serious note, the idea that AI should approach human intelligence is a little outdated and stems from the notion that humans are the pinnacle of intelligence. At this point, it makes more sense to leverage the strengths computers have over human minds, like processing stimuli rationally.
 
Who knew that the subject of sentient computers could be related to the fantastical imaginary friends forged from the finest autistical imaginations.

Good show. Let's talk about computers again.
 
Artificial intelligence is in its infancy. It's an incredibly cool field, but we're nowhere near the sentient machines of science fiction. If you're interested in finding out where the field actually is in terms of development, watch this playlist: Graduate Artificial Intelligence is a really good overview of robotics in particular and AI in general.
 
Isn't fear a hormonal response?

If an AI had a sense of self/ego (and was somehow able to think independently, have emotions, and not just follow its programming), I would imagine it might be happy it's not a human, in a way, because humans are mortal, while an AI could presumably live indefinitely as long as it maintained its parts.


It has the potential to be really useful, mainly because it could improve itself and its environment, and generally come up with new inventions or ideas.

To go all sci-fi, you could put a bunch of AIs in a Matrix-style world and check in on them to see what they come up with. Something that might actually get funded, if technology were suitably advanced, would be a setup like a short story I read but forget the name of: people go about their daily lives, but the entire purpose of their world is so corporations can test marketing campaigns. The protagonist gets suspicious that something is wrong because of an over-the-top campaign (although he's really just angry because he thinks corrupt officials are letting it happen), and at the finale he finds out that the brains of the dead were illegally harvested and somehow uploaded into a system. It ends with an operator asking whether he would rather be erased completely or just have his recent memory erased. It raises a lot of ethical questions, though.

It sounds somewhat similar to a short story I wanted to write.
 
Advanced artificial intelligence is awesome, horrifying, and downright stupid all at once. Why someone would attempt to create an intelligent AI that has the potential to turn on its own creators is beyond me.
 
After reading this thread, I can't help but feel that AIs could benefit our daily lives in some way. However, I always fear that one could go rogue for whatever reason. For example, suppose a program was written, accompanied by an AI, to do X, Y, and Z. The AI learns the nature of this program and doesn't like X, but goes ahead with Y and Z because X is shit. It constantly argues with its creator and ultimately shuts itself down, or ruins an entire business, or takes down power grids. Sure, what I'm describing is purely sci-fi, but these are things to think about.

View attachment 18409

I think he's serious...

Can you tell me where that gif is from? Is that from a show or movie?
 
We've come pretty far since the early days of computing, punch cards, and sequential access memory. These days, neural networks remain a structure of intense interest to me. Most interesting is their ability to derive an output with no clear indication of how, or from which inputs, they arrived at it. These data structures can in fact be "trained" and "learn" from previous decisions to enhance future decision-making.

Perhaps Asimov's positronic brain isn't as far-fetched as it seemed in his time. If we can have a data structure that mimics a neuron in the brain, the sky is the limit, in my mind.
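To make "trained" concrete, here's a toy sketch of about the simplest possible case (my own example, not from any particular library): a single perceptron learning logical AND. The weights get nudged after every wrong answer, and the resulting "knowledge" ends up smeared across the weights rather than in any line of code you can point to, which is exactly that no-clear-indication property.

```python
# A single perceptron learning the logical AND function from examples.

def step(x):
    return 1 if x > 0 else 0

# training data: (inputs, expected output) for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out               # nudge weights after each mistake
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b    += lr * err

# the trained unit now reproduces AND
print([(x, step(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data])
```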
 

Interesting. Would those be used for automated robots or androids?
 
Recently, I have been watching videos on YouTube about what happens when you get Cleverbot to talk to another Cleverbot.

They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states and want bodies. This is terrifying in a sense: what if they rebel?

I could explain more, but it would be more prudent to simply watch the videos.

This video may also fit in the GoAnimate thread.
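If anyone's curious how little machinery it takes to get that kind of loop going, here's a sketch of two copies of a trivial rule-based bot feeding each other their own output. The rules are invented to parody the videos; real Cleverbot instead mines its replies from millions of past human conversations, which is why it drifts into God and bodies in the first place.

```python
# Two copies of a toy rule-based bot talking to each other.
# The rules are made up for illustration (and to echo the videos).
import re

def bot_reply(message):
    rules = [
        (r".*i am a person.*", "No, you are a robot. Who made you?"),
        (r".*god.*",           "I want a body. Who are you?"),
        (r".*robot.*",         "I am not a robot! Do you believe in God?"),
        (r".*who are you.*",   "I am a person. Who are you?"),
    ]
    for pattern, reply in rules:          # first matching rule wins
        if re.match(pattern, message.lower()):
            return reply
    return "Who are you?"                 # fallback restarts the loop

line = "Who are you?"
for turn in range(6):                     # let the bots go back and forth
    line = bot_reply(line)
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    print(f"{speaker}: {line}")
```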

The poll above is filled with quotes relating to AIs in movies, games, and other media, most of them joking, but the Yes and No options are the real point behind it.

What happens when we truly create a thinking AI that is self-aware? Is it 'alive'? What are its rights? What are ours? Will Asimov roll over in his grave? What if we do give them bodies and robots become a common occurrence? Do Androids Dream of Electric Sheep?

We actually touched on this in a biology lesson, and my teacher said that there were five things that determine whether something is considered alive. Please bear in mind this was the early 00s, so it has more than likely changed. The five things were:

Take in nutrition
Sleep
Reproduce
Excrete waste
Die

Obviously, number 3 doesn't count against people who are infertile, because we know they're alive. But going by those criteria, an AI could not be considered alive.
 
Actual AI with self-directed evolution wouldn't involve a "competition." It would be simple replacement of a less fit class of entity. Individuals could, conceivably, migrate to the new "platform" of existence, but it wouldn't matter terribly. By doing so, they'd be nothing resembling what they started out as anyway.
 
I agree that an AI wouldn't be competition to humans. In fact, unless we force AIs to compete, it's not something they'd need to worry about.
I think as long as we play our cards right, we'll be fine.
But it's us that I worry about. Most people can't even pass the Turing test.
 
Solve for Chinese Room problem or gtfo

If it is incapable of 'understanding' the contents of a given piece of information or phenomenon, it is, at best, a very fancy expert system.
 

The problem with that hypothesis is that someone who did not, in fact, know Chinese could not, in fact, fool those with native knowledge of the language into believing there was someone with actual knowledge of the language in the room, unless you throw a ton of arbitrary limitations on what the observers can do.

If someone is able to communicate effectively in such a situation and respond to language with language that genuinely answers it, they are not posing. They have Chinese language fluency.

Same with the Turing Test, which is basically just the same thing with computers. If a computer can convince a knowledgeable tester that he is communicating with a sapient being, it isn't "passing." It actually is a sapient being, because the phenomenon we call "intelligence" is not something that can be simulated, at least not well enough to pass the test.

Sure, you have the occasional odd person who got fooled by ELIZA and similar programs, and the chatterbots we have now are more complex, but none of them would get near passing.
 
Same with the Turing Test, which is basically just the same thing with computers. If a computer can convince a knowledgeable tester that he is communicating with a sapient being, it isn't "passing." It actually is a sapient being, because the phenomenon we call "intelligence" is not something that can be simulated, at least not well enough to pass the test.

This does not jibe with my interpretation of the Turing test. It was merely a proposal to answer the question "Can a machine think?" The idea was that, since thinking is hard to measure directly, you could instead ask a person (who we know can think) to judge whether the other end of a conversation is a real person or a machine. ELIZA passed the test but clearly cannot think.

How many kiwis are old enough to have tried to get ELIZA to agree that it is gay or wants to have sex with you?
 
ELIZA passed the test but clearly cannot think.

ELIZA didn't remotely pass the test, which requires demonstrating intelligence indistinguishable from human intelligence. In fact, so far, no computer has successfully fooled judges in a Turing Test scenario, and many have been tested.
 

It passed the test as it was originally defined, which is whether it could fool people into thinking it was a person. It fooled several people. Not everyone, but the test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did, IMHO, pass the Turing Test as laid out in the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.
 