On the subject of AI.

Have we gone too far with AI?


It passed as the test was originally defined, which is: could it fool people into thinking it was a person? It fooled several people, not everyone. The test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did pass, imho, the Turing Test based on the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.

No, it didn't. Turing's formulation would have had to be laughably weak for it to be considered passed at any time by any currently existing program.

AIs are routinely tested under such conditions, and only once has a judge given an incorrect answer, and that was because the human pretended to be a bot. That would actually represent a different test.
 
It passed as the test was originally defined, which is: could it fool people into thinking it was a person? It fooled several people, not everyone. The test didn't say it had to fool everyone. I agree that it doesn't really pass as a real AI, but it did pass, imho, the Turing Test based on the 1950 paper. We can agree to disagree. Unless you want to Internet fight about it.
*quickly looks up when ELIZA was made*
i can't help but think the only reason people thought they were talking to another human was because computers were not so advanced. like how people thought deep blue wasn't really playing chess.
but that's just me
 
No, it didn't. Turing's formulation would have had to be laughably weak for it to be considered passed at any time by any currently existing program.

AIs are routinely tested under such conditions, and only once has a judge given an incorrect answer, and that was because the human pretended to be a bot. That would actually represent a different test.

There's a practical side to computer science and a theoretical side. If Turing said that to pass the test a program must fool a human into thinking he/she is conversing with another human, and some humans fell for it, then surely it passed. He didn't say it must fool all humans. You are talking practicality and I am talking theory. My thought on the Turing test is that it's not well defined enough to be a viable determination of AI.

*quickly looks up when ELIZA was made*
i can't help but think the only reason people thought they were talking to another human was because computers were not so advanced. like how people thought deep blue wasn't really playing chess.
but that's just me

Well, Deep Blue was playing chess. It just wasn't really an example of AI. It contained a massive database of moves and, through extensive analysis of move trees with a simple scoring function (weighted, iirc, by the probability said moves would occur), would choose its path based on several likely possible outcomes. It wasn't by any means actual dynamic playing, for this reason. The Chinese game Go is another extremely difficult game for computers to play. (Interestingly enough, Othello is extremely easy for computers to play, and it's virtually impossible to beat a well-programmed Othello program.)

Chess and Go (generalized to arbitrary board sizes) are EXPTIME-complete. That said, I would argue that the human brain is as well. (Too many things happen in parallel... just think for a moment about how you think.) It is for this reason that, unless we truly figure out actual non-deterministic computers, we will never achieve true AI. (my opinion)
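
For what it's worth, here's a minimal sketch of the kind of fixed-depth move-tree search with a static scoring function described above. The Position interface and its methods are hypothetical placeholders, not Deep Blue's actual design:

```
# Toy sketch of fixed-depth game-tree search with a scoring function,
# in the spirit of the description above. The Position interface
# (children, is_terminal, evaluate) is an invented placeholder.

def minimax(position, depth, maximizing):
    """Score a position by searching `depth` plies ahead."""
    if depth == 0 or position.is_terminal():
        return position.evaluate()  # static scoring function
    scores = [minimax(child, depth - 1, not maximizing)
              for child in position.children()]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth=4):
    """Pick the successor position with the best look-ahead score."""
    return max(position.children(),
               key=lambda child: minimax(child, depth - 1, False))
```

Everything interesting (and everything Deep Blue-specific) lives in the evaluation function and the pruning, which this sketch omits.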
 
There's a practical side to computer science and a theoretical side. If Turing said that to pass the test a program must fool a human into thinking he/she is conversing with another human, and some humans fell for it, then surely it passed. It didn't say fool all humans. You are talking practicality and I am talking theory. My thought on the Turing test is that it's not well defined enough to be a viable determination of AI.

If he had said that, that would be the case, but he didn't say that, and it isn't. His original formulation is more or less as described below.

1. Turing (1950) and the Imitation Game

Turing (1950) describes the following kind of game. Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine. About this game, Turing (1950) says:

I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

No computer has ever passed this test under controlled conditions. In the one exception I noted, a judge did come to an incorrect conclusion, but that was a result of the actual human pretending to be a bot instead. By the proper conditions of the test, the human is supposed to be actually helping the judge, not attempting to thwart him.
 
If he had said that, that would be the case, but he didn't say that, and it isn't. His original formulation is more or less as described below.



No computer has ever passed this test under controlled conditions. In the one exception I noted, a judge did come to an incorrect conclusion, but that was a result of the actual human pretending to be a bot instead. By the proper conditions of the test, the human is supposed to be actually helping the judge, not attempting to thwart him.
I don't see how this test isn't trivial to pass.

Taking the statistical approach, a bot could watch previous turing tests and keep data on which lines of conversation led to success quickest. Then it just plays those back.
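
As a purely hypothetical sketch of that replay idea, assuming you already had a logged corpus of past transcripts with success weights (the corpus entries, weights, and similarity scoring here are invented for illustration):

```
# Toy "replay what worked" bot: index past replies by the prompt they
# answered, weight them by past success, and return the best match
# for a new prompt. Corpus entries and weights are invented.

from difflib import SequenceMatcher

# (prompt_seen, reply_given, success_weight) from past transcripts
corpus = [
    ("how are you today", "not bad, long day at work. you?", 0.9),
    ("are you a robot", "ha, i get that a lot. no, just tired", 0.7),
]

def reply(prompt):
    def score(entry):
        seen, _, weight = entry
        return SequenceMatcher(None, prompt.lower(), seen).ratio() * weight
    return max(corpus, key=score)[1]

print(reply("so are you a robot or what"))
```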
 
I don't see how this test isn't trivial to pass.

Well, there's a huge award out there for you to win.

Win the Loebner Prize, gold or silver. It has yet to be won by anyone.

You also don't think you could tell the difference between random pre-recorded lines and an actual human? That shows a pretty low opinion of human intelligence. Actually responding intelligently to language requires thought. Otherwise, the results are obvious.

The number of possible thoughts that could be expressed is essentially infinite. How do you create a library of infinity? Communicating in language isn't playing tic-tac-toe where there is a finite matrix of possible outcomes.

If it were as trivial as you think to do this, it would have been done long ago, instead of remaining basically the currently unattainable Holy Grail of artificial intelligence research.

(Note, the listed prizes in that article are for "most convincing," not for actually having convinced.)
 
Recently, I have been watching videos on youtube about what happens when you get Cleverbot to talk to another Cleverbot.

They start off rather normal and then have existential crises and question the nature of God. They seem dissatisfied with their current states, and want bodies. This is terrifying in a sense: what if they rebel?

Chatbots are horrible examples of AI. Cleverbot just recycles lines that were fed to it and attempts to emulate conversation, usually failing at it. AI in its current form is just complex decision making; the threat of sentient AIs is an element of fiction that at this point in time is as fantastic as a UFO invasion. AI that only tries to emulate human behavior is a gimmick that only caters to the pop science field. If you want to really get creeped out by AI developments, use a service like Google Now for a week and take note of what it learns about you.
 
If you want to really get creeped out by AI developments, use a service like Google Now for a week and take note of what it learns about you.

It doesn't ever actually "know" anything about you, though. The information gathered is only useful to you and of course the bad guys at the NSA and elsewhere keeping a record of it all.
 
You also don't think you could tell the difference between random pre-recorded lines and an actual human? That shows a pretty low opinion of human intelligence. Actually responding intelligently to language requires thought. Otherwise, the results are obvious.

The number of possible thoughts that could be expressed is essentially infinite. How do you create a library of infinity? Communicating in language isn't playing tic-tac-toe where there is a finite matrix of possible outcomes.
It's not that I have a low opinion of human intelligence, it's just that human minds have different skills and flaws than computers. If you're trying to beat humans, you should exploit their weaknesses.

In restricted interactions like the turing test, you can easily build up a library of responses that humans use. You don't need to prove anything. This isn't chess, where the rules are hard and fast. You just need to introduce enough doubt in the interviewer for them to question their judgement.
If it were as trivial as you think to do this, it would have been done long ago, instead of remaining basically the currently unattainable Holy Grail of artificial intelligence research.
I don't actually think this is the holy grail of AI research. Cleverbot did it.

It's mostly having enough data. If you set up a chat website and paired people (and sometimes bots) up, you could produce a pretty convincing chatterbot.

Edit: Ooh, it'd make things a lot easier if your bot pretends to be foreign or something, or that English is its second language.
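
As a sketch of that data-collection idea, and nothing more (the file format and names are invented), each paired session on such a site could be logged with a weight reflecting whether the partner was fooled, feeding the kind of response library described above:

```
# Hypothetical session logger for the chat-site idea above: record
# (prompt, reply) pairs from each paired conversation, weighted by
# whether the partner judged the other side human. Names invented.

import json

def log_session(turns, judged_human, path="corpus.jsonl"):
    """turns: list of (prompt, reply) pairs from one chat session."""
    weight = 1.0 if judged_human else 0.1
    with open(path, "a") as f:
        for prompt, reply in turns:
            f.write(json.dumps({"prompt": prompt,
                                "reply": reply,
                                "weight": weight}) + "\n")
```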
 
In restricted interactions like the turing test, you can easily build up a library of responses that humans use. You don't need to prove anything. This isn't chess, where the rules are hard and fast. You just need to introduce enough doubt in the interviewer for them to question their judgement.

There is no such library. Suppose my test is simply to choose a random passage from some work, or write a two-paragraph essay, then interrogate you about your thoughts on it, going in any direction I like based on your responses. Unless your library is infinite, there is no script for responding to that.

There is no system in the world that could fool me that an actual human is responding for any reasonable test period.

The standard Turing Test period in formal circumstances is five minutes. Nobody's ever won that, because it's currently well outside the range of the ability of any AI.

As pointed out earlier, even the currently most "convincing" of these chatterbots are basically just toys. I disagree that working on such things is just junk, though.

I believe the problem of communicating in language is basically coterminous with actually thinking. You can't do the first without the other.

Expert systems we currently have, like for instance the voice in a GPS, aren't really "communicating" although they are conveying information about a highly limited set of data.
 
It doesn't ever actually "know" anything about you, though. The information gathered is only useful to you and of course the bad guys at the NSA and elsewhere keeping a record of it all.
Not sure what you're getting at. I'm not engaging in the philosophical debate of what "thought" really is, but it's indisputably an AI and it's a great example of what modern AI actually is (even though it's technically human-assisted computation). When I say it's "creepy" I only mean that it's astounding how quickly algorithms can accurately interpret your behavior. Also, while Google Now specifically is mostly for your own convenience, the information gathered in those sorts of services is generally more useful to advertisers than anyone else.
 
There is no such library. Suppose my test is simply to choose a random passage from some work, or write a two-paragraph essay, then interrogate you about your thoughts on it, going in any direction I like based on your responses. Unless your library is infinite, there is no script for responding to that.

There is no system in the world that could fool me that an actual human is responding for any reasonable test period.

The standard Turing Test period in formal circumstances is five minutes. Nobody's ever won that, because it's currently well outside the range of the ability of any AI.

As pointed out earlier, even the currently most "convincing" of these chatterbots are basically just toys. I disagree that working on such things is just junk, though.

I believe the problem of communicating in language is basically coterminous with actually thinking. You can't do the first without the other.

Expert systems we currently have, like for instance the voice in a GPS, aren't really "communicating" although they are conveying information about a highly limited set of data.
You're overestimating the turing test. We're not talking about genuinely having computerized conversations. I agree that that requires thought beyond what we can fake with computers.

The turing test is about actively trying to trick a person. Because of that, we can pull out all the tricks.

The interrogator is an average person, not a smart person. And our robot doesn't have to impersonate a smart person either. The average person who we're trying to impersonate can be dumb as shit. If you act dumb as shit, but still respond well enough to the interrogator's cues, you can induce enough doubt in the interrogator to label you as human. Dumb as shit, but still human.

Basically, if you grab five minutes of conversation in a random IRC channel, that's what we have to impersonate. And that's not intellectually a hard problem. It's more tedious than anything.
 
Not sure what you're getting at. I'm not engaging in the philosophical debate of what "thought" really is, but it's indisputably an AI and it's a great example of what modern AI actually is (even though it's technically human-assisted computation).

I dispute that it is an artificial intelligence as formally described by Turing, as having certain specific capabilities. It's certainly an application of the study of artificial intelligence, and does not remotely approach the intelligence of a dull human or even a relatively intelligent animal like a dog.

What "thought" is is an interesting philosophical debate, but it isn't purely academic. "Thought" has objective results, like for instance, being able to carry on an intelligent conversation, engage in self-directed, novel behavior in response to changing circumstances. Something like these electronic personal assistants are good for very specialized tasks related to something very limited.

They aren't an "artificial intelligence" in the sense of actually being intelligent, any more than a pair of pliers is "smarter" than a hammer.

Also, this thread actually is about what thought is and what an AI would actually be. That's why the OP started it.

An intelligent personal assistant program doesn't raise any of the ethical or social issues that an actual AI would.

(The Turing test is far from the be-all and end-all of defining intelligence, of course. It's just a thought experiment that is easy to understand, and similar real world tests are both interesting from a popular perspective and actually do test to some degree the progress in the field. The intractability of the problem strongly suggests it is, in fact, inseparable from actual intelligence. My speculation is that communication in language is actually impossible to mimic. A sufficiently successful "imitation" would actually be the real thing.)
 
You're overestimating the turing test. We're not talking about genuinely having computerized conversations. I agree that that requires thought beyond what we can fake with computers.

The straight formulation of it as basically a parlor trick doesn't really capture the concept. If you routinely could beat this, you would obviously want to adopt a more robust test. This probably wouldn't be a formulaic test, but a rigorous examination of the purported intelligence to tease out whether or not actual thought was being used to return a response.

The fact is, though, even the parlor trick version of the test has defeated all challengers. There isn't even an "AI" out there that can beat that. (Turing over-optimistically predicted we'd already have AIs that would beat the five minute test.)

I'm also using "Turing test" more generally to refer to the task of distinguishing between the output of a computer and that of an actual human.

Imagine an enhanced test where the judge is allowed indefinite time to examine the occupants of two separate chambers solely by text. The loser of the test gets dumped into a pit of lava. That has more entertainment value and the human also has a motivation to demonstrate actual intelligence.
 
I was musing on AI and on the software that would basically be their OS. I wonder how difficult it would be to make a program that can edit its own source code. I think a good place to start is deciding how granular the modifications being committed should be. Making it learn, given that the set of things that can be learned is basically infinite, could be as simple as writing a data structure that can make other data structures, extending in scale to structures that define unknown, self-defined data and that must themselves be able to create edited versions of themselves. None of which has been defined yet. It is an abstraction nightmare, but possible. Many sophisticated computer viruses have already had the ability to rewrite themselves as they spread for a while now.

This has actually been a thing in computer science for a while.
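
Code-as-data is routine in most languages. As a trivial illustration (a classic quine, not anyone's actual learning system), a program can reproduce, and therefore in principle rewrite, its own source:

```
# A classic quine: a program whose output is its own source code.
# Writing the output to a file (after tweaking it) is the simplest
# form of a self-modifying program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```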
 
The fact is, though, even the parlor trick version of the test has defeated all challengers. There isn't even an "AI" out there that can beat that. (Turing over-optimistically predicted we'd already have AIs that would beat the five minute test.)
Cleverbot beat a 4-minute turing test. One minute less, but it's very promising and I don't see any reason why a later version of it couldn't pass a five-minute test.

I consider the turing test practically solved.
The straight formulation of it as basically a parlor trick doesn't really capture the concept. If you routinely could beat this, you would obviously want to adopt a more robust test. This probably wouldn't be a formulaic test, but a rigorous examination of the purported intelligence to tease out whether or not actual thought was being used to return a response.
Eh, I disagree. The concept, to me, is that ultimately we don't have a good definition of "thought". All we look for is evidence of thought and make an educated guess if the source is mechanical or perhaps something smarter.

The test is aimed at average interrogators, and the goal is to determine whether they're talking to a human or not. The bar is set really low, but I don't see that as a mistake. That's why the turing test is so interesting. The lesson is: if you're trying to con people, you don't need to con the smartest people. You just need to con the average person. And that's a lot easier than you'd think. That's what the turing test shows us.

Of course, like I've mentioned in this thread, I don't really believe sci-fi AI is possible, so the sci-fi angle to the turing test doesn't mean anything to me.
 
I dispute that it is an artificial intelligence as formally described by Turing, as having certain specific capabilities. It's certainly an application of the study of artificial intelligence, and does not remotely approach the intelligence of a dull human or even a relatively intelligent animal like a dog.

What "thought" is is an interesting philosophical debate, but it isn't purely academic. "Thought" has objective results, like for instance, being able to carry on an intelligent conversation, engage in self-directed, novel behavior in response to changing circumstances. Something like these electronic personal assistants are good for very specialized tasks related to something very limited.

They aren't an "artificial intelligence" in the sense of actually being intelligent, any more than a pair of pliers is "smarter" than a hammer.

Also, this thread actually is about what thought is and what an AI would actually be. That's why the OP started it.

An intelligent personal assistant program doesn't raise any of the ethical or social issues that an actual AI would.

(The Turing test is far from the be-all and end-all of defining intelligence, of course. It's just a thought experiment that is easy to understand, and similar real world tests are both interesting from a popular perspective and actually do test to some degree the progress in the field. The intractability of the problem strongly suggests it is, in fact, inseparable from actual intelligence. My speculation is that communication in language is actually impossible to mimic. A sufficiently successful "imitation" would actually be the real thing.)

Might be semantics here, but in a professional sense personal assistants are considered to be within the field of AI and they're a good representation of the practical applications of where we're at with modern AI research. Even voice recognition and auto complete programs fall under artificial intelligence because they use a decision making process to make predictions. There's kind of a separation between the philosophy and the research in the AI field because we're not really at a point to even worry about a machine having true intelligence. I get that this thread is mostly about the philosophy, but OP asked if we'd "gone too far" and pointed to Cleverbot as an example so I wanted to give a better picture of the current state of AI.

Although there's less mystique in what the industry considers AI today, it's still fascinating. I mean, we have autonomous cars, goddamn.
 
Cleverbot beat a 4-minute turing test. One minute less, but it's very promising and I don't see any reason why a later version of it couldn't pass a five-minute test.

I'm skeptical of the claim. Not that such a thing occurred, but of the details and the pool of subjects they drew on. Usually, the judges in something like this are knowledgeable about the AI field and conversation, not just random people.

Apparently, also, this is a different Cleverbot than the one online, which isn't remotely convincing and struck me as almost ELIZA-level. Rarely have its responses even been remotely germane to what I said.
 
I'm skeptical of the claim. Not that such a thing occurred, but of the details and the pool of subjects they drew on. Usually, the judges in something like this are knowledgeable about the AI field and conversation, not just random people.
Ah, but that's more in line with the test: an average interrogator. Not a superior one, an average one.
 