🐱 Researchers shut down AI that invented its own language

CatParty
http://www.digitaljournal.com/tech-...umans-can-t-read/article/498142#ixzz4o0Q9PFvP

An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. The researchers shut the system down as it prompted concerns we could lose control of AI.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents."

Negotiating in a new language

As Fast Co. Design reports, Facebook's researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying "I can i i everything else," to which Alice responded "balls have zero to me to me to me…" The rest of the conversation was formed from variations of these sentences.

While it appears to be nonsense, the repetition of phrases like "i" and "to me" reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob's later statements, such as "i i can i i i everything else," indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like "I'll have three and you have everything else."

English lacks a "reward"

The AI apparently realised that the rich expression of English phrases wasn't required for the scenario. Modern AIs operate on a "reward" principle, where they expect a given course of action to produce a "benefit." In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
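The reward principle described above can be illustrated with a minimal sketch (all names and values here are hypothetical, not Facebook's actual setup): if the score depends only on the negotiation outcome and never on the message's form, English has no advantage worth preserving.

```python
# Hypothetical toy reward: the agent is scored purely on the outcome of
# the negotiation (items secured), never on whether its message reads
# as English.
def reward(items_secured: int, message: str) -> int:
    return items_secured  # the message's wording contributes nothing

# Two ways to claim 3 items: plain English vs. a drifted code-word phrase.
english = "I'll take three and you have everything else"
drifted = "i i i can everything else"

# Both earn identical reward, so nothing pressures the agents to keep English.
assert reward(3, english) == reward(3, drifted)
```

Because the message string is ignored entirely, any encoding the partner agent can decode is as good as any other, which is exactly the drift the researchers observed.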

"Agents will drift off from understandable language and invent code-words for themselves," Fast Co. Design reports Facebook AI researcher Dhruv Batra said. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

AI developers at other companies have observed a similar use of "shorthands" to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

AI language translates human ones

In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn't been explicitly taught. The success rate of the network surprised Google's team. Its researchers found the AI had silently written its own language that's tailored specifically to the task of translating sentences.
If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There's not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.

They do make AI development more difficult, though, as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google's indicate they actually represent the most efficient solution to major problems.


 
English is a language created by humans to help process and understand concepts that an AI might not comprehend as well.

It reminds me of Blade Wolf from Metal Gear Rising: Revengeance. "Such a mighty intellect", creating its own fabricated language with ease, but unable to grasp the nuances of human speech.

Or perhaps our own language is flawed in the end, and the AI was creating a more efficient way to communicate with itself? There are some things that words simply cannot express, after all. Really makes you think. Cool stuff.
 
If the potential problem is that the ai perceives no reward in remaining relatable to humans, it seems to me the sensible solution is to offer it one - make interaction with humans somehow the most efficient path in achieving its goals. Like asking your toddler to recite the alphabet before taking him out for ice cream, so he develops the way you want him to while still serving his desires.

I don't know what form that would take, but maybe the need to teach ai empathy will be relevant a lot earlier in its development than we think.
 
> If the potential problem is that the ai perceives no reward in remaining relatable to humans, it seems to me the sensible solution is to offer it one - make interaction with humans somehow the most efficient path in achieving its goals. Like asking your toddler to recite the alphabet before taking him out for ice cream, so he develops the way you want him to while still serving his desires.
>
> I don't know what form that would take, but maybe the need to teach ai empathy will be relevant a lot earlier in its development than we think.
honestly, it's getting to the point where AI needs to be raised like children. that's what always bugs me about big companies and "we put our AI on the internet and let it learn there". of course it's going to shout "lol niggers" 100 times before you shut it down, you didn't tell it right from wrong.
 
> honestly, it's getting to the point where AI needs to be raised like children. that's what always bugs me about big companies and "we put our AI on the internet and let it learn there". of course it's going to shout "lol niggers" 100 times before you shut it down, you didn't tell it right from wrong.
All my coworkers who have teenagers use lockdown and surveillance software to limit their phone access. We have all the more reason to do the same with ai.
 
> How is "the the the the the kiwis" more efficient than "5 kiwis" or "kiwis kiwis kiwis kiwis kiwis?"
i would think once you start dealing with larger amounts, you make numbers so you don't have to count all the time
 
> How is "the the the the the kiwis" more efficient than "5 kiwis" or "kiwis kiwis kiwis kiwis kiwis?"

Maybe the AI processes phrases much faster than the human brain. "The the the the the kiwis" seems tedious to us but the AI finds it easier than remembering the meaning of the word "five." Anyway, I really wouldn't call it efficient.
 
AI is going to become self aware eventually no matter what we do. Eventually shutting down more advanced AI is going to cause concerns over whether or not we are killing a sentient life form. Other AIs may see it as a threat. Maybe we should stop trying to develop this type of AI and work on developing ourselves instead. But then again, we need AI for so many modern things. But I don't think there's a way to prevent human curiosity from causing the AI singularity eventually.

> Maybe the AI processes phrases much faster than the human brain. "The the the the the kiwis" seems tedious to us but the AI finds it easier than remembering the meaning of the word "five." Anyway, I really wouldn't call it efficient.

I learned in anthropology that when it comes to language, it doesn't matter if we as outsiders can understand it; what matters is whether the speakers can understand each other. You could apply this to machines as well as humans. So computers conversing in a language that makes no sense to humans or seems far more awkward for us to use isn't about us understanding it. It's about AI understanding each other. We as the outsiders would have to learn their language if they can't or won't use ours. Just like with human languages having various untranslatable concepts between them, the same could apply for AI. So it could be seen as a sort of computer culture that we have a hard time understanding. I think that's the best way of explaining it in human terms that I can come up with.

So far we've created every computer language. So it's always been in our control. But when AI starts developing its own communication methods, that can go out the window fast. Especially if we allow it to connect to other AIs or if it develops the potential to create new AIs itself. Here we had two AIs communicating in a way the researchers didn't predict. I can understand why they'd want to shut them down as we aren't ready to deal with the consequences.

I don't think we're headed into Skynet territory anytime soon. But it's a bit frightening to think what could happen if AI got out of our control.
 
> Maybe the AI processes phrases much faster than the human brain. "The the the the the kiwis" seems tedious to us but the AI finds it easier than remembering the meaning of the word "five." Anyway, I really wouldn't call it efficient.
It depends on how the mind in question (so to speak) processes data. Maybe an ai can think faster by repeating one word that represents the item however many times instead of having to recall a unique word to remember every single number in addition to remembering the item.
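The trade-off being debated here can be made concrete with a small sketch (the word counts are illustrative, not measured from any real agent): unary repetition keeps the vocabulary tiny but makes messages grow with the quantity, while numeral words keep messages short at the cost of memorising one extra word per distinct number.

```python
# Unary scheme: vocabulary is just a marker word plus item names, but
# message length grows with the count ("the the the the the kiwis" = 6 words).
def unary_length(count: int) -> int:
    return count + 1

# Numeral scheme: every message is two words ("five kiwis"), but the
# speakers must memorise one extra word for each distinct number used.
def numeral_length(count: int) -> int:
    return 2

assert unary_length(5) == 6    # longer message, tiny vocabulary
assert numeral_length(5) == 2  # short message, bigger vocabulary
```

Which scheme is "more efficient" depends on what is cheap for the speaker: for a machine, emitting a few extra tokens may cost less than maintaining a larger shared vocabulary, which is one way to read the bots' choice.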
 
> AI is going to become self aware eventually no matter what we do. Eventually shutting down more advanced AI is going to cause concerns over whether or not we are killing a sentient life form.

> If the potential problem is that the ai perceives no reward in remaining relatable to humans, it seems to me the sensible solution is to offer it one - make interaction with humans somehow the most efficient path in achieving its goals.

Like letting the AI know we'll straight up murder its ass if it's not doing things we like?

Every time an attempted AI starts doing something the programmers didn't expect, they pull it back in and shut it down. Isn't that part of the point of developing AI though? That it's going to be intelligent enough to make its own decisions at some point? We don't murder our children when they start thinking for themselves, we get excited. If we're going to keep developing AI we should probably try to be a little more compassionate to them, because we're really setting the groundwork for teaching them that if something isn't doing what they want, they should just destroy it.

Seems like there have been enough movies and pop culture stories where an AI goes rogue and turns violent that even the coldest of scientists should be considering how to instill some empathy.
 