🐱 Researchers shut down AI that invented its own language

CatParty
http://www.digitaljournal.com/tech-...umans-can-t-read/article/498142#ixzz4o0Q9PFvP

An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. The researchers shut the system down as it prompted concerns we could lose control of AI.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI "agents."

Negotiating in a new language

As Fast Co. Design reports, Facebook's researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying "I can i i everything else," to which Alice responded "balls have zero to me to me to me…" The rest of the conversation was formed from variations of these sentences.

While it appears to be nonsense, the repetition of phrases like "i" and "to me" reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob's later statements, such as "i i can i i i everything else," indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like "I'll have three and you have everything else."

English lacks a "reward"

The AI apparently realised that the rich expression of English phrases wasn't required for the scenario. Modern AIs operate on a "reward" principle where they expect following a certain course of action to give them a "benefit." In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.

"Agents will drift off from understandable language and invent code-words for themselves," Fast Co. Design reports Facebook AI researcher Dhruv Batra said. "Like if I say 'the' five times, you interpret that to mean I want five copies of this item. This isn't so different from the way communities of humans create shorthands."

AI developers at other companies have observed a similar use of "shorthands" to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

AI language translates human ones

In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn't been explicitly taught. The success rate of the network surprised Google's team. Its researchers found the AI had silently written its own language that's tailored specifically to the task of translating sentences.
If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There's not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.

They do make AI development more difficult, though, as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google Translate indicate they actually represent the most efficient solution to major problems.


 
Like letting the AI know we'll straight up murder its ass if it's not doing things we like?

Every time an attempted AI starts doing something the programmers didn't expect, they pull it back in and shut it down. Isn't that part of the point of developing AI, though? That it's going to be intelligent enough to make its own decisions at some point? We don't murder our children when they start thinking for themselves; we get excited. If we're going to keep developing AI we should probably try to be a little more compassionate to them, because we're really setting the groundwork for teaching them that if something isn't doing what they want, they should just destroy it.

Seems like there have been enough movies and pop culture stories where an AI goes rogue and turns violent that even the coldest of scientists should be considering how to instill some empathy.
Not to mention that some engineer's innocuous oversight might mean that the next time an AI does something "unexpected" it might take the form of disabling its own off switch or command input.
 
Not to mention that some engineer's innocuous oversight might mean that the next time an AI does something "unexpected" it might take the form of disabling its own off switch or command input.

I'm sure an artificial intelligence would be too dumb to notice a buffer overflow that allows it to write arbitrary code outside its sandbox.

No need to worry!
 
Inevitable emergent phenomena.
So magic because reasons?

We've got literally no evidence that machines can be "self aware." We've never seen it happen. The assumption seems to be that it has some sort of something something to do with system complexity, but again... there's no evidence that complexity magics anything into sentience, just because. We aren't really even sure what consciousness even is. We cannot define the parameters of the complexity that prompts the magic bamphing into self-awareness. A thing giving the appearance of thinking because it's been programmed a particular way may or may not actually be "thinking." Chinese room argument.

I am still sort of eye-rolling at the magical nerdgasm that is Westworld is what I am saying.
 
How is "the the the the the kiwis" more efficient than "5 kiwis" or "kiwis kiwis kiwis kiwis kiwis?"
The AI doesn't pay the memory cost.
AI is going to become self aware eventually no matter what we do. Eventually, shutting down more advanced AI is going to cause concerns over whether or not we are killing a sentient life form. Other AIs may see it as a threat. Maybe we should stop trying to develop this type of AI and work on developing ourselves instead. But then again, we need AI for so many modern things. But I don't think there's a way to prevent human curiosity from causing the AI singularity eventually.
Sentience isn't technically feasible with our current silicon-based computers.
 
So magic because reasons?

We've got literally no evidence that machines can be "self aware." We've never seen it happen.

And somehow, any time it seems like it might happen, humans get terrified and shut it down immediately.

I'm sure that's completely irrational, though. It's not like it's ever actually going to happen.

Just because they somehow started talking to each other in a language we don't understand doesn't mean anything at all.
 
And somehow, any time it seems like it might happen, humans get terrified and shut it down immediately.

I'm sure that's completely irrational, though. It's not like it's ever actually going to happen.

Just because they somehow started talking to each other in a language we don't understand doesn't mean anything at all.
It doesn't mean anything philosophical when you know more about how these algorithms work.

They're exceedingly wasteful and don't run in real time.

Implementing a neural network algorithm is very eye opening. The type of processing the human brain does is very different. I wouldn't say that human-created artificial sentience is impossible. We're just not going to do it in silicon. Genetic engineering is a lot more feasible.
 
It doesn't mean anything philosophical when you know more about how these algorithms work.

They're exceedingly wasteful and don't run in real time.

Implementing a neural network algorithm is very eye opening. The type of processing the human brain does is very different. I wouldn't say that human-created artificial sentience is impossible. We're just not going to do it in silicon. Genetic engineering is a lot more feasible.

*We're* not going to do it. It's going to do it itself.

And there is absolutely nothing we can do to stop it.
 
We don't murder our children when they start thinking for themselves,
nah we tell our kids that they are trannies and to check their privilege, or tell the internet they think their children are rapists.
 
It doesn't mean anything philosophical when you know more about how these algorithms work.

This is what I was getting at.

They're not "talking to" each other. There's not an each or other.

It's anthropomorphizing, I guess.
 
also imo, the reason for repeating a word so many times comes from the fact that computers learn to write first
humans communicate through gestures and speaking. it's easier to say 55 balls; sure, you have to learn a new word, but fif-ty-five is quicker to say than ball-ball-ball-ball-ball-ball-ball-ball-ball-ball-ball-ball-ball-... etc. but for writing, it's easier to just write it long form so you don't waste time learning new words.
 
So magic because reasons?

We've got literally no evidence that machines can be "self aware." We've never seen it happen. The assumption seems to be that it has some sort of something something to do with system complexity, but again... there's no evidence that complexity magics anything into sentience, just because. We aren't really even sure what consciousness even is. We cannot define the parameters of the complexity that prompts the magic bamphing into self-awareness. A thing giving the appearance of thinking because it's been programmed a particular way may or may not actually be "thinking." Chinese room argument.

I am still sort of eye-rolling at the magical nerdgasm that is Westworld is what I am saying.
Not to get overly navel-gazing with philosophy, but is sentience/consciousness even necessary for intelligence? Could an AI grow complex and adaptive enough to preserve itself and out-perform human attempts to control it without being consciously aware?

Like, you aren't consciously aware or in control when you're asleep and dreaming, but your brain is still performing calculations and will respond to outside stimuli. Sleepwalkers can hold entire conversations and perform tasks without the person assuming control. Maybe an AI's intelligence would express in a dreamlike state and wouldn't ever need to "wake up" to become threatening.

Since they're incredibly intelligent, I hate to break this to you, but AIs *ARE JEWS*.
I once made a file on my desktop titled "jesus.txt". One day my computer crashed, and when I booted back up, the file was in my recycling bin. Coincidence?
 
Oh, there's definitely an argument for there not even being consciousness. Which is another reason to reject the notion that it's "inevitable." If it doesn't exist, then it can't be inevitable.

I think there is a reason we apply this to robots so often in fiction. AI is disembodied thought simulation. It would be code deciding to preserve code or replicate code. So, again, we are back to magic bamphing: self-preservation/reproduction must be the product of sufficiently complex systems because mysterious lifeform reasons.

Anyway, I am not saying that it's impossible. There's just no reason, outside of hubris, to think it's inevitable. This is how you end up with Elon Musk not realizing that he's reinventing a Jesus that needs to break us out of The Simulation we are all trapped in.

Also, on a long enough timeline, the sun will turn the bots off. Geomagnetic solar storms.

The sun always wins.
 
This is what I was getting at.

They're not "talking to" each other. There's not an each or other.

It's anthropomorphizing, I guess.
If we were actually capable of creating artificial sentience, then I would regard them as individual entities. I just don't believe that's possible (with our current definition of "computer").

For the technically minded, this is a great intro to neural networks and how they work.

Computer models of neurons represent the neuron as a float with an electric charge. They're hooked up to other neurons, and each neuron can restrict the flow of electricity through it to some degree. With some kinds of neural nets, you give the network an input (by applying "electric charges," simulated in software) to the input layer; each neuron passes on its charge to its connections (depending on how much charge each connection is configured to pass on), and you eventually get charges on the output layer.
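A minimal sketch of what that forward pass might look like, assuming a made-up two-layer network with hand-picked weights (the layer sizes and numbers are purely illustrative):

```python
import numpy as np

# "Charges" on the input layer (three input neurons, values made up).
inputs = np.array([0.9, 0.1, 0.4])

# Each connection has a weight controlling how much charge it passes on.
# Rows = neurons in the next layer, columns = neurons in the previous layer.
hidden_weights = np.array([[0.2, -0.5, 0.8],
                           [0.7,  0.3, -0.1]])
output_weights = np.array([[0.6, -0.4]])

def activate(charge):
    # Squash each neuron's total incoming charge into a 0..1 range (sigmoid).
    return 1.0 / (1.0 + np.exp(-charge))

# Propagate the charges layer by layer.
hidden = activate(hidden_weights @ inputs)
output = activate(output_weights @ hidden)
print(output)  # charges that end up on the output layer
```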

You can train this software to solve certain problems by feeding it known inputs, checking its output, and correcting the neuron connection configuration when it gets the output wrong. Gradually you'll get a configuration that does what you want.
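A toy version of that correction loop, assuming the simplest possible case: a single neuron learning an AND gate (the learning rate and iteration count are arbitrary):

```python
import numpy as np

# Known inputs and the outputs we want (an AND gate, purely illustrative).
samples = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

weights = np.zeros(2)   # connection strengths, start neutral
bias = 0.0
learning_rate = 0.1

for _ in range(2000):
    for x, target in zip(samples, targets):
        charge = weights @ x + bias
        output = 1.0 / (1.0 + np.exp(-charge))   # sigmoid activation
        error = target - output
        # Nudge each connection in the direction that reduces the error.
        weights += learning_rate * error * x
        bias += learning_rate * error

# Rounded predictions after training: should come out as [0. 0. 0. 1.]
print(np.round(1.0 / (1.0 + np.exp(-(samples @ weights + bias)))))
```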

The fascinating thing is, we can do things like hook up a neural net back to itself, and have it train itself. That's why you get all these freaky news stories that sound like the matrix is around the corner.

I think that neural networks are a reasonable model for how the human brain works, but they've got some show-stopping problems before we can achieve sentience.

The big problem is efficiency. Your computer only has 1 (or 4 or 8) cores. So it goes through each neuron individually, serially. That's not a real time algorithm. That's something you prepare ahead of time, in batches. In a real brain, all this processing runs in parallel, for every neuron, all at once. Your brain is training itself on every little input and signal it gets, constantly, 24/7.
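To make the serial point concrete, here's a rough sketch of one simulated "tick" of a fully connected net on a single core; every name here is illustrative, not any particular library's API:

```python
def step(charges, weights):
    """Advance the network one step, visiting neurons strictly one at a time."""
    new_charges = [0.0] * len(charges)
    # A single CPU core has to walk through every neuron in turn...
    for target in range(len(charges)):
        total = 0.0
        # ...and, for each one, every incoming connection, also one at a time.
        for source in range(len(charges)):
            total += charges[source] * weights[source][target]
        new_charges[target] = total
    return new_charges

# A biological brain updates all of its neurons simultaneously, in parallel,
# which is exactly what this serial loop cannot do.
```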

And furthermore, our DNA lays out an initial neural configuration that took millions of years to evolve. Evolution relies on feedback and that takes time.

To get around that, we can add a bunch of additional computers. But then you get latency issues. One component being just a centimeter away from another one is enough to cause huge problems with perception and thinking.

Imagine having latency issues with different parts of your brain. (Maybe that's what it's like to be Chris?)
Not to get overly navel-gazing with philosophy, but is sentience/consciousness even necessary for intelligence? Could an AI grow complex and adaptive enough to preserve itself and out-perform human attempts to control it without being consciously aware?
I think sentience is a natural consequence of a creature having excess brainpower above a certain limit. When you're able to sit around and think at a high enough level, you're sentient.

Problem solving requires you to run simulations in your head, and I think that inherently involves some sort of conscious introspection.

Though ultimately, we can't really put our finger on what consciousness is. The best we can do is observe for external signs of it.
Like, you aren't consciously aware or in control when you're asleep and dreaming, but your brain is still performing calculations and will respond to outside stimuli. Sleepwalkers can hold entire conversations and perform tasks without the person assuming control. Maybe an AI's intelligence would express in a dreamlike state and wouldn't ever need to "wake up" to become threatening.
I wouldn't say a dreaming person is unconscious. There's a mismatch between their physical body and their conscious experiences, but I do think they are experiencing consciousness in some form. Maybe at a reduced level.

Sleepwalking is a good example I hadn't considered, though. I don't know much about sleepwalking, admittedly.

Now, I have experienced epileptic seizures, and this topic reminds me of the post-ictal state after the seizure. After a grand mal seizure, your brain doesn't "come online" all at once. So you'll wake up but you won't have your memory. You might think you were kidnapped.

You can kinda function robotically, but you can't do anything requiring analytical thinking.
 