Artificial Intelligence Bot Subjected to Reddit; Becomes Psychopath - In other news, Autists subjected to Kiwi Farms becoming transgender

  • Thread starter: GS 281
https://www.bbc.com/news/technology-44040008

This is best read directly on the website.
When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape, it chooses something cheery: "A group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

These abstract images are traditionally used by psychologists to help assess the state of a patient's mind, in particular whether they perceive the world in a negative or positive light.

Norman's view was unremittingly bleak - it saw dead bodies, blood and destruction in every image.


Alongside Norman, another AI was trained on more normal images of cats, birds and people.

It saw far more cheerful images in the same abstract blots.

The fact that Norman's responses were so much darker illustrates a harsh reality in the new world of machine learning, said Prof Iyad Rahwan, part of the three-person team from MIT's Media Lab which developed Norman.

"Data matters more than the algorithm.

"It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."

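To make that concrete, here is a deliberately tiny sketch, not the MIT team's actual system (the article only says Norman interprets pictures and describes them in text): two "captioners" built from identical code, one trained on cheerful captions and one on grim ones, describing the same ambiguous input. The nearest-neighbour lookup and the function names are invented purely for illustration.

```python
# Toy "image captioner": each training example pairs a random feature vector
# (standing in for an image) with a caption. Describing a new image means
# returning the caption of the nearest training image. The code is identical;
# the only difference between the two models is the captions they were fed.
import numpy as np

rng = np.random.default_rng(0)

def train(captions, n_features=16):
    """Pair each caption with a random feature vector standing in for an image."""
    features = rng.normal(size=(len(captions), n_features))
    return features, captions

def describe(model, image):
    """Return the caption of the training image closest to the given image."""
    features, captions = model
    distances = np.linalg.norm(features - image, axis=1)
    return captions[int(np.argmin(distances))]

cheerful = train(["a group of birds sitting on top of a tree branch",
                  "a person holding an umbrella in the air",
                  "a vase of flowers on a table"])
grim = train(["a man is electrocuted",
              "a man jumps from a window",
              "a body lies on the ground"])

# The same ambiguous "inkblot" shown to both models.
inkblot = rng.normal(size=16)
print("standard AI:", describe(cheerful, inkblot))
print("Norman-like:", describe(grim, inkblot))
```

Whichever caption each model returns, it can only come from its own training set, which is the whole point of Rahwan's quote.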

Artificial intelligence is all around us these days - Google recently showed off AI making a phone call with a voice virtually indistinguishable from a human one, while fellow Alphabet firm DeepMind has made algorithms that can teach themselves to play complex games.

And AI is already being deployed across a wide variety of industries, including personal digital assistants, email filtering, search, fraud prevention, voice and facial recognition, and content classification.


It can generate news, create new levels in video games, act as a customer service agent, analyse financial and medical reports and offer insights into how data centres can save energy.

But if the experiment with Norman proves anything it is that AI trained on bad data can itself turn bad.

Racist AI
Norman is biased towards death and destruction because that is all it knows, and AI in real-life situations can be equally biased if it is trained on flawed data.

In May last year, a report claimed that an AI-powered computer program used by a US court for risk assessment was biased against black prisoners.


The program flagged that black people were twice as likely as white people to reoffend, as a result of the flawed information that it was learning from.

Predictive policing algorithms used in the US were also spotted as being similarly biased, as a result of the historical crime data on which they were trained.
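
The mechanism behind both examples is easy to reproduce on synthetic data. The sketch below is not the real court tool or its data; it simply fits an off-the-shelf logistic regression to made-up records in which the historical "reoffended" labels already differ by group, and shows the resulting risk scores inheriting that gap.

```python
# Synthetic illustration only: if the historical labels already differ by
# group, a model fitted to them reproduces that difference in its scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)      # a protected attribute, 0 or 1
priors = rng.poisson(1.5, size=n)       # a made-up "prior offences" feature

# Bake different historical base rates into the labels (40% vs 20%).
base_rate = np.where(group == 1, 0.40, 0.20)
reoffended = rng.random(n) < base_rate

X = np.column_stack([group, priors])
model = LogisticRegression().fit(X, reoffended)

scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", round(scores[group == 0].mean(), 2))  # ~0.20
print("mean risk score, group 1:", round(scores[group == 1].mean(), 2))  # ~0.40
```

The fitted model is not malfunctioning here; it is faithfully reproducing the regularities in the records it was handed, which is exactly the dispute that plays out in the replies further down the thread.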

Sometimes the data that AI "learns" from comes from humans intent on mischief-making. When Microsoft's chatbot Tay was released on Twitter in 2016, the bot quickly proved a hit with racists and trolls, who taught it to defend white supremacists, call for genocide and express a fondness for Hitler.

Norman, it seems, is not alone when it comes to easily suggestible AI.

And AI hasn't stopped at racism.

One study showed that software trained on Google News became sexist as a result of the data it was learning from. When asked to complete the statement, "Man is to computer programmer as woman is to X", the software replied "homemaker".
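
Analogy queries like that are easy to run against publicly released word vectors. The snippet below shows the general idea using the GoogleNews word2vec vectors through gensim's downloader; it assumes those packages and that large one-off download are available and that the phrase token computer_programmer is in the vocabulary, and it is not necessarily the exact setup the study used.

```python
# Analogy by vector arithmetic: computer_programmer - man + woman = ?
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")   # large one-off download

results = vectors.most_similar(positive=["woman", "computer_programmer"],
                               negative=["man"], topn=3)
for word, similarity in results:
    print(word, round(similarity, 3))
```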

Dr Joanna Bryson, from the University of Bath's department of computer science, said that the issue of sexist AI could be down to the fact that a lot of machines are programmed by "white, single guys from California" and can be addressed, at least partially, by diversifying the workforce.

She told the BBC it should come as no surprise that machines are picking up the opinions of the people who are training them.

"When we train machines by choosing our culture, we necessarily transfer our own biases," she said.

"There is no mathematical way to create fairness. Bias is not a bad word in machine learning. It just means that the machine is picking up regularities."

What she worries about is the idea that some programmers would deliberately choose to hard-bake badness or bias into machines.

To stop this, the process of creating AI needs more oversight and greater transparency, she thinks.

Prof Rahwan said his experiment with Norman proved that "engineers have to find a way of balancing data in some way", but he acknowledged that the ever-expanding and important world of machine learning cannot be left to programmers alone.

"There is a growing belief that machine behaviour can be something you can study in the same way as you study human behaviour," he said.

This new era of "AI psychology" would take the form of regular audits of the systems being developed, rather like those that exist in the banking world already, he said.
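
The article doesn't say what such an audit would actually measure. As one hypothetical example, a recurring check might compare how often a deployed model flags people from different groups and raise an alert when the rates diverge too far; the function and threshold below are invented for illustration, not taken from any real audit standard.

```python
# Hypothetical audit check: compare the rate at which a model flags members
# of each group as "high risk" and alert if the smallest-to-largest ratio
# falls below a chosen threshold.
import numpy as np

def flag_rate_ratio(flags: np.ndarray, group: np.ndarray) -> float:
    """Smallest per-group flag rate divided by the largest."""
    rates = [flags[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Example inputs: binary flags produced by some model, plus group labels.
flags = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

ratio = flag_rate_ratio(flags, group)
print(f"flag-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb sometimes used for disparate impact
    print("audit alert: flag rates diverge noticeably across groups")
```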

Microsoft's ex-chief envisioning officer Dave Coplin thinks Norman is a great way to start an important conversation with the public and businesses who are coming to rely on AI more and more.

It must start, he said, with "a basic understanding of how these things work".

"We are teaching algorithms in the same way as we teach human beings so there is a risk that we are not teaching everything right," he said.

"When I see an answer from an algorithm, I need to know who made that algorithm," he added.

"For example, if I use a tea-making algorithm made in North America then I know I am going to get a splash of milk in some lukewarm water."

From bad tea to dark thoughts about pictures, AI still has a lot to learn, but Mr Coplin remains hopeful that, as algorithms become embedded in everything we do, humans will get better at spotting and eliminating bias in the data that feeds them.
 
This is ridiculous. From the perspective of the AI, every image it is shown is just random blotches of different colors arranged in some meaningless pattern.
Whether you showed it images of kittens or the Manson murders doesn't matter. All you do is feed the AI random descriptions that it will apply to the next image you show it.

If you took images of people torn asunder in traffic crashes but gave cheerful descriptions, the AI wouldn't know the difference and give cheerful descriptions to the blotch-images, too.
 
This is ridiculous. From the perspective of the AI, every image it is shown is just random blotches of different colors arranged in some meaningless pattern.
Whether you showed it images of kittens or the Manson murders doesn't matter. All you do is feed the AI random descriptions that it will apply to the next image you show it.

If you took images of people torn asunder in traffic crashes but gave cheerful descriptions, the AI wouldn't know the difference and give cheerful descriptions to the blotch-images, too.
In other words, there's no such thing as true "A.I."
 
The program flagged that black people were twice as likely as white people to reoffend, as a result of the flawed information that it was learning from.

Predictive policing algorithms used in the US were also spotted as being similarly biased, as a result of the historical crime data on which they were trained.

In what way was the data flawed? Have we really reached a point where an uncomfortable outcome from an algorithm means the "data or algorithm must be wrong, the result can't possibly be right"?

If you train a machine on historical data, and historical data shows blacks are more likely to reoffend, the machine will say blacks are more likely to reoffend. That's not racism, that's the machine doing what you told it to do. You told it to find patterns linked to reoffense. It found a strong correlation between race and recidivism. Why do you then insist the machine is wrong instead of accepting a historical correlation between race and recidivism?

This raises a troubling spectre of fine-tuning. Algorithms are going to be compromised with hard-coded "affirmative action" biases, to prevent uncomfortable truths.
 
In other words, there's no such thing as true "A.I."
I mean it's an article about bias that's kind of based on bias. It's a study that just confirms, "yes, things have bias!"
This is almost kind of like asking the AI if it hears "yanny" or "laurel".

I mean it learns from the data you feed it and expose it to, of course it's going to have these biases.
 
In what way was the data flawed? Have we really reached a point where an uncomfortable outcome from an algorithm means the "data or algorithm must be wrong, the result can't possibly be right"?

Yes.

Although I'd love to see an A.I. that only took data from blue checkmark Twitter accounts and find that they advocate for White Genocide and Spaceship Pyramids.
 
In what way was the data flawed? Have we really reached a point where an uncomfortable outcome from an algorithm means the "data or algorithm must be wrong, the result can't possibly be right"?

If you train a machine on historical data, and historical data shows blacks are more likely to reoffend, the machine will say blacks are more likely to reoffend. That's not racism, that's the machine doing what you told it to do. You told it to find patterns linked to reoffense. It found a strong correlation between race and recidivism. Why do you then insist the machine is wrong instead of accepting a historical correlation between race and recidivism?

This raises a troubling spectre of fine-tuning. Algorithms are going to be compromised with hard-coded "affirmative action" biases, to prevent uncomfortable truths.

Therein lies the problem. AI is only good at drawing probable conclusions based upon the data you feed it. If you give an AI a thousand pictures of horse porn, it will see horse porn everywhere. The AI lacks the values ingrained in us by society. Programming that in is going to be interesting, especially when you hit the question of "whose values do we program in?" Of course, then the AI takes those values to their logical extreme, and things get a little genocidal.
 
In what way was the data flawed? Have we really reached a point where an uncomfortable outcome from an algorithm means the "data or algorithm must be wrong, the result can't possibly be right"?

If you train a machine on historical data, and historical data shows blacks are more likely to reoffend, the machine will say blacks are more likely to reoffend. That's not racism, that's the machine doing what you told it to do. You told it to find patterns linked to reoffense. It found a strong correlation between race and recidivism. Why do you then insist the machine is wrong instead of accepting a historical correlation between race and recidivism?

This raises a troubling spectre of fine-tuning. Algorithms are going to be compromised with hard-coded "affirmative action" biases, to prevent uncomfortable truths.
Well, for one, the ink blot test is a godawful way of predicting psychopathy. Two, the other one was raised on “more normal images of cats and people”. If you trained a blank-slate computer on normal images, of course its responses would be normal compared to the one that was exposed to a wider array of data to draw inferences from.
 
If you train a machine on historical data, and historical data shows blacks are more likely to reoffend, the machine will say blacks are more likely to reoffend. That's not racism, that's the machine doing what you told it to do. You told it to find patterns linked to reoffense. It found a strong correlation between race and recidivism. Why do you then insist the machine is wrong instead of accepting a historical correlation between race and recidivism?
the machine is wrong because it got hacked into the dark side of history through voldemort's midichlorians.
#woke
 