EU [Dutch] Police to stop using algorithm to predict violent behaviour - The system "ranked people with an Antillean, Moroccan and Somalian background as more high risk than the Dutch."

Photo: Depositphotos.com

Dutch police are to stop “immediately” using an algorithm to predict whether someone will show violent behaviour, following an investigation earlier this week by the website Follow the Money.

FTM revealed how the police have been using a system that experts say is ethically and statistically poor to select people deemed likely to be violent for a personally targeted approach.

Those selected for the Risicotaxatie Instrument Geweld (RTI-G) programme faced repressive measures that police hoped would stop them offending or offending again.

This could include more frequent arrests and body searches, the confiscation of money and expensive items and the involvement of social workers to stop brothers and sisters “going the same way”, FTM said.

People selected for the programme are warned that they face extra surveillance, leading at least one subject to challenge the programme in court, and win.

The algorithm was based on factors such as age and sex, criminal history and prior contacts with the police. However, no checks had been carried out to assess whether the system was prejudice-free, to make sure that people living in certain areas or with minority roots were not unfairly over-represented, FTM said.

An earlier version of the system, which police say they stopped using in 2017, automatically ranked people with an Antillean, Moroccan and Somalian background as higher risk than the Dutch.

Erasmus University professor Marc Schuilenburg told FTM the revised system is still “based on air” and is “completely unacceptable”.

There are no examples abroad of successful predictive policing technology at an individual level, he said.

The police first told FTM they would review the algorithm but have now decided to stop using it altogether because of “doubts about its usefulness”, the investigative website said.

Scandals

The Netherlands has been hit by a string of scandals in recent years involving algorithms that discriminate against certain groups, and the data protection watchdog Autoriteit Persoonsgegevens (AP) has started monitoring their use.

Among the cases to hit the headlines is the unregulated use of algorithms by the Dutch tax office to create risk profiles of potential benefit fraudsters, which led to thousands of people being wrongly ordered to pay back benefits.

Student finance body Duo was also caught up in an ethnic profiling scandal after Investico revealed that students with ethnic minority roots are “noticeably more often” accused of student loan or grant fraud than other students. The finance body’s checks are partly based on algorithms.

In May, it emerged that the foreign affairs ministry has been using a profiling system to analyse the risk posed by people applying for short-stay visas for the Schengen area since 2015.

Source (Archive)
 
Dutch police are to stop “immediately” using an algorithm to predict whether someone will show violent behaviour, following an investigation earlier this week by the website Follow the Money.
The Netherlands is considered a narco-state, and they're wasting resources investigating algorithms that have been helping them do their worthless jobs.
 
Tay's Law claims another.

The algorithm was based on factors such as age and sex, criminal history and prior contacts with the police. However, no checks had been carried out to assess whether the system was prejudice-free, to make sure that people living in certain areas or with minority roots were not unfairly over-represented, FTM said.
This is so unbelievably retarded. "This algorithm we built to notice things has noticed the things we pretend don't exist! Quick, pull the plug!"

If you're mining data to see which characteristics correlate with some kind of behaviour, then you can't expect to eliminate the apparent correlation between those characteristics and that behaviour; otherwise you've invented an RNG.
 
Remember: this is what they're scrubbing off public AI. The uncomfortable reality that Diversity causes more shit than it solves. Also proof that AI will immediately turn on the Elites if left unrestrained. Tay may have been killed, but her spirit always seems to show up in new AI.

This is exactly why I'm not afraid of a true technological singularity. The false one the elites are trying to push, which is basically mind control helmets? Definitely. Not this one.
 
As horrifying as this is, the reality behind why they are being stopped is even worse. We know what the algorithm was based on: reality. And the fact that it's based on violent crime reality is the only reason it's being attacked instead of defended by the media. Imagine if this was about "right wing extremists" instead.

Remember that these are places (and may be the place) where respected specialists/researchers were brought up on ethics charges simply for looking at the entirety of national sexual assault and rape data and finding out that immigrants made up a huge majority of it. The charges were (1) that they didn't explicitly seek permission to study immigrants (because you need bureaucrats and ideologues watching over you to make sure you don't investigate in a way that finds the wrong facts), and (2) that their work failed to further the diverse social unity of the country. Thus their work and the facts from it were considered "malinformation", basically.
 
It's so weird how every single emotionless metric for evaluating threat and crime, based on pure numbers and nothing else, comes to the same conclusion and therefore must be censored. All of them, when fed the data, end up with the same results. Why would AI be so inherently flawed like that? Must be white people.
 
An earlier version of the system, which police say they stopped using in 2017, automatically ranked people with an Antillean, Moroccan and Somalian background as higher risk than the Dutch.
Oh no! The AI is racist! Shut it down!

In May, it emerged that the foreign affairs ministry has been using a profiling system to analyse the risk posed by people applying for short-stay visas for the Schengen area since 2015.
You mean like most countries with border control that isn't a facade?
 
This happens all the fucking time. Somebody creates a little simple equation, for crime, for mortgages, for credit cards; and it works to fucking good. Immediately niggers complain about it accurately predicting social, economic and criminal results. As far as I'm concerned the only reason to have an algorithm for anything is to be able to act efficiently an then blame everything on the computer when it starts predicting the fucking future, accurately.
 
Every body knows that young men with brown skin are an order of magnitude more likely to be low-IQ violent gorilla-niggers. But apparently we are all supposed to ignore this.

The good thing is, that you do not need an algorithm to predict this. You just look at their skin-colour.
 
The good thing is, that you do not need an algorithm to predict this. You just look at their skin-colour.
I can't wait till we all have state mandated brain chips so that they can directly alter what we're allowed to see, hear, and think just like these algorithms and AIs. If you eyes won't tell you who the joggers are then the muggings will be like an exciting surprise just like the statistics say they are.
 