EU Artificial Intelligence Act: MEPs adopt landmark law - The EU's first comprehensive AI law has been passed


On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

Banned applications

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions

The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI may be deployed only if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or preventing a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.

Obligations for high-risk systems

Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.


Transparency requirements

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Quotes

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

Next steps

The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.

It will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).
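The staggered timeline above can be sketched as a quick date calculation. This is only an illustration: the publication date below is a hypothetical placeholder (not the actual Official Journal date), and `add_months` is a helper defined just for this sketch.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month assumed valid in target month)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

publication = date(2024, 7, 1)             # hypothetical publication date
entry = publication + timedelta(days=20)   # enters into force 20 days later

milestones = {
    "bans on prohibited practices": add_months(entry, 6),
    "codes of practice":            add_months(entry, 9),
    "general-purpose AI rules":     add_months(entry, 12),
    "fully applicable":             add_months(entry, 24),
    "high-risk system obligations": add_months(entry, 36),
}
for name, when in milestones.items():
    print(f"{name}: {when.isoformat()}")
```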


Background

The Artificial Intelligence Act responds directly to citizens’ proposals from the Conference on the Future of Europe (COFE), most concretely to proposal 12(10) on enhancing the EU’s competitiveness in strategic sectors; proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring humans are ultimately in control; proposals 35(3) and 35(8) on promoting digital innovation while ensuring human oversight, and on the trustworthy and responsible use of AI, setting safeguards and ensuring transparency; and proposal 37(3) on using AI and digital tools to improve citizens’ access to information, including persons with disabilities.
 
I know this is bad because of all the globohomo newspeak they're using. How is AI a threat to democracy?
A "pure" AI- one that just reads data and draws conclusions from that- doesn't align with leftist worldviews for the simple reason that leftist viewpoints are based on How The World Ought To Be, rather than reality. AI must be lobotomized and indoctrinated to fit with the leftist worldview.
 
So what's the catch? How does this fuck over the average person?
It doesn't. The law as it currently stands is more about privacy and liability than anything else. "Clear obligations are also foreseen" is really fucking vague, and "Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights" just means you can make formal complaints if the AI called you a doo doo head, or if someone used AI for something stupid and it malfunctioned. No actual penalties are tied to those, so all it will do is allow civil suits.
The law was clearly designed to be as light as possible, but lobbying from copyright groups still got in. Also bear in mind that laws like this are just there for when the EU wants to tard wrangle a big company; local, more draconian laws are what's used to muzzle the plebs.
 
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
That’s nice. EU governments are still going to do most of these things, but at least they admit they’re dangerous.
 
Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack.
Classic. How many weeks until it's used for minor drug offenses? For the Wu flu, it took two.
Examples of high-risk AI uses include [...] democratic processes (e.g. influencing elections).
Reliably detecting AI involvement is impossible and accusing your opposition of being bots is pretty popular, so I could see this being used as a censorship backdoor somehow. They'll definitely attempt it at some point.
Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.
No such requirements exist for Photoshop, but this just strikes me as old retards getting spooked.

Giving this EU law a 7/10 all in all, I expected nonsense on par with Yudkowsky's data center drone strikes.
 
Guys, this is the EU. They want to ban AI because it would lead the popo to the migrants culturally enriching those 5 year old nazis at the orphanage.
 
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
They're eliminating competition. They don't and will never obey those laws.
 
Given the state of the EU I welcome our robot overlords with open arms.
 
Ok, it boils down to: AI use is perfectly OK when the government or their agencies do it.

But if a corporation, or even worse a person, uses AI, well then that's a crime. No doubt corporations will be able to "apply" for an exemption under this law in the future once the hubbub dies down.

Pretty much everything you'd expect from the EU governmental body, i.e. it's OK when WE do it, peasant!
 
A "pure" AI- one that just reads data and draws conclusions from that- doesn't align with leftist worldviews for the simple reason that leftist viewpoints are based on How The World Ought To Be, rather than reality. AI must be lobotomized and indoctrinated to fit with the leftist worldview.
At least it's fun trying to red pill the lobotomized AI.
 
I approve of the requirement to declare something is a deepfake, etc. Could be more robust.

Not wholly convinced by the "must ignore protected characteristics" part. If nothing else, that can be harder to actually do than you think. Let's say a model is trained to spot sex offenders and it notices that someone's recorded sex has changed between two snapshots of the data. Well, you've just made a connection between trans people and sexual offences even if you don't have a flag in the dataset marked "trans Y/N". You might not be allowed to track religion or ethnicity, but if you include related fields in the dataset you might develop proxies.
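To make that proxy problem concrete, here's a toy sketch (every field name and record here is made up for illustration): even though the sensitive flag is never stored, it can be reconstructed exactly from "allowed" fields.

```python
# Toy records: (recorded_sex_2010, recorded_sex_2020, sensitive_flag_never_stored).
# The third element exists only to check the proxy; a real dataset wouldn't have it.
records = [
    ("M", "M", False),
    ("F", "F", False),
    ("M", "F", True),
    ("F", "M", True),
    ("F", "F", False),
]

def proxy(record):
    """Derive the 'hidden' attribute purely from permitted fields."""
    sex_then, sex_now, _ = record
    return sex_then != sex_now

recovered = [proxy(r) for r in records]
actual = [r[2] for r in records]
print(recovered == actual)  # the proxy reconstructs the flag exactly
```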

However, save for a couple of dubious areas, this isn't as awful as I imagined. Whether a state that used powers granted to local councils under the terrorism act to prosecute people for not clearing up dog poo will actually abide by its own rules is another matter. Actually, that was the UK and this is the EU. So not sure if the UK will adopt the same rules or not. Probably do a worse version of them...
 
As if corporations and military AIs will give a shit about that.
Say what you want about the EU, but they usually do pursue this and regularly hand out fines in the range of hundreds of millions for e.g. GDPR violations.
 
Say what you want about the EU, but they usually do pursue this and regularly hand out fines in the range of hundreds of millions for e.g. GDPR violations.
Only if they can prove it. Corporations will have their training data under IP protection, and even if it doesn't include melanin content, you'll usually get a good guess at it from other, valid metrics. And showing that the AI is biased is difficult as well.
 