Welcome to Your Nightmare: The Externalization of the Agentic State - The Agentic State is an instrument for even tighter control of all activities on earth, and a dream come true for every aspiring dictator who wants to play god


archive link: https://archive.is/NWkTR
- - -

Jacob Nordangård

Jul 15, 2025


In October last year, the World Economic Forum opened the Global Government Technology Centre (GGTC) at GovTech Campus Deutschland in Berlin with the mission to “accelerate the digital transformation of government”, together with “founding partner” Capgemini, a French IT consultancy. The latter is an aspiring “global leader in Agentic AI-powered Intelligent Operations”.

The initiative is part of the WEF's Centre for the Fourth Industrial Revolution network and the ongoing technocratic world revolution, and fulfills the G20's goal of developing a Digital Public Infrastructure to accelerate the Sustainable Development Goals (an initiative launched during India's presidency in 2023).

Why they chose Berlin, a city with such a troubled history of totalitarian government control, one can only speculate. But it is fitting.


The goal is to “make the GGTC the global platform for driving GovTech forward.”

GovTech is developed to improve the “efficiency, transparency, and responsiveness of government operations and public services”, which also happens to be an integral part of Donald Trump’s Department of Government Efficiency (DOGE).

GovTech Campus Deutschland is a collaboration between the German government, German state agencies, and the science community, and is also supported by the Eclipse Foundation.

Founded as the Eclipse Project by IBM in November 2001, with support from a consortium of software vendors, the foundation builds on a dark legacy: IBM's role in providing “GovTech” (punched cards) to Nazi Germany, as described in Edwin Black's book IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America's Most Powerful Corporation. Strategic member companies include IBM, Microsoft, SAP, Oracle, Huawei, Fujitsu, Bosch, and Red Hat.

The GovTech they will now provide is something the Nazis could only dream of. Technological development has since changed the game and created the conditions for a control state with chilling consequences for the freedom of mankind.

The GGTC has since its founding started another centre in Kiev, while the technocratic role-model country, the United Arab Emirates, has joined as a “founding partner”.



Both Ukraine and the UAE can be viewed as WEF vassals and serve as test beds for GovTech solutions, with the GGTC as a contributor to the Ukrainian Digital State UA (a Global Platform for Digital Solutions and GovTech Innovation).


The Agentic State​

The Global Government Technology Centre recently released a white paper called The Agentic State, which outlines a vision for an AI-governed future. The main author is Luukas Ilves, a Stanford University graduate who is the Chief Information Officer of the Government of Estonia and the founder of the Tech Green Pledge, “a network of tech companies committed to combatting climate change”.

The Agentic State is basically a recipe for large-scale population control with little or no human agency. As stated in the paper,

The Agentic State signifies a fundamental shift in governance, where AI systems can perceive, reason, and act with minimal human intervention to deliver public value.
The choice of title is quite revealing.1 The Agentic State theory, coined by psychologist Stanley Milgram after his famous obedience experiments, states that,

…the essence of obedience consists in the fact that a person comes to view themselves as the instrument for carrying out another person's wishes, and they therefore no longer see themselves as responsible for their actions. Once this critical shift of viewpoint has occurred in the person, all of the essential features of obedience follow.2

According to the International Organization for Standardization, “an AI agent can be broadly defined as an entity that senses percepts (sound, text, image, pressure etc.) using sensors and responds (using effectors) to its environment.”3
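To make this definition concrete, here is a minimal, purely illustrative sense-act loop in Python. The thermostat example and every class and function name are my own stand-ins, not anything taken from the white paper or the ISO standard.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Percept:
    """A single observation from a sensor (text, image, pressure reading, etc.)."""
    kind: str
    value: object

class SimpleAgent:
    """Illustrative agent: senses percepts, reasons over them, acts via an effector."""

    def __init__(self, policy: Callable[[List[Percept]], str], effector: Callable[[str], None]):
        self.policy = policy        # maps percepts to an action label ("reason")
        self.effector = effector    # carries the action out in the environment ("respond")

    def step(self, percepts: List[Percept]) -> None:
        action = self.policy(percepts)
        self.effector(action)

# Toy usage: a thermostat-like agent.
def thermostat_policy(percepts: List[Percept]) -> str:
    temps = [p.value for p in percepts if p.kind == "temperature"]
    return "heat_on" if temps and min(temps) < 18 else "heat_off"

agent = SimpleAgent(thermostat_policy, effector=lambda a: print(f"effector -> {a}"))
agent.step([Percept("temperature", 16.5)])   # prints "effector -> heat_on"
```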

The technocratic manual describes ten “functional layers” of government ordered in three dimensions.

Operations

  1. Service Delivery: From fragmented portals to self-composing public services and personal concierges.
  2. Internal Workflows: From manual casework to outcome-driven agents with humans on the loop.
  3. Data Governance, Management, and Operations: An operational asset for agents and a strategic asset for society as a whole.
  4. Crisis Response and Resilience: From legacy responses to agentic readiness in an era of polycrisis.

Regulation and Governance

  5. Compliance and Reporting: Continuous compliance that preserves confidentiality and supports lighter regulation.
  6. Policy and Rulemaking: From static rulebooks to living policies, continuously monitored and adapted by AI agents.

Foundations

  7. Leadership: The skills and behaviours of outcome-driven government leaders who will build agentic states.
  8. Workforce and Culture: Towards broad tech fluency, elite talent, and high-performance culture.
  9. Tech Stack: Leapfrogging to tomorrow’s enterprise stack.
  10. Public Procurement: How Agentic AI redefines how governments buy and what they get.

Polycrisis management​

Of these, the chapters on “Crisis Response” and “Compliance” are of utmost importance in understanding the diabolical nature of this new governance regime.

The white paper concludes that our world isn’t prepared or equipped for “a world of cascading geopolitical, technological, environmental, and social shocks”, a phenomenon that is labeled a polycrisis.

This polycrisis scenario is essentially what actors like climate tsar and frequent WEF attendee Johan Rockström have constantly warned about.4 As described in the paper:

We are in an era increasingly characterised by polycrisis: interconnected and cascading shocks ranging from pandemics and extreme weather events to cyber-physical attacks, financial instability, disinformation campaigns and even conventional warfare — traditional crisis management models are under strain. Threat actors are already adapting. With AI, they can automate, scale, and personalise attacks at unprecedented speed. Governments, by contrast, are often still operating with institutional reflexes shaped for a slower, more linear world.5
But this can be countered with the deployment of “intelligent systems”.

Agentic government means equipping the state with intelligent systems that can anticipate, respond, and adapt across the entire crisis lifecycle: from prevention and preparedness to response, recovery, and continuous learning.6
A part of this is “simulation infrastructure” with the claimed ability to predict the future. Everything we do will be analysed and simulated with the help of “virtual twins”.

Agentic models can continuously simulate crisis scenarios across domains, producing synthetic datasets that reveal systemic fragilities. More than stress tests, these simulations become generative foresight mechanisms.7
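As a rough illustration of what “continuously simulate crisis scenarios” and “producing synthetic datasets” could mean in practice, here is a toy Monte Carlo sketch over an invented three-node supply chain. The node names, capacities, and shock model are hypothetical stand-ins, not anything from the report.

```python
import random

# Toy "virtual twin" of a three-node logistics chain; capacities are invented.
CAPACITY = {"port": 100, "rail": 80, "warehouse": 120}

def run_scenario(shock_node: str, shock_pct: float, demand: float) -> dict:
    """Apply a capacity shock to one node and report whether demand can still be met."""
    capacity = dict(CAPACITY)
    capacity[shock_node] *= (1 - shock_pct)
    throughput = min(capacity.values())          # the bottleneck sets overall throughput
    return {"shock_node": shock_node,
            "shock_pct": round(shock_pct, 2),
            "unmet_demand": max(0.0, demand - throughput)}

random.seed(0)
# "Continuously simulate crisis scenarios ... producing synthetic datasets":
synthetic_dataset = [
    run_scenario(random.choice(list(CAPACITY)), random.uniform(0.1, 0.9), demand=90)
    for _ in range(1000)
]
fragile = [row for row in synthetic_dataset if row["unmet_demand"] > 0]
print(f"{len(fragile)} of 1000 simulated shocks left demand unmet")   # reveals systemic fragilities
```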
If a problem is detected, this could trigger a “hyper-aware AI-orchestrated first response”. As described in the report,

When a crisis begins to unfold, AI initiates the first steps in crisis response before human-in-the-loop structures have time to react. These agents will work alongside increasingly autonomous physical systems such as drones and robots, forming the backbone of a responsive, adaptive crisis infrastructure.8
This is starting to border on a science fiction dystopia. Just like Skynet in The Terminator, the machines will take the first step in order to solve the crisis, while predetermined protocols will ensure cohesive and coordinated crisis management.9

This echoes the proposed United Nations Emergency Platform, which has been advocated by Johan Rockström and the Mary Robinson-led Climate Governance Commission.

Human agency will take a back seat. The authors state that,

Human oversight will increasingly move from making decisions to supervising them. This shift may feel uncomfortable, but those who let machines take the first step, especially in fast-moving situations, will gain an edge, with mental capacity freed up for critical thinking under pressure.10
They are damn right that it feels uncomfortable!
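The pattern the quote describes, in which machines take the first step and humans merely supervise it (the “humans on the loop” of the layers list above), can be sketched roughly as follows. The incident types, playbook, and logging here are invented for illustration only, not the white paper's mechanism.

```python
import datetime
from dataclasses import dataclass, field
from typing import List

@dataclass
class Incident:
    kind: str
    severity: int                      # 1 (minor) .. 5 (critical)

@dataclass
class ReviewQueue:
    """Actions already taken automatically, queued for after-the-fact human review."""
    entries: List[str] = field(default_factory=list)

    def log(self, message: str) -> None:
        self.entries.append(f"{datetime.datetime.utcnow().isoformat()} {message}")

# Hypothetical, pre-approved playbook mapping incident types to first responses.
PLAYBOOK = {
    "cyber": "isolate_affected_network_segment",
    "flood": "dispatch_survey_drones",
}

def first_response(incident: Incident, queue: ReviewQueue) -> str:
    """The machine takes the first step; the human supervises afterwards ('on the loop')."""
    action = PLAYBOOK.get(incident.kind, "notify_duty_officer")
    queue.log(f"auto-executed '{action}' for {incident.kind} incident (severity {incident.severity})")
    return action

queue = ReviewQueue()
first_response(Incident("cyber", severity=4), queue)
print(queue.entries)   # the supervisor reviews (and can reverse) what was already done
```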

Another part of the Agentic State with likely chilling consequences is “Compliance and Reporting”. This concerns the scanning of live data and real-time compliance checks.

Compliance monitoring and reporting is one of the most obvious cases for agentic AI. Instead of depending on quarterly reports or scheduled inspections, a statement of conformity becomes a live reflection of the present. Compliance is no longer a retrospective snapshot; it is an active, up-to-date status. A well-governed AI compliance agent, running on a verifiable, tamper-resistant algorithm, could issue YES/NO compliance attestations without transmitting sensitive internal data.11
What could possibly go wrong? “No more hiding”, as WEF trustee Al Gore said in his sales pitch video for the Climate Trace project.
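To show mechanically what the quoted “YES/NO compliance attestation” could look like, here is a minimal sketch in which compliance is evaluated locally and only a signed verdict crosses the organisational boundary. The rule names, thresholds, and the HMAC signature are my own illustrative stand-ins; a genuinely “verifiable, tamper-resistant” scheme would more likely rely on attested hardware or zero-knowledge proofs.

```python
import hashlib
import hmac
import json

# Hypothetical rulebook published by a regulator; rule names and thresholds are invented.
RULES = {"max_emissions_t": 1000, "min_uptime_pct": 99.0}
RULES_HASH = hashlib.sha256(json.dumps(RULES, sort_keys=True).encode()).hexdigest()

def attest(internal_data: dict, signing_key: bytes) -> dict:
    """Evaluate compliance locally and emit only a signed YES/NO verdict.

    The raw operational data never leaves the organisation; the regulator receives
    the verdict, the hash of the rule set it was checked against, and a MAC to verify.
    """
    compliant = (
        internal_data["emissions_t"] <= RULES["max_emissions_t"]
        and internal_data["uptime_pct"] >= RULES["min_uptime_pct"]
    )
    payload = {"verdict": "YES" if compliant else "NO", "rules_sha256": RULES_HASH}
    mac = hmac.new(signing_key, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256)
    payload["mac"] = mac.hexdigest()
    return payload   # only this attestation crosses the organisational boundary

print(attest({"emissions_t": 820, "uptime_pct": 99.7}, signing_key=b"demo-key-only"))
```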


The Agentic State is an instrument for even tighter control of all activities on earth, and a dream come true for every aspiring dictator who wants to play god.

In this model, reporting becomes proof, not just paperwork. With more precise, real-time monitoring and verifiable reporting, regulators could cautiously recalibrate requirements toward socially optimal levels. In the long run, we can envision a situation where compliance in domains like health, safety, financial, environmental, cybersecurity, and ethics become a component of overall quality management, with less, not more, internal information crossing organisational boundaries.12
In other words, “either you obey or you face the consequences”. But the authors actually ask some valid questions about needed safeguards.

If AI agents can issue fines or trigger legal action in real time, what safeguards are needed to prevent runaway enforcement or unjust penalties? What role should human oversight play in preserving legitimacy?13
But this comes without the obvious conclusion of shutting down this totalitarian nightmare before it runs amok. A world with agentic AI in charge will lead to a dramatic change in how we are governed.

Laws, currently static code written once and amended rarely, can develop into a far more dynamic living system, continuously interpreted, tested, and refined by agents operating within clearly defined public mandates.14
This means that an AI agent “will have the capability to rewrite laws as easily as agents rewrite code.”

AI agents can simulate complex systems, run policy scenarios, test and red team alternative designs at staggering volume and speed; where quantum computing will unlock a further leap. Moreover, AI can course-correct ‘at runtime’, detecting drift, bias, and systemic failures.15
This new system will leave human legislators with only the task of defining “broad societal goals”, “while specific rules, thresholds and requirements are adjusted dynamically by agents with limited or no human intervention”! The question is who will be tasked with overseeing the goals of the Agentic State?
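A minimal sketch of what “thresholds and requirements adjusted dynamically by agents” inside a human-defined mandate could look like; the metric, target band, and adjustment logic are all invented for illustration and are not taken from the paper.

```python
from collections import deque
from statistics import mean

# Illustrative only: an agent watches an outcome metric (say, days to decide a permit)
# and proposes adjustments when it drifts outside a band that humans have mandated.
TARGET_DAYS, TOLERANCE_DAYS = 30.0, 5.0      # the human-defined "broad societal goal"

window = deque(maxlen=50)                    # rolling window of recent case outcomes

def observe(days_to_decision: float) -> str:
    """Record one outcome and report whether the rule parameters need review."""
    window.append(days_to_decision)
    if len(window) < window.maxlen:
        return "collecting data"
    drift = mean(window) - TARGET_DAYS
    if abs(drift) <= TOLERANCE_DAYS:
        return "within mandate"
    # 'Course-correct at runtime': propose a bounded change, but leave sign-off to a human.
    return f"drift of {drift:+.1f} days detected: adjustment proposed for human sign-off"

for sample in [28, 33, 41, 45] * 15:         # synthetic stream of case durations
    status = observe(sample)
print(status)                                # drift detected and escalated, not silently applied
```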

The authors rightly conclude that this new regime “challenges contemporary notions of inclusion and participatory decision making” but don’t think it has to be “less democratic”.

So how do they think democracy and citizen participation should be accommodated? The solution has a bitter taste of cold, logical technocratic efficiency, as it basically means that we humans will be connected to a cybernetic system that can detect and analyse human responses and adjust its decisions based on feedback loops. As described in the paper,

Rather than operating solely through top-down regulatory adjustments, agentic policy systems could also learn from citizen signals. Feedback loops, such as appeals, time-to-resolution metrics, or even emotion detection in digital interactions, could become inputs for agent-guided policy refinement. In this model, the boundary between policy implementation and adjustment becomes porous: agents adjust rules not only based on macro-level KPIs but also from bottom-up input and friction indicators.16
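A toy sketch of such a feedback loop: invented “friction indicators” (appeal rates, resolution times) nudge a service deadline, clamped to bounds that remain a human decision. Nothing here comes from the white paper beyond the general idea; every name and number is hypothetical.

```python
# Bottom-up citizen signals nudge a rule parameter, but only inside human-set bounds.
DEADLINE_BOUNDS = (3, 14)      # days; the human-mandated limits for the auto-set deadline

def refine_deadline(current: int, appeal_rate: float, median_resolution_days: float) -> int:
    """Tighten or relax the response deadline based on citizen friction indicators."""
    proposed = current
    if appeal_rate > 0.10:                      # many appeals: citizens are pushing back
        proposed -= 1
    if median_resolution_days > current:        # the agency routinely misses its own deadline
        proposed += 1
    return max(DEADLINE_BOUNDS[0], min(DEADLINE_BOUNDS[1], proposed))

deadline = 10
for appeal_rate, median_days in [(0.15, 8.0), (0.12, 7.5), (0.04, 6.0)]:   # synthetic weekly signals
    deadline = refine_deadline(deadline, appeal_rate, median_days)
print(deadline)   # adjusted within the human-set bounds (here: 8)
```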
The White Paper essentially confirms the techniques that will be used to govern the global technocratic regime described in my books. Below is my take on how this new world system will ultimately work, with GovTech as an integrated part and the United Nations as the definer of “broad societal goals”.


The World System, from my book Temple of Solomon.

A briefing about the white paper was held on July 1, 2025, where the authors presented and discussed how agentic AI will transform government operations.



________________________________________________________________________________________________________________________________________________________
Order my book Temple of Solomon from Pharos Media.

1. Thanks to the reader who shared the video and drew my attention to the connection to Milgram's theory.
2. Stanley Milgram (2020), Obedience to Authority: An Experimental View, Harper Perennial Modern Classics.
3. World Economic Forum (2024), Navigating the AI Frontier: A Primer on the Evolution and Impact of AI Agents, White Paper, December 2024.
4. Thomas Homer-Dixon, Ortwin Renn, Johan Rockström, Jonathan F. Donges, and Scott Janzwood (2022), “A call for an international research program on the risk of a global polycrisis”, cascadeinstitute.org/technical-paper/a-call-for-an-international-research-program-on-the-risk-of-a-global-polycrisis/; Climate Governance Commission (2023), Governing Our Planetary Emergency, earthgovernance.org/wp-content/uploads/2023/11/Governing-Our-Planetary-Emergency-Report_WEB.pdf
5. Global Government Technology Centre (2025), The Agentic State: How Agentic AI Will Revamp 10 Functional Layers of Government and Public Administration, p. 20.
6. Global Government Technology Centre (2025), p. 20.
7. Ibid.
8. Ibid.
9. Global Government Technology Centre (2025), p. 21.
10. Ibid.
11. Global Government Technology Centre (2025), p. 22.
12. Global Government Technology Centre (2025), p. 23.
13. Ibid.
14. Global Government Technology Centre (2025), p. 24.
15. Ibid.
16. Ibid.
 
Since the WEF are playing that NWO game, here's something very appropriate that was made in the 90s.


Would be extra hilarious if their own security forces turn on them. Good luck in making Abdul and Ubuntu from not acting like shitskins in their own countries.
I've always loved the plot of the first game, tho I've never been able to beat it (the gameplay's just not my favorite)
simply reading WEC as WEF is funny yet morbid
 
I've always loved the plot of the first game, tho I've never been able to beat it (the gameplay's just not my favorite)
simply reading WEC as WEF is funny yet morbid
The game itself is a bit clunky in its gameplay but the story is fairly solid. Good stuff overall.

They need to do a remake or something.
 
The reason why this is happening is because the ruling class are a bunch of lazy, retarded inbreds who want mass surveillance and cybernetic biomonitoring of everything to such an asinine level that they know how many dingleberries are in your ass-crack because they don't know how to govern anymore. They want a post-political, pseudo-capitalist, pseudo-socialist technocratic system where nobody has any agency, any voice of their own, any vote that matters, and the fucking thing just runs itself. It is Cybersyn on steroids.

 
I've always loved the plot of the first game, tho I've never been able to beat it (the gameplay's just not my favorite)
simply reading WEC as WEF is funny yet morbid
Much like Deus Ex, it accidentally predicted the future. Considering that what the WEF is trying to do mirrors its fictional counterpart, the people who made Crusader made something spectacular.

That said, Crusader is quite influential on later games. Warframe comes to mind, even Helldivers makes a not-Silencer set for players to earn.

Back to our IRL Kamen Rider villain group: they have been around for a very long time. They even created a set of videos that details their vision of the future... back when they were called 'Forum for the Future'.


All are dystopian and definitely the kind of setting that 90s anti-heroes with an attitude are meant to fuck up. Demolition Man after all came out in the 90s as well... along with a whole slew of anti-Megacorp media.
 
a dream come true for every aspiring dictator who wants to play god
That one nog was right when she said, "the master's tools will never bring down the master's house". Look at this fag crying about the WEF but blaming "dicktators". "Dicktator" is a slur for an enemy of the WEF.
 
It's all fucking nonsense that will be switched off the moment the first court case lands.

The Powers That Be consider the plebs to be overstepping the line when we demand that they do their fucking jobs, and AI slop is hawked at us as a means to throw up more barriers to direct communication with them.
 
I don't hate him because there are several functions in the bodies and minds of humans that come from this entity. We're able to reason and have bones thanks to him, for example. At the same time, I do hope humanity is able to choose the Christ being and overcome slavery by this entity, which is his goal for humanity; bind us to him and have complete power over us.
We're constantly receiving aid from the spiritual realm. I think in the end we will overcome this challenge.
Just say the Demiurge and stop being coy.
 
This idea that the AI does all the low-level stuff and this frees humans to do the critical thinking? Well, it’s bullshit.
You end up with a bunch of black-box systems forming a layer of obfuscation between the human and the problem.
At work, one of the things I have to do is model out scenarios for things happening. We know roughly how long it’ll take to go through whatever regulatory process in each country, and we know roughly how long each individual bit of a program will take, so when do we think this program as a whole will be generating revenue, be through the process, etc.?
Previously, I did this with a nice Excel workbook, a big ol’ database of information, and humans to speak to who would point out that Bulgaria just did something weird so we need to add an extra four weeks there, etc. It worked well. Crucially, I can understand each part of the process. I can trace each number back in my spreadsheet and see where it comes from. This means that if there’s an error or something odd, it immediately pops out to me and I can trace, identify and examine the source. The machine is still crunching the numbers for me, but I’m there. The scale is human.
In this situation, I am using tools (tech and people) to generate a scenario that I understand and thus can exercise critical thinking on.
Post-AI, I now have ‘a system’ to do this; they fired the entire group who I used to talk to. I now input my starting parameters into the system and it spits out a date.
That date may be rather off from what my gut and experience tell me, but I now have no way of tracing back any of the assumptions the system has made. It’s a black box. It’s like no longer being able to pop the hood and look at what’s underneath, but instead having a light come on and needing a laptop to diagnose the engine.
I fucking hate it.
And I know it’s not right, but I can’t point this out any higher up because they paid millions for this from a fancy company and management are invested in it. So now I am responsible for creating projections but I’m not creating them, the AI is. I’m responsible for something I have zero understanding of or control over.
The same will happen here. It’s good to have very, very base-level stuff done mechanically, I’m sure, like my spreadsheet did, but as soon as that reaches the complexity where the disconnect happens, you have a huge problem.

I know @Overly Serious is fond of saying that cybernetic systems don’t scale, and this is another example. It simply cannot work, because the human scale is gone. There can be no connection between the event at the human scale and the enforcement globally, and so the feedback loop that says ‘hang on, something’s not right’ is gone.

The Butlerian Jihad started when a human baby was killed to enforce a probable 0.0001% chance of a better outcome.

Learn to govern, you lazy fucks.
 
Very :optimistic: of the WEF to think that all countries on Earth are going to be willing to partake in the same AI government control system... when UN bodies are considered a laughable, powerless eunuch by many nations, and they can (and have) ignored UN rulings before.

The more realistic result is that many competing governments will have their own agentic systems, making them dumber just by the nature of fragmentation, and in some cases worse than human-run bureaucracies.
You're both right and wrong. Look at the purpose behind USAID if you want to know how rich governments can impose their will over the rest of the world without any direct, hands-on action or threats of violence.
We supposedly put a halt to USAID though, so this tactic will likely be taken up by whoever fills the power vacuum left by the US, and from the outside looking in, probably China.
 
Learn to govern, you lazy fucks.
But don't you see? With such a system, those who truly rule get to keep sitting on their thrones without accountability of any kind. Any mistake, any premeditated crisis, any worsening of the human conditions we live under, can all be blamed on AI now. Whether they came from it or not, because, like you rightly point out, it's completely opaque.
Even at very low levels. And even if someone manages to do something (take someone to court, or whatever) about a mistake or "mistake" that resulted from AI, the most likely outcome is that the verdict will be something along the lines of "you need to upgrade your AI to ensure this never happens again", as opposed to getting rid of the AI systems that caused the problem in the first place. Once these systems are put in place it'll be virtually impossible to go back to human systems.
That's why we need to actively resist these things while we still can. That network of satellites that Musk was allowed to fill the sky with, that pairs up beautifully with this kind of Skynet hellscape. Internet all over the planet, to track everyone everywhere at all times.

I was recently looking for a temporary job and I came across this very well paid position in customer service. I applied, and then they got in contact and told me it would involve working with AI, "to automate healthcare systems", and I said no, even though the pay was great and the work hours were quite few. Could work from home as well. But when it comes to this, I feel like we need to really throw a wrench in the takeover of AI, or we're fucked.
 
I think we're going to be fine. Not because these asshats won't try to push the most mind-shatteringly fucked-up shit ever conceived - god no - they will go forward with a brick on the accelerator and a hose on the tailpipe - but because the people pushing this are hilariously, almost parodically incompetent.
 