Who will control the future of AI? - A democratic vision for artificial intelligence must prevail over an authoritarian one.


Sam Altman is co-founder and CEO of OpenAI.

Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

These authoritarian regimes and movements will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power. If they manage to take the lead on AI, they will force U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries.

The first chapter of AI is already written. Systems such as ChatGPT, Copilot and others are functioning as limited assistants — for instance, by writing up patient visits so nurses and doctors can spend more time with the sick, or serving as more advanced assistants in certain domains, such as code generation for software engineering. More advances will soon follow and will usher in a decisive period in the story of human society.

If we want to ensure that the future of AI is a future built to benefit the most people possible, we need a U.S.-led global coalition of like-minded countries and an innovative new strategy to make it happen. The United States’ public and technology sectors need to get four big things right to ensure the creation of a world shaped by a democratic vision for AI.

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Third, we must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global buildout of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

Our existing AI lead, at a time when nations worldwide are vying for greater access to the technology, will make it easier to bring more countries into this new coalition. Making sure open-sourced models are readily available to developers in those nations will further bolster our advantage. The challenge of who will lead on AI is not just about exporting technology, it’s about exporting the values that the technology upholds.

And fourth, we need to think creatively about new models for the world to establish norms in developing and deploying AI, with a particular focus on safety and ensuring a role for the global south and other nations who have been historically left behind. As with other issues of global importance, that will require us to engage with China and maintain an ongoing dialogue.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computing capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.
 
This is a really refreshing read. I feared more fearmongering, or insistence on authoritarian controls on AI and hardware in the name of "democratizing" it or something.

I think in addition to these things, we also need to reject kneejerk feelings and hysteria, to prevent regulating it into the hands of only the biggest corps and players. (The open-source AI groups aren't helping with this, fearing heavy-handed crackdowns.) I mean, we have already seen calls for restrictions on hardware, program kill switches and more, in the name of "protecting" against AI. To say nothing of nonsensical and misinformed efforts to force the application of copyright law onto AI learning (from art to text to writing). It is no different than any human learning. Nothing is truly original on the level that we are talking about; everything is derivative on some level. If we allow these types of FURTHER abuse of the copyright system, then pretty much everything is on the table.

It needs to remain open, accessible and usable to everyone. Not just as a service, but as software people are free to create, edit, add to, and use top to bottom, front to back. On their own hardware, no restrictions. Which also means open-source, privately funded AND government-funded projects. (fuck libertardians)
 
No thanks I really don't want AI managed by a gay jew.
I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computing capacities.
Jews: maybe we should create an international organization where jews control things under penalty of international jewish law?
 
That right there is the future of Neuralink users. Soon enough, your opinions will be changed without your consent and all it takes is a shock in the pain center at certain imagery and ideas and a gentle jolt at the pleasure center at (((Approved))) ideas.

That's what I fear, if we don't get a handle on basic rights and protections for this sort of concept in general (thought, fantasy, opinions, etc.). Just wait: brain chips are only the beginning. Hell, advanced cybernetic connections like those seen in GitS are only going to be short-term, and probably replaced by SAO NerveGear-like tech. That opens the door for true remote reading — like walking through a metal detector, or just getting too close to a device. People will no doubt scream "optimistic" until it's too late and we are trying to force regulation at the last second, for something already being normalized (i.e., well past the best time to implement strong protections, pre numerous disasters at least).

I don't mean this as a doom-type thing. All those types of tech will have major upsides too. We just need to treat the issues surrounding them like basic rights, and should probably even add explicit constitutional protections (freedom of thought in absolute terms; fantasy, even violent, evil fantasy, is still just fantasy and imagination; no thought crimes; freedom from mind/thought/memory reading or control/manipulation, and so on). In addition to the protections already existing that this should fall under.
 
That's what I fear, if we don't get a handle on basic rights and protections for this sort of concept in general (thought, fantasy, opinions, etc.). Just wait: brain chips are only the beginning. Hell, advanced cybernetic connections like those seen in GitS are only going to be short-term, and probably replaced by SAO NerveGear-like tech. That opens the door for true remote reading — like walking through a metal detector, or just getting too close to a device. People will no doubt scream "optimistic" until it's too late and we are trying to force regulation at the last second, for something already being normalized (i.e., well past the best time to implement strong protections, pre numerous disasters at least).

I don't mean this as a doom-type thing. All those types of tech will have major upsides too. We just need to treat the issues surrounding them like basic rights, and should probably even add explicit constitutional protections (freedom of thought in absolute terms; fantasy, even violent, evil fantasy, is still just fantasy and imagination; no thought crimes; freedom from mind/thought/memory reading or control/manipulation, and so on). In addition to the protections already existing that this should fall under.
I believe in technology being used to uplift people. People with debilitating defects come to mind, like the old lady who was able to regulate her seizures because of a little gadget that alerted her when her brainwaves hit the point where she was about to seize.

The issue stems from greedy corpos, politicos and bankerbergs who would want to wield technology to enslave people. In short, the very type of people Ted Kaczynski warned us about on the encroachment of technology being used to violate human rights and his general fear that inevitably, you'll just get stuck with all the crap features.

Realistically speaking, if you wish for non-enslavement tech, it needs to be open source and must fall under "Right to Repair," as well as people being very afraid and very aware of the capacity for bullshit involving tech. Thankfully, that latter part has already been established: you've got Robocop, Terminator, Alita, Matrix, Blade Runner, Demolition Man and Elysium to serve as warnings about technology being maliciously applied. It's going to take a huge push to ensure a technology bill of rights and responsible cybering/genetic manipulation.

However, with all the diversity and guaranteed failure shoved into the system thanks to a bunch of leftoids, I can see the West devolve into Kenshi while places not hit by this disastrous mind virus may take the first few steps towards true augmentations.
 