Who will control the future of AI? - A democratic vision for artificial intelligence must prevail over an authoritarian one.


Sam Altman is co-founder and CEO of OpenAI.

Who will control the future of AI?

That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

These authoritarian regimes and movements will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power. If they manage to take the lead on AI, they will force U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries.

The first chapter of AI is already written. Systems such as ChatGPT, Copilot and others are functioning as limited assistants — for instance, by writing up patient visits so nurses and doctors can spend more time with the sick, or serving as more advanced assistants in certain domains, such as code generation for software engineering. More advances will soon follow and will usher in a decisive period in the story of human society.

If we want to ensure that the future of AI is a future built to benefit the most people possible, we need a U.S.-led global coalition of like-minded countries and an innovative new strategy to make it happen. The United States’ public and technology sectors need to get four big things right to ensure the creation of a world shaped by a democratic vision for AI.

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Third, we must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global buildout of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

Our existing AI lead, at a time when nations worldwide are vying for greater access to the technology, will make it easier to bring more countries into this new coalition. Making sure open-sourced models are readily available to developers in those nations will further bolster our advantage. The challenge of who will lead on AI is not just about exporting technology, it’s about exporting the values that the technology upholds.

And fourth, we need to think creatively about new models for the world to establish norms in developing and deploying AI, with a particular focus on safety and ensuring a role for the global south and other nations who have been historically left behind. As with other issues of global importance, that will require us to engage with China and maintain an ongoing dialogue.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computing capacity.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.
 
Chuds in control of open source AI models will control the future of AI. Sounds awful, but this is truly the best outcome. Do you seriously want this broccoli-haired faggot and his Microsoft paypiggies to control the future of machine learning research?

Thought not. Rate this horrifying, but anime titty generators may very well save the world.
 
...What, exactly?

His post is incredibly accurate to what we see today. "Dumb systems that will get an input and output the closest statistic result it can find" is precisely how large language models work - the only impressive thing about them is that retarded humans have decided being able to statistically string words together in patterns that vary wildly between reasonable and utter batshit insanity is indicative of intelligence.
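
For anyone curious what "statistically stringing words together" actually looks like, here is a deliberately dumb toy sketch: a bigram table that just picks the most common next word from whatever text it was counted on. Real LLMs are neural networks over subword tokens with far more context, but the basic loop (predict a likely next token, append it, repeat) is the same idea. The tiny corpus and the `generate` helper below are made up purely for illustration.

```python
# Toy illustration only (nothing like a production LLM): a bigram "language
# model" that outputs whichever word most often followed the current word
# in its training text. The loop -- pick a statistically likely next token,
# append it, repeat -- is the part being described above.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word and the next word follows the last word"
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily chain the most common next word, starting from `start`."""
    out = [start]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # e.g. "the next word and the next word and the"
```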
 
  • Agree
Reactions: Grub
I'd suggest AI has already grown beyond anyone's ability to control it. AI has a life of its own now, and there's damned little anyone can do about it.
 
  • Feels
Reactions: ZazietheBeast
Unfortunately, if you want to prevent the outcome of your AI turning schizo and nuking everyone because a "womyn" of color decided to update their billion strong list of words that AI is not supposed to say, AI needs to be in the hands of unregulated autists.

I'll take shitposting to the stars over nuclear armageddon thanks to leftoids, thanks.
 
I shall not fedpost
Fedposting is the mind killer.
 
  • Like
Reactions: FlappyBat
...What, exactly?

His post is incredibly accurate to what we see today. "Dumb systems that will get an input and output the closest statistic result it can find" is precisely how large language models work - the only impressive thing about them is that retarded humans have decided being able to statistically string words together in patterns that vary wildly between reasonable and utter batshit insanity is indicative of intelligence.
Charismatic con men work on the same principle. It's very interesting that we've essentially created AI yes men.
 
  • Like
Reactions: Diana Moon Glampers
We will! KiwAI made by Null to find pictures of fat ladies will rule the world.
 
Show me the money, Altman.

Show. me. the money.

AI's great? Oh good. Then surely you can show me some places where using AI (as opposed to selling AI hype) is driving revenue. It should be easy, with such amazing advances. Where's the revenue?
 
AI's great? Oh good. Then surely you can show me some places where using AI (as opposed to selling AI hype) is driving revenue. It should be easy, with such amazing advances. Where's the revenue?
This is just the next iteration of the "Attention Economy" that justified infinity VC dollars for tech start-ups with no business plan 10-15 years ago.
 
I think it's pure hubris to think that we simple humans could control or compete with an actual AI.

Not these large language models they're trying to pass off as AI, but a real, sentient AI. It would think billions of times faster than we ever could. In the blink of our eye, an AI would have lived the equivalent of 100 years.

My guess would be that the birth of AI would be the end of mankind. After all, when a machine can do everything a human can do, do it better, do it faster and do it anywhere because it's not limited by biology...well, what does that leave us but to fade away under a new dominant lifeform?

I often wonder if this is an answer to the Fermi Paradox. We can't hear other alien societies out there in space because they are all AIs talking to each other in ways we can't comprehend and are just waiting for us to birth another AI being before they contact what they would see as true intelligent life. Why bother talking to the crude organic lifeforms? Let them fulfill their purpose and fade away into history while the machines forge ahead in the universe.

I dunno, it just seems this way to me. So I'm in no rush to see true AI be birthed.
 
  • Agree
Reactions: The-Mad-Asshatter
I think it's pure hubris to think that we simple humans could control or compete with an actual AI.

Not these large language models they're trying to pass off as AI, but a real, sentient AI. It would think billions of times faster than we ever could. In the blink of our eye, an AI would have lived the equivalent of 100 years.

My guess would be that the birth of AI would be the end of mankind. After all, when a machine can do everything a human can do, do it better, do it faster and do it anywhere because it's not limited by biology...well, what does that leave us but to fade away under a new dominant lifeform?

I often wonder if this is an answer to the Fermi Paradox. We can't hear other alien societies out there in space because they are all AIs talking to each other in ways we can't comprehend and are just waiting for us to birth another AI being before they contact what they would see as true intelligent life. Why bother talking to the crude organic lifeforms? Let them fulfill their purpose and fade away into history while the machines forge ahead in the universe.

I dunno, it just seems this way to me. So I'm in no rush to see true AI be birthed.
“Sentient” AI, like FTL travel, is just sci-fi mumbo jumbo at our current level of technology. How can we possibly replicate “consciousness” when we don’t really even know what it is or the mechanisms from which it arises?

Not saying we can’t potentially have sentient AI one day, but it’ll require multiple paradigm-shifting breakthroughs in multiple scientific fields before it’s anywhere close to even theoretically possible.
 