US Biden to Issue First Regulations on Artificial Intelligence Systems - “President Biden is rolling out the strongest set of actions any government in the world has ever taken on A.I. safety, security and trust"


President Biden will issue an executive order on Monday outlining the federal government’s first regulations on artificial intelligence systems. They include requirements that the most advanced A.I. products be tested to assure that they cannot be used to produce biological or nuclear weapons, with the findings from those tests reported to the federal government.

The testing requirements are a small but central part of what Mr. Biden, in a speech scheduled for Monday afternoon, is expected to describe as the most sweeping government action to protect Americans from the potential risks brought by the huge leaps in A.I. over the past several years.

The regulations will include recommendations, but not requirements, that photos, videos and audio developed by such systems be watermarked to make clear that they were created by A.I. That reflects a rising fear that A.I. will make it far easier to create “deep fakes” and convincing disinformation, especially as the 2024 presidential campaign accelerates.

The United States recently restricted the export of high-performing chips to China to slow its ability to produce so-called large language models, the massing of data that has made programs like ChatGPT so effective at answering questions and speeding tasks. Similarly, the new regulations will require companies that run cloud services to tell the government about their foreign customers.

Mr. Biden’s order will be issued days before a gathering of world leaders on A.I. safety organized by Britain’s prime minister, Rishi Sunak. On the issue of A.I. regulation, the United States has trailed the European Union, which has been drafting new laws, and other nations, like China and Israel, that have issued proposals for regulations. Ever since ChatGPT, the A.I.-powered chatbot, exploded in popularity last year, lawmakers and global regulators have grappled with how artificial intelligence might alter jobs, spread disinformation and potentially develop its own kind of intelligence.

“President Biden is rolling out the strongest set of actions any government in the world has ever taken on A.I. safety, security and trust,” said Bruce Reed, a White House deputy chief of staff. “It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of A.I. and mitigate the risks.”

The new U.S. rules, some of which are set to go into effect in the next 90 days, are likely to face many challenges, some legal and some political. But the order is aimed at the most advanced future systems, and it largely does not address the immediate threats of existing chatbots that could be used to spread disinformation related to Ukraine, Gaza or the presidential campaign.

The administration did not release the language of the executive order on Sunday, but officials said that some of the steps in the order would require approval by independent agencies, like the Federal Trade Commission.

The order affects only American companies, but because software development happens around the world, the United States will face diplomatic challenges enforcing the regulations, which is why the administration is attempting to encourage allies and adversaries alike to develop similar rules. Vice President Kamala Harris is representing the United States at the conference in London on the topic this week.

The regulations are also intended to influence the technology sector by setting first-time standards for safety, security and consumer protections. By using the power of its purse strings, the White House’s directives to federal agencies aim to force companies to comply with standards set by their government customers.

“This is an important first step and, importantly, executive orders set norms,” said Lauren Kahn, a senior research analyst at the Center for Security and Emerging Technology at Georgetown University.

The order instructs the Department of Health and Human Services and other agencies to create clear safety standards for the use of A.I. and to streamline systems to make it easier to purchase A.I. tools. It orders the Department of Labor and the National Economic Council to study A.I.’s effect on the labor market and to come up with potential regulations. And it calls for agencies to provide clear guidance to landlords, government contractors and federal benefits programs to prevent discrimination from algorithms used in A.I. tools.

But the White House is limited in its authority, and some of the directives are not enforceable. For instance, the order calls for agencies to strengthen internal guidelines to protect personal consumer data, but the White House also acknowledged the need for privacy legislation to fully ensure data protection.

To encourage innovation and bolster competition, the White House will request that the F.T.C. step up its role as the watchdog on consumer protection and antitrust violations. But the White House does not have authority to direct the F.T.C., an independent agency, to create regulations.

Lina Khan, the chair of the trade commission, has already signaled her intent to act more aggressively as an A.I. watchdog. In July, the commission opened an investigation into OpenAI, the maker of ChatGPT, over possible consumer privacy violations and accusations of spreading false information about individuals.

“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Ms. Khan wrote in a guest essay in The New York Times in May.

The tech industry has said it supports regulations, though the companies disagree on the level of government oversight. Microsoft, OpenAI, Google and Meta are among 15 companies that have agreed to voluntary safety and security commitments, including having third parties stress-test their systems for vulnerabilities.

Mr. Biden has called for regulations that support the opportunities of A.I. to help in medical and climate research, while also creating guardrails to protect against abuses. He has stressed the need to balance regulations with support for U.S. companies in a global race for A.I. leadership. And toward that end, the order directs agencies to streamline the visa process for highly skilled immigrants and nonimmigrants with expertise in A.I. to study and work in the United States.

The central regulations to protect national security will be outlined in a separate document, called the National Security Memorandum, to be produced by next summer. Some of those regulations will be public, but many are expected to remain classified — particularly those concerning steps to prevent foreign nations, or nonstate actors, from exploiting A.I. systems.

A senior Energy Department official said last week that the National Nuclear Security Administration had already begun exploring how these systems could speed nuclear proliferation, by solving complex issues in building a nuclear weapon. And many officials have focused on how these systems could enable a terror group to assemble what is needed to produce biological weapons.

Still, lawmakers and White House officials have cautioned against moving too quickly to write laws for A.I. technologies that are swiftly changing. The E.U. did not consider large language models in its first legislative drafts.

“If you move too quickly in this, you may screw it up,” Senator Chuck Schumer, Democrat of New York and the majority leader, said last week.

https://www.nytimes.com/2023/10/30/us/politics/biden-artificial-intelligence.html (Archive)

Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration's comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

The Executive Order directs the following actions:

New Standards for AI Safety and Security

As AI's capabilities grow, so do its implications for Americans' safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems' threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic---and set an example for the private sector and governments around the world.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration's ongoing AI Cyber Challenge. Together, these efforts will harness AI's potentially game-changing cyber capabilities to make software and networks more secure.
  • Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries' military use of AI.
Protecting Americans' Privacy

Without safeguards, AI can put Americans' privacy further at risk. AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems. To better protect Americans' privacy, including from the risks posed by AI, the President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids, and directs the following actions:

  • Protect Americans' privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques---including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
  • Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals' privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
  • Evaluate how agencies collect and use commercially available information---including information they procure from data brokers---and strengthen privacy guidance for federal agencies to account for AI risks. This work will focus in particular on commercially available information containing personally identifiable data.
  • Develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems. These guidelines will advance agency efforts to protect Americans' data.
Advancing Equity and Civil Rights

Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing. The Biden-Harris Administration has already taken action by publishing the Blueprint for an AI Bill of Rights and issuing an Executive Order directing agencies to combat algorithmic discrimination, while enforcing existing authorities to protect people's rights and safety. To ensure that AI advances equity and civil rights, the President directs the following additional actions:

  • Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
  • Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
Standing Up for Consumers, Patients, and Students

AI can bring real benefits to consumers---for example, by making products better, cheaper, and more widely available. But AI also raises the risk of injuring, misleading, or otherwise harming Americans. To protect consumers while ensuring that AI can make Americans better off, the President directs the following actions:

  • Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of---and act to remedy---harms or unsafe healthcare practices involving AI.
  • Shape AI's potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.
Supporting Workers

AI is changing America's jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement. To mitigate these risks, support workers' ability to bargain collectively, and invest in workforce training and development that is accessible to all, the President directs the following actions:

  • Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers' ability to organize.
  • Produce a report on AI's potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
Promoting Innovation and Competition

America already leads in AI innovation---more AI startups raised first-time capital in the United States last year than in the next seven countries combined. The Executive Order ensures that we continue to lead the way in innovation and competition through the following actions:

  • Catalyze AI research across the United States through a pilot of the National AI Research Resource---a tool that will provide AI researchers and students access to key AI resources and data---and expanded grants for AI research in vital areas like healthcare and climate change.
  • Promote a fair, open, and competitive AI ecosystem by providing small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission to exercise its authorities.
  • Use existing authorities to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews.
Advancing American Leadership Abroad

AI's challenges and opportunities are global. The Biden-Harris Administration will continue working with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide. To that end, the President directs the following actions:

  • Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI's benefits, managing its risks, and ensuring safety. In addition, this week, Vice President Harris will speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak.
  • Accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.
  • Promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges, such as advancing sustainable development and mitigating dangers to critical infrastructure.
Ensuring Responsible and Effective Government Use of AI

AI can help government deliver better results for the American people. It can expand agencies' capacity to regulate, govern, and disburse benefits, and it can cut costs and enhance the security of government systems. However, use of AI can pose risks, such as discrimination and unsafe decisions. To ensure the responsible government deployment of AI and modernize federal AI infrastructure, the President directs the following actions:

  • Issue guidance for agencies' use of AI, including clear standards to protect rights and safety, improve AI procurement, and strengthen AI deployment.
  • Help agencies acquire specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.
  • Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields.
As we advance this agenda at home, the Administration will work with allies and partners abroad on a strong international framework to govern the development and use of AI. The Administration has already consulted widely on AI governance frameworks over the past several months---engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. The actions taken today support and complement Japan's leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India's leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations.

The actions that President Biden directed today are vital steps forward in the U.S.'s approach on safe, secure, and trustworthy AI. More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.

For more on the Biden-Harris Administration's work to advance AI, and for opportunities to join the Federal AI workforce, visit AI.gov.

https://www.whitehouse.gov/briefing...cure-and-trustworthy-artificial-intelligence/ (Archive)

 
A senior Energy Department official said last week that the National Nuclear Security Administration had already begun exploring how these systems could speed nuclear proliferation, by solving complex issues in building a nuclear weapon. And many officials have focused on how these systems could enable a terror group to assemble what is needed to produce biological weapons.
A chat bot is totally going to tell you how to build a nuke.
They know that they have no power to regulate AI without Congress if this is the strongest "justification" that they can come up with.
 
A chat bot is totally going to tell you how to build a nuke.
They know that they have no power to regulate AI without Congress if this is the strongest "justification" that they can come up with.
As if detailed plans for building nukes haven't been freely available for decades... Next they'll fear the chatbot will read a Wikipedia article and tell people how to make black powder.
 
I've said this before and I'm going to say it again: anything that gets announced is either going to be entirely toothless and purely symbolic, or it will be way too heavy-handed and hamstring the fuck out of US AI development. I think big tech in the US is going to get absolutely fucked here; they'll be forced to follow some bullshit regulations, and once open source models begin outpacing proprietary corporate models, all the AI investment money that dries up here will go to China.
So yeah, we get toothless, do-nothing legislation that's also going to hurt the country on the world stage, and all so AI won't tell you how to make a nuke/biological weapon? That's absolutely retarded, making a nuke isn't some big goddamn secret anymore, it's difficult to do because of the materials and equipment required. Same goes for biological weapons, the goddamn Rajneeshees were able to pull that off with salmonella cultures and salad bars. This administration is an absolute trash fire.
 
The regulations will include recommendations, but not requirements, that photos, videos and audio developed by such systems be watermarked to make clear that they were created by A.I.
Cynical move to make it easier for establishment entities to use A.I. to doctor video/audio/images with A.I. and pass it off as real.

"bUt oUrS dOeSnT hAvE tHe wAtErMaRk!!!11!"
 
The dirty secret of A.I. is grossly negligent IP violations. If you actually enforce copyright law ‘A.I.’ doesn’t work.
That's why Twitter and Reddit have locked down their APIs.
DALLE2 had things in place to slow that down. ChatGPT will also tell you your prompt has copyrighted IP in it. Sometimes you can get ChatGPT to change the prompt to make the description close enough (Link = popular adventure hero in a green tunic) and DALLE will know what you mean and it'll bypass the filter.
Funny enough, it only trips on Disney and Nintendo for me. I cannot get it to generate any Episode 1 content for me.

I can violate indie studio IP all I want, though. The Touhou guy apparently doesn't enforce his anyways.


 
Too late, everyone that cares already has uncensored, open source, local models they can run on their machine.
I've tested it disconnected from the Internet and it's about as good as Google for finding out most stuff.
Also I got it to write a song about how niggers suck (posted on my profile).
 
Joe's right on the money with this one. Nice going.
Right only in the sense that AI needs some kind of regulation. These regulations, which seem to include a lot of "recommendations but not requirements", seem mostly toothless and ineffective. I'm sure they'll sneak in something that keeps the power in the hands of Microsoft and Google.
 
President Biden is set to issue an executive order introducing the first-ever federal regulations on artificial intelligence systems. These regulations will include mandatory testing of advanced AI products to ensure they can't be exploited for the creation of biological or nuclear weapons, with the test results reported to the federal government.

The new regulations also address concerns about AI-generated "deep fakes" by suggesting that photos, videos, and audio created by AI systems should be watermarked to indicate their AI origin. This precaution aims to combat the potential spread of convincing disinformation, particularly during the upcoming 2024 presidential campaign.

The United States recently limited the export of high-performance chips to China to slow its progress in developing large language models like ChatGPT. Similarly, the new regulations will require companies operating cloud services to disclose information about their foreign customers.

President Biden's order comes just before a global summit on AI safety organized by the UK's Prime Minister, Rishi Sunak. While the United States has lagged behind the European Union, China, Israel, and other nations in drafting AI regulations, this move is a significant step in addressing the challenges posed by AI's impact on jobs, disinformation, and its potential to develop its own form of intelligence.

The executive order's specifics were not released, but some aspects will need approval from independent agencies such as the Federal Trade Commission. These regulations apply to American companies, but enforcement will be challenging due to the global nature of software development. To address this, the administration is encouraging other countries to adopt similar rules.

These regulations also aim to set standards for safety, security, and consumer protection in the tech industry. Federal agencies will work on creating safety standards for AI use, studying AI's impact on the job market, and preventing algorithm-based discrimination in various sectors.

However, the White House has limitations on its authority, and not all directives are enforceable. The order calls for agencies to enhance guidelines for personal data protection, but comprehensive privacy legislation is deemed necessary.

To foster innovation and competition, the White House is requesting that the FTC play a more prominent role in consumer protection and antitrust enforcement. However, it lacks the authority to create regulations for an independent agency like the FTC.

Tech companies have expressed support for regulations but differ on the level of government oversight. Some, including Microsoft, OpenAI, Google, and Meta, have agreed to voluntary safety and security commitments.

President Biden's approach to AI regulation emphasizes the need to balance regulations with support for U.S. companies in the global race for AI leadership. The executive order also streamlines the visa process for highly skilled immigrants with AI expertise who wish to study and work in the United States.

Critical regulations related to national security will be outlined in a separate document, the National Security Memorandum, to be produced by the following summer. These regulations may include classified information to prevent the misuse of AI systems by foreign nations or non-state actors for nefarious purposes.

Despite the need for AI regulations, lawmakers and White House officials have stressed the importance of not rushing into legislation for rapidly evolving AI technologies. The European Union, for example, did not initially address large language models in its first legislative drafts.

Senator Chuck Schumer of New York, the Senate Majority Leader, has cautioned against hasty regulation, emphasizing the complexity of the AI landscape.
 
Trying to clamp down on Pandora's box, are we? Too late. AI tech is in the hands of the users, and you can bet they won't relinquish this fire. That is the real reason they're freaking out over this tech. They're losing control.

On the side of art, notice how, despite the technology, they're still insistent on using Alegria/Globohomo/Bauhaus art while everyone and their mother uses AI art?


I know which one makes me want to look more. And it's not the flat garbage. Also, let's say they go full deplatform on AI stuff. That would just mean the AI developers are gonna end up on the side of the Farms. And the Kiwifarms welcomes anyone and everyone who got fucked over by the clownworld we're living in.
 
The legislative and constitutional authority comes from whe.....
Fuck it, we all know the answer.
 
That's why Twitter and Reddit have locked down their APIs.
DALLE2 had things in place to slow that down. ChatGPT will also tell you your prompt has copyrighted IP in it. Sometimes you can get ChatGPT to change the prompt to make the description close enough (Link = popular adventure hero in a green tunic) and DALLE will know what you mean and it'll bypass the filter.
Funny enough, it only trips on Disney and Nintendo for me. I cannot get it to generate any Episode 1 content for me.

I can violate indie studio IP all I want, though. The Touhou guy apparently doesn't enforce his anyways.


I don’t refer so much to the output containing IP-derivative content. That’s actually protected by fair use (to the extent it applies). I’m more referring to the models themselves being developed with data that is protected by copyright (and data that probably should be private, but that’s a digression). There was some GitHub bot that was literally copying and pasting people’s code. ChatGPT gets its answers from other people’s work. There’s nothing particularly intelligent about copying and pasting people’s work, and as soon as you try to do something like ‘analyze a collection of research papers and make a conclusion’ you are more likely to get a straight-up retarded answer than a good one.
 