Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations - An academic paper on the future role of AI content and human provenance on the Internet.

Breadbassket

This is a paper talking about provenance and trying to have sites differentiate between AI-generated and human-made posts.

Some excerpts:
In recent years, artificial intelligence (AI) systems have significantly improved and their capabilities have expanded. In particular, AI systems called “generative models” have made great progress in automated content creation, such as images generated from text prompts. One area of particularly rapid development has been generative models that can produce original language, which may have benefits for diverse fields such as law and healthcare.

Once models are built, developers can choose how users interact with them. AI providers have some actions available to them that might reduce bad actors’ access to generative language models. At the same time, these actions could be highly costly for organizations looking to commercialize their models and would require large amounts of cooperation across all relevant AI providers to ensure that propagandists could not simply gravitate toward other equally capable models without similar restrictions in place.

Because technical detection of AI-generated text is challenging, an alternate approach is to build trust by exposing consumers to information about how a particular piece of content is created or changed. Tools such as phone cameras or word processing software could build the means for content creators to track and disclose this information. In turn, social media platforms, browsers, and internet protocols could publicize these indicators of authenticity when a user interacts with content.

This intervention requires a substantial change to a whole ecosystem of applications and infrastructure in order to ensure that content retains indicators of authenticity as it travels across the internet. To this end, the Coalition for Content Provenance and Authenticity (C2PA) has brought together software application vendors, hardware manufacturers, provenance providers, content publishers, and social media platforms to propose a technical standard for content provenance that can be implemented across the internet. This standard would provide information about content to consumers, including its date of creation, authorship, hardware, and details regarding edits, all of which would be validated with cryptographic signatures.
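
A minimal sketch of the core idea in that excerpt: a provenance claim bound to the content's hash and validated with a cryptographic signature. This is not the real C2PA data model (which uses JUMBF containers and X.509 certificate chains); the field names are invented, and the third-party `cryptography` package stands in for whatever signing stack a real implementation would use.

```python
import json
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(content: bytes, author: str, tool: str,
                  key: Ed25519PrivateKey) -> dict:
    # Bind author/tool/date metadata to a hash of the content itself.
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "tool": tool,  # e.g. camera model or editing software
        "created": "2023-01-10T00:00:00Z",  # placeholder timestamp
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_manifest(content: bytes, manifest: dict, pub) -> bool:
    claim = manifest["claim"]
    # Any edit to the content changes its hash and invalidates the claim.
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
photo = b"...image bytes..."
m = make_manifest(photo, author="alice", tool="PhoneCam 3", key=key)
print(verify_manifest(photo, m, key.public_key()))         # True
print(verify_manifest(photo + b"x", m, key.public_key()))  # False: edit breaks it
```
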
Developers might choose to restrict model access to only trusted institutions, such as known companies and research organizations, and not to individuals or governments likely to use their access to spread disinformation. Huawei initially appears to have intended an access regime along these lines for its PanGu-α model.
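
At the API layer, this "trusted institutions" regime amounts to an allowlist in front of the model endpoint. A toy sketch; the org names and key scheme are invented, and no real provider's system is implied:

```python
# Hypothetical structured-access gate: serve the model only to vetted orgs.
VETTED_ORGS = {"acme-research": "org-key-123", "example-univ": "org-key-456"}

def authorize(org_id: str, api_key: str) -> bool:
    return VETTED_ORGS.get(org_id) == api_key

def generate(prompt: str, org_id: str, api_key: str) -> str:
    if not authorize(org_id, api_key):
        raise PermissionError("model access is limited to vetted institutions")
    return f"<model output for {prompt!r}>"  # stand-in for the actual model call

print(generate("hello", "acme-research", "org-key-123"))
```
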
The preceding mitigations address the supply of AI-generated misinformation. However, as long as target audiences remain susceptible to propaganda that aligns with their beliefs, there will remain an incentive for influence operations generally, as well as incentives more specifically for propagandists to leverage AI to make those operations more effective. In this section, we therefore discuss two interventions that might help address the demand side of the misinformation problem: media literacy campaigns, and the use of AI tools to aid media consumers in interpreting and making informed choices about the information they receive.
Just as generative models can be used to generate propaganda, they may also be used to defend against it. Consumer-focused AI tools could help information consumers identify and critically evaluate content or curate accurate information. These tools may serve as an antidote to influence operations and could reduce the demand for disinformation. While detection methods (discussed in Section 5.2.1) aim to detect whether content is synthetic, consumer-focused tools instead try to equip consumers to make better decisions when evaluating the content they encounter.
Possibilities for such tools are numerous. Developers could produce browser extensions and mobile applications that automatically attach warning labels to potential generated content and fake accounts, or that selectively employ ad-blockers to demonetize them. Websites and customizable notification systems could be built or improved with AI-augmented vetting, scoring, and ranking systems to organize, curate, and display user-relevant information while sifting out unverified or generated sources. Tools and built-in search engines that merely help users quickly contextualize the content they consume could help their users evaluate claims, while lowering the risk of identifying true articles as misinformation. Such “contextualization engines” may be especially helpful in enabling users to analyze a given source and then find both related high-quality sources and areas where relevant data is missing. By reducing the effort required to launch deeper investigations, such tools can help to align web traffic revenue more directly with user goals, as opposed to those of advertisers or influence operators. Another proposal suggests using AI-generated content to educate and inoculate a population against misleading beliefs.
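
As a toy illustration of the warning-label idea: a consumer-side tool boils down to scoring each post and attaching context before display. The detector below is a stub stand-in; a real tool would call a trained classifier or a provenance check.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def synthetic_score(text: str) -> float:
    """Stub stand-in for a real AI-text detector; returns a fake probability."""
    return 0.9 if "as an ai language model" in text.lower() else 0.1

def render(post: Post, threshold: float = 0.5) -> str:
    # Attach a warning label when the detector is suspicious; pass through otherwise.
    label = " [possibly AI-generated]" if synthetic_score(post.text) >= threshold else ""
    return f"{post.author}: {post.text}{label}"

print(render(Post("bot7", "As an AI language model, I think...")))
print(render(Post("alice", "Just got back from the lake.")))
```
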

If you thought two-factor authentication was something, this should be even wilder. Is the age of the digital human ID just over the horizon?

PDF in attachments
 

It's what I already wrote in the other thread: it's all just blah blah blah around the fact that the powerful want a monopoly on AI tools. They want to wield this power alone and not share it with the plebs, because they realize how mighty it is. If everyone can generate convincing fake news, their fake news loses reach and will be met with more scrutiny, as people realize how easy it is to manipulate.

Fuckers.

I like the part about limiting sales of powerful GPUs to "civilians".
 
Developers Spread Radioactive Data to Make Generative Models Detectable
Platforms and AI Providers Coordinate to Identify AI Content
Already being done by some that we know of, but it could be mandated by legislation. One country or collective like the European Union could force most providers to do this. As an example, the EU basically forced Apple to adopt USB-C ports on the iPhone worldwide. Bonus points if it not only identifies the content as AI-generated but also points back to your user account. That could lead to your cancellation or arrest (see: UK).
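
For what "coordinating to identify AI content" can look like technically, one published family of techniques is the statistical token watermark of Kirchenbauer et al. (2023): a cooperating model biases generation toward a pseudorandom "green" subset of the vocabulary at each step, and anyone who knows the hashing scheme can count green tokens and run a z-test. A simplified sketch with toy token IDs, not tied to any real tokenizer:

```python
import hashlib
import math
import random

VOCAB_SIZE = 50_000
GAMMA = 0.5  # fraction of the vocabulary that is "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    # Hash the previous token so the green list shifts at every position.
    h = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64 < GAMMA

def watermark_z_score(tokens: list[int]) -> float:
    # Without a watermark, green hits ~ Binomial(n, GAMMA), so z stays near 0.
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

random.seed(0)
plain = [random.randrange(VOCAB_SIZE) for _ in range(200)]
print(round(watermark_z_score(plain), 2))  # near 0; watermarked text scores high
```
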

Governments Impose Access Controls on AI Hardware
To some extent, money will be the barrier that keeps the anonymous chudbro from training their own great AI. You aren't buying a truckload of GPUs or a Cerebras wafer, and if you are, you will be noticed and put on a watchlist. Running some of these models could be difficult too, if VRAM and performance requirements are extreme.
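
Rough numbers back this up: weights alone for an N-parameter model need about N times the bytes per parameter, before counting activations or KV cache. A quick back-of-envelope script (approximate figures, not a deployment guide):

```python
def weight_vram_gib(params_billion: float, bytes_per_param: float) -> float:
    # Memory for the weights only, in GiB.
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params, bpp, fmt in [(175, 2, "fp16"), (175, 0.5, "4-bit"), (13, 2, "fp16")]:
    print(f"{params}B @ {fmt}: ~{weight_vram_gib(params, bpp):.0f} GiB")
# 175B @ fp16:  ~326 GiB -> a rack of datacenter GPUs
# 175B @ 4-bit:  ~81 GiB -> still multiple top-end consumer cards
# 13B  @ fp16:   ~24 GiB -> already at the top of the consumer range
```
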

I think the hardware restrictions could come later, in the 2030s, when we see new hardware breakthroughs and there's serious talk of Singularity and Skynet among politicians. Whether or not you believe it's possible to make a strong artificial general intelligence, they'll jump on it. Decades of Hollywood propaganda have primed the public to be scared of AI. Enterprise pricing and export restrictions will keep it out of your hands.

AI Developers Develop New Norms Around Model Release
It will be interesting to see how the open source world reacts, or gets reacted to. Watch programmers get prosecuted for releasing "dangerous" AI code.

Platforms Require “Proof of Personhood” to Post
AI spampocalypse is the perfect justification to require verification on all major social media platforms. Even Twitter savior Musk is a fan of verifying everybody. Tweaks to Section 230 could increase liability on platforms that don't want to play ball (KF obviously included). There could be bipartisan support for some evil bill that eliminates Internet freedom.
 
I like the part about limiting sales of powerful GPUs to "civilians".
This would be such a bad move. If there's one thing that would cause a violent peasant revolt, it's pissing off the gamers. Imagine if people could no longer sublimate their long-suppressed desire for armed combat in video games. CoD bros would become a new Taliban.
 
This would be such a bad move. If there's one thing that would cause a violent peasant revolt, it's pissing off the gamers. Imagine if people could no longer sublimate their long-suppressed desire for armed combat in video games. CoD bros would become a new Taliban.
Not to mention it would just lead to industry stagnation in general. This would be unregulatable. I don't doubt some country will try it, but it would be so disastrous to businesses the world over that it just would not be worth it. If only a few approved retards can get the hardware, the hardware becomes far more expensive and development slows down. A gated community of 'professionals' forms, and they get so high off themselves that we see little progress in basically every realm of data science. Eventually the shitty GPUs the plebs are allowed to have start to break down, and things really get bad.
 
This would be such a bad move. If there's one thing that would cause a violent peasant revolt, it's pissing off the gamers. Imagine if people could no longer sublimate their long-suppressed desire for armed combat in video games. CoD bros would become a new Taliban.
You should look around for what in LessWrong, OpenAI, and other "rationalist" cult spaces is euphemistically called "the pivotal act".
In essence, there are several well-connected types in Silicon Valley (and some in Washington) who are looking to bring most or all GPU/CPU production onshore into America - which could then be destroyed via governmental activity or, failing that, simple terrorism, now that it's local and not in China.
Several also believe they need to create a worldwide totalitarian state in order to prevent any AI from coming to fruition.
 
This would be such a bad move. If there's one thing that would cause a violent peasant revolt, it's pissing off the gamers. Imagine if people could no longer sublimate their long-suppressed desire for armed combat in video games. CoD bros would become a new Taliban.
My friend is a motion graphics artist and he needs like 5 of them to render his work. He would basically be put out of a job if they did something like this.
 
You should look around for what in LessWrong, OpenAI, and other "rationalist" cult spaces is euphemistically called "the pivotal act".
In essence, there are several well-connected types in Silicon Valley (and some in Washington) who are looking to bring most or all GPU/CPU production onshore into America - which could then be destroyed via governmental activity or, failing that, simple terrorism, now that it's local and not in China.
Several also believe they need to create a worldwide totalitarian state in order to prevent any AI from coming to fruition.

It doesn't all need to be within America or to be destroyed, as long as it can be controlled. TSMC making a GPU for AMD or Nvidia in Taiwan can be controlled. Chinese SMIC making a GPU for another Chinese company can't. Nevertheless, Taiwanese fabs could be taken offline permanently by an invasion, causing consumer hardware prices to skyrocket and never recover.

Inject surveillance into all personal computing, restrict what people can buy, coerce the public into using more cheap dumb terminals, put extra scrutiny on the graphics artist buying 5 GPUs.
 
U.S. inks bill to force geo-tracking tech for high-end gaming and AI GPUs [archive]

I'm hoping that this does not pass or even make headway at all because it's a stupendously retarded idea.
Cotton Introduces Bill to Prevent Diversion of Advanced Chips to America’s Adversaries and Protect U.S. Product Integrity (archive)
Text of the bill may be found here.

The Chip Security Act would direct the Secretary to:
  • Require a location verification mechanism on export-controlled advanced chips or products with export-controlled advanced chips within 6 months of enactment and require exporters of advanced chips to report to BIS if their products have been diverted away from their intended location or subject to tampering attempts.
  • Study, in coordination with the Secretary of Defense, other potential chip security mechanisms in the next year and establish requirements over the next few years for implementing such mechanisms, if appropriate, on covered advanced chips. This longer timeline accommodates the years-long technological roadmap for development of the next generation of advanced chips.
  • Assess, in coordination with the Secretary of Defense, the most up-to-date security mechanisms annually for three years and determine if any new mechanisms should be required.
  • Make recommendations annually for three years on how to make export controls more flexible, thus streamlining shipments to more countries.
  • Prioritize confidentiality when developing requirements for chip security mechanisms.
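
The "location verification mechanism" in the first bullet is usually discussed in terms of speed-of-light latency bounding: a verification server challenges the chip, and the round-trip time puts a hard ceiling on how far away it can physically be. The bill leaves the actual mechanism to the Secretary and BIS, so this is only a sketch of the underlying physics:

```python
C_KM_PER_MS = 299.792  # light travels ~300 km per millisecond

def max_distance_km(rtt_ms: float) -> float:
    # One-way travel time is at most half the round trip; distance <= c * t.
    return C_KM_PER_MS * rtt_ms / 2

# A chip answering a challenge from a US server within 8 ms cannot be more
# than ~1200 km away, which would rule out quiet diversion overseas.
print(f"{max_distance_km(8):.0f} km")  # ~1199 km
```
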
 
