California legislature passes controversial “kill switch” AI safety bill
As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if a model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than the real, present-day harms of AI use cases like deepfakes or misinformation.
In announcing the legislative passage Wednesday, bill sponsor and state senator Scott Wiener cited support from AI industry luminaries such as Geoffrey Hinton and Yoshua Bengio (who both last year also signed a statement warning of a "risk of extinction" from fast-developing AI tech).
Meanwhile, OpenAI and Anthropic have "voluntarily" signed deals with the US government to allow early access to their models, through the US AI Safety Institute. They will also collaborate with the UK AI Safety Institute:
Through the deal, the US AI Safety Institute will "receive access to major new models from each company prior to and following their public release." This will ensure that public safety won't depend exclusively on how the companies "evaluate capabilities and safety risks, as well as methods to mitigate those risks," NIST said, but also on collaborative research with the US government.