US President Biden officially signed the Artificial Intelligence Safety Act
The AI Safety Act signed by US President Biden marks an important step for the United States in AI regulation, aiming to balance technological innovation against risk prevention. The following is an analysis of the bill's core content and the reactions from various parties:
1. Core requirements of the bill
Security review mechanism: Technology companies developing advanced AI models (such as large language models and generative AI systems) must pass a federal security assessment covering data privacy, algorithm transparency, and potential abuse risks.
Key supervision of high-risk areas: AI applications in critical industries such as healthcare, finance, and defense require additional compliance review, similar to the EU AI Act's classification of "high-risk systems".
Transparency obligations: Generative AI content (such as deepfakes) must be clearly labeled, and the copyright sources of training data must be disclosed, consistent with the EU's transparency rules.
2. Policy background and motivation
Coping with international competition: The United States faces dual pressure in AI from China (e.g., Huawei's Ascend chips and Moonshot AI's Kimi model) and from the European Union (full implementation of the AI Act). The bill partially draws on the EU's risk-classification approach but emphasizes "light regulation and innovation promotion".
Responding to industry calls: OpenAI CEO Sam Altman and others have warned that "excessive regulation may force technology outflow"; the new bill attempts to ease corporate concerns through a flexible review mechanism.
Election-year political considerations: With AI-generated misinformation a growing threat during the 2024 election, the bill may help the Democratic Party claim leadership on "technology governance".
3. Divergent corporate responses
Support from technology giants:
Google, Microsoft, and others have promised to cooperate, as they have ample resources and their existing products (such as Gemini and Azure AI) already meet the security standards.
Although Meta refused to sign the EU's Code of Conduct, it has taken a positive attitude toward the US bill, likely because the review threshold is lower than the EU's.
Opposition from start-ups:
They criticize the high cost of security review, which may entrench a "monopoly of large companies"; firms such as Anthropic, which relies on the open-source ecosystem, may be affected.
Some companies have proposed "tiered regulation" that would review only ultra-large-scale models (e.g., those with more than one trillion parameters).
4. Potential impact and challenges
Balance between innovation and regulation: A lengthy review process could delay technological iteration, especially for agile start-ups; but moderate regulation may reduce the crisis of social trust caused by the "AI black box".
Difficulty of international coordination: Differences in regulatory logic between the United States and the European Union (e.g., the EU's outright ban on "social scoring" versus the absence of clear US restrictions) may raise compliance costs for multinational companies.
States' rights dispute: The Republican-led House of Representatives once proposed "prohibiting states from regulating AI", and the federal bill may conflict with state laws.
5. Future prospects
Short term: The bill's specific details (such as review standards and exemption conditions) will become the focus of negotiation, and federal agencies may draw on the NIST AI Risk Management Framework to formulate technical guidelines.
Long term: If the bill succeeds in reducing the risks of AI abuse (such as election interference and deepfakes), it may become a global regulatory model; if it instead hinders innovation, it may be rolled back by a Trump administration.
The Biden administration has positioned this bill as a key measure to "ensure AI leadership", but its actual effect will depend on enforcement and industry collaboration. As AI competition heats up, this US regulatory experiment may reshape the global technology-governance landscape.