California Senate Bill 53, formally titled the Transparency in Frontier Artificial Intelligence Act (TFAIA), is a state law of California that regulates developers of the most computationally intensive [[ai_alignment|artificial intelligence]] systems. The bill was authored by State Senator [[scott_wiener|Scott Wiener]] of San Francisco and was signed into law by Governor [[gavin_newsom|Gavin Newsom]] on September 29, 2025, becoming Chapter 138 of the California Statutes of 2025. SB 53 added Chapter 25.1, beginning at Section 22757.10, to Division 8 of the California Business and Professions Code. The substantive transparency, incident reporting, and whistleblower obligations took effect on January 1, 2026, making California the first U.S. jurisdiction to impose enforceable safety and disclosure obligations specifically on frontier AI developers. [1][2][3]
The statute applies to any company that has trained or initiated the training of a foundation model using more than 10^26 integer or floating-point operations of computing power, with the most demanding obligations reserved for so-called large frontier developers, defined as frontier developers whose annual gross revenues, taken together with their affiliates, exceeded $500 million in the preceding calendar year. As of early 2026, that threshold captured a small group of laboratories that includes [[openai|OpenAI]], [[anthropic|Anthropic]], [[google_deepmind|Google DeepMind]], [[meta_ai|Meta AI]], [[microsoft|Microsoft]], and xAI, with several second-tier laboratories likely to cross the compute threshold during 2026. Covered developers must publish a written Frontier AI Framework, release a transparency report before each model deployment, file critical safety incident reports with the California Office of Emergency Services within 15 days of discovery, and maintain anonymous whistleblower channels for covered employees. Civil penalties run up to $1 million per violation and are enforced by the California Attorney General. The bill also authorises CalCompute, a planned public cloud computing consortium intended to broaden access to large training runs. [1][3][4][5]
SB 53 was drafted in the wake of the controversy over Wiener's earlier and more sweeping AI safety bill, [[sb_1047|SB 1047]], which Newsom vetoed on September 29, 2024. After that veto Newsom convened the Joint California Policy Working Group on AI Frontier Models, co-led by [[fei-fei_li|Fei-Fei Li]], Mariano-Florentino Cuéllar, and Jennifer Tour Chayes, with prominent reviewers including [[yoshua_bengio|Yoshua Bengio]] and Ion Stoica. The group's June 17, 2025 California Report on Frontier AI Policy provided the intellectual scaffolding for SB 53, recommending a transparency-and-incident-reporting regime in place of the more prescriptive liability framework that had sunk SB 1047. The final law was endorsed by Anthropic, accepted with reservations by [[meta_ai|Meta]], and opposed in pre-signing letters by OpenAI and venture firm Andreessen Horowitz, with industry groups including the Chamber of Progress also coming out against it. Civil society sponsors included Encode AI, the Secure AI Project, and Economic Security California Action. [3][6][7][8][9][10]
The political road to SB 53 began with the 2024 fight over [[sb_1047|Senate Bill 1047]], the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which Wiener had introduced in February 2024. SB 1047 imposed a substantially broader regime: training-time safety testing, mandated kill switches, downstream developer liability for foreseeable harms, know-your-customer obligations on cloud compute providers, and civil penalties pegged to a percentage of training compute cost. The bill cleared both chambers of the California Legislature with strong margins by late August 2024, but was met with intense opposition from [[openai|OpenAI]], [[google_deepmind|Google]], [[meta_ai|Meta]], a substantial bloc of Silicon Valley venture capital led by Andreessen Horowitz, and a number of academic researchers. On September 29, 2024, exactly one year before he would sign SB 53, Newsom vetoed SB 1047, writing that the bill applied only to the largest models and risked being both over-broad and quickly outdated by technological progress. [11][12]
In his veto message Newsom announced that he would convene a working group of leading AI scientists and policy researchers to recommend a more durable framework. The Joint California Policy Working Group on AI Frontier Models was formally constituted in late 2024 with Stanford computer scientist [[fei-fei_li|Fei-Fei Li]] as a co-lead, alongside Mariano-Florentino Cuéllar (a former California Supreme Court justice and president of the Carnegie Endowment for International Peace) and Jennifer Tour Chayes (dean of the UC Berkeley College of Computing, Data Science, and Society). Reviewers included Turing laureate [[yoshua_bengio|Yoshua Bengio]], Databricks co-founder Ion Stoica, and a politically broad set of academics and practitioners. The group released a draft report in March 2025 and a final 53-page California Report on Frontier AI Policy on June 17, 2025. The report rejected both a laissez-faire posture toward frontier AI and the more prescriptive elements of SB 1047, recommending instead what it called a trust-but-verify approach: mandatory disclosure of safety frameworks, standardised incident reporting, whistleblower protections, and public investment in shared infrastructure such as compute clusters. [3][6][13]
Several other developments framed the post-veto landscape. The Trump administration took office in January 2025 and rescinded the prior Biden-era [[ai_executive_order|Executive Order 14110]] on AI; the new White House released its [[ai_action_plan|AI Action Plan]] in July 2025, calling for federal preemption of state AI rules and warning of regulatory fragmentation. The federal [[ai_safety_institute|US AI Safety Institute]] was rebranded and partially defunded over the spring of 2025. International momentum continued through the [[bletchley_declaration|Bletchley Declaration]] commitments of November 2023, the [[seoul_declaration|Seoul Declaration]] of May 2024, and the Frontier AI Safety Commitments adopted at the [[ai_safety_summit|AI Safety Summit]] series. Inside the laboratories, [[responsible_scaling_policy|Responsible Scaling Policies]] became the dominant voluntary instrument, with Anthropic, OpenAI, Google DeepMind, Meta, and xAI all publishing or updating frontier safety frameworks during 2024 and 2025. SB 53 took these voluntary practices and made them, for the first time, a legal obligation under U.S. law. [3][14][15][16]
The California Report on Frontier AI Policy, released on June 17, 2025, was the immediate intellectual antecedent of SB 53. The 53-page document treated frontier models as a class of dual-use general-purpose technologies with the potential for both significant economic benefit and significant catastrophic risk. It used a working definition of frontier models as the most capable subset of foundation models, those whose training is unusually compute-intensive and that produce capabilities transferable to a wide range of downstream applications. The authors argued that the rapid pace of capability progress, combined with the long lead time of typical legislative cycles, made any rule pegged to specific architectural features or specific risks likely to age badly, and that policy should therefore focus on disclosure, monitoring, and process. [3][13]
The report's principal recommendations grouped into five clusters. The first was mandatory transparency: large developers should be required to publish their internal safety policies, including how they identify and evaluate dangerous capabilities, what mitigations they apply before deployment, and how they secure model weights. The second was standardised incident reporting to a designated state agency, with clear definitions of what counts as a critical safety incident and a fixed reporting window. The third was protection for employees who raise good-faith concerns about catastrophic risk inside their employers, including anonymous internal channels and explicit anti-retaliation provisions. The fourth was the development of public computing infrastructure, both as an industrial policy measure and as a way to broaden the set of researchers who can study frontier systems independently. The fifth was an ongoing regulatory update mechanism, since static thresholds risk being either too narrow or too broad as technology shifts. SB 53 implements roughly the first four; the fifth is partially captured by the bill's annual review requirement on the California Department of Technology. [3][13]
The report also notably declined to endorse the more aggressive provisions of SB 1047. It did not call for a mandatory training-time safety certification, did not recommend strict liability on developers for downstream harms, and did not endorse the proposed penalty regime pegged to training compute cost. Members later said that this was a deliberate attempt to find common ground between the [[ai_safety|AI safety]] community, which favoured stricter controls, and the AI industry, which had argued that SB 1047's structure would entrench incumbents and slow open-source release. The working group's framing of a trust-but-verify regime was adopted nearly verbatim by Wiener and his staff when drafting SB 53 in early 2025. [3][13]
SB 53 adds a new Chapter 25.1 to Division 8 of the Business and Professions Code, beginning at Section 22757.10. The chapter is organised around five blocks: definitions; framework and transparency duties on large frontier developers; baseline transparency duties on all frontier developers; critical safety incident reporting; and whistleblower protections. A separate set of provisions amends the Government Code to authorise CalCompute and the annual reports of the Office of Emergency Services and the Department of Technology. [1][4]
Section 22757.11 defines the principal regulated entities and the conduct triggering coverage:
| Term | Statutory definition |
|---|---|
| Foundation model | A model trained on broad data, intended for general output, designed to be adapted to a wide range of distinctive tasks (cross-references existing California foundation-model statute) |
| Frontier model | A foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, including cumulative compute from pretraining and any subsequent fine-tuning or reinforcement learning |
| Frontier developer | A person or entity that has trained or initiated the training of a frontier model |
| Large frontier developer | A frontier developer whose annual gross revenues, taken together with its affiliates, exceeded $500,000,000 in the preceding calendar year |
| Critical safety incident | A defined set of incidents tied to catastrophic risk, including unauthorised access to model weights, materialised catastrophic harm, loss of control of a model, or a material increase in the model's catastrophic risk profile |
| Catastrophic risk | A foreseeable and material risk that the development, storage, use, or deployment of a frontier model will materially contribute to an event causing more than 50 deaths or serious injuries, or more than $1 billion in damage to, or loss of, property, through specified vectors |
| Catastrophic risk vectors | Providing expert-level assistance for chemical, biological, radiological, or nuclear weapons; conducting cyberattacks without meaningful human oversight; engaging in murder, assault, extortion, or theft involving conduct that would be a serious crime if performed by a human; or evading the control of its frontier developer or user |
The 10^26 integer or floating-point operation threshold sits an order of magnitude above the 10^25 FLOP level that triggers the [[eu_ai_act|EU AI Act]]'s presumption of systemic risk for general-purpose AI models. The catastrophic risk definition borrows directly from the SB 1047 drafting, with modifications to the harm thresholds and the introduction of explicit vectors. [1][4][5]
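For a sense of how the two coverage tests interact, the sketch below classifies a hypothetical developer under the statute's 10^26 FLOP and $500 million tests. Training compute is estimated with the common 6ND heuristic (FLOPs ≈ 6 × parameter count × training tokens); that heuristic, the function names, and every figure in the example are illustrative assumptions rather than anything drawn from the statute, which counts actual compute and aggregates affiliate revenue.

```python
# Illustrative sketch of SB 53's two coverage tests (Section 22757.11).
# Training compute is estimated with the common 6*N*D heuristic
# (FLOPs ~= 6 x parameter count x training tokens); the statute counts
# actual compute, cumulatively across pretraining, fine-tuning, and
# reinforcement learning. All figures below are hypothetical.

FRONTIER_FLOPS = 1e26      # frontier-model compute trigger
LARGE_DEV_REVENUE = 500e6  # large-frontier-developer revenue floor (USD)

def estimated_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute via the 6ND heuristic."""
    return 6.0 * params * tokens

def classify(training_runs: list[tuple[float, float]],
             annual_revenue_usd: float) -> str:
    """Return the developer's tier under the statute's definitions."""
    total = sum(estimated_flops(n, d) for n, d in training_runs)
    if total <= FRONTIER_FLOPS:       # coverage requires strictly more than 10^26
        return "not covered"
    if annual_revenue_usd > LARGE_DEV_REVENUE:
        return "large frontier developer"
    return "frontier developer"

# Hypothetical example: a 1T-parameter pretrain on 20T tokens plus a
# 100B-token fine-tune, by a developer with $2B in aggregated revenue.
runs = [(1e12, 2e13), (1e12, 1e11)]
print(classify(runs, annual_revenue_usd=2e9))  # -> large frontier developer
```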
Under Section 22757.12, every large frontier developer must write, implement, comply with, and clearly and conspicuously publish on its public website a Frontier AI Framework. The framework must address, at minimum:
| Required element | Description in the bill |
|---|---|
| Catastrophic risk identification | How the developer identifies and assesses the risk that one of its frontier models will materially contribute to a catastrophic risk |
| Mitigation strategies | Pre-deployment and post-deployment measures to mitigate identified catastrophic risks |
| Capability thresholds | The capability levels at which the developer would consider a frontier model to pose an unacceptable level of catastrophic risk |
| Cybersecurity | How the developer secures unreleased model weights against unauthorised modification, theft, or release |
| Internal use governance | How the developer governs internal model use, including by employees and contractors |
| Incident response | How the developer responds to a critical safety incident, including triage, containment, and disclosure |
| Third-party assessments | How and when the developer uses third-party evaluators to assess catastrophic risk |
| National and international standards | How the framework incorporates relevant industry, national, or international standards |
The framework must be reviewed and, if appropriate, updated at least annually, and any material updates must be published within 30 days. The framework's structure intentionally tracks that of an existing [[responsible_scaling_policy|Responsible Scaling Policy]] or frontier safety framework, allowing companies that already publish such documents to comply with relatively little additional work. [1][4][5][17]
Section 22757.12 also requires every frontier developer (whether large or not) to publish a transparency report before, or contemporaneously with, the deployment of a new frontier model or a substantially modified version of an existing frontier model. The transparency report must include:
| Item | Content |
|---|---|
| Model identification | Release date, intended uses, and modalities of input and output |
| Restrictions and conditions | Generally applicable restrictions on uses, and any conditions of access |
| Catastrophic risk assessment | A summary of the catastrophic risk assessment performed under the developer's framework |
| Third-party evaluator role | A description of the role of third-party evaluators, if any, in that assessment |
| Other governance attestations | A summary of how the deployment complies with the developer's framework, for large frontier developers |
Large frontier developers must additionally submit quarterly summaries of catastrophic risk assessments arising from their internal use of frontier models to the California Office of Emergency Services; these summaries are not made public. [1][4][5]
Section 22757.13 creates a mandatory incident reporting regime running through the California Office of Emergency Services. Frontier developers must report a critical safety incident to OES within 15 days of discovery. If the incident creates an imminent risk of death or serious injury, the developer must additionally provide notice to an appropriate authority, which may include OES, law enforcement, or other emergency response agencies, within 24 hours of the discovery. The bill instructs OES to design a confidential reporting form by January 1, 2027 and to begin publishing aggregated, anonymised annual reports on critical safety incidents on the same date. [1][4][5][18]
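The two reporting clocks can be made concrete with a short sketch; the datetime handling and function name below are illustrative assumptions, not anything prescribed by the statute or by OES's forthcoming form.

```python
# Illustrative sketch of Section 22757.13's two reporting clocks: a
# 15-day report to the Office of Emergency Services from discovery of
# a critical safety incident, plus a 24-hour notice to an appropriate
# authority where the incident poses an imminent risk of death or
# serious injury. Function and field names are hypothetical.

from datetime import datetime, timedelta

OES_WINDOW = timedelta(days=15)
IMMINENT_NOTICE_WINDOW = timedelta(hours=24)

def reporting_deadlines(discovered_at: datetime, imminent_risk: bool) -> dict:
    """Compute the filing deadlines that follow from a discovery time."""
    deadlines = {"oes_report_due": discovered_at + OES_WINDOW}
    if imminent_risk:
        deadlines["authority_notice_due"] = discovered_at + IMMINENT_NOTICE_WINDOW
    return deadlines

# Hypothetical incident discovered on the morning of March 1, 2026.
for name, due in reporting_deadlines(datetime(2026, 3, 1, 9, 0), True).items():
    print(name, due.isoformat())
```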
Section 1107 of the Labor Code, as added by SB 53, defines a covered employee as an employee, contractor, or unpaid adviser of a frontier developer whose responsibilities include assessing, managing, or addressing risk of catastrophic harm posed by a frontier model. Covered employees may not be prevented from disclosing, and may not be retaliated against for disclosing, in good faith, information they believe indicates that the developer's activities pose a specific and substantial danger to public health or safety arising from a catastrophic risk, or that the developer has violated TFAIA. Disclosure may be made to the Attorney General, to a federal authority, or to certain other state regulators. Large frontier developers must additionally maintain an internal anonymous reporting process and must provide the discloser with monthly status updates on any active investigation. The whistleblower provisions allow private suits for retaliation, with damages, injunctive relief, and attorney's fees available. [1][4][5][19]
A separate section of SB 53 adds Section 11546.8 to the Government Code, establishing the CalCompute consortium within the Government Operations Agency. The consortium is directed to develop a framework for a public cloud computing cluster that supports the development and deployment of safe, ethical, equitable, and sustainable artificial intelligence, to be submitted to the California Legislature by January 1, 2027. CalCompute is contingent on a future legislative appropriation; SB 53 does not itself fund construction. [1][3][4]
Section 22757.15 authorises the California Attorney General to bring a civil action for violations. Penalties are capped at $1,000,000 per violation, with severity calibrated to the nature of the noncompliance, the size of the developer, and any harm caused. The Attorney General has exclusive enforcement authority for the framework, transparency report, and incident reporting provisions; private rights of action are limited to retaliation suits under the whistleblower provisions. [1][4][5]
Section 22757.14 directs the California Department of Technology to conduct an annual multistakeholder consultation and to submit recommendations to the Legislature for updates to the chapter, particularly in light of technological developments and the experience of the OES incident-reporting regime. This is the bill's principal mechanism for keeping pace with the underlying technology, since the 10^26 FLOPs threshold and the $500 million revenue threshold are not automatically indexed. [1][3]
SB 53 retains the structural target of SB 1047 (the most compute-intensive frontier models) but is materially narrower in obligation. The principal differences are summarised below.
| Feature | SB 1047 (vetoed September 29, 2024) | SB 53 (signed September 29, 2025) |
|---|---|---|
| Compute threshold | 10^26 FLOPs with more than $100 million in training compute cost | 10^26 FLOPs |
| Revenue gating | None for the principal duties | Strictest duties only at $500 million annual revenue |
| Pre-training certification | Mandatory "limited duty exemption" certification | Not required |
| Mandatory shutdown / kill switch | Required | Not required |
| Downstream developer liability | Strict liability for foreseeable critical harms | Removed |
| Cloud KYC requirements | Required | Removed |
| Incident reporting window | 72 hours | 15 days, with 24 hours for imminent harm |
| Civil penalties | Up to 10 percent of training compute cost (rising to 30 percent for repeat offences) | Up to $1 million per violation |
| Whistleblower protections | Yes, narrower | Yes, with anonymous channels and Labor Code remedies |
| Public compute provision | Yes (CalCompute) | Yes (CalCompute) |
| Frontier Model Division (FMD) | Created a dedicated state regulatory division | Replaced with reporting to existing OES and oversight by Department of Technology |
Observers including the Brookings Institution, the Future of Privacy Forum, the Carnegie Endowment, and the law firm Latham & Watkins have characterised SB 53 as a transparency-first regime that pulls back from SB 1047's prescriptive liability provisions while preserving its core idea that the largest models warrant special public oversight. [3][5][7][20]
Wiener introduced SB 53 in the 2025-2026 California legislative session on January 7, 2025. The bill carried his original goal forward in modified form, and from the start it was framed by Wiener and his staff as the legislative implementation of the working group's recommendations rather than as a return to SB 1047. The bill went through repeated amendment in both chambers, narrowing the compute and revenue thresholds and refining definitions. [1][2][21][22]
The high-level legislative timeline was as follows.
| Date | Action |
|---|---|
| January 7, 2025 | SB 53 introduced by Senator Scott Wiener |
| February 27, 2025 | First Senate amendment |
| March 25, 2025 | Senate policy committee: do pass as amended; re-referred to Judiciary |
| March 27, 2025 | Senate amendment |
| April 8, 2025 | Senate Judiciary: do pass, refer to Appropriations |
| May 23, 2025 | Senate amendment |
| May 28, 2025 | Senate Third Reading: passed 37-0 (3 not voting) |
| June 17, 2025 | Joint California Policy Working Group releases final report |
| July 1, 2025 | Assembly: do pass, refer to Privacy and Consumer Protection |
| July 8, 2025 | Assembly amendment |
| July 16, 2025 | Assembly: do pass, refer to Appropriations |
| July 17, 2025 | Assembly amendment |
| August 29, 2025 | Assembly: do pass as amended |
| September 2, 2025 | Assembly amendment (final) |
| September 8, 2025 | Anthropic publishes formal endorsement |
| September 11, 2025 | Assembly: do pass |
| September 12, 2025 | Assembly Third Reading: passed 59-7 (14 not voting) |
| September 13, 2025 | Senate concurrence: passed 29-8 (3 not voting) |
| September 17, 2025 | Bill enrolled and presented to the Governor |
| September 29, 2025 | Signed by Governor Gavin Newsom; became Chapter 138 |
| January 1, 2026 | Substantive transparency, framework, and reporting provisions take effect |
| January 1, 2027 | Critical Safety Incident reporting form to be ready; first OES annual report due; CalCompute framework due to Legislature |
The most consequential amendments narrowed the scope. The May 23, 2025 amendment introduced the affiliate-aggregated $500 million annual revenue floor for the Frontier AI Framework duties, replacing a flat coverage rule. The July 8 amendment refined the compute threshold to make explicit that fine-tuning and reinforcement-learning runs counted toward 10^26 FLOPs cumulatively. The September 2, 2025 amendment, made shortly before final passage, lengthened the OES incident reporting window from 72 hours to 15 days, while keeping a 24-hour rule for imminent threats, in response to industry concerns that the shorter window would generate noise without actionable signal. [1][22][23]
Industry response to SB 53 was unusually divided, even by the standards of California AI legislation.
[[anthropic|Anthropic]] formally endorsed SB 53 on September 8, 2025, three weeks before signing, becoming the only major frontier laboratory to publicly support the bill before passage. The company argued that the law's transparency requirements largely codified its existing [[responsible_scaling_policy|Responsible Scaling Policy]] practices, and that mandatory disclosure created a level playing field by preventing competitors from quietly retreating from voluntary safety commitments. Anthropic acknowledged a preference for federal regulation but said powerful AI advancements would not wait for consensus in Washington. After the bill was signed, Anthropic published a Frontier Compliance Framework on December 19, 2025, mapping its existing safety practices to the SB 53 requirements and committing to specific evaluation domains for its [[claude|Claude]] models. [9][24][25]
[[openai|OpenAI]] sent a public letter to Newsom in August 2025 urging him to veto SB 53. The letter focused on three concerns: that the bill represented a regulatory patchwork that would diverge from any future federal framework; that the 10^26 FLOPs and $500 million revenue thresholds would capture OpenAI's core models while offering it no meaningful relief from regulatory burden; and that the catastrophic risk language risked being interpreted broadly. After the bill was signed, OpenAI publicly softened its stance, telling reporters it was pleased that California had created a path toward harmonisation with the federal government. The company later supported a 2026 ballot initiative seeking to restrict the scope of state AI laws, drawing pointed criticism from civil society sponsors of SB 53. [10][26][27]
[[google_deepmind|Google DeepMind]] and its parent Alphabet were among the first companies to lobby against SB 1047 in 2024 but took a notably quieter posture on SB 53. Google did not publicly endorse the bill, but it also did not publish a veto request letter. After signing, Google emphasised the importance of federal preemption, and pointed to the September 22, 2025 update of its Frontier Safety Framework, which the company argued already met or exceeded the SB 53 framework requirements. [10][27][28]
[[meta_ai|Meta AI]], whose Llama family of models had been a focus of the SB 1047 debates because of its open-weight release strategy, shifted its posture on SB 53: a spokesperson told reporters that the company supports balanced AI regulation and called the bill a positive step in that direction. The narrower revenue gating, and the absence of the cloud KYC and downstream-liability provisions that had especially worried open-weight advocates, drove much of the change. [10]
[[microsoft|Microsoft]], the largest investor in OpenAI, took an intermediate posture. The company did not formally endorse or oppose SB 53. President Brad Smith publicly called for federal AI legislation throughout 2025, citing the bill as evidence of growing state activity. Microsoft's existing Responsible AI program already produced documentation similar to that required by the SB 53 transparency reports, reducing its compliance burden in practice. [3][10]
Elon Musk's xAI, which had publicly supported SB 1047 in 2024, did not formally endorse or oppose SB 53. As of late 2025 the company met both the compute and revenue thresholds and was therefore subject to the strictest set of obligations under the bill. xAI published an updated Frontier Artificial Intelligence Framework on December 31, 2025, partly in response to SB 53. [29][30]
Andreessen Horowitz emerged as the most vocal opponent. The firm helped organise a super PAC during 2025 specifically to oppose state AI regulation, and its policy team argued that SB 53 imposed excessive burdens on startups, would discourage Silicon Valley investment, and would be quickly outpaced by federal action. Other firms including Y Combinator and Khosla Ventures took softer positions, with some partners endorsing the bill personally. The Chamber of Progress, an industry trade group with a substantial venture capital presence on its board, argued in a published response that the bill imposed sweeping restrictions that would chill entrepreneurship in California. [10][27]
SB 53 had three principal civil society sponsors: Encode AI (formerly Encode Justice, the youth-led [[ai_safety|AI safety]] advocacy group founded by Sneha Revanur), the Secure AI Project (a non-profit founded by former Anthropic and Google policy staff to translate technical AI safety concerns into legislation), and Economic Security California Action (ESCAA), a state-level group focused on equitable distribution of new technologies. After signing, Encode AI's Nathan Calvin called California a leader in setting common-sense safeguards on AI. ESCAA's Teri Olle argued that the law recognised that AI's benefits should not be confined to a small number of corporations and their investors. The Secure AI Project's Thomas Woodside described the bill as a major step in recognising the stakes of frontier AI. [3][24][31]
The Center for AI Safety, which had been a key supporter of SB 1047, took a generally favourable position on SB 53 while expressing reservations that the law's narrow focus on the largest developers and on catastrophic risk left out important categories of AI harm. The Future of Life Institute likewise endorsed the bill while pushing for stronger third-party evaluation requirements. The Electronic Frontier Foundation expressed concerns about transparency report disclosures potentially being used to chill research, while declining to oppose the bill outright. The American Civil Liberties Union of California neither endorsed nor opposed SB 53, noting that the law's narrow focus on a small number of large developers meant it did not address most civil liberties harms arising from AI deployment. [3][4][32]
SB 53 was signed less than three months before the federal government took concerted action against state AI regulation, but the question of preemption was central to the political debate from the beginning. The Trump administration's [[ai_action_plan|AI Action Plan]], released July 23, 2025, called for a national, innovation-focused framework and explicitly described state regimes as a source of fragmentation, although the final text stopped short of proposing immediate preemption. OpenAI's veto letter and Andreessen Horowitz's lobbying both argued that SB 53 should be deferred in favour of federal action. The bill's sponsors and supporters, including Anthropic, accepted that federal legislation was preferable in principle but argued, citing congressional gridlock, that California should not wait. [10][33][34]
In the months after signing, the preemption question moved from the political to the legal arena. On December 11, 2025, [[donald_trump|President Donald Trump]] signed an executive order titled Ensuring a National Policy Framework for Artificial Intelligence. The order directed the Department of Justice to challenge state AI regimes in federal court, conditioned certain federal grants on the absence of inconsistent state regulation, and sought to use administrative reinterpretation of existing federal statutes to displace state AI rules. An earlier draft of the order had explicitly named SB 53 as fear-based and ideologically driven; the final text used softer language about regulatory patchworks, but the policy direction was clear. The order itself does not preempt state law; only an act of Congress or a successful court challenge can do so. As of May 2026, no Department of Justice suit against SB 53 had been filed, and the law remained operative. [33][34]
The Dormant Commerce Clause is the most likely federal-constitutional theory for any future challenge. SB 53 applies to any frontier developer doing business in California, which in practice means any company that wants to operate in the state's market, so a challenger would need to argue that the law has the practical effect of regulating extraterritorial conduct. The Future of Privacy Forum has argued that SB 53's structure is more vulnerable on this point than New York's later [[ai_safety|RAISE Act]], because the New York statute is explicitly tied to operations within New York, whereas SB 53 attaches to any frontier developer doing business in California. The First Amendment is a secondary theory, on the view that the framework and transparency report requirements amount to compelled speech. xAI's parallel challenge to AB 2013, California's training-data transparency law, could provide a template for any future SB 53 challenge. [4][5][35]
With no comprehensive federal AI law in effect, several U.S. states moved during 2025 and 2026 to fill the gap. SB 53 was the first state law to impose a frontier-model-specific safety regime, but it was followed in short order by New York's RAISE Act and by amendments to other state laws. The principal comparables are summarised below.
| State | Law | Effective | Scope | Principal mechanism |
|---|---|---|---|---|
| California | SB 53 (TFAIA) | January 1, 2026 | Frontier developers (>10^26 FLOPs); strict duties at >$500M revenue | Framework + transparency reports + incident reporting + whistleblower; AG enforcement; civil penalty up to $1M per violation |
| New York | Responsible AI Safety and Education (RAISE) Act | Enacted after SB 53; amended March 27, 2026 to align with TFAIA | Frontier developers operating in New York | Transparency-and-incident regime; civil penalty up to $1M (first) and $3M (subsequent); 72-hour incident reporting; no whistleblower provisions |
| Colorado | Colorado AI Act (SB 24-205) | June 30, 2026 (delayed from February 1, 2026) | High-risk AI systems used for consequential decisions | Algorithmic discrimination duty of reasonable care; impact assessments; consumer disclosures |
| Texas | Responsible AI Governance Act (TRAIGA, HB 149) | January 1, 2026 | Broad: bans certain harmful uses; AI sandbox programme | Prohibitions on specified harms; 36-month regulatory sandbox under Department of Information Resources |
| Utah | AI Policy Act and amendments (SB 149, SB 226, others) | 2024-2026, rolling | Consumer-facing AI uses | Disclosure of AI interaction; sectoral rules in healthcare, mental health, and education |
SB 53 is the only one of these laws to focus exclusively on frontier-scale models. Colorado's AI Act addresses a different policy problem (algorithmic discrimination in consequential decisions). Texas' TRAIGA adopts a much broader scope but uses prohibitions on specified harms rather than disclosure obligations. Utah's regime is overwhelmingly disclosure-based and consumer-facing. Of the comparators, only New York's RAISE Act is a direct sister statute, and the New York legislature explicitly amended its bill in March 2026 to track TFAIA's structure more closely. [5][20][35][36]
The substantive transparency, framework, and incident reporting provisions of SB 53 took effect on January 1, 2026. By the May 2026 cut-off of this article, the implementation picture was still developing.
[[anthropic|Anthropic]] published its Frontier Compliance Framework on December 19, 2025, mapping its existing [[responsible_scaling_policy|Responsible Scaling Policy]] (subsequently updated to version 3.0 on February 24, 2026) to the SB 53 framework requirements. [[google_deepmind|Google DeepMind]] continued to update its Frontier Safety Framework, with version 3.0, released September 22, 2025, specifically aligning with the SB 53 framework taxonomy. [[openai|OpenAI]] argued that its existing Preparedness Framework (version 2, April 2025) and the system cards published with [[gpt-5|GPT-5]], GPT-5.1, GPT-5.2, and GPT-5.5 satisfied SB 53's framework and transparency report obligations. xAI published a redrafted Frontier Artificial Intelligence Framework on December 31, 2025, replacing the August 2025 Risk Management Framework. Meta updated its 2025 Frontier AI Approach document with additional disclosures aligned to SB 53. [17][29][37]
The California Office of Emergency Services issued draft incident-reporting forms in March 2026, with a public comment period through May 2026. The Department of Technology held its first stakeholder consultations in early 2026. CalCompute remained in the design stage, with the Government Operations Agency soliciting input on the architecture of the public cloud cluster; the framework report to the Legislature is due January 1, 2027, and any actual cluster construction is contingent on a future legislative appropriation. The California Attorney General had not, as of May 2026, brought any enforcement action under SB 53. No critical safety incidents had been publicly reported, although the OES reporting form remained in draft and the office had not yet begun receiving filings through it. [3][4][37]
Independent assessments of compliance during the first quarter of 2026 noted considerable variation. METR's December 2025 update on common elements of frontier AI safety policies found that the Anthropic and Google DeepMind frameworks were the closest to full SB 53 compliance, that OpenAI's documentation was extensive but unevenly mapped to the bill's specific framework elements, and that Meta and xAI had material gaps in third-party assessment and internal governance disclosures. The Stanford CRFM Foundation Model Transparency Index, released in December 2025, drew similar conclusions. The [[frontier_model_forum|Frontier Model Forum]]'s technical report series on frontier safety frameworks served as the principal industry-led standardisation venue, publishing reports on risk taxonomy, mitigations, and third-party assessments during 2025 and 2026 that explicitly referenced SB 53. [38][39][40]
As of May 2026, no court had enjoined or otherwise paused the operation of SB 53. xAI's earlier challenge to California's AB 2013 (the AI training data transparency law of 2024) is the closest existing analogue, raising First Amendment compelled speech and trade secret arguments that could be imported into a future SB 53 challenge. The Ninth Circuit's December 2, 2025 grant of an injunction against California's separate climate disclosure law SB 261 was widely cited as evidence that compelled disclosure regimes face meaningful federal-constitutional risk, although SB 261 implicates different statutory and First Amendment frameworks than SB 53. [35][41]
The Trump administration's December 11, 2025 executive order on state AI regulation directed the Department of Justice to consider preemption and Dormant Commerce Clause challenges to state AI laws, but as of May 2026 no such suit had been filed against SB 53. Carnegie Endowment analysis published in late 2025 argued that the most likely DOJ approach was conditioning federal grant funding on the absence of inconsistent state rules, rather than direct litigation, since direct litigation against a transparency-and-incident-reporting regime would be a more difficult constitutional case than the administration's earlier draft order suggested. [33][34]