The Frontier Model Forum is an industry body established on July 26, 2023, by Anthropic, Google, Microsoft, and OpenAI to advance safety research, identify best practices, and facilitate information sharing for the most advanced artificial intelligence systems. Operating as a 501(c)(6) nonprofit organization, the Forum focuses specifically on managing severe risks from frontier AI models, a term the organization defines as general-purpose AI models that constitute the state of the art and outperform other widely deployed models across capability benchmarks. The Forum does not engage in lobbying, and its funding comes exclusively from member firm fees.
The Forum was expanded in May 2024 with the addition of Amazon and Meta, bringing total membership to six of the world's largest AI developers. Under the leadership of Executive Director Chris Meserole, formerly of the Brookings Institution, the organization operates through five technical workstreams covering AI biosecurity, cybersecurity, nuclear security, AI model security, and frontier AI safety frameworks. A separately administered grant-making initiative called the AI Safety Fund has distributed more than $10 million to independent researchers developing new methods for evaluating frontier AI capabilities and risks.
The Forum occupies a distinct position in the AI governance landscape, differing from broader multi-stakeholder bodies such as the Partnership on AI in its narrower membership criteria, its exclusive focus on frontier-scale risks including those from chemical, biological, radiological, and nuclear (CBRN) threats, and its roots in the same cohort of companies whose model releases it aims to make safer. This proximity to developers has generated both practical advantages in technical depth and persistent criticism that the Forum represents industry self-regulation rather than independent oversight.
The founding of the Frontier Model Forum occurred against a backdrop of intensifying public debate about the risks posed by large-scale AI systems. The commercial release of ChatGPT in late 2022 had rapidly catalyzed policy attention across governments worldwide. In March 2023, an open letter organized by the Future of Life Institute called for a six-month pause on AI training runs more powerful than GPT-4, gathering signatures from prominent researchers and attracting significant press coverage. Separately, the Biden administration had begun a series of meetings with AI company executives to explore voluntary commitments on AI safety.
On July 21, 2023, five days before the Forum's founding announcement, seven AI companies, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, announced a set of eight voluntary commitments on AI safety to the White House. These commitments included conducting internal and external red-teaming of models, investing in cybersecurity safeguards for model weights, and developing mechanisms to identify AI-generated content through watermarking. The Frontier Model Forum's founding was directly connected to this moment: four of the seven White House commitment signatories, Anthropic, Google, Microsoft, and OpenAI, immediately formalized their coordination through the new industry body.
The Forum was announced on July 26, 2023, through simultaneous blog posts from all four founding companies. The announcement described three core objectives: advancing AI safety research, identifying best practices for the responsible development and deployment of frontier models, and sharing knowledge with policymakers, academics, and civil society. The founding companies also committed to establishing an Advisory Board representing diverse backgrounds, and to developing a charter, governance framework, and funding mechanism through a working group and executive board.
In October 2023, the Forum achieved two major milestones. On October 25, 2023, the organization announced the appointment of Chris Meserole as its first Executive Director and simultaneously announced the creation of the AI Safety Fund, an initial commitment of more than $10 million from the founding companies and philanthropic partners to support independent research. Meserole had previously served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, a nonpartisan Washington policy think tank, and brought experience in technology policy and AI governance to the role.
The Forum participated in the UK AI Safety Summit held at Bletchley Park on November 1 and 2, 2023, which brought together government officials from 28 countries and EU representatives alongside industry executives and researchers. The summit produced the Bletchley Declaration, a joint statement on frontier AI risks. In February 2024, the Forum joined the US AI Safety Institute Consortium as a founding member.
Amazon and Meta joined the Forum on May 20, 2024, in advance of the AI Seoul Summit co-hosted by South Korea and the UK on May 21 and 22, 2024. Their addition brought the total membership to six, extending the Forum's reach to include one of the world's largest cloud computing providers and the operator of some of the world's most widely used social media platforms. At the Seoul Summit, all six member firms signed the Frontier AI Safety Commitments, which required publication of individual company Frontier AI Safety Frameworks in advance of the AI Action Summit in Paris scheduled for February 2025.
In a significant governance development, the Forum announced a first-of-its-kind voluntary information-sharing agreement among all member firms, establishing legal and technical infrastructure for the confidential exchange of information about vulnerabilities, threats, and capabilities of concern unique to frontier AI models. The Meridian Institute, which had been administering the AI Safety Fund since its creation, announced the closure of its operations in June 2025, after which the Forum assumed direct management of the fund. A second cohort of AI Safety Fund grantees was announced in December 2025, distributing more than $5 million to eleven research organizations.
The four founding members, Anthropic, Google, Microsoft, and OpenAI, represent the organizations primarily responsible for the frontier AI systems that generated the most intense public scrutiny in 2022 and 2023. Each brought distinct capabilities and institutional contexts to the Forum's founding.
Anthropic was founded in 2021 by former OpenAI researchers including Dario Amodei and Daniela Amodei, with a stated mission of AI safety research as a core organizational commitment rather than a secondary function. At the time of the Forum's founding, Anthropic had released the Claude model family and had articulated a safety approach based on Constitutional AI, a technique for aligning models with principles rather than solely through human feedback on individual outputs. Dario Amodei's statement at the Forum's launch noted that "the Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety."
Google DeepMind represented the combined AI research capacity of Google's parent company Alphabet, which had recently merged its Google Brain and DeepMind units. Google had been a pioneer in large language model research through work such as the Transformer architecture paper and the development of the BERT and PaLM model families. Google's participation in the Forum reflected both its technical leadership in the field and its significant policy exposure as one of the most scrutinized technology companies globally.
Microsoft had become deeply invested in frontier AI through its multibillion-dollar partnership with OpenAI, which had given it exclusive cloud access to GPT-4 and enabled the integration of large language model capabilities into products including the Bing search engine, GitHub Copilot, and the Microsoft 365 suite. Brad Smith, Microsoft's President, stated at the Forum's founding that "companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control." Microsoft's participation brought significant enterprise and government customer relationships to the Forum's policy engagement work.
OpenAI was the developer of the GPT-4 and ChatGPT systems that had most directly precipitated the public debate driving the Forum's creation. Anna Makanju, OpenAI's Vice President of Global Affairs, stated that "it is vital that AI companies advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible." OpenAI's participation was notable given ongoing debate about whether the company's transition from a nonprofit to a capped-profit structure had adequately preserved its original safety mission.
Amazon and Meta joined the Forum simultaneously on May 20, 2024, after meeting the organization's membership criteria, which require applicants to demonstrate active development or deployment of frontier AI models, a track record of safety mitigations, and a commitment to participate in Forum activities and provide financial support for a minimum of three years.
Amazon brought to the Forum its AWS cloud infrastructure, the Bedrock managed AI service, and the Amazon Titan model family, as well as its position as the primary cloud provider for many AI startups including Anthropic, which received a multibillion-dollar investment from Amazon beginning in 2023. David Zapolsky, Amazon's General Counsel and Senior Vice President of Global Affairs, stated that the company would "work collectively with industry partners to advance the science, standards, and best practices that will enhance AI safety."
Meta's membership brought some of the most widely deployed AI systems in the world in terms of user exposure: its products, including Facebook, Instagram, and WhatsApp, collectively reach billions of users, and the company had deployed AI recommendation and content moderation systems at enormous scale. Meta had also taken a distinctive position in the frontier AI landscape through its open-weight release strategy, publishing the weights of the Llama model family for download and derivative use. Meta's President of Global Affairs, Nick Clegg, stated that "this collaboration will help us build AI to meet society's biggest needs, from healthcare to climate change."
The Forum's membership criteria specify three requirements. First, applicants must demonstrate proven ability to develop or deploy frontier AI models at scale. Second, they must have a track record of safety work, including public acknowledgment of risks, established processes for assessing safety throughout the model lifecycle that can lead to deployment delays, and support for third-party evaluations. Third, they must commit to financially supporting the Forum's work for at least three years. Organizations interested in membership are directed to contact membership@frontiermodelforum.org.
The Forum's mission centers on three interconnected functions: developing best practices and supporting standards for frontier AI safety and security, advancing the scientific understanding of frontier AI risks and mitigations, and facilitating information exchange among government, academia, civil society, and industry.
The organization defines frontier AI using a deliberately broad criterion: models that outperform, across capability benchmarks and assessments, other widely deployed models, meaning those that have been in deployment for at least twelve months. This definition is intended to capture the leading edge of capability development rather than a specific parameter count or architecture class.
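A minimal sketch can make this definitional logic concrete. The Python sketch below compares a candidate model's benchmark scores against baseline models already deployed for at least twelve months; the function name, the data layout, and the reading of "outperform" as beating each baseline on every shared benchmark are illustrative assumptions, not the Forum's operational test.

```python
def is_frontier(candidate_scores: dict[str, float],
                baselines: list[dict[str, float]]) -> bool:
    """Illustrative check of the Forum's definitional criterion.

    candidate_scores maps benchmark names to the candidate model's
    scores; baselines holds the same mapping for each widely deployed
    model (pre-filtered to those deployed for at least twelve months).
    """
    for baseline in baselines:
        shared = candidate_scores.keys() & baseline.keys()
        # One reading of "outperforms across capability benchmarks":
        # the candidate must beat each baseline on every shared benchmark.
        if not shared or any(candidate_scores[b] <= baseline[b] for b in shared):
            return False
    return True

# Example with hypothetical scores on two benchmarks.
print(is_frontier({"mmlu": 0.91, "gpqa": 0.72},
                  [{"mmlu": 0.86, "gpqa": 0.58}, {"mmlu": 0.88, "gpqa": 0.61}]))
# -> True
```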
The Forum's primary operational focus is on managing what it characterizes as significant risks to public safety and national security. This includes risks arising from the potential misuse of frontier AI systems to assist in the development of chemical, biological, radiological, or nuclear weapons, as well as risks from offensive cybersecurity applications and from AI systems that develop concerning levels of autonomous behavior. The Forum explicitly frames these as the highest-priority risk categories warranting collective industry action.
Operationally, the Forum works through five specialized workstreams:

- The AI-Bio workstream engages experts across virology, microbiology, bioengineering, and pandemic preparedness to develop shared threat models, safety evaluation taxonomies, and mitigation strategies for risks in the biological domain.
- The AI-Cyber workstream focuses on risks arising from frontier AI's coding and vulnerability discovery capabilities, and has produced publications including an issue brief on AI for cyber defense.
- The AI-Nuclear workstream addresses questions about whether frontier AI could assist malicious actors in developing nuclear or radiological weapons, and has produced a research update on frontier AI and nuclear security.
- The AI Security workstream addresses the security of AI model development and deployment infrastructure, including risks from adversarial inputs, data poisoning, and unauthorized access to model weights.
- The Frontier Frameworks workstream focuses on the common elements across company-level safety frameworks and on developing shared norms for how frontier AI developers should assess and manage risks before and during deployment.
The AI Safety Fund was announced on October 25, 2023, simultaneously with the appointment of Chris Meserole as Executive Director. The initial funding commitment exceeded $10 million, combining contributions from the four founding member companies, Anthropic, Google, Microsoft, and OpenAI, with contributions from philanthropic partners including the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, entrepreneur and investor Eric Schmidt, and Jaan Tallinn, a co-founder of Skype and a prominent funder of AI safety research. The Meridian Institute was selected to administer the fund, supported by an advisory committee of independent external experts, AI company specialists, and grantmaking professionals. The fund's purpose is to support independent researchers at academic institutions, research institutions, and startups developing new methods for evaluating frontier AI capabilities and risks.
The fund focused its first rounds of grantmaking on safety evaluations, red-teaming methodologies, and risk assessment techniques. The Forum's annual letter noted that funded work included the Virology Capability Test, a benchmark designed to assess whether frontier AI systems could provide actionable assistance to malicious actors seeking to conduct dangerous virology work. The benchmark, published as a preprint in April 2025, found that frontier models including OpenAI's o3 system outperformed 94 percent of expert virologists on questions within the experts' own specialties, findings that the authors treated as evidence of significant biosafety risk requiring mitigation.
In December 2025, the Forum announced a second cohort of eleven grantees receiving more than $5 million in total, selected from more than 100 competitive proposals. Grantees included Apollo Research (building monitoring systems for scheming behavior in AI agents), the California Institute of Technology (AI-driven detection of protein mimetic biothreats), SecureBio (evaluations for AI agents' ability to execute tasks enabling large-scale harm), the University of Illinois Urbana-Champaign (cybersecurity risk evaluation of AI agents with computer interaction capabilities), FAR.AI (quantifying the safety-adversary gap in large language models), Faculty AI (automated red-teaming for biosecurity risks), FutureHouse (benchmarks for AI-driven experimental design), Morgan State University (evaluating AI-assisted cybersecurity operations), Nemesys Insights (industrial control system benchmarks), the Institute for Decentralized AI (oversight for multi-agent networks), and the University of Toronto (analyzing sanctioning mechanisms in multi-agent LLM systems).
Following Meridian Institute's closure announcement in June 2025, the Forum transitioned to managing the AI Safety Fund directly, maintaining continuity of grantmaking activities.
Chris Meserole has served as Executive Director of the Frontier Model Forum since his appointment was announced on October 25, 2023. Before joining the Forum, Meserole served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, a Washington-based nonpartisan policy think tank, where he focused on technology policy and governance of emerging technologies.
As Executive Director, Meserole oversees the Forum's technical workstreams, its publication program, its government and international engagement, and the AI Safety Fund. In the Forum's annual letter, Meserole framed the organization's work around what he described as a singular mission of advancing frontier AI safety and security, and characterized the technical report series on frontier AI frameworks as "the most comprehensive and in-depth resource to date" for implementing frontier AI risk management. He emphasized the importance of collaborative work across frontier AI developers as AI development accelerates.
The Forum is governed by an operating board composed of representatives from its six member organizations. This structure gives member companies direct oversight of the Forum's direction while Meserole manages day-to-day operations. The organization is funded exclusively through member firm fees, a structure that aligns the Forum's financial interests with those of its members while raising questions about independence that critics have noted.
The Forum has developed an active publication program spanning issue briefs, technical reports, and research updates. Publications focus on practical guidance for frontier AI safety rather than theoretical AI alignment research.
Among the earliest publications was "What is Red Teaming?" (October 24, 2023), an issue brief providing a foundational explanation of adversarial testing techniques for AI systems, aimed at establishing shared terminology and practices across the industry and among policymakers.
The Forum subsequently developed a technical report series on frontier AI safety frameworks. "Measuring Training Compute" (May 2, 2024) provided technical guidance on how to quantify the computational resources used in model training, a key input for threshold-based safety frameworks. "Early Best Practices for Frontier AI Safety Evaluations" (July 31, 2024) documented emerging consensus on how to conduct capability assessments before model deployment. "Components of Safety Frameworks" (November 8, 2024) outlined the common elements that robust frontier AI safety frameworks should include, drawing on emerging practices at member companies.
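To illustrate why training compute lends itself to threshold-based frameworks, the sketch below applies the widely used 6ND approximation for dense transformers, roughly 2 FLOPs per parameter per training token for the forward pass and 4 for the backward pass. Both the approximation and the example threshold (the 10^26-operation reporting trigger in the October 2023 US Executive Order) are standard reference points, not necessarily the specific methodology of the Forum's report.

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate total training compute for a dense transformer using the
    common 6 * N * D heuristic: ~2 FLOPs per parameter per token for the
    forward pass and ~4 for the backward pass. A standard approximation,
    not necessarily the accounting method in "Measuring Training Compute".
    """
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 70-billion-parameter model on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs; exceeds 1e26 reporting threshold: {flops > 1e26}")
# -> 6.30e+24 FLOPs; exceeds 1e26 reporting threshold: False
```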
The Forum's Frontier Frameworks technical report series, launched in 2025, aimed to provide detailed guidance on implementing safety frameworks across different organizational contexts. Titles in the series include "Frontier Capability Assessments" (April 22, 2025), "Risk Taxonomy and Thresholds for Frontier AI Frameworks" (June 18, 2025), "Frontier Mitigations" (June 30, 2025), and "Managing Advanced Cyber Risks in Frontier AI Frameworks" (February 13, 2026). The series defines frontier capability assessments as structured procedures for determining whether a model has specific prespecified capabilities that would trigger risk management responses, and distinguishes them from general-purpose benchmarks by their explicit connection to deployment and safety decisions.
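The distinction between a capability assessment and a general-purpose benchmark can be made concrete with a short sketch: the assessment is tied to prespecified thresholds, each of which names the risk management response it triggers. Every field name, evaluation name, and numeric value below is a hypothetical illustration, not content from the Forum's reports.

```python
from dataclasses import dataclass

@dataclass
class CapabilityThreshold:
    """A prespecified capability and the response it triggers if reached.
    All fields and example values are hypothetical illustrations."""
    capability: str       # e.g. "autonomous vulnerability exploitation"
    eval_name: str        # the evaluation used to measure it
    trigger_score: float  # score at or above which the response fires
    response: str         # the prespecified risk management response

def assess(thresholds: list[CapabilityThreshold],
           results: dict[str, float]) -> list[str]:
    """Return the risk management responses triggered by evaluation
    results, mirroring the report's framing: assessments are tied to
    prespecified thresholds and deployment decisions."""
    return [t.response for t in thresholds
            if results.get(t.eval_name, 0.0) >= t.trigger_score]

thresholds = [
    CapabilityThreshold("uplift for biological weapons design",
                        "bio_uplift_eval", 0.5,
                        "pause deployment pending expert review"),
    CapabilityThreshold("autonomous cyber operations",
                        "cyber_agent_eval", 0.7,
                        "restrict access and harden model weights"),
]
print(assess(thresholds, {"bio_uplift_eval": 0.62, "cyber_agent_eval": 0.3}))
# -> ['pause deployment pending expert review']
```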
Domain-specific publications have addressed the AI-Bio, AI-Cyber, and AI-Nuclear risk areas. In the biosafety domain, the Forum published "Preliminary Taxonomy of AI-Bio Safety Evaluations" (December 20, 2024), "Preliminary Reporting Tiers for AI-Bio Safety Evaluations" (March 18, 2025), "Frontier AI Biosafety Thresholds" (May 12, 2025), and "Preliminary Taxonomy of AI-Bio Misuse Mitigations" (July 30, 2025). In cybersecurity, "AI for Cyber Defense" (November 22, 2024) examined how frontier AI could be applied to defensive cybersecurity operations. The series also includes "Chain of Thought Monitorability" (January 27, 2026) and "Adversarial Distillation" (February 23, 2026), addressing the interpretability of AI reasoning traces and techniques for extracting capabilities from frontier models.
In parallel with these publications, the Forum announced a first-of-its-kind voluntary information-sharing agreement among all member firms. The agreement established legal and technical infrastructure enabling the secure, confidential exchange of information about three categories: vulnerabilities, defined as weaknesses that could compromise frontier AI models including jailbreaks and adversarial inputs; threats, defined as attempts to gain unauthorized access to or manipulation of frontier AI models; and capabilities of concern, defined as frontier AI capabilities that could cause large-scale harm including CBRN-related capabilities, offensive cybersecurity attacks, and concerning levels of model autonomy. The Forum reported that information shared under this agreement has included security vulnerabilities, jailbreak repositories, and enhanced safety protocols.
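Because the Forum has not published a technical format for these exchanges, the following sketch should be read only as one plausible way to model the agreement's three categories in code; every type and field here is an assumption.

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    """The three information categories named in the agreement."""
    VULNERABILITY = "vulnerability"        # weaknesses, e.g. jailbreaks
    THREAT = "threat"                      # unauthorized access attempts
    CAPABILITY_OF_CONCERN = "capability"   # e.g. CBRN-related capability

@dataclass
class Disclosure:
    """Hypothetical record for a confidential cross-member disclosure.
    The Forum has not published its exchange format; these fields are
    illustrative assumptions only."""
    category: Category
    summary: str
    affected_models: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

report = Disclosure(
    category=Category.VULNERABILITY,
    summary="Prompt-injection jailbreak bypassing refusal training",
    affected_models=["example-model-v2"],
    mitigations=["patched system prompt filter"],
)
print(report.category.value)  # -> "vulnerability"
```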
The Forum has engaged with government bodies across multiple jurisdictions as part of its mandate to facilitate information exchange between industry and policymakers.
At the UK AI Safety Summit at Bletchley Park in November 2023, the Forum and its member firms participated alongside government delegations from 28 countries that together signed the Bletchley Declaration, which acknowledged the potential for serious, even catastrophic, harms from frontier AI and called for international collaboration on AI safety. The summit marked the first time a major intergovernmental agreement had specifically addressed risks from frontier-scale AI systems.
In February 2024, the Forum joined the US AI Safety Institute Consortium as a founding member. The Consortium was established within the National Institute of Standards and Technology (NIST) under the Biden administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023. The Consortium brought together AI developers, evaluators, researchers, and standards bodies to support the AI Safety Institute's work on evaluating frontier AI systems.
In March 2024, the Forum appeared before the National AI Advisory Committee to provide testimony on AI safety practices. In June 2024, the Forum participated in the OECD's AI Governance meeting. At the AI Seoul Summit in May 2024, all six Forum members signed the Frontier AI Safety Commitments, which established a requirement to publish individual company Frontier AI Safety Frameworks before the AI Action Summit in Paris in February 2025. These commitments represented a significant step toward standardized public disclosure of how each frontier AI developer assesses and manages catastrophic risk.
The Forum has also engaged with international standards bodies and contributed to multilateral discussions through the network of national AI safety institutes announced at the Seoul Summit. The Forum's stated goal in these engagements is to ensure that policy frameworks and regulatory standards reflect technical realities about what frontier AI systems can and cannot do, and to provide governments and regulators with accurate information about the state of safety practices across leading AI developers.
The Frontier Model Forum and the Partnership on AI are the two most prominent industry bodies focused on AI safety and governance in the United States, and they are frequently discussed together despite significant differences in structure, membership, and mission.
The Partnership on AI was founded in September 2016 by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, predating the current generation of large language models and reflecting the AI landscape of the mid-2010s, when concerns centered on algorithmic fairness, bias, and the social impact of recommendation systems. Its membership has since grown to more than 120 organizations from academia, civil society, and industry in 16 countries, making it a broadly multi-stakeholder body. Its mission encompasses AI's impact on the economy and work, justice, transparency and accountability, inclusive research and design, and security for AI, among other topics.
The Frontier Model Forum, by contrast, was founded specifically to address the risks posed by the most powerful AI systems, and its membership is restricted to companies actively developing or deploying frontier models. This restriction is both a feature and a limitation: it ensures that Forum members have direct technical knowledge of the systems being discussed, but it also means that civil society organizations, academic researchers, and smaller AI companies have no formal role in the Forum's deliberations. The Forum explicitly focuses on severe risks including CBRN threats and advanced cyber capabilities, areas that the Partnership on AI does not prioritize.
The following table summarizes the key differences between the two organizations and also includes MLCommons, a third AI industry body that the Frontier Model Forum has identified as a collaborator.
| Attribute | Frontier Model Forum | Partnership on AI | MLCommons |
|---|---|---|---|
| Founded | July 2023 | September 2016 | December 2020 |
| Membership | 6 frontier AI developers | 120+ organizations (industry, academia, civil society) | ~50 organizations (industry, academia) |
| Membership criteria | Must develop or deploy frontier AI models | Open to broad range of AI-related organizations | Open to organizations with AI engineering interest |
| Primary focus | Severe risks from frontier AI (CBRN, cyber, safety frameworks) | Broad AI ethics, fairness, social impact, governance | AI benchmarks, datasets, and evaluation standards |
| Legal structure | 501(c)(6) nonprofit | 501(c)(3) nonprofit | 501(c)(6) nonprofit |
| Civil society participation | No formal membership role | Core component of membership | Limited |
| Grant-making | Yes (AI Safety Fund, $10M+) | No | No |
| Lobbying | No (by charter) | No | No |
The Forum's founding announcement explicitly acknowledged Partnership on AI and MLCommons as existing efforts making important contributions and stated that the Forum would explore ways to collaborate with and support them. This framing characterized the Forum as complementary rather than competitive, though observers noted that the Forum's funding, political salience, and concentration of the most powerful AI developers gave it substantially more visibility than its predecessors.
MLCommons is an engineering-focused consortium that develops AI benchmarks and datasets, most notably the MLPerf benchmark suite used to compare hardware and software performance on AI workloads and, separately, a set of safety benchmarks for evaluating AI model outputs. The Forum's engagement with MLCommons has centered on evaluation methodology, as both organizations are developing standards for assessing frontier AI capabilities and risks.
The Forum's founding generated substantial media coverage and a divided reception. Commentators who welcomed the initiative typically cited the unprecedented nature of direct coordination among the leading frontier AI developers on safety practices, the credibility lent to safety commitments by involving the same organizations deploying the most powerful systems, and the Forum's potential to produce technically grounded policy inputs that purely civil society or government bodies might lack.
The appointment of Chris Meserole and the $10 million AI Safety Fund were seen as evidence that the Forum was moving from announcement to operation, providing institutional infrastructure beyond a statement of intent. The Forum's participation in the Bletchley and Seoul summits positioned it as a recognized interlocutor in emerging international governance discussions, and its information-sharing agreement was noted as a significant operational achievement given the competitive sensitivities involved in sharing vulnerability information among direct competitors.
Coverage of the Forum's technical publications has generally been positive within AI safety and policy communities, with the Frontier Frameworks technical report series cited as providing practical guidance that policymakers and other AI developers had lacked. The Virology Capability Test results, distributed through the AI Safety Fund, attracted particular attention as concrete evidence of the biosafety risks that motivated a significant portion of the Forum's work.
The Forum has faced sustained criticism from several directions. The most consistent objection is structural: as an organization funded by and composed of the companies whose products it aims to make safer, the Forum cannot provide independent oversight in the sense that a government regulator or arms-length standards body might. Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey characterized the arrangement as "putting the foxes in charge of the chicken coop" and argued that profit-driven AI developers should not be responsible for setting the bar on AI safety research. Similar concerns have been raised by civil society organizations that note the absence of any formal role for non-industry voices in the Forum's governance.
A related critique concerns the potential for regulatory capture. If the Forum's technical standards and best practices become the de facto basis for government regulation, the companies that set those standards may have shaped the regulatory environment in ways that benefit incumbents and raise barriers to competition, regardless of whether this was the intention. Critics have noted that the Forum's membership criteria, which require active development or deployment of frontier AI models with established safety programs, would effectively exclude emerging competitors that have not yet developed comparable institutional safety infrastructure.
Questions have also been raised about whether voluntary commitments translate into practice. Analysis of the White House voluntary commitments that preceded the Forum's founding found mixed evidence of compliance one year later, with some companies having reduced rather than increased pre-deployment safety testing time as competitive pressures intensified. This pattern suggests that the cooperative safety norms the Forum aims to establish may be difficult to maintain when companies face competitive pressure to release capable models quickly.
Observers have also noted that the Forum's emphasis on catastrophic and CBRN-related risks, while addressing genuinely serious concerns, places less emphasis on the near-term harms from AI systems that have attracted the most regulatory attention in many jurisdictions, including labor market displacement, algorithmic bias in high-stakes decisions, misinformation and synthetic media, and privacy violations at scale. Critics argue that this framing reflects the priorities of frontier AI developers rather than those of affected communities.
Finally, the Forum's exclusive membership structure has been noted as a potential limitation on its long-term legitimacy. As the frontier of AI capability expands, the definition of frontier models will evolve, and organizations that are not currently Forum members may develop systems with comparable capabilities. The Forum's approach to these eventual new entrants, and whether its safety standards prove portable to institutional contexts beyond its founding members, remains an open question as of mid-2026.