| | |
|---|---|
| Products | Claude (AI assistant), Claude API, Claude Code, Claude Cowork, Claude Marketplace |
| Revenue | $19+ billion ARR (March 2026)[1] |
| Employees | ~3,000 (2026)[2] |
| Website | anthropic.com |
Anthropic PBC is an American artificial intelligence (AI) safety and research company founded in 2021. The company develops and deploys large language models (LLMs), most notably the Claude family of AI assistants, with a stated focus on AI safety and alignment. Its goal is to ensure that advanced AI systems are reliable, interpretable, and steerable.[3] As of February 2026, Anthropic is valued at $380 billion following a $30 billion Series G funding round, making it one of the most valuable private companies in the world.[4]
Anthropic was founded in 2021 by eight former employees of OpenAI, including siblings Dario Amodei and Daniela Amodei. Dario, who served as OpenAI's Vice President of Research, became Anthropic's CEO, while Daniela, formerly VP of Safety & Policy at OpenAI, became President.[5] The founders left OpenAI due to directional differences, particularly concerns about AI safety and the pace of AI development.[3]
The company was initially incorporated as a Delaware public benefit corporation (PBC), a structure that enables directors to balance stockholders' financial interests with its public benefit purpose of "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."[6]
In May 2021, Anthropic raised $124 million in Series A funding led by Jaan Tallinn, co-founder of Skype.[7] In April 2022, the company raised $580 million in Series B funding, including $500 million from FTX prior to its collapse.[8]
The company spent 2022 developing the first version of Claude but chose not to release it publicly, citing the need for internal safety testing and a desire to avoid initiating a potentially hazardous race in AI capabilities development.[9]
In March 2023, Anthropic publicly launched Claude, its first AI assistant, marking the company's entry into the competitive AI assistant market.[10] In July 2023, the company was named to the White House's voluntary AI safety commitments alongside other leading labs like OpenAI, Google, and Meta.[11]
Throughout 2023, the company secured major investments and partnerships:
May 2023: Raised $450 million in Series C funding led by Spark Capital[12]
September 2023: Amazon announced an initial investment of up to $4 billion, making AWS Anthropic's primary cloud provider[13]
October 2023: Google invested $2 billion in the company[14]
In March 2024, Anthropic released the Claude 3 family of models (Haiku, Sonnet, and Opus), with Opus outperforming OpenAI's GPT-4 and Google's Gemini Ultra on various benchmarks.[15] June 2024 saw the release of Claude 3.5 Sonnet, which demonstrated significant improvements over Claude 3 Opus despite being a smaller model, alongside the introduction of the Artifacts feature for real-time code and content generation.[16] In October 2024, Anthropic released an upgraded Claude 3.5 Sonnet alongside Claude 3.5 Haiku, the fastest model in its lineup, which scored 40.6% on SWE-bench Verified and featured a 200,000-token context window.[17]
In November 2024, Anthropic announced the Model Context Protocol (MCP), an open standard for connecting AI assistants to external data systems and tools. MCP was designed to solve the "N-by-M" integration problem, where developers previously had to build custom connectors for each data source. The protocol was quickly adopted by major AI providers including OpenAI and Google DeepMind, and by early 2026 had become the de facto standard for connecting agents to tools and data, with thousands of community-built MCP servers and SDKs in Python, TypeScript, C#, and Java.[18]
In November 2024, Amazon invested an additional $4 billion, bringing its total investment to $8 billion and establishing AWS as Anthropic's primary training partner.[19]
The year 2025 was transformative for Anthropic. In February, the company released Claude 3.7 Sonnet, billed as the first "hybrid reasoning model," capable of producing both near-instant responses and extended step-by-step thinking visible to the user. Claude 3.7 Sonnet achieved 70.3% on SWE-bench Verified and was released alongside an early preview of Claude Code, an agentic coding tool.[20]
In March 2025, Anthropic raised $3.5 billion in Series E funding at a $61.5 billion valuation, led by Lightspeed Venture Partners.[21]
On May 22, 2025, Anthropic introduced the Claude 4 generation: Claude Opus 4 and Claude Sonnet 4. Opus 4 led on SWE-bench at 72.5% and Terminal-bench at 43.2%, and could work continuously for several hours on complex, long-running agent workflows. Sonnet 4 scored slightly higher on SWE-bench (72.7%) at a fraction of the cost. Both models introduced extended thinking with tool use, allowing Claude to alternate between reasoning and tool calls within a single turn. Anthropic also activated ASL-3 (Anthropic Safety Level 3) safeguards for these models.[22]
Claude Code was fully released in May 2025, and its growth was explosive. Within six months of launch it reached a $1 billion annualized run rate, a velocity that even ChatGPT did not match.[23]
In July 2025, Anthropic accepted a $200 million contract from the United States Department of Defense for military AI applications, though the contract included usage restrictions negotiated by Anthropic.[24] In August 2025, the company released Claude Opus 4.1.[25]
In September 2025, Anthropic completed Series F funding of $13 billion at a $183 billion valuation, co-led by ICONIQ Capital, Fidelity Management, and Lightspeed Venture Partners. The company also released Claude Sonnet 4.5, marketed as its strongest model for coding, agents, and computer use.[26] That same month, Anthropic agreed to pay $1.5 billion to settle a class action copyright lawsuit (Bartz v. Anthropic) brought by authors who alleged the company had used millions of pirated books from shadow libraries to train its models. The settlement, preliminarily approved by Judge William Alsup, covered approximately 500,000 works at roughly $3,000 per book, making it the largest copyright settlement in U.S. history.[27]
In October 2025, Anthropic hired former Stripe CTO Rahul Patil as its new Chief Technology Officer. As part of this change, co-founder Sam McCandlish moved to a new role as Chief Architect, and the company restructured its core technical group to bring its product-engineering team into closer contact with the infrastructure and inference teams. Mike Krieger stepped away from his Chief Product Officer role to co-lead Anthropic's Labs team, with Ami Vora taking over as Head of Product.[63]
The company's annualized run-rate revenue grew from $1 billion at the start of 2025 to over $9 billion by the end of the year.[28]
On November 24, 2025, Anthropic launched Claude Opus 4.5, the first AI model to break 80% on SWE-bench Verified (scoring 80.9%). The model came with a 67% price cut relative to the previous Opus tier, priced at $5 per million input tokens and $25 per million output tokens.[29]
In December 2025, Anthropic donated MCP to the newly established Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation. The AAIF was co-founded by Anthropic, Block, and OpenAI, with support from Google, Microsoft, AWS, Cloudflare, and Bloomberg. MCP joined two other founding projects: goose (by Block) and AGENTS.md (by OpenAI). In one year, MCP had grown to over 97 million monthly SDK downloads and 10,000 active servers, with first-class client support across major AI platforms including ChatGPT, Claude, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code.[64]
Anthropic entered 2026 with continued momentum.
On January 12, 2026, Anthropic launched Claude Cowork as a research preview, a general-purpose AI agent built into the Claude Desktop app on macOS. Described as "Claude Code for the rest of your work," Cowork allows users to designate a folder on their computer where Claude can read, edit, and create files. Use cases demonstrated at launch included reorganizing downloads, converting receipt screenshots into expense spreadsheets, and producing first drafts from scattered notes. The tool implements sub-agent coordination for parallelizable tasks, spawning multiple Claude instances that execute concurrently. At launch, Cowork was available only to Max subscribers ($100 to $200 per month).[70]
On February 5, 2026, the company released Claude Opus 4.6, featuring a 1 million token context window in beta (the first Opus-class model to support this), support for up to 128k output tokens (double the previous limit), and a new "agent teams" capability. Agent teams allow multiple Claude Code agents to work in parallel from a single orchestrator, each with its own full context window, communicating through what Anthropic calls a "Mailbox Protocol." In a demonstration, 16 parallel agents wrote a 100,000-line C compiler (in Rust) in two weeks that could compile the Linux 6.9 kernel, QEMU, FFmpeg, SQLite, PostgreSQL, and Redis with a 99% pass rate on the GCC test suite.[30] Opus 4.6 also introduced adaptive thinking as its recommended reasoning mode, where Claude dynamically decides when and how much to think.[31]
On February 12, 2026, Anthropic closed a $30 billion Series G funding round at a $380 billion post-money valuation. The round was led by GIC (Singapore's sovereign wealth fund) and Coatue, with D. E. Shaw Ventures, Founders Fund, and Abu Dhabi's MGX co-leading. Other participants included Sequoia Capital, BlackRock, Blackstone, Temasek, Goldman Sachs, JPMorgan Chase, and Fidelity. This was the second-largest private financing round in tech history, trailing only OpenAI's $40 billion raise.[4]
On February 17, 2026, Anthropic released Claude Sonnet 4.6, which achieved 79.6% on SWE-bench Verified and 72.5% on OSWorld. In developer testing, Sonnet 4.6 was preferred over the previous flagship Opus 4.5 59% of the time for coding tasks, at one-fifth the price ($3/$15 per million tokens). It became the default model for Free and Pro plan users on claude.ai.[32]
Also on February 17, 2026, Anthropic announced a strategic collaboration with Infosys to develop enterprise AI solutions across telecommunications, financial services, manufacturing, and software development. The partnership integrates Anthropic's Claude models and Claude Code with Infosys Topaz AI offerings, with a focus on building agentic AI systems using the Claude Agent SDK. The collaboration began with a dedicated Anthropic Center of Excellence for the telecommunications sector. Anthropic also formally opened its Bengaluru office, its second in Asia after Tokyo, led by country Managing Director Irina Ghose. India had become Claude's second-largest market worldwide, accounting for roughly 6% of global usage.[71]
On February 20, 2026, Anthropic launched Claude Code Security as a limited research preview for Enterprise and Team customers. Claude Code Security scans codebases for security vulnerabilities by reasoning about code the way a human security researcher would: understanding how components interact, tracing data flows across files, and catching complex multi-component vulnerability patterns that static analysis tools miss. Each identified vulnerability goes through a multi-stage verification process to filter out false positives and is assigned a severity rating. Nothing is applied without human approval. Using Opus 4.6, the team discovered over 500 high-severity vulnerabilities in widely used open-source libraries such as Ghostscript, OpenSC, and CGIF, many of which had gone undetected for decades despite years of expert review. Open-source maintainers received expedited access to the tool.[72]
On February 24, 2026, Anthropic publicly accused three Chinese AI companies, DeepSeek, Moonshot AI, and MiniMax, of conducting "industrial-scale" distillation attacks on Claude. According to Anthropic's investigation, the three firms created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to extract its capabilities. MiniMax accounted for the bulk of the traffic with over 13 million exchanges targeting agentic coding and tool use. Moonshot AI generated more than 3.4 million exchanges aimed at agentic reasoning. DeepSeek conducted over 150,000 exchanges focused on foundational logic and alignment, with prompts designed to reveal chain-of-thought training data. Anthropic warned that models built through illicit distillation are unlikely to retain safety protections.[65]
On February 25, 2026, Anthropic acquired Vercept, a Seattle-based AI startup specializing in computer-use agents. Vercept had developed Vy, a computer-use agent capable of operating a remote Apple MacBook in the cloud. Co-founders Kiana Ehsani, Luca Weihs, and Ross Girshick joined Anthropic. The acquisition was intended to advance Claude's computer use capabilities, enabling it to complete multi-step tasks inside live applications.[66]
The Enterprise Analytics API was also introduced in February 2026, giving Enterprise plan customers programmatic access to usage and engagement data. The API provides five endpoints covering per-user activity metrics, organization-wide activity summaries, chat project usage, skill usage, and Claude Code metrics (including commits, pull requests, and lines of code). Data is aggregated per organization, per day, with a default rate limit of 60 requests per minute.[73]
The Pentagon relationship deteriorated sharply in early 2026. In January, Defense Secretary Pete Hegseth released a new AI strategy calling for contracts with AI companies to eliminate company-specific guardrails. Anthropic refused to accept language permitting unrestricted military use of its models, with Dario Amodei insisting on retaining prohibitions against mass domestic surveillance and fully autonomous weapons systems.[33]
In March 2026, Anthropic made its memory feature available to all users, including those on the free plan. The feature, previously limited to paid subscribers since October 2025, allows Claude to remember user preferences and context across conversations. Anthropic also introduced a memory import tool that lets users transfer their preferences from competing chatbots like ChatGPT and Gemini.[34]
On March 3, 2026, the Department of Defense formally designated Anthropic a supply chain risk to national security, the first such designation ever applied to an American company. On March 9, Anthropic filed lawsuits in two federal courts challenging the designation, arguing that it punished the company for exercising protected speech on AI policy. Nearly 150 retired federal and state judges subsequently filed an amicus brief supporting Anthropic, alongside statements of support from Microsoft, Google engineers, and employees at competing AI companies. The financial impact is significant: Anthropic estimated the designation could cost hundreds of millions or even billions of dollars in lost government revenue for 2026. OpenAI subsequently signed a contract with the Defense Department to fill the gap.[33]
On March 6, 2026, Anthropic launched the Claude Marketplace, a curated enterprise platform where businesses can purchase Claude-powered tools from third-party partners. Enterprises with existing Anthropic spend commitments can redirect part of their budget toward partner applications, consolidating procurement under a single invoice. Launch partners included Snowflake, GitLab, Harvey, Replit, Rogo, and Lovable. Notably, Anthropic does not take a revenue cut from Marketplace purchases, distinguishing it from cloud marketplace models operated by Amazon Web Services and Microsoft Azure.[74]
On March 9, 2026, Anthropic released Code Review for Claude Code, a multi-agent code review system that automatically analyzes pull requests for bugs, logic errors, and security vulnerabilities. When a pull request is opened, Code Review dispatches a team of agents that work in parallel. Findings go through a verification step to filter out false positives, and confirmed issues are ranked by severity. Results appear directly on the pull request as a single overview comment with inline comments for specific bugs. Agents do not approve pull requests; developers always retain final authority. In Anthropic's internal testing, the percentage of pull requests receiving substantive review comments rose from 16% to 54%, and on large pull requests (over 1,000 lines changed), 84% received findings averaging 7.5 issues. Code Review is available as a research preview for Team and Enterprise customers, with token-based pricing averaging $15 to $25 per review. Reviews typically take around 20 minutes per pull request.[75]
On March 10, 2026, Anthropic announced it would open an office in Sydney, its fourth in the Asia-Pacific region after Tokyo, Bengaluru, and Seoul. Australia ranked fourth globally in Claude.ai usage, with New Zealand ranking eighth. Existing Australian customers include Canva, Quantium, and Commonwealth Bank of Australia.[67]
On March 11, 2026, Anthropic launched the Anthropic Institute, a new research arm led by co-founder Jack Clark (in a new role as Head of Public Benefit). The Institute brings together three existing teams: the Frontier Red Team, which stress-tests AI systems; Societal Impacts, which studies how AI is used in the real world; and Economic Research, which tracks AI's effects on jobs and the broader economy. The Institute also incubates new research efforts, including forecasting AI progress and studying how advanced AI will interact with the legal system. Notable hires for the Institute include Matt Botvinick (formerly of Google DeepMind and Princeton), Anton Korinek (on leave from the University of Virginia), and Zoe Hitzig (previously at OpenAI). Anthropic simultaneously expanded its Public Policy organization under Sarah Heck as Head of Public Policy.[68]
On March 12, 2026, Anthropic launched the Claude Partner Network with a $100 million investment. Anchor partners include Accenture, Deloitte, Cognizant, and Infosys. The program offers training, dedicated technical support, joint go-to-market investment, and the first Claude technical certification (Claude Certified Architect, Foundations). Anthropic is scaling its partner-facing headcount fivefold, adding Applied AI engineers, technical architects, and localized go-to-market support across international markets.[69]
On March 17, 2026, Anthropic announced Claude Dispatch, a feature inside Claude Cowork that creates a single persistent conversation between the Claude mobile app on a phone and the Claude Desktop app on a computer. Users can assign tasks from their phone (such as summarizing recent work, pulling together a brief, comparing materials across folders, or producing a report) and return to completed work on their desktop. Execution occurs in a local sandboxed environment where files never leave the user's computer. Dispatch supports over 38 connectors including Notion, Gmail, Slack, Google Calendar, Google Drive, Dropbox, GitHub, Figma, Trello, and Asana. Max plan subscribers received immediate access, with Pro plan users gaining access within a few days.[76]
On March 20, 2026, Anthropic shipped Claude Code Channels as a research preview, enabling developers to connect a running Claude Code session to Telegram or Discord. Messages sent through the chat app are picked up by the local Claude Code session, which executes the work and replies through the same platform. The feature uses the Model Context Protocol (MCP): a Channel is an MCP server that declares the claude/channel capability and actively pushes events into a running session. Channels can be one-way (forwarding alerts or CI notifications) or two-way. The security model includes an allowlist-based plugin system (only Anthropic-approved plugins during the preview), pairing-code authentication that locks the bot to a specific user ID, and prompt injection threat modeling. The feature requires Claude Code version 2.1.80 or later and supports Telegram and Discord as of launch.[77]
On March 23, 2026, Anthropic launched Computer Use for Mac as a research preview, giving Claude the ability to control a Mac computer directly. Claude can move the mouse, use the keyboard, open applications, navigate browsers, fill in spreadsheets, and complete multi-step tasks autonomously. When a supported connector is available (such as Google Workspace or Slack), Claude prioritizes that integration, but it falls back to screen-based control when no connector exists. Claude requests permission before accessing new applications, and users can halt operations at any point. The feature is available to Pro and Max subscribers on macOS and integrates with Dispatch, allowing users to assign computer-use tasks from their phone. Windows and Linux support has not yet been announced.[78]
By March 2026, Anthropic's annualized run-rate revenue had surged to $19 billion, more than doubling from $9 billion at the end of 2025. Claude Code alone generated over $2.5 billion in annualized revenue. CEO Dario Amodei confirmed at a Morgan Stanley TMT conference that the company added $6 billion in run-rate revenue during February 2026 alone. Anthropic was capturing over 73% of all spending among companies buying AI tools for the first time.[1]
Anthropic has also been preparing for a potential IPO, reportedly hiring the law firm Wilson Sonsini and holding preliminary talks with investment banks, though the company has not confirmed a specific timeline.[35]
Claude is Anthropic's family of large language models designed to be helpful, harmless, and honest. The models are generative pre-trained transformers that have been fine-tuned using Constitutional AI and reinforcement learning from human feedback (RLHF).[36]
| Generation | Models | Release Date | Key Features |
|---|---|---|---|
| Claude 1 | Claude 1.0, 1.3, Claude Instant | March 2023 | Initial release; Claude Instant as lightweight version |
| Claude 2 | Claude 2.0, 2.1 | July 2023 | 100,000 token context window, public web interface at Claude.ai |
| Claude 3 | Haiku, Sonnet, Opus | March 2024 | 200,000 token context, vision capabilities, multimodal support |
| Claude 3.5 | Sonnet, Haiku | June-October 2024 | Computer use capability, improved coding, Artifacts feature |
| Claude 3.7 | Sonnet | February 2025 | Hybrid reasoning model, extended thinking, Claude Code early preview |
| Claude 4 | Sonnet 4, Opus 4 | May 2025 | Extended thinking with tool use, ASL-3 safeguards, MCP connector |
| Claude 4.1 | Opus 4.1 | August 2025 | Performance improvements |
| Claude 4.5 | Sonnet 4.5, Opus 4.5 | September-November 2025 | Opus 4.5 first to break 80% SWE-bench; 67% price cut |
| Claude 4.6 | Opus 4.6, Sonnet 4.6 | February 2026 | 1M token context (beta), agent teams, adaptive thinking, 128k output |
Constitutional AI (CAI) is Anthropic's proprietary methodology for training AI systems to be aligned with human values. The approach involves providing AI systems with a set of principles (a "constitution") that guide their behavior.[37]
The CAI process involves two main phases:
Supervised Learning Phase: The model generates self-critiques and revisions based on constitutional principles
Reinforcement Learning Phase: The model is trained using AI feedback (RLAIF, or Reinforcement Learning from AI Feedback) rather than solely human feedback[38]
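The supervised phase described above can be sketched in code. The following is an illustrative toy version of the critique-revision loop, not Anthropic's implementation: `generate`, `critique`, and `revise` are placeholder functions standing in for language-model calls, and the two principles shown are paraphrased examples.

```python
# Toy sketch of the Constitutional AI supervised phase: the model
# critiques and revises its own output against each principle in turn.
# All three functions below are placeholders for LLM calls.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and transparent.",
]

def generate(prompt):
    return f"draft answer to: {prompt}"

def critique(response, principle):
    # A real system would prompt the model to critique `response`
    # in light of `principle`; here we just record the pairing.
    return f"critique of '{response}' under '{principle}'"

def revise(response, critique_text):
    # A real system would prompt the model to rewrite `response`
    # so that it addresses the critique.
    return f"revised({response})"

def constitutional_revision(prompt, constitution=CONSTITUTION):
    """Run one critique-revision pass per constitutional principle."""
    response = generate(prompt)
    for principle in constitution:
        c = critique(response, principle)
        response = revise(response, c)
    return response
```

The revised responses produced this way become the supervised fine-tuning data; the reinforcement learning phase then trains a preference model on AI-generated comparisons of such outputs.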
The constitution draws from various sources including:
The UN Universal Declaration of Human Rights
Trust and safety best practices
Principles from other AI research labs
Platform guidelines from technology companies[39]
Anthropic extended the constitutional approach to safety classifiers in early 2025. The first generation of Constitutional Classifiers reduced jailbreak success rates from 86% to 4.4% during a red-teaming exercise where 183 participants spent over 3,000 hours attempting to break the system. No universal jailbreak was discovered. A second generation, Constitutional Classifiers++, reduced the compute overhead from 23.7% to roughly 1% by using a two-stage architecture: a lightweight probe screens all traffic using Claude's internal activations, escalating suspicious exchanges to a more powerful classifier.[40]
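The two-stage architecture is a classifier cascade. The sketch below shows the general pattern under stated assumptions: the probe and heavy classifier here are trivial keyword heuristics standing in for an activation probe and a full constitutional classifier, and the threshold value is invented for illustration.

```python
# Illustrative two-stage safety cascade: a cheap probe screens all
# traffic; only exchanges it flags are escalated to the expensive
# classifier. Scoring functions and threshold are placeholders.

PROBE_THRESHOLD = 0.3  # assumed escalation cutoff

def cheap_probe(text):
    # Stand-in for a lightweight probe over model activations.
    return 0.9 if "jailbreak" in text.lower() else 0.1

def heavy_classifier(text):
    # Stand-in for the full constitutional classifier.
    return "block" if "jailbreak" in text.lower() else "allow"

def screen(text):
    """Escalate to the heavy classifier only when the probe fires."""
    if cheap_probe(text) < PROBE_THRESHOLD:
        return "allow"   # fast path: most traffic stops here
    return heavy_classifier(text)
```

Because the vast majority of traffic takes the fast path, the expensive classifier runs rarely, which is how a cascade like this can cut compute overhead by an order of magnitude.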
Anthropic conducts extensive research into interpretability, the field of understanding the internal workings of complex AI models. In 2024, the company used a compute-intensive technique called "dictionary learning" to identify millions of features (patterns corresponding to concepts) within the Claude 3 Sonnet model.[41] This research aims to better understand, monitor, and control model behavior to enhance safety.
Announced in November 2024, the Model Context Protocol is an open-source framework that standardizes how AI systems integrate with external tools, data sources, and systems. MCP defines a client-server architecture with three components: hosts (the AI application), clients (which maintain connections to servers), and servers (which provide context and tool access). The protocol has been adopted across the industry and is supported by SDKs in multiple programming languages.[18]
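MCP messages are JSON-RPC 2.0, and the protocol's `tools/list` and `tools/call` methods are how clients discover and invoke server-side tools. The following is a minimal in-process sketch of the server side of that exchange; a real server would use an official MCP SDK and a transport such as stdio or HTTP, and the tool shown is an invented example.

```python
import json

# Toy MCP-style server: a JSON-RPC 2.0 dispatcher exposing one tool
# via the protocol's tools/list and tools/call methods. The transport
# layer is omitted; requests and responses are plain JSON strings.

TOOLS = {
    "add": {
        "description": "Add two integers",
        "handler": lambda args: args["a"] + args["b"],
    }
}

def handle(request_json):
    req = json.loads(request_json)
    rid = req.get("id")
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        params = req["params"]
        tool = TOOLS[params["name"]]
        result = {"content": tool["handler"](params["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": rid,
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": rid, "result": result})
```

The host application plays the client role: it lists the server's tools once, then issues `tools/call` requests whenever the model decides to use one.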
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation, ensuring vendor-neutral governance under the same stewardship model that supports Kubernetes, PyTorch, and Node.js. By the time of the donation, MCP had grown to over 97 million monthly SDK downloads and 10,000 active servers, making it one of the fastest-growing open-source projects in AI history.[64]
Anthropic operates as a public benefit corporation (PBC), legally requiring it to balance stockholder interests with its public benefit mission. The company must regularly report on how it promotes public benefits to its owners.[42]
Anthropic established a unique governance structure called the Long-Term Benefit Trust (LTBT), a purpose trust designed to ensure the company remains focused on "the responsible development and maintenance of advanced AI for the long-term benefit of humanity." The Trust holds Class T shares that allow it to elect directors to Anthropic's board.[43]
As of April 2025, the Trust members include:
Neil Buddy Shah
Kanika Bahl
Zach Robinson
Richard Fontaine[44]
Current board members include:
Dario Amodei (CEO and Co-Founder)
Daniela Amodei (President and Co-Founder)
Yasmin Razavi
Jay Kreps
Reed Hastings[45]
| Round | Date | Amount | Lead Investors | Valuation |
|---|---|---|---|---|
| Series A | May 2021 | $124M | Jaan Tallinn | Undisclosed |
| Series B | April 2022 | $580M | FTX, Google | Undisclosed |
| Series C | May 2023 | $450M | Spark Capital | $4.6B |
| Series D | September 2023 | $4B (Amazon) | Amazon | $18.5B |
| Google Investment | October 2023 | $2B | Google | N/A |
| Amazon Additional | November 2024 | $4B | Amazon | N/A |
| Series E | March 2025 | $3.5B | Lightspeed Venture Partners | $61.5B |
| Series F | September 2025 | $13B | ICONIQ, Fidelity, Lightspeed | $183B |
| Series G | February 2026 | $30B | GIC, Coatue | $380B |
Total funding raised: approximately $57 billion (as of February 2026)[46]
Amazon: $8 billion total investment[19]
Microsoft: Up to $5 billion investment commitment, plus $30 billion Azure cloud services deal[47]
Google: $2 billion investment[14]
GIC: Co-led Series G
Coatue: Co-led Series G
Lightspeed Venture Partners: Led multiple rounds
Fidelity Management: Co-led Series F
ICONIQ Capital: Co-led Series F
Other notable investors include: Salesforce Ventures, Menlo Ventures, General Catalyst, BlackRock, Blackstone, Sequoia Capital, Temasek, Goldman Sachs, JPMorgan Chase, Qatar Investment Authority, D. E. Shaw Ventures, Founders Fund, and MGX (Abu Dhabi)[48]
The consumer-facing Claude assistant is available through:
Claude.ai: Web-based interface launched July 2023
Claude iOS/Android apps: Mobile applications
Claude Pro/Max: Premium subscription tiers with enhanced capabilities[49]
The Claude API provides programmatic access to Claude models for developers and businesses, supporting:
Text generation and analysis
Code generation and debugging
Vision capabilities (image analysis)
Extended context windows (up to 1 million tokens with Opus 4.6 and Sonnet 4.6)[50]
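A request to the Claude API's Messages endpoint is a small JSON document. The sketch below builds such a request body; the field names (`model`, `max_tokens`, `messages`) follow Anthropic's public API, while the default model ID is illustrative and in real use the JSON would be POSTed with an `x-api-key` header via an HTTP client or the official SDK.

```python
import json

# Sketch of a Claude Messages API request body. The model ID is an
# assumed example; actual IDs are listed in Anthropic's API docs.

def build_message_request(prompt, model="claude-sonnet-4-5",
                          max_tokens=1024):
    """Return the JSON body for a single-turn Messages API call."""
    return json.dumps({
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })
```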
Introduced in February 2026, the Enterprise Analytics API gives Enterprise plan customers programmatic access to organization-wide usage and engagement data. The API provides five endpoints: per-user activity metrics (conversation counts, messages sent, projects created, files uploaded, and artifacts created), organization-wide activity summaries (daily, weekly, and monthly active user counts with seat utilization), chat project usage (conversation and user counts by project), skill usage breakdowns for Claude and Claude Code sessions, and Claude Code metrics (including commits and pull requests). Data is aggregated per organization, per day, available for up to the past 90 days (from January 1, 2026 onward), with a default rate limit of 60 requests per minute.[73]
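A client polling these endpoints needs to stay under the documented default of 60 requests per minute. One common pattern is a sliding-window throttle like the sketch below; this is an illustrative client-side pattern, not part of any Anthropic SDK.

```python
from collections import deque

# Sliding-window rate limiter sized for the Analytics API's documented
# default of 60 requests per minute. `allow(now)` takes the current
# time in seconds and returns whether a request may be sent.

class AnalyticsRateLimiter:
    def __init__(self, max_requests=60, window=60.0):
        self.max_requests = max_requests
        self.window = window
        self.timestamps = deque()  # send times within the window

    def allow(self, now):
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            return False  # caller should wait and retry
        self.timestamps.append(now)
        return True
```

In production the caller would pass `time.monotonic()` as `now` and sleep until the oldest timestamp expires whenever `allow` returns `False`.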
Launched as an early preview in February 2025 alongside Claude 3.7 Sonnet and fully released in May 2025, Claude Code is an agentic coding assistant that operates in the terminal. It can read and write code, run tests, use the command line, and coordinate multi-agent workflows with developer oversight. With the release of Opus 4.6 in February 2026, Claude Code gained the "agent teams" feature, allowing multiple agents to work on different parts of a codebase in parallel. Claude Code has generated over $2.5 billion in annualized run-rate revenue as of February 2026, with business subscriptions quadrupling since the start of the year.[23]
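The agent-teams pattern (parallel agents with independent contexts, coordinated by an orchestrator through per-agent mailboxes) can be sketched with ordinary threads and queues. The "Mailbox Protocol" name comes from Anthropic's description; the message shapes and worker logic below are invented purely for illustration.

```python
import queue
import threading

# Toy sketch of an agent team: an orchestrator fans tasks out to
# parallel workers, each with its own mailbox, then collects results.
# Workers stand in for full Claude Code agents.

def agent_worker(mailbox, results):
    """Drain the mailbox, 'completing' each task until a stop message."""
    while True:
        task = mailbox.get()
        if task is None:          # stop sentinel from the orchestrator
            return
        results.put(f"done: {task}")

def run_agent_team(tasks, num_agents=4):
    mailboxes = [queue.Queue() for _ in range(num_agents)]
    results = queue.Queue()
    threads = [threading.Thread(target=agent_worker, args=(mb, results))
               for mb in mailboxes]
    for t in threads:
        t.start()
    for i, task in enumerate(tasks):      # round-robin task assignment
        mailboxes[i % num_agents].put(task)
    for mb in mailboxes:
        mb.put(None)                      # tell each agent to stop
    for t in threads:
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```

The point of giving each agent its own mailbox rather than a shared queue is that the orchestrator can address work to a specific agent, mirroring how each agent in a team keeps its own full context window.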
Released on March 9, 2026, Code Review is a multi-agent system built into Claude Code that automatically analyzes pull requests for bugs, logic errors, and security vulnerabilities. When a pull request is opened, the system dispatches a team of agents that work in parallel, with findings verified to filter out false positives and ranked by severity. Results appear as a single overview comment on the pull request with inline comments for specific bugs. In Anthropic's internal testing, substantive review comments rose from 16% to 54% of pull requests, and 84% of large pull requests (over 1,000 lines changed) received findings. Code Review is available as a research preview for Team and Enterprise customers at an average cost of $15 to $25 per review.[75]
Launched on February 20, 2026 as a limited research preview, Claude Code Security scans codebases for security vulnerabilities by reasoning about code in the way a human security researcher would. Unlike traditional static analysis tools, it understands how components interact, traces data flows across files, and catches complex multi-component vulnerability patterns. Each finding goes through a multi-stage verification process. During development, the team used Opus 4.6 to discover over 500 high-severity vulnerabilities in widely used open-source libraries that had gone undetected for years. The tool is available to Enterprise and Team customers, with expedited access for open-source maintainers.[72]
Shipped on March 20, 2026 as a research preview, Claude Code Channels lets developers connect a running Claude Code session to Telegram or Discord. Messages sent through the chat app are picked up by the local session, which executes the requested work and replies through the same platform. The feature is built on the Model Context Protocol: a Channel is an MCP server that declares the claude/channel capability and pushes events into an active session. Channels can be one-way (forwarding alerts or CI notifications) or two-way (full conversational interaction). The security model uses an allowlist-based plugin system (only Anthropic-approved plugins during the preview) and pairing-code authentication. Claude Code version 2.1.80 or later is required.[77]
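The channel idea (an external source pushing events into a running session, which may reply through the same path) can be sketched as follows. The `claude/channel` capability name comes from the announcement; the classes and message shapes here are invented for illustration and are not the real MCP wire format.

```python
import queue

# Toy two-way channel between a chat platform and a running session.
# Inbound events flow chat app -> session; replies flow back out.

class Channel:
    capability = "claude/channel"  # capability name from the announcement

    def __init__(self):
        self.inbound = queue.Queue()   # chat app -> session
        self.outbound = []             # session -> chat app

    def push_event(self, text):
        """Called by the chat-platform side to deliver a message."""
        self.inbound.put(text)

    def reply(self, text):
        """Called by the session side to answer through the channel."""
        self.outbound.append(text)

def session_loop(channel):
    """Process all pending events, replying through the same channel."""
    while not channel.inbound.empty():
        event = channel.inbound.get()
        channel.reply(f"handled: {event}")
```

A one-way channel in this model would simply never call `reply`, forwarding alerts or CI notifications into the session without a return path.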
Introduced on January 12, 2026 as a research preview, Cowork extends Claude's capabilities to everyday office tasks. Described as "Claude Code for the rest of your work," it is built into the Claude Desktop app and allows users to designate a folder on their computer where Claude can read, edit, and create files. The tool implements sub-agent coordination for parallelizable tasks, spawning multiple Claude instances that execute concurrently. Cowork launched for Max subscribers on macOS, with Pro subscribers and Team/Enterprise plans gaining access in subsequent weeks. By February 2026, Anthropic had launched a plugin marketplace and admin controls for enterprise deployments, along with connectors for Google Drive, Gmail, DocuSign, and other business tools. A Windows version followed in February 2026.[51][70]
Announced on March 17, 2026, Dispatch is a feature inside Claude Cowork that creates a single persistent conversation between the Claude mobile app on a phone and the Claude Desktop app on a computer. Users can assign tasks from their phone and return to completed work on their desktop. Execution occurs in a local sandboxed environment; files never leave the user's computer and no sensitive data is sent to Anthropic's servers. Dispatch supports over 38 connectors including Notion, Gmail, Slack, Google Calendar, Google Drive, Dropbox, GitHub, Figma, Trello, and Asana. Max plan subscribers received immediate access, with Pro plan users gaining access within days.[76]
Launched on March 6, 2026, the Claude Marketplace is a curated enterprise platform where businesses can purchase Claude-powered tools from third-party partners. Enterprises with existing Anthropic spend commitments can redirect part of their budget toward partner applications, receiving a single consolidated invoice. Launch partners included Snowflake, GitLab, Harvey, Replit, Rogo, and Lovable. Anthropic does not take a revenue cut from Marketplace purchases, distinguishing it from cloud marketplace models operated by AWS and Microsoft Azure. The Marketplace launched in limited preview, with plans to expand the partner catalog over time.[74]
Computer use is a beta feature first released in October 2024 alongside Claude 3.5 Sonnet and Haiku that enables Claude to take screenshots, click, and type text, allowing it to interact with graphical computer interfaces.[52] In February 2026, Anthropic acquired Vercept, a Seattle-based startup specializing in computer-use AI agents, to further advance these capabilities. The Vercept team brought expertise in building agents that can operate full desktop environments, including navigating spreadsheets and managing workflows across multiple tools.[66]
On March 23, 2026, Anthropic launched Computer Use for Mac as a research preview, enabling Claude to control a Mac computer directly. Claude can move the mouse, use the keyboard, open applications, navigate browsers, fill in spreadsheets, and complete multi-step tasks autonomously. When a supported connector is available (such as Google Workspace or Slack), Claude prioritizes that integration but falls back to screen-based control when no connector exists. The feature uses a permission-first approach: Claude requests access before interacting with new applications, and users can halt operations at any point. Computer Use for Mac is available to Pro and Max subscribers and integrates with Dispatch for remote task assignment from a phone. Windows and Linux support has not yet been announced.[78]
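The connector-first behavior and permission-first approach described above amount to two simple rules, sketched below. This is a hypothetical illustration: the connector list, function names, and return values are invented for clarity and are not Anthropic's API.

```python
# Apps with a supported connector (illustrative examples from the text).
CONNECTORS = {"google_workspace", "slack"}
approved: set = set()  # apps the user has already granted Claude access to

def choose_mode(app: str) -> str:
    """Prefer a structured connector; otherwise fall back to screen-based control."""
    return "connector" if app in CONNECTORS else "screen_control"

def interact(app: str, grant: bool = True) -> str:
    """Permission-first: ask before touching an app Claude has not used yet."""
    if app not in approved:
        if not grant:
            return "blocked"  # user declined the permission prompt
        approved.add(app)     # remember the grant for subsequent interactions
    return choose_mode(app)
```

Under these assumptions, an app like Slack is driven through its connector, while an app with no connector is controlled through the screen, and any new app first triggers a permission request that the user can decline.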
In November 2024, Anthropic named AWS as its primary training partner and primary cloud provider. Key aspects of the partnership include:
Anthropic uses AWS Trainium and Inferentia chips for training and deploying models
AWS customers access Claude models through Amazon Bedrock
Amazon has invested a total of $8 billion while remaining a minority investor[19]
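As a concrete illustration of the Bedrock access mentioned above, the sketch below builds the JSON request body that Amazon Bedrock's InvokeModel API expects for Anthropic models. The model ID is an example only; real deployments should take model IDs and regions from the Bedrock documentation, and the commented `boto3` call is indicative rather than a complete program.

```python
import json

MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0"  # illustrative ID

def build_body(prompt: str, max_tokens: int = 256) -> str:
    """Assemble the Anthropic-on-Bedrock request body as a JSON string."""
    payload = {
        "anthropic_version": "bedrock-2023-05-31",  # version field Bedrock requires
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# A real invocation would then look roughly like:
#   boto3.client("bedrock-runtime").invoke_model(modelId=MODEL_ID,
#                                                body=build_body("Hello"))
```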
Anthropic's models are available through Google Cloud's Vertex AI platform, providing enterprise customers with access to Claude capabilities within Google's cloud infrastructure. Google invested $2 billion in Anthropic in October 2023.[14]
In September 2025, Claude models were integrated into Microsoft 365 Copilot. In early 2026, the partnership deepened: Anthropic committed to spending $30 billion on Microsoft's Azure cloud services, while Microsoft agreed to invest up to $5 billion in Anthropic. Claude Sonnet 4.6 powers Microsoft Copilot Cowork, a product offering AI agents within the Microsoft 365 suite. Microsoft spends approximately $500 million per year on Anthropic's models.[47]
In November 2024, Anthropic partnered with Palantir Technologies and Amazon Web Services to provide Claude models to U.S. intelligence and defense agencies for use in classified environments.[53]
Announced on February 17, 2026, the collaboration with Infosys integrates Claude models and Claude Code with Infosys Topaz AI offerings to help enterprises automate complex workflows and accelerate software delivery in regulated industries. The partnership focuses on agentic AI, using the Claude Agent SDK to build persistent agents that handle multi-step tasks such as processing claims, generating and testing code, and managing compliance reviews. The initial focus is on telecommunications, with plans to expand into financial services, manufacturing, and software development. A dedicated Anthropic Center of Excellence supports the collaboration.[71]
Launched in March 2026 with a $100 million investment, the Claude Partner Network supports consulting firms, system integrators, and technology companies that build on Claude for their enterprise customers. Anchor partners include Accenture, Deloitte, Cognizant, and Infosys. The program provides training, dedicated technical support, joint go-to-market investment, and a new Claude Certified Architect certification. Anthropic is scaling its partner-facing team fivefold to support the network.[69]
Anthropic has experienced rapid enterprise adoption, growing from fewer than 1,000 business customers in 2023 to more than 300,000 as of 2025.[54] The company has expanded its global presence with offices in San Francisco (headquarters), Dublin, London, Tokyo, Bengaluru, Seoul, Zurich, and (as of March 2026) Sydney.[67]
Major enterprise customers and partners include:
Pfizer: Uses Claude to accelerate research and reduce operational costs
United Airlines: Used Claude to personalize customer messages and improve response speeds
Zoom: Integrated Claude for various business applications
Snowflake: Utilizes Claude for data analytics
Thomson Reuters: CoCounsel tax platform uses Claude for tax professionals
Novo Nordisk: Reduced clinical study report writing from 12 weeks to 10 minutes
Commonwealth Bank of Australia: Reduced customer scam losses by 50%
Rakuten: Cut feature development time by 79% using Claude Code
Replit: Integrated Claude into Agent for code generation
Canva: Uses Claude for AI-powered design features[55]
| Year | Annualized Revenue (Run-Rate) |
|---|---|
| 2022 | $10 million |
| 2023 | $100 million |
| 2024 | $1 billion |
| End of 2025 | ~$9 billion |
| February 2026 | $14 billion |
| March 2026 | $19+ billion |
The company added $6 billion in run-rate revenue during February 2026 alone, driven largely by Claude Code adoption and enterprise growth. The number of customers spending over $100,000 annually has grown 7x year-over-year, and over 500 customers now spend more than $1 million annually, up from roughly a dozen two years ago. Anthropic was capturing over 73% of all spending among companies buying AI tools for the first time as of March 2026. The company has set an internal target of $20 billion to $26 billion ARR for 2026, with a longer-term projection of up to $70 billion in revenue and $17 billion in cash flow by 2028.[1]
Anthropic published a Responsible Scaling Policy (RSP) in September 2023 that establishes safety levels for AI systems based on their capabilities. Models are classified into different Anthropic Safety Levels (ASL), with escalating controls as model risk increases. The policy has been updated multiple times: Version 2.0 in October 2024 introduced new capability thresholds and safety case methodologies; Version 2.1 in March 2025 added CBRN-related thresholds; Version 2.2 in May 2025 coincided with ASL-3 activation for Claude 4 models; and Version 3.0 in February 2026 introduced Frontier Safety Roadmaps with detailed safety goals and Risk Reports that quantify risk across all deployed models.[56]
Key research focus areas include:
AI Alignment: Ensuring AI systems behave as intended
Mechanistic Interpretability: Understanding how AI systems make decisions
Constitutional AI: Developing value-aligned AI systems
Safety Research: Mitigating risks from advanced AI systems
Deceptive Behavior Studies: Research on "sleeper agent" behaviors that can persist through safety fine-tuning[57]
Launched on March 11, 2026, the Anthropic Institute is a dedicated research arm focused on studying AI's societal impact. Led by co-founder Jack Clark (in a new role as Head of Public Benefit), the Institute brings together three teams: the Frontier Red Team (which stress-tests AI systems at their capability limits), Societal Impacts (which studies real-world AI usage), and Economic Research (which tracks AI's effects on employment and the broader economy). The Institute also incubates new research efforts, including forecasting AI progress and studying how advanced AI will interact with the legal system. Notable recruits include Matt Botvinick (formerly of Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).[68]
Additional safety practices include:
Red Teaming: Regular testing for vulnerabilities and potential misuse
External Audits: Collaboration with organizations like the US AI Safety Institute and UK Safety Institute
Constitutional Classifiers: Defenses against jailbreaking attempts, with the second-generation system (Constitutional Classifiers++) reducing compute overhead to ~1% while maintaining strong jailbreak resistance[40]
In October 2023, Anthropic was sued by music publishers including Universal Music Group, Concord, and ABKCO for alleged copyright infringement of song lyrics used in training data. A judge later denied a request for a preliminary injunction while litigation continued.[58]
In September 2025, Anthropic agreed to pay $1.5 billion to settle the Bartz v. Anthropic class action lawsuit brought by authors who alleged the company had illegally downloaded millions of pirated books from shadow libraries like Library Genesis and Pirate Library Mirror to train its AI models. The settlement, the largest copyright settlement in U.S. history, covers approximately 500,000 works at roughly $3,000 per book. Judge William Alsup had earlier ruled on summary judgment that using books to train AI was fair use if acquired legally, but denied Anthropic's motion regarding pirated copies.[27]
On February 24, 2026, Anthropic published a detailed blog post accusing three Chinese AI companies of conducting industrial-scale distillation attacks against Claude. The companies, DeepSeek, Moonshot AI, and MiniMax, allegedly created over 24,000 fraudulent accounts and generated more than 16 million exchanges with Claude to train their own models. The attackers used commercial proxy services and what Anthropic called "hydra cluster" networks, with one proxy setup controlling more than 20,000 fraudulent accounts at once. Anthropic argued that distillation-derived models strip away safety protections, creating national security risks. The accusations drew mixed reactions: some observers supported Anthropic's concerns about intellectual property theft, while others noted the irony given Anthropic's own legal challenges over training data sourced from copyrighted works.[65]
In early 2026, Anthropic's relationship with the U.S. Department of Defense deteriorated after the Trump administration demanded that AI companies remove all company-specific guardrails from military contracts. Anthropic refused to allow unrestricted use of its models, insisting on retaining two prohibitions: against mass domestic surveillance and against fully autonomous weapons systems. On March 3, 2026, the Department of Defense formally designated Anthropic a supply chain risk to national security, the first time such a designation had been applied to an American company. On March 9, Anthropic filed lawsuits in two federal courts (the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the Federal Circuit) challenging the designation as an unconstitutional retaliation for protected speech. Nearly 150 retired federal and state judges subsequently filed an amicus brief supporting Anthropic. Microsoft, Google engineers, and employees at competing AI companies also voiced support. The financial impact could reach billions of dollars in lost government revenue. OpenAI subsequently signed a contract with the Defense Department to replace Anthropic's services.[33]
The company's partnerships with Amazon and Google have attracted regulatory attention:
The UK Competition and Markets Authority investigated Amazon's partnership but concluded it could not be examined under current merger rules
The Federal Trade Commission has reviewed Big Tech AI investments but has not taken enforcement action[59]
The leadership team is composed of experts with experience across leading tech companies and research institutions:
| Name | Title | Background |
|---|---|---|
| Dario Amodei | Co-founder and CEO | Former VP of Research at OpenAI; PhD in Computational Neuroscience from Princeton University[60] |
| Daniela Amodei | Co-founder and President | Previously held roles at OpenAI and Stripe |
| Rahul Patil | Chief Technology Officer | Former CTO of Stripe; joined October 2025[63] |
| Sam McCandlish | Co-founder and Chief Architect | Previously worked on GPT-3 at OpenAI; transitioned from CTO to Chief Architect in October 2025[63] |
| Mike Krieger | Co-lead of Labs | Co-founder of Instagram; joined as CPO in May 2024, moved to Labs co-lead in late 2025[61] |
| Ami Vora | Head of Product | Succeeded Mike Krieger as Head of Product in late 2025[63] |
| Jack Clark | Co-founder, Head of Public Benefit | Former Policy Director at OpenAI; leads the Anthropic Institute |
| Tom Brown | Co-founder, Head of Core Resources | Previously worked on GPT-3 at OpenAI and at Google DeepMind |
| Krishna Rao | Chief Financial Officer | Former head of corporate finance at Airbnb |
| Paul Smith | Chief Commercial Officer | First person to hold the CCO role at Anthropic |
| Jason Clinton | Chief Information Security Officer | Former Staff Software Engineer at Google |
| Jan Leike | Co-lead of Alignment Science | Former alignment researcher at OpenAI |
| Chris Ciauri | Managing Director of International | Former President of EMEA for Google Cloud; 25 years in tech including a decade at Salesforce |
| Sarah Heck | Head of Public Policy | Joined as Head of External Affairs, promoted to lead expanded Public Policy organization[68] |
| Irina Ghose | Managing Director, India | Leads the Bengaluru office; India is Claude's second-largest market[71] |
Anthropic competes in a rapidly evolving AI market with both Western and Chinese rivals. As of early 2026, the competitive landscape includes:
OpenAI: Creator of ChatGPT and GPT models. Anthropic's most direct competitor in both consumer and enterprise AI, and its successor for the Pentagon contract after Anthropic's departure.
Google DeepMind: Developer of Gemini models. Also an investor in Anthropic through Google's $2 billion stake.
Meta: Developer of open-source Llama models, which compete on cost and accessibility.
xAI: Elon Musk's AI company, developer of the Grok model family. Competes aggressively on pricing, with Grok 4.1 offering a 2 million token context window at very low cost.
DeepSeek: Chinese AI lab that has driven prices down significantly while achieving competitive benchmark scores. Subject of Anthropic's distillation accusations in February 2026.
Mistral AI: European AI startup focused on efficient models.
Anthropic has differentiated itself through its safety-first approach, strong coding performance (particularly through Claude Code), and enterprise adoption. By March 2026, the company was capturing over 73% of first-time AI tool spending among enterprises, up from a 50/50 split with OpenAI just 10 weeks earlier.[1]
AI safety
Amazon Bedrock
Vertex AI