Windsurf is an AI-powered code editor developed by the company formerly known as Codeium (originally Exafunction). Launched in November 2024, Windsurf positions itself as the first "agentic IDE," combining traditional code editing with an AI agent called Cascade that can understand entire codebases, perform multi-file edits, and execute terminal commands on behalf of the developer. The company was founded in 2021 by Varun Mohan and Douglas Chen, both MIT graduates, and rebranded fully from Codeium to Windsurf in April 2025. Following a complex acquisition saga involving OpenAI, Google, and Cognition AI, the Windsurf product is now owned by Cognition AI as of mid-2025. By early 2026, Windsurf ranked first in the LogRocket AI Dev Tool Power Rankings, ahead of both Cursor and GitHub Copilot.
Windsurf is built as a fork of Visual Studio Code, inheriting VS Code's extension ecosystem, keybindings, and settings. Through its plugin system, Windsurf's AI features also reach more than 40 other editors and IDEs, including JetBrains IDEs, Vim, Neovim, and Xcode. The editor runs on Windows, macOS, and Linux, and is available as both a standalone desktop application and as plugins for existing editors.
The company behind Windsurf was founded in June 2021 as Exafunction by Varun Mohan (CEO) and Douglas Chen (co-founder). Mohan and Chen first met in middle school and later became classmates at MIT. Before starting Exafunction, Mohan served as a tech lead at Nuro, where he managed large-scale deep learning infrastructure for autonomous vehicles. Chen worked at Meta, developing software tools for VR headsets including the Oculus Quest.
Exafunction's original focus was on optimizing GPU utilization at scale, helping companies run machine learning workloads more efficiently on GPU clusters. The startup attracted early venture capital for this infrastructure-layer product.
Anshul Ramachandran, a founding team member, currently serves as Head of Enterprise and Partnerships and has played a key role in the company's growth trajectory.
In 2022, the company pivoted from GPU optimization to AI-powered developer tools, rebranding as Codeium. The pivot was driven by the founders' recognition that the rapidly advancing capabilities of large language models could transform software development. Rather than helping companies use GPUs more efficiently, they would use those same AI capabilities to help developers write code more efficiently.
Codeium launched as a free AI code completion tool, offering autocomplete suggestions powered by proprietary language models. The decision to offer a generous free tier was strategic: it allowed Codeium to rapidly build a user base and gather the usage data needed to improve its models, while competitors like GitHub Copilot charged from the start. This approach proved effective, helping the product gain traction among individual developers and small teams.
Codeium distinguished itself from competitors by training its own models rather than relying solely on third-party APIs. This gave the company more control over latency, cost, and the ability to fine-tune models specifically for code-related tasks.
In November 2024, the company took its most ambitious step yet by releasing the Windsurf Editor, a standalone AI-native IDE built on top of a Visual Studio Code fork. Rather than offering AI as a plugin bolted onto an existing editor, Windsurf was designed from the ground up to integrate AI capabilities at every level of the development experience.
The headline feature was Cascade, an agentic AI system capable of understanding project-wide context, making coordinated changes across multiple files, and running commands in the terminal. The launch positioned Windsurf as a direct competitor to Cursor, another AI-first code editor that had been gaining significant developer mindshare.
The full company rebrand from Codeium to Windsurf was completed in April 2025, aligning the company name with its flagship product. The AI coding assistant plugin formerly known as Codeium was rebranded as "Windsurf Plugins" to match.
The company raised approximately $243 million in disclosed funding prior to its acquisition, growing from a seed-stage GPU optimization startup to an AI coding unicorn in under four years.
| Round | Date | Amount | Valuation | Lead Investor(s) |
|---|---|---|---|---|
| Seed | 2021 | Undisclosed | N/A | Various |
| Series A | 2022 | Undisclosed | N/A | Various |
| Series B | January 2024 | $65M | $500M | Kleiner Perkins |
| Series C | August 2024 | $150M | $1.25B (post-money) | General Catalyst |
| Attempted raise | February 2025 | Undisclosed | ~$2.85B target (reported); did not close | Kleiner Perkins (reported) |
The January 2024 Series B, led by Kleiner Perkins, valued the company at $500 million and marked its emergence as a serious contender in the AI developer tools space. Just seven months later, the August 2024 Series C led by General Catalyst more than doubled the valuation to $1.25 billion, making the company a unicorn.
In February 2025, reports indicated the company was in talks to raise additional funding at a valuation approaching $2.85 billion, led by returning investor Kleiner Perkins. However, this round did not close, as the company was soon swept into acquisition discussions.
Other notable investors across the company's funding history include Greenoaks and various strategic angels from the technology industry.
Windsurf became the center of one of the most dramatic acquisition stories in the AI industry in 2025, involving three of the biggest names in technology. The decisive sequence of events (the collapse of the OpenAI deal, Google's talent hire, and the Cognition acquisition agreement) played out over roughly 72 hours in July 2025, making it one of the fastest high-stakes corporate reshufflings in recent tech history.
In April 2025, reports emerged that OpenAI was in talks to acquire Windsurf for approximately $3 billion, which would have been OpenAI's largest acquisition. Bloomberg was first to report on the potential deal on April 16, 2025. OpenAI's interest stemmed from a desire to own a developer-facing IDE product that could serve as a distribution channel for its models, complementing its existing ChatGPT and API businesses. The deal was formally announced on May 6, 2025.
However, the acquisition fell apart by July 11, 2025. The primary obstacle was a conflict with Microsoft, OpenAI's largest investor and partner. Microsoft holds access to OpenAI's intellectual property under their partnership agreement, and OpenAI did not want Microsoft to gain access to Windsurf's AI coding technology as well. An OpenAI spokesperson confirmed to Fortune that the exclusivity period for the $3 billion acquisition deal had expired, leaving Windsurf free to pursue other options.
In a rapid turn of events following the collapse of the OpenAI deal, Google DeepMind moved to hire Windsurf CEO Varun Mohan, co-founder Douglas Chen, and approximately 40 senior R&D staff members focused on "agentic coding." Google paid approximately $2.4 billion for a nonexclusive license to certain Windsurf technologies and the talent acquisition. The deal was structured as a "reverse acquihire," providing investor returns and obtaining talent and intellectual property without acquiring equity in the company. Critically, Google did not take an equity stake in Windsurf or gain control over the company; the license is nonexclusive, meaning Windsurf remained free to license its technology to others.
The Google deal was executed on Friday evening, July 11, 2025, the same day the OpenAI exclusivity expired. Details that later emerged revealed that Windsurf's venture capital investors, including Kleiner Perkins and General Catalyst, received payouts from the $2.4 billion licensing fee.
With its founding team departing to Google, Windsurf needed a new home. Cognition AI, the company behind the autonomous coding agent Devin, stepped in just days later and reached a definitive agreement to acquire the remainder of Windsurf. The acquisition, reported at approximately $250 million, was completed in the summer of 2025. Cognition announced that all vesting cliffs would be waived, allowing Windsurf employees to immediately benefit from the deal. The approximately 210 remaining employees (out of the original 250-person team) joined Cognition.
Following the acquisition, Cognition raised $400 million in additional funding and was valued at $10.2 billion by September 2025. Windsurf's enterprise ARR grew more than 30% after the merger, and Cognition's combined annual recurring revenue more than doubled.
In a VentureBeat interview following the acquisition, Cognition noted that the deal also restored Windsurf's relationships with model providers, stating "We're friends with Anthropic again," a reference to tensions that had developed during the OpenAI acquisition period.
Cascade is Windsurf's flagship AI agent and the feature that most distinguishes it from traditional code editors and simpler AI assistants. Unlike basic autocomplete tools that suggest the next few tokens of code, Cascade operates as a full-fledged coding partner that can plan, execute, and iterate on complex multi-step tasks.
How Cascade works:
Cascade maintains a deep understanding of the user's entire codebase, not just the currently open file. It builds and updates an internal representation of the project's architecture, dependencies, and patterns. When a developer asks Cascade to perform a task (for example, "refactor the authentication module to use JWT tokens"), the agent can locate the relevant files across the project, plan the required changes, apply coordinated edits to multiple files, and run terminal commands to verify the result.
Cascade tracks all of the developer's actions, including edits, commands, conversation history, clipboard contents, and terminal output, to infer intent and adapt its behavior in real time. This contextual awareness allows it to provide proactive suggestions rather than only responding to explicit requests.
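The plan-act-observe cycle described above can be sketched as a minimal agent loop. Everything here is illustrative: the `llm_step` function and the tool names are hypothetical stand-ins for a real model call and tool registry, not Windsurf's actual internals.

```python
# Minimal sketch of an agentic plan-act-observe loop.
# llm_step and the tool names are hypothetical illustrations,
# not Windsurf's implementation of Cascade.

def run_tool(name, arg, workspace):
    """Dispatch a tool call against an in-memory 'workspace'."""
    if name == "read_file":
        return workspace.get(arg, "")
    if name == "edit_file":
        path, content = arg
        workspace[path] = content
        return "ok"
    raise ValueError(f"unknown tool: {name}")

def llm_step(goal, observations):
    """Stand-in for a model call that decides the next action.
    A real agent would send goal + observations to an LLM."""
    if not observations:
        return ("read_file", "auth.py")
    if observations[-1] == "use_sessions()":
        return ("edit_file", ("auth.py", "use_jwt()"))
    return ("done", None)

def agent_loop(goal, workspace, max_steps=10):
    """Repeatedly ask the 'model' for an action and execute it."""
    observations = []
    for _ in range(max_steps):
        action, arg = llm_step(goal, observations)
        if action == "done":
            break
        observations.append(run_tool(action, arg, workspace))
    return workspace

ws = agent_loop("refactor auth to JWT", {"auth.py": "use_sessions()"})
print(ws["auth.py"])  # the scripted 'model' rewrote the file to use_jwt()
```

The essential property a production agent adds on top of this skeleton is context: the observations fed back into the model include edits, terminal output, and conversation history rather than a single string.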
Modes and capabilities:
| Feature | Description |
|---|---|
| Code Mode | Cascade directly edits files in the workspace based on instructions |
| Chat Mode | Conversational interaction for asking questions about the codebase, getting explanations, or brainstorming |
| Plan Mode | Cascade asks clarifying questions and produces a structured plan before executing, helping define context and constraints upfront |
| Tool Calling | Cascade can invoke external tools, run terminal commands, and interact with the development environment |
| Voice Input | Developers can dictate instructions using voice rather than typing |
| Checkpoints | Automatic save points that allow developers to roll back Cascade's changes if needed |
| Linter Integration | Cascade reads linter output and can automatically fix warnings and errors |
| Planning Agent | A specialized sub-agent that continuously refines a long-term plan while the primary model focuses on short-term actions |
| Todo Tracking | For complex tasks, Cascade creates and maintains a todo list within the conversation to track progress |
| Arena Mode | Side-by-side blind comparison of two AI models running the same prompt in parallel, with hidden identities; users vote on which performed better to build personal and global leaderboards |
The planning capability is particularly notable. For longer or more complex tasks, a dedicated planning agent operates alongside the primary model. The planning agent maintains and refines the overall strategy, while the primary model focuses on executing individual steps. This division of labor helps prevent the common problem of AI agents losing track of their goals during extended interactions.
Cascade Hooks allow developers and administrators to execute custom commands at specific points during Cascade's execution flow. Hooks can run before or after Cascade responses, and a post_setup_worktree hook enables initialization of worktrees created by Cascade. Enterprise teams can configure Cascade Hooks through a cloud dashboard, enabling centralized policy enforcement. Use cases include logging all user prompts, blocking policy-violating prompts, and triggering custom automation during agent execution. A rules_applied field tracks which rules were triggered in the post_cascade_response hook, giving teams visibility into how policies affect agent behavior.
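A hook of the kind described above might look like the following sketch. The payload shape and the exit-code convention are assumptions for illustration only; Windsurf's documented hook interface defines the real contract.

```python
# Hypothetical pre-response hook: log allowed prompts and block
# ones that appear to contain credentials. The payload fields and
# return convention are assumptions, not Windsurf's documented API.
import json
import re

BLOCKED_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]"),          # looks like a leaked key
    re.compile(r"(?i)BEGIN (RSA|EC) PRIVATE KEY"),  # pasted private key
]

def check_prompt(payload: dict) -> int:
    """Return 0 to allow the prompt, nonzero to block it."""
    prompt = payload.get("prompt", "")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            print(f"blocked: prompt matched {pattern.pattern}")
            return 1
    # Log allowed prompts for audit purposes.
    print(json.dumps({"event": "prompt_allowed", "chars": len(prompt)}))
    return 0

# A real hook would read the payload from the hook invocation;
# here we call it directly for demonstration.
check_prompt({"prompt": "please refactor the auth module"})
check_prompt({"prompt": "api_key: sk-12345"})
```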
Introduced in January 2026, Agent Skills let developers bundle reference scripts, templates, checklists, and supporting files into folders that Cascade can invoke on demand. Each skill includes a SKILL.md file describing its purpose and instructions. Cascade uses progressive disclosure: only the skill's name and description are shown to the model by default, with the full SKILL.md content and supporting files loaded only when Cascade decides to invoke the skill. This approach keeps context usage efficient while enabling complex, multi-step workflows to be executed consistently.
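A skill folder might be laid out as follows. Only the SKILL.md name and description are documented above; the surrounding file names are illustrative.

```
release-checklist/              # hypothetical example skill
├── SKILL.md                    # name, description, step-by-step instructions
├── checklist.md                # supporting reference, loaded only on invocation
└── scripts/
    └── tag_release.sh          # helper script the skill can direct Cascade to run
```

Under progressive disclosure, only the name and description from SKILL.md occupy context until Cascade decides the skill is relevant, at which point the full instructions and supporting files are loaded.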
Windsurf's autocomplete feature provides real-time code suggestions as the developer types. The system goes beyond simple token prediction by considering the broader context of the file and project. Autocomplete suggestions can span multiple lines and are informed by the project's coding patterns and conventions.
The free tier includes unlimited tab autocomplete, a notable differentiator from competitors that limit completion counts on their free plans. Autocomplete is powered by the SWE-1-mini model, a compact, high-speed model specifically designed for the passive prediction features of the Windsurf Tab environment.
Flows is Windsurf's term for its approach to iterative, multi-file editing workflows. The concept bridges the gap between simple autocomplete (where the AI suggests the next few characters) and full agent mode (where Cascade takes autonomous action). In a Flow, the developer and AI collaborate in a back-and-forth rhythm: the developer provides direction, the AI proposes changes across relevant files, the developer reviews and accepts or modifies, and the cycle continues.
Flows are designed to understand architectural patterns and propagate changes across dependencies. For example, if a developer renames a function parameter in one file, Flows can identify and update all call sites across the project.
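For contrast, a purely textual multi-file rename looks like the sketch below. This is the baseline that semantic refactoring improves on: a word-boundary regex updates every occurrence but cannot distinguish a genuine call site from an unrelated identifier with the same spelling.

```python
# Naive textual rename across several files (illustrative baseline,
# not how Flows works internally). files maps path -> source text.
import re

def rename_across_files(files: dict, old: str, new: str) -> dict:
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return {path: pattern.sub(new, text) for path, text in files.items()}

project = {
    "auth.py": "def login(user_id): ...",
    "views.py": "login(user_id=42)",
}
updated = rename_across_files(project, "user_id", "account_id")
print(updated["views.py"])  # login(account_id=42)
```

A semantic tool instead resolves which occurrences actually refer to the renamed symbol before editing, avoiding false positives in strings, comments, and shadowed scopes.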
Codemaps is a beta feature that provides AI-annotated visual code maps for codebase understanding and navigation. Accessible through a dedicated pane in the editor, Codemaps renders interactive visual representations of code architecture with trace guides and line-level linking. Developers can use Codemaps to explore unfamiliar codebases, understand relationships between modules, and navigate large projects more effectively. Codemaps can also be mentioned directly in Cascade conversations to provide the AI agent with visual context about code structure. The feature is powered by the SWE-1.5 model.
Lifeguard is a built-in debugging tool that helps developers find and resolve bugs directly inside the IDE. Rather than requiring developers to switch between the editor and separate debugging tools, Lifeguard integrates bug detection and resolution suggestions into the normal editing workflow.
Vibe and Replace is Windsurf's multi-file refactoring capability, designed for performing large-scale code transformations across hundreds of files simultaneously. Unlike traditional find-and-replace operations that match literal text patterns, Vibe and Replace uses AI understanding to perform semantic refactoring, changing code structure and patterns across an entire codebase while preserving correctness.
Windsurf developed its own family of AI models specifically optimized for software engineering tasks, branded as SWE-1. The model family was officially announced on May 15, 2025, representing what the company described as a fundamental shift from mere code generation to encompassing the entire software engineering workflow. The models are trained to understand code structure, development workflows, and the specific patterns that arise in real-world software projects.
| Model | Description | Availability |
|---|---|---|
| SWE-1 | Full model with strong reasoning and code generation; competitive with Claude 3.5 Sonnet in tool usage, multi-hop reasoning, and planning tasks | Paid plans |
| SWE-1 Lite | Base model capable enough for most everyday coding tasks; improved quality over initial release | Free tier (unlimited) |
| SWE-1-mini | Compact, high-speed model that powers the Windsurf Tab autocomplete environment | All tiers |
| SWE-1.5 | Updated model released October 29, 2025, under Cognition ownership; frontier-size model with hundreds of billions of parameters | All tiers (free access for 3 months at launch) |
| SWE-1.5 Free | Standard-throughput version of SWE-1.5 with full intelligence but without the premium speed tier | Free tier |
SWE-1.5, released under Cognition's ownership, represents a significant leap in both speed and capability. The model was developed using a combination of reinforcement learning on custom environments, optimization for the Cascade agent harness, and custom infrastructure built on NVIDIA GB200 NVL72 chips.
The company's published comparisons describe SWE-1.5 as roughly 13 times faster at inference than Claude Sonnet 4.5 while remaining competitive on coding quality.
Windsurf evaluated SWE-1 performance through both offline benchmarks and blind production experiments with real users. The company uses two primary benchmarks: the Conversational SWE Task Benchmark, which tests how well a model can address a user query in the middle of an existing session with a half-finished task, and the End-to-End SWE Task Benchmark, which evaluates a model's ability to solve a problem from beginning to end. In production experiments, SWE-1 demonstrated strong performance in metrics like "Daily Lines Contributed per User" and "Cascade Contribution Rate," reflecting both the quality of its suggestions and users' willingness to adopt them.
The SWE-1 Lite model is notable for its inclusion in the free tier. Multiple developer reviews have described it as "surprisingly capable" for most tasks, making Windsurf's free offering competitive with paid tiers from other products.
Windsurf natively integrates with the Model Context Protocol (MCP), an open standard originally developed by Anthropic that enables large language models to access custom tools and external services. In Windsurf's architecture, the editor acts as the MCP host, while Cascade functions as the MCP client.
MCP servers in Windsurf are configured through the ~/.codeium/windsurf/mcp_config.json file, which stores server connections in JSON format. The system supports environment variable interpolation across commands, arguments, environment variables, headers, and URLs. Windsurf supports three transport types for MCP server connections:
| Transport Type | Description |
|---|---|
| stdio | Standard input/output communication for local processes |
| Streamable HTTP | HTTP-based transport for remote servers with streaming support |
| SSE (Server-Sent Events) | Event-based transport for real-time server communication |
All three transport types support OAuth authentication for secure connections to external services.
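A minimal configuration might look like the following. The `mcpServers` key and `${env:...}` interpolation follow common MCP client conventions, but exact field names should be checked against current Windsurf documentation.

```json
{
  "mcpServers": {
    "local-git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/path/to/repo"]
    },
    "remote-api": {
      "serverUrl": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer ${env:EXAMPLE_API_TOKEN}"
      }
    }
  }
}
```

The first entry launches a local server over stdio; the second connects to a remote server, with the token pulled from an environment variable rather than stored in the file.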
Windsurf includes an MCP Marketplace accessible through the MCPs icon in the Cascade panel or via Windsurf Settings. The marketplace offers one-click installation of popular pre-configured MCP servers, with officially verified servers marked by blue checkmarks.
Cascade supports a maximum of 100 tools at any given time. Users can selectively enable or disable individual tools from each MCP server through the settings interface.
For enterprise deployments, team administrators can restrict MCP access by whitelisting approved servers using regex pattern matching. Once any server is whitelisted, non-approved servers become unavailable to team members, ensuring that only vetted integrations are used in production environments.
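The whitelisting behavior described above (once any pattern is configured, only matching servers load) can be sketched as follows. The pattern list and server names are made up for illustration; the real policy is configured in Windsurf's admin dashboard.

```python
# Sketch of regex-based MCP server whitelisting. The patterns and
# server names are illustrative, not a real Windsurf policy format.
import re

APPROVED_PATTERNS = [
    r"github(-enterprise)?",  # official GitHub servers
    r"internal-.*",           # anything published by the platform team
]

def is_server_allowed(name: str, patterns=APPROVED_PATTERNS) -> bool:
    """Once any pattern is configured, only matching servers load."""
    if not patterns:  # no whitelist configured: everything is allowed
        return True
    return any(re.fullmatch(p, name) for p in patterns)

print(is_server_allowed("github"))         # True
print(is_server_allowed("internal-jira"))  # True
print(is_server_allowed("random-server"))  # False
```

Note the fail-closed design: an empty policy permits everything, but the first approved pattern flips the system to deny-by-default.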
Windsurf uses a credit-based pricing model, where different AI operations consume different numbers of credits. The pricing was simplified in 2025 from a dual-currency system (prompt credits and flow action credits) to a single credit system called "prompt credits."
| Plan | Price | Credits/Month | Key Features |
|---|---|---|---|
| Free | $0 | 25 | Unlimited SWE-1 Lite, unlimited tab autocomplete, unlimited inline edits, 1 deploy/day |
| Pro | $15/month | 500 | All premium model access, full Cascade experience, 5 deploys/day |
| Teams | $30/user/month | 500 per user (not pooled) | Centralized billing, admin dashboard with analytics, team management |
| Enterprise | $60+/user/month | 1,000 per user (at 200+ seats) | RBAC, SSO, SCIM, hybrid deployment, HIPAA, FedRAMP/DOD, ITAR compliance |
New users receive a 2-week free trial of the Pro plan with 100 prompt credits. Add-on credits can be purchased at $10 for 250 credits on the Pro plan, or $40 for 1,000 credits on Teams and Enterprise plans.
Credits are consumed either at a flat rate for Windsurf's in-house models (for example, 0.5 credits per prompt for SWE-1 models) or based on token usage for third-party models like Claude 3.5 Sonnet or GPT-4. The consumption rate varies by model, with more capable models consuming more credits per interaction.
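The two billing modes can be sketched as below. The 0.5-credit flat rate for SWE-1 comes from the text above; the per-token rate for the third-party model is a made-up placeholder, not Windsurf's real price.

```python
# Sketch of the single "prompt credit" accounting model.
# The 0.5-credit SWE-1 flat rate is from the text; the per-token
# rate for third-party models is a hypothetical placeholder.

FLAT_RATE_CREDITS = {"swe-1": 0.5, "swe-1-lite": 0.0}   # Lite is unlimited/free
HYPOTHETICAL_PER_1K_TOKENS = {"claude-3.5-sonnet": 1.2}  # placeholder rate

def credits_for_prompt(model: str, total_tokens: int = 0) -> float:
    """Flat rate for in-house models, token-based for third-party."""
    if model in FLAT_RATE_CREDITS:
        return FLAT_RATE_CREDITS[model]
    rate = HYPOTHETICAL_PER_1K_TOKENS[model]
    return rate * total_tokens / 1000

# A 500-credit Pro allowance covers 1,000 flat-rate SWE-1 prompts:
print(500 / credits_for_prompt("swe-1"))  # 1000.0
```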
The free tier has been one of Windsurf's most effective growth drivers. By offering unlimited autocomplete and a capable base model at no cost, the product has attracted a large base of individual developers who might not otherwise try a new IDE. This stands in contrast to GitHub Copilot, which limits both model availability and tab completions on its free plan, and Cursor, whose free Hobby tier is more restricted.
Windsurf operates in the rapidly growing AI code editor and assistant market, competing against several well-funded products.
| Competitor | Type | Developer | Key Differentiator |
|---|---|---|---|
| Cursor | AI-native IDE (VS Code fork) | Anysphere | Strong agent mode; large developer community; $2.5B+ valuation |
| GitHub Copilot | AI assistant (plugin + IDE) | GitHub / Microsoft | Massive distribution via GitHub ecosystem; integrated into VS Code and JetBrains |
| Claude Code | CLI-based AI agent | Anthropic | Terminal-first workflow; deep reasoning via Claude models |
| Augment Code | AI coding assistant | Augment | Enterprise focus; strong codebase understanding |
| Amazon Q Developer | AI assistant | Amazon Web Services | AWS ecosystem integration; free tier for individual developers |
The most direct comparison in the market is between Windsurf and Cursor, as both are AI-native IDEs built on Visual Studio Code forks. The two products have different strengths and design philosophies.
| Feature | Windsurf | Cursor |
|---|---|---|
| Monthly price (Pro) | $15 | $20 |
| Monthly price (Teams) | $30/user | $40/user |
| Enterprise pricing | Starting $60/user (transparent) | Custom pricing |
| Proprietary models | SWE-1.5 (13x faster than Sonnet 4.5) | Relies on third-party models |
| Context handling | Automatic codebase analysis; Fast Context (10x faster retrieval) | Manual context tagging or codebase mention |
| IDE support | 40+ IDEs (JetBrains, Vim, NeoVim, Xcode, VS Code) | VS Code fork only |
| Code visualization | Codemaps (AI-annotated visual code maps) | Basic embedding search |
| Multi-file refactoring | Vibe and Replace (hundreds of files) | Limited; single-file focus |
| Compliance certifications | SOC 2, HIPAA, FedRAMP/DOD, ITAR, ZDR, RBAC, SCIM | SOC 2 only |
| Agent tool access | File editing, web search, terminal commands | Grep search, fuzzy file matching, advanced codebase operations |
| Large codebase support | Optimized for enterprise-scale (100M+ lines) | Better suited for smaller codebases |
| Model leaderboard | Arena Mode (blind model comparison) | No built-in model comparison |
Cursor has generally had a larger developer community and stronger brand recognition among early adopters. It also provides broader tool access for its agent mode, including grep searches and fuzzy file matching. Windsurf has differentiated itself through its more generous free tier, its proprietary SWE-1 model family with significantly faster inference speeds, automatic context detection (versus Cursor's more manual approach), and stronger enterprise compliance certifications. Under Cognition ownership, Windsurf also benefits from integration with Devin, an autonomous coding agent that can handle tasks independently.
GitHub Copilot has the advantage of massive distribution through its integration with GitHub, the world's largest code hosting platform. Copilot functions primarily as a plugin for existing editors (VS Code, JetBrains, Neovim) rather than as a standalone IDE. Windsurf's advantage lies in its deeper agentic capabilities: while Copilot excels at inline suggestions and chat, Cascade can perform more complex multi-step tasks involving coordinated changes across many files.
Claude Code takes a fundamentally different approach as a terminal-first AI coding agent. Rather than operating through a graphical IDE, Claude Code runs in the command line and interacts with codebases through shell commands. This makes it particularly strong for developers who prefer terminal-based workflows and want deep reasoning capabilities powered by Claude models. Windsurf, by contrast, offers a visual IDE experience with features like Codemaps, inline diffs, and a familiar VS Code interface that may be more accessible to developers who prefer graphical editors.
Because Windsurf is a Visual Studio Code fork, developers switching from VS Code can bring their existing extensions, themes, and configurations with them, reducing the friction of adoption.
The AI capabilities are integrated at a level below the editor surface. Cascade maintains a persistent understanding of the project through a combination of embedding-based retrieval and real-time indexing. When a developer opens a project, Windsurf indexes the codebase to build a searchable representation. As files change, the index is updated incrementally.
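An index of this shape can be illustrated with a toy sketch: upsert files as they change, search the current state at query time. Real systems use learned embeddings and vector stores; bag-of-words cosine similarity is used here only to keep the sketch dependency-free.

```python
# Toy incrementally-updated retrieval index (illustration only;
# production systems use learned embeddings, not word counts).
import math
from collections import Counter

class CodebaseIndex:
    def __init__(self):
        self.vectors = {}  # path -> token counts

    def upsert(self, path: str, text: str):
        """Index a new file, or re-index a changed one in place."""
        self.vectors[path] = Counter(text.lower().split())

    def remove(self, path: str):
        self.vectors.pop(path, None)

    def search(self, query: str) -> str:
        """Return the path whose contents best match the query."""
        q = Counter(query.lower().split())
        def cosine(v):
            dot = sum(q[t] * v[t] for t in q)
            norm = (math.sqrt(sum(c * c for c in q.values()))
                    * math.sqrt(sum(c * c for c in v.values())))
            return dot / norm if norm else 0.0
        return max(self.vectors, key=lambda p: cosine(self.vectors[p]))

index = CodebaseIndex()
index.upsert("auth.py", "def login token jwt session")
index.upsert("billing.py", "def charge invoice credit card")
print(index.search("where is the jwt login logic"))  # auth.py
```

The key architectural point survives the simplification: because `upsert` replaces only the changed file's vector, keeping the index current is proportional to the size of the edit, not the size of the codebase.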
The system uses a combination of local computation and cloud-based inference. Autocomplete suggestions are generated with low latency (the system is optimized for sub-200-millisecond response times for basic completions), while more complex Cascade operations that require deeper reasoning are processed through cloud-based models.
Windsurf supports Git worktrees natively, allowing developers to spawn multiple Cascade sessions in the same repository without conflicts. Worktrees check out different branches into separate directories while sharing the same Git history, enabling parallel development workflows where multiple AI agents can work on different features simultaneously.
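The underlying mechanism is plain Git and can be exercised directly; the commands below are standard `git worktree` operations, independent of Windsurf.

```shell
# Standard Git worktree workflow (plain git, not Windsurf-specific):
# check out a second branch of one repository into a separate
# directory so two sessions can work in parallel without conflicts.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo"
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C "$repo" branch feature-x

# Create a worktree: feature-x gets its own working directory while
# sharing the same history and object store as the main checkout.
git -C "$repo" worktree add -q "$repo-feature-x" feature-x
git -C "$repo" worktree list

# Remove it once the parallel work is merged or abandoned.
git -C "$repo" worktree remove "$repo-feature-x"
```

Because both directories share one object store, commits made in the worktree are immediately visible to the main checkout, which is what lets multiple Cascade sessions work on different branches of the same repository.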
Developers can view and interact with multiple concurrent Cascade sessions in separate panes and tabs within the same window. This enables side-by-side monitoring and comparison of different agent approaches, and is particularly useful when using Arena Mode to compare model outputs or when working on multiple related tasks in parallel.
Windsurf includes a visual indicator that displays current context window usage, helping developers anticipate token limits and decide when to start a new Cascade session. This transparency helps users manage their interactions more effectively, especially during long multi-step tasks.
Cascade uses a dedicated zsh shell specifically configured for reliability in agent command execution. This dedicated terminal replaces the default shell for commands run by the AI agent, supports environment variables from .zshrc, and enables interactive prompts. The dedicated terminal improves reliability for users with complex shell configurations that might otherwise interfere with agent-executed commands.
Windsurf follows a "Wave" release cadence for major feature updates. Each Wave brings a collection of new features, model improvements, and interface enhancements.
| Wave | Date | Key Features |
|---|---|---|
| Wave 13 | December 24, 2025 | SWE-1.5 Free, Git worktrees, side-by-side Cascade panes, dedicated terminal, context window indicator, Cascade Hooks |
| Wave 14 | February 2026 | Arena Mode (blind model comparison), Plan Mode, Megaplan for complex task planning |
Released on December 24, 2025, Wave 13 focused on enabling parallel, multi-agent sessions. The release introduced first-class Git worktree support, side-by-side Cascade panes, the SWE-1.5 Free model available to all users for three months, and a dedicated terminal profile for more reliable agent command execution. Wave 13 also added system-level Rules and Workflows configurable via MDM policies for enterprise deployments, and introduced Cascade Hooks for executing custom commands during agent execution.
Wave 14, released in February 2026, introduced Arena Mode as its headline feature. Arena Mode runs two Cascade agents in parallel on the same prompt with the underlying model identities hidden. Developers interact with both agents using their normal workflow, including full access to their codebase, tools, and context. After reviewing the outputs, users vote on which response performed better. Those votes feed into both personal and global leaderboards published on the Windsurf website.
Arena Mode includes "Battle Groups" that let users choose specific models to compare or let Windsurf randomly select from curated groups such as "fast models" versus "smart models." Users can also send follow-up prompts to both agents simultaneously or branch and explore different paths individually.
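Windsurf has not published the math behind its leaderboards; an Elo-style update is the standard way to turn pairwise blind votes into a ranking, and is sketched below purely as an illustration.

```python
# Elo-style leaderboard from pairwise votes (illustration only;
# Windsurf's actual Arena ranking formula is unpublished).

K = 32  # update step size

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings: dict, winner: str, loser: str):
    """Shift rating mass from the loser to the winner."""
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - e_w)
    ratings[loser] -= K * (1 - e_w)

ratings = {"model-a": 1000.0, "model-b": 1000.0}
for _ in range(10):  # model-a wins ten blind comparisons
    record_vote(ratings, "model-a", "model-b")

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard[0])  # model-a
```

A scheme like this rewards upsets more than expected wins, so a model that beats stronger opponents climbs faster than one that only beats weak ones.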
Wave 14 also introduced Plan Mode, which prompts developers with clarifying questions and produces structured execution plans before Cascade begins writing code. The companion "Megaplan" feature extends this to complex, multi-phase tasks that benefit from upfront architectural planning.
As of early 2026, Windsurf continues to operate under Cognition AI's ownership. The product maintains its own brand identity and development roadmap, while benefiting from Cognition's resources and its Devin autonomous agent technology.
The broader market for AI coding tools continues to expand rapidly. Analysts expect AI-assisted development to become the default mode of software creation within the next few years. Windsurf, backed by Cognition's $10.2 billion valuation and growing enterprise traction, is positioned as one of the leading products in this space.