Continue is an open-source AI code assistant that integrates directly into code editors, allowing developers to connect any large language model (LLM) and customize AI-powered coding features including autocomplete, chat, code editing, and autonomous agent workflows. Developed by Continue Dev, Inc. and released under the Apache 2.0 license, Continue supports both Visual Studio Code and JetBrains IDEs. The project has accumulated over 32,000 stars on GitHub and more than 2.4 million installs on the Visual Studio Marketplace as of March 2026 [1][2].
Unlike proprietary competitors such as GitHub Copilot and Cursor, Continue gives developers full control over which models they use and where those models run. Users can connect to cloud-hosted APIs from providers like OpenAI, Anthropic, and Google, or run models locally using tools like Ollama and llama.cpp, keeping code entirely on their own machines [3].
Continue was co-founded in June 2023 by Ty Dunn (CEO) and Nate Sesti (CTO) in San Francisco, California [4].
Ty Dunn studied Cognitive Science with a focus on Computer Science at the University of Michigan, graduating in 2019. After college, he joined Rasa, the open-source machine learning framework for building conversational AI, where he rose from the company's first product manager to Group Product Manager. At Rasa, Dunn oversaw a product with millions of downloads, 17,000 GitHub stars, and adoption by roughly 10% of the Fortune 500. In 2021, while at Rasa, Dunn began experimenting with GPT-3 for user simulation and OpenAI Codex for code generation, which planted the seed for Continue [5].
Nate Sesti studied mathematics and physics at MIT, where he also published research on graph neural networks. Before co-founding Continue, Sesti built mission control software at NASA's Jet Propulsion Laboratory (JPL) and served as the first engineer at Mayan, a Y Combinator Winter 2021 startup. His experience at NASA, where strict security policies prevented the use of LLMs on proprietary code, directly informed Continue's emphasis on local model support and data privacy [5].
Continue was accepted into Y Combinator's Summer 2023 batch (S23). After graduating from YC in late 2023, the company raised $2.1 million in its first funding round, led by Jesse Robbins at Heavybit. Angel investors in this round included Julien Chaumond (co-founder of Hugging Face), Lisha Li (founder of Rosebud AI), and Florian Leibert (co-founder of Mesosphere) [4][6].
On February 26, 2025, Continue announced the release of version 1.0 alongside $3 million in new funding raised through SAFEs (Simple Agreements for Future Equity), in a round also led by Heavybit. This brought the company's total funding to approximately $5.1 million [6][7].
The v1.0 release introduced the Continue Hub (later renamed Mission Control), a platform for creating, sharing, and distributing custom AI coding assistants. The Hub functions as a registry where developers and organizations can publish and discover reusable building blocks: model configurations, rules, MCP servers, and complete assistant definitions. Continue compared the Hub to platforms like Docker Hub and the Hugging Face model registry [7].
At the time of the v1.0 launch, Continue reported over 20,000 GitHub stars, more than 10,000 Discord community members, and hundreds of thousands of active users. Early enterprise customers included Siemens, Morningstar, and IONOS [7].
By March 2026, the GitHub repository had grown to over 32,000 stars with more than 21,000 commits, 4,300 forks, and 809 releases. The VS Code extension had surpassed 2.4 million installs, with version 1.3.34 as the latest release [1][2]. The team remained small, with approximately five employees as of 2026 [4].
The product's focus expanded beyond IDE extensions to include the Continue CLI (cn), a command-line tool for running AI-powered code checks on pull requests as part of CI/CD pipelines. This positioned Continue not only as a developer tool but also as a quality control system for software teams [8].
Continue is built as a monorepo with the following language composition [2]:
| Language | Percentage |
|---|---|
| TypeScript | 84.4% |
| JavaScript | 7.4% |
| Kotlin | 3.8% |
| Python | 2.2% |
| Rust | 0.7% |
| Other | 1.5% |
The TypeScript core handles the shared logic for model interaction, context gathering, and tool orchestration. Kotlin is used for the JetBrains plugin, while Python and Rust handle specialized backend tasks. The extension communicates with LLMs through a provider abstraction layer, allowing the same interface to work with dozens of different model APIs and local inference engines [2].
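The idea of a provider abstraction layer can be sketched as a single interface that every backend implements. The names below (`LLMProvider`, `complete`, `EchoProvider`) are illustrative assumptions, not Continue's actual internal API:

```typescript
// Sketch of a provider abstraction layer, under assumed names.
// Each backend (OpenAI, Anthropic, Ollama, ...) would implement the
// same interface, so the rest of the system stays provider-agnostic.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LLMProvider {
  name: string;
  complete(messages: ChatMessage[]): Promise<string>;
}

// A fake local provider standing in for, e.g., an Ollama HTTP client.
class EchoProvider implements LLMProvider {
  name = "echo";
  async complete(messages: ChatMessage[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `echo: ${last.content}`;
  }
}

// Caller code only depends on the interface, never on a concrete backend.
async function ask(provider: LLMProvider, prompt: string): Promise<string> {
  return provider.complete([{ role: "user", content: prompt }]);
}
```

Swapping cloud and local models then amounts to constructing a different `LLMProvider` implementation while the calling code stays unchanged.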
Continue offers four primary interaction modes, each designed for different development tasks [9].
Chat mode provides a conversational interface within the IDE sidebar. Developers can ask questions about their codebase, request explanations of code, get debugging help, and discuss implementation approaches. In Chat mode, the AI operates without any tools; it functions as a knowledgeable conversation partner that can reference code context but cannot modify files [9].
Users can attach context to their chat messages using the @ symbol to reference specific files, functions, documentation sites, URLs, terminal output, and more [10].
Agent mode gives the AI model access to a full set of tools for autonomous coding tasks. When a developer describes a task in natural language, the agent can read and write files, run terminal commands, search the codebase, and make multi-file edits to complete the request. By default, the agent asks for permission before executing each tool, though users can configure automatic approval for specific tools [9].
Agent mode can be activated by pressing Cmd/Ctrl + . to cycle between Chat, Plan, and Agent modes. Plan mode serves as an intermediate option that gives the AI read-only tools for exploring the codebase and formulating a strategy without making changes [9].
Edit mode allows targeted, inline code modifications. A developer highlights a block of code, presses Cmd/Ctrl + I, and describes the desired change in natural language. The AI generates a diff that appears inline within the file, which the developer can accept or reject on a change-by-change basis using keyboard shortcuts [11].
Common use cases for Edit mode include refactoring functions, adding documentation comments, fixing bugs in specific code blocks, converting code between programming languages, and updating function signatures [11].
Autocomplete provides inline code suggestions as the developer types, similar to the tab-completion experience offered by GitHub Copilot. Continue's autocomplete system uses a fill-in-the-middle (FIM) approach: the model receives the code before and after the cursor position and predicts what belongs in between [12].
The autocomplete pipeline consists of three phases: context gathering (collecting relevant code from open files and language server data), LLM inference (generating the completion), and rendering (displaying the suggestion inline). Developers can accept a full suggestion with Tab or accept word-by-word with Cmd/Ctrl + Right Arrow [12].
Autocomplete works best with models specifically trained for FIM tasks. Recommended models include Codestral (from Mistral AI), StarCoder, and Qwen Coder. Even relatively small models (3 billion parameters) can deliver strong autocomplete performance, while larger chat-oriented models often perform poorly in this role despite their general capabilities [12].
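The FIM approach described above can be sketched as simple prompt assembly: the code on either side of the cursor is wrapped in sentinel tokens and the model generates the middle. The tokens below follow StarCoder's published convention; other FIM models define their own special tokens, so treat this as an illustrative sketch rather than Continue's implementation:

```typescript
// Sketch of fill-in-the-middle (FIM) prompt assembly.
interface FimRequest {
  prefix: string; // code before the cursor
  suffix: string; // code after the cursor
}

// The model sees both sides of the cursor and predicts what belongs
// between <fim_suffix> and the end of the prompt.
function buildFimPrompt(req: FimRequest): string {
  return `<fim_prefix>${req.prefix}<fim_suffix>${req.suffix}<fim_middle>`;
}

// Example: cursor inside a function body.
const prompt = buildFimPrompt({
  prefix: "function add(a, b) {\n  ",
  suffix: "\n}",
});
```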
Continue is model-agnostic. Rather than being tied to a single AI provider, it supports a wide range of models and services through a provider abstraction layer [3].
Supported cloud providers include:

| Provider | Example models | Notes |
|---|---|---|
| Anthropic | Claude 4.6, Claude Sonnet 4.5 | Strong reasoning and long context windows |
| OpenAI | GPT-4o, GPT-4 Turbo | Broad coding capabilities |
| Google Gemini | Gemini 2.5 Pro, Gemini 2.5 Flash | Multimodal support |
| Mistral AI | Codestral, Mistral Large | Specialized coding models |
| DeepSeek | DeepSeek-V3, DeepSeek-R1 | Cost-effective reasoning models |
| xAI | Grok | General purpose |
| Amazon Bedrock | Various | AWS-managed access |
| Azure AI Foundry | OpenAI models via Azure | Enterprise Azure integration |
| OpenRouter | Aggregated models | Single API for multiple providers |
For local and self-hosted inference, Continue integrates with the following tools:

| Tool | Description |
|---|---|
| Ollama | Run open-source models locally with a simple CLI |
| llama.cpp | High-performance C++ inference for GGUF models |
| LM Studio | Desktop app for running local models with a GUI |
| llamafile | Single-file executable for portable local inference |
| vLLM | High-throughput serving for self-hosted deployments |
Local model support is a defining characteristic of Continue. Organizations with strict data governance requirements can run the entire system air-gapped, ensuring that source code never leaves their infrastructure [13].
Continue assigns models to specific roles, and users can configure different models for each role [3]:
| Role | Purpose | Recommended models |
|---|---|---|
| Chat | Conversational interaction and code discussion | Claude Sonnet 4.5, GPT-4o, Qwen3 Coder |
| Edit | Code transformations and refactoring | Claude Opus 4.1, Qwen3 Coder 30B |
| Apply | Applying targeted modifications to files | FastApply, Relace Instant Apply |
| Autocomplete | Inline code suggestions while typing | Codestral, StarCoder, Qwen Coder 2.5 |
| Embed | Vector representations for codebase search | Nomic Embed Text, Voyage Code 3 |
| Rerank | Improving search result relevance | Voyage Rerank 2.5 |
Continue is configured through a config.yaml file stored in the user's ~/.continue/ directory. The configuration uses a declarative YAML format with three required top-level properties (name, version, schema) and several optional sections [14].
Models: Defines which LLMs to use for each role, including provider, model name, API keys, and inference parameters such as temperature, max tokens, and stop sequences [14].
Rules: Provides system-level instructions that guide the AI's behavior during chat, agent, and edit interactions. Rules can be defined as inline strings in the YAML file or as separate Markdown files in a .continue/rules/ directory. Team-wide rules can be shared through the Continue Hub [14][15].
Context: Configures context providers that supply extra information to the AI. Built-in providers include @codebase (semantic search over the repository), @file (specific files), @docs (indexed documentation sites), @url (web page content), @terminal (terminal output), @git diff (uncommitted changes), @open (all open editor tabs), and @problems (current diagnostics) [10].
MCP Servers: Configures Model Context Protocol servers that give the AI access to external tools, databases, and services. MCP servers are defined with a name, command, and optional arguments. Continue supports stdio, SSE, and streamable-http transport methods [16].
Prompts: Custom prompt templates that can be invoked by name, replacing the older slash command system [14].
Docs: Documentation sites to index for the @docs context provider, allowing the AI to reference official documentation for frameworks and libraries [14].
Continue previously used a config.json format, but migrated to YAML starting with v1.0 to improve readability and better support multi-line content like rules [14].
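Putting the sections above together, a minimal config.yaml might look like the following sketch. The model names, API key variable, and rule text are illustrative assumptions, not a verified working configuration:

```yaml
# ~/.continue/config.yaml -- illustrative sketch, not a verified config.
name: my-assistant
version: 0.0.1
schema: v1

models:
  - name: Claude Sonnet
    provider: anthropic
    model: claude-sonnet-4-5        # model slug is an assumption
    apiKey: ${ANTHROPIC_API_KEY}
    roles: [chat, edit]
  - name: Codestral
    provider: mistral
    model: codestral-latest
    roles: [autocomplete]

rules:
  - Prefer explicit types over `any` in TypeScript.

context:
  - provider: codebase
  - provider: docs
```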
The Continue Hub (rebranded as Mission Control in late 2025) serves as the central platform for managing and distributing Continue configurations [7][15].
The Hub organizes configurations into reusable units called "blocks." A block can be a model configuration, a set of rules, an MCP server definition, or a complete assistant profile. Blocks can be published publicly or kept private within an organization. Users reference hub blocks in their local config.yaml using a namespace format like anthropic/claude-3.5-sonnet or ollama/llama3.1-8b [7].
Verified partner blocks are available from companies including Anthropic, Mistral, Ollama, Voyage AI, and Docker [7].
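Based on the namespace format described above, a local config.yaml that pulls in hub blocks might look like this sketch (the `uses` key is an assumption about the exact syntax):

```yaml
# Sketch: referencing Continue Hub blocks from a local config.yaml.
name: team-assistant
version: 0.0.1
schema: v1

models:
  - uses: anthropic/claude-3.5-sonnet   # public partner block
  - uses: ollama/llama3.1-8b            # local model block
```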
An assistant in Continue is a complete configuration that combines models, rules, MCP servers, and context providers into a ready-to-use AI coding experience. Developers can create assistants tailored to specific tech stacks, coding standards, or organizational requirements and share them through the Hub [7].
Mission Control offers three tiers [7]:
| Tier | Target | Key features |
|---|---|---|
| Free (Solo) | Individual developers | Hub access, extension usage, basic assistants |
| Teams | Small to mid-size teams | Governance features, shared configurations |
| Enterprise | Large organizations | Private data plane deployment, security controls, audit logs |
In 2025, Continue expanded beyond IDE extensions with the release of the Continue CLI, a command-line tool identified by the command cn. The CLI enables AI-powered code review and quality checks that can run in CI/CD environments such as GitHub Actions, Jenkins, and GitLab CI [8].
The CLI can be installed via npm and requires Node.js 20 or later [2]:

```shell
npm i -g @continuedev/cli
```
Each AI check is defined as a Markdown file stored in .continue/checks/ within a repository. These checks run as agents on every pull request and report results as GitHub status checks. Teams use them to enforce coding conventions, security patterns, architecture boundaries, and documentation standards [8].
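A minimal sketch of wiring the CLI into GitHub Actions follows; the workflow structure is standard GitHub Actions syntax, but the secret name and the exact `cn` invocation are assumptions rather than Continue's documented setup:

```yaml
# .github/workflows/ai-check.yml -- illustrative sketch, not an
# official Continue workflow.
name: AI code check
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0       # full history so the base branch can be diffed
      - uses: actions/setup-node@v4
        with:
          node-version: 20     # the CLI requires Node.js 20+
      - run: npm i -g @continuedev/cli
      - run: git diff origin/${{ github.base_ref }}...HEAD > changes.diff
      - run: cn -p "Review this code for security issues" < changes.diff
        env:
          CONTINUE_API_KEY: ${{ secrets.CONTINUE_API_KEY }}  # secret name assumed
```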
Example CLI usage:

```shell
cn --config continuedev/review-bot
cn -p "Review this code for security issues" < changes.diff
cn --rule nate/spanish
```
The CLI can also be used interactively from the terminal for ad-hoc code analysis and generation tasks [8].
The VS Code extension is Continue's primary distribution channel, with over 2.4 million installs on the Visual Studio Marketplace. It supports Windows (x64 and ARM), macOS (Intel and Apple Silicon), Linux (x64 and ARM variants), and Alpine Linux. The extension requires VS Code version 1.70.0 or later [1].
The extension adds a sidebar panel for Chat and Agent modes, provides inline autocomplete suggestions, supports the Edit workflow through a keyboard shortcut overlay, and displays context providers through the @ mention interface [1].
Continue is available as a plugin for JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, and others) through the JetBrains Marketplace. The JetBrains plugin is community-maintained as of 2025, meaning it may lag behind the VS Code extension in feature parity. The Edit feature in JetBrains is implemented as an inline popup rather than the sidebar approach used in VS Code [9][11].
Continue stores development data locally in the .continue/dev_data/ directory on the user's machine by default. No telemetry or code is sent to Continue's servers unless the user explicitly configures a cloud model provider [13].
When paired with local models through Ollama or llama.cpp, Continue operates entirely offline. This makes it suitable for organizations with strict compliance requirements, air-gapped environments, or developers working on proprietary codebases who cannot send code to third-party APIs [13].
For teams using Mission Control, Continue offers a private data plane option in the Enterprise tier, where the infrastructure runs within the customer's own cloud environment [7].
Continue supports the Model Context Protocol (MCP), an open standard for connecting AI models to external tools and data sources. MCP servers can be configured in config.yaml or placed as YAML files in a .continue/mcpServers/ directory at the workspace root [16].
MCP tools are available exclusively in Agent mode. They allow the AI to interact with external systems such as databases, issue trackers (Jira, Linear), code hosting platforms (GitHub, GitLab), error monitoring services (Sentry), security scanners (Snyk), and browser automation tools (Playwright) [16].
Supported transport methods include stdio (for local processes), SSE (Server-Sent Events), and streamable-http (for remote servers). Recent protocol updates have added support for parallel tool calls and explicit capability declarations [16].
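As an illustration of the stdio transport, a server definition file might look like the following sketch. The file name and server package are assumptions; `@modelcontextprotocol/server-postgres` is one publicly available reference server, not something Continue ships:

```yaml
# .continue/mcpServers/postgres.yaml -- illustrative sketch.
name: Postgres
command: npx
args:
  - "-y"
  - "@modelcontextprotocol/server-postgres"
  - "postgresql://localhost:5432/mydb"
```

At startup, Continue would launch this command as a local process and exchange MCP messages with it over stdin/stdout.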
Continue occupies a distinct position in the AI coding assistant market by combining open-source licensing with model flexibility.
| Feature | Continue | GitHub Copilot | Cursor |
|---|---|---|---|
| License | Apache 2.0 (open source) | Proprietary | Proprietary |
| Pricing | Free (core); paid tiers for teams | Free/Pro ($10/mo)/Business ($19/mo) | Hobby (free)/Pro ($20/mo)/Business ($40/mo) |
| Model flexibility | Any model (cloud or local) | Primarily OpenAI models | Multiple providers (Claude, GPT, etc.) |
| IDE approach | Extension for existing IDEs | Extension for existing IDEs | Fork of VS Code (standalone editor) |
| Local model support | Full support via Ollama, llama.cpp | Limited | Limited |
| Open source | Yes (Apache 2.0) | No | No |
| Autonomous agent mode | Yes | Yes (Agent Mode) | Yes (Composer/Agent) |
| Multi-file editing | Yes (via Agent mode) | Yes (Copilot Edits) | Yes (Composer) |
| CI/CD integration | Yes (Continue CLI) | GitHub-native integration | No |
Continue's main advantage is its flexibility and transparency. Because it is fully open source, developers can inspect, modify, and self-host the entire system. Its model-agnostic design means teams are never locked into a single AI provider and can switch models as better options become available [3][7].
Its main disadvantage relative to Cursor is the lack of deep codebase indexing that Cursor provides through its custom-built editor. Compared to GitHub Copilot, Continue lacks the seamless integration with the broader GitHub ecosystem (pull request reviews, issue tracking, and repository-level context) that Copilot offers natively [17].
Continue has raised a total of approximately $5.1 million across multiple rounds [4][6]:
| Round | Date | Amount | Lead investor |
|---|---|---|---|
| Pre-seed / YC S23 | Late 2023 | $2.1 million | Heavybit (Jesse Robbins) |
| Seed (SAFEs) | February 2025 | $3 million | Heavybit |
Notable angel investors include Julien Chaumond (co-founder of Hugging Face), Lisha Li (founder of Rosebud AI), and Florian Leibert (co-founder of Mesosphere). Y Combinator also participated in the funding [6].
The company monetizes through its Mission Control platform, offering paid tiers for teams and enterprises that need governance, security, and administration features. The core IDE extensions and CLI remain free and open source under the Apache 2.0 license [7].
Continue maintains an active open-source community. As of March 2026, the project has over 32,000 GitHub stars, more than 4,300 forks, and hundreds of contributors [2]. The community communicates through a Discord server with over 11,000 members, a GitHub Discussions forum, and the Continue blog [4].
The project accepts external contributions and maintains a Code of Conduct, contributing guidelines, and a security policy. The rapid release cadence (809 releases as of March 2026) reflects the active development pace of the project [2].