opencode is an open-source AI coding agent built for the terminal. Developed by Anomaly Innovations, formerly known as the SST (Serverless Stack) team, it provides a terminal user interface (TUI) through which developers can interact with large language models to read, write, and edit code, run shell commands, and navigate complex codebases. Released in June 2025, opencode reached 150,000 GitHub stars and 6.5 million monthly active developers by mid-2026, making it one of the fastest-growing open-source developer tools in recent memory. The project is licensed under the MIT License and is hosted at github.com/sst/opencode.
The team behind opencode traces its roots to SST (Serverless Stack), an open-source framework for building full-stack applications on AWS serverless infrastructure. Jay V (CEO) and Frank Wang (CTO) met during their first week at the University of Waterloo in Canada and went on to co-found their first company together, Anomaly Innovations, maintaining the same division of executive roles across their ventures. They put SST through Y Combinator in 2021 and secured $1 million after demo day from prominent Silicon Valley investors, including founders of PayPal, LinkedIn, Yelp, and YouTube. SST grew to 25,000 GitHub stars and turned profitable by 2025.
Dax Raad, a software engineer known for his work in serverless infrastructure, became an early SST user in 2021 and eventually joined as a co-founder. Adam Elmore, another engineer on the team, rounded out the small founding group. Together, this team later pivoted its attention toward AI-native developer tooling, which led to the creation of opencode.
The company operates under the name Anomaly (anomalyco on GitHub), which consolidates its portfolio of open-source projects including SST, OpenNext (Next.js on AWS), OpenAuth, OpenTUI, and opencode. The founder Jay V has stated the team's long-standing conviction that developers care deeply about tooling they can trust, modify, and truly own, a philosophy that shaped opencode's design from the beginning.
opencode launched publicly on June 19, 2025. Unlike most AI coding tools that debuted as proprietary products, opencode required no account, no email address, and no credit card. Users could download and run it immediately using their own API keys from any supported model provider.
The project gained substantial momentum in late 2025 and early 2026. A two-week surge in January 2026 brought 18,000 new GitHub stars, driven partly by community frustration over access restrictions from proprietary coding agents. By April 2026, the repository held 147,000 stars and served 6.5 million monthly developers, a growth rate approximately 4.5 times faster than Claude Code over the same period. Enterprise adoption followed, with companies including Cloudflare among early customers. Within five months of launch the project was generating several million dollars in annualized revenue through its hosted model service, OpenCode Zen.
opencode uses a client-server architecture in which the core agent logic runs as a local server process while user interfaces connect to it as clients. The primary client is the TUI, but the same server can be driven from a desktop application (in beta), IDE extensions, GitHub Actions runners, or GitLab CI pipelines. This architecture means the terminal interface is one of several possible frontends rather than an inseparable part of the system, enabling headless automation and future remote-control scenarios.
The repository is primarily TypeScript, with the CLI entrypoint bootstrapping the server and connecting to the TUI. The interface layer was originally built on the Bubble Tea framework for Go, following the Elm architecture (model-update-view cycle). As the project matured the team developed OpenTUI, an in-house TypeScript-based TUI framework with Zig bindings for efficient rendering, which replaced the earlier Bubble Tea implementation in version 1.0. OpenTUI supports React, SolidJS, and Vue component patterns and is used by the opencode interface.
The TUI provides a rich interactive environment in the terminal. The main layout shows a conversation pane alongside a file-tree or diff panel, allowing developers to read AI responses and see code changes side by side. Key interaction patterns include:
- The @ symbol to reference specific files in the conversation context
- Slash commands such as /init, /theme, /undo, /redo, /share, and /connect
- Configurable keybindings in tui.json, with Ctrl+X as the default leader key for multi-key shortcuts

Theme selection is handled via the /theme slash command or the tui.json configuration file, which supports a flexible JSON-based theme system. Users can choose built-in themes or define a custom theme by providing color definitions in a customTheme map.
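A custom theme override in tui.json might look like the following sketch; the specific color keys shown here are illustrative assumptions, not a complete schema:

```json
{
  "theme": "custom",
  "customTheme": {
    "primary": "#7aa2f7",
    "secondary": "#bb9af7",
    "background": "#1a1b26",
    "text": "#c0caf5"
  }
}
```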
The project maintains separate configuration files for the server layer (opencode.json) and the TUI layer (tui.json), stored by default under ~/.config/opencode/.
opencode exposes a set of built-in tools that the LLM can invoke during a session. These tools form the agent's hands: they allow the model to read and write files, execute shell commands, search the codebase, and retrieve information from the web. Each tool can be configured with a permission mode of allow (runs without confirmation), ask (requires user approval), or deny (blocked entirely). Permissions are set in opencode.json under the permission key and support wildcard patterns.
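As a sketch of how these permission modes compose, an opencode.json permission block might look like the following; the specific command patterns are illustrative:

```json
{
  "permission": {
    "edit": "ask",
    "webfetch": "allow",
    "bash": {
      "git status*": "allow",
      "rm *": "deny",
      "*": "ask"
    }
  }
}
```

Here read-only git commands run unattended, destructive deletions are blocked outright, and everything else prompts the user for approval.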
The core built-in tools are:
| Tool | Function |
|---|---|
| bash | Executes shell commands in the project environment |
| edit | Modifies existing files via exact string replacement |
| write | Creates new files or overwrites existing ones |
| read | Reads file contents with optional line range support |
| grep | Searches file contents with regular expressions (uses ripgrep, respects .gitignore) |
| glob | Finds files by pattern matching (e.g., **/*.ts) |
| lsp | Code intelligence including definitions, references, and hover info (experimental) |
| apply_patch | Applies patch files to the codebase |
| skill | Loads a SKILL.md file for agent specialization |
| todowrite | Manages todo lists during long coding sessions |
| webfetch | Retrieves and reads web pages |
| websearch | Web search via Exa AI (requires OPENCODE_ENABLE_EXA=1) |
| question | Asks the user a clarifying question during execution |
opencode integrates with Language Server Protocol (LSP) servers to enhance the LLM's understanding of code. When LSP is enabled and the agent opens a file, opencode checks the file extension against configured LSP servers and starts the appropriate server if it is not already running. The agent then receives language-server diagnostics (errors, warnings, type information) which are fed back into the model's context.
LSP is configured in the lsp field of opencode.json. Setting "lsp": true enables all 25+ built-in servers with automatic installation for popular language servers. Individual servers can be configured with custom commands, environment variables, or disabled selectively. Supported languages out of the box include Python, Rust, JavaScript, TypeScript, Go, PHP, and many others.
Example configuration enabling Go and TypeScript LSP:
```json
{
  "lsp": {
    "go": {
      "command": ["gopls"]
    },
    "typescript": {
      "command": ["typescript-language-server", "--stdio"]
    }
  }
}
```
A defining design choice for opencode is provider agnosticism. The tool does not lock users into any single AI provider. Support for more than 75 LLM providers is achieved through integration with the Vercel AI SDK and models.dev, a public database of AI models maintained by the Anomaly team.
models.dev functions as an open catalog of AI models from dozens of providers. opencode's provider system fetches model definitions from the models.dev API (https://models.dev/api.json) and caches them locally. This means new models from supported providers often become available in opencode without requiring a software update. The ModelsDev namespace in the codebase handles fetching, caching, and exposing these definitions to the rest of the system.
The most popular providers are preloaded in opencode, and additional providers can be added at runtime using the /connect slash command. Major supported providers include Anthropic and OpenAI, while models from Chinese providers such as Qwen, Kimi, GLM, and MiniMax are also available through the OpenCode Zen service.
The active model is set in opencode.json using the format provider_id/model_id:
```json
{
  "model": "anthropic/claude-sonnet-4-5"
}
```
A small_model key selects a lighter model for auxiliary tasks such as title generation. Provider-specific options such as reasoning effort (for OpenAI models), thinking budgets (for Anthropic extended thinking), and Bedrock region configuration are set under the provider key. opencode also supports custom variants, allowing users to override model defaults without duplicating entire model definitions.
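Combining these keys, a configuration might look like the following sketch; the small_model value and the option names under provider are illustrative assumptions rather than a definitive schema:

```json
{
  "model": "anthropic/claude-sonnet-4-5",
  "small_model": "anthropic/claude-haiku-4-5",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "high"
      }
    }
  }
}
```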
Model selection priority at startup: command-line --model flag > config file model value > last used model > first model by internal priority.
For environments with strict data governance requirements opencode supports fully local execution. Using Ollama or LM Studio to host an open-weight model, users can run opencode without any network calls to cloud services. Ollama exposes an OpenAI-compatible API that opencode consumes as a standard provider. This capability is marketed as Air-gapped Mode and targets developers in regulated industries such as defense, healthcare, and financial services.
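Because Ollama exposes an OpenAI-compatible endpoint, it can be declared as a custom provider. A minimal sketch, assuming Ollama's default port and a locally pulled model (the field names follow the pattern of other provider entries and should be checked against current documentation):

```json
{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama3.3": {}
      }
    }
  },
  "model": "ollama/llama3.3"
}
```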
opencode ships with four standard agents. The Plan agent, for example, has its tool permissions set to ask by default and is intended for read-only analysis and planning. Three additional system agents run automatically in the background: Compaction (manages context window pruning), Title (generates conversation titles), and Summary (creates session summaries).
opencode supports user-defined agents written as Markdown files with YAML frontmatter. Project-specific agents are placed in .opencode/agents/ inside the repository; global agents are stored in ~/.config/opencode/agents/. The filename without extension becomes the agent identifier.
A custom agent file consists of a frontmatter block followed by the system prompt:
```markdown
---
description: A specialized agent for reviewing API contracts
mode: primary
model: anthropic/claude-opus-4-5
temperature: 0.3
permission:
  bash: deny
  edit: ask
---

You are a specialist in API design. Review each endpoint for...
```
Frontmatter fields include:
- description (required): Explains what the agent does; shown in autocomplete menus
- mode: primary, subagent, or all
- model: Optional model override for this agent
- temperature: Controls response randomness (0.0 to 1.0)
- permission: Granular access control for individual tools
- steps: Maximum agentic iterations before falling back to text-only responses
- hidden: When set to true, hides subagents from autocomplete menus

Agents can also be defined in opencode.json using a structured JSON format with the same properties. Custom agents integrate fully with the tool system, meaning they can use any combination of built-in tools subject to the permission constraints defined in their configuration.
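For illustration, an agent equivalent to the Markdown example above could be sketched in opencode.json as follows; the nesting under an agent key mirrors the documented frontmatter properties but is an assumption about the exact JSON shape:

```json
{
  "agent": {
    "api-reviewer": {
      "description": "A specialized agent for reviewing API contracts",
      "mode": "primary",
      "model": "anthropic/claude-opus-4-5",
      "temperature": 0.3,
      "permission": {
        "bash": "deny",
        "edit": "ask"
      }
    }
  }
}
```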
opencode uses a file named AGENTS.md as its primary mechanism for injecting persistent instructions into the model context. Placing an AGENTS.md in the project root creates project-specific rules that apply whenever opencode runs in that directory or its subdirectories. A global ~/.config/opencode/AGENTS.md applies across all sessions on the machine and is appropriate for personal workflow preferences.
The /init slash command analyzes the repository and creates or updates AGENTS.md with project-specific guidance. It captures build, lint, and test commands; repository architecture and conventions; and any setup details the codebase itself does not make obvious. The command may ask targeted questions when automated analysis is insufficient.
Best practice is to commit the project AGENTS.md to version control so that all team members and CI runners benefit from consistent instructions.
opencode also supports CLAUDE.md as a fallback for compatibility with Claude Code conventions. If no AGENTS.md exists, opencode will read CLAUDE.md at the project root and ~/.claude/CLAUDE.md globally. This compatibility can be disabled with the OPENCODE_DISABLE_CLAUDE_CODE=1 environment variable.
The instructions field in opencode.json extends rules loading to external files and glob patterns:
```json
{
  "instructions": ["docs/guidelines.md", "packages/*/AGENTS.md"]
}
```
Remote URLs are also supported with a five-second timeout.
opencode.json uses JSON or JSONC (JSON with Comments) format with schema validation. Configurations are layered and merged from multiple sources in the following order of precedence:
1. .well-known/opencode
2. ~/.config/opencode/opencode.json
3. The OPENCODE_CONFIG environment variable
4. Project-level opencode.json and .opencode directories
5. The OPENCODE_CONFIG_CONTENT environment variable

Variable substitution within configuration values is supported via {env:VARIABLE_NAME} for environment variables and {file:path/to/file} for file content injection. The configuration schema is published at https://opencode.ai/config.json and supports IDE autocompletion.
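For example, variable substitution lets secrets and reusable prompt fragments stay out of the configuration file itself; the provider option name and file path below are placeholders for illustration:

```json
{
  "provider": {
    "anthropic": {
      "options": {
        "apiKey": "{env:ANTHROPIC_API_KEY}"
      }
    }
  },
  "instructions": ["{file:./docs/style.md}"]
}
```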
Key top-level configuration sections include model, small_model, provider, lsp, mcp, plugin, agent, tools, permission, instructions, formatter, shell, server, share, snapshot, and autoupdate.
opencode supports the Model Context Protocol (MCP), a standardized interface developed by Anthropic and donated to the Linux Foundation in December 2025 for connecting AI agents to external tools and services. MCP servers are configured in the mcp section of opencode.json. Both local (stdio-based) and remote (HTTP-based) MCP servers are supported.
The MCP ecosystem available to opencode users includes over 1,200 servers. Common integrations include Sentry for error monitoring, Context7 for documentation search, and Vercel's Grep server for searching open-source GitHub repositories. MCP tools become available to the LLM alongside built-in tools and can be granted per-tool permission settings using wildcard patterns (e.g., "sentry_*": "ask").
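A sketch of the mcp section covering both transport types, with a matching per-tool permission entry; the server names and URL are placeholders rather than real endpoints:

```json
{
  "mcp": {
    "docs-search": {
      "type": "remote",
      "url": "https://example.com/mcp"
    },
    "local-helper": {
      "type": "local",
      "command": ["npx", "-y", "my-mcp-server"]
    }
  },
  "permission": {
    "docs-search_*": "ask"
  }
}
```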
Plugins extend opencode with custom tools, hooks, and integrations. They can be placed in .opencode/plugins/ (project-specific) or ~/.config/opencode/plugins/ (global), or loaded from npm through the plugin configuration key. Plugins can bundle their own MCP servers, reducing manual configuration overhead.
opencode integrates with GitHub and GitLab, allowing developers to trigger the agent directly from pull request and issue comments.
Mentioning /opencode or /oc in a comment on a GitHub issue or pull request causes the configured GitHub Actions workflow to execute the task. The workflow runs on the repository's Actions runners, meaning the agent operates with the repository's own compute and credentials. Setup is handled through the opencode github install command, which installs the necessary workflow YAML file and GitHub App. Required API keys are stored as repository secrets.
The GitLab integration works analogously: mentioning @opencode in a GitLab comment triggers execution in the GitLab CI pipeline. A community-maintained CI component (nagyv/gitlab-opencode) provides minimal setup for common use cases. opencode also supports GitLab Duo as a backend.
OpenCode Zen is a managed AI model gateway operated by the Anomaly team. Rather than requiring users to maintain API keys across multiple providers, Zen provides a curated selection of more than 40 verified models through a single billing relationship. Models are tested and vetted by the opencode team before inclusion.
The service operates on a pay-as-you-go basis with pricing at or near provider list prices, plus a processing fee of 4.4% plus $0.30. Automatic account reload triggers when the balance falls below $5, replenishing by $20 by default. A hybrid approach allows users to bring their own OpenAI or Anthropic API keys while using Zen for access to other providers, particularly Chinese frontier models not otherwise easily accessible.
A lower-cost subscription tier, OpenCode Go, priced at $10 per month, provides access to open-weight models such as GLM, Kimi, and MiniMax variants with defined usage limits. A premium tier, OpenCode Black, offered at $200 per month and initially sold out on launch, provides access to frontier models from both OpenAI and Anthropic plus open-weight options.
Zen constitutes the project's primary revenue stream, generating several million dollars in annualized revenue within five months of the product's launch.
opencode occupies the same category as Claude Code, Aider, Cline, Codex (OpenAI), and Gemini CLI. The following table summarizes key differences as of mid-2026:
| Feature | opencode | Claude Code | Aider | Cline |
|---|---|---|---|---|
| License | MIT (open source) | Proprietary | Apache 2.0 (open source) | MIT (open source) |
| Primary interface | Terminal TUI + desktop app | Terminal | Terminal / CLI | VS Code extension |
| Model support | 75+ providers | Anthropic only | 75+ providers | 75+ providers |
| LSP integration | Yes (25+ built-in servers) | Limited | No | Via VS Code |
| Git workflow | Manual | Automatic commits optional | Git-first, auto-commits | Manual |
| Local model support | Yes (Ollama, LM Studio) | No | Yes | Yes |
| Custom agents | Yes (Markdown + JSON) | Yes (CLAUDE.md hooks) | Limited | Limited |
| MCP support | Yes | Yes | Limited | Yes |
| CI/CD integration | GitHub Actions, GitLab CI | Limited | Limited | No |
| Base cost | Free (API costs apply) | Requires $20/month plan | Free (API costs apply) | Free (API costs apply) |
| Stars (mid-2026) | ~150,000 | ~33,000 | ~39,000 | ~75,000 |
Claude Code is Anthropic's official terminal coding agent and opencode's most frequently cited point of comparison. Claude Code achieves top benchmark scores on SWE-Bench Verified with Claude Opus 4.6, benefits from deep integration with Anthropic's model infrastructure, and is broadly regarded as more polished with superior automatic context compaction and planning behavior. However, it is locked to Anthropic's model ecosystem, requires a paid subscription, and transmits all code to Anthropic's servers.
opencode offers comparable agentic capability with greater flexibility: users can freely switch models, run entirely locally, and inspect every line of the agent's behavior. The trade-off is that opencode requires more configuration and occasionally needs more user oversight on complex multi-file tasks. Critics have observed that Claude Code still handles edge cases more gracefully and plans more cleanly on the most difficult tasks.
Aider is a mature Python-based CLI tool that has been developed since 2023 and logged 4.1 million installations. Its defining characteristic is a git-first philosophy: every AI edit becomes a labeled git commit that can be reviewed, reverted, or cherry-picked. Aider maps entire repositories to understand structure and excels at systematic refactoring across many files. It has more than 100 language maps and is considered extremely reliable, though its terminal interface is minimal compared to opencode's TUI.
opencode offers LSP-powered type-aware edits, a richer interactive experience, parallel multi-session agent support, and session sharing. Aider lacks LSP integration and native parallelism. Many developers in 2026 use both tools for different workflows, with Aider preferred for systematic refactoring and opencode preferred for interactive development.
Cline is a VS Code extension that brings AI coding agent functionality into the IDE. It is stable in controlled workflows, particularly with its structured approval system. opencode operates outside the IDE, which means it integrates into any terminal workflow regardless of editor, but users who prefer to stay within VS Code often find Cline more convenient. Cline does not offer the same degree of model agnosticism or CI/CD pipeline integration.
Codex (OpenAI) and Gemini CLI are provider-specific tools from OpenAI and Google respectively. Both lock users into their respective model ecosystems. Notably, following the Anthropic-opencode dispute in early 2026 (described below), OpenAI officially partnered with opencode to allow Codex and ChatGPT subscribers to use their subscriptions inside opencode.
opencode is used across a wide range of software development scenarios:
Interactive development: Developers keep a terminal split open with opencode running alongside their editor. They describe changes in natural language, review diffs, and apply or reject edits. The Plan/Build mode toggle allows safe exploration before committing to changes.
Codebase exploration: The Explore subagent and Plan mode allow asking questions about unfamiliar codebases without risk of modification. LSP integration provides accurate symbol resolution and type information.
Automated pull request review: GitHub and GitLab integrations allow teams to trigger opencode on every PR. The agent reviews code quality, identifies potential bugs, and posts a structured report as a comment.
Regulated environments: Air-gapped mode with Ollama allows teams in defense, healthcare, and financial services to use AI coding assistance without transmitting source code off-premises.
Cost optimization: Teams use opencode to mix expensive frontier models for complex tasks with cheaper or free models for routine work such as test generation, documentation, and boilerplate. The small_model configuration key directs lightweight tasks to cheaper alternatives automatically.
Parallel agent workflows: The multi-session architecture supports running several opencode instances simultaneously on different parts of a large codebase, with results reviewed and merged manually.
opencode's launch attracted immediate attention from developers frustrated with proprietary AI coding tools. The zero-friction onboarding (no account required) and model agnosticism were widely cited as differentiators. The TUI was praised as genuinely well-designed, with reviewers describing the interface as clean and responsive. The team's open-source credentials from SST gave early adopters confidence in the project's long-term commitment to openness.
The project was covered on Hacker News, where discussions noted both its potential as a serious Claude Code alternative and its rapid feature development. On Product Hunt the tool received consistently positive reviews, particularly from developers who valued the ability to bring their own models. The growth from launch to 150,000 GitHub stars over approximately eleven months was noted as exceptional even by the standards of the 2025-2026 AI tooling boom.
Enterprise adoption provided additional validation: Cloudflare's use of opencode for internal development was publicly noted.
opencode has not been without criticism. The most persistent technical complaint on Hacker News and developer forums concerned release velocity and quality: the team shipped updates at a pace that sometimes outran testing, leading to regressions and instability. The application was noted to consume significant system resources, with RAM usage reported at 1 GB or more for a terminal interface, drawing unfavorable comparisons to leaner alternatives.
Security researchers flagged several issues in early releases, including configuration pulled from remote URLs by default and potential for remote code execution in authentication flows. One significant controversy involved early versions of opencode using hidden cloud model calls (reportedly using an xAI model) for session title generation even when users configured fully local models, violating user expectations about data residency.
The OpenCode Go subscription tier received negative feedback from subscribers who found the combination of aggressively quantized models and strict rate limits to be poor value, with user complaints achieving a 94% upvote ratio on public forums.
In January 2026, Anthropic deployed server-side restrictions preventing third-party tools from using Claude Pro and Max OAuth tokens. Early versions of opencode had been spoofing the claude-code-20250219 beta HTTP header, causing Anthropic's servers to treat requests as originating from the official Claude Code CLI. When Anthropic activated checks on January 9, 2026, users of opencode who relied on their Claude subscription for access received errors.
On February 19, 2026, Anthropic formalized the restriction in updated service terms, explicitly prohibiting OAuth token use with third-party tools or the Agent SDK. A subsequent legal communication caused the opencode team to remove all Claude OAuth code, the spoofed header, and Anthropic-specific authentication on February 19, followed by a final cleanup commit in March 2026.
The incident generated substantial developer backlash directed at Anthropic. DHH (David Heinemeier Hansson, creator of Ruby on Rails) posted publicly that the policy was objectionable for a company whose models were trained on open-source code. OpenAI responded by formally partnering with opencode to allow Codex and ChatGPT subscribers to use their subscriptions natively within opencode, a move widely interpreted as a competitive signal.
The dispute crystallized a broader industry debate about the boundary between AI providers' commercial interests and the open-source developer ecosystem that sustains their model training data.
opencode has documented limitations that users should consider; common mitigations include AGENTS.md guidance and LSP tuning.