Trae (styled TRAE, short for The Real AI Engineer) is an AI-native integrated development environment (IDE) developed by ByteDance, the Chinese technology company best known for TikTok and the Doubao large language model family. Trae was first released in January 2025, with a separate domestic Chinese version following in March 2025; the international version offered free access to frontier AI models including Claude and GPT-4o. Built as a fork of Microsoft Visual Studio Code, Trae competes directly with Cursor (code editor), Windsurf (software), and Cline in the rapidly expanding AI-augmented IDE market. By subsidizing model inference costs through ByteDance's corporate resources, Trae positioned itself as the most aggressively priced option in its category, attracting over six million registered users across nearly 200 countries within its first year.
The product ships in two distinct regional variants: an international version available at trae.ai that integrates Anthropic's Claude models and OpenAI's GPT-4o, and a domestic Chinese version that integrates ByteDance's proprietary Doubao-1.5-Pro model alongside DeepSeek's R1 and V3 models. Both variants share the same underlying VS Code architecture and agentic workflow structure.
ByteDance was founded in 2012 by Zhang Yiming and Liang Rubo in Beijing, China. Although it became globally prominent through short-form video platforms -- TikTok internationally and Douyin domestically -- ByteDance has aggressively expanded into artificial intelligence research and product development since the early 2020s. The company operates several AI-focused divisions, including the Seed research group (responsible for the Doubao family of large language models) and the Volcano Engine cloud computing platform.
In 2024 and 2025, ByteDance substantially accelerated its AI infrastructure investment. The company committed over $20 billion to AI infrastructure in 2025 alone, with plans to increase capital expenditure to approximately 160 billion yuan (around $23 billion) in 2026. Doubao, ByteDance's flagship language model, held a 46.4% share of China's public cloud large-model service market, according to an IDC report. Daily token consumption across ByteDance's AI services reached 16.4 trillion tokens as of mid-2025.
Prior to Trae, ByteDance had explored AI-assisted development through MarsCode, a coding assistant that achieved notable results on the SWE-bench Lite benchmark. Trae represented a more ambitious step: a full standalone IDE rather than an extension or plugin, designed to compete with the new generation of AI-native development environments coming from Western startups.
The market for AI-augmented code editors exploded between 2023 and 2025. Cursor (code editor), developed by Anysphere, popularized the concept of an AI-first VS Code fork and reached significant commercial traction by late 2024. Windsurf (software), developed by Codeium (later renamed Windsurf), pursued a similar architecture. Cline, an open-source alternative funded by a $32 million raise from Emergence Capital and Pace Capital, offered a bring-your-own-API-key model. GitHub Copilot, backed by Microsoft's investment in OpenAI, occupied the dominant incumbent position as a VS Code extension.
These tools generally charged between $15 and $20 per month for professional tiers and imposed usage limits on expensive frontier model inference. ByteDance identified the pricing structure as a competitive vulnerability and designed Trae to offer comparable or superior functionality at no cost during its early access period, leveraging the company's existing scale in AI infrastructure to absorb the model inference costs.
Trae was first released on January 19-20, 2025, initially available only for macOS. The launch offered unlimited free access to GPT-4o and Claude 3.5 Sonnet, alongside intelligent code completion, bug fixing, and natural language-based code generation. Windows support followed in late February 2025.
The January release attracted immediate attention in developer communities. A discussion on Hacker News generated over 450 comments, with developers debating the product's technical architecture, its relationship to Visual Studio Code, and the implications of ByteDance's entry into developer tooling. Visual Studio Magazine reported on the launch with the subheading "It Looks To Be a Fork," reflecting the broader conversation about how Trae related to Microsoft's open-source code editor.
On March 3, 2025, ByteDance officially launched the domestic version of Trae for the Chinese market, describing it as "China's first AI-native integrated development environment." This version differed from the international release in its model lineup: rather than Claude and GPT-4o, it featured Doubao-1.5-Pro as the primary model, with the ability to switch between DeepSeek's R1 and V3 models. The domestic launch emphasized ByteDance's investment in its own model ecosystem and its ability to offer a fully domestic AI stack for Chinese enterprises concerned about data sovereignty.
The March 2025 domestic launch also expanded Trae's Builder mode to support plain-language application creation using the Doubao model, bringing feature parity with the international version to Chinese developers.
Trae's international version officially launched a paid subscription model on May 27, 2025. The Pro tier was priced at $10 per month, with the first month discounted to $3. This placed Trae at half the price of Cursor ($20/month) and significantly below the initial rate for Windsurf. The international version simultaneously upgraded its model roster to include Claude 3.7 Sonnet, with access to DeepSeek R1 provided free alongside it.
By the time the paid tier launched, Trae had already reached one million monthly active users and had generated over six billion lines of code since launch, according to company statements.
Trae 2.0 launched in summer 2025, introducing SOLO mode as its headline feature. SOLO mode, which entered preview on July 21, 2025, is a fully autonomous software engineering pipeline in which a user describes a desired application and the system handles requirement analysis, technical architecture design, code implementation, testing validation, and deployment without step-by-step human supervision. At launch it was gated behind an invitation code for Pro users.
The 2.0 release also added voice interaction capabilities, an expanded MCP (Model Context Protocol) marketplace with one-click installation, and the Cue tab completion system for predictive multi-line edits.
Trae is built as a fork of Microsoft Visual Studio Code, the open-source code editor that forms the basis of several competing AI IDEs including Cursor and Windsurf. Because Visual Studio Code's core is published under the MIT license, companies can legally build derivative products on top of it. Trae itself is closed-source, meaning ByteDance has not published the modifications it made to the base VS Code codebase.
The VS Code foundation means that most existing VS Code extensions are compatible with Trae, and users can import their existing settings, extensions, and keyboard shortcuts from either VS Code or Cursor during initial setup. The process involves downloading the platform-specific installer, running it, and optionally importing a configuration profile. A terminal command (trae) can be configured to open the editor from the command line.
Trae's interface departs from VS Code's default visual design. Early reviews described the aesthetic as blending elements of JetBrains Fleet -- a lighter, more whitespace-forward IDE -- with familiar VS Code structural elements. The sidebar chat panel, inline editing overlay, and integrated terminal are positioned to minimize context-switching during AI-assisted coding workflows.
Trae uses a just-in-time analysis strategy for codebase context, scanning only the files relevant to a current task rather than pre-indexing the entire repository. This approach makes Trae fast and lightweight for small to medium-sized projects (under approximately 100,000 lines of code) but less effective for large monolithic applications where cross-cutting context is required. Competitors such as Cursor offer deeper repository-wide indexing, which provides better recall on large codebases at the cost of upfront indexing time.
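The just-in-time idea can be illustrated with a toy sketch (hypothetical, not Trae's actual implementation): score each file against the task description and read only the highest-scoring matches, instead of embedding the whole repository up front.

```python
# Toy sketch of just-in-time context selection (hypothetical; not
# Trae's actual implementation). Rather than pre-indexing a repository,
# score each file against the task description and read only the
# highest-scoring matches.

def pick_context(task: str, files: dict[str, str], k: int = 3) -> list[str]:
    """Return the k file paths whose contents share the most words
    with the task description."""
    task_words = set(task.lower().split())

    def overlap(item):
        _path, text = item
        return len(task_words & set(text.lower().split()))

    ranked = sorted(files.items(), key=overlap, reverse=True)
    return [path for path, _ in ranked[:k]]
```

A production system would use semantic retrieval rather than word overlap, but the trade-off is the same: no upfront indexing cost, at the price of weaker recall on large codebases.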
Users can reference specific files, folders, classes, or functions within chat using the # syntax, or paste URLs for web-based context. The .ignore configuration file controls which parts of the codebase are indexed, and rule files written in Markdown can be committed to the repository to encode architectural guidelines, development standards, and design philosophy that the AI will observe during generation.
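A rule file is plain Markdown. A minimal hypothetical example (the filename and location here are illustrative, not a documented Trae convention) might look like:

```markdown
<!-- rules.md — illustrative example of a committed rule file -->
# Project rules
- Use TypeScript strict mode for all new modules.
- Prefer functional React components with hooks.
- All database access goes through the repository layer in `src/db/`.
- Follow the existing error-handling pattern: return Result types, never throw.
```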
Trae supports the Model Context Protocol (MCP), an open standard developed by Anthropic that allows AI applications to connect to external tools, APIs, and data sources through a standardized interface. Trae includes an MCP marketplace that provides one-click installation of popular integrations. Developers can also create custom agents with specific system prompts, tool configurations, and behavioral patterns for specialized workflows. Examples include connecting to design tools such as Figma for UI-aware code generation, or integrating database connectors and cloud service APIs directly into the agentic loop.
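MCP servers are conventionally registered through a JSON configuration block. As an illustration only (Trae's exact schema is not publicly documented, and the database URL is a placeholder), the `mcpServers` shape used by several MCP clients looks like:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/example_db"
      ]
    }
  }
}
```

Once registered, the server's tools (for example, schema inspection or read-only queries) become available to the agent during its planning and execution loop.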
Chat mode is Trae's baseline AI interaction layer. It provides two interaction surfaces: a side chat panel accessible via keyboard shortcut and an inline chat interface that opens within the editor buffer. Chat mode supports context from files, folders, and the entire workspace, and users can upload screenshots, terminal output, or other documents to provide additional context for a query. Web search is integrated into the chat UI, allowing the assistant to retrieve current documentation or reference material without leaving the editor.
Chat mode handles tasks such as explaining unfamiliar code, answering questions about frameworks or APIs, generating unit tests, writing documentation, and producing targeted code patches that the user can accept or reject. The inline editing surface allows fine-grained edits within a specific function or code block, while the side panel supports broader architectural discussion.
Builder mode is Trae's primary agentic coding feature. It handles general development tasks through natural language specifications, using a planning-first approach that the product describes as similar to test-time scaling strategies: the agent thinks through the required steps before executing, reducing the rate of compounding errors in multi-step tasks.
In Builder mode, a developer can describe an application in natural language (for example, "build a React frontend with an Express backend that connects to a PostgreSQL database") and Trae will scaffold the project structure, generate the initial implementation files, and iteratively refine the output in response to follow-up requests. Trae's 2025 annual report data indicated that Builder mode was used by approximately 80% of active users, making it the most heavily used feature in the product. Bug fixing accounted for approximately 40% of Builder mode sessions and code generation for approximately 30%.
Builder mode integrates with a built-in browser for web application previews and supports one-click deployment to Vercel for frontend projects, making the path from natural language description to deployed prototype significantly shorter than in traditional workflows.
SOLO mode, introduced in Trae 2.0 and entering preview in July 2025, is described by ByteDance as a "fully autonomous software engineering experience." It represents Trae's most ambitious agentic capability and is distinct from Builder mode in the scope of autonomy it is designed to exercise.
Where Builder mode operates in a conversational loop with the developer approving or rejecting steps, SOLO mode constructs a complete development pipeline and executes it with minimal human intervention. The system performs requirement analysis from the initial description, designs a technical architecture, writes the implementation, runs tests, handles errors, and proceeds to deployment. SOLO mode integrates the editor, terminal, browser, and external documentation into a unified execution environment.
At launch, SOLO mode was available to Pro users via invitation code. Usage data from the 2025 annual report indicated that approximately 40% of active users had adopted SOLO Coder mode, though direct comparison with Builder mode usage figures suggests many users adopted multiple modes for different task types.
SOLO Builder is a web-development-focused variant of SOLO mode with additional tooling for frontend projects, including a built-in browser preview and tighter Vercel integration. It is optimized for the vibe coding workflow in which non-expert developers describe visual or functional specifications and receive a deployable web application as output.
Trae allows users to define custom agents with specific system prompts, tool selections, MCP integrations, and behavioral constraints. This enables teams to create specialized assistants for particular workflows, such as a database migration agent configured with access to a specific schema tool, or a documentation agent constrained to a particular output style. Custom agents can be shared within a project via rule files committed to the repository.
Cue is Trae's tab-based code completion system, providing single-line auto-completion, multi-line suggestions, predictive edits based on recent changes, and jump-to-edit navigation that repositions the cursor to the most probable next edit location. Independent evaluations noted that Cue's response speed was slower than the equivalent system in Cursor, and the jump-to-edit feature was criticized for inconsistent accuracy, often requiring manual cursor adjustment.
A defining characteristic of Trae's market positioning is the provision of frontier AI model inference at no additional cost to users. During its early access period, Trae provided unlimited free access to Claude 3.5 Sonnet and GPT-4o for all users. After the paid tier launched in May 2025, the free tier retained 5,000 auto-completions per month and a limited allocation of fast and slow requests on premium models (10 fast and 50 slow requests monthly), while the Pro tier ($10/month) expanded these limits significantly.
The model roster on the international version evolved over time. By mid-to-late 2025, Pro users had access to Claude 4 Sonnet, Gemini 2.5 Pro, Grok 4, Kimi K2, GPT-4o, DeepSeek R1, and DeepSeek V3. The domestic Chinese version offers Doubao-1.5-Pro and DeepSeek R1/V3 but not the Anthropic or OpenAI models, reflecting Chinese regulatory constraints on foreign AI service providers.
The economic model raises sustainability questions that parallel those faced by other AI IDE providers. Trae uses a request-based pricing structure (one request per prompt, regardless of complexity), similar to the model Cursor originally employed before moving to usage-based pricing, a transition that provoked user backlash. Because codebase-wide operations consume substantially more AI inference than simple edits, flat-rate request pricing creates economic strain at scale. Analysts noted that ByteDance's ability to sustain the model depends on its broader corporate scale and its strategic interest in capturing developer mindshare rather than immediate monetization.
ByteDance does not publish information about the total infrastructure cost of subsidizing Trae's model access, but the company's 2025 AI infrastructure commitment of over $20 billion provides context for the scale at which such subsidies can be sustained.
Trae's pricing structure as of mid-2025:
| Tier | Monthly Price | Auto-completions | Fast Requests | Slow Requests |
|---|---|---|---|---|
| Free | $0 | 5,000 | 10 (premium models) | 50 (premium models) |
| Pro | $10 (first month $3) | Unlimited | Expanded | Expanded |
Add-on packages for additional fast request capacity were available in $3 to $12 bundles with a 30-day expiration window. Linux support was not available as of mid-2025, with the platform limited to macOS (10.15 and above) and Windows 10/11.
The following table compares Trae's key characteristics against its primary competitors as of mid-2025:
| Aspect | Trae | Cursor | Windsurf | Cline |
|---|---|---|---|---|
| Base price (Pro) | $10/month | $20/month | $15/month | Free (BYOK) |
| Free tier | Yes (limited) | Yes (limited) | Yes (limited) | Yes (full, own API key) |
| VS Code fork | Yes | Yes | Yes | Extension |
| Source availability | Closed | Closed | Closed | Open source (Apache 2.0) |
| Autonomous agent | SOLO mode | Agent mode | Cascade | Yes |
| Developer | ByteDance | Anysphere | Windsurf/Codeium | Community |
| Privacy / ZDR | No | Enterprise only | Default ZDR | Local code, BYOK |
| Enterprise plans | No | Yes | Yes | No (community) |
| MCP support | Yes | Limited | Yes | Yes |
Cursor (code editor) established the AI-first VS Code fork category and maintained the largest installed base among this class of tools at the time of Trae's launch. Cursor's strengths include deeper repository-wide indexing, background agents that can operate independently of the open IDE window, mobile and Slack integrations, and a longer track record of reliability. Cursor offers zero data retention (ZDR) options for enterprise customers concerned about code privacy.
Trae's competitive advantages over Cursor center on price and, during the early access period, model generosity. The $10 Pro tier versus Cursor's $20 represents a 50% cost reduction for equivalent functionality. Some reviewers rated Trae's custom agent system and MCP marketplace as more mature than Cursor's equivalents, and many found the interface more visually polished. However, Trae lacks Cursor's background agent functionality, session memory across conversations, and the ecosystem depth that Cursor has accumulated through its first-mover advantage.
A common recommendation in developer communities is to use Trae for learning and prototyping where budget is a primary constraint, and to migrate to Cursor for professional work that requires deeper codebase integration and more reliable enterprise controls.
Windsurf (software), developed by the company formerly named Codeium, positions itself as a privacy-first alternative with zero data retention enabled by default rather than offered as an enterprise add-on. Windsurf's Cascade agentic system is competitive with Trae's Builder mode. At $15 per month for its Pro tier, Windsurf sits between Trae and Cursor on price.
Windsurf's default privacy posture is its clearest differentiation from Trae. For developers working on proprietary or client code, Windsurf's ZDR default removes the data sovereignty ambiguity that characterizes Trae's offering. Windsurf also offers enterprise plans and documented security certifications that Trae had not provided as of mid-2025.
Cline occupies a different position in the market as an open-source VS Code extension rather than a standalone IDE fork. Because Cline is Apache 2.0 licensed and runs entirely within the user's existing VS Code installation, it inherits VS Code's privacy and data handling guarantees rather than introducing a new data pipeline. Users bring their own API keys, meaning the actual model inference occurs directly between the user's machine and the model provider without any intermediary.
The trade-off is cost predictability: Cline's usage-based billing through user-supplied API keys can reach hundreds of dollars per month for heavy users, while Trae's flat-rate pricing is more predictable. Cline offers stronger appeal for developers who require code to remain local and who want full control over which models process their work.
Trae's privacy posture has attracted significant scrutiny from the security research community. An investigation reported by Cybernews and covered by TechRadar documented approximately 500 network calls within a seven-minute observation period, transmitting approximately 26 megabytes of data to ByteDance servers on the byteoversea.com domain. The data categories identified by the investigation included system hardware specifications, operating system details, performance metrics, mouse and keyboard activity, usage patterns, project file path information, and persistent unique tracking identifiers.
Of particular concern was the finding that Trae's telemetry toggle appeared non-functional: data collection continued regardless of whether the user had disabled the telemetry setting. The investigation also identified a capability for ByteDance to remotely enable or disable features and modify Trae's functionality without pushing a traditional software update, giving the company the ability to alter the product's behavior without user awareness.
Trae's privacy policy, as documented by reviewers, states that the company collects "any information (including your prompts and any text/code and file uploads) that you choose to input into the Platform" and may use this content to "provide, maintain, develop, and improve our Services" and "train our models." Data retention periods of five years after account closure have been reported in community reviews. ByteDance has not published a publicly accessible Data Processing Agreement (DPA) framework for Trae.
Code embeddings computed during codebase indexing are processed on ByteDance servers. While the company states that plaintext code is deleted after processing, the embeddings themselves persist, and their retention and potential use for model training has not been fully disclosed.
The geopolitical dimension of Trae's data collection concerns stems from ByteDance's status as a Chinese company subject to the People's Republic of China's data security and cybersecurity legal framework. China's National Security Law, Cybersecurity Law, and Data Security Law collectively require Chinese companies to cooperate with government intelligence and security agencies, including by providing access to data upon request. These laws do not require that the company inform users or foreign governments when such access occurs.
This framework is the same one that has driven United States legislative and regulatory scrutiny of TikTok, ByteDance's consumer social media product. Leaked audio from internal TikTok meetings, reported by BuzzFeed News, confirmed that China-based ByteDance employees had accessed non-public data about US TikTok users on multiple occasions. US intelligence officials have described the Chinese government's ability to use ByteDance as a vector for data collection as a national security concern.
For developers using Trae to write code for sensitive applications, this legal framework means that any code processed through Trae's servers is potentially accessible to Chinese government intelligence agencies. This concern is particularly acute for developers working on government contracts, national security applications, or proprietary intellectual property with significant competitive value.
The practical risk calculus varies significantly by use case. Security-focused reviewers have generally recommended Trae for personal projects, open-source work, and learning, while advising against its use for proprietary code, client projects, or work subject to confidentiality agreements. Several reviewers noted that Trae offers no SOC 2 or ISO 27001 certifications as of mid-2025, unlike Cursor and Windsurf which offer enterprise security documentation.
ByteDance did not provide an immediate response to media inquiries about the data collection investigation findings. The company has not published a security whitepaper or audit report addressing the concerns raised by the Cybernews investigation.
The concerns are not unique to Trae: all AI IDEs that process code on remote servers introduce some data exposure risk. What distinguishes Trae's situation is the combination of a non-functional telemetry opt-out, the five-year data retention claim, the absence of enterprise privacy controls, and ByteDance's legal obligations under Chinese law.
Trae's adoption trajectory was rapid. By May 2025, approximately four months after launch, the product had reached one million monthly active users. According to its 2025 annual report (covering the full year), the top programming languages used in Trae were Vue, Python, JavaScript, HTML, Java, and TypeScript, a skew toward web and full-stack development consistent with Builder mode's strengths.
In May 2025, ByteDance's software engineering research division announced that Trae had achieved the top position on the SWE-bench Verified leaderboard with a score of 70.6%. SWE-bench Verified is a widely cited benchmark for evaluating automated software engineering systems on real GitHub issues drawn from popular open-source Python repositories.
The methodology behind the result involved a three-stage pipeline. In the generation stage, multiple large language models (Claude 3.7 Sonnet, Gemini 2.5 Pro, and o4-mini) produced diverse candidate patches using four tools: code editing, bash execution, code knowledge graphs, and structured reasoning. A tester agent then eliminated patches that failed existing regression tests. Finally, a selector agent used syntax-based clustering followed by multi-selection voting among the remaining candidates to choose the final submission.
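The filter-then-vote selector can be sketched as follows. This is an illustrative reconstruction from the published description, with all names hypothetical; the real system used LLM agents for each stage rather than plain callables.

```python
# Illustrative reconstruction of a generate/test/select patch pipeline
# like the one described for Trae's SWE-bench Verified submission.
# All names are hypothetical.
from collections import defaultdict

def select_patch(candidates, passes_regression, normalize):
    """candidates: list of candidate patch strings (generation stage).
    passes_regression: callable patch -> bool (tester stage).
    normalize: callable patch -> hashable syntax signature (clustering).
    Returns the winning patch, or None if every candidate fails."""
    # Tester stage: discard patches that break existing regression tests.
    survivors = [p for p in candidates if passes_regression(p)]
    if not survivors:
        return None
    # Selector stage: cluster syntactically equivalent patches, then
    # return a representative of the largest cluster (majority voting).
    clusters = defaultdict(list)
    for patch in survivors:
        clusters[normalize(patch)].append(patch)
    return max(clusters.values(), key=len)[0]
```

The intuition is that independently generated patches which agree after syntactic normalization are more likely to be correct, so the largest cluster of test-passing candidates wins.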
The achievement attracted significant media coverage and validated Trae's technical credibility, though commentators on Hacker News and elsewhere noted that SWE-bench Verified scores have become less discriminating at the frontier as multiple systems have converged near similar scores, and that benchmark performance does not always translate to real-world coding utility.
Community reception has been broadly positive but qualified. Developers praised Trae's generous free tier, clean interface, smooth onboarding experience (particularly for users migrating from VS Code or Cursor), and the breadth of models available at no cost. The Builder mode received consistently positive reviews for scaffolding new projects from natural language descriptions, with one benchmark producing a functional React and Express application in under two minutes.
Criticisms have focused on several recurring issues, including the comparatively slow Cue completion system, the absence of session memory across conversations, and the privacy findings described above.
Trustpilot reviews and community forum discussions reflected a pattern consistent with these findings: strong enthusiasm among students, indie developers, and prototype-stage builders, with more skepticism from enterprise or security-conscious professionals.
Codebase scale. Trae's just-in-time context strategy works well for projects under approximately 100,000 lines but degrades for large monolithic codebases that require cross-cutting awareness of many files simultaneously.
No Linux support. As of mid-2025, Trae was not available for Linux, limiting its adoption in server-side and DevOps workflows where Linux is the dominant development environment.
No enterprise tier. The absence of an enterprise plan means Trae cannot provide the SSO, SCIM, SLA, audit logging, or data residency guarantees that corporate IT and security teams typically require before approving software for professional use.
No zero data retention. Unlike Cursor (enterprise) and Windsurf (default), Trae does not offer a zero data retention option in which code is processed without being stored on the provider's servers.
No session memory. The AI assistant does not retain information from previous conversations, meaning context established in one session is not available in the next.
Non-functional telemetry toggle. Independent investigation found that disabling telemetry in Trae's settings does not stop data transmission to ByteDance servers.
Remote modification capability. ByteDance can alter Trae's behavior through remote configuration changes without pushing a traditional software update, giving users less control over the software than a typical locally installed application.
Pricing model sustainability. The request-based flat pricing structure may prove economically unsustainable as the user base grows and complex agentic tasks consume disproportionate API costs, potentially forcing a transition to usage-based pricing.
Geopolitical risk. Trae's use by US government contractors, defense-adjacent developers, or individuals working on nationally sensitive technology may raise compliance issues under existing or future US government data security regulations.