n8n (pronounced "n-eight-n") is a source-available workflow automation platform built for technical teams. Founded in 2019 by Jan Oberhauser, it lets users build multi-step automations by connecting nodes on a visual canvas, combining clicks with custom code and, since 2022, a growing suite of AI agent capabilities backed by a native LangChain integration. The software is distributed under the Sustainable Use License, a fair-code model that makes the source code freely available while restricting commercial redistribution. n8n can be self-hosted at no cost or used through n8n Cloud. As of October 2025, the company had raised $240 million across several rounds at a $2.5 billion valuation and reported more than 186,000 GitHub stars, making it one of the most-starred automation projects on the platform.
Jan Oberhauser spent years in the visual effects industry before pivoting to software, working on commercials, television series, and feature films including Maleficent and Happy Feet Two. In early 2019, frustrated by automation tools that were either too rigid for complex logic or too expensive to scale, he began building a self-hostable alternative. On October 4, 2019, he published the project on GitHub, moving it from a private prototype to a community-driven platform.
The name derives from "nodemation," a portmanteau of node (referencing both the visual node-based interface and the Node.js runtime the software is built on) and automation. When Oberhauser searched for a short, memorable domain, most obvious names were taken; he settled on the abbreviated numeronym "n8n," where the digit 8 stands for the eight letters between the first and last "n" in "nodemation." The word is always written in lowercase and officially pronounced "n eight n."
Initial traction was organic. Developers searching for a self-hostable Zapier alternative discovered n8n through GitHub and spread it on Hacker News and Reddit. A seed round in 2020, led by Sequoia Capital and Firstminute Capital, raised roughly $1.5 million and let Oberhauser hire an initial engineering team. A Series A of $12 million followed in 2021.
Through 2021 and into 2022, the company added integrations at a steady pace and grew a community of self-hosters. The broader market for AI-powered automation had not yet materialized, but the foundation was in place.
In 2022, n8n began incorporating large language model support and building toward what would become its AI Agent node. The timing proved significant: demand for tools that could wire AI models into real business processes accelerated sharply after late 2022, when the widespread availability of capable chat-oriented LLMs sparked an explosion in interest from developers and businesses looking to automate knowledge work.
Revenue grew fivefold in the period following the AI pivot, and doubled again in the two months preceding the Series B announcement in March 2025. At the time of the Series B, approximately 75% of n8n's customers were already using the platform's AI features, a figure that illustrated how quickly the product's center of gravity had shifted from pure data pipeline automation toward agent-based workflows.
Jan Oberhauser described the company's thesis at the time of the Series B: "It took a while for the market to catch up," but once it did, n8n's existing technical foundation (self-hosting, code execution, flexible node model) turned out to be well-suited for AI use cases that simpler no-code tools struggled to handle.
n8n does not describe itself as open source. The company coined the term fair-code to describe its licensing philosophy and founded faircode.io to define the concept more broadly.
The Sustainable Use License (SUL), introduced in 2022 and based on the Elastic License 2.0, grants anyone the right to use, modify, create derivative works from, and redistribute the software, subject to three key restrictions:

- Use and modification are limited to internal business purposes or non-commercial and personal use.
- Distribution to third parties must be free of charge and for non-commercial purposes.
- Licensing, copyright, and other notices in the software may not be altered, removed, or obscured.
Source code files with ".ee." in their filename are licensed separately under the n8n Enterprise License, covering features such as SSO, Git-based version control, and audit logging that are only available on paid tiers.
The Open Source Initiative (OSI) does not recognize the Sustainable Use License as an open source license because it includes restrictions on use. Critics have noted that the distinction matters in practice: companies that want to build commercial products on top of n8n need a separate agreement. Supporters argue the model lets n8n fund continued development while giving individual users and businesses full self-hosting rights at no cost.
| Round | Date | Amount | Lead investor | Notes |
|---|---|---|---|---|
| Seed | 2020 | ~$1.5M | Sequoia Capital, Firstminute Capital | Initial team hire |
| Series A | 2021 | $12M | Sequoia Capital | Integration expansion |
| Series B | March 2025 | €55M ($60M) | Highland Europe | Valuation ~€250M ($270M); HV Capital, Felicis, Harpoon co-invested |
| Series C | October 2025 | $180M | Accel | Valuation $2.5B; Meritech, Redpoint, NVentures (NVIDIA), T.Capital, Evantic, Visionaries Club participated |
Total disclosed funding reached approximately $240 million. The Series B was announced in late March 2025 and the Series C on October 9, 2025. At the Series C, annual recurring revenue had grown more than 10x year over year, though the company did not publicly disclose the specific ARR figure. An estimate from Sacra, citing public signals, placed ARR at around $40 million in mid-2025.
Notable institutional backers across all rounds include Sequoia Capital, Accel, Highland Europe, Felicis Ventures, HV Capital, and NVIDIA's venture arm NVentures.
The n8n interface is a visual canvas where users place and connect nodes. Each node represents a discrete operation: fetching data from an API, transforming a record, sending an email, querying a database, or invoking an AI model. Connections between nodes carry data in a standardized JSON array format, where each element in the array represents one record. A node can emit multiple items, and downstream nodes process each item in sequence or in parallel depending on configuration.
Workflows are saved as JSON definitions in the database. The execution engine reads this JSON, resolves the order of operations, and processes nodes step by step.
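To make this concrete, here is a minimal sketch of a saved workflow definition, modeled loosely on n8n's exported JSON format. The field names (`nodes`, `connections`, `type`, `position`) reflect that format, but treat the exact shape as indicative rather than normative:

```javascript
// A minimal two-node workflow definition, sketched after n8n's exported
// JSON format (field names indicative, not normative).
const workflow = {
  name: "Fetch and notify",
  nodes: [
    {
      name: "Webhook",
      type: "n8n-nodes-base.webhook", // trigger node
      position: [250, 300],
      parameters: { path: "incoming" },
    },
    {
      name: "Slack",
      type: "n8n-nodes-base.slack", // action node
      position: [500, 300],
      parameters: { channel: "#alerts" },
    },
  ],
  // connections map each node's outputs to downstream node inputs,
  // which is how the engine resolves execution order
  connections: {
    Webhook: { main: [[{ node: "Slack", type: "main", index: 0 }]] },
  },
};

console.log(workflow.connections.Webhook.main[0][0].node); // → Slack
```

The engine walks `connections` from the trigger node outward, which is why a workflow needs no explicit ordering beyond the graph itself.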
n8n organizes its nodes into several broad categories:
| Category | Description | Examples |
|---|---|---|
| Trigger nodes | Start a workflow in response to an event or schedule | Webhook, Cron, Email Trigger, Chat Trigger |
| Action nodes | Perform operations on external services | Gmail, Slack, Postgres, HTTP Request |
| Core nodes | Provide logic, branching, and data manipulation | If, Switch, Merge, Code, Set |
| Cluster nodes | A root node plus sub-nodes working together on AI tasks | AI Agent (root) + LLM, Memory, Tool (sub-nodes) |
| Community nodes | Third-party nodes published via npm | Various |
The Code node accepts JavaScript or Python, letting developers drop into custom logic without leaving the canvas. The HTTP Request node handles arbitrary REST API calls and supports OAuth, API keys, and bearer tokens.
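The kind of logic a Code node typically carries can be simulated outside n8n. In the sketch below the `items` input is mocked; inside an actual Code node the runtime supplies the incoming items:

```javascript
// Simulates typical Code node logic: receive an array of items,
// return an array of items. The `items` input is mocked here; inside
// n8n the runtime provides it.
const items = [
  { json: { email: "Ada@Example.com", plan: "pro" } },
  { json: { email: "bob@example.com", plan: "free" } },
];

// Normalize emails and flag paying customers -- the kind of glue logic
// that is awkward to express with click-to-configure nodes alone.
const output = items.map((item) => ({
  json: {
    ...item.json,
    email: item.json.email.toLowerCase(),
    isPaying: item.json.plan !== "free",
  },
}));

console.log(output[0].json.email); // → ada@example.com
```

Because the node returns an array in the same item format it received, the result flows into downstream nodes like any other node's output.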
For smaller deployments, n8n runs in a single main-process mode where the core engine handles all workflow executions. For production scale, n8n supports a queue mode backed by Redis: workflow executions are placed into a queue and distributed across multiple worker processes. This architecture allows horizontal scaling and keeps the main process available for the editor and API calls while workers handle execution load.
Storage defaults to SQLite for simple setups. PostgreSQL (or MySQL/MariaDB) is the recommended backend for production deployments because it supports concurrent writes and larger execution log volumes.
n8n passes data between nodes in a JSON array. Each item in the array has a json property containing the record data and an optional binary property for file attachments. This structure means that a single node can emit dozens of items and the downstream node will process each item independently, enabling bulk operations without explicit loops. The Merge node and Loop Over Items node provide additional control over how batches are processed, split, or joined across steps.
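The item envelope and the implicit per-item fan-out can be illustrated in plain JavaScript (this models the data shape, not the actual engine):

```javascript
// Sketch of n8n's inter-node data shape (illustrative, not engine
// internals): each item wraps record data in `json`, with an optional
// `binary` property for file attachments.
const upstreamOutput = [
  {
    json: { id: 1, name: "invoice-jan" },
    binary: { data: { fileName: "jan.pdf", mimeType: "application/pdf" } },
  },
  { json: { id: 2, name: "invoice-feb" } },
  { json: { id: 3, name: "invoice-mar" } },
];

// A downstream node runs once per incoming item -- bulk behavior
// without an explicit loop in the workflow.
const processItem = (item) => ({ json: { ...item.json, processed: true } });
const downstreamOutput = upstreamOutput.map(processItem);

console.log(downstreamOutput.length); // → 3
```

This is why a single API-polling node can emit fifty records and every subsequent node handles all fifty without any loop configuration.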
Workflows can call other workflows using the Execute Sub-Workflow node, which allows teams to build reusable automation components and compose complex logic from simpler, testable building blocks.
With version 2.0, released in late 2025, n8n introduced Task Runners: an isolated execution environment for the Code node. JavaScript and Python code now runs in a sandboxed process rather than directly inside the main application, reducing the risk from untrusted code. Task Runners are enabled by default in 2.0. The same release added granular role-based access control and improved certificate handling in the HTTP Request node.
n8n's AI capabilities are organized under what the documentation calls "Advanced AI." The platform wraps LangChain's core primitives into a set of visual nodes, making it possible to build agentic workflows without writing LangChain code directly.
The AI Agent node is a cluster node: a root node that coordinates a group of sub-nodes. The root node handles the agent reasoning loop, while sub-nodes supply the components the agent needs: a chat model, optional memory for conversational state, and one or more tools the agent can call.
When a workflow triggers the agent, it receives input (typically a text prompt or a structured message), reasons over the available tools, calls tools as needed, and returns a response. The Tools Agent variant implements LangChain's tool calling interface and passes a structured tool schema to the model, improving reliability for multi-step tasks.
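Stripped of the LLM itself, the reasoning loop the root node runs can be caricatured as follows. The `mockModel` function is a stand-in for a real model call, and the loop structure is an illustration of the general tool-calling pattern, not n8n's actual implementation:

```javascript
// Caricature of an agent tool-calling loop (mocked model; not n8n
// internals). The model either requests a tool call or returns a
// final answer.
const tools = {
  getWeather: (city) => `${city}: 18°C, clear`,
};

// Mocked "model": asks for a tool on the first turn, then answers
// using the tool result appended to the message history.
function mockModel(messages) {
  const last = messages[messages.length - 1];
  if (last.role === "user") {
    return { toolCall: { name: "getWeather", args: ["Berlin"] } };
  }
  return { final: `Answer based on: ${last.content}` };
}

function runAgent(prompt) {
  const messages = [{ role: "user", content: prompt }];
  for (let step = 0; step < 5; step++) { // bounded loop guards against runaway agents
    const decision = mockModel(messages);
    if (decision.final) return decision.final;
    const { name, args } = decision.toolCall;
    messages.push({ role: "tool", content: tools[name](...args) });
  }
  return "step limit reached";
}

console.log(runAgent("What's the weather in Berlin?"));
// → Answer based on: Berlin: 18°C, clear
```

The structured tool schema mentioned above serves the same purpose as the `tools` map here: it tells the model exactly which calls are available and what arguments they take.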
n8n exposes several agent types within the AI Agent node:
| Agent type | Approach | Best for |
|---|---|---|
| Tools Agent | LangChain tool calling | Multi-tool orchestration, structured outputs |
| Conversational Agent | Dialogue-focused chain | Chatbots, support assistants |
| ReAct Agent | Reason + Act loop | Complex multi-step reasoning |
| Plan and Execute Agent | Decompose then execute | Long-horizon tasks |
| OpenAI Functions Agent | OpenAI function calling | OpenAI-specific deployments |
| SQL Agent | Text-to-SQL generation | Database querying from natural language |
The model sub-node supports connections to a wide range of providers:
| Provider | Notes |
|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5, plus Azure OpenAI |
| Anthropic | Claude 3 and 4 family |
| Google | Gemini models via AI Studio and Vertex AI |
| Groq | Hosted inference for Llama, Mixtral |
| Mistral Cloud | Mistral model family |
| AWS Bedrock | Access to multiple foundation models |
| Ollama | Local model hosting |
| Cohere | Command models |
| DeepSeek | DeepSeek-R1 and V3 |
| MiniMax and Moonshot Kimi | Asia-focused providers |
Memory sub-nodes persist conversational context so agents can maintain state across sessions. Options include simple in-memory buffers (for single-session use), MongoDB, Redis, PostgreSQL, and Zep (a dedicated memory store for AI applications).
For retrieval-augmented generation use cases, n8n includes vector store sub-nodes for Pinecone, Qdrant, Chroma, Weaviate, and Supabase Vector. Text splitter and embedding nodes prepare documents for ingestion. Retrieval nodes (MultiQuery, Contextual Compression) fetch relevant chunks at query time.
With n8n 2.0, a natural-language workflow builder became available in the editor. Users describe what they want to automate in plain text, and n8n generates a draft workflow as a starting point. The feature uses the platform's own AI credits (included in cloud plans) and is intended to lower the entry barrier for users who are not yet familiar with the node catalog.
The Chat node allows agentic workflows to pause mid-execution and ask a human for input before continuing. This pattern is useful for approval workflows, sensitive data handling, or situations where the agent's confidence is low. The agent resumes automatically once a response is received.
n8n is available in two deployment forms.
The Community Edition is the version available from the GitHub repository under the Sustainable Use License. It is free to download and run on any infrastructure the operator controls: a personal server, a VPS, a Kubernetes cluster, or a Docker container. There are no execution limits in the Community Edition; the only costs are hosting infrastructure (typically $10 to $50 per month for a small single-server setup, more for high-availability or high-throughput deployments).
The Community Edition includes every integration in the catalog, the full workflow canvas, the AI Agent node, and community node support. It does not include SSO, SAML/LDAP authentication, Git version control for workflows, multiple environments, or audit logging. Those features require a paid license.
Self-hosting gives operators full control over data residency, network access, and upgrade timing. It is popular with security-conscious teams, developers who want to extend n8n with custom nodes, and organizations with compliance requirements that make SaaS tools difficult.
n8n Cloud is the hosted managed service operated by n8n GmbH. It handles infrastructure, backups, and updates, but users are subject to execution quotas and monthly billing. Cloud plans include the AI Workflow Builder credits and access to a subdomain at n8n.cloud.
Cloud is simpler to start with: no server setup required, and the editor is accessible immediately after signup. It is the preferred option for teams that want automation without infrastructure management.
n8n ships with more than 400 built-in integration nodes; counting community nodes, the total exceeded 1,200 as of mid-2026. Integration nodes cover communication platforms, databases, CRM systems, cloud storage, payment processors, developer tools, and AI services.
Some frequently used categories:
| Category | Examples |
|---|---|
| Communication | Gmail, Outlook, Slack, Telegram, WhatsApp Business, Discord |
| CRM | Salesforce, HubSpot, Pipedrive, Zoho CRM |
| Database | PostgreSQL, MySQL, MongoDB, Redis, Supabase |
| Cloud storage | Google Drive, Dropbox, AWS S3, OneDrive |
| Project management | Notion, Asana, Jira, Linear, Trello |
| Developer tools | GitHub, GitLab, Jira, Linear, CircleCI |
| Payment | Stripe, PayPal, Chargebee |
| AI services | OpenAI, Anthropic, Google Gemini, Groq, Replicate |
| Data | Airtable, Google Sheets, Snowflake, BigQuery |
For services without a dedicated node, the generic HTTP Request node handles any REST API. The Code node covers cases requiring custom transformation or business logic.
The n8n community maintains additional nodes published as npm packages. The editor's Settings menu includes a Community Nodes section where administrators can install these packages by name. Community nodes extend n8n to less common services and custom internal tools. The ecosystem has grown significantly alongside the platform's user base.
Following the Series C announcement, n8n stated plans to expand the community node ecosystem further, enabling contributors to publish nodes globally and build businesses around custom integrations. The company indicated this would be a central use of the new funding.
n8n operates a workflow template marketplace at n8n.io/workflows. As of mid-2026, the marketplace listed more than 9,500 community-contributed workflow templates covering common automation patterns. Templates can be imported directly into the editor with a single click, providing a starting point rather than building from scratch. Categories include AI agent workflows, lead generation, data sync, DevOps, and document processing.
Cloud plans are billed by execution. One execution equals one complete workflow run, regardless of how many nodes the workflow contains. This per-run model contrasts with Zapier, which charges per step (called a "task"), making n8n substantially cheaper for multi-step workflows.
| Plan | Monthly price (annual billing) | Executions per month | Key features |
|---|---|---|---|
| Starter | €20 | 2,500 | 1 shared project, 5 concurrent executions, 50 AI credits |
| Pro | €50 | 10,000 | 3 shared projects, 20 concurrent executions, 150 AI credits, 7-day insights |
| Business | €667 | 40,000 | SSO/SAML/LDAP, Git version control, multiple environments, 6 shared projects |
| Enterprise | Custom | Custom | Unlimited projects, 200+ concurrent executions, 365-day insights, dedicated SLA |
Annual billing saves approximately 17% versus monthly. A startup discount of 50% off the Business plan is available for companies with fewer than 20 employees.
Teams that want to self-host but need SSO, audit logs, or Git-based workflow versioning can purchase a Self-Hosted Business or Enterprise license separately. The Community Edition remains free for unlimited internal use.
The per-execution pricing means cost scales with workflow frequency rather than workflow complexity. A 50-step workflow triggered once counts as one execution. Zapier and some competing tools count each action as a billable "task," so the same 50-step workflow would cost 50 tasks. For organizations running complex automations at high volume, the difference can be substantial.
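A worked example of the two billing models, using a 10-step workflow triggered 5,000 times per month:

```javascript
// Per-execution vs per-task billing for the same workload:
// a 10-step workflow triggered 5,000 times per month.
const steps = 10;
const runsPerMonth = 5000;

const n8nExecutions = runsPerMonth;       // one execution per complete run
const zapierTasks = runsPerMonth * steps; // one task per step per run

console.log(n8nExecutions); // → 5000
console.log(zapierTasks);   // → 50000
```

Under per-run billing the cost is independent of `steps`, so adding nodes to a workflow never increases the bill; under per-task billing the same change multiplies it.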
| | n8n | Zapier | Make | Lindy | Gumloop |
|---|---|---|---|---|---|
| Pricing model | Per execution | Per task (step) | Per operation | Subscription tiers | Per credit/run |
| Self-hosting | Yes (free) | No | No | No | No |
| Open source / source available | Source available (fair-code) | Closed source | Closed source | Closed source | Closed source |
| AI agent support | Native (LangChain-based) | Zapier Agents | Limited | Native (core feature) | Native (visual AI nodes) |
| Integrations | 400+ built-in, 1,200+ with community | 8,000+ | 1,000+ | 200+ | 200+ |
| Custom code | Yes (JS/Python) | Limited | Limited | No | Limited |
| Target user | Developers and technical teams | Non-technical users | Business/power users | Business users | No-code builders |
| Free tier | Unlimited self-hosted | 100 tasks/month | 1,000 ops/month | Limited | Limited |
| Minimum paid price | €20/month (Cloud) | $19.99/month | $9/month | $49.99/month | $97/month |
Zapier's primary advantage is breadth of integrations (8,000+) and an extremely low barrier to entry. It is the default choice for non-technical users connecting well-known SaaS tools. Its per-task pricing becomes expensive at scale; a 10-step Zap triggered 5,000 times per month costs 50,000 tasks.
Make (formerly Integromat) offers a visual scenario builder with branching logic, routers, and aggregators at a lower price point than Zapier. It lacks self-hosting and has more limited code execution options.
Lindy positions itself as an AI-first assistant layer rather than a workflow builder. Users describe tasks in natural language, and Lindy maintains context to handle inbox management, scheduling, CRM updates, and customer support. It requires less technical setup than n8n but provides less flexibility for custom logic.
Gumloop is a no-code visual AI workflow builder aimed at fast prototyping. It is best suited for content workflows, research automation, and simple data pipelines where quick iteration matters more than customization.
n8n's competitive positioning centers on technical depth, self-hosting, and AI-native workflow building. It is the tool most teams reach for when they need custom logic that exceeds what click-to-connect tools can handle, but do not want to write a full integration from scratch.
Vodafone used n8n to automate security threat intelligence workflows, connecting threat feeds, SIEM systems, and response playbooks. The automation reduced threat response time from hours to minutes and eliminated an estimated £2.2 million in annual operational costs by removing manual triage steps.
Delivery Hero faced around 800 account lockout requests per month. Each required an IT technician to verify the employee's identity and restore access across Okta and Google Workspace, averaging 35 minutes per request. After automating the process with n8n, the company estimated savings of more than 200 hours per month. The initial workflow led to identifying and automating additional manual processes including account offboarding and software license assignments.
Teams build support chatbots using the AI Agent node connected to a knowledge base in a vector store. A customer query triggers the agent, which retrieves relevant documentation via retrieval-augmented generation, composes a response using an LLM, and hands off to a human agent if confidence is low. The human-in-the-loop feature pauses execution and routes to a support queue when needed.
Data engineers use n8n to move data between systems without building custom ETL scripts. A typical workflow polls an API on a schedule, transforms the response using the Code node, and inserts records into a PostgreSQL table or a data warehouse. The visual canvas makes it easy to inspect what each step outputs before deploying.
Marketing teams connect CRM systems, email platforms, and analytics tools. Common patterns include syncing new leads from a form to a CRM, triggering onboarding email sequences based on user actions, and posting to social channels when a new blog post is published.
Development teams use n8n for internal tooling: triggering deployments from a Slack command, posting GitHub PR review reminders to a channel, routing error alerts from monitoring tools to the right on-call engineer, or generating changelogs from commit messages using an LLM node.
Organizations that want to make internal documents searchable by AI systems use n8n to build ingestion pipelines. A workflow watches a Google Drive folder or SharePoint site for new files, extracts text using a document loader node, splits it into chunks, generates embeddings via an OpenAI or Cohere embedding node, and stores the vectors in Pinecone or Qdrant. A separate workflow then provides a chat interface where employees can query the knowledge base and receive AI-generated answers with source citations. The entire pipeline runs on a self-hosted n8n instance, keeping proprietary documents within the organization's infrastructure.
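The ingestion side of such a pipeline reduces to a chunk-embed-store loop. The sketch below mocks the moving parts: `embed` stands in for a real embedding node, and the in-memory array stands in for a vector store like Pinecone or Qdrant:

```javascript
// Mocked RAG ingestion sketch: split a document into chunks, embed
// each chunk, store the vectors. `embed` and the in-memory store are
// stand-ins for real embedding and vector store nodes.
function splitIntoChunks(text, chunkSize) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// Fake 2-dimensional "embedding" so the sketch runs without a model.
const embed = (text) => [text.length, text.split(" ").length];

const vectorStore = [];
function ingest(docText) {
  for (const chunk of splitIntoChunks(docText, 40)) {
    vectorStore.push({ vector: embed(chunk), metadata: { chunk } });
  }
}

ingest("n8n watches a folder, extracts text, splits it, and stores embeddings for retrieval.");
console.log(vectorStore.length); // → 3
```

A real pipeline would use an overlap-aware text splitter and a provider embedding model, but the data flow (document → chunks → vectors → store) is the same.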
As AI capabilities have matured, teams have begun wiring multiple AI Agent nodes together in a single workflow to handle tasks that benefit from specialization. A supervisor agent might receive a user request, classify it, and hand it to one of several specialist agents (a research agent, a writing agent, a code-review agent) depending on the type of task. Each sub-agent runs its own tool loop and returns a result to the supervisor. n8n's visual canvas makes these multi-agent architectures possible to build without writing orchestration code from scratch.
n8n has a large and active community. The GitHub repository reached 100,000 stars in 2025 and held more than 186,000 stars by the end of 2025. The official community forum at community.n8n.io hosts discussion threads, workflow sharing, and feature requests.
The template marketplace at n8n.io/workflows contained more than 9,500 community-contributed workflows as of mid-2026. GitHub hosts additional community collections, including repositories with thousands of pre-built workflow JSON files covering a wide range of use cases.
n8n runs an official YouTube channel and blog (blog.n8n.io) that publishes tutorials, case studies, and product announcements. Creator profiles on n8n.io let community contributors publish workflows and build public portfolios.
Version 2.0 was the most significant architectural release since the platform's founding. Key changes included the Task Runners sandbox for the Code node, granular role-based access control, and the natural-language AI workflow builder.
n8n can now route across multiple AI providers within a single workflow, letting teams use different models for different steps. A reasoning step might use Anthropic Claude for its accuracy on complex tasks, while a text classification step uses a lighter and cheaper model from Groq. The platform's model sub-node abstracts the provider-specific API details.
Following the Series C, the company announced plans to evolve beyond the canvas with new interfaces designed for different user types. The canvas remains the primary interface for technical users building complex workflows. Simpler task-oriented interfaces are in development for business users who need to consume automations without configuring nodes.
Launched in 2025, Evaluations is a feature for testing AI workflow outputs against expected results, allowing teams to benchmark prompt changes or model swaps before deploying. Data Tables provides lightweight internal tabular storage within n8n, reducing the need for external spreadsheets or databases for simple state management.
Despite strong growth, several genuine limitations affect n8n in practice.
The learning curve is steeper than Zapier or Make. Users familiar with click-to-connect tools may find the node canvas overwhelming at first, particularly when building workflows that involve branching logic, loops, or custom JavaScript.
The self-hosted Community Edition requires the operator to handle updates, backups, and infrastructure. A production-grade setup with high availability, SSL termination, and logging adds engineering overhead that a managed SaaS product would abstract away.
For high-throughput use cases, n8n's single-threaded execution model can create bottlenecks when many workflows run concurrently without queue mode configured. Workflows that process large payloads can hit memory limits, and API rate limiting from external services is a common source of failures.
True autonomous agent behavior remains partially constrained by the visual workflow model. Workflows must be defined in advance; genuinely open-ended agent behavior that discovers and calls tools not already configured in the canvas requires workarounds.
The fair-code license means n8n is not freely redistributable as a commercial service. Organizations that want to build a product on top of n8n and charge customers for it need a separate enterprise license, which is a meaningful constraint compared to a fully permissive open source license.
Cloud SLA guarantees and redundancy options are less mature than those of Zapier and Make, which have been operating managed services for longer. This can be a concern for organizations running mission-critical workflows with strict uptime requirements.