| LangGraph | |
|---|---|
| Developer | LangChain Inc. |
| Initial release | January 8, 2024 |
| Stable release | 1.0 (October 2025) |
| Repository | github.com/langchain-ai/langgraph |
| Written in | Python, TypeScript |
| Type | Agent orchestration framework, LLM library |
| License | MIT |
| Lead creator | Nuno Campos (founding engineer, LangChain Inc.) |
| Founders of LangChain | Harrison Chase, Ankush Gola |
| Built on | LangChain, Pregel / Bulk Synchronous Parallel model |
| Common uses | Stateful agents, multi-agent systems, tool use workflows, human-in-the-loop applications |
| Notable users | Klarna, LinkedIn, Uber, JPMorgan, BlackRock, Cisco, Replit, Norwegian Cruise Line, Elastic, AppFolio |
| Related products | LangGraph Studio, LangGraph Platform (renamed LangSmith Deployment in October 2025), LangSmith |
| Website | langchain.com/langgraph |
LangGraph is an open-source low-level orchestration library for building stateful, multi-actor applications powered by large language models. It is developed by LangChain Inc., the company behind the more general-purpose LangChain framework, and it was created primarily by founding engineer Nuno Campos. LangGraph models agent workflows as graphs in which nodes are functions or sub-agents, edges describe the transitions between them, and a shared state object flows through every step. The graph abstraction supports cycles, conditional branching, persistence, streaming, time travel, and human-in-the-loop control, all of which are awkward or impossible to express with the linear chains that dominated earlier LLM frameworks.[1][2]
LangGraph was first released on January 8, 2024 in response to a recurring complaint from LangChain users: the original LangChain Expression Language (LCEL) was elegant for one-shot retrieval-augmented generation pipelines, but it could not naturally express the loop that an agent needs in order to think, call a tool, observe a result, and try again. By the time LangGraph reached its 1.0 milestone in October 2025, it had become the dominant open-source framework for production LLM agents, with around 90 million monthly downloads, hundreds of enterprise deployments, and customer references including Klarna, Uber, LinkedIn, BlackRock, JPMorgan, Cisco, Norwegian Cruise Line, Replit, Elastic, and AppFolio.[3][4][5]
The library is offered under the MIT license. A managed deployment surface called LangGraph Platform, later folded into the broader LangSmith Deployment product, sits on top of the open-source library and handles persistence, scaling, scheduling, and observability for long-running stateful agents in production.[6][7]
LangChain was founded by Harrison Chase as an open-source project in October 2022, only weeks before OpenAI released ChatGPT. LangChain's original promise was a set of standard abstractions (prompt templates, output parsers, retrievers, memory, agent executors, and chains) that let developers swap models and providers without rewriting glue code. By mid-2023, the library had become the de facto entry point for building LLM applications, and LangChain Inc. raised a $25 million Series A from Sequoia Capital in February 2024 at a $200 million valuation.[8][9] In October 2025, the company raised an additional $125 million Series B led by IVP, with participation from CapitalG, Sapphire Ventures, Sequoia, Benchmark, and Amplify, lifting its total funding to $260 million and pushing its valuation to $1.25 billion.[10][11]
The early LangChain stack had a clear weakness when developers tried to ship real agents. The agent executor was a black box. It would take a user query, call an LLM, parse a tool name out of the response, run the tool, feed the observation back, and repeat until the model emitted a final answer. There was no way to inspect the loop mid-flight, no obvious way to persist state between calls, no way to involve a human in the middle of a long task, and no clean story for branching or coordinating multiple agents. LangChain Expression Language (LCEL), introduced later, made it easy to wire components together with the pipe operator, but LCEL was a directed acyclic graph by design. It could not loop.[12]
Nuno Campos, who joined LangChain as its first employee, led the design of a new abstraction modelled explicitly on graph computation. Inspired by Pregel (the bulk synchronous parallel framework that powered Google's web graph processing) and by Apache Beam, the new library represented agent control flow as a directed graph with explicit cycles, a shared state object, and a Pregel-style execution loop in which all node writes from one superstep become visible at the start of the next. The result, released as an open-source companion to LangChain on January 8, 2024, was named LangGraph.[1][13][14]
LangGraph is small at its core. The library exports four primary concepts (state, nodes, edges, and a graph compiler) plus a persistence layer made up of checkpointers and stores. Most production behavior comes from composing these pieces.
Every LangGraph application is built around a state object. State is defined as either a Python TypedDict or a Pydantic model and represents the entire memory of the graph at a given point in time. Each node in the graph receives the current state as input and returns a partial update. By default the new dictionary fields overwrite the old ones, but each field can declare a reducer function that combines the old and new values, for example by appending to a list of messages instead of replacing it. The combination of typed state plus per-field reducers makes LangGraph state both inspectable and mergeable, which is what enables time travel and branching later.[15]
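The sketch below shows what a minimal state definition looks like in Python; the field names are illustrative. The messages field declares operator.add as its reducer, so updates returned by nodes are appended to the existing list, while the next_step field has no reducer and is simply overwritten.

```python
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Reducer-backed field: each node's update is appended to the list.
    messages: Annotated[list, operator.add]
    # Plain field: each node's update replaces the previous value.
    next_step: str
```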
Nodes are ordinary Python or TypeScript functions. A node receives the current state and returns a dictionary of fields to update. Nodes can be pure functions, LLM calls, retrievers, tool use executors, or even nested subgraphs. They can be async, and they can run in parallel within the same superstep when fanned out by an edge. There is no special node base class to inherit from. Anything callable that takes a state and returns a partial state update is a valid node.
Edges describe how control moves between nodes. There are two kinds. A normal edge is an unconditional transition from node A to node B. A conditional edge is a function that inspects the current state and returns the name of the next node (or a list of names for parallel fan-out). Two reserved pseudo-nodes, START and END, mark the entry and exit points of the graph. Conditional edges are how a LangGraph agent decides whether to keep using tools, hand off to another agent, or finish.[15]
Unlike LCEL, edges in LangGraph can form cycles. The same conditional edge can route back to a node that was already visited, which is exactly what an agent loop looks like: call the LLM, inspect the response, call a tool if requested, return to the LLM, and repeat.
StateGraph is the main builder class. A developer instantiates it with a state schema, adds nodes by name, adds edges between them, sets the entry point, and then calls .compile() to produce a runnable graph. Compilation runs basic structural checks (no orphaned nodes, no edges to undefined targets) and is the place where runtime options such as the checkpointer and the interrupt list are attached. The compiled graph exposes synchronous invoke, asynchronous ainvoke, and a family of streaming methods.[15]
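Putting the pieces together, the following is a minimal sketch of a two-node graph whose conditional edge forms the agent loop described above. The node bodies are stand-ins for a real model call and tool execution, and the state schema matches the earlier sketch.

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    messages: Annotated[list, operator.add]   # appended via the reducer
    next_step: str                            # overwritten on each update

def llm_node(state: AgentState) -> dict:
    # Stand-in for a model call: ask for a tool on the first pass,
    # finish once a tool result is already in the message list.
    wants_tool = not any("tool result" in m for m in state["messages"])
    return {"messages": ["model reply"], "next_step": "tools" if wants_tool else "end"}

def tool_node(state: AgentState) -> dict:
    # Stand-in for running the requested tool.
    return {"messages": ["tool result"], "next_step": "llm"}

def route(state: AgentState) -> str:
    # Conditional edge: either continue to the tool node or stop.
    return "tools" if state["next_step"] == "tools" else END

builder = StateGraph(AgentState)
builder.add_node("llm", llm_node)
builder.add_node("tools", tool_node)
builder.add_edge(START, "llm")
builder.add_conditional_edges("llm", route)
builder.add_edge("tools", "llm")   # the cycle: tool output flows back to the model

graph = builder.compile()
result = graph.invoke({"messages": ["user question"], "next_step": ""})
```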
A second graph class, MessageGraph, was provided in early LangGraph for the very common case of an agent whose state is just a list of messages. MessageGraph was effectively syntactic sugar over StateGraph and was deprecated as the prebuilt agent helpers in langgraph.prebuilt (later langchain.agents) absorbed the same convenience.[16]
A checkpointer saves a snapshot of the state every time the graph progresses, keyed by a thread ID. Once a checkpointer is attached, every superstep is durable. If the process restarts, a request times out, or a human walks away from a half-finished approval, the next call with the same thread ID resumes from the last checkpoint instead of starting over. LangGraph ships several checkpointer implementations:
| Checkpointer | Storage | Typical use |
|---|---|---|
| MemorySaver | In-process Python dict | Local development and notebooks |
| SqliteSaver | SQLite database file | Single-machine demos and small services |
| PostgresSaver / AsyncPostgresSaver | PostgreSQL | Production, distributed deployments |
| Redis-based savers (community) | Redis | High-throughput caching scenarios |
| LangGraph Platform managed checkpointer | Managed Postgres on LangChain infra | Cloud and Bring-Your-Own-Cloud deployments |
In-memory checkpointers exist mostly for testing. The recommended production option is PostgresSaver, which the LangGraph Platform also uses under the hood for managed deployments.[17][18]
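As an illustration, attaching even the in-memory checkpointer is enough to make a graph resumable across calls. The sketch below assumes the builder from the earlier example; the thread ID is arbitrary, and PostgresSaver would be swapped in for production.

```python
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer to make every superstep durable.
graph = builder.compile(checkpointer=MemorySaver())

# All calls that share a thread_id resume from that thread's latest checkpoint.
config = {"configurable": {"thread_id": "support-ticket-42"}}
graph.invoke({"messages": ["first message"], "next_step": ""}, config)
graph.invoke({"messages": ["follow-up message"], "next_step": ""}, config)

# Inspect the latest persisted snapshot for the thread.
snapshot = graph.get_state(config)
print(snapshot.values["messages"])
```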
A second persistence primitive, the Store, was added to support long-term memory across threads. Where a checkpointer captures the running state of one conversation, a Store is a key-value record intended for facts that should outlive any single thread, for example the user's preferences or a profile that an agent has learned over many sessions.[19]
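A rough sketch of the Store API, using the bundled in-memory implementation; the namespace and keys are illustrative.

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

# Namespaces are tuples, typically scoped to a user or an application area.
namespace = ("users", "user-123")
store.put(namespace, "preferences", {"language": "en", "tone": "formal"})

# Unlike a checkpoint, this record is visible from any thread.
item = store.get(namespace, "preferences")
print(item.value)
```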
LangGraph supports two complementary mechanisms for pausing execution. Static interrupts, declared at compile time as interrupt_before=['node_name'] or interrupt_after=[...], halt the graph immediately before or after the named node and surface the current state to the caller. The caller can then inspect, edit, or replace the state, and resume the graph by calling it again with None as the input.[20]
Dynamic interrupts use the interrupt() function inside a node body. The function pauses the graph at exactly the line where it is called and returns the value provided by the caller on resume. Dynamic interrupts are useful when the decision about whether to involve a human depends on data that the node has just computed, for example a low-confidence score on a refund request. Both styles rely on the checkpointer to preserve state across the pause.[20]
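The sketch below pauses a node with a dynamic interrupt and resumes it with a human decision. The refund scenario and field names are illustrative, and the pause relies on a checkpointer being attached.

```python
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt

class RefundState(TypedDict):
    amount: float
    approved: bool

def approve_refund(state: RefundState) -> dict:
    if state["amount"] > 100:
        # Pause here; the payload is surfaced to the caller, and the value
        # supplied on resume becomes interrupt()'s return value.
        decision = interrupt({"question": "Approve this refund?", "amount": state["amount"]})
        return {"approved": bool(decision)}
    return {"approved": True}

builder = StateGraph(RefundState)
builder.add_node("approve", approve_refund)
builder.add_edge(START, "approve")
builder.add_edge("approve", END)
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "refund-1"}}
graph.invoke({"amount": 250.0, "approved": False}, config)   # pauses inside approve_refund
graph.invoke(Command(resume=True), config)                   # a human approves; the node resumes
```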
Because the checkpointer stores the entire state at every superstep, the user can rewind a graph to any earlier point and continue from there with new input. This is exposed through a get_state_history API that returns the ordered list of past checkpoints. A developer can pick one, modify the state, and resume execution from that point. The same feature underlies the "undo" and "what-if" debugging affordances in LangGraph Studio, where an engineer can step backward to before a wrong tool call and rerun the rest of the agent with a corrected input.[21]
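A short sketch of the same idea in code, assuming a compiled graph with a checkpointer that has already processed several supersteps on the thread below.

```python
config = {"configurable": {"thread_id": "support-ticket-42"}}

# Checkpoints are returned newest-first; pick an earlier one to branch from.
history = list(graph.get_state_history(config))
earlier = history[-2]

# Optionally edit the state at that point, then resume from the forked checkpoint.
forked = graph.update_state(earlier.config, {"next_step": "tools"})
graph.invoke(None, forked)
```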
LangGraph compiled graphs expose three streaming modes that can be combined in a single subscription. values mode streams the full state after each superstep. updates mode streams only the partial updates that each node returned. messages mode streams LLM tokens as they are generated, which is what user interfaces use to render the typewriter effect. A custom stream channel lets a long-running tool emit progress signals out of band, for example to show "reading file 4 of 12" while a node is still executing.[22]
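In code, the mode is chosen per call. A rough sketch against the compiled graph from the earlier examples:

```python
config = {"configurable": {"thread_id": "stream-demo"}}
inputs = {"messages": ["user question"], "next_step": ""}

# "updates" yields each node's partial update as it finishes;
# stream_mode="values" would yield the full state after every superstep instead.
for chunk in graph.stream(inputs, config, stream_mode="updates"):
    print(chunk)

# Several modes can be subscribed to at once; chunks become (mode, payload) pairs.
for mode, payload in graph.stream(inputs, config, stream_mode=["updates", "messages"]):
    print(mode, payload)
```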
Two lower-level primitives extend the graph beyond a static set of nodes and edges. Send allows a node or edge to dispatch a payload to a named node with a custom partial state, which is the canonical way to do dynamic fan-out (for example, mapping an agent over an unknown number of subtasks). Command is an object a node can return that combines a state update with an explicit routing instruction. The two together are how prebuilt agent patterns express handoffs without rewriting the graph topology at every iteration. Underneath, LangGraph's runtime is an implementation of the Pregel / Bulk Synchronous Parallel model: nodes communicate only through state channels, and writes from one superstep become visible to readers only at the start of the next.[2][13]
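A minimal sketch of dynamic fan-out with Send, mapping a worker node over an unknown number of subtasks; the task and field names are illustrative.

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class MapState(TypedDict):
    subtasks: list
    results: Annotated[list, operator.add]   # each worker's output is appended

def fan_out(state: MapState):
    # One Send per subtask; every worker invocation sees only its own payload.
    return [Send("worker", {"subtask": t}) for t in state["subtasks"]]

def worker(payload: dict) -> dict:
    return {"results": [f"done: {payload['subtask']}"]}

builder = StateGraph(MapState)
builder.add_node("worker", worker)
builder.add_conditional_edges(START, fan_out, ["worker"])
builder.add_edge("worker", END)
graph = builder.compile()

print(graph.invoke({"subtasks": ["a", "b", "c"], "results": []}))
```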
The langgraph.prebuilt package (renamed and partially absorbed into langchain.agents in the 1.0 release cycle) ships ready-made building blocks. create_react_agent constructs a tool-calling ReAct-style agent in a single function call, given a model and a list of tools. ToolNode wraps a list of tool callables into a node that reads tool calls off the latest assistant message, runs them in parallel, and emits the resulting tool messages back into the state. Both helpers are simply convenience constructors that produce the same StateGraph a developer could assemble by hand.[23]
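A sketch of the prebuilt helper; the model identifier is illustrative (a chat model instance can be passed instead), and any callable with a docstring can serve as a tool.

```python
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"It is sunny in {city}."

# The helper compiles a StateGraph with a model node, a ToolNode, and the
# conditional edge that loops between them.
agent = create_react_agent("openai:gpt-4o-mini", tools=[get_weather])
result = agent.invoke({"messages": [{"role": "user", "content": "Weather in Oslo?"}]})
```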
LangChain Expression Language (LCEL) and LangGraph cover overlapping ground but have very different strengths. The table below summarizes the practical trade-offs.
| Capability | LangChain LCEL | LangGraph |
|---|---|---|
| Programming model | Pipe operator over Runnables, declarative DAG | Explicit graph of nodes and edges, imperative |
| Topology | Directed acyclic graph (no loops) | Directed graph with cycles |
| State | Implicit, per-call | Explicit, typed, shared across nodes |
| Persistence | None at the chain level | First-class via checkpointers |
| Human-in-the-loop | Not supported natively | Static and dynamic interrupt primitives |
| Time travel and replay | Not supported | Native via checkpoint history |
| Streaming | Token streaming via callbacks | Token, node, and custom event streams |
| Multi-actor coordination | Awkward (chained or nested chains) | Designed for it (subgraphs, supervisor, swarm) |
| Sweet spot | Single-pass RAG, summarization, prompt + parser pipelines | Long-running agents, multi-agent systems, durable workflows |
| Learning curve | Low for simple pipelines | Moderate, requires graph thinking |
LangChain Inc. has been explicit that the two libraries are complementary rather than competing. LangChain (and LCEL) provides the primitive components (model wrappers, retrievers, parsers, memory adapters) and standardized interfaces, while LangGraph provides the orchestration layer for anything stateful, agentic, or long-running. Since the 1.0 release cycle, LangChain's own agent helpers have been built on top of LangGraph internally, so they automatically inherit checkpointing, streaming, and human-in-the-loop support.[12][24]
LangGraph treats single agents and multi-agent systems as the same kind of object: a graph. Multi-agent designs differ only in how the graph is wired and how state is partitioned. Two patterns have become canonical, and a third, hierarchical agent teams, builds on both through subgraphs.
In the supervisor pattern a single "router" agent sits at the center of the graph and decides which specialist to call next based on the current state. Each specialist agent is itself a node (often itself a compiled subgraph) that performs a focused task and returns a result. The supervisor is responsible for sequencing, aggregation, and termination. Because routing is the supervisor's only job, the supervisor's prompt can be small and focused, which makes the pattern easier to debug and more accurate than letting agents decide handoffs on their own. LangChain Inc. ships a dedicated helper library, langgraph-supervisor, that compiles a supervisor graph from a list of named agents and an optional manager LLM.[25][26]
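A rough sketch using the helper library; the agents, tools, and model identifier are illustrative, and the exact signature may vary across langgraph-supervisor versions.

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

def search_web(query: str) -> str:
    """Stand-in for a web search tool."""
    return f"results for {query}"

def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

model = init_chat_model("openai:gpt-4o-mini")
research_agent = create_react_agent(model, tools=[search_web], name="researcher")
math_agent = create_react_agent(model, tools=[add], name="mathematician")

# The supervisor routes each turn to one named specialist and decides when to stop.
workflow = create_supervisor(
    [research_agent, math_agent],
    model=model,
    prompt="Route each request to the researcher or the mathematician.",
)
app = workflow.compile()
```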
The swarm pattern removes the supervisor and lets agents hand off to each other directly. Each agent has a set of handoff tools whose only job is to return a Command that transfers control to a named peer along with any context to share. There is no central router, so the swarm tends to be faster (one fewer LLM call per hop) but harder to keep on rails. The supervisor pattern wins on accuracy and observability; the swarm pattern wins on latency and fits open-ended collaboration tasks where any agent might reasonably take the next step.[26]
Because every compiled graph is itself a callable that takes a state and returns a state, a graph can be used as a node in another graph. This composition makes hierarchical agent teams natural: a top-level supervisor manages mid-level supervisors, each of which manages its own pool of specialists. Subgraphs can use their own state schema and their own checkpointer, which makes it possible to scope memory and persistence to a single team without mixing it into the parent's state.[27]
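Because a compiled graph and a plain function share the same calling convention, composing them needs no special API. A minimal sketch with illustrative names, where parent and child share the messages key:

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END

class TeamState(TypedDict):
    messages: Annotated[list, operator.add]

def specialist(state: TeamState) -> dict:
    return {"messages": ["specialist output"]}

# Child graph: a self-contained team with its own topology.
team_builder = StateGraph(TeamState)
team_builder.add_node("specialist", specialist)
team_builder.add_edge(START, "specialist")
team_builder.add_edge("specialist", END)
research_team = team_builder.compile()

# Parent graph: the compiled child graph is added directly as a node.
parent_builder = StateGraph(TeamState)
parent_builder.add_node("research_team", research_team)
parent_builder.add_edge(START, "research_team")
parent_builder.add_edge("research_team", END)
parent = parent_builder.compile()

print(parent.invoke({"messages": ["task"]}))
```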
LangGraph Studio is a desktop and browser visual debugger for LangGraph applications. It renders the compiled graph topology, shows live state at every superstep, lets the developer rewind to any past checkpoint, edit state in place, and resume execution. The Studio integrates directly with the checkpoint history exposed by the open-source library, which means that the visualization is not a separate trace but the graph's actual run state.[28]
Studio also supports breakpoints (a UI for the same interrupt_before/interrupt_after primitives), side-by-side replays of alternative branches created by editing past state, and a chat-style harness for testing agents under realistic input. It ships as part of the LangGraph Platform but is widely used in local development against the open-source library.[7][28]
LangGraph Platform is the managed deployment surface for LangGraph applications. It was first announced in beta in mid-2024, reached general availability in May 2025, and was rebranded as LangSmith Deployment in October 2025 when LangChain Inc. consolidated its commercial product line under the LangSmith brand.[6][7]
The Platform layer adds horizontally scalable infrastructure for long-running stateful agents, providing managed persistence, scaling, scheduling, and observability on top of the open-source library.
By the time of the 1.0 release, around 400 companies had deployed agents on LangGraph Platform, including Cisco, Uber, LinkedIn, BlackRock, and JPMorgan.[3][4]
The Swedish payments company Klarna built its customer-support AI assistant on LangGraph and LangSmith. The assistant handles inquiries from Klarna's roughly 85 million active users and is reported to have reduced customer-resolution time by about 80%. Klarna has cited LangGraph's stateful execution and LangSmith's tracing as the combination that let the assistant move out of an experimental pilot and into a primary support channel.[3]
Replit's coding assistant, marketed as Replit Agent, is built on LangGraph. Michele Catasta, Replit's vice president of AI, has said that LangGraph gives the team "the control and ergonomics we need to build and ship powerful coding agents," and that fine-grained graph control is what made it possible to ship agents to millions of users with the level of reliability they needed in a coding context.[3]
Norwegian Cruise Line uses LangGraph to power guest-facing AI experiences. The company has publicly said that LangGraph's framework for stateful, multi-actor LLM applications has changed how it evaluates and tunes its agents, and that the granular control over the agent's reasoning lets product teams make data-driven decisions about prompt and tool changes rather than relying on opaque agent executors.[3]
Elastic uses LangGraph to orchestrate networks of AI agents for real-time security threat detection in its observability and security products. The company reports that LangGraph-based agents have meaningfully reduced the manual workload for SecOps teams by automating the triage of alerts and the correlation of evidence across signals.[3]
Uber's developer-platform team uses LangGraph for large-scale agent-driven code migrations across the company's monorepos. LinkedIn deployed an internal SQL assistant called SQL Bot, a LangGraph-powered multi-agent system that translates natural-language questions into SQL queries against LinkedIn's data warehouse. AppFolio, which builds property-management software, built a copilot that, the company says, has saved property managers more than ten hours of work per week and roughly doubled decision accuracy after migrating to LangGraph.[4][30]
The agent-framework space saw an explosion of options between 2023 and 2026. The four most-cited alternatives or competitors to LangGraph are Microsoft's AutoGen, the open-source CrewAI, OpenAI's Agents SDK (the successor to the Assistants API and the Swarm experiment), and Anthropic's Claude Code agent harness for software engineering. They differ less in features than in worldview.
| Framework | Vendor | Core metaphor | Strengths | Trade-offs |
|---|---|---|---|---|
| LangGraph | LangChain Inc. | State graph with explicit nodes, edges, and checkpoints | Most fine-grained control; durable state, time travel, native human-in-the-loop, mature observability via LangSmith | Steeper learning curve than chat-style frameworks; requires graph thinking |
| AutoGen | Microsoft Research | Multi-agent conversation | Strong code execution agent; good for research and prototyping; integrates with Azure | Conversation transcript can balloon; less control over branching and durable state |
| CrewAI | CrewAI Inc. (open source) | Role-based crews led by a manager agent | Lowest barrier for non-graph thinkers; mimics human team structure; very fast to start | Less control over deterministic flow; harder to add hard guardrails; SOC 2 still in progress as of 2025 |
| OpenAI Agents SDK | OpenAI | Lightweight agent with function calling and handoffs | Minimal API; quickest path to a working agent on the OpenAI stack; deep tie-in to OpenAI tools | Tied to OpenAI models and platform; thinner story for long-running state, multi-vendor model routing, and self-hosting |
| Claude Code | Anthropic (Claude) | Terminal-based coding agent with permissioned tool use | Excellent for software engineering tasks; tight integration with Claude tool-use behavior | Optimized for coding rather than general orchestration; not a general-purpose framework |
Independent comparisons in 2025 and 2026 from Galileo, DataCamp, Composio, Langfuse, and the Maxim AI guide tend to converge on similar conclusions. AutoGen feels natural when work is structured as a conversation among specialists. CrewAI feels intuitive when work decomposes cleanly into roles. The OpenAI Agents SDK is the fastest way to ship something that works inside the OpenAI ecosystem. Claude Code is best when the task is software engineering. LangGraph is the most production-ready of the open-source options when an application needs deterministic flow control, durable state, replayable runs, and mature observability, especially under regulated workloads in finance, healthcare, and security.[31][32][33][34][35]
The ecosystem is converging. LangChain's own agent helpers are now LangGraph graphs underneath, and several other frameworks have adopted similar ideas about checkpoints and human-in-the-loop primitives. As of 2026, LangGraph and AutoGen are the two open-source frameworks with explicit enterprise certifications; the vendor SDKs from OpenAI and Anthropic inherit their providers' compliance posture; CrewAI is reported to be working toward SOC 2.[31]
| Abstraction | Purpose | Where it lives |
|---|---|---|
| StateGraph | Builder for a typed-state graph of nodes and edges | langgraph.graph |
| MessageGraph (deprecated) | Convenience builder for message-list state | langgraph.graph |
| START, END | Reserved entry and exit pseudo-nodes | langgraph.graph |
| Conditional edge | Function returning the next node name(s) from state | add_conditional_edges on StateGraph |
| Send | Dispatch a custom payload to a named node | langgraph.types.Send |
| Command | Combine a state update with explicit routing | langgraph.types.Command |
| interrupt_before / interrupt_after | Static breakpoints set at compile time | StateGraph.compile arg |
| interrupt() | Dynamic pause inside a node body | langgraph.types.interrupt |
| MemorySaver, SqliteSaver, PostgresSaver | Checkpointers for state persistence | langgraph.checkpoint.* |
| Store | Long-term memory across threads | langgraph.store.* |
| create_react_agent | Prebuilt tool-calling ReAct agent | langgraph.prebuilt (and langchain.agents) |
| ToolNode | Prebuilt node that runs tool calls in parallel | langgraph.prebuilt |
LangGraph reached its 1.0 stable release in October 2025, alongside a coordinated 1.0 release of LangChain itself. LangChain Inc. positioned 1.0 as the first stable major release of a durable agent framework rather than a feature-heavy launch: most of the 1.0 work was about consolidating APIs, removing churn, and codifying patterns that had already been validated at customers like Uber, LinkedIn, and Klarna. LangGraph 1.0 maintained full backward compatibility, with the only notable change being the deprecation of the langgraph.prebuilt namespace in favor of equivalent helpers in langchain.agents. Python 3.9 support was dropped in line with its end-of-life in October 2025; LangGraph 1.0 requires Python 3.10 or newer.[36]
In the same release window, LangGraph Platform was renamed LangSmith Deployment, completing a multi-quarter consolidation of LangChain Inc.'s commercial offerings under the LangSmith brand: LangSmith Observability, LangSmith Evaluation, and LangSmith Deployment. The open-source LangGraph library kept its name and remained the recommended foundation for new agent development.[6][7]
LangGraph's design choices map directly to the workloads it is best at and the workloads where lighter-weight tools are easier to reach for.