LangGraph.js Concept Guide

Welcome to LangGraph.js, a JavaScript library designed for building complex, scalable AI agents using a graph-based state machine. In this guide, we will explore the core concepts behind LangGraph.js and why it excels in creating reliable and fault-tolerant agent systems. We assume you have already learned the basics introduced in the Quick Start guide and want to dive deeper into the fundamental design and internal workings of LangGraph.js.



Background: Agents as Graphs and AI Workflows

While definitions of “AI agents” vary, we define an “agent” as any system that lets a language model control a looped workflow and take actions. Typical LLM agents use a “Reasoning and Acting” (ReAct) style design, applying an LLM to drive a basic loop comprising the following steps (sketched in code after the list):

  1. Reason and plan actions to take.
  2. Take action using tools (regular software functions).
  3. Observe the effects of the tools and appropriately replan or react.
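
In code, a naive loop of this kind looks roughly like the following sketch. Note that callModel and executeTool are hypothetical stand-ins for an LLM call and a tool registry, not part of any library:

type Decision =
  | { type: "finish"; answer: string }
  | { type: "act"; tool: string; input: string };

// Hypothetical stand-ins for an LLM call and a tool registry.
declare function callModel(scratchpad: string): Promise<Decision>;
declare function executeTool(tool: string, input: string): Promise<string>;

async function naiveAgentLoop(task: string): Promise<string> {
  let scratchpad = `Task: ${task}`;
  while (true) {
    // 1. Reason and plan the next action.
    const decision = await callModel(scratchpad);
    if (decision.type === "finish") return decision.answer;
    // 2. Take action using a tool (a regular software function).
    const observation = await executeTool(decision.tool, decision.input);
    // 3. Observe the result and loop so the model can replan.
    scratchpad += `\nObservation: ${observation}`;
  }
}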

While LLM agents can perform well in this setup, naive agent loops can’t provide the reliability users expect at scale: their behavior has a beautiful randomness to it. Well-designed composite systems harness that randomness and apply it deliberately, making them fault-tolerant to errors in LLM outputs, because errors will happen.

We believe agents are exciting and novel, but AI design patterns should be applied with the good engineering practices of traditional (“Software 1.0”) development. Some similarities include:

  • AI applications must balance autonomous operations with user control.
  • Agent applications are akin to distributed systems in error tolerance and correction.
  • Multi-agent systems resemble multiplayer network applications in parallelism and conflict resolution.
  • Everyone likes an undo button and version control.

LangGraph.js’s primary StateGraph abstraction is designed to support these and other needs. It provides a lower-level API than other agent frameworks (such as LangChain’s AgentExecutor), giving you full control over how and where “AI” is applied.

It is inspired by Google’s Pregel graph-processing framework and offers fault tolerance and recovery for long-running or error-prone workloads. During development, you can focus on a single operation or task-specific agent, and the system assembles these operations into a more robust and scalable application.

Its parallelism and state reduction features allow you to control how to handle conflicting information returned by multiple agents.

Lastly, its persistent versioned checkpoint system enables you to roll back agent states, explore alternative paths, and have full control over what is happening.

The following sections delve into how these concepts work and why they are designed this way.



Core Design

At its core, LangGraph.js models agent workflows as a state machine. You can define agent behavior using three key components:

  • State: A shared data structure representing a snapshot of the application’s current state. It can be any TypeScript type but is usually an interface or class.
  • Nodes: TypeScript functions encoding the agent logic. They take the current State as input, perform some computation or side effect, and return an update to the State.
  • Edges: Control flow rules determining which Node to execute next based on the current State. They can be conditional branches or fixed transitions.

By combining Nodes and Edges, you can create complex looped workflows that evolve the State over time. The true power lies in how LangGraph.js manages these States.

In short: Nodes do work; Edges determine what to do next.

LangGraph.js’s underlying graph algorithm defines a general program using message passing. When a Node completes, it sends messages (States) along one or more Edges to other Nodes. These Nodes run their functions and pass the result messages to the next set of Nodes, and so on. Inspired by Pregel, the program executes in conceptually parallel discrete “supersteps.” When the graph runs, all Nodes start in an inactive state. When an incoming edge (or “channel”) receives a new message (State), the Node becomes active, runs its function, and responds with updates. At the end of each superstep, if no more incoming edge messages exist, Nodes vote to halt, marking themselves as inactive. When all Nodes are inactive and no messages are in transit, the graph terminates.

We’ll illustrate a complete StateGraph execution later, but first, let’s delve into these concepts.



Nodes

In a StateGraph, Nodes are usually TypeScript functions (synchronous or asynchronous) whose first argument is the State and whose optional second argument is a RunnableConfig containing configurable parameters (such as a thread_id).

Similar to NetworkX, you can add these Nodes to the graph using the addNode method:

Node Example

import { END, START, StateGraph, StateGraphArgs } from "@langchain/langgraph";
import { RunnableConfig } from "@langchain/core/runnables";

interface IState {
  input: string;
  results?: string;
}

// This defines the agent state
const graphState: StateGraphArgs<IState>["channels"] = {
  input: {
    value: (x?: string, y?: string) => y ?? x ?? "",
    default: () => "",
  },
  results: {
    value: (x?: string, y?: string) => y ?? x ?? "",
    default: () => "",
  },
};

function myNode(state: IState, config?: RunnableConfig) {
  console.log("In node:", config?.configurable?.user_id);
  return { results: `Hello, ${state.input}!` };
}

// The second parameter is optional
function myOtherNode(state: IState) {
  return state;
}

const builder = new StateGraph({ channels: graphState })
  .addNode("my_node", myNode)
  .addNode("other_node", myOtherNode)
  .addEdge(START, "my_node")
  .addEdge("my_node", "other_node")
  .addEdge("other_node", END);

const graph = builder.compile();

const result1 = await graph.invoke(
  { input: "Will" },
  { configurable: { user_id: "abcd-123" } }
);

// In node: abcd-123
console.log(result1);
// { input: 'Will', results: 'Hello, Will!' }

Under the hood, your functions are wrapped in a RunnableLambda, which adds batch and async support, along with native tracing and debugging.



Edges

Edges define how logic is routed and when the graph decides to stop. Like Nodes, conditional Edges take the current graph State and return a value.

By default, this value is the name of the next Node or Nodes to send the State to. All these Nodes will run in parallel as part of the next superstep.

If you want to reuse Edges, you can optionally provide a dictionary mapping the Edge’s output to the name of the next Node.

If you always want to go from Node A to Node B, you can use the addEdge method directly.

If you want to conditionally route to one or more Edges (or conditionally terminate), you can use the addConditionalEdges method.

If a Node has multiple outgoing Edges, all those target Nodes will run in parallel in the next superstep.
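
Here is a minimal sketch of both kinds of Edges, in the same channels style as the Node example above. The node names and the shouldContinue router are illustrative, not from the earlier example:

import { END, START, StateGraph } from "@langchain/langgraph";

interface RouteState {
  question: string;
  answer: string;
}

// Illustrative router: its return value is translated into the
// next Node's name via the mapping passed to addConditionalEdges.
function shouldContinue(state: RouteState) {
  return state.answer === "" ? "retry" : "done";
}

const routingBuilder = new StateGraph<RouteState>({
  channels: {
    question: { value: (x: string, y?: string) => y ?? x, default: () => "" },
    answer: { value: (x: string, y?: string) => y ?? x, default: () => "" },
  },
})
  .addNode("generate", (state) => ({ answer: `Echo: ${state.question}` }))
  // Fixed edge: always go from START to "generate".
  .addEdge(START, "generate")
  // Conditional edge: route based on the State, reusing one router
  // via a mapping from its outputs to Node names (or END).
  .addConditionalEdges("generate", shouldContinue, {
    retry: "generate",
    done: END,
  });

const routingGraph = routingBuilder.compile();
console.log(await routingGraph.invoke({ question: "ping", answer: "" }));
// { question: 'ping', answer: 'Echo: ping' }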



State Management

LangGraph.js introduces two key concepts for state management: state interfaces and reducers.

State interfaces define the type of the object passed to each Node in the graph.

Reducers define how Node outputs are applied to the current State. For example, you can use a reducer to merge new conversation responses into the conversation history or average outputs from multiple agent Nodes. By annotating your state fields with reducer functions, you can precisely control how data flows through your application.

We’ll illustrate how reducers work with an example. Compare the following two states. Can you guess the output in each case?

State Management

import { END, START, StateGraph } from "@langchain/langgraph";

interface StateA {
  myField: number;
}

const builderA = new StateGraph<StateA>({
  channels: {
    myField: {
      // "Override" is the default behavior:
      value: (_x: number, y: number) => y,
      default: () => 0,
    },
  },
})
  .addNode("my_node", (_state) => ({ myField: 1 }))
  .addEdge(START, "my_node")
  .addEdge("my_node", END);

const graphA = builderA.compile();

console.log(await graphA.invoke({ myField: 5 }));
// { myField: 1 }

and StateB:

interface StateB {
  myField: number;
}

// The "add" reducer defines *how* state updates are applied
// to this field.
function add(existing: number, updated?: number) {
  return existing + (updated ?? 0);
}

const builderB = new StateGraph<StateB>({
  channels: {
    myField: {
      value: add,
      default: () => 0,
    },
  },
})
  .addNode("my_node", (_state) => ({ myField: 1 }))
  .addEdge(START, "my_node")
  .addEdge("my_node", END);

const graphB = builderB.compile();

console.log(await graphB.invoke({ myField: 5 }));

// { myField: 6 }

If you guessed “1” and “6,” you were right!

In the first case (StateA), the result is “1” because the default reducer is a simple override. In the second case (StateB), the result is “6” because we provided an add function as the reducer. That function takes the existing value (for that field) and the update (if provided) and returns the new value for the field.

In general, a field’s reducer tells the graph how to apply updates to that field.

When building a simple chatbot like ChatGPT, the state can be as simple as a list of chat messages. This is the state used by MessageGraph, a lightweight wrapper around StateGraph whose state is only slightly more complex than the following:

Root Reducer

import { StateGraph, END, START } from "@langchain/langgraph";

const builderE = new StateGraph({
  channels: {
    __root__: {
      value: (x: string[], y?: string[]) => x.concat(y ?? []),
      default: () => [],
    },
    },
  },
})
  .addNode("my_node", (state) => {
    return [`Adding a message to ${state}`];
  })
  .addEdge(START, "my_node")
  .addEdge("my_node", END);

const graphE = builderE.compile();

console.log(await graphE.invoke(["Hi"]));

// ["Hi", 'Added a message to Hi']

Using shared state in a graph has some design trade-offs. For example, you might think this feels like using dreadful global variables (though this can be mitigated with a namespace parameter). However, shared typed state provides many benefits related to building AI workflows, including:

  • Data flow is fully inspectable before and after each “superstep.”
  • State is mutable, allowing users or other software to write to the same state between supersteps to control the agent’s direction (using updateState; see the sketch after this list).
  • Checkpoints are well-defined, making it easy to save and restore or even fully version-control the entire workflow execution in any storage backend.
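
For instance, here is a minimal sketch of inspecting and editing a thread’s state between supersteps. It assumes a graph compiled with a checkpointer (as in the Persistence examples below) whose state includes a results field:

// Assumes: const graph = builder.compile({ checkpointer: new MemorySaver() });
const threadConfig = { configurable: { thread_id: "some-thread" } };

// Read the latest checkpointed state and the next Nodes to run.
const snapshot = await graph.getState(threadConfig);
console.log(snapshot.values, snapshot.next);

// Write an update as if a Node had returned it; it flows through
// the same reducers as any other state update.
await graph.updateState(threadConfig, { results: "Edited by a human" });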

We’ll discuss checkpoints in more detail in the next section.



Persistence

Any “intelligent” system needs memory to operate. AI agents are no exception and require memory across one or more time frames:

  • They always need to remember the steps already taken in the current task (to avoid repeating work when answering a given query).
  • They often need to remember the previous rounds of a multi-turn conversation with a user (for coreference resolution and additional context).
  • Ideally, they “remember” previous interactions with the user and behavior in a given “environment” (e.g., application context) so they can behave in a more personalized and efficient way.

The latter form of memory covers a lot (personalization, optimization, continual learning, etc.) and is beyond the scope of this article, though it can be layered onto any LangGraph.js workflow; we are actively exploring the best ways to expose it natively.

The first two forms of memory are natively supported by the StateGraph API through checkpoints.



Checkpoints

Checkpoints represent the state of a thread across a (potentially) multi-turn interaction between one or more users (or other systems) and the graph. Checkpoints created partway through a run record the set of next Nodes to execute when resuming from that state. Checkpoints created at the end of a run are the same, except they have no next Nodes to transition to (the graph is waiting for user input).

Checkpoints support chat memory and more, allowing you to mark and persist every state the system takes, whether within a single run or across multi-turn interactions. Let’s explore why this is useful:

Single Run Memory

Within a given run, a checkpoint is written at every step. That means you can ask your agent to create world peace, and when it inevitably hits an error along the way, you can resume the task from its last saved checkpoint instead of starting over.

This also allows you to build human-in-the-loop workflows, common in customer support bots, programming assistants, and other applications. You can interrupt the graph’s execution before or after executing a given Node and “escalate” control to a user or support staff. The staff might respond immediately, or they might respond a month later. Regardless, your workflow can resume at any time as if no time has passed.
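
Here is a minimal sketch of that pattern. It assumes a builder with a tool-calling Node named “action” that a human should approve; the node name and input are illustrative, while interruptBefore is an actual compile option:

import { MemorySaver } from "@langchain/langgraph";

// Pause execution just before the (illustrative) "action" Node runs.
const reviewedGraph = builder.compile({
  checkpointer: new MemorySaver(),
  interruptBefore: ["action"],
});

const ticketConfig = { configurable: { thread_id: "support-ticket-42" } };

// Runs until it is about to execute "action", then stops and checkpoints.
await reviewedGraph.invoke({ input: "refund order" }, ticketConfig);

// ...a minute or a month later, a human approves. Invoking with null
// resumes from the saved checkpoint as if no time had passed.
await reviewedGraph.invoke(null, ticketConfig);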

Multi-Turn Memory

Checkpoints are saved under a thread_id to support multi-turn interactions between a user and the system. From the developer’s perspective, nothing about the graph changes when adding multi-turn memory: checkpoints work the same way throughout.

If you have some state you want to retain between turns and some state you want to consider “ephemeral,” you can clear the relevant state in the graph’s final Node.
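
For example, a final “cleanup” Node can reset an ephemeral field while leaving durable ones untouched. This is only a sketch: the scratchpad field is illustrative, and it assumes that channel uses an override reducer (such as (x, y) => y ?? x):

// Durable fields (e.g. conversation history) are simply not mentioned,
// so they carry over to the next turn unchanged; the ephemeral
// scratchpad is overwritten with an empty list before the run ends.
builder
  .addNode("cleanup", (_state) => ({ scratchpad: [] }))
  .addEdge("cleanup", END);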

Using checkpoints is as simple as calling compile({ checkpointer: myCheckpointer }), then invoking it with the thread_id in its configurable parameters. You can see more in the next section!



Configuration

For any given graph deployment, you might want some configurable values controlled at runtime. These values differ from graph inputs because they should not be considered state variables. They are more like “out-of-band” communication.

Common examples include the thread_id of a conversation, a user_id, which LLM to use, how many documents a retriever should return, and so on. While these values could be passed in the State, it is better to keep them separate from the regular data flow.
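
For example, a Node can read these values from its optional second argument instead of from the State. In this sketch, the llm_name and top_k keys are illustrative, while thread_id is the key LangGraph.js itself uses for checkpointing:

import { RunnableConfig } from "@langchain/core/runnables";

function retrieverNode(state: { query: string }, config?: RunnableConfig) {
  // Out-of-band settings arrive via config, not via State.
  const llmName = config?.configurable?.llm_name ?? "default-model";
  const topK = config?.configurable?.top_k ?? 4;
  console.log(`Using ${llmName} and returning the top ${topK} documents`);
  return { docs: [] };
}

// At call time, pass them alongside the thread_id:
// await graph.invoke(
//   { query: "..." },
//   { configurable: { thread_id: "t-1", llm_name: "my-model", top_k: 2 } }
// );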



Example

Let’s review another example to see how multi-turn memory works! Can you guess what result through result4 will be when running this graph?

Configuration

import { END, MemorySaver, START, StateGraph } from "@langchain/langgraph";

interface State {
  total: number;
  turn?: string;
}

function addF(existing: number, updated?: number) {
  return existing + (updated ?? 0);
}

const builder = new StateGraph<State>({
  channels: {
    total: {
      value: addF,
      default: () => 0,
    },
    // "turn" needs a channel too; this reducer simply overrides.
    turn: {
      value: (x?: string, y?: string) => y ?? x,
    },
  },
})
  .addNode("add_one", (_state) => ({ total: 1 }))
  .addEdge(START, "add_one")
  .addEdge("add_one", END);

const memory = new MemorySaver();

const graphG = builder.compile({ checkpointer: memory });

let threadId = "some-thread";

let config = { configurable: { thread_id: threadId } };

const result = await graphG.invoke({ total: 1, turn: "First Turn" }, config);

const result2 = await graphG.invoke({ turn: "Next Turn" }, config);

const result3 = await graphG.invoke({ total: 5 }, config);

const result4 = await graphG.invoke(
  { total: 5 },
  { configurable: { thread_id: "new-thread-id" } }
);

console.log(result);
// { total: 2, turn: 'First Turn' }
console.log(result2);
// { total: 3, turn: 'Next Turn' }
console.log(result3);
// { total: 9, turn: 'Next Turn' }
console.log(result4);
// { total: 6 }

For the first run, no checkpoint is found, so the graph runs on the original input. The total value increases from 1 to 2, and the turn is set to “First Turn.”

For the second run, the user provides an update for “turn” but not for the total! Since the state is loaded from the checkpoint, the previous total is incremented by one (in our “add_one” Node), and “turn” is overwritten by the user.

For the third run, “turn” remains unchanged because it is loaded from the checkpoint but not overwritten by the user. The total is incremented by the value provided by the user because it updates the existing value using the add function.

For the fourth run, we use a new thread id, and no checkpoint is found, so the result is just the user’s provided total plus one.

You might notice that this user-facing behavior is equivalent to running the following commands without a checkpointer.

Configuration

const graphB = builder.compile();
const resultB1 = await graphB.invoke({ total: 1, turn: "First Turn" });
const resultB2 = await graphB.invoke({ ...result, turn: "Next Turn" });
const resultB3 = await graphB.invoke({ ...result2, total: result2.total + 5 });
const resultB4 = await graphB.invoke({ total: 5 });

console.log(resultB1);
// { total: 2, turn: 'First Turn' }
console.log(resultB2);
// { total: 3, turn: 'Next Turn' }
console.log(resultB3);
// { total: 9, turn: 'Next Turn' }
console.log(resultB4);
// { total: 6 }

Run it yourself to confirm the equivalence. User inputs and checkpoint loads are treated like any other state update.

Now that we’ve covered the core concepts of LangGraph.js, it might be more helpful to see how all the pieces come together through an end-to-end example.



StateGraph Single Execution Data Flow

As engineers, we are never satisfied until we know what happens “under the hood.” In the previous sections, we explained some core concepts of LangGraph.js. Now it’s time to show how they come together.

Let’s extend our toy example with a conditional edge and walk through two consecutive invocations.

Data Flow

import { START, END, StateGraph, MemorySaver } from "@langchain/langgraph";

interface State {
  total: number;
}

function addG(existing: number, updated?: number) {
  return existing + (updated ?? 0);
}

const builderH = new StateGraph<State>({
  channels: {
    total: {
      value: addG,
      default: () => 0,
    },
  },
})
  .addNode("add_one", (_state) => ({ total: 1 }))
  .addNode("double", (state) => ({ total: state.total }))
  .addEdge(START, "add_one");

function route(state: State) {
  if (state.total < 6) {
    return "double";
  }
  return END;
}

builderH.addConditionalEdges("add_one", route);
builderH.addEdge("double", "add_one");

const memoryH = new MemorySaver();
const graphH = builderH.compile({ checkpointer: memoryH });
const threadId = "some-thread";
const config = { configurable: { thread_id: threadId } };

for await (const step of await graphH.stream(
  { total: 1 },
  { ...config, streamMode: "values" }
)) {
  console.log(step);
}
// { total: 1 }
// { total: 2 }
// { total: 4 }
// { total: 5 }
// { total: 10 }
// { total: 11 }

We will explain the execution step by step below:

  1. First, the graph looks for checkpoints. None is found, so the state initializes with a total of 0.
  2. Next, the graph applies user input as a state update. The reducer adds the input (1) to the existing value (0). At the end of this superstep, the total is (1).
  3. After that, the “add_one” Node is invoked, returning 1.
  4. Next, the reducer applies this update to the existing total (1). The state is now 2.
  5. Then, the conditional edge “route” is called. Since the value is less than 6, we proceed to the ‘double’ Node.
  6. The double Node takes the existing state (2) and returns it. The reducer is then called and adds it to the existing state. The state is now 4.
  7. The graph then loops back to add_one (5), checks the conditional edge, and proceeds since it is <6. After doubling, the total is (10).
  8. The fixed edge loops back to add_one (11), checks the conditional edge, and since it is greater than 6, the program terminates.

For the second run, we will use the same configuration:

Data Flow

const resultH2 = await graphH.invoke({ total: -2 }, config);
console.log(resultH2);
// { total: 10 }
Enter fullscreen mode

Exit fullscreen mode

Again, we will explain the execution step by step below:

  1. First, the graph looks for checkpoints. One is found and loaded as the initial state, so the total starts at 11, as before.
  2. Next, it applies the user’s input as an update. The add reducer updates the total from 11 to 9 (11 + (-2)).
  3. After that, the “add_one” Node is invoked, returning 1.
  4. This update is applied using the reducer, raising the total to 10.
  5. Next, the “route” conditional edge is triggered. Since the value is greater than 6, the program terminates, ending at (10).



Conclusion

That’s it! We have explored the core concepts of LangGraph.js and seen how to use it to create reliable, fault-tolerant agent systems. By modeling agents as state machines, LangGraph.js provides a powerful abstraction for composing scalable and controllable AI workflows.

Remember these key ideas when working with LangGraph.js:

  • Nodes do the work; Edges determine the control flow.
  • Reducers precisely define how state updates are applied at each step.
  • Checkpoints enable memory within single runs and across multi-turn interactions.
  • Interrupts allow you to pause, retrieve, and update the graph’s state for human-in-the-loop workflows.
  • Configurable parameters allow runtime control, independent of the regular data flow.

With these principles, you can harness the full power of LangGraph.js to build advanced AI agent systems.


