Last Updated: 3/7/2026
Quickstart
This quickstart demonstrates how to build a calculator agent using the LangGraph Graph API or the Functional API.
- Use the Graph API if you prefer to define your agent as a graph of nodes and edges.
- Use the Functional API if you prefer to define your agent as a single function.
For conceptual information, see Graph API overview and Functional API overview.
For this example, you will need to set up a Claude (Anthropic) account and get an API key. Then, set the ANTHROPIC_API_KEY environment variable in your terminal.
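Setting the variable looks like this in a POSIX shell (the key value shown is a placeholder, not a real key):

```shell
# Make your Anthropic API key available to this shell session
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

# Confirm it is set before running the examples
echo "${ANTHROPIC_API_KEY:+ANTHROPIC_API_KEY is set}"
```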
Use the Graph API
1. Define tools and model
In this example, we’ll use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.
```python
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0
)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b

# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)
```
2. Define state
The graph's state stores the conversation messages and a count of LLM calls.
State in LangGraph persists throughout the agent's execution. The `Annotated` type with `operator.add` ensures that new messages are appended to the existing list rather than replacing it.
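The effect of the `operator.add` reducer can be seen with plain Python lists — a minimal sketch of the merge LangGraph performs on each state update, using stand-in strings instead of real message objects:

```python
import operator

# Current value of the `messages` channel and an update returned by a node
existing = ["first message", "second message"]
update = ["new message"]

# For lists, operator.add is concatenation, so updates append rather than replace
merged = operator.add(existing, update)
print(merged)  # ['first message', 'second message', 'new message']
```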
```python
import operator
from typing_extensions import TypedDict, Annotated
from langchain.messages import AnyMessage

class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int
```
3. Define model node
The model node calls the LLM, which decides whether or not to call a tool.
```python
from langchain.messages import SystemMessage

def llm_call(state: dict):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ],
        "llm_calls": state.get("llm_calls", 0) + 1,
    }
```
4. Define tool node
The tool node executes the requested tool calls and returns the results.
```python
from langchain.messages import ToolMessage

def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}
```
5. Define end logic
The conditional edge function routes to the tool node or ends the run, depending on whether the LLM made a tool call.
```python
from typing import Literal
from langgraph.graph import StateGraph, START, END

def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"
    # Otherwise, we stop (reply to the user)
    return END
```
6. Build and compile the agent
The agent is built with the `StateGraph` class and compiled with its `compile` method.
```python
# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END]
)
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
from IPython.display import Image, display
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
from langchain.messages import HumanMessage
messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```
To learn how to trace your agent with LangSmith, see the LangSmith documentation.
Congratulations! You’ve built your first agent using the LangGraph Graph API.
Full code example
```python
# Step 1: Define tools and model
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0
)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b

# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

# Step 2: Define state
import operator
from typing_extensions import TypedDict, Annotated
from langchain.messages import AnyMessage

class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]
    llm_calls: int

# Step 3: Define model node
from langchain.messages import SystemMessage

def llm_call(state: dict):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            model_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ],
        "llm_calls": state.get("llm_calls", 0) + 1,
    }

# Step 4: Define tool node
from langchain.messages import ToolMessage

def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

# Step 5: Define logic to determine whether to end
from typing import Literal
from langgraph.graph import StateGraph, START, END

# Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"
    # Otherwise, we stop (reply to the user)
    return END

# Step 6: Build agent
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END]
)
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
from IPython.display import Image, display
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
from langchain.messages import HumanMessage
messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```
Use the Functional API
1. Define tools and model
In this example, we’ll use the Claude Sonnet 4.5 model and define tools for addition, multiplication, and division.
```python
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0
)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b

# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

from langgraph.graph import add_messages
from langchain.messages import (
    SystemMessage,
    HumanMessage,
    ToolCall,
)
from langchain_core.messages import BaseMessage
from langgraph.func import entrypoint, task
```
2. Define model node
The model node calls the LLM, which decides whether or not to call a tool.
The @task decorator marks a function as a task that can be executed as part of the agent. Tasks can be called synchronously or asynchronously within your entrypoint function.
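Calling a `@task` function returns a future whose `.result()` blocks until the value is ready. The call-then-resolve flow is analogous to Python's standard `concurrent.futures`; this standalone sketch, with a stubbed model call and no LangGraph dependency, shows the same pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm_stub(prompt: str) -> str:
    # Stand-in for a model call; a real @task would invoke the LLM
    return f"response to: {prompt}"

with ThreadPoolExecutor() as executor:
    # submit() returns a Future, much like calling a @task inside an entrypoint
    future = executor.submit(call_llm_stub, "Add 3 and 4.")
    # .result() blocks until the task completes
    print(future.result())  # response to: Add 3 and 4.
```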
```python
@task
def call_llm(messages: list[BaseMessage]):
    """LLM decides whether to call a tool or not"""
    return model_with_tools.invoke(
        [
            SystemMessage(
                content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
            )
        ]
        + messages
    )
```
3. Define tool node
The tool node executes the requested tool calls and returns the results.
```python
@task
def call_tool(tool_call: ToolCall):
    """Performs the tool call"""
    tool = tools_by_name[tool_call["name"]]
    return tool.invoke(tool_call)
```
4. Define agent
The agent is defined with the `@entrypoint` decorator.
In the Functional API, instead of defining nodes and edges explicitly, you write standard control flow logic (loops, conditionals) within a single function.
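The control flow the entrypoint implements can be sketched in plain Python with a stubbed model and a hypothetical `run_agent` helper (both names are illustrative, not part of the API): call the model, execute any requested tool calls, feed the results back, and stop once the model makes no more tool calls:

```python
def fake_model(messages):
    # Stand-in for the LLM: request a tool call the first time,
    # then give a final answer once a tool result is present.
    if any(m.get("role") == "tool" for m in messages):
        return {"content": "3 + 4 = 7", "tool_calls": []}
    return {"content": "", "tool_calls": [{"name": "add", "args": {"a": 3, "b": 4}}]}

tools = {"add": lambda a, b: a + b}

def run_agent(messages):
    response = fake_model(messages)
    while response["tool_calls"]:
        # Execute every requested tool and append its result to the history
        for tc in response["tool_calls"]:
            result = tools[tc["name"]](**tc["args"])
            messages = messages + [{"role": "tool", "content": str(result)}]
        # Call the model again with the tool results included
        response = fake_model(messages)
    return response["content"]

print(run_agent([{"role": "user", "content": "Add 3 and 4."}]))  # 3 + 4 = 7
```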
```python
@entrypoint()
def agent(messages: list[BaseMessage]):
    model_response = call_llm(messages).result()
    while True:
        if not model_response.tool_calls:
            break
        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in model_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]
        messages = add_messages(messages, [model_response, *tool_results])
        model_response = call_llm(messages).result()
    messages = add_messages(messages, model_response)
    return messages

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
for chunk in agent.stream(messages, stream_mode="updates"):
    print(chunk)
    print("\n")
```
To learn how to trace your agent with LangSmith, see the LangSmith documentation.
Congratulations! You’ve built your first agent using the LangGraph Functional API.
Full code example
```python
# Step 1: Define tools and model
from langchain.tools import tool
from langchain.chat_models import init_chat_model

model = init_chat_model(
    "claude-sonnet-4-5-20250929",
    temperature=0
)

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide `a` by `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b

# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
model_with_tools = model.bind_tools(tools)

from langgraph.graph import add_messages
from langchain.messages import (
    SystemMessage,
    HumanMessage,
    ToolCall,
)
from langchain_core.messages import BaseMessage
from langgraph.func import entrypoint, task

# Step 2: Define model node
@task
def call_llm(messages: list[BaseMessage]):
    """LLM decides whether to call a tool or not"""
    return model_with_tools.invoke(
        [
            SystemMessage(
                content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
            )
        ]
        + messages
    )

# Step 3: Define tool node
@task
def call_tool(tool_call: ToolCall):
    """Performs the tool call"""
    tool = tools_by_name[tool_call["name"]]
    return tool.invoke(tool_call)

# Step 4: Define agent
@entrypoint()
def agent(messages: list[BaseMessage]):
    model_response = call_llm(messages).result()
    while True:
        if not model_response.tool_calls:
            break
        # Execute tools
        tool_result_futures = [
            call_tool(tool_call) for tool_call in model_response.tool_calls
        ]
        tool_results = [fut.result() for fut in tool_result_futures]
        messages = add_messages(messages, [model_response, *tool_results])
        model_response = call_llm(messages).result()
    messages = add_messages(messages, model_response)
    return messages

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
for chunk in agent.stream(messages, stream_mode="updates"):
    print(chunk)
    print("\n")
```