Last Updated: 3/7/2026

Workflows and agents

This guide reviews common workflow and agent patterns.

  • Workflows have predetermined code paths and are designed to operate in a certain order.
  • Agents are dynamic and define their own processes and tool usage.

LangGraph offers several benefits when building agents and workflows, including persistence, streaming, and support for debugging as well as deployment.

Setup

To build a workflow or agent, you can use any chat model that supports structured outputs and tool calling. The following example uses Anthropic:

  1. Install dependencies:

```shell
pip install langchain_core langchain-anthropic langgraph
```
  2. Initialize the LLM:

```python
import os
import getpass

from langchain_anthropic import ChatAnthropic

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("ANTHROPIC_API_KEY")

llm = ChatAnthropic(model="claude-sonnet-4-6")
```

LLMs and augmentations

Workflows and agentic systems are based on LLMs and the various augmentations you add to them. Tool calling, structured outputs, and short-term memory are a few options for tailoring LLMs to your needs.

```python
# Schema for structured output
from pydantic import BaseModel, Field

class SearchQuery(BaseModel):
    search_query: str = Field(None, description="Query that is optimized for web search.")
    justification: str = Field(
        None, description="Why this query is relevant to the user's request."
    )

# Augment the LLM with schema for structured output
structured_llm = llm.with_structured_output(SearchQuery)

# Invoke the augmented LLM
output = structured_llm.invoke("How does Calcium CT score relate to high cholesterol?")

# Define a tool
def multiply(a: int, b: int) -> int:
    return a * b

# Augment the LLM with tools
llm_with_tools = llm.bind_tools([multiply])

# Invoke the LLM with input that triggers the tool call
msg = llm_with_tools.invoke("What is 2 times 3?")

# Get the tool call
msg.tool_calls
```

Prompt chaining

Prompt chaining is when each LLM call processes the output of the previous call. It’s often used for performing well-defined tasks that can be broken down into smaller, verifiable steps. Some examples include:

  • Translating documents into different languages
  • Verifying generated content for consistency

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

# Graph state
class State(TypedDict):
    topic: str
    joke: str
    improved_joke: str
    final_joke: str

# Nodes
def generate_joke(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a short joke about {state['topic']}")
    return {"joke": msg.content}

def check_punchline(state: State):
    """Gate function to check if the joke has a punchline"""
    # Simple check - does the joke contain "?" or "!"
    if "?" in state["joke"] or "!" in state["joke"]:
        return "Pass"
    return "Fail"

def improve_joke(state: State):
    """Second LLM call to improve the joke"""
    msg = llm.invoke(f"Make this joke funnier by adding wordplay: {state['joke']}")
    return {"improved_joke": msg.content}

def polish_joke(state: State):
    """Third LLM call for final polish"""
    msg = llm.invoke(f"Add a surprising twist to this joke: {state['improved_joke']}")
    return {"final_joke": msg.content}

# Build workflow
workflow = StateGraph(State)

# Add nodes
workflow.add_node("generate_joke", generate_joke)
workflow.add_node("improve_joke", improve_joke)
workflow.add_node("polish_joke", polish_joke)

# Add edges to connect nodes
workflow.add_edge(START, "generate_joke")
workflow.add_conditional_edges(
    "generate_joke", check_punchline, {"Fail": "improve_joke", "Pass": END}
)
workflow.add_edge("improve_joke", "polish_joke")
workflow.add_edge("polish_joke", END)

# Compile
chain = workflow.compile()

# Show workflow
display(Image(chain.get_graph().draw_mermaid_png()))

# Invoke
state = chain.invoke({"topic": "cats"})
print("Initial joke:")
print(state["joke"])
print("\n--- --- ---\n")
if "improved_joke" in state:
    print("Improved joke:")
    print(state["improved_joke"])
    print("\n--- --- ---\n")
    print("Final joke:")
    print(state["final_joke"])
else:
    print("Final joke:")
    print(state["joke"])
```

Parallelization

With parallelization, LLMs work simultaneously on a task, either by running multiple independent subtasks at the same time or by running the same task multiple times to check for different outputs. Parallelization is commonly used to:

  • Split up subtasks and run them in parallel, which increases speed
  • Run tasks multiple times to check for different outputs, which increases confidence

Some examples include:

  • Running one subtask that processes a document for keywords, and a second subtask to check for formatting errors
  • Running a task multiple times that scores a document for accuracy based on different criteria, like the number of citations, the number of sources used, and the quality of the sources

```python
# Graph state
class State(TypedDict):
    topic: str
    joke: str
    story: str
    poem: str
    combined_output: str

# Nodes
def call_llm_1(state: State):
    """First LLM call to generate initial joke"""
    msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def call_llm_2(state: State):
    """Second LLM call to generate story"""
    msg = llm.invoke(f"Write a story about {state['topic']}")
    return {"story": msg.content}

def call_llm_3(state: State):
    """Third LLM call to generate poem"""
    msg = llm.invoke(f"Write a poem about {state['topic']}")
    return {"poem": msg.content}

def aggregator(state: State):
    """Combine the joke, story and poem into a single output"""
    combined = f"Here's a story, joke, and poem about {state['topic']}!\n\n"
    combined += f"STORY:\n{state['story']}\n\n"
    combined += f"JOKE:\n{state['joke']}\n\n"
    combined += f"POEM:\n{state['poem']}"
    return {"combined_output": combined}

# Build workflow
parallel_builder = StateGraph(State)

# Add nodes
parallel_builder.add_node("call_llm_1", call_llm_1)
parallel_builder.add_node("call_llm_2", call_llm_2)
parallel_builder.add_node("call_llm_3", call_llm_3)
parallel_builder.add_node("aggregator", aggregator)

# Add edges to connect nodes
parallel_builder.add_edge(START, "call_llm_1")
parallel_builder.add_edge(START, "call_llm_2")
parallel_builder.add_edge(START, "call_llm_3")
parallel_builder.add_edge("call_llm_1", "aggregator")
parallel_builder.add_edge("call_llm_2", "aggregator")
parallel_builder.add_edge("call_llm_3", "aggregator")
parallel_builder.add_edge("aggregator", END)
parallel_workflow = parallel_builder.compile()

# Show workflow
display(Image(parallel_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = parallel_workflow.invoke({"topic": "cats"})
print(state["combined_output"])
```

Routing

A routing workflow processes an input and then directs it to a context-specific task. This allows you to define specialized flows for complex tasks. For example, a workflow built to answer product-related questions might first classify the type of question, then route the request to a dedicated process for pricing, refunds, returns, etc.

```python
from typing_extensions import Literal
from langchain.messages import HumanMessage, SystemMessage

# Schema for structured output to use as routing logic
class Route(BaseModel):
    step: Literal["poem", "story", "joke"] = Field(
        None, description="The next step in the routing process"
    )

# Augment the LLM with schema for structured output
router = llm.with_structured_output(Route)

# State
class State(TypedDict):
    input: str
    decision: str
    output: str

# Nodes
def llm_call_1(state: State):
    """Write a story"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_2(state: State):
    """Write a joke"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_3(state: State):
    """Write a poem"""
    result = llm.invoke(state["input"])
    return {"output": result.content}

def llm_call_router(state: State):
    """Route the input to the appropriate node"""
    # Run the augmented LLM with structured output to serve as routing logic
    decision = router.invoke(
        [
            SystemMessage(
                content="Route the input to story, joke, or poem based on the user's request."
            ),
            HumanMessage(content=state["input"]),
        ]
    )
    return {"decision": decision.step}

# Conditional edge function to route to the appropriate node
def route_decision(state: State):
    # Return the node name you want to visit next
    if state["decision"] == "story":
        return "llm_call_1"
    elif state["decision"] == "joke":
        return "llm_call_2"
    elif state["decision"] == "poem":
        return "llm_call_3"

# Build workflow
router_builder = StateGraph(State)

# Add nodes
router_builder.add_node("llm_call_1", llm_call_1)
router_builder.add_node("llm_call_2", llm_call_2)
router_builder.add_node("llm_call_3", llm_call_3)
router_builder.add_node("llm_call_router", llm_call_router)

# Add edges to connect nodes
router_builder.add_edge(START, "llm_call_router")
router_builder.add_conditional_edges(
    "llm_call_router",
    route_decision,
    {  # Name returned by route_decision : Name of next node to visit
        "llm_call_1": "llm_call_1",
        "llm_call_2": "llm_call_2",
        "llm_call_3": "llm_call_3",
    },
)
router_builder.add_edge("llm_call_1", END)
router_builder.add_edge("llm_call_2", END)
router_builder.add_edge("llm_call_3", END)

# Compile workflow
router_workflow = router_builder.compile()

# Show the workflow
display(Image(router_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = router_workflow.invoke({"input": "Write me a joke about cats"})
print(state["output"])
```

Orchestrator-worker

In an orchestrator-worker configuration, the orchestrator:

  • Breaks down tasks into subtasks
  • Delegates subtasks to workers
  • Synthesizes worker outputs into a final result

Orchestrator-worker workflows provide more flexibility and are often used when subtasks cannot be predefined the way they can with parallelization. This is common with workflows that write code or need to update content across multiple files. For example, a workflow that needs to update installation instructions for multiple Python libraries across an unknown number of documents might use this pattern.

```python
from typing import Annotated, List
import operator

# Schema for structured output to use in planning
class Section(BaseModel):
    name: str = Field(
        description="Name for this section of the report.",
    )
    description: str = Field(
        description="Brief overview of the main topics and concepts to be covered in this section.",
    )

class Sections(BaseModel):
    sections: List[Section] = Field(
        description="Sections of the report.",
    )

# Augment the LLM with schema for structured output
planner = llm.with_structured_output(Sections)
```

Creating workers in LangGraph

Orchestrator-worker workflows are common, and LangGraph has built-in support for them. The Send API lets you dynamically create worker nodes and send each one a specific input. Each worker has its own state, and all worker outputs are written to a shared state key that is accessible to the orchestrator graph. This gives the orchestrator access to all worker outputs and allows it to synthesize them into a final output. The example below iterates over a list of sections and uses the Send API to send a section to each worker.

```python
from langgraph.types import Send

# Graph state
class State(TypedDict):
    topic: str  # Report topic
    sections: list[Section]  # List of report sections
    completed_sections: Annotated[
        list, operator.add
    ]  # All workers write to this key in parallel
    final_report: str  # Final report

# Worker state
class WorkerState(TypedDict):
    section: Section
    completed_sections: Annotated[list, operator.add]

# Nodes
def orchestrator(state: State):
    """Orchestrator that generates a plan for the report"""
    # Generate queries
    report_sections = planner.invoke(
        [
            SystemMessage(content="Generate a plan for the report."),
            HumanMessage(content=f"Here is the report topic: {state['topic']}"),
        ]
    )
    return {"sections": report_sections.sections}

def llm_call(state: WorkerState):
    """Worker writes a section of the report"""
    # Generate section
    section = llm.invoke(
        [
            SystemMessage(
                content="Write a report section following the provided name and description. Include no preamble for each section. Use markdown formatting."
            ),
            HumanMessage(
                content=f"Here is the section name: {state['section'].name} and description: {state['section'].description}"
            ),
        ]
    )
    # Write the updated section to completed sections
    return {"completed_sections": [section.content]}

def synthesizer(state: State):
    """Synthesize full report from sections"""
    # List of completed sections
    completed_sections = state["completed_sections"]
    # Format completed sections to str to use as context for final report
    completed_report_sections = "\n\n---\n\n".join(completed_sections)
    return {"final_report": completed_report_sections}

# Conditional edge function to create llm_call workers that each write a section of the report
def assign_workers(state: State):
    """Assign a worker to each section in the plan"""
    # Kick off section writing in parallel via Send() API
    return [Send("llm_call", {"section": s}) for s in state["sections"]]

# Build workflow
orchestrator_worker_builder = StateGraph(State)

# Add the nodes
orchestrator_worker_builder.add_node("orchestrator", orchestrator)
orchestrator_worker_builder.add_node("llm_call", llm_call)
orchestrator_worker_builder.add_node("synthesizer", synthesizer)

# Add edges to connect nodes
orchestrator_worker_builder.add_edge(START, "orchestrator")
orchestrator_worker_builder.add_conditional_edges(
    "orchestrator", assign_workers, ["llm_call"]
)
orchestrator_worker_builder.add_edge("llm_call", "synthesizer")
orchestrator_worker_builder.add_edge("synthesizer", END)

# Compile the workflow
orchestrator_worker = orchestrator_worker_builder.compile()

# Show the workflow
display(Image(orchestrator_worker.get_graph().draw_mermaid_png()))

# Invoke
state = orchestrator_worker.invoke({"topic": "Create a report on LLM scaling laws"})

from IPython.display import Markdown
Markdown(state["final_report"])
```

Evaluator-optimizer

In evaluator-optimizer workflows, one LLM call creates a response and another evaluates it. If the evaluator or a human-in-the-loop determines the response needs refinement, feedback is provided and the response is regenerated. This loop continues until an acceptable response is produced. Evaluator-optimizer workflows are commonly used when a task has clear success criteria but iteration is required to meet them. For example, there's not always a perfect match when translating text between two languages; it might take a few iterations to produce a translation that carries the same meaning in both.

```python
# Graph state
class State(TypedDict):
    joke: str
    topic: str
    feedback: str
    funny_or_not: str

# Schema for structured output to use in evaluation
class Feedback(BaseModel):
    grade: Literal["funny", "not funny"] = Field(
        description="Decide if the joke is funny or not.",
    )
    feedback: str = Field(
        description="If the joke is not funny, provide feedback on how to improve it.",
    )

# Augment the LLM with schema for structured output
evaluator = llm.with_structured_output(Feedback)

# Nodes
def llm_call_generator(state: State):
    """LLM generates a joke"""
    if state.get("feedback"):
        msg = llm.invoke(
            f"Write a joke about {state['topic']} but take into account the feedback: {state['feedback']}"
        )
    else:
        msg = llm.invoke(f"Write a joke about {state['topic']}")
    return {"joke": msg.content}

def llm_call_evaluator(state: State):
    """LLM evaluates the joke"""
    grade = evaluator.invoke(f"Grade the joke {state['joke']}")
    return {"funny_or_not": grade.grade, "feedback": grade.feedback}

# Conditional edge function to route back to joke generator or end based upon feedback from the evaluator
def route_joke(state: State):
    """Route back to joke generator or end based upon feedback from the evaluator"""
    if state["funny_or_not"] == "funny":
        return "Accepted"
    elif state["funny_or_not"] == "not funny":
        return "Rejected + Feedback"

# Build workflow
optimizer_builder = StateGraph(State)

# Add the nodes
optimizer_builder.add_node("llm_call_generator", llm_call_generator)
optimizer_builder.add_node("llm_call_evaluator", llm_call_evaluator)

# Add edges to connect nodes
optimizer_builder.add_edge(START, "llm_call_generator")
optimizer_builder.add_edge("llm_call_generator", "llm_call_evaluator")
optimizer_builder.add_conditional_edges(
    "llm_call_evaluator",
    route_joke,
    {  # Name returned by route_joke : Name of next node to visit
        "Accepted": END,
        "Rejected + Feedback": "llm_call_generator",
    },
)

# Compile the workflow
optimizer_workflow = optimizer_builder.compile()

# Show the workflow
display(Image(optimizer_workflow.get_graph().draw_mermaid_png()))

# Invoke
state = optimizer_workflow.invoke({"topic": "Cats"})
print(state["joke"])
```

Agents

Agents are typically implemented as an LLM performing actions using tools. They operate in continuous feedback loops, and are used in situations where problems and solutions are unpredictable. Agents have more autonomy than workflows, and can make decisions about the tools they use and how to solve problems. You can still define the available toolset and guidelines for how agents behave.

To get started with agents, see the quickstart or read more about how they work in LangChain.

Using tools

```python
from langchain.tools import tool

# Define tools
@tool
def multiply(a: int, b: int) -> int:
    """Multiply `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a * b

@tool
def add(a: int, b: int) -> int:
    """Adds `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a + b

@tool
def divide(a: int, b: int) -> float:
    """Divide `a` and `b`.

    Args:
        a: First int
        b: Second int
    """
    return a / b

# Augment the LLM with tools
tools = [add, multiply, divide]
tools_by_name = {tool.name: tool for tool in tools}
llm_with_tools = llm.bind_tools(tools)
```

```python
from langgraph.graph import MessagesState
from langchain.messages import SystemMessage, HumanMessage, ToolMessage

# Nodes
def llm_call(state: MessagesState):
    """LLM decides whether to call a tool or not"""
    return {
        "messages": [
            llm_with_tools.invoke(
                [
                    SystemMessage(
                        content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
                    )
                ]
                + state["messages"]
            )
        ]
    }

def tool_node(state: dict):
    """Performs the tool call"""
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

# Conditional edge function to route to the tool node or end based upon whether the LLM made a tool call
def should_continue(state: MessagesState) -> Literal["tool_node", END]:
    """Decide if we should continue the loop or stop based upon whether the LLM made a tool call"""
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then perform an action
    if last_message.tool_calls:
        return "tool_node"
    # Otherwise, we stop (reply to the user)
    return END

# Build workflow
agent_builder = StateGraph(MessagesState)

# Add nodes
agent_builder.add_node("llm_call", llm_call)
agent_builder.add_node("tool_node", tool_node)

# Add edges to connect nodes
agent_builder.add_edge(START, "llm_call")
agent_builder.add_conditional_edges(
    "llm_call",
    should_continue,
    ["tool_node", END]
)
agent_builder.add_edge("tool_node", "llm_call")

# Compile the agent
agent = agent_builder.compile()

# Show the agent
display(Image(agent.get_graph(xray=True).draw_mermaid_png()))

# Invoke
messages = [HumanMessage(content="Add 3 and 4.")]
messages = agent.invoke({"messages": messages})
for m in messages["messages"]:
    m.pretty_print()
```
