Last Updated: 3/7/2026
Run a local server
This guide shows you how to run a LangGraph application locally.
Prerequisites
Before you begin, ensure you have the following:
- An API key for LangSmith (free to sign up)
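If you want the key available in your shell before you create the .env file in step 4, you can also export it for the current session (the lsv2... value below is a placeholder; use your own key):

```shell
# Placeholder value shown; substitute your actual LangSmith API key.
export LANGSMITH_API_KEY="lsv2..."
```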
1. Install the LangGraph CLI
```shell
# Python >= 3.11 is required.
pip install -U "langgraph-cli[inmem]"
```

2. Create a LangGraph app
Create a new app from the new-langgraph-project-python template. This template demonstrates a single-node application you can extend with your own logic.
```shell
langgraph new path/to/your/app --template new-langgraph-project-python
```

Additional templates: If you run langgraph new without specifying a template, an interactive menu lets you choose from the list of available templates.
3. Install dependencies
In the root of your new LangGraph app, install the dependencies in editable mode so the server uses your local changes:
```shell
cd path/to/your/app
pip install -e .
```

4. Create a .env file
You will find a .env.example file in the root of your new LangGraph app. Create a .env file alongside it, copy in the contents of .env.example, and fill in the necessary API keys:
```
LANGSMITH_API_KEY=lsv2...
```

5. Launch Agent server
Start the LangGraph API server locally:
```shell
langgraph dev
```

Sample output:
```
INFO:langgraph_api.cli:

        Welcome to

╦ ┌─┐┌┐┌┌─┐╔═╗┬─┐┌─┐┌─┐┬ ┬
║ ├─┤││││ ┬║ ╦├┬┘├─┤├─┘├─┤
╩═╝┴ ┴┘└┘└─┘╚═╝┴└─┴ ┴┴ ┴ ┴

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

This in-memory server is designed for development and testing.
For production use, please use LangSmith Deployment.
```

The langgraph dev command starts Agent Server in an in-memory mode. This mode is suitable for development and testing purposes. For production use, deploy Agent Server with access to a persistent storage backend. For more information, see the Platform setup overview.
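The Studio UI link in the banner is simply the Studio origin plus a baseUrl query parameter pointing at your local API. A minimal sketch of that composition (studio_url is an illustrative helper of ours, not part of any LangGraph package; urlencode percent-encodes the value, which browsers accept just as well as the literal form shown in the banner):

```python
from urllib.parse import urlencode

def studio_url(api_base: str) -> str:
    # Studio runs in the browser and talks to the server named by baseUrl.
    return "https://smith.langchain.com/studio/?" + urlencode({"baseUrl": api_base})

print(studio_url("http://127.0.0.1:2024"))
```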
6. Test your application in Studio
Studio is a specialized UI that you can connect to the LangGraph API server to visualize, interact with, and debug your application locally. Test your graph in Studio by visiting the URL printed by the langgraph dev command:
```
- LangGraph Studio Web UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
```

For an Agent Server running on a custom host/port, update the baseUrl query parameter in the URL. For example, if your server is running on http://myhost:3000:
```
https://smith.langchain.com/studio/?baseUrl=http://myhost:3000
```

Safari compatibility
Use the --tunnel flag with your command to create a secure tunnel, as Safari has limitations when connecting to localhost servers:
```shell
langgraph dev --tunnel
```

7. Test the API
- Python SDK (async)
- Python SDK (sync)
- REST API
Python SDK (async)
Install the LangGraph Python SDK:
```shell
pip install langgraph-sdk
```
Send a message to the assistant (threadless run):
```python
from langgraph_sdk import get_client
import asyncio

client = get_client(url="http://localhost:2024")

async def main():
    async for chunk in client.runs.stream(
        None,  # Threadless run
        "agent",  # Name of assistant. Defined in langgraph.json.
        input={
            "messages": [{
                "role": "human",
                "content": "What is LangGraph?",
            }],
        },
    ):
        print(f"Receiving new event of type: {chunk.event}...")
        print(chunk.data)
        print("\n\n")

asyncio.run(main())
```

Python SDK (sync)
Install the LangGraph Python SDK:
```shell
pip install langgraph-sdk
```
Send a message to the assistant (threadless run):
```python
from langgraph_sdk import get_sync_client

client = get_sync_client(url="http://localhost:2024")

for chunk in client.runs.stream(
    None,  # Threadless run
    "agent",  # Name of assistant. Defined in langgraph.json.
    input={
        "messages": [{
            "role": "human",
            "content": "What is LangGraph?",
        }],
    },
    stream_mode="messages-tuple",
):
    print(f"Receiving new event of type: {chunk.event}...")
    print(chunk.data)
    print("\n\n")
```
REST API

```shell
curl -s --request POST \
    --url "http://localhost:2024/runs/stream" \
    --header 'Content-Type: application/json' \
    --data "{
        \"assistant_id\": \"agent\",
        \"input\": {
            \"messages\": [
                {
                    \"role\": \"human\",
                    \"content\": \"What is LangGraph?\"
                }
            ]
        },
        \"stream_mode\": \"messages-tuple\"
    }"
```

Next steps
Now that you have a LangGraph app running locally, take your journey further by exploring deployment and advanced features:
- Deployment quickstart: Deploy your LangGraph app using LangSmith.
- LangSmith: Learn about foundational LangSmith concepts.
- SDK Reference: Explore the SDK API Reference.