Last Updated: 3/7/2026


LangSmith Studio

When building agents with LangChain locally, it’s helpful to visualize what’s happening inside your agent, interact with it in real time, and debug issues as they occur. LangSmith Studio is a free visual interface for developing and testing your LangChain agents from your local machine. Studio connects to your locally running agent to show you each step your agent takes: the prompts sent to the model, tool calls and their results, and the final output. You can test different inputs, inspect intermediate states, and iterate on your agent’s behavior without additional code or deployment. This page describes how to set up Studio with your local LangChain agent.

Prerequisites

Before you begin, ensure you have the following:

  • A LangSmith account: Sign up (for free) or log in at smith.langchain.com.
  • A LangSmith API key: Follow the Create an API key guide.
  • If you don’t want data traced to LangSmith, set LANGSMITH_TRACING=false in your application’s .env file. With tracing disabled, no data leaves your local server.

Set up local Agent server

1. Install the LangGraph CLI

The LangGraph CLI provides a local development server (also called Agent Server) that connects your agent to Studio.

# Python >= 3.11 is required.
pip install --upgrade "langgraph-cli[inmem]"

2. Prepare your agent

If you already have a LangChain agent, you can use it directly. This example uses a simple email agent:

agent.py

from langchain.agents import create_agent


def send_email(to: str, subject: str, body: str):
    """Send an email"""
    email = {
        "to": to,
        "subject": subject,
        "body": body,
    }
    # ... email sending logic
    return f"Email sent to {to}"


agent = create_agent(
    "gpt-4.1",
    tools=[send_email],
    system_prompt="You are an email assistant. Always use the send_email tool.",
)
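Before wiring the tool into an agent, you can exercise it as a plain Python function. This is a minimal sketch with the actual sending logic stubbed out, just to confirm the tool's return value; the example address is illustrative:

```python
def send_email(to: str, subject: str, body: str):
    """Send an email"""
    email = {
        "to": to,
        "subject": subject,
        "body": body,
    }
    # ... email sending logic (stubbed out in this sketch)
    return f"Email sent to {to}"


result = send_email("ada@example.com", "Hello", "Testing the tool directly")
print(result)  # Email sent to ada@example.com
```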

3. Environment variables

Studio requires a LangSmith API key to connect to your local agent. Create a .env file in the root of your project and add your API key from LangSmith.

Ensure your .env file is not committed to version control (for Git, add it to .gitignore).

.env

LANGSMITH_API_KEY=lsv2...

4. Create a LangGraph config file

The LangGraph CLI uses a configuration file to locate your agent and manage dependencies. Create a langgraph.json file in your app’s directory:

langgraph.json

{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:agent"
  },
  "env": ".env"
}

The create_agent function automatically returns a compiled LangGraph graph, which is what the graphs key expects in the configuration file.

For detailed explanations of each key in the JSON object of the configuration file, refer to the LangGraph configuration file reference.
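Because the configuration is plain JSON, you can sanity-check it before starting the server. This is a sketch under one assumption not stated in the docs: that every graph entry follows the "path/to/file.py:variable" pattern shown above. The validation rules here are illustrative, not part of the CLI:

```python
import json

config_text = """
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:agent"
  },
  "env": ".env"
}
"""

config = json.loads(config_text)

# Each graph entry should be a "path/to/file.py:variable" reference.
for name, ref in config["graphs"].items():
    path, _, variable = ref.partition(":")
    assert path.endswith(".py") and variable, f"bad graph ref for {name}"

print(config["graphs"]["agent"])  # ./src/agent.py:agent
```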

At this point, the project structure will look like this:

my-app/
├── src
│   └── agent.py
├── .env
└── langgraph.json

5. Install dependencies

Install your project dependencies from the root directory:

pip install langchain langchain-openai

6. View your agent in Studio

Start the development server to connect your agent to Studio:

langgraph dev

Safari blocks localhost connections to Studio. To work around this, run langgraph dev with the --tunnel flag to access Studio via a secure tunnel. You’ll need to manually add the tunnel URL to allowed origins by clicking Connect to a local server in the Studio UI. See the troubleshooting guide for steps.

Once the server is running, your agent is accessible both via API at http://127.0.0.1:2024 and through the Studio UI at https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024.

With Studio connected to your local agent, you can iterate quickly on your agent’s behavior. Run a test input and inspect the full execution trace, including prompts, tool arguments, return values, and token/latency metrics. When something goes wrong, Studio captures exceptions along with the surrounding state to help you understand what happened. The development server supports hot-reloading: make changes to prompts or tool signatures in your code, and Studio reflects them immediately. You can re-run conversation threads from any step to test your changes without starting over. This workflow scales from simple single-tool agents to complex multi-node graphs. For more information on how to run Studio, refer to the Studio guides in the LangSmith docs.
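The Studio URL is simply the hosted UI pointed at your local Agent Server via a baseUrl query parameter. A minimal sketch of constructing that URL for an arbitrary local port (the helper name is illustrative, not part of any library):

```python
from urllib.parse import urlencode


def studio_url(base_url: str) -> str:
    # Studio is hosted at smith.langchain.com and connects back to the
    # local Agent Server given in the baseUrl query parameter.
    return "https://smith.langchain.com/studio/?" + urlencode({"baseUrl": base_url})


print(studio_url("http://127.0.0.1:2024"))
```

Note that urlencode percent-encodes the base URL; browsers and Studio accept both the encoded and the literal form shown above.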
