
LangGraph

Call LangGraph agents through LiteLLM using the OpenAI chat completions format.

Property | Details
Description | LangGraph is a framework for building stateful, multi-actor applications with LLMs. LiteLLM supports calling LangGraph agents via their streaming and non-streaming endpoints.
Provider Route on LiteLLM | langgraph/{agent_id}
Provider Doc | LangGraph Platform ↗

Prerequisites: You need a running LangGraph server. See Setting Up a Local LangGraph Server below.

Quick Start

Model Format

langgraph/{agent_id}

Example:

  • langgraph/agent - calls the default agent

LiteLLM Python SDK

Basic LangGraph Completion
import litellm

response = litellm.completion(
    model="langgraph/agent",
    messages=[
        {"role": "user", "content": "What is 25 * 4?"}
    ],
    api_base="http://localhost:2024",
)

print(response.choices[0].message.content)
Streaming LangGraph Response
import litellm

response = litellm.completion(
    model="langgraph/agent",
    messages=[
        {"role": "user", "content": "What is the weather in Tokyo?"}
    ],
    api_base="http://localhost:2024",
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

LiteLLM Proxy

1. Configure your model in config.yaml

LiteLLM Proxy Configuration
model_list:
  - model_name: langgraph-agent
    litellm_params:
      model: langgraph/agent
      api_base: http://localhost:2024
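
If your LangGraph deployment requires authentication, you can reference the key from an environment variable using LiteLLM's os.environ/ syntax (a sketch; LANGGRAPH_API_KEY is the variable described under Environment Variables below):

LiteLLM Proxy Configuration with API Key
model_list:
  - model_name: langgraph-agent
    litellm_params:
      model: langgraph/agent
      api_base: http://localhost:2024
      api_key: os.environ/LANGGRAPH_API_KEY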

2. Start the LiteLLM Proxy

Start LiteLLM Proxy
litellm --config config.yaml

3. Make requests to your LangGraph agent

Basic Request
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "langgraph-agent",
    "messages": [
      {"role": "user", "content": "What is 25 * 4?"}
    ]
  }'
Streaming Request
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "langgraph-agent",
    "messages": [
      {"role": "user", "content": "What is the weather in Tokyo?"}
    ],
    "stream": true
  }'
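
Because the proxy exposes the standard chat completions API, the official OpenAI Python SDK can also be pointed at it. A minimal sketch, assuming the proxy from step 2 is running on localhost:4000 and sk-1234 stands in for your LiteLLM proxy key:

OpenAI SDK Request
from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy
client = OpenAI(
    base_url="http://localhost:4000/v1",
    api_key="sk-1234",  # placeholder: your LiteLLM proxy key
)

response = client.chat.completions.create(
    model="langgraph-agent",  # the model_name from config.yaml
    messages=[{"role": "user", "content": "What is 25 * 4?"}],
)
print(response.choices[0].message.content)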

Environment Variables

Variable | Description
LANGGRAPH_API_BASE | Base URL of your LangGraph server (default: http://localhost:2024)
LANGGRAPH_API_KEY | Optional API key for authentication
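
With these set, api_base and api_key can be omitted from the call. A minimal sketch, assuming the provider picks up the variables exactly as listed above:

Using Environment Variables
import os

import litellm

# Assumed to be read automatically by the langgraph provider, per the table above
os.environ["LANGGRAPH_API_BASE"] = "http://localhost:2024"

response = litellm.completion(
    model="langgraph/agent",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)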

Supported Parameters

Parameter | Type | Description
model | string | The agent ID in format langgraph/{agent_id}
messages | array | Chat messages in OpenAI format
stream | boolean | Enable streaming responses
api_base | string | LangGraph server URL
api_key | string | Optional API key
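
For reference, a call that sets every supported parameter explicitly (a sketch; the key value is a placeholder, and a local langgraph dev server does not require one):

All Parameters
import litellm

response = litellm.completion(
    model="langgraph/agent",                        # langgraph/{agent_id}
    messages=[{"role": "user", "content": "Hi!"}],  # OpenAI-format messages
    stream=False,                                   # True to enable streaming
    api_base="http://localhost:2024",               # LangGraph server URL
    api_key="your_key_here",                        # optional, placeholder value
)
print(response.choices[0].message.content)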

Setting Up a Local LangGraph Server

Before using LiteLLM with LangGraph, you need a running LangGraph server.

Prerequisites

  • Python 3.11+
  • An LLM API key (OpenAI or Google Gemini)

1. Install the LangGraph CLI

pip install "langgraph-cli[inmem]"

2. Create a new LangGraph project

langgraph new my-agent --template new-langgraph-project-python
cd my-agent

3. Install dependencies

pip install -e .

4. Set your API key

echo "OPENAI_API_KEY=your_key_here" > .env

5. Start the server

langgraph dev

The server will start at http://localhost:2024.

Verify the server is running

curl -s --request POST \
  --url "http://localhost:2024/runs/wait" \
  --header 'Content-Type: application/json' \
  --data '{
    "assistant_id": "agent",
    "input": {
      "messages": [{"role": "human", "content": "Hello!"}]
    }
  }'
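
You can also list the agent IDs registered on the server (assuming the standard LangGraph Platform API, which exposes an assistants search endpoint):

curl -s --request POST \
  --url "http://localhost:2024/assistants/search" \
  --header 'Content-Type: application/json' \
  --data '{}'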

LiteLLM A2A Gateway

You can also connect to LangGraph agents through LiteLLM's A2A (Agent-to-Agent) Gateway UI. This provides a visual way to register and test agents without writing code.

1. Navigate to Agents

From the sidebar, click "Agents" to open the agent management page, then click "+ Add New Agent".

[Screenshot: Navigate to Agents]

2. Select LangGraph Agent Type

Click "A2A Standard" to see available agent types, then search for "langgraph" and select "Connect to LangGraph agents via the LangGraph Platform API".

[Screenshot: Select A2A Standard]

[Screenshot: Select LangGraph]

3. Configure the Agent

Fill in the following fields:

  • Agent Name - A unique identifier (e.g., lan-agent)
  • LangGraph API Base - Your LangGraph server URL, typically http://127.0.0.1:2024/
  • API Key - Optional. LangGraph doesn't require an API key by default
  • Assistant ID - Not used by LangGraph; you can enter any string here

[Screenshot: Enter Agent Name]

[Screenshot: Enter API Base]

Click "Create Agent" to save.

[Screenshot: Create Agent]

4. Test in Playground

Go to "Playground" in the sidebar to test your agent. Change the endpoint type to /v1/a2a/message/send.

[Screenshot: Go to Playground]

[Screenshot: Select A2A Endpoint]

5. Select Your Agent and Send a Message

Pick your LangGraph agent from the dropdown and send a test message.

[Screenshot: Select Agent]

[Screenshot: Send Message]

The agent responds with its capabilities. You can now interact with your LangGraph agent through the A2A protocol.
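
If you prefer to script this step rather than use the Playground, the raw request might look like the sketch below. This is not a confirmed LiteLLM schema: the JSON-RPC envelope and message fields follow the A2A specification, and how the target agent is selected (for example, by name in the path or request body) may differ in your LiteLLM version.

A2A message/send (sketch)
curl http://localhost:4000/v1/a2a/message/send \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Hello!"}],
        "messageId": "msg-001"
      }
    }
  }'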

[Screenshot: Agent Response]

Further Reading