
Day 0 Support: Claude Opus 4.6

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Ishaan Jaff
CTO, LiteLLM
Krrish Dholakia
CEO, LiteLLM

LiteLLM now supports Claude Opus 4.6 on Day 0. Use it across Anthropic, Azure, Vertex AI, and Bedrock through the LiteLLM AI Gateway.

Docker Image

docker pull ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.80.0-stable.opus-4-6

Usage - Anthropic

1. Set up config.yaml

model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: anthropic/claude-opus-4-6
      api_key: os.environ/ANTHROPIC_API_KEY

2. Start the proxy

docker run -d \
-p 4000:4000 \
-e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
-v $(pwd)/config.yaml:/app/config.yaml \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.80.0-stable.opus-4-6 \
--config /app/config.yaml

3. Test it!

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
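
Since the proxy exposes an OpenAI-compatible endpoint, you can also call it with the OpenAI Python SDK. A minimal sketch, assuming the proxy from step 2 is running and LITELLM_KEY holds one of your LiteLLM virtual keys:

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["LITELLM_KEY"],  # your LiteLLM virtual key
    base_url="http://0.0.0.0:4000",     # the proxy started in step 2
)

response = client.chat.completions.create(
    model="claude-opus-4-6",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)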

Usage - Azure

1. Set up config.yaml

model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: azure_ai/claude-opus-4-6
      api_key: os.environ/AZURE_AI_API_KEY
      api_base: os.environ/AZURE_AI_API_BASE # https://<resource>.services.ai.azure.com

2. Start the proxy

docker run -d \
-p 4000:4000 \
-e AZURE_AI_API_KEY=$AZURE_AI_API_KEY \
-e AZURE_AI_API_BASE=$AZURE_AI_API_BASE \
-v $(pwd)/config.yaml:/app/config.yaml \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.80.0-stable.opus-4-6 \
--config /app/config.yaml

3. Test it!

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'

Usage - Vertex AI

1. Set up config.yaml

model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: vertex_ai/claude-opus-4-6
      vertex_project: os.environ/VERTEX_PROJECT
      vertex_location: us-east5

2. Start the proxy

docker run -d \
-p 4000:4000 \
-e VERTEX_PROJECT=$VERTEX_PROJECT \
-e GOOGLE_APPLICATION_CREDENTIALS=/app/credentials.json \
-v $(pwd)/config.yaml:/app/config.yaml \
-v $(pwd)/credentials.json:/app/credentials.json \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.80.0-stable.opus-4-6 \
--config /app/config.yaml

3. Test it!

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'

Usage - Bedrock

1. Set up config.yaml

model_list:
  - model_name: claude-opus-4-6
    litellm_params:
      model: bedrock/anthropic.claude-opus-4-6-v1:0
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY
      aws_region_name: us-east-1

2. Start the proxy

docker run -d \
-p 4000:4000 \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
-v $(pwd)/config.yaml:/app/config.yaml \
ghcr.io/berriai/litellm:litellm_stable_release_branch-v1.80.0-stable.opus-4-6 \
--config /app/config.yaml

3. Test it!

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'

Compaction

LiteLLM supports compaction for the new claude-opus-4-6.

Enabling Compaction

To enable compaction, add the context_management parameter with the compact_20260112 edit type:

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "What is the weather in San Francisco?"
        }
    ],
    "context_management": {
        "edits": [
            {
                "type": "compact_20260112"
            }
        ]
    },
    "max_tokens": 100
}'

All context_management parameters supported by Anthropic can be passed through directly. LiteLLM automatically adds the compact-2026-01-12 beta header to the request.
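
The same request works through the LiteLLM Python SDK; a minimal sketch, assuming context_management is forwarded to Anthropic as-is (mirroring the proxy example above):

import os

import litellm

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

response = litellm.completion(
    model="anthropic/claude-opus-4-6",
    messages=[{"role": "user", "content": "What is the weather in San Francisco?"}],
    # Passed through to Anthropic; LiteLLM adds the beta header automatically
    context_management={"edits": [{"type": "compact_20260112"}]},
    max_tokens=100,
)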

Response with Compaction Block

The response will include the compaction summary in provider_specific_fields.compaction_blocks:

{
    "id": "chatcmpl-a6c105a3-4b25-419e-9551-c800633b6cb2",
    "created": 1770357619,
    "model": "claude-opus-4-6",
    "object": "chat.completion",
    "choices": [
        {
            "finish_reason": "length",
            "index": 0,
            "message": {
                "content": "I don't have access to real-time data, so I can't provide the current weather in San Francisco. To get up-to-date weather information, I'd recommend checking:\n\n- **Weather websites** like weather.com, accuweather.com, or wunderground.com\n- **Search engines** – just Google \"San Francisco weather\"\n- **Weather apps** on your phone (e.g., Apple Weather, Google Weather)\n- **National",
                "role": "assistant",
                "provider_specific_fields": {
                    "compaction_blocks": [
                        {
                            "type": "compaction",
                            "content": "Summary of the conversation: The user requested help building a web scraper..."
                        }
                    ]
                }
            }
        }
    ],
    "usage": {
        "completion_tokens": 100,
        "prompt_tokens": 86,
        "total_tokens": 186
    }
}
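
If you call through the LiteLLM Python SDK instead (as in the sketch above), the same block can be read off the message object; field names follow the response shape shown here:

message = response.choices[0].message
fields = message.provider_specific_fields or {}
for block in fields.get("compaction_blocks", []):
    print(block["type"], "->", block["content"])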

Using Compaction Blocks in Follow-up Requests

To continue the conversation with compaction, include the compaction block in the assistant message's provider_specific_fields:

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "How can I build a web scraper?"
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "text": "Certainly! To build a basic web scraper, you'll typically use a programming language like Python along with libraries such as `requests` (for fetching web pages) and `BeautifulSoup` (for parsing HTML). Here's a basic example:\n\n```python\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = 'https://example.com'\nresponse = requests.get(url)\nsoup = BeautifulSoup(response.text, 'html.parser')\n\n# Extract and print all text\ntext = soup.get_text()\nprint(text)\n```\n\nLet me know what you're interested in scraping or if you need help with a specific website!"
                }
            ],
            "provider_specific_fields": {
                "compaction_blocks": [
                    {
                        "type": "compaction",
                        "content": "Summary of the conversation: The user asked how to build a web scraper, and the assistant gave an overview using Python with requests and BeautifulSoup."
                    }
                ]
            }
        },
        {
            "role": "user",
            "content": "How do I use it to scrape product prices?"
        }
    ],
    "context_management": {
        "edits": [
            {
                "type": "compact_20260112"
            }
        ]
    },
    "max_tokens": 100
}'
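
In the Python SDK, this amounts to copying the compaction block from the previous response into the assistant turn of the follow-up request. A sketch, assuming prev_response is the earlier completion result:

import litellm

prev_message = prev_response.choices[0].message
prev_fields = prev_message.provider_specific_fields or {}

messages = [
    {"role": "user", "content": "How can I build a web scraper?"},
    {
        "role": "assistant",
        "content": prev_message.content,
        # Carry the compaction summary forward
        "provider_specific_fields": {
            "compaction_blocks": prev_fields.get("compaction_blocks", [])
        },
    },
    {"role": "user", "content": "How do I use it to scrape product prices?"},
]

followup = litellm.completion(
    model="anthropic/claude-opus-4-6",
    messages=messages,
    context_management={"edits": [{"type": "compact_20260112"}]},
    max_tokens=100,
)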

Streaming Support

Compaction blocks are also supported in streaming mode (a minimal consumption sketch follows the list). You'll receive:

  • compaction_start event when a compaction block begins
  • compaction_delta events with the compaction content
  • The accumulated compaction_blocks in provider_specific_fields
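
A minimal sketch of consuming such a stream; the exact chunk shape is an assumption here, inferred from the events listed above:

import litellm

stream = litellm.completion(
    model="anthropic/claude-opus-4-6",
    messages=[{"role": "user", "content": "What is the weather in San Francisco?"}],
    context_management={"edits": [{"type": "compact_20260112"}]},
    max_tokens=100,
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
    # Compaction content, if present on this chunk (shape assumed from the
    # compaction_start / compaction_delta events described above)
    fields = getattr(delta, "provider_specific_fields", None) or {}
    if fields.get("compaction_blocks"):
        print("\n[compaction]", fields["compaction_blocks"])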

Adaptive Thinking

LiteLLM supports adaptive thinking through the reasoning_effort parameter:

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "Solve this complex problem: What is the optimal strategy for..."
        }
    ],
    "reasoning_effort": "high"
}'

Effort Levels

Four effort levels are available: low, medium, high (default), and max. You can also pass the effort level directly via the output_config parameter:

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $LITELLM_KEY" \
--data '{
    "model": "claude-opus-4-6",
    "messages": [
        {
            "role": "user",
            "content": "Explain quantum computing"
        }
    ],
    "output_config": {
        "effort": "medium"
    }
}'

You can combine reasoning_effort with output_config for finer control over the model's output.

1M Token Context (Beta)

Opus 4.6 supports 1M token context. Premium pricing applies for prompts exceeding 200k tokens ($10/$37.50 per million input/output tokens). LiteLLM supports cost calculations for 1M token contexts.
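
As a rough back-of-the-envelope check, assuming (per Anthropic's long-context pricing) that the premium rate applies to the entire request once the prompt exceeds 200k tokens:

# Hypothetical example: a 300k-token prompt with a 5k-token response
# at the premium rates quoted above.
PREMIUM_INPUT_PER_M = 10.00   # USD per 1M input tokens
PREMIUM_OUTPUT_PER_M = 37.50  # USD per 1M output tokens

prompt_tokens = 300_000
completion_tokens = 5_000

cost = (
    prompt_tokens / 1_000_000 * PREMIUM_INPUT_PER_M
    + completion_tokens / 1_000_000 * PREMIUM_OUTPUT_PER_M
)
print(f"Estimated cost: ${cost:.4f}")  # $3.1875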

US-Only Inference

US-only inference is available at 1.1× standard token pricing. LiteLLM supports this pricing model in its cost calculations.

Day 0 Support: Claude Opus 4.5 (+Advanced Features)

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

This guide covers Anthropic's latest model (Claude Opus 4.5) and its advanced features now available in LiteLLM: Tool Search, Programmatic Tool Calling, Tool Input Examples, and the Effort Parameter.


Feature                      Supported Models
Tool Search                  Claude Opus 4.5, Sonnet 4.5
Programmatic Tool Calling    Claude Opus 4.5, Sonnet 4.5
Input Examples               Claude Opus 4.5, Sonnet 4.5
Effort Parameter             Claude Opus 4.5 only

Supported Providers: Anthropic, Bedrock, Vertex AI, Azure AI.

Usage

import os
from litellm import completion

# set env - [OPTIONAL] replace with your anthropic key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

messages = [{"role": "user", "content": "Hey! how's it going?"}]

## OPENAI /chat/completions API format
response = completion(model="claude-opus-4-5-20251101", messages=messages)
print(response)

Usage - Bedrock

info

LiteLLM uses the boto3 library to authenticate with Bedrock.

For more ways to authenticate with Bedrock, see the Bedrock documentation.

import os
from litellm import completion

os.environ["AWS_ACCESS_KEY_ID"] = ""
os.environ["AWS_SECRET_ACCESS_KEY"] = ""
os.environ["AWS_REGION_NAME"] = ""

## OPENAI /chat/completions API format
response = completion(
    model="bedrock/us.anthropic.claude-opus-4-5-20251101-v1:0",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)

Usage - Vertex AI

from litellm import completion
import json

## GET CREDENTIALS
## RUN ##
# !gcloud auth application-default login - run this to add vertex credentials to your env
## OR ##
file_path = 'path/to/vertex_ai_service_account.json'

# Load the JSON file
with open(file_path, 'r') as file:
    vertex_credentials = json.load(file)

# Convert to JSON string
vertex_credentials_json = json.dumps(vertex_credentials)

## COMPLETION CALL
response = completion(
    model="vertex_ai/claude-opus-4-5@20251101",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    vertex_credentials=vertex_credentials_json,
    vertex_project="your-project-id",
    vertex_location="us-east5"
)

Usage - Azure Anthropic (Azure Foundry Claude)

LiteLLM funnels Azure Claude deployments through the azure_ai/ provider so Claude Opus models on Azure Foundry keep working with Tool Search, Effort, streaming, and the rest of the advanced feature set. Point AZURE_AI_API_BASE to https://<resource>.services.ai.azure.com/anthropic (LiteLLM appends /v1/messages automatically) and authenticate with AZURE_AI_API_KEY or an Azure AD token.

import os
from litellm import completion

# Configure Azure credentials
os.environ["AZURE_AI_API_KEY"] = "your-azure-ai-api-key"
os.environ["AZURE_AI_API_BASE"] = "https://my-resource.services.ai.azure.com/anthropic"

response = completion(
    model="azure_ai/claude-opus-4-1",
    messages=[{"role": "user", "content": "Explain how Azure Anthropic hosts Claude Opus differently from the public Anthropic API."}],
    max_tokens=1200,
    temperature=0.7,
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Tool Search

Tool Search lets Claude work with thousands of tools by dynamically loading them on demand, instead of loading all tools into the context window upfront.

Usage Example

import litellm
import os

# Configure your API key
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

# Define your tools with defer_loading
tools = [
    # Tool search tool (regex variant)
    {
        "type": "tool_search_tool_regex_20251119",
        "name": "tool_search_tool_regex"
    },
    # Deferred tools - loaded on-demand
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a given location. Returns temperature and conditions.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        },
        "defer_loading": True  # Load on-demand
    },
    {
        "type": "function",
        "function": {
            "name": "search_files",
            "description": "Search through files in the workspace using keywords",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "file_types": {
                        "type": "array",
                        "items": {"type": "string"}
                    }
                },
                "required": ["query"]
            }
        },
        "defer_loading": True
    },
    {
        "type": "function",
        "function": {
            "name": "query_database",
            "description": "Execute SQL queries against the database",
            "parameters": {
                "type": "object",
                "properties": {
                    "sql": {"type": "string"}
                },
                "required": ["sql"]
            }
        },
        "defer_loading": True
    }
]

# Make a request - Claude will search for and use relevant tools
response = litellm.completion(
    model="anthropic/claude-opus-4-5-20251101",
    messages=[{
        "role": "user",
        "content": "What's the weather like in San Francisco?"
    }],
    tools=tools
)

print("Claude's response:", response.choices[0].message.content)
print("Tool calls:", response.choices[0].message.tool_calls)

# Check tool search usage
if hasattr(response.usage, 'server_tool_use'):
    print(f"Tool searches performed: {response.usage.server_tool_use.tool_search_requests}")

For natural language queries instead of regex patterns:

tools = [
    {
        "type": "tool_search_tool_bm25_20251119",  # Natural language variant
        "name": "tool_search_tool_bm25"
    },
    # ... your deferred tools
]

Programmatic Tool Calling

Programmatic tool calling allows Claude to write code that invokes your tools from within the code execution environment.

import litellm
import json

# Define tools that can be called programmatically
tools = [
    # Code execution tool (required for programmatic calling)
    {
        "type": "code_execution_20250825",
        "name": "code_execution"
    },
    # Tool that can be called from code
    {
        "type": "function",
        "function": {
            "name": "query_database",
            "description": "Execute a SQL query against the sales database. Returns a list of rows as JSON objects.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sql": {
                        "type": "string",
                        "description": "SQL query to execute"
                    }
                },
                "required": ["sql"]
            }
        },
        "allowed_callers": ["code_execution_20250825"]  # Enable programmatic calling
    }
]

# First request
response = litellm.completion(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[{
        "role": "user",
        "content": "Query sales data for West, East, and Central regions, then tell me which had the highest revenue"
    }],
    tools=tools
)

print("Claude's response:", response.choices[0].message)

# Handle tool calls
messages = [
    {"role": "user", "content": "Query sales data for West, East, and Central regions, then tell me which had the highest revenue"},
    {"role": "assistant", "content": response.choices[0].message.content, "tool_calls": response.choices[0].message.tool_calls}
]

# Process each tool call
for tool_call in response.choices[0].message.tool_calls:
    # Check if it's a programmatic call
    if hasattr(tool_call, 'caller') and tool_call.caller:
        print(f"Programmatic call to {tool_call.function.name}")
        print(f"Called from: {tool_call.caller}")

    # Simulate tool execution
    if tool_call.function.name == "query_database":
        args = json.loads(tool_call.function.arguments)
        # Simulate database query
        result = json.dumps([
            {"region": "West", "revenue": 150000},
            {"region": "East", "revenue": 180000},
            {"region": "Central", "revenue": 120000}
        ])

        messages.append({
            "role": "user",
            "content": [{
                "type": "tool_result",
                "tool_use_id": tool_call.id,
                "content": result
            }]
        })

# Get final response
final_response = litellm.completion(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=messages,
    tools=tools
)

print("\nFinal answer:", final_response.choices[0].message.content)

Tool Input Examples

You can now provide Claude with concrete examples of how to call your tools via the input_examples field.

import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_calendar_event",
            "description": "Create a new calendar event with attendees and reminders",
            "parameters": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "start_time": {
                        "type": "string",
                        "description": "ISO 8601 format: YYYY-MM-DDTHH:MM:SS"
                    },
                    "duration_minutes": {"type": "integer"},
                    "attendees": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "email": {"type": "string"},
                                "optional": {"type": "boolean"}
                            }
                        }
                    },
                    "reminders": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "minutes_before": {"type": "integer"},
                                "method": {"type": "string", "enum": ["email", "popup"]}
                            }
                        }
                    }
                },
                "required": ["title", "start_time", "duration_minutes"]
            }
        },
        # Provide concrete examples
        "input_examples": [
            {
                "title": "Team Standup",
                "start_time": "2025-01-15T09:00:00",
                "duration_minutes": 30,
                "attendees": [
                    {"email": "alice@company.com", "optional": False},
                    {"email": "bob@company.com", "optional": False}
                ],
                "reminders": [
                    {"minutes_before": 15, "method": "popup"}
                ]
            },
            {
                "title": "Lunch Break",
                "start_time": "2025-01-15T12:00:00",
                "duration_minutes": 60
                # Demonstrates optional fields can be omitted
            }
        ]
    }
]

response = litellm.completion(
    model="anthropic/claude-sonnet-4-5-20250929",
    messages=[{
        "role": "user",
        "content": "Schedule a team meeting for tomorrow at 2pm for 45 minutes with john@company.com and sarah@company.com"
    }],
    tools=tools
)

print("Tool call:", response.choices[0].message.tool_calls[0].function.arguments)

Effort Parameter: Control Token Usage

Control how much effort Claude puts into its response using the reasoning_effort parameter. This allows you to trade off between response thoroughness and token efficiency.

info

LiteLLM automatically maps reasoning_effort to Anthropic's output_config format and adds the required effort-2025-11-24 beta header for Claude Opus 4.5.

Supported values for the reasoning_effort parameter: "high", "medium", "low".

Usage Example

import litellm

message = "Analyze the trade-offs between microservices and monolithic architectures"

# High effort (default) - Maximum capability
response_high = litellm.completion(
    model="anthropic/claude-opus-4-5-20251101",
    messages=[{"role": "user", "content": message}],
    reasoning_effort="high"
)

print("High effort response:")
print(response_high.choices[0].message.content)
print(f"Tokens used: {response_high.usage.completion_tokens}\n")

# Medium effort - Balanced approach
response_medium = litellm.completion(
    model="anthropic/claude-opus-4-5-20251101",
    messages=[{"role": "user", "content": message}],
    reasoning_effort="medium"
)

print("Medium effort response:")
print(response_medium.choices[0].message.content)
print(f"Tokens used: {response_medium.usage.completion_tokens}\n")

# Low effort - Maximum efficiency
response_low = litellm.completion(
    model="anthropic/claude-opus-4-5-20251101",
    messages=[{"role": "user", "content": message}],
    reasoning_effort="low"
)

print("Low effort response:")
print(response_low.choices[0].message.content)
print(f"Tokens used: {response_low.usage.completion_tokens}\n")

# Compare token usage
print("Token Comparison:")
print(f"High: {response_high.usage.completion_tokens} tokens")
print(f"Medium: {response_medium.usage.completion_tokens} tokens")
print(f"Low: {response_low.usage.completion_tokens} tokens")