
v0

Overview

| Property | Details |
|----------|---------|
| Description | v0 provides AI models optimized for code generation, particularly for creating Next.js applications, React components, and modern web development. |
| Provider Route on LiteLLM | v0/ |
| Link to Provider Doc | v0 API Documentation ↗ (https://v0.dev/docs/v0-model-api) |
| Base URL | https://api.v0.dev/v1 |
| Supported Operations | /chat/completions |

We support ALL v0 models. Just set v0/ as the model prefix when sending completion requests.

Available Models

| Model | Description | Context Window | Max Output |
|-------|-------------|----------------|------------|
| v0/v0-1.5-lg | Large model for advanced code generation and reasoning | 512,000 tokens | 512,000 tokens |
| v0/v0-1.5-md | Medium model for everyday code generation tasks | 128,000 tokens | 128,000 tokens |
| v0/v0-1.0-md | Legacy medium model | 128,000 tokens | 128,000 tokens |
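
To check these limits programmatically, you can query LiteLLM's model cost map at runtime. A minimal sketch, assuming the v0 models are registered in the cost map bundled with your LiteLLM version (get_model_info raises if a model is unknown):

Query model limits
import litellm

# Look up metadata for a v0 model in LiteLLM's model cost map
info = litellm.get_model_info("v0/v0-1.5-md")

print(info.get("max_input_tokens"))   # context window
print(info.get("max_output_tokens"))  # max output tokens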

Required Variables

Environment Variables
import os

os.environ["V0_API_KEY"] = ""  # your v0 API key from v0.dev

Note: v0 API access requires a Premium or Team plan. Visit v0.dev/chat/settings/billing to upgrade.

Usage - LiteLLM Python SDK

Non-streaming

v0 Non-streaming Completion
import os
import litellm
from litellm import completion

os.environ["V0_API_KEY"] = "" # your v0 API key

messages = [{"content": "Create a React button component with hover effects", "role": "user"}]

# v0 call
response = completion(
    model="v0/v0-1.5-md",
    messages=messages
)

print(response)
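
The call returns LiteLLM's OpenAI-compatible response object, so the generated code sits on the first choice and token usage is reported alongside it:

Read the response
# The generated code is on the first choice
print(response.choices[0].message.content)

# Token usage is reported in the OpenAI format
print(response.usage.total_tokens)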

Streaming

v0 Streaming Completion
import os
import litellm
from litellm import completion

os.environ["V0_API_KEY"] = "" # your v0 API key

messages = [{"content": "Create a React button component with hover effects", "role": "user"}]

# v0 call with streaming
response = completion(
    model="v0/v0-1.5-md",
    messages=messages,
    stream=True
)

for chunk in response:
    print(chunk)
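
Each chunk follows the OpenAI streaming delta format, so instead of printing raw chunks you can assemble the full generated output as it arrives (an alternative to the loop above; the content delta may be None on some chunks):

Assemble streamed output
full_text = ""
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta is not None:
        print(delta, end="")
        full_text += delta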

Vision/Multimodal Support

All v0 models support vision inputs, allowing you to send images along with text:

v0 Vision/Multimodal
import os
import litellm
from litellm import completion

os.environ["V0_API_KEY"] = "" # your v0 API key

messages = [{
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Recreate this UI design in React"
        },
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/ui-design.png"
            }
        }
    ]
}]

response = completion(
    model="v0/v0-1.5-lg",
    messages=messages
)

print(response)
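
If the design lives in a local file rather than at a public URL, you can send it as a base64 data URL instead, assuming v0 accepts data URLs in the standard OpenAI-compatible image_url format (the file path below is a placeholder):

v0 Vision with a local image
import base64
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

# Encode a local screenshot as a base64 data URL (placeholder path)
with open("ui-design.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "Recreate this UI design in React"},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{image_b64}"}
        }
    ]
}]

response = completion(
    model="v0/v0-1.5-lg",
    messages=messages
)

print(response.choices[0].message.content)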

Function Calling

v0 supports function calling for structured outputs:

v0 Function Calling
import os
import litellm
from litellm import completion

os.environ["V0_API_KEY"] = "" # your v0 API key

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_component",
            "description": "Create a React component",
            "parameters": {
                "type": "object",
                "properties": {
                    "component_name": {
                        "type": "string",
                        "description": "The name of the component"
                    },
                    "props": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "List of component props"
                    }
                },
                "required": ["component_name"]
            }
        }
    }
]

response = completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Create a Button component with onClick and disabled props"}],
    tools=tools,
    tool_choice="auto"
)

print(response)
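
When the model chooses to call the function, the call comes back as an OpenAI-style tool call whose arguments are a JSON string, so parse them before use (the message may contain no tool calls if the model replied in plain text):

Read the tool call
import json

tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    call = tool_calls[0]
    print(call.function.name)                   # e.g. "create_component"
    args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
    print(args.get("component_name"), args.get("props"))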

Usage - LiteLLM Proxy

Add the following to your LiteLLM Proxy configuration file:

config.yaml
model_list:
  - model_name: v0-large
    litellm_params:
      model: v0/v0-1.5-lg
      api_key: os.environ/V0_API_KEY

  - model_name: v0-medium
    litellm_params:
      model: v0/v0-1.5-md
      api_key: os.environ/V0_API_KEY

  - model_name: v0-legacy
    litellm_params:
      model: v0/v0-1.0-md
      api_key: os.environ/V0_API_KEY

Start your LiteLLM Proxy server:

Start LiteLLM Proxy
litellm --config config.yaml

# RUNNING on http://0.0.0.0:4000

Once the proxy is running, call your v0 models through any OpenAI-compatible client:

v0 via Proxy - Non-streaming
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Non-streaming response
response = client.chat.completions.create(
    model="v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}]
)

print(response.choices[0].message.content)

v0 via Proxy - Streaming
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Streaming response
response = client.chat.completions.create(
    model="v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
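
If your application already uses the LiteLLM SDK, you can also route requests through the proxy without switching to the OpenAI client. A minimal sketch, assuming the litellm_proxy/ provider prefix available in recent LiteLLM versions:

v0 via Proxy - LiteLLM SDK
import litellm

response = litellm.completion(
    model="litellm_proxy/v0-medium",        # model_name from config.yaml
    messages=[{"role": "user", "content": "Create a React card component"}],
    api_base="http://localhost:4000",       # Your proxy URL
    api_key="your-proxy-api-key"            # Your proxy API key
)

print(response.choices[0].message.content)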

For more detailed information on using the LiteLLM Proxy, see the LiteLLM Proxy documentation.

Supported OpenAI Parameters

v0 supports the following OpenAI-compatible parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| messages | array | Required. Array of message objects with 'role' and 'content' |
| model | string | Required. Model ID (v0-1.5-lg, v0-1.5-md, v0-1.0-md) |
| stream | boolean | Optional. Enable streaming responses |
| tools | array | Optional. List of available tools/functions |
| tool_choice | string/object | Optional. Control tool/function calling |

Note: v0 has a limited set of supported parameters compared to the full OpenAI API. Parameters like temperature, max_tokens, top_p, etc. are not supported.
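
If your application already sets parameters that v0 does not accept, LiteLLM can drop them from the request instead of erroring. A minimal sketch using LiteLLM's drop_params option:

Drop unsupported parameters
import litellm
from litellm import completion

litellm.drop_params = True  # silently drop parameters the provider does not support

# temperature is not supported by v0; with drop_params enabled it is removed
# from the request rather than causing an error
response = completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Create a React button component"}],
    temperature=0.2
)

print(response.choices[0].message.content)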

Advanced Usage

Custom API Base

If you're using a custom v0 deployment:

Custom API Base
import litellm

response = litellm.completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://your-custom-v0-endpoint.com/v1",
    api_key="your-api-key"
)

Pricing

v0 models require a Premium or Team subscription. Visit v0.dev/chat/settings/billing for current pricing information.

Additional Resources