# v0

## Overview
| Property | Details |
|---|---|
| Description | v0 provides AI models optimized for code generation, particularly for creating Next.js applications, React components, and modern web development. |
| Provider Route on LiteLLM | `v0/` |
| Link to Provider Doc | [v0 API Documentation](https://v0.dev/docs/v0-model-api) |
| Base URL | `https://api.v0.dev/v1` |
| Supported Operations | `/chat/completions` |
We support **all** v0 models. Just set `v0/` as a prefix when sending completion requests, e.g. `model="v0/v0-1.5-md"`.
## Available Models
| Model | Description | Context Window | Max Output |
|---|---|---|---|
| v0/v0-1.5-lg | Large model for advanced code generation and reasoning | 512,000 tokens | 512,000 tokens |
| v0/v0-1.5-md | Medium model for everyday code generation tasks | 128,000 tokens | 128,000 tokens |
| v0/v0-1.0-md | Legacy medium model | 128,000 tokens | 128,000 tokens |
## Required Variables
```python
import os

os.environ["V0_API_KEY"] = ""  # your v0 API key from v0.dev
```
**Note:** v0 API access requires a Premium or Team plan. Visit [v0.dev/chat/settings/billing](https://v0.dev/chat/settings/billing) to upgrade.
## Usage - LiteLLM Python SDK

### Non-streaming
```python
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

messages = [{"content": "Create a React button component with hover effects", "role": "user"}]

# v0 call
response = completion(
    model="v0/v0-1.5-md",
    messages=messages
)

print(response)
```
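The response follows the OpenAI response format, so the generated code itself lives in `response.choices[0].message.content`:

```python
# Extract just the generated text from the response
print(response.choices[0].message.content)
```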
### Streaming
```python
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

messages = [{"content": "Create a React button component with hover effects", "role": "user"}]

# v0 call with streaming
response = completion(
    model="v0/v0-1.5-md",
    messages=messages,
    stream=True
)

for chunk in response:
    print(chunk)
```
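Each chunk follows the OpenAI streaming format. To print only the generated text as it arrives and reassemble the full output, read the content deltas instead of printing raw chunks (a minimal sketch that replaces the loop above):

```python
full_response = ""
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta is not None:  # some chunks (e.g. the final one) carry no text
        full_response += delta
        print(delta, end="")
```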
## Vision/Multimodal Support

All v0 models support vision inputs, allowing you to send images along with text:
```python
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

messages = [{
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Recreate this UI design in React"
        },
        {
            "type": "image_url",
            "image_url": {
                "url": "https://example.com/ui-design.png"
            }
        }
    ]
}]

response = completion(
    model="v0/v0-1.5-lg",
    messages=messages
)

print(response)
```
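Local images can also be sent by base64-encoding them into a data URL, following the standard OpenAI content format (a minimal sketch, assuming v0 accepts data URLs like other OpenAI-compatible endpoints; `ui-design.png` is a hypothetical local file):

```python
import base64
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

# Base64-encode a local screenshot into a data URL
with open("ui-design.png", "rb") as f:  # hypothetical local file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = completion(
    model="v0/v0-1.5-lg",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Recreate this UI design in React"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}}
        ]
    }]
)
print(response.choices[0].message.content)
```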
## Function Calling

v0 supports function calling for structured outputs:
```python
import os
from litellm import completion

os.environ["V0_API_KEY"] = ""  # your v0 API key

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_component",
            "description": "Create a React component",
            "parameters": {
                "type": "object",
                "properties": {
                    "component_name": {
                        "type": "string",
                        "description": "The name of the component"
                    },
                    "props": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "List of component props"
                    }
                },
                "required": ["component_name"]
            }
        }
    }
]

response = completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Create a Button component with onClick and disabled props"}],
    tools=tools,
    tool_choice="auto"
)

print(response)
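```

The returned tool calls follow the OpenAI format, so their JSON-encoded arguments can be parsed from the response (a minimal sketch, assuming the model chose to call the tool):

```python
import json

message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        # Arguments arrive as a JSON string
        args = json.loads(tool_call.function.arguments)
        print(tool_call.function.name, args)
```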
## Usage - LiteLLM Proxy

Add the following to your LiteLLM Proxy configuration file:
```yaml
model_list:
  - model_name: v0-large
    litellm_params:
      model: v0/v0-1.5-lg
      api_key: os.environ/V0_API_KEY
  - model_name: v0-medium
    litellm_params:
      model: v0/v0-1.5-md
      api_key: os.environ/V0_API_KEY
  - model_name: v0-legacy
    litellm_params:
      model: v0/v0-1.0-md
      api_key: os.environ/V0_API_KEY
```
Start your LiteLLM Proxy server:

```shell
litellm --config config.yaml

# RUNNING on http://0.0.0.0:4000
```
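You can confirm the model aliases are registered by listing them through the proxy's OpenAI-compatible `/v1/models` endpoint (a quick check using the OpenAI SDK; the key is whichever proxy key you configured):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",
    api_key="your-proxy-api-key"
)

# List the model aliases registered on the proxy
for model in client.models.list():
    print(model.id)  # e.g. v0-large, v0-medium, v0-legacy
```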
#### OpenAI SDK

```python
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Non-streaming response
response = client.chat.completions.create(
    model="v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}]
)

print(response.choices[0].message.content)
```
```python
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-proxy-api-key"       # Your proxy API key
)

# Streaming response
response = client.chat.completions.create(
    model="v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
#### LiteLLM SDK

```python
import litellm

# Configure LiteLLM to use your proxy
response = litellm.completion(
    model="litellm_proxy/v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}],
    api_base="http://localhost:4000",
    api_key="your-proxy-api-key"
)

print(response.choices[0].message.content)
```
```python
import litellm

# Configure LiteLLM to use your proxy with streaming
response = litellm.completion(
    model="litellm_proxy/v0-medium",
    messages=[{"role": "user", "content": "Create a React card component"}],
    api_base="http://localhost:4000",
    api_key="your-proxy-api-key",
    stream=True
)

for chunk in response:
    if hasattr(chunk.choices[0], 'delta') and chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
#### cURL

```shell
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-proxy-api-key" \
  -d '{
    "model": "v0-medium",
    "messages": [{"role": "user", "content": "Create a React card component"}]
  }'
```
```shell
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-proxy-api-key" \
  -d '{
    "model": "v0-medium",
    "messages": [{"role": "user", "content": "Create a React card component"}],
    "stream": true
  }'
```
For more detailed information on using the LiteLLM Proxy, see the LiteLLM Proxy documentation.
## Supported OpenAI Parameters
v0 supports the following OpenAI-compatible parameters:
| Parameter | Type | Description |
|---|---|---|
| `messages` | array | **Required.** Array of message objects with `role` and `content` |
| `model` | string | **Required.** Model ID (`v0-1.5-lg`, `v0-1.5-md`, `v0-1.0-md`) |
| `stream` | boolean | Optional. Enable streaming responses |
| `tools` | array | Optional. List of available tools/functions |
| `tool_choice` | string/object | Optional. Control tool/function calling |
**Note:** v0 has a limited set of supported parameters compared to the full OpenAI API. Parameters like `temperature`, `max_tokens`, `top_p`, etc. are not supported.
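If your application sets unsupported parameters anyway, LiteLLM's `drop_params` option can strip them before the request is sent (a minimal sketch):

```python
import litellm

response = litellm.completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Create a React button component"}],
    temperature=0.7,   # not supported by v0; dropped instead of raising an error
    drop_params=True   # silently drop OpenAI params this provider doesn't support
)
```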
## Advanced Usage

### Custom API Base

If you're using a custom v0 deployment:
```python
import litellm

response = litellm.completion(
    model="v0/v0-1.5-md",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://your-custom-v0-endpoint.com/v1",
    api_key="your-api-key"
)
```
## Pricing

v0 models require a Premium or Team subscription. Visit [v0.dev/chat/settings/billing](https://v0.dev/chat/settings/billing) for current pricing information.