📄️ Router - Load Balancing, Fallbacks
LiteLLM manages load balancing, fallbacks, and retries across multiple model deployments.
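For example, a minimal Router sketch (keys and endpoints are placeholders) that spreads requests across two deployments registered under one `model_name`:

```python
import os
from litellm import Router

# Two deployments share the model_name "gpt-3.5-turbo";
# the Router picks one per request (simple-shuffle by default).
model_list = [
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/gpt-35-turbo",
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_version": os.getenv("AZURE_API_VERSION"),
            "api_base": os.getenv("AZURE_API_BASE"),
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
    },
]

router = Router(model_list=model_list)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```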
📄️ [BETA] Request Prioritization
Beta feature. Use for testing only.
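A sketch of the beta scheduler, assuming the `priority` request parameter (lower value = served sooner) described on the prioritization page:

```python
import asyncio
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}}
    ]
)

async def main():
    # priority=0 is the highest priority; higher numbers wait longer in the queue.
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "urgent request"}],
        priority=0,
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```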
📄️ Proxy - Load Balancing
Load balance multiple instances of the same model
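From the client's side, load balancing is transparent: point the OpenAI SDK at the proxy (the `base_url` and key below are placeholders) and the proxy spreads requests across every deployment registered under the requested model name:

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-1234",               # a proxy virtual key (placeholder)
    base_url="http://0.0.0.0:4000",  # wherever the proxy is running
)

# The proxy balances this call across all deployments named
# "gpt-3.5-turbo" in its config.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```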
📄️ Proxy - Fallbacks, Retries
Quick start for fallbacks, retries, and load balancing on the proxy.
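As a sketch of the same behavior on the Router SDK: retry a failing model group a couple of times, then fall back to another group (the proxy accepts equivalent settings in its config):

```python
from litellm import Router

router = Router(
    model_list=[
        {"model_name": "gpt-3.5-turbo", "litellm_params": {"model": "gpt-3.5-turbo"}},
        {"model_name": "gpt-4", "litellm_params": {"model": "gpt-4"}},
    ],
    num_retries=2,                             # retry the same model group before falling back
    fallbacks=[{"gpt-3.5-turbo": ["gpt-4"]}],  # if gpt-3.5-turbo keeps failing, try gpt-4
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
```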
📄️ Tag Based Routing
Route requests based on tags.
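A client-side sketch, assuming the proxy config tags each deployment (e.g. `tags: ["free"]` in `litellm_params`) and enables tag filtering; the tag name and its placement under `metadata` here are illustrative:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# Tags sent with the request are matched against each deployment's tags;
# only deployments with a matching tag are candidates for this call.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    extra_body={"metadata": {"tags": ["free"]}},
)
```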
📄️ Provider Budget Routing
Set budgets per LLM provider, for example $100/day for OpenAI and $100/day for Azure.
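A Router SDK sketch, assuming `provider_budget_config` accepts the same provider-to-`{budget_limit, time_period}` mapping used in the proxy's `router_settings` (keys and endpoints are placeholders):

```python
import os
from litellm import Router

# Once a provider exhausts its budget for the window, the router stops
# sending it traffic and uses the remaining providers instead.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")},
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "azure/gpt-35-turbo",
                "api_key": os.getenv("AZURE_API_KEY"),
                "api_base": os.getenv("AZURE_API_BASE"),
            },
        },
    ],
    provider_budget_config={
        "openai": {"budget_limit": 100, "time_period": "1d"},  # $100/day for OpenAI
        "azure": {"budget_limit": 100, "time_period": "1d"},   # $100/day for Azure
    },
)
```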
📄️ Team-based Routing
Route requests to specific deployments based on the calling team.
📄️ Region-based Routing
Route specific customers to EU-only models.
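A client-side sketch, assuming the proxy's deployments carry a region (e.g. `region_name: eu` in `litellm_params`) and the customer passed as `user` was created with an EU-only `allowed_model_region`; the identifiers below are illustrative:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# "user" identifies a customer restricted to EU regions; the proxy then
# only routes this request to deployments marked as EU.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    user="customer-eu-123",
)
```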