
Region-based Routing

Route specific customers to eu-only models.

By specifying 'allowed_model_region' for a customer, LiteLLM will filter out any models in a model group that are not in the allowed region (e.g. 'eu').


1. Create a customer with a region specification

Use the litellm 'end-user' object for this.

End-users can be tracked / identified by passing the 'user' param to litellm in an OpenAI chat completion/embedding call.

curl -X POST --location 'http://localhost:4000/end_user/new' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
    "user_id" : "ishaan-jaff-45",
    "allowed_model_region": "eu" # 👈 SPECIFY ALLOWED REGION='eu'
}'

2. Add eu models to model-group

Add eu models to a model group. For azure models, litellm can automatically infer the region (no need to set it).

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/gpt-35-turbo-eu # 👈 EU azure model
      api_key: os.environ/AZURE_EUROPE_API_KEY
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_version: "2023-05-15"
      api_key: os.environ/AZURE_API_KEY

router_settings:
  enable_pre_call_checks: true # 👈 IMPORTANT
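With pre-call checks enabled, the router drops deployments outside the customer's allowed region before choosing one. A simplified sketch of that filtering step (illustrative only, not LiteLLM's actual implementation; the `region_name` values are assumed):

```python
def filter_by_region(deployments, allowed_region):
    """Keep only deployments whose region matches the customer's allowed region.
    A customer with no region restriction can use every deployment."""
    if allowed_region is None:
        return deployments
    return [d for d in deployments if d.get("region_name") == allowed_region]

# Example: two gpt-3.5-turbo deployments, with regions as the router
# might infer them for Azure endpoints.
deployments = [
    {"model": "azure/gpt-35-turbo-eu", "region_name": "eu"},
    {"model": "azure/chatgpt-v-2", "region_name": "us"},
]

print(filter_by_region(deployments, "eu"))
# → [{'model': 'azure/gpt-35-turbo-eu', 'region_name': 'eu'}]
```

A customer with `allowed_model_region: "eu"` therefore only ever reaches the EU deployment, even though both deployments share the model name `gpt-3.5-turbo`.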

Start the proxy

litellm --config /path/to/config.yaml

3. Test it!

Make a simple chat completions call to the proxy. In the response headers, you should see the returned api base.

curl -X POST --location 'http://localhost:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-1234' \
--data '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "what is the meaning of the universe? 1234"
        }
    ],
    "user": "ishaan-jaff-45" # 👈 USER ID
}'

Expected API Base in response headers

x-litellm-api-base: ""


What happens if there are no available models for that region?

Since the router filters out models that are not in the specified region, the request will fail with an error if no models in that region are available.