Exception Mapping

LiteLLM maps exceptions across all providers to their OpenAI counterparts.

All exceptions can be imported from litellm - e.g. from litellm import BadRequestError
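
For example, the exception classes listed in the table below can all be imported the same way:

from litellm import (
    BadRequestError,
    ContextWindowExceededError,
    RateLimitError,
    Timeout,
)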

LiteLLM Exceptions

| Status Code | Error Type | Inherits from | Description |
|---|---|---|---|
| 400 | BadRequestError | openai.BadRequestError | |
| 400 | UnsupportedParamsError | litellm.BadRequestError | Raised when unsupported params are passed |
| 400 | ContextWindowExceededError | litellm.BadRequestError | Special error type for context window exceeded error messages - enables context window fallbacks |
| 400 | ContentPolicyViolationError | litellm.BadRequestError | Special error type for content policy violation error messages - enables content policy fallbacks |
| 400 | InvalidRequestError | openai.BadRequestError | Deprecated error, use BadRequestError instead |
| 401 | AuthenticationError | openai.AuthenticationError | |
| 403 | PermissionDeniedError | openai.PermissionDeniedError | |
| 404 | NotFoundError | openai.NotFoundError | Raised when an invalid model is passed, e.g. gpt-8 |
| 408 | Timeout | openai.APITimeoutError | Raised when a timeout occurs |
| 422 | UnprocessableEntityError | openai.UnprocessableEntityError | |
| 429 | RateLimitError | openai.RateLimitError | |
| 500 | APIConnectionError | openai.APIConnectionError | Returned when any unmapped error occurs |
| 500 | APIError | openai.APIError | Generic 500-status-code error |
| 503 | ServiceUnavailableError | openai.APIStatusError | Raised when the provider returns a service unavailable error |
| >=500 | InternalServerError | openai.InternalServerError | Raised when any unmapped 500-status-code error is returned |
| N/A | APIResponseValidationError | openai.APIResponseValidationError | Raised when Rules are used and a request/response fails a rule |
| N/A | BudgetExceededError | Exception | Raised by the proxy when the budget is exceeded |
| N/A | JSONSchemaValidationError | litellm.APIResponseValidationError | Raised when the response does not match the expected JSON schema - used if the response_schema param is passed with enforce_validation=True |
| N/A | MockException | Exception | Internal exception raised by mock_completion. Do not use directly |
| N/A | OpenAIError | openai.OpenAIError | Deprecated internal exception, inherits from openai.OpenAIError |

Base case - we return APIConnectionError.

All our exceptions inherit from OpenAI's exception types, so any error handling you already have for OpenAI exceptions should work out of the box with LiteLLM.

For all cases, the exception returned inherits from the original OpenAI Exception but contains 3 additional attributes:

  • status_code - the HTTP status code of the exception
  • message - the error message
  • llm_provider - the provider that raised the exception
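
For example, these attributes can be read straight off the caught exception. A minimal sketch - the model name and the intentionally invalid API key below are only illustrative:

import litellm

try:
    litellm.completion(
        model="claude-3-haiku-20240307",
        messages=[{"role": "user", "content": "hi"}],
        api_key="invalid-key",  # illustrative: forces an authentication failure
    )
except litellm.AuthenticationError as e:
    # the 3 attributes LiteLLM adds on top of the OpenAI exception type
    print(e.status_code)   # e.g. 401
    print(e.message)       # the provider's error message
    print(e.llm_provider)  # e.g. "anthropic"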

Usage

import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01,  # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))

Usage - Catching Streaming Exceptions

import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 pg essay"
            }
        ],
        timeout=0.0001,  # this will raise an exception
        stream=True,
    )
    for chunk in response:
        print(chunk)
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
except Exception as e:
    print(f"Did not raise error `openai.APITimeoutError`. Instead raised error type: {type(e)}, Error: {e}")

Usage - Should you retry exception?

import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01,  # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    should_retry = litellm._should_retry(e.status_code)
    print(f"should_retry: {should_retry}")

Details

To see how it's implemented, check out the code.

Create an issue or make a PR if you want to improve the exception mapping.

Note: For OpenAI and Azure we return the original exception (since they're already of the OpenAI error type), but we add the 'llm_provider' attribute to them. See code

Custom mapping list

Base case - we return the litellm.APIConnectionError exception (which inherits from openai.APIConnectionError).

custom_llm_provider | Timeout | ContextWindowExceededError | BadRequestError | NotFoundError | ContentPolicyViolationError | AuthenticationError | APIError | RateLimitError | ServiceUnavailableError | PermissionDeniedError | UnprocessableEntityError
openai ✓✓✓✓✓
watsonx ✓
text-completion-openai ✓✓✓✓✓
custom_openai ✓✓✓✓✓
openai_compatible_providers ✓✓✓✓✓
anthropic ✓✓✓✓✓✓✓
replicate ✓✓✓✓✓✓✓
bedrock ✓✓✓✓✓✓✓✓
sagemaker ✓✓
vertex_ai ✓✓✓✓
palm ✓✓✓
gemini ✓✓✓
cloudflare ✓✓
cohere ✓✓✓✓
cohere_chat ✓✓✓✓
huggingface ✓✓✓✓✓✓
ai21 ✓✓✓✓✓✓
nlp_cloud ✓✓✓✓✓✓✓
together_ai ✓✓✓✓
aleph_alpha ✓✓
ollama ✓✓✓
ollama_chat ✓✓✓
vllm ✓✓
azure ✓✓✓✓✓✓✓
  • "✓" indicates that the specified custom_llm_provider can raise the corresponding exception.
  • Empty cells indicate the lack of association or that the provider does not raise that particular exception type as indicated by the function.

For a deeper understanding of these exceptions, check out this implementation.

The ContextWindowExceededError is a sub-class of BadRequestError (formerly InvalidRequestError). It was introduced to provide more granularity for exception-handling scenarios. Please refer to this issue to learn more.
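
For example, the extra granularity makes a simple context-window fallback easy to write. A minimal sketch - the model names are only illustrative:

import litellm

messages = [{"role": "user", "content": "a very long prompt ..."}]

try:
    response = litellm.completion(model="gpt-3.5-turbo", messages=messages)
except litellm.ContextWindowExceededError:
    # the prompt didn't fit - retry on a model with a larger context window
    response = litellm.completion(model="gpt-4-turbo", messages=messages)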

Contributions to improve exception mapping are welcome.