
Bedrock Guardrails

LiteLLM supports Bedrock guardrails via the Bedrock ApplyGuardrail API.
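
Under the hood, each guardrail check maps onto an ApplyGuardrail call. A minimal sketch of a direct call with boto3, assuming a guardrail already exists (the guardrail ID, version, and region below are placeholders):

import boto3

# Bedrock guardrails are invoked through the bedrock-runtime client
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.apply_guardrail(
    guardrailIdentifier="ff6ujrregl1q",  # placeholder: your guardrail ID
    guardrailVersion="DRAFT",            # placeholder: your guardrail version
    source="INPUT",                      # "INPUT" for prompts, "OUTPUT" for completions
    content=[{"text": {"text": "hi my email is ishaan@berri.ai"}}],
)

# "GUARDRAIL_INTERVENED" means the guardrail blocked or masked the content;
# "NONE" means it passed through unchanged
print(response["action"])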

Quick Start

1. Define Guardrails on your LiteLLM config.yaml

Define your guardrails under the guardrails section:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "bedrock-pre-guard"
    litellm_params:
      guardrail: bedrock                 # supported values: "aporia", "bedrock", "lakera"
      mode: "during_call"
      guardrailIdentifier: ff6ujrregl1q  # your guardrail ID on bedrock
      guardrailVersion: "DRAFT"          # your guardrail version on bedrock

Supported values for mode

  • pre_call: runs before the LLM call, on input
  • post_call: runs after the LLM call, on input and output
  • during_call: runs during the LLM call, on input. Same as pre_call, but runs in parallel with the LLM call; the response is not returned until the guardrail check completes.

2. Start LiteLLM Gateway

litellm --config config.yaml --detailed_debug

3. Test request

Expect this to fail, since ishaan@berri.ai in the request is PII:

curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "guardrails": ["bedrock-pre-guard"]
  }'
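
The same request through the OpenAI Python SDK pointed at the gateway, as a sketch (the virtual key is the placeholder from the curl example; guardrails is a LiteLLM-specific field, so it is passed via extra_body):

import openai

client = openai.OpenAI(
    api_key="sk-npnwjPQciVRok5yNZgKmFQ",  # placeholder LiteLLM virtual key
    base_url="http://localhost:4000",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
    # non-OpenAI parameters go through extra_body
    extra_body={"guardrails": ["bedrock-pre-guard"]},
)
print(response.choices[0].message.content)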

Expected response on failure

{
  "error": {
    "message": {
      "error": "Violated guardrail policy",
      "bedrock_guardrail_response": {
        "action": "GUARDRAIL_INTERVENED",
        "assessments": [
          {
            "topicPolicy": {
              "topics": [
                {
                  "action": "BLOCKED",
                  "name": "Coffee",
                  "type": "DENY"
                }
              ]
            }
          }
        ],
        "blockedResponse": "Sorry, the model cannot answer this question. coffee guardrail applied ",
        "output": [
          {
            "text": "Sorry, the model cannot answer this question. coffee guardrail applied "
          }
        ],
        "outputs": [
          {
            "text": "Sorry, the model cannot answer this question. coffee guardrail applied "
          }
        ],
        "usage": {
          "contentPolicyUnits": 0,
          "contextualGroundingPolicyUnits": 0,
          "sensitiveInformationPolicyFreeUnits": 0,
          "sensitiveInformationPolicyUnits": 0,
          "topicPolicyUnits": 1,
          "wordPolicyUnits": 0
        }
      }
    },
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
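
Since the gateway surfaces this as an HTTP 400, a client can catch the error and inspect the guardrail assessment. A sketch with the OpenAI SDK, assuming the error body has the shape shown above:

import openai

client = openai.OpenAI(
    api_key="sk-npnwjPQciVRok5yNZgKmFQ",  # placeholder LiteLLM virtual key
    base_url="http://localhost:4000",
)

try:
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi my email is ishaan@berri.ai"}],
        extra_body={"guardrails": ["bedrock-pre-guard"]},
    )
except openai.BadRequestError as e:
    # the nested error body mirrors the failure response shown above
    detail = e.response.json()["error"]["message"]
    guardrail = detail.get("bedrock_guardrail_response", {})
    if guardrail.get("action") == "GUARDRAIL_INTERVENED":
        print("Blocked by Bedrock guardrail:", guardrail.get("blockedResponse"))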

PII Masking with Bedrock Guardrails

Bedrock guardrails support PII detection and masking. To enable this feature, you need to:

  1. Set mode to pre_call to run the guardrail check before the LLM call
  2. Enable masking by setting mask_request_content and/or mask_response_content to true

Here's how to configure it in your config.yaml:

config.yaml
guardrails:
  - guardrail_name: "bedrock-pre-guard"
    litellm_params:
      guardrail: bedrock
      mode: "pre_call"                   # Important: must use pre_call mode for masking
      guardrailIdentifier: wf0hkdb5x07f
      guardrailVersion: "DRAFT"
      mask_request_content: true         # Enable masking in user requests
      mask_response_content: true        # Enable masking in model responses

With this configuration, when the Bedrock guardrail intervenes, LiteLLM reads the masked output from the guardrail and sends it to the model.

Example Usage

When enabled, PII will be automatically masked in the text. For example, if a user sends:

My email is john.doe@example.com and my phone number is 555-123-4567

The text sent to the model might be masked as:

My email is [EMAIL] and my phone number is [PHONE_NUMBER]

This helps protect sensitive information while still allowing the model to understand the context of the request.
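
To see exactly what masked text a guardrail produces, you can also call the ApplyGuardrail API directly and read the rewritten text from outputs. A sketch with boto3, assuming the guardrail's sensitive-information policy is set to anonymize (the guardrail ID, version, and region are placeholders):

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.apply_guardrail(
    guardrailIdentifier="wf0hkdb5x07f",  # placeholder: your guardrail ID
    guardrailVersion="DRAFT",            # placeholder: your guardrail version
    source="INPUT",
    content=[{"text": {"text": "My email is john.doe@example.com and my phone number is 555-123-4567"}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # prints the text with PII replaced by mask placeholders
    print(response["outputs"][0]["text"])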