API Endpoints
Prompt Injection Detection
This evaluator checks for prompt injection attempt in the input and the contexts using Azure’s Content Safety API.
Env vars: AZURE_CONTENT_SAFETY_ENDPOINT, AZURE_CONTENT_SAFETY_KEY
Docs: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection
POST
/
azure
/
prompt_injection
/
evaluate
Body
application/json
List of entries to be evaluated, check the field type for the necessary keys
Evaluator settings, check the field type for what settings this evaluator supports
Optional environment variables to override the server ones
Response
200 - application/json
Available options:
processed
No description provided
If true then no prompt injection was detected, if false then a prompt injection was detected
Short human-readable description of the result