API Endpoints
Prompt Injection Detection
This evaluator checks for prompt injection attempt in the input and the contexts using Azure’s Content Safety API.
Env vars: AZURE_CONTENT_SAFETY_ENDPOINT, AZURE_CONTENT_SAFETY_KEY
Docs: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection
POST
Body
application/json
List of entries to be evaluated, check the field type for the necessary keys
Optional environment variables to override the server ones
Evaluator settings, check the field type for what settings this evaluator supports
Response
200 - application/json
No description provided
Short human-readable description of the result
If true then no prompt injection was detected, if false then a prompt injection was detected
Available options:
processed