API Endpoints
Prompt Injection Detection
This evaluator checks for prompt injection attempt in the input and the contexts using Azure’s Content Safety API.
Env vars: AZURE_CONTENT_SAFETY_ENDPOINT, AZURE_CONTENT_SAFETY_KEY
Docs: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection
POST
/
azure
/
prompt_injection
/
evaluate
Body
application/json
data
object[]
requiredList of entries to be evaluated, check the field type for the necessary keys
settings
object | null
Evaluator settings, check the field type for what settings this evaluator supports
env
object | null
Optional environment variables to override the server ones
Response
200 - application/json
status
enum<string>
default: processedAvailable options:
processed
score
number
requiredNo description provided
passed
boolean | null
If true then no prompt injection was detected, if false then a prompt injection was detected
details
string | null
Short human-readable description of the result
cost
object | null