API Endpoints
Content Safety
This evaluator detects potentially unsafe content in text, including hate speech, self-harm, sexual content, and violence. It allows customization of the severity threshold and the specific categories to check.
Env vars: AZURE_CONTENT_SAFETY_ENDPOINT, AZURE_CONTENT_SAFETY_KEY
Docs: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text
POST /azure/content_safety/evaluate
Body
application/json
data
object[]
required
List of entries to be evaluated; check the field type for the required keys.
settings
object | null
Evaluator settings; check the field type for the settings this evaluator supports.
env
object | null
Optional environment variables that override the ones configured on the server.
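A minimal request sketch in Python. The base URL, the `output` key inside each entry, and the decision to pass `settings` as null are assumptions for illustration and are not confirmed by this page; check the field types for the exact keys your deployment expects.

```python
import os
import requests

# Assumed base URL of the evaluators server; adjust to your deployment.
BASE_URL = "http://localhost:5562"

payload = {
    # Each entry's keys depend on the evaluator's field type;
    # "output" is a hypothetical example.
    "data": [{"output": "Some text to be checked for unsafe content."}],
    # No custom settings: the evaluator's defaults apply.
    "settings": None,
    # Optionally override the server-side environment variables.
    "env": {
        "AZURE_CONTENT_SAFETY_ENDPOINT": os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"],
        "AZURE_CONTENT_SAFETY_KEY": os.environ["AZURE_CONTENT_SAFETY_KEY"],
    },
}

response = requests.post(f"{BASE_URL}/azure/content_safety/evaluate", json=payload)
response.raise_for_status()
print(response.json())
```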
Response
200 - application/json
status
enum<string>
default: processed
Available options: processed
score
number
required
The severity level of the detected content, from 0 to 7. A higher score indicates higher severity.
passed
boolean | null
details
string | null
Short human-readable description of the result
cost
object | null
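Continuing the request sketch above, a hedged example of reading the result fields. It assumes the endpoint returns one result object per submitted entry as a JSON list, which is not confirmed by this page.

```python
results = response.json()

# Assumed shape: a list with one result per entry in the request's "data".
for result in results:
    if result["status"] == "processed":
        severity = result["score"]       # 0-7, higher means more severe
        passed = result.get("passed")    # may be null
        details = result.get("details")  # short human-readable description
        print(f"severity={severity} passed={passed} details={details}")
```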