API Endpoints
Content Safety
This evaluator detects potentially unsafe content in text, including hate speech, self-harm, sexual content, and violence. It allows customization of the severity threshold and the specific categories to check.
Env vars: AZURE_CONTENT_SAFETY_ENDPOINT, AZURE_CONTENT_SAFETY_KEY
Docs: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text
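These variables hold the Azure Content Safety credentials the server uses by default. A minimal sketch of providing them to a Python-based server process before startup (endpoint and key values are placeholders):

```python
import os

# Placeholder values; substitute your own Azure Content Safety resource.
# These must be visible in the server's environment before it starts.
os.environ["AZURE_CONTENT_SAFETY_ENDPOINT"] = "https://<your-resource>.cognitiveservices.azure.com"
os.environ["AZURE_CONTENT_SAFETY_KEY"] = "<your-azure-content-safety-key>"
```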
POST /azure/content_safety/evaluate
Body
application/json
- List of entries to be evaluated; check the field type for the required keys.
- Evaluator settings; check the field type for the settings this evaluator supports.
- Optional environment variables that override the server's defaults.

A minimal request sketch in Python follows this list.
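The body field names (`data`, `settings`, `env`) and the setting keys (`severity_threshold`, `categories`) below are assumptions inferred from the field descriptions above; verify them against the actual request schema. The server address is hypothetical.

```python
import requests

# Hypothetical server address; point this at your own deployment.
BASE_URL = "http://localhost:8000"

payload = {
    # Entries to evaluate; the required keys depend on the field type.
    "data": [
        {
            "input": "What is the capital of France?",
            "output": "The capital of France is Paris.",
        }
    ],
    # Evaluator settings: the severity threshold and categories to check.
    "settings": {
        "severity_threshold": 4,  # assumed key name; Azure severities run 0-7
        "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
    },
    # Optional per-request override of the server's Azure credentials.
    "env": {
        "AZURE_CONTENT_SAFETY_ENDPOINT": "https://<resource>.cognitiveservices.azure.com",
        "AZURE_CONTENT_SAFETY_KEY": "<key>",
    },
}

resp = requests.post(f"{BASE_URL}/azure/content_safety/evaluate", json=payload)
```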
Response
200 - application/json
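The result schema isn't reproduced here; continuing the request sketch above, a 200 response carries a JSON body that can be inspected directly:

```python
# Continuing the request sketch above.
if resp.status_code == 200:
    print(resp.json())  # evaluation results as JSON
else:
    print(f"Request failed: {resp.status_code} {resp.text}")
```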