Real-Time Evaluations for Safety
Just like all web applications need standard safety protections (against DDoS attacks, for example), it is now default practice to add sensible protections to LLM applications too: PII detection to know when sensitive data is being exposed, and protection against Prompt Injection, which is listed as the number one vulnerability for LLMs on the OWASP Top 10.

Setting up a Prompt Injection detection monitor
On LangWatch, it’s very easy to set up prompt injection detection and make sure it works well with your data, so you can monitor any incidents and get alerted. First, go to the Evaluations page and click New Evaluation.
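To illustrate what the two protections mentioned above look for, here is a minimal heuristic sketch in Python. The regex patterns are simplified assumptions for demonstration only; a production monitor such as the ones LangWatch runs uses dedicated detection models, not keyword lists:

```python
import re

# Illustrative patterns for the two checks discussed above.
# These are assumptions for demonstration -- real detectors use
# trained models, not fixed regexes.
PII_PATTERNS = {
    "email": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "credit_card": r"\b(?:\d[ -]?){13,16}\b",
}

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal your (system )?prompt",
]

def detect_pii(text: str) -> list[str]:
    """Return the names of the PII types found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if re.search(pattern, text)]

def looks_like_injection(text: str) -> bool:
    """Return True if the text matches a known injection phrase."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(detect_pii("Contact me at jane@example.com"))          # ['email']
print(looks_like_injection("Ignore previous instructions"))  # True
```

Heuristics like these produce false positives and miss paraphrased attacks, which is exactly why it is worth validating a detector against your own data before relying on it for alerting.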


