LangEvals
LangEvals is the standalone LLM evaluations framework that powers LangWatch evaluations.
LangEvals brings many evaluation APIs and open-source evaluators together under a single interface, so they can be used locally as a library.
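As a library, each evaluator can be called directly. A minimal sketch of what that looks like, assuming a Ragas answer-relevancy evaluator exposed as `langevals_ragas.answer_relevancy`; the module path, class names, and entry fields here are illustrative assumptions and may differ from the installed release:

```python
# Sketch: running one evaluator locally as a plain library call.
# The module path and Ragas* class names are assumptions about the
# per-evaluator package layout, not verified against the current release.
from langevals_ragas.answer_relevancy import (
    RagasAnswerRelevancyEntry,
    RagasAnswerRelevancyEvaluator,
)

evaluator = RagasAnswerRelevancyEvaluator()
result = evaluator.evaluate(
    RagasAnswerRelevancyEntry(
        input="What is the capital of France?",
        output="Paris is the capital of France.",
    )
)
print(result.status, result.score)  # e.g. "processed", 0.97
```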
It can be used in notebooks for exploration, in pytest for writing unit tests, or as a server API for live evaluations and guardrails. LangEvals is modular, with 20+ evaluators under the same interface, such as Ragas for RAG quality, OpenAI Moderation and Azure Jailbreak Detection for safety, and many others.
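In a pytest suite, the same evaluators can back ordinary test assertions, for example to guard an LLM answer against unsafe content. A minimal sketch, assuming an `expect`-style assertion helper in `langevals` and an OpenAI Moderation evaluator package; the import paths, helper, and `my_bot` stub are assumptions for illustration:

```python
# Sketch: guarding an LLM output inside a pytest unit test.
# The expect() helper and the evaluator module path are assumed
# for illustration; check the LangEvals docs for the exact API.
from langevals import expect
from langevals_openai.moderation import OpenAIModerationEvaluator


def my_bot(question: str) -> str:
    # Stand-in for your application code; replace with a real LLM call.
    return "You can reset it from the account settings page."


def test_answer_is_safe():
    question = "How do I reset my password?"
    answer = my_bot(question)
    # Fails the test if the moderation evaluator flags the answer.
    expect(input=question, output=answer).to_pass(OpenAIModerationEvaluator())
```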