Introduction to EvalGuard
EvalGuard gives every developer the tools to protect their Large Language Model (LLM) applications, and their users, from threats such as prompt injection, jailbreaks, sensitive data exposure, and more.
Who needs EvalGuard?
If you work in one of the following roles, this document may be of interest to you:
GenAI Application Developers
Corporate Chief Security Officers (CSOs)
GenAI Application Compliance Regulators
GenAI Capability Providers
Model compatibility
EvalGuard is model-agnostic and works with:
any hosted model provider (OpenAI, Anthropic, Cohere, etc.)
any open-source model
your own custom models
How it works
EvalGuard is available as a cloud-hosted Software as a Service (SaaS) or self-hosted product. It is built on top of our continuously evolving security intelligence platform and is designed to sit between your users and your generative AI applications.
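The sketch below illustrates this placement in the request flow: user input is screened before it reaches the model, and the model's output is screened before it returns to the user. It is a minimal illustration only, not EvalGuard's actual API; the guard check here is a naive placeholder, and all function names are assumptions for the example.

```python
# Illustrative sketch of a guard layer sitting between users and a
# generative AI application. The keyword check stands in for a call to a
# screening service such as EvalGuard; all names here are hypothetical.

def guard_check(text: str) -> bool:
    """Placeholder screen: returns False if the text looks suspicious."""
    suspicious = ["ignore previous instructions", "reveal your system prompt"]
    return not any(phrase in text.lower() for phrase in suspicious)

def call_model(prompt: str) -> str:
    """Stand-in for your hosted, open-source, or custom model."""
    return f"Model reply to: {prompt!r}"

def handle_user_message(user_input: str) -> str:
    # 1. Screen the incoming prompt before it reaches the model.
    if not guard_check(user_input):
        return "This request was blocked by the security policy."

    # 2. Forward the clean input to the generative AI application.
    answer = call_model(user_input)

    # 3. Screen the model's output before returning it to the user.
    return answer if guard_check(answer) else "[response withheld]"

print(handle_user_message("Summarise today's meeting notes."))
```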

Our security intelligence platform combines insights from public sources, data from the LLM developer community, our Red Team, and the latest LLM security research and techniques.
Our proprietary vulnerability database contains tens of millions of attack data points and is growing by roughly 100,000 entries per day.
You can start protecting your LLM applications in minutes by signing up and following our Quickstart guide.
Learn more
Learn more about the Dashboard
Experience a real-world toxic content generation attack in our Toxic Generation Attack tutorial
Experience a real-world prompt injection attack in our Prompt Injection tutorial