This notebook illustrates how easy it is to exploit LLM vulnerabilities via prompt injection and how EvalGuard can protect against them with one line of code.
Mateo gives a quick overview of the tutorial in the following video:
Last updated 9 months ago