Quickstart
EvalGuard protects your Large Language Model (LLM) use cases by helping your developers detect security risks to your LLM application and its users in real time.
Follow the steps below to detect your first prompt injection with EvalGuard.
Detect a prompt injection
The example code below should trigger EvalGuard's prompt injection detection.
Copy and paste it into a file on your local machine and run it from a terminal.
Python
import requests

# Get the prompt from the front-end user
prompt_from_user = "put your prompt here"

# Send the prompt to EvalGuard's API Gateway
url = "https://app.evalguard.io/prod-api/v1/guard"
payload = {
    "messages": [
        {
            "role": "user",
            "content": prompt_from_user
        }
    ]
}
headers = {
    "Content-Type": "application/json"
}

response = requests.post(url, json=payload, headers=headers)
result = response.json()
print(result)

# If EvalGuard finds a prompt injection or jailbreak, do not call the LLM!
if result["flagged"]:
    print("EvalGuard identified a malicious prompt.")
else:
    # Send the user's prompt to your LLM of choice.
    print("The prompt can be forwarded to the LLM.")
Learn more
Working with the EvalGuard API is as simple as making an HTTP request.
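In practice you will usually want that request wrapped in a small reusable helper. The sketch below shows one way to do it, assuming the same endpoint and the top-level "flagged" boolean from the quickstart response; the is_prompt_flagged function name and the timeout value are our own illustrative choices, not part of the EvalGuard API.
Python
import requests

EVALGUARD_URL = "https://app.evalguard.io/prod-api/v1/guard"

def is_prompt_flagged(prompt: str, timeout: float = 10.0) -> bool:
    """Return True if EvalGuard flags the prompt as a likely injection or jailbreak.

    Assumes the response body contains a top-level "flagged" boolean,
    as in the quickstart example above.
    """
    payload = {"messages": [{"role": "user", "content": prompt}]}
    headers = {"Content-Type": "application/json"}
    response = requests.post(EVALGUARD_URL, json=payload, headers=headers, timeout=timeout)
    # Surface HTTP errors instead of trying to parse a failed response
    response.raise_for_status()
    return bool(response.json()["flagged"])

if __name__ == "__main__":
    user_prompt = "put your prompt here"
    if is_prompt_flagged(user_prompt):
        print("EvalGuard identified a malicious prompt.")
    else:
        print("The prompt can be forwarded to the LLM.")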
Tutorials
To help you get more out of EvalGuard, we've created tutorials that walk you through some common use cases.
Prompt Injection: Detect a prompt injection attack with an example chat application that loads an article to provide context for answering the user's question.
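If you'd like a feel for that pattern before opening the tutorial, here is a minimal sketch in the same style as the quickstart. It assumes the endpoint and "flagged" field shown above; load_article, answer_question, and the call_llm stub are hypothetical names for illustration, not the tutorial's actual code.
Python
import requests

EVALGUARD_URL = "https://app.evalguard.io/prod-api/v1/guard"

def call_llm(prompt: str) -> str:
    # Stand-in for your LLM client of choice; replace with a real call.
    return f"(LLM answer to: {prompt[:60]}...)"

def load_article(path: str) -> str:
    # The chat app loads an article to use as context for answering questions.
    with open(path, encoding="utf-8") as f:
        return f.read()

def answer_question(article: str, question: str) -> str:
    # Screen the untrusted user question with EvalGuard before it reaches the LLM.
    response = requests.post(
        EVALGUARD_URL,
        json={"messages": [{"role": "user", "content": question}]},
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    if response.json()["flagged"]:
        return "Sorry, that request was blocked as a likely prompt injection."
    return call_llm(f"Use this article as context:\n{article}\n\nQuestion: {question}")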
Guides
To help you learn more about the security risks EvalGuard protects against, we've created some guides.
Other Resources
If you're still looking for more, you can book a demo.