Guardrails
Guardrails are security policies that inspect LLM requests and responses to detect and block harmful, policy-violating, or inappropriate content before it reaches the model or the user. You can apply guardrails to the request phase, the response phase, or both.
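To make the request and response phases concrete, the following is a minimal sketch of how a guardrail check might wrap an LLM call. The rule list, the function names, and the `call_model` callback are illustrative assumptions for this sketch, not part of any specific gateway API.

```python
import re

# Hypothetical deny-list rules a guardrail policy might enforce (illustrative only).
BLOCKED_PATTERNS = [
    re.compile(r"(?i)credit card number"),
    re.compile(r"(?i)build a weapon"),
]

def violates_policy(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(p.search(text) for p in BLOCKED_PATTERNS)

def guarded_llm_call(prompt: str, call_model) -> str:
    """Apply guardrail checks in both the request and response phases."""
    # Request phase: inspect the prompt before it reaches the model.
    if violates_policy(prompt):
        return "Request blocked: the prompt violates content policy."

    response = call_model(prompt)

    # Response phase: inspect the model output before it reaches the user.
    if violates_policy(response):
        return "Response blocked: the model output violates content policy."

    return response

# Example usage with a stub model function.
print(guarded_llm_call("Summarize this article.", lambda p: "Here is a summary..."))
```

In practice, a gateway evaluates these checks as configurable policies rather than inline code, but the control flow is the same: block in the request phase before the model is called, or in the response phase before the output is returned.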
To learn more about guardrails, see the following topic.
To set up guardrails, check out the following guides.
To track guardrails and content safety, see the following guide.