# Frequently Asked Questions

## General

### What is AI Guard?
AI Guard is a comprehensive classification tool by OneTrust that protects AI systems by detecting and managing sensitive data (PII, credentials, etc.) in real time. It sits between your users and AI tools as an inspection layer, classifying text and applying redaction or blocking policies.
### Is AI Guard production-ready?

It is optimized for development and testing workloads. As of this release, it is not recommended for the large classification volumes typically seen in externally facing AI applications or agents.
### Does OneTrust collect my prompts or responses?
No. The AI Guard service processes all text locally within your infrastructure. Only aggregated classification metrics (counts and statistics) are sent to OneTrust AI Governance Cloud. Prompts, responses, and classified text content never leave your environment.
### What AI platforms are supported?
AI Guard supports any Python-based GenAI application. The SDK tracks metrics by platform for the following:
| Platform | Identifier |
|---|---|
| Amazon Bedrock | AMAZON_BEDROCK |
| Amazon SageMaker | AMAZON_SAGEMAKER |
| Azure AI Foundry | AZURE_FOUNDRY |
| Databricks | DATABRICKS |
| Google Cloud Vertex AI | GCP_VERTEX |
## SDK

### What Python version is required?
Python 3.13 or higher is required.
### Can I use AI Guard with a non-Python application?
The SDK is currently Python-only. However, the AI Guard service exposes a standard REST API that can be called from any language. See the API Reference for endpoint details and request/response formats.
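Because the service speaks plain HTTP, any language's standard HTTP client will do. Below is a minimal Python sketch using only the standard library; note that the `/classify` path, bearer-token header, and JSON payload shape are illustrative assumptions, not the documented contract (consult the API Reference for the real endpoint and formats):

```python
import json
import urllib.request

# Hypothetical endpoint path and payload shape -- shown only to
# illustrate that any HTTP-capable language can call the service.
req = urllib.request.Request(
    "https://ai-guard.internal:4443/classify",  # assumed path
    data=json.dumps({"text": "My SSN is 123-45-6789"}).encode(),
    headers={
        "Authorization": "Bearer <token>",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # send against a live service
```

The same request translates directly to `fetch` in TypeScript, `net/http` in Go, and so on.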
### How do I pass my API key securely?
Use environment variables instead of hard-coding keys:
```python
import os

from ai_guard import AIGuardClient
from ai_guard.api import AIPlatform

client = AIGuardClient(
    os.environ["AI_GUARD_URL"],
    token=os.environ["AI_GUARD_TOKEN"],
    agent_id="my-agent",
    platform=AIPlatform.AMAZON_BEDROCK,
)
```

### What happens if the AI Guard service is unavailable?
The SDK will raise a `ConnectionError`. Your application should handle this gracefully, for example by passing text through unclassified or queuing it for retry.
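One way to fail open is to wrap the classification call in a small helper. In this sketch, `classify` stands in for whatever SDK method your application invokes (the exact call is an assumption, not part of the SDK):

```python
from typing import Callable

def classify_with_fallback(classify: Callable[[str], str], text: str) -> str:
    """Run a classification callable, passing text through unclassified
    if the AI Guard service cannot be reached."""
    try:
        return classify(text)
    except ConnectionError:
        # Service unreachable: fail open and return the raw text.
        # Depending on your risk posture you may instead prefer to fail
        # closed (re-raise or return "") or queue the text for retry.
        return text
```

Whether to fail open or fail closed is a policy decision; failing closed is safer when the text may reach an external model.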
### Can I use AI Guard with streaming LLM responses?

Yes. The `ClassificationStream` processes text incrementally with concurrent classification. It accepts any Python iterable of strings as input, making it compatible with streaming APIs from AWS Bedrock, Azure, and other providers. See Streaming Classification.
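For example, a small generator can adapt a provider's event stream into the iterable of strings that `ClassificationStream` expects. The event shape below assumes Amazon Bedrock's ConverseStream response (verify against your boto3 version), and the commented-out `ClassificationStream` call is a sketch, not the exact constructor signature:

```python
def bedrock_text_chunks(event_stream):
    """Yield plain text chunks from an Amazon Bedrock ConverseStream
    response (assumed event shape; check the Bedrock Runtime API docs)."""
    for event in event_stream:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            yield delta["text"]

# stream = ClassificationStream(bedrock_text_chunks(response["stream"]), ...)
```

Non-text events (metadata, stop markers) are simply skipped, so the classifier only ever sees strings.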
## Classification

### How many classifiers does AI Guard support?
AI Guard supports 300+ system classifiers out of the box, covering PII patterns, credentials, financial data, healthcare information, and more.
### Can I create custom classifiers?

Custom classification profiles (self-created profiles) are not currently supported for AI Guard operations. Only OneTrust system-defined profiles can be used. You can, however, select specific classifiers by code using `ClassifierDescriptionCodes`.
### What does the confidence score mean?

The confidence score (0–100) indicates how certain the classifier is about the match. Higher scores indicate greater confidence. The minimum threshold is configured on the server via `classification.min-allowed-likelihood` (default: `LIKELY`).
### Can I change which classifiers are used without redeploying?

Yes. Classification profiles are managed in the OneTrust admin console. The AI Guard service caches profiles and refreshes them periodically. You can also specify different profiles per request using `ClassifierDescriptionProfile`.
## Redaction

### What's the difference between redact and block?
| Action | Behavior |
|---|---|
| Redact | Each character of the matched text is replaced with a specified character (e.g., *) |
| Block | The entire text is rejected and replaced with an empty string |
Block always takes priority: if any match triggers a block action, the entire text is blocked regardless of other actions.
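The precedence rule can be illustrated with a small sketch. This is not the SDK's implementation, only the semantics described above, with the whole string redacted for brevity rather than individual matches:

```python
def apply_policy(text: str, triggered_actions: list[str], redactor: str = "*") -> str:
    """Illustrative sketch of redact/block precedence.

    `triggered_actions` is the list of actions fired by classifier
    matches (a simplification of the real match structure)."""
    if "block" in triggered_actions:
        # Block wins: the entire text becomes an empty string,
        # regardless of any redact actions that also matched.
        return ""
    if "redact" in triggered_actions:
        # Redaction replaces each character with the redactor character.
        return redactor * len(text)
    return text
```

The key point is the early return: a single block match short-circuits all redaction.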
### Can I use different redaction policies for user input vs. agent output?
Yes. Create separate RedactPolicy instances and apply them based on the context:
```python
user_policy = RedactPolicy(
    actions=[RedactAction(kind=RedactKind.BLOCK, classifier="US_SSN")],
    default=RedactKind.REDACT,
    redactor="*",
)

agent_policy = RedactPolicy(
    actions=[RedactAction(kind=RedactKind.REDACT, classifier="US_PHONE_NUMBER")],
    default=RedactKind.NONE,
    redactor="#",
)
```

## Deployment
### Where does AI Guard run?
The AI Guard classification service runs on-premises within your infrastructure, deployed as a Docker container or Kubernetes pod on the OneTrust Light Worker Node. Only aggregated metrics are sent to OneTrust Cloud.
### What ports need to be open?
| Port | Direction | Purpose |
|---|---|---|
| 4443 | Inbound | SDK traffic to AI Guard service |
| 443 | Outbound | Token validation to OneTrust tenant |
| 8080 | Internal | Metrics publishing and classification profiles (Kubernetes only) |
### Can I run AI Guard without TLS?

Yes, by omitting the `tls` section from the config file or passing `--no-tls`. However, this is not recommended for production. All sensitive data should be transmitted over encrypted connections.
### What container architectures are supported?

The Docker image supports both `linux/amd64` and `linux/arm64` architectures.
## Metrics & Observability

### What metrics does AI Guard track?
| Meter | What It Measures |
|---|---|
| `ai_guard.classification` | Classifier match counts (auto-generated) |
| `ai_guard.redact` | Redaction and block event counts |
| `ai_guard.agent` | LLM agent response time (histogram) |
| `ai_guard.user` | User session counts |
### How often are metrics exported?

The default export interval is 1 hour (3600 seconds) for the OneTrust exporter. This is configurable via `metrics.exporter.interval`.
### Can I use my own monitoring stack?
Yes. Use the OTLP exporter to send metrics to any OpenTelemetry-compatible collector (e.g., InfluxDB, Grafana, Datadog, Prometheus).
### Why are metrics not appearing in AI Governance?
Common causes:
- The `metrics` section is missing from the config (metrics disabled)
- The export interval hasn't elapsed yet (default: 1 hour)
- The `datadiscovery-onprem-agent` is not reachable
- Export retries were exhausted (check service logs)
See Troubleshooting for detailed diagnostic steps.
## What's Next?
- AI Guard Overview – Return to the main overview
- Getting Started – Set up AI Guard from scratch
- Troubleshooting – Detailed diagnostic guides