# Metrics Overview
AI Guard exposes an observability pipeline that captures classification activity, redaction actions, agent response times, and user sessions. Metrics flow from the SDK through the AI Guard service to a time-series database or directly to OneTrust AI Governance.
## Pipeline Architecture

```
┌──────────────┐  POST /metric   ┌──────────────┐    Export    ┌──────────────────────┐
│  SDK Client  │ ───────────────►│   AI Guard   │ ───────────► │   OneTrust AI Gov    │
│   (Python)   │                 │   Service    │              │  Discovery Platform  │
└──────────────┘                 └──────┬───────┘              └──────────────────────┘
                                        │
                         Auto-generated │
                         classification │
                               counters │
                                        │
                                 ┌──────▼───────┐     Flush    ┌──────────────────────┐
                                 │   Meters &   │ ───────────► │    OTLP Collector    │
                                 │  Aggregation │              │  (InfluxDB/Grafana)  │
                                 └──────────────┘              └──────────────────────┘
```
## How It Works

- **SDK submits metrics**: Your application calls `client.metric()` to record agent response times, user sessions, and redaction events
- **Auto-generated metrics**: The AI Guard service automatically counts classification matches per classifier (the `ai_guard.classification` meter)
- **Aggregation**: The service aggregates metrics internally using OpenTelemetry instrumentation
- **Export**: Aggregated metrics are flushed on a configurable interval to either the OneTrust Discovery Platform API or an OpenTelemetry Collector
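A minimal sketch of the submission flow from application code. The client class and the `metric()` signature below are assumptions for illustration only, not the actual SDK API; consult the SDK reference for the real names.

```python
# Hypothetical sketch of recording custom metrics from an application.
# AIGuardClient here is a stand-in: the real SDK client POSTs each
# metric to the AI Guard service's /metric endpoint.
import time


class AIGuardClient:
    """Stand-in for the real SDK client."""

    def __init__(self, agent_id: str, platform: str):
        self.agent_id = agent_id
        self.platform = platform
        self.submitted = []  # the real client sends these over HTTP

    def metric(self, name: str, value: float, **attributes):
        # Agent identity is injected automatically on every metric.
        attributes.update(agent_id=self.agent_id, platform=self.platform)
        self.submitted.append({"name": name, "value": value, **attributes})


client = AIGuardClient(agent_id="support-bot", platform="openai")

start = time.monotonic()
# ... invoke the LLM agent here ...
elapsed_ms = (time.monotonic() - start) * 1000

client.metric("agent.response_time", elapsed_ms, unit="ms")
client.metric("user.session", 1)
```

Classification counts need no such calls: the service generates the `ai_guard.classification` meter on its own as it processes text.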
## What Gets Tracked

| Metric | Source | What It Measures |
|---|---|---|
| Classification counts | Auto-generated | How often each classifier detects sensitive data |
| Redaction events | SDK (`client.metric()`) | How often data is redacted or blocked |
| Agent response time | SDK (`client.metric()`) | LLM agent latency in milliseconds |
| User sessions | SDK (`client.metric()`) | User interaction and session counts |
## Data Privacy

> **Important:** The metrics pipeline transmits only aggregated counts and statistics. Prompts, responses, and classified text content are never sent to OneTrust Cloud. The AI Guard service processes all text locally within your infrastructure.
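As an illustration of that guarantee, an exported record carries only a meter name, attributes, and a count; all field names below are hypothetical, not the actual wire format:

```json
{
  "meter": "ai_guard.classification",
  "classifier": "EMAIL_ADDRESS",
  "agent_id": "support-bot",
  "platform": "openai",
  "count": 17
}
```

Note that no matched text appears anywhere in the record, only the fact that a classifier fired and how often.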
## Agent Identity

In OneTrust AI Governance, a unique AI agent is represented by the combination of `agent_id` + `platform`. These attributes are required on all metrics and are automatically injected by the SDK client.
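A small sketch of how distinct agents fall out of that pair; the record values are illustrative:

```python
# Each unique (agent_id, platform) pair is one AI agent in AI Governance.
records = [
    {"agent_id": "support-bot", "platform": "openai"},
    {"agent_id": "support-bot", "platform": "bedrock"},
    {"agent_id": "support-bot", "platform": "openai"},  # duplicate pair
]

unique_agents = {(r["agent_id"], r["platform"]) for r in records}
# The same agent_id on two platforms counts as two distinct agents.
```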
## Export Modes

AI Guard supports two export modes:

| Mode | Destination | Use Case |
|---|---|---|
| OneTrust | Discovery Platform API | Production: metrics feed into AI Governance dashboards |
| OTLP | OpenTelemetry Collector | Development or custom observability stacks (InfluxDB, Grafana, etc.) |
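For the OTLP mode, a minimal OpenTelemetry Collector configuration might look like the following sketch. The endpoint and exporter choices are illustrative; in practice you would swap the `debug` exporter for an InfluxDB or Prometheus exporter from the Collector's contrib distribution:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:            # prints received metrics; replace for real storage
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]
```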
See Metrics Exporters for configuration details.
## What's Next?

- **Meter Definitions**: Detailed specifications for each meter
- **Metrics Exporters**: Configure OTLP and OneTrust export modes
- **Observability & Metrics (SDK)**: Send metrics from your application