Metrics Overview

AI Guard exposes an observability pipeline that captures classification activity, redaction actions, agent response times, and user sessions. Metrics flow from the SDK through the AI Guard service to a time-series database or directly to OneTrust AI Governance.

Pipeline Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    POST /metric     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     Export      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  SDK Client │──────────────────►  β”‚  AI Guard    │───────────────► β”‚  OneTrust AI Gov    β”‚
β”‚  (Python)   β”‚                     β”‚  Service     β”‚                 β”‚  Discovery Platform β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                     β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                           β”‚
                             Auto-generatedβ”‚
                             classificationβ”‚
                             counters      β”‚
                                           β”‚
                                    β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”     Flush       β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                                    β”‚  Meters &    │───────────────► β”‚  OTLP Collector     β”‚
                                    β”‚  Aggregation β”‚                 β”‚  (InfluxDB/Grafana) β”‚
                                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

How It Works

  1. SDK submits metrics β€” Your application calls client.metric() to record agent response times, user sessions, and redaction events
  2. Auto-generated metrics β€” The AI Guard service automatically counts classification matches per classifier (the ai_guard.classification meter)
  3. Aggregation β€” The service aggregates metrics internally using OpenTelemetry instrumentation
  4. Export β€” Aggregated metrics are flushed on a configurable interval to either the OneTrust Discovery Platform API or an OpenTelemetry Collector
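The aggregation and export steps (3 and 4) can be sketched as a toy in-memory counter. This is a hypothetical illustration of the flow, not AI Guard source code: MetricAggregator and its methods are invented names, and the real service aggregates via OpenTelemetry instrumentation.

```python
from collections import Counter

class MetricAggregator:
    """Invented sketch of service-side aggregation; not the real service."""

    def __init__(self, flush_interval_s: float = 60.0):
        # The flush interval is configurable on the real service;
        # it is stored here only to mirror that setting.
        self.flush_interval_s = flush_interval_s
        self._counts: Counter = Counter()

    def record(self, meter: str, value: int = 1) -> None:
        # Accumulate counts in memory; nothing is exported per event.
        self._counts[meter] += value

    def flush(self) -> dict:
        # On each interval, the aggregated snapshot is handed to an
        # exporter (Discovery Platform API or an OTLP Collector).
        snapshot = dict(self._counts)
        self._counts.clear()
        return snapshot

agg = MetricAggregator()
agg.record("ai_guard.classification", 3)  # auto-generated counter
agg.record("ai_guard.classification", 2)
batch = agg.flush()  # {"ai_guard.classification": 5}
```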

What Gets Tracked

| Metric                | Source                  | What It Measures                                 |
|-----------------------|-------------------------|--------------------------------------------------|
| Classification counts | Auto-generated          | How often each classifier detects sensitive data |
| Redaction events      | SDK (client.metric())   | How often data is redacted or blocked            |
| Agent response time   | SDK (client.metric())   | LLM agent latency in milliseconds                |
| User sessions         | SDK (client.metric())   | User interaction and session counts              |
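The three SDK-sourced rows are recorded through client.metric(). The exact SDK signature is not shown on this page, so the sketch below assumes a hypothetical (name, value, **attributes) shape and uses a stub client so it runs standalone; the metric names and attribute keys are illustrative, not the SDK's reserved names.

```python
# Stub standing in for the AI Guard SDK client so the example is
# runnable; the real client.metric() signature may differ.
class StubClient:
    def __init__(self):
        self.submitted = []

    def metric(self, name: str, value: float, **attributes) -> None:
        # The real SDK would POST this payload to the AI Guard service.
        self.submitted.append({"name": name, "value": value, **attributes})

client = StubClient()

client.metric("redaction_events", 1, action="redact")  # redaction/block event
client.metric("agent_response_time", 412.0)            # LLM latency in ms
client.metric("user_sessions", 1)                      # session count
```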

Data Privacy

Important

The metrics pipeline transmits only aggregated counts and statistics. Prompts, responses, and classified text content are never sent to OneTrust Cloud. The AI Guard service processes all text locally within your infrastructure.
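As a concrete picture of this rule, an exported record might look like the dictionary below. The field names are invented for illustration, not the actual wire format; the point is that only aggregates and labels leave your infrastructure.

```python
# Illustrative only: field names are invented, not the real wire format.
batch = {
    "meter": "ai_guard.classification",
    "classifier": "US_SSN",  # which classifier matched
    "count": 17,             # aggregate over the flush interval
    # no "prompt", "response", or matched-text fields are ever included
}
```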

Agent Identity

In OneTrust AI Governance, a unique AI agent is represented by the combination of agent_id + platform. These attributes are required on all metrics and are automatically injected by the SDK client.
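A minimal sketch of that identity injection, using attribute names that mirror the prose (agent_id, platform). The with_identity helper and the sample values are invented for illustration; in practice the SDK client attaches these attributes automatically.

```python
def with_identity(payload: dict, agent_id: str, platform: str) -> dict:
    # agent_id + platform together identify one unique AI agent in
    # OneTrust AI Governance; the SDK injects them on every metric.
    return {**payload, "agent_id": agent_id, "platform": platform}

metric = with_identity(
    {"name": "agent_response_time", "value": 412.0},
    agent_id="support-bot",  # hypothetical values
    platform="openai",
)
```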

Export Modes

AI Guard supports two export modes:

| Mode     | Destination             | Use Case                                                             |
|----------|-------------------------|----------------------------------------------------------------------|
| OneTrust | Discovery Platform API  | Production β€” metrics feed into AI Governance dashboards              |
| OTLP     | OpenTelemetry Collector | Development or custom observability stacks (InfluxDB, Grafana, etc.) |
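The mode-to-destination mapping amounts to a simple dispatch, sketched below as a hypothetical illustration (select_exporter is an invented name, not an SDK function); the actual configuration keys live in the Metrics Exporters documentation.

```python
# Invented helper illustrating the two export modes; not SDK API.
def select_exporter(mode: str) -> str:
    destinations = {
        "onetrust": "Discovery Platform API",  # production
        "otlp": "OpenTelemetry Collector",     # dev / custom stacks
    }
    try:
        return destinations[mode.lower()]
    except KeyError:
        raise ValueError(f"unknown export mode: {mode}") from None
```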

See Metrics Exporters for configuration details.

What's Next?