Observability & Metrics

AI Guard exposes an observability pipeline that captures classification activity, redaction actions, agent response times, and user sessions. Metrics flow from the SDK through the AI Guard service to OneTrust AI Governance for compliance monitoring and dashboards.

📘

Metrics Pipeline

The OneTrust AI Governance Cloud acts as a control plane. It does not collect prompts or responses. Only aggregated classification metrics are sent from the Light Worker Node to AI Governance at regular intervals.
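The "aggregated counts at intervals" idea can be pictured with a small sketch. This is purely illustrative of the batching behavior, not the actual Light Worker Node implementation; `MetricsAggregator` and `flush` are hypothetical names:

```python
from collections import Counter

class MetricsAggregator:
    """Accumulate classification counts locally and flush them in
    batches, so raw prompts and responses never leave the node."""

    def __init__(self, flush):
        self._counts = Counter()
        self._flush = flush  # callable receiving {classifier: count}

    def record(self, classifier: str) -> None:
        # Only a per-classifier count is kept, never the text itself.
        self._counts[classifier] += 1

    def flush(self) -> None:
        # Called on a timer in a real node; sends the batch and resets.
        if self._counts:
            self._flush(dict(self._counts))
            self._counts.clear()
```

A real node would call `flush()` on a schedule; the key property is that only counts, not content, cross the boundary to AI Governance.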

Sending Metrics

Use client.metric() to record metric events:

from ai_guard import AIGuardClient
from ai_guard.api import AIPlatform, MetricsEvent, MetricsEventMeter

client = AIGuardClient(
    "https://ai-guard.example.com:4443",
    token="your-api-key",
    agent_id="my-agent",
    platform=AIPlatform.AMAZON_BEDROCK,
)

client.metric(MetricsEvent(
    attributes={"new_session": "true"},
    meter=MetricsEventMeter(name="ai_guard.user", value="1"),
))
📘

Automatic Attributes

The agent_id and platform are injected automatically by the client into all metric events. You do not need to include them in the attributes dictionary.
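The effect of auto-injection can be sketched as a simple dictionary merge. This is an illustration of the behavior only, not the SDK's actual implementation; `build_attributes` is a hypothetical name:

```python
# Illustrative sketch: auto-injected attributes are combined with the
# caller-supplied attributes dictionary. The real merge happens inside
# AIGuardClient before the event is sent.
def build_attributes(agent_id: str, platform: str, user_attrs: dict) -> dict:
    return {"agent_id": agent_id, "platform": platform, **user_attrs}

attrs = build_attributes("my-agent", "AMAZON_BEDROCK", {"new_session": "true"})
# attrs now carries agent_id and platform alongside new_session
```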

Available Meters

ai_guard.agent — Agent Response Time

Record the response time of your LLM agent in milliseconds:

client.metric(MetricsEvent(
    attributes={},
    meter=MetricsEventMeter(name="ai_guard.agent", value="1.234"),
))
| Attribute | Required | Description |
| --- | --- | --- |
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |

ai_guard.user — User Session

Record a user interaction or session event:

client.metric(MetricsEvent(
    attributes={"new_session": "true"},
    meter=MetricsEventMeter(name="ai_guard.user", value="1"),
))
| Attribute | Required | Description |
| --- | --- | --- |
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |
| new_session | Yes | Whether this is a new session ("true" or "false") |

ai_guard.redact — Redaction Event

Record a redaction or block event:

client.metric(MetricsEvent(
    attributes={"action": "redact", "actor": "user"},
    meter=MetricsEventMeter(name="ai_guard.redact", value="1"),
))
| Attribute | Required | Description |
| --- | --- | --- |
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |
| action | Yes | "redact" or "block" |
| actor | Yes | Source of the classified text ("user" or "agent") |

ai_guard.classification — Classification Count

📘

Automatic Meter

This meter is generated automatically by the AI Guard service for each classifier match in a classification response. It cannot be submitted via the SDK.

| Attribute | Source | Description |
| --- | --- | --- |
| agent_id | From request context | Unique agent identifier |
| platform | From request context | AI platform identifier |
| actor | From request context | "user" or "agent" |
| classifier | Auto-set by service | The classifier that matched |

Meter Types

| Meter | Type | Value Format |
| --- | --- | --- |
| ai_guard.agent | Histogram | Response time in milliseconds (decimal string) |
| ai_guard.user | Counter | "1" |
| ai_guard.redact | Counter | "1" |
| ai_guard.classification | Counter | "1" (auto-generated) |
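Because ai_guard.agent is a histogram of elapsed milliseconds, response-time measurement lends itself to a small context manager. This is a hypothetical helper, not part of the SDK; `record` is any callable that submits the formatted value (for example, a closure over client.metric):

```python
import time
from contextlib import contextmanager
from typing import Callable

@contextmanager
def timed_agent_call(record: Callable[[str], None]):
    """Time the enclosed block and pass the elapsed milliseconds, as a
    decimal string (the format ai_guard.agent expects), to `record`."""
    start = time.perf_counter()  # monotonic clock, suitable for timing
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        record(f"{elapsed_ms:.3f}")
```

With the SDK, `record` could be a lambda that wraps the value in a MetricsEvent with meter name "ai_guard.agent" and calls client.metric().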

Error Handling

The metric() method raises exceptions based on the HTTP response:

| HTTP Status | Exception | Description |
| --- | --- | --- |
| 400 | ValueError | Metrics not enabled on the service, or invalid request |
| 401 | PermissionError | Invalid or missing API key |
| Other | RuntimeError | Unexpected error |

try:
    client.metric(MetricsEvent(
        attributes={"new_session": "true"},
        meter=MetricsEventMeter(name="ai_guard.user", value="1"),
    ))
except ValueError as e:
    print(f"Metrics not enabled: {e}")
except PermissionError as e:
    print(f"Authentication failed: {e}")
except RuntimeError as e:
    print(f"Service error: {e}")
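Since metrics are telemetry rather than core functionality, you may want a failed submission to never interrupt the agent. One possible wrapper (a sketch with hypothetical names; `submit` stands in for client.metric):

```python
import logging

logger = logging.getLogger("ai_guard.metrics")

def safe_metric(submit, event) -> bool:
    """Submit a metric event, logging instead of raising on failure.

    `submit` is any callable that raises ValueError, PermissionError,
    or RuntimeError on error (as client.metric does). Returns True on
    success, False otherwise.
    """
    try:
        submit(event)
        return True
    except ValueError as e:
        logger.warning("Metrics not enabled or bad request: %s", e)
    except PermissionError as e:
        logger.warning("Authentication failed: %s", e)
    except RuntimeError as e:
        logger.warning("Service error: %s", e)
    return False
```

This keeps observability best-effort: a misconfigured metrics endpoint degrades to log warnings instead of breaking classification or redaction.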

Integration Example

A complete example that classifies text, applies redaction, and records metrics:

import time
from ai_guard import AIGuardClient
from ai_guard.api import (
    AIPlatform, ClassificationRequest, ClassifierDescriptionDefault,
    MetricsEvent, MetricsEventMeter,
)
from ai_guard.redact import ClassificationRedactor, RedactPolicy, RedactAction, RedactKind

client = AIGuardClient(
    "https://ai-guard.example.com:4443",
    token="your-api-key",
    agent_id="my-agent",
    platform=AIPlatform.AMAZON_BEDROCK,
)

# Record new user session
client.metric(MetricsEvent(
    attributes={"new_session": "true"},
    meter=MetricsEventMeter(name="ai_guard.user", value="1"),
))

# Classify the agent response and measure elapsed time
agent_output = "..."  # placeholder: the text returned by your LLM agent
start = time.perf_counter()
response = client.classify(ClassificationRequest(
    context={"actor": "agent"},
    classifier_description=ClassifierDescriptionDefault(),
    text=agent_output,
))
elapsed_ms = (time.perf_counter() - start) * 1000

# Record agent response time
client.metric(MetricsEvent(
    attributes={},
    meter=MetricsEventMeter(name="ai_guard.agent", value=str(elapsed_ms)),
))

# Apply redaction and record redaction events
policy = RedactPolicy(
    actions=[RedactAction(kind=RedactKind.REDACT, classifier="US_PHONE_NUMBER")],
    default=RedactKind.NONE,
    redactor="*",
)
redactor = ClassificationRedactor(policy)
result = redactor.redact(text=agent_output, classification=response)

for action in result.actions:
    client.metric(MetricsEvent(
        attributes={"action": action.kind.name.lower(), "actor": "agent"},
        meter=MetricsEventMeter(name="ai_guard.redact", value="1"),
    ))

What's Next?