Observability & Metrics
AI Guard exposes an observability pipeline that captures classification activity, redaction actions, agent response times, and user sessions. Metrics flow from the SDK through the AI Guard service to OneTrust AI Governance for compliance monitoring and dashboards.
Metrics Pipeline

The OneTrust AI Governance Cloud acts as a control plane. It does not collect prompts or responses. Only aggregated classification metrics are sent from the Light Worker Node to AI Governance at regular intervals.
Sending Metrics
Use client.metric() to record metrics events:
```python
from ai_guard import AIGuardClient
from ai_guard.api import AIPlatform, MetricsEvent, MetricsEventMeter

client = AIGuardClient(
    "https://ai-guard.example.com:4443",
    token="your-api-key",
    agent_id="my-agent",
    platform=AIPlatform.AMAZON_BEDROCK,
)
```
Automatic Attributes

The agent_id and platform attributes are injected automatically by the client into all metric events. You do not need to include them in the attributes dictionary.
Available Meters
ai_guard.agent – Agent Response Time

Record the response time of your LLM agent in milliseconds:
```python
client.metric(MetricsEvent(
    attributes={},
    meter=MetricsEventMeter(name="ai_guard.agent", value="1.234"),
))
```

| Attribute | Required | Description |
|---|---|---|
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |
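One way to produce the millisecond value is to time the agent call yourself. The helper below is a hypothetical sketch (format_response_time is not part of the SDK) showing how to turn an elapsed interval into the decimal-string format the meter expects:

```python
import time

def format_response_time(start: float, end: float) -> str:
    """Convert a perf_counter interval to milliseconds as a decimal string."""
    return f"{(end - start) * 1000:.3f}"

# Time a (simulated) agent call.
start = time.perf_counter()
time.sleep(0.05)  # stand-in for your LLM call
end = time.perf_counter()

# Pass the result as MetricsEventMeter(name="ai_guard.agent", value=value).
value = format_response_time(start, end)
```

time.perf_counter() is preferred over time.time() here because it is monotonic and unaffected by system clock adjustments.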
ai_guard.user – User Session

Record a user interaction or session event:
```python
client.metric(MetricsEvent(
    attributes={"new_session": "true"},
    meter=MetricsEventMeter(name="ai_guard.user", value="1"),
))
```

| Attribute | Required | Description |
|---|---|---|
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |
| new_session | Yes | "true" or "false" – whether this is a new session |
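If your application tracks sessions itself, a small helper can decide the new_session value. This is a hypothetical sketch (SessionTracker is not part of the SDK), assuming you have a stable session ID per user interaction:

```python
class SessionTracker:
    """Tracks which session IDs have been seen, yielding the new_session attribute value."""

    def __init__(self):
        self._seen = set()

    def new_session_attr(self, session_id: str) -> str:
        """Return "true" the first time a session ID is seen, "false" afterwards."""
        is_new = session_id not in self._seen
        self._seen.add(session_id)
        return "true" if is_new else "false"

tracker = SessionTracker()
tracker.new_session_attr("sess-42")  # "true"
tracker.new_session_attr("sess-42")  # "false"
```

Pass the returned string as attributes={"new_session": ...} on the ai_guard.user event.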
ai_guard.redact – Redaction Event

Record a redaction or block event:
```python
client.metric(MetricsEvent(
    attributes={"action": "redact", "actor": "user"},
    meter=MetricsEventMeter(name="ai_guard.redact", value="1"),
))
```

| Attribute | Required | Description |
|---|---|---|
| agent_id | Auto-injected | Unique agent identifier |
| platform | Auto-injected | AI platform identifier |
| action | Yes | "redact" or "block" |
| actor | Yes | "user" or "agent" – source of the classified text |
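Because the action attribute accepts only "redact" or "block", redaction results that represent a pass-through should not be recorded. The sketch below is hypothetical and uses a stand-in enum rather than the SDK's RedactKind, to show the filtering idea:

```python
from enum import Enum

class Kind(Enum):  # stand-in for ai_guard.redact.RedactKind
    NONE = "none"
    REDACT = "redact"
    BLOCK = "block"

def redact_metric_attrs(kinds, actor: str):
    """Yield metric attribute dicts only for actions worth recording."""
    for kind in kinds:
        if kind in (Kind.REDACT, Kind.BLOCK):
            yield {"action": kind.name.lower(), "actor": actor}

attrs = list(redact_metric_attrs([Kind.REDACT, Kind.NONE, Kind.BLOCK], "agent"))
# attrs now holds one attribute dict per redact/block action; emit one
# ai_guard.redact event per dict.
```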
ai_guard.classification – Classification Count
Automatic Meter

This meter is generated automatically by the AI Guard service for each classifier match in a classification response. It cannot be submitted via the SDK.
| Attribute | Source | Description |
|---|---|---|
| agent_id | From request context | Unique agent identifier |
| platform | From request context | AI platform identifier |
| actor | From request context | "user" or "agent" |
| classifier | Auto-set by service | The classifier that matched |
Meter Types
| Meter | Type | Value Format |
|---|---|---|
| ai_guard.agent | Histogram | Response time in milliseconds (decimal string) |
| ai_guard.user | Counter | "1" |
| ai_guard.redact | Counter | "1" |
| ai_guard.classification | Counter | "1" (auto-generated) |
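The value formats above can be checked before submission. The validator below is a hypothetical sketch, not part of the SDK; it simply mirrors the table:

```python
# Meter types as documented: counters always carry "1",
# the histogram carries a decimal-string millisecond value.
COUNTER_METERS = {"ai_guard.user", "ai_guard.redact", "ai_guard.classification"}
HISTOGRAM_METERS = {"ai_guard.agent"}

def valid_meter_value(name: str, value: str) -> bool:
    """Return True if value matches the documented format for the meter."""
    if name in COUNTER_METERS:
        return value == "1"
    if name in HISTOGRAM_METERS:
        try:
            return float(value) >= 0
        except ValueError:
            return False
    return False  # unknown meter name

valid_meter_value("ai_guard.agent", "1.234")  # True
valid_meter_value("ai_guard.user", "2")       # False – counters must be "1"
```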
Error Handling
The metric() method raises exceptions based on the HTTP response:
| HTTP Status | Exception | Description |
|---|---|---|
| 400 | ValueError | Metrics not enabled on the service, or invalid request |
| 401 | PermissionError | Invalid or missing API key |
| Other | RuntimeError | Unexpected error |
```python
try:
    client.metric(MetricsEvent(
        attributes={"new_session": "true"},
        meter=MetricsEventMeter(name="ai_guard.user", value="1"),
    ))
except ValueError as e:
    print(f"Metrics not enabled: {e}")
except PermissionError as e:
    print(f"Authentication failed: {e}")
except RuntimeError as e:
    print(f"Service error: {e}")
```

Integration Example
A complete example that classifies text, applies redaction, and records metrics:
```python
import time

from ai_guard import AIGuardClient
from ai_guard.api import (
    AIPlatform, ClassificationRequest, ClassifierDescriptionDefault,
    MetricsEvent, MetricsEventMeter,
)
from ai_guard.redact import ClassificationRedactor, RedactPolicy, RedactAction, RedactKind

client = AIGuardClient(
    "https://ai-guard.example.com:4443",
    token="your-api-key",
    agent_id="my-agent",
    platform=AIPlatform.AMAZON_BEDROCK,
)

# Record new user session
client.metric(MetricsEvent(
    attributes={"new_session": "true"},
    meter=MetricsEventMeter(name="ai_guard.user", value="1"),
))

# Classify agent response and measure time.
# agent_output is the LLM response text produced elsewhere in your application.
start = time.time()
response = client.classify(ClassificationRequest(
    context={"actor": "agent"},
    classifier_description=ClassifierDescriptionDefault(),
    text=agent_output,
))
elapsed_ms = (time.time() - start) * 1000

# Record agent response time as a decimal string
client.metric(MetricsEvent(
    attributes={},
    meter=MetricsEventMeter(name="ai_guard.agent", value=f"{elapsed_ms:.3f}"),
))

# Apply redaction and record redaction events
policy = RedactPolicy(
    actions=[RedactAction(kind=RedactKind.REDACT, classifier="US_PHONE_NUMBER")],
    default=RedactKind.NONE,
    redactor="*",
)
redactor = ClassificationRedactor(policy)
result = redactor.redact(text=agent_output, classification=response)

for action in result.actions:
    client.metric(MetricsEvent(
        attributes={"action": action.kind.name.lower(), "actor": "agent"},
        meter=MetricsEventMeter(name="ai_guard.redact", value="1"),
    ))
```

What's Next?
- Metrics Pipeline & Exporters – How metrics flow to AI Governance
- Meter Definitions – Detailed meter specifications
- Troubleshooting – Common metrics issues and fixes