Deploy the Light Worker Node
The AI Guard classification service runs on the OneTrust Light Worker Node. The Light Worker Node serves as the classification engine and a secure communication link between the AI Guard SDK and OneTrust Cloud.
About the Light Worker Node
The Light Worker Node is a lightweight, on-premises deployment that hosts multiple OneTrust services; AI Guard is one of the supported feature sets. The node runs within your infrastructure, ensuring that prompts and responses never leave your environment — only aggregated classification metrics are sent to OneTrust Cloud.
Prerequisites
Before deploying the Light Worker Node, make sure you have:
- An active OneTrust AI Governance subscription
- A Kubernetes cluster (for Worker Node deployment) or a Docker runtime
- Network connectivity from the node to your OneTrust tenant for token validation
- TLS certificates for securing the AI Guard service endpoint
Deployment Steps
1. Follow the Light Worker Node Installation Guide
Deploy the Light Worker Node using the official OneTrust installation guide:
Installation Guide: Follow the detailed deployment instructions in the OneTrust Light Worker Node documentation.
2. Configure TLS Certificates
AI Guard requires TLS certificates for secure communication. Generate or obtain certificates for the service:
Option A: Self-Signed Certificate
Create a self-signed certificate suitable for development and internal use:
```
# ai-guard.cnf — OpenSSL config for a self-signed CA certificate
[req]
default_bits = 4096
distinguished_name = req_distinguished_name
x509_extensions = v3_ca
prompt = no

[req_distinguished_name]
CN = ai-guard.local

[v3_ca]
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, keyCertSign, keyEncipherment
extendedKeyUsage = serverAuth
subjectKeyIdentifier = hash
subjectAltName = @alt_names

[alt_names]
DNS.1 = ai-guard.local
DNS.2 = localhost
# Add additional hostnames or IPs as needed
```
Generate the certificate:
```
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout server.key -out server.crt -config ai-guard.cnf
```
Option B: CA-Signed Certificate
Use your organization's Certificate Authority to issue a certificate for the AI Guard service hostname.
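The exact issuance workflow depends on your CA, but it typically starts from a private key and a certificate signing request (CSR). The sketch below shows one way to produce both with OpenSSL 1.1.1+; the hostname `ai-guard.example.com` is a placeholder, not a value defined by OneTrust — substitute your actual AI Guard service hostname:

```shell
# Sketch: generate a private key and a CSR to submit to your CA.
# "ai-guard.example.com" is a placeholder hostname.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=ai-guard.example.com" \
  -addext "subjectAltName=DNS:ai-guard.example.com"

# Inspect and verify the request before submitting it
openssl req -in server.csr -noout -subject -verify
```

Make sure the subjectAltName covers every hostname the SDK will use to reach the service, since modern TLS clients validate the SAN rather than the CN.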
3. Generate the Certificate Pin for SDK Users
If using certificate pinning (recommended for self-signed certificates), generate the SHA-256 pin to share with SDK users:
```
openssl x509 -in server.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | base64
```
Share this base64-encoded value with developers who will use the pin_sha256 parameter in the SDK client.
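As a sanity check, a valid pin is always 44 base64 characters ending in "=" (the encoding of a 32-byte SHA-256 digest). The self-contained sketch below generates a throwaway certificate and runs the same pipeline, so you can see the expected shape before sharing a real pin; `demo.crt`/`demo.key` are scratch files, not deployment artifacts:

```shell
# Sketch: create a throwaway self-signed cert, then derive its public-key pin.
openssl req -x509 -newkey rsa:2048 -sha256 -days 1 -nodes \
  -keyout demo.key -out demo.crt -subj "/CN=demo.local" 2>/dev/null

PIN=$(openssl x509 -in demo.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | base64)

echo "$PIN"   # 44 base64 characters ending in "="
```

Note that the pin is derived from the public key, not the whole certificate, so it stays valid if you reissue a certificate with the same key pair.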
4. Verify the Deployment
Once the Light Worker Node is running, verify the AI Guard service is healthy:
```
# Using a CA-signed certificate
curl https://ai-guard.example.com:4443/health

# Using a self-signed certificate with pinning
PIN=$(openssl x509 -in server.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | base64)
curl -k --pinnedpubkey "sha256//$PIN" https://ai-guard.example.com:4443/health
```
A successful response returns 200 OK.
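The service may take a moment to come up after deployment, so a one-shot curl can report a false negative. The helper below is a sketch, not part of the product; `check_health` and the URL are illustrative names you would adapt to your deployment:

```shell
# Hypothetical helper: poll a health URL until it returns HTTP 200.
check_health() {
  url="$1"; attempts="${2:-5}"
  for _ in $(seq "$attempts"); do
    # Add --pinnedpubkey here if you rely on certificate pinning;
    # -k is only appropriate for self-signed certificates.
    code=$(curl -ks -o /dev/null -w '%{http_code}' --connect-timeout 2 "$url")
    [ "$code" = "200" ] && return 0
    sleep 1
  done
  return 1
}

check_health "https://ai-guard.example.com:4443/health" || echo "not healthy yet"
```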
Networking Requirements
Ensure the following network connectivity is in place:
| Direction | Purpose | Endpoint | Notes |
|---|---|---|---|
| Inbound | SDK traffic | AI Guard service on port 4443 | Must be reachable from your application network |
| Outbound | Token validation | Your OneTrust tenant URL | Requires internet or tenant connectivity |
| Internal | Metrics publishing | datadiscovery-onprem-agent:8080 | Internal Kubernetes network only |
| Internal | Classification profiles | scan-job-manager:8080 | Internal Kubernetes network only |
Network Bridging: If the SDK runs on a different network than the Worker Node, you may need to configure network bridging (e.g., a NodePort, LoadBalancer, or Ingress) to route traffic to the AI Guard pod.
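As one illustration of the bridging options, a NodePort Service can expose the AI Guard pod outside the cluster. The names, labels, and node port below are assumptions for the sketch, not values defined by the product; match them to your actual deployment:

```yaml
# Hypothetical NodePort Service; the selector must match your AI Guard pod's labels.
apiVersion: v1
kind: Service
metadata:
  name: ai-guard-nodeport
spec:
  type: NodePort
  selector:
    app: ai-guard        # assumption; use your pod's actual label
  ports:
    - port: 4443         # service port (the AI Guard TLS port)
      targetPort: 4443   # container port
      nodePort: 30443    # reachable on each node's IP from outside the cluster
```

A LoadBalancer Service or an Ingress with TLS passthrough are alternatives when your environment provides them.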
What's Next?
- Install the SDK — Download and install the AI Guard Python SDK
- Initialize the Client — Configure and connect the SDK to your AI Guard service