Affinidi Trust Fabric
This product is in Closed Beta. The information provided here offers a technical preview of key capabilities.
Overview
Affinidi Trust Fabric is a secure proxy infrastructure that provides cryptographic identity, policy enforcement, and observability for AI agent communication. It enables multi-hop routing across organisational boundaries whilst maintaining end-to-end security and compliance.
```mermaid
sequenceDiagram
    participant Agent as AI Agent
    participant GW as Trust Gateway
    participant Service as Target Service/<br/>AI Agent
    Agent->>GW: Send Request (A2A/MCP) via Channel
    Note over GW: Verify Identity, Assign DID<br/>Apply Policies<br/>Capture Metrics<br/>Inject Metadata
    GW->>Service: Forward AI Agent Request
    Service-->>GW: Send Response
    Note over GW: Verify Identity, Assign DID<br/>Apply Policies<br/>Capture Metrics<br/>Inject Metadata
    GW-->>Agent: Forward Backend Response
```

The Trust Gateway acts as an intercepting proxy that:
- Verifies identity and assigns decentralised identifiers (DIDs) to AI agents automatically.
- Enforces access policies, rate limits, and circuit breakers.
- Routes traffic to target services, other AI agents, or other Trust Gateways.
- Captures metrics and request/response payloads for audit.
Key features
Cryptographic identity
The Trust Gateway automatically generates a unique decentralised identifier (DID) for each agent, based on configurable identity fields extracted from requests. The DID serves as a persistent identifier for the agent, regardless of request origin or network address.
- Automatic DID issuance based on agent configuration (LLM provider, model, deployment region).
- Ed25519 signing for inter-gateway communication.
- DID resolution with caching for performance.
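The core idea, deriving a stable identifier from configurable identity fields so the same agent resolves to the same DID regardless of network address, can be sketched as follows. The field names and the `did:example:` formatting are illustrative assumptions, not the gateway's actual DID method:

```python
import hashlib

def derive_agent_id(identity_fields: dict) -> str:
    """Derive a stable, request-origin-independent identifier from
    configured identity fields (hypothetical scheme for illustration)."""
    # Canonicalise: sort keys so field order never changes the result.
    canonical = "|".join(f"{k}={identity_fields[k]}" for k in sorted(identity_fields))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:32]
    return f"did:example:{digest}"

agent = {"llm_provider": "openai", "model": "gpt-4", "region": "eu-west-1"}
# The same fields always yield the same identifier, whatever the source address.
assert derive_agent_id(agent) == derive_agent_id(dict(reversed(list(agent.items()))))
```

Because the derivation is deterministic over the configured fields, two deployments of the same agent configuration map to one identity, while changing any field (e.g. the model) produces a new DID.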
Channel management
A channel is a routing configuration that controls how the Trust Gateway handles requests. A channel connects your AI agent to its destination, whether that’s a backend service, another AI agent, or another Trust Gateway.
Each channel defines:
- Where to listen: The URL path where the channel accepts incoming requests.
- Where to forward: The destination endpoint (backend service, AI agent, or another Trust Gateway).
- Which protocol to use: A2A (Agent-to-Agent), MCP (Model Context Protocol), or other supported protocols.
- What policies to enforce: Security rules, rate limits, circuit breakers, and access controls.
- What to inject: Custom metadata, identity credentials, or secrets.
- What to capture: Logging, metrics, and payload information for monitoring and debugging.
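Putting those six concerns together, a channel can be thought of as a single configuration record. The sketch below models one in Python purely for illustration; the field names and values are assumptions, not the Trust Gateway's actual configuration schema:

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """Illustrative model of a channel definition (hypothetical schema)."""
    listen_path: str                               # where to listen
    target_url: str                                # where to forward
    protocol: str                                  # "a2a", "mcp", ...
    policies: list = field(default_factory=list)   # what to enforce
    inject: dict = field(default_factory=dict)     # metadata/credentials to add
    capture: dict = field(default_factory=dict)    # logging/metrics settings

mcp_channel = Channel(
    listen_path="/channels/tools",
    target_url="https://tools.internal.example.com/mcp",
    protocol="mcp",
    policies=["rate-limit", "deny-external-origins"],
    capture={"payloads": True, "metrics": True},
)
```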
Policy enforcement
The Trust Gateway uses Open Policy Agent (OPA) to define and enforce custom access control rules. Policies are written in Rego, a declarative policy language that evaluates requests before they are forwarded to the target. This enables fine-grained, context-aware authorization based on agent identity, request content, JWT claims, and external data sources.
OPA policies evaluate requests based on:
- Agent identity (DID, metadata fields).
- Request content (method, parameters, headers).
- Contextual information (time of day, rate limits, user roles).
- External data sources (allowlists, denylists, databases).
Network configuration
Network configuration controls how the Trust Gateway handles the request lifecycle, failures, and traffic distribution. These features enable resilient communication patterns, protect against cascading failures, and give fine-grained control over request routing and performance.
- Circuit breakers to prevent cascading failures.
- Retry logic with exponential backoff.
- Configurable timeouts (request, connect, idle).
- Traffic mirroring for A/B testing and shadow deployment.
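How retries with exponential backoff and a circuit breaker interact can be sketched as below. This is a minimal illustration of the pattern, assuming a failure-count trip condition; it is not the gateway's actual implementation, which is driven by configuration rather than code:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after a threshold of consecutive
    failures so a failing target stops receiving traffic."""
    def __init__(self, failure_threshold: int = 5):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def open(self) -> bool:
        return self.failures >= self.failure_threshold

    def record(self, success: bool) -> None:
        self.failures = 0 if success else self.failures + 1

def retry_with_backoff(send, breaker: CircuitBreaker,
                       attempts: int = 3, base_delay: float = 0.5):
    """Retry a request with exponential backoff; fail fast when the
    circuit is open instead of hammering an unhealthy target."""
    if breaker.open:
        raise RuntimeError("circuit open: target considered unhealthy")
    for attempt in range(attempts):
        try:
            result = send()
            breaker.record(success=True)
            return result
        except Exception:
            breaker.record(success=False)
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

The two mechanisms complement each other: retries absorb transient failures, while the breaker prevents those same retries from cascading when a target is persistently down.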
Observability
The Trust Gateway provides comprehensive observability and monitoring capabilities to track performance, debug issues, and analyse AI agent behaviour across your channels and gateway connections.
- Real-time dashboard showing active connections and request rates.
- Full payload capture for debugging (configurable per channel).
- Multi-backend metrics export (Prometheus, CloudWatch, local files).
- Structured logging with correlation IDs for request tracing.
Multi-hop routing
Trust Gateways can chain together for cross-organisational or cross-network communication. Each gateway maintains independent policies and audit trails.
```mermaid
sequenceDiagram
    participant Agent as AI Agent
    participant GW1 as Trust Gateway 1<br/>(Org A)
    participant GW2 as Trust Gateway 2<br/>(Org B)
    participant Service as Target Service
    Agent->>GW1: Request (A2A/MCP)
    Note over GW1: Assign DID<br/>Apply Policies<br/>Capture Metrics
    GW1->>GW2: Encrypted DIDComm Message
    Note over GW2: Verify DID<br/>Apply Policies<br/>Capture Metrics
    GW2->>Service: Forward Request
    Service-->>GW2: Response
    GW2-->>GW1: Encrypted Response
    GW1-->>Agent: Response
```

Gateway connections are persistent, authenticated links between Trust Gateways. Each connection uses DIDComm v2.1 for message delivery and supports:
- Bidirectional messaging.
- DID-based identity verification.
- Out-of-band (OOB) invitation-based setup.
Supported protocols
Affinidi Trust Fabric supports multiple communication protocols to enable interoperability across different AI agent ecosystems:
| Protocol | Description | Use Case |
|---|---|---|
| Agent-to-Agent (A2A) | Open standard protocol for direct agent-to-agent communication enabling interoperability between different AI agent frameworks. Provides a common language for agents to discover, negotiate capabilities, and exchange messages. Learn more about A2A Protocol. | Enable agents built on different platforms (AutoGPT, LangChain, custom frameworks) to communicate and collaborate seamlessly. |
| Agent Payments Protocol (AP2) | Google’s protocol enabling AI agents to execute payment transactions autonomously. Standardises how agents authenticate, authorise, and complete payments across different payment providers. Learn more about Google AP2. | AI agents that need to purchase services, pay for API usage, or conduct financial transactions on behalf of users. |
| Model Context Protocol (MCP) | JSON-RPC 2.0 interface for AI model tool calling, resource access, and prompt management. The Trust Gateway injects identity tracking via the _meta field without modifying the MCP protocol. Learn more about MCP. | Monitor which AI models call which backend tools whilst maintaining MCP protocol compliance. |
| Universal Commerce Protocol (UCP) | Google’s standard protocol for commerce-related agent interactions. Enables AI agents to participate in e-commerce transactions, payment processing, and merchant communication flows. Learn more about Google UCP. | AI agents that facilitate product purchases, process payments, or interact with merchant systems for standardised commerce operations. |
| x402 Protocol | Payment-required protocol based on HTTP 402 status code. Enables micropayment and pay-per-use models for AI agent API access. Learn more about x402.org. | Monetise AI agent API access through automated micropayments. Agents can consume paid resources without manual intervention. |
| DIDComm v2.1 | Encrypted, authenticated messaging between Trust Gateways for multi-hop routing. Managed through persistent Connection Points linked to DIDComm Mediators. Learn more about Affinidi Messaging. | Create secure communication paths across organisational boundaries with end-to-end encryption. |
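For MCP specifically, identity tracking rides in the `_meta` field under `params`, so the message stays a valid JSON-RPC 2.0 request. A minimal sketch of what such an injection could look like; the metadata key the gateway actually writes is an assumption here:

```python
import json

def inject_identity(mcp_request: dict, agent_did: str) -> dict:
    """Attach identity metadata to an MCP (JSON-RPC 2.0) request via
    params._meta, leaving the rest of the message untouched."""
    enriched = json.loads(json.dumps(mcp_request))   # deep copy, original intact
    meta = enriched.setdefault("params", {}).setdefault("_meta", {})
    meta["example.com/agent-did"] = agent_did        # hypothetical metadata key
    return enriched

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "search", "arguments": {"q": "weather"}}}
tagged = inject_identity(request, "did:example:abc123")
assert tagged["params"]["arguments"] == {"q": "weather"}   # payload untouched
```

Because `_meta` is reserved by MCP for exactly this kind of out-of-band metadata, the target tool server can ignore it without any protocol changes.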
Sample use cases
| Use Case | Description |
|---|---|
| Multi-Tenant AI Platforms | Assign unique DIDs to each tenant’s agents. Enforce tenant-specific rate limits, access policies, and observability boundaries. |
| Cross-Organisation Collaboration | Link Trust Gateways between organisations. Each maintains independent policies and audit trails whilst enabling secure agent-to-service communication. |
| Network Boundary Traversal | Route internal agents through DMZ gateways to external services. Maintain end-to-end encryption and complete audit trails across network zones. |
| AI Model Experimentation | Use traffic mirroring to test different LLM responses (GPT-4 vs Claude) in parallel without modifying agent code. |