Trust Center
Principles we apply to how we build
Access Control & Credentials
Least-privilege access
Agents and users are only granted access to what they need for a specific task. Nothing more.
No static API keys
Access to tools and services uses scoped, short-lived tokens, not passwords or shared keys stored in code.
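As an illustration only (the names, scopes, and five-minute TTL below are hypothetical, not our actual implementation), a scoped, short-lived credential can be sketched like this:

```python
import secrets
import time

def mint_token(scopes, ttl_seconds=300):
    """Mint a random token limited to the given scopes and lifetime."""
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(token, required_scope):
    """A token is usable only while unexpired and only for its scopes."""
    return time.time() < token["expires_at"] and required_scope in token["scopes"]

tok = mint_token({"calendar:read"})
assert is_valid(tok, "calendar:read")
assert not is_valid(tok, "calendar:write")  # out of scope, even before expiry
```

Because the token expires on its own, nothing long-lived ever needs to be written into code or configuration.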
MFA on all critical access
Multi-factor authentication is required for all services and infrastructure we operate.
Data & EU Residency
Data stays in the EU
All processing and storage takes place on EU-based infrastructure. Your data does not leave the EU.
We only use what we need
Only the data required for a task is passed to the AI. Sensitive context is not included by default.
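In spirit, data minimization is a filter applied before anything reaches the model. A minimal sketch (the record fields here are invented for illustration):

```python
def minimal_context(record, allowed_fields):
    """Pass only the fields a task actually needs to the AI."""
    return {k: v for k, v in record.items() if k in allowed_fields}

customer = {
    "name": "Anna",
    "email": "anna@example.eu",
    "notes": "prefers phone contact",
}
# A scheduling task needs the name and notes, not the email address.
assert minimal_context(customer, {"name", "notes"}) == {
    "name": "Anna",
    "notes": "prefers phone contact",
}
```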
Tenant isolation
Your data is kept separate from other customers at every layer.
Infrastructure & Compliance
EU-based hosting
All infrastructure runs in EU data centres operated by certified providers.
AI-generated code is reviewed
Code written with AI assistance goes through the same review and testing gates as any other code. It cannot be merged automatically.
Supply chain integrity
We track where our code and dependencies come from.
Monitoring & Audit Trail
Every agent action is logged
We record what tools were called, when, by whom, and what the outcome was.
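The shape of such a record can be sketched as follows; the field names are illustrative, not our actual log schema:

```python
from datetime import datetime, timezone

def log_action(log, actor, tool, outcome):
    """Append one record per agent tool call: who, what, when, result."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "tool": tool,
        "outcome": outcome,
    })

audit_log = []
log_action(audit_log, "agent-7", "calendar.read", "ok")
assert audit_log[0]["actor"] == "agent-7"
assert audit_log[0]["tool"] == "calendar.read"
```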
You can ask what happened
For any session, we can reconstruct the sequence of agent decisions and actions.
Anomalies are flagged
Unusual agent behaviour (unexpected tool calls, access outside normal scope) triggers alerts.
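At its simplest, this kind of check compares observed tool calls against an agent's expected scope. A toy sketch, not our actual detection logic:

```python
def flag_anomalies(calls, expected_tools):
    """Return every tool call that falls outside the agent's normal scope."""
    return [call for call in calls if call not in expected_tools]

calls = ["calendar.read", "mail.send", "calendar.read"]
# An agent scoped to calendar reads should never be sending mail.
assert flag_anomalies(calls, {"calendar.read"}) == ["mail.send"]
```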
Prompt Injection
Model output is never trusted
We treat everything the AI produces as untrusted input. Security rules are enforced by the system, not by instructions to the model.
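Concretely, treating model output as untrusted means validating it like any external input before acting on it. A minimal sketch (the integer-amount example is hypothetical):

```python
def safe_parse_amount(model_output):
    """Validate model output like untrusted input; never eval or trust it."""
    text = model_output.strip()
    if not text.isdigit():
        raise ValueError("model output is not a plain integer")
    value = int(text)
    if not (0 <= value <= 1000):
        raise ValueError("value outside the permitted range")
    return value

assert safe_parse_amount("42") == 42
```

If the model returns anything other than a well-formed, in-range value, the system rejects it; no wording in the output can talk its way past the check.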
Indirect injection via documents
If an agent reads a file, email, or webpage, that content cannot secretly redirect what the agent does.
We design for injection, not against it
We assume prompt injection attempts will occur and architect accordingly, rather than trying to block every variant.
Risk & Autonomy
Autonomy is configured, not assumed
How independently an agent acts is a setting, agreed with you upfront. The default is cautious.
High-risk actions are classified
We distinguish between low-risk reads and high-risk writes or sends. Different rules apply to each.
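A toy version of such a classification, with illustrative action names; note that anything unrecognised falls into the stricter class:

```python
LOW_RISK = {"read", "search", "summarise"}
HIGH_RISK = {"write", "send", "delete"}

def risk_level(action):
    """Classify an action; unknown actions default to high risk."""
    if action in HIGH_RISK:
        return "high"
    if action in LOW_RISK:
        return "low"
    return "high"

assert risk_level("read") == "low"
assert risk_level("send") == "high"
assert risk_level("unknown_op") == "high"  # fail closed
```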
Guardrails
Hard limits the AI cannot override
Certain actions are blocked at the system level; no instruction to the model can change this.
Tool access is allowlisted
Agents can only call tools that have been explicitly permitted. There is no general-purpose access.
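The pattern can be sketched as a dispatcher that only reaches tools on an explicit allowlist (the registry and tool names here are invented):

```python
TOOL_REGISTRY = {
    "calendar.read": lambda args: f"events for {args['day']}",
    "mail.send": lambda args: f"sent to {args['to']}",
}

def call_tool(name, args, allowlist):
    """Dispatch a tool call only if the tool is explicitly permitted."""
    if name not in allowlist or name not in TOOL_REGISTRY:
        raise PermissionError(f"tool not permitted: {name}")
    return TOOL_REGISTRY[name](args)

# This agent is allowed to read the calendar, and nothing else.
assert call_tool("calendar.read", {"day": "Mon"}, {"calendar.read"}) == "events for Mon"
```

A tool that is registered but not on this agent's allowlist is just as unreachable as one that does not exist.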
Execution is isolated
When agents run code or scripts, this happens in a sandboxed environment with no access to your broader systems.
Approval Gates
Humans approve high-impact actions
Before an agent sends a message, deletes data, or runs code, a human confirms. This is enforced, not optional.
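As a sketch of the enforcement (the action names and approval field are illustrative): a high-impact action without a recorded human approval simply does not execute.

```python
HIGH_IMPACT = {"send_message", "delete_data", "run_code"}

def perform(action, approved_by=None):
    """High-impact actions require an explicit human approval record."""
    if action in HIGH_IMPACT and approved_by is None:
        return "pending_approval"
    return "executed"

assert perform("read_file") == "executed"
assert perform("send_message") == "pending_approval"
assert perform("send_message", approved_by="reviewer@example.eu") == "executed"
```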
Limits are enforced, not just prompted
We don't rely on telling the AI to behave. Boundaries are technical: the AI cannot act outside them, regardless of what it is told.