
AI governance,
in two layers.

Control plane and data plane.

AI governance platforms like ServiceNow AI Control Tower™ inventory, observe, and measure every AI deployed across the enterprise. They tell you what AI is running. Lucairn sits one layer below: in the request path, redacting PII before the model sees it, signing each response so an auditor can verify the decision later. The two layers are complementary, not competitive — and most regulated EU workloads need both.

TL;DR

The control plane answers which AI exists, who can use it, and what risk class it falls into. The data plane answers what data went in, what came out, and whether we can prove it. ACT (AI Control Tower) does the first; Lucairn does the second. Article 12 logging needs the data plane — vendor dashboards are not sufficient evidence.

01 · How the layers split

Five risk areas,
two layers each.

Each row is a real procurement question. Read across to see which layer answers it. Most regulated buyers end up wiring both — ACT as the governance hub, Lucairn as the inline data-plane gateway whose signed receipts feed back into ACT's audit module.

Risk area
Control plane (governance platforms like ACT)
Data plane (Lucairn)
DISCOVER

What AI is running where

✓ Native — discovers AI across hyperscalers, SaaS apps, and LLM endpoints. Per ServiceNow's Knowledge 2026 announcement, ACT discovers AI deployed across any system in the enterprise.

— Not Lucairn's job — wire Lucairn's audit feed into the inventory so the discovered asset has receipts attached.

CLASSIFY

Risk classification (EU AI Act high-risk?)

✓ Risk frameworks aligned to NIST AI RMF and the EU AI Act. ACT carries the policy mapping and conformity workflow.

— Lucairn doesn't classify; we produce the per-decision evidence the classification needs.

ACCESS

Identity and access governance

✓ Scoped permissions, least-privilege enforcement across the discovered AI estate.

— Out of scope — Lucairn enforces only at the request boundary (BYOK gating, per-key rate limits, per-customer scoping).

ART. 12

Per-decision evidence (Art. 12 logging)

◐ Tracks the obligation; depends on each integrated system to actually produce the underlying log.

✓ Produces the artefact — signed Lucairn Certificate with input/output hashes per request.

PII

PII never reaches the model

◐ Cannot prevent; observes after the fact via runtime agent telemetry.

✓ On-gateway pseudonymization before the LLM sees the request.

INTEGRITY

Tamper-evident audit trail

◐ Logs decisions; integrity depends on the underlying log store and operator policy.

✓ Ed25519 signature plus RFC 3161 TSA token plus Sigstore Rekor public anchor — verifiable without our cooperation.
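To make the data-plane artefact concrete, here is a minimal sketch of what an unsigned per-request receipt payload could look like. The field names and layout are illustrative assumptions, not Lucairn's published format; the real certificate additionally carries the Ed25519 signature, RFC 3161 TSA token, and Rekor anchor described above.

```python
import hashlib
import json

def make_receipt_payload(request_id: str, prompt: str, completion: str) -> dict:
    """Build the unsigned body of a per-request receipt (illustrative layout).

    Only SHA-256 hashes of the prompt and completion are embedded, so the
    receipt can be handed to an auditor without disclosing the data itself.
    """
    return {
        "request_id": request_id,
        "input_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(completion.encode("utf-8")).hexdigest(),
    }

def canonical_bytes(payload: dict) -> bytes:
    # Canonical serialization: sorted keys, no whitespace. The same payload
    # always yields the same bytes, which is what gets signed and verified.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode("utf-8")

payload = make_receipt_payload("req-0001", "redacted prompt", "model answer")
print(canonical_bytes(payload).decode("utf-8"))
```

The point of the canonical serialization is that verification never depends on how the producer happened to order or whitespace its JSON.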

02 · Procurement questions

We already have ACT —
why Lucairn?

We already have ACT. Doesn't it cover EU AI Act compliance?

ACT covers the governance side of EU AI Act compliance — risk classification, model inventory, policy mapping. The Act's Article 12 (per-decision logging for high-risk systems) and Annex IV (technical documentation, including the actual artefacts the system produced) require evidence that lives in the request path. ACT can track the obligation; Lucairn produces the artefact. Most enterprises deploy both, with Lucairn's signed receipts feeding back into ACT's audit module.

Does Lucairn replace ACT?

No. ACT operates above the request flow — discovering, classifying, observing AI across the enterprise. Lucairn operates inside the request flow — redacting PII before the LLM sees it, signing the response. They are different layers. A team that has only ACT cannot answer the auditor question 'show me the redacted prompt that was sent to the model and the signed proof of what came back.' A team that has only Lucairn cannot answer 'show me every AI deployed across the company.' EU regulated workloads typically need both.
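As a toy illustration of the request-path redaction described above — not Lucairn's actual pipeline — a gateway can pseudonymize recognizable PII patterns before forwarding the prompt, keeping the placeholder-to-original mapping on the gateway side only:

```python
import re

# Toy patterns for illustration only; a production gateway would use far
# more robust detection (NER models, locale-aware validators, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def pseudonymize(prompt: str) -> tuple[str, dict]:
    """Replace detected PII with stable placeholders before the LLM call.

    Returns the redacted prompt plus the placeholder->original mapping,
    which never leaves the gateway.
    """
    mapping = {}
    counter = 0

    def make_sub(kind):
        def _sub(match):
            nonlocal counter
            counter += 1
            token = f"<{kind}_{counter}>"
            mapping[token] = match.group(0)
            return token
        return _sub

    for kind, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(make_sub(kind), prompt)
    return prompt, mapping

redacted, mapping = pseudonymize(
    "Contact anna@example.com about account DE89370400440532013000"
)
print(redacted)  # Contact <EMAIL_1> about account <IBAN_2>
```

The model only ever sees the placeholders; the mapping needed to re-identify them stays inside the data plane.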

Will ACT eventually do the data-plane work too?

Possible, and we'd welcome it — the market needs both layers, and a healthy governance platform plus best-of-breed data-plane gateways is the pattern most security stacks converge on (cf. SIEM and EDR, IAM and a secrets manager, CIEM and runtime protection). Today, ACT's Secure capability focuses on identity, access, and observability of agent behaviour. Per-request PII redaction with cryptographic per-response receipts is a different problem, and Lucairn is purpose-built for it.

What does the ACT and Lucairn integration look like in practice?

Lucairn emits one signed receipt per LLM request. ACT consumes those receipts the same way it consumes any other AI-system audit feed: as evidence rows tagged to a discovered AI asset. The result is one pane of glass for the governance team (ACT) backed by cryptographically verifiable evidence from the data plane (Lucairn). Wiring is REST plus signed JSON; no special partnership needed — the receipt format is open.
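A sketch of what ingesting that feed might look like on the governance side. Field names are hypothetical; the Ed25519 signature check over the canonical payload bytes is elided because it needs a third-party library (e.g. PyNaCl), which is outside this illustration.

```python
import hashlib
import json

def ingest_receipt(receipt_json: str, stored_prompt: str) -> dict:
    """Validate a receipt row before attaching it to a discovered AI asset.

    Checks that the required fields are present and that the receipt's
    input hash matches the redacted prompt retained in the evidence store.
    (Signature verification over the canonical bytes is elided here.)
    """
    receipt = json.loads(receipt_json)
    for field in ("request_id", "input_sha256", "output_sha256", "signature"):
        if field not in receipt:
            raise ValueError(f"missing field: {field}")
    expected = hashlib.sha256(stored_prompt.encode("utf-8")).hexdigest()
    if receipt["input_sha256"] != expected:
        raise ValueError("input hash mismatch: evidence does not match receipt")
    return {"asset_evidence_row": receipt["request_id"], "verified_input": True}

# Hypothetical receipt as it might arrive over the REST feed.
demo = json.dumps({
    "request_id": "req-0001",
    "input_sha256": hashlib.sha256("redacted prompt".encode("utf-8")).hexdigest(),
    "output_sha256": "0" * 64,
    "signature": "base64-ed25519-signature-here",
})
print(ingest_receipt(demo, "redacted prompt"))
```

Because the receipt carries only hashes, the governance platform can verify that its evidence rows match the data plane's record without ever holding the raw prompts or completions.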

03 · Get started

Layer your AI governance.
Add the data plane.

Run the 15-minute compliance assessment to see which Art. 12 / Annex IV controls Lucairn covers and how the receipts wire into your existing governance platform. Output is a one-page exposure report you can bring to your CISO, DPO, and ACT operator.