Tags: EU AI Act · compliance · governance · audit trail · architecture

ServiceNow AI Control Tower at Knowledge 2026: a credible governance signal for the EU AI Act era

ServiceNow expanded AI Control Tower at Knowledge 2026 with five capabilities — discover, observe, govern, secure, measure — explicitly aligned to NIST AI RMF and the EU AI Act. This is good for the market, and it sharpens where the governance control plane stops and the data plane begins. Here's how the two layers fit together.

Lucairn · 9 min read
On this page
  1. What ServiceNow announced at Knowledge 2026
  2. Why this matters — beyond ServiceNow customers
  3. Two layers of AI governance
  4. Where ACT's coverage stops, by design
  5. Where Lucairn complements ACT
  6. A worked example: the EU regulated workload
  7. What changes for Lucairn
  8. Closing — a healthy stack
  9. References

What ServiceNow announced at Knowledge 2026

At Knowledge 2026, ServiceNow announced an expansion of ServiceNow AI Control Tower™. Per the official ServiceNow newsroom release, ACT now lets enterprises "discover, observe, govern, secure, and measure AI deployed across any system in the enterprise" — five capabilities that together position the platform as the central command surface for an organization's AI estate.

The expansion is substantive. Discovery has been extended across 30 new enterprise integrations, including the major hyperscalers (AWS, Google Cloud, Azure) and the operational systems where enterprise AI actually runs (SAP, Oracle, Workday). Risk classification is aligned to five frameworks, including NIST AI RMF and the EU AI Act. Observability for agent runtime behaviour comes from the recently announced Traceloop acquisition. The Secure capability adds identity-and-access-aware governance via the Veza integration, plus an emergency kill-switch for runaway agents. General availability for the expanded surface is planned for August 2026.

A few lines from the press release frame the product positioning precisely. ACT "discovers AI across the enterprise — including LLMs, agents, models and applications running on AWS, Azure, Google Cloud, Salesforce, SAP, Oracle, Workday and more." Coverage spans risk frameworks aligned to NIST AI RMF, the EU AI Act, broader international AI standards, and ServiceNow's own internal policies. The Secure capability "protects AI through identity-aware governance and runtime observability, with one-click controls to disable misbehaving agents."

That's the announcement. The substance is real, and the framing is honest about what the product is: a governance platform.

Why this matters — beyond ServiceNow customers

A serious governance platform vendor explicitly aligning a product roadmap to the EU AI Act is a strong demand signal. ServiceNow does not chase fads, and it does not stake a flagship Knowledge keynote on a regulatory framework unless its enterprise customers are asking for it. They are.

This validates a procurement pattern that the entire compliance-tech category is built around: the CISO, the DPO, and the chief AI officer all want the same evidence, all want it in one place, and all want the regulatory mapping done before the auditor knocks. Read the announcement as good news for everyone working on the EU AI Act stack, even teams that will never run ServiceNow. A heavyweight platform vendor moving in this direction means the regulator-driven category gets bigger, the cost of buyer education goes down, and the question shifts from "do we need this?" to "which combination of products do we need?"

The reason that shift matters for the rest of the category is that no single product is going to answer the EU AI Act for an enterprise. The Act spans technical documentation (Annex IV), per-decision logging (Article 12), human oversight (Article 14), risk management (Article 9), data governance (Article 10), accuracy and robustness (Article 15), and a conformity assessment workflow that pulls all of those threads together. Some of that is policy and process. Some of it is architecture. Some of it is cryptographic evidence produced during inference. Different layers, different tools.

Two layers of AI governance

Borrow a frame from adjacent security categories.

In network security, you have a control plane (the SIEM, asking "what's on the network, who's logging in, what alerts fired?") and a data plane (the EDR, asking "what is each endpoint actually doing right now?"). Both exist. Neither replaces the other. A SOC running only a SIEM has no detection on the endpoint; a SOC running only EDR cannot correlate across the estate.

In identity, you have a control plane (the IAM, asking "who can access what?") and a data plane (the secrets manager, asking "what is the secret value, was it rotated, who fetched it last?"). Again, both exist; again, neither replaces the other.

AI governance has the same shape. ACT is a control plane in this taxonomy: the platform that asks "which AI is running, who built it, what risk class is it, who can use it, what policies apply?" Lucairn is a data plane: the gateway that asks "what data went into this specific request, what did the model produce, can we cryptographically prove it?" Different problems, both real.

The control-plane / data-plane vocabulary is not new in security architecture, but it is unusually clarifying for AI governance because the two layers have genuinely different shapes. The control plane is policy-shaped — discover, classify, attribute, observe at the meta level. The data plane is request-shaped — sanitize, route, sign, record at the per-call level. A product optimised for one is structurally different from a product optimised for the other.

Where ACT's coverage stops, by design

Read the public materials carefully and ACT's stated scope is consistent and well-defined.

The Govern capability tracks Article 12 and Annex IV obligations and produces compliance reports. ServiceNow's community blueprint for AI Control Tower (link below) describes ACT serving as the central platform for surfacing the obligations and aggregating evidence from each integrated system. The blueprint is explicit that the underlying logs are produced by each integrated system and ingested into ACT — ACT is the governance hub, not the inline producer of per-decision artefacts.

The Secure capability focuses on identity, access, and runtime agent observability. Public materials describe Veza-powered identity-aware governance and Traceloop-powered observability of agent behaviour — permissions, scope, behavioural patterns. There is no public claim that ACT inspects request content inline or redacts PII before a model call.

These are deliberate scope choices, not gaps. A control plane that also tried to be a data plane would lose its value as a single pane of glass. The platform's strength is precisely that it sits above the request flow and aggregates evidence from below; pushing it down into the request path would change the architectural shape of the product.

What this means for an EU AI Act buyer is that ACT answers the governance and inventory questions natively, and points at the data plane for the per-decision evidence. That's good architecture. It's also why most regulated buyers will need both layers.

Where Lucairn complements ACT

Three concrete patterns.

Pattern A: ACT discovers, Lucairn evidences. ACT inventories a regulated workload — say, a credit-decision LLM under Annex III — as high-risk per the EU AI Act. The compliance team marks it for Article 12 logging. ACT produces the policy mapping and the conformity workflow. Inline, Lucairn issues one signed receipt per LLM call inside that workload. Each receipt carries an Ed25519 signature over the canonical request bytes, an RFC 3161 TSA token, and a Sigstore Rekor inclusion proof. The receipts feed back into ACT as evidence rows tagged to the discovered AI asset. ACT becomes the read interface for the auditor; Lucairn produces the artefacts the read interface displays. See the implementation breakdown at audit-trail-for-ai.
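For concreteness, here is a sketch of what such a receipt envelope could look like, expressed as a Python dict. Every field name is illustrative, not Lucairn's published schema, and the asset tag is a hypothetical ACT inventory identifier.

```python
# Illustrative receipt envelope; all field names are hypothetical,
# not Lucairn's published schema.
receipt = {
    "payload": {
        "asset_id": "act-asset-credit-llm-01",  # tag of the ACT-discovered asset
        "request_hash": "sha256:…",             # hash of the canonical request bytes
        "response_hash": "sha256:…",            # hash of the model output
        "issued_at": "2026-08-14T10:22:31Z",
    },
    "signature": "ed25519:…",                   # over the canonical payload bytes
    "tsa_token": "base64:…",                    # RFC 3161 timestamp token
    "rekor": {
        "log_index": 123456,                    # entry in the public Rekor log
        "inclusion_proof": "…",
    },
}
```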

Pattern B: Identity stays out of the model. ACT enforces who can invoke the AI through Veza-powered identity-aware governance — the access-control question. Lucairn enforces what the AI actually sees through on-gateway pseudonymisation — the data-exposure question. A user calls a high-risk LLM workflow; ACT confirms the user has the right scope; Lucairn intercepts the request, redacts PII before the LLM sees the prompt, then re-links the placeholders only on response. Two enforcement points, both real, neither redundant. The result is that a model output traceable to a specific user under ACT's access logs never actually contained the user's raw identity at the moment the model produced the text. See private-ai-inference for the architecture.
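A minimal sketch of the stable-placeholder mechanic, with regex-based email detection standing in for real PII detection; the class and method names are hypothetical, and a production sanitizer would use trained detection rather than a regex.

```python
# Minimal stable-placeholder pseudonymiser (illustration only).
import re

class Pseudonymiser:
    def __init__(self):
        self._forward: dict[str, str] = {}   # raw value -> placeholder
        self._reverse: dict[str, str] = {}   # placeholder -> raw value

    def _placeholder(self, value: str, kind: str) -> str:
        if value not in self._forward:
            token = f"<{kind}_{len(self._forward) + 1}>"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]          # stable: same input, same token

    def redact(self, prompt: str) -> str:
        # Illustration: match email addresses; real detection is much broader.
        return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+",
                      lambda m: self._placeholder(m.group(0), "EMAIL"),
                      prompt)

    def relink(self, response: str) -> str:
        # Resolve placeholders only on the way back out.
        for token, value in self._reverse.items():
            response = response.replace(token, value)
        return response
```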

Pattern C: Cryptographic integrity. ACT's audit module records the decision chain at the platform layer. Lucairn's per-response Ed25519 signature plus the public timestamp plus the Sigstore Rekor anchor mean the chain is independently verifiable without ACT and Lucairn cooperating. An auditor with the public witness key can verify any individual receipt offline. A regulator can confirm a receipt's existence at a point in time via the public Rekor log without contacting either vendor. That kind of independent verifiability is a property of the receipt, not a property of the platform that displays it — which is the whole point of cryptographic evidence.
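A sketch of what that offline check could look like in Python with the `cryptography` package, assuming the witness key is published as raw hex-encoded Ed25519 bytes and the signature covers a canonically serialised payload — both assumptions, not Lucairn's documented format:

```python
# Offline verification of a receipt's Ed25519 signature.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def verify_receipt(receipt: dict, witness_key_hex: str) -> bool:
    key = Ed25519PublicKey.from_public_bytes(bytes.fromhex(witness_key_hex))
    # Canonical serialisation assumed: sorted keys, no whitespace.
    payload = json.dumps(receipt["payload"], sort_keys=True,
                         separators=(",", ":")).encode()
    try:
        key.verify(bytes.fromhex(receipt["signature"].removeprefix("ed25519:")),
                   payload)
        return True
    except InvalidSignature:
        return False
```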

None of these patterns require deep integration. The receipt format is open, the verification keys are public, and the wiring between the two layers is REST plus signed JSON.
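As an illustration of that wiring, pushing a receipt reference into ACT could be a single POST against ServiceNow's standard Table API. The Table API itself is real; the table name `x_ai_evidence` and its columns are hypothetical, since ACT's evidence schema is not public — treat this as the shape of the integration, not its spec.

```python
# Hedged sketch: record a receipt URL as an evidence row in ServiceNow.
# Table name and columns are hypothetical.
import requests

def push_evidence(instance: str, auth: tuple, asset_sys_id: str,
                  receipt_url: str) -> None:
    resp = requests.post(
        f"https://{instance}.service-now.com/api/now/table/x_ai_evidence",
        auth=auth,
        headers={"Content-Type": "application/json"},
        json={"ai_asset": asset_sys_id, "evidence_url": receipt_url},
        timeout=10,
    )
    resp.raise_for_status()
```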

A worked example: the EU regulated workload

Concrete scenario. A bank deploys ACT for governance and Lucairn as the inline gateway in front of an Anthropic LLM call inside a high-risk credit-decision workflow.

A loan officer drafts a recommendation and sends it through the LLM-assisted summary tool. ACT logs the workflow invocation against the discovered AI asset and the user's access scope. Inline, Lucairn intercepts the request. The applicant's name, date of birth, and address are detected by the sanitizer and replaced with stable placeholders. The redacted prompt goes to Anthropic. The response comes back. Lucairn re-links the placeholders, signs the response and the corresponding sanitisation manifest, RFC 3161-timestamps it, anchors the receipt in the public Sigstore Rekor log, and forwards both the response and the receipt URL to the bank's application.
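Stitched together, the inline step reads as a short pipeline. This sketch reuses the hypothetical `Pseudonymiser` from Pattern B; `llm_call`, `sign`, `timestamp`, and `anchor` are stubs standing in for the provider call and the receipt machinery sketched earlier.

```python
# End-to-end sketch of the gateway's inline step in this scenario.
def handle_request(prompt: str, llm_call, sign, timestamp, anchor):
    p = Pseudonymiser()
    redacted = p.redact(prompt)         # raw PII never reaches the model
    raw_response = llm_call(redacted)   # provider sees placeholders only
    response = p.relink(raw_response)   # placeholders resolved on the way out
    receipt_url = anchor(timestamp(sign(redacted, raw_response)))
    return response, receipt_url        # both forwarded to the application
```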

Six months later the applicant disputes the decision and the regulator opens an Article 12 investigation. The CISO opens ACT, sees the inventory entry, and locates the workflow's audit row. From that row ACT links to the Lucairn receipt for the specific request. The auditor downloads the receipt, fetches the public witness key from the well-known endpoint, verifies the Ed25519 signature offline, verifies the RFC 3161 timestamp via FreeTSA, and checks the Sigstore Rekor inclusion proof against the public Rekor log. Independent confirmation: the receipt existed at the timestamped instant, the input hash matches the regulator's submitted reproduction, and neither the bank nor Lucairn nor ServiceNow has the cryptographic ability to alter the record after the fact.
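The reproduction check at the end is a plain hash comparison — a sketch, assuming the receipt binds a SHA-256 over the canonical request bytes as in the illustrative envelope above:

```python
# Sketch of the regulator's reproduction check; field names follow the
# illustrative receipt envelope, not a documented schema.
import hashlib

def matches_reproduction(receipt: dict, reproduced_request: bytes) -> bool:
    expected = receipt["payload"]["request_hash"].removeprefix("sha256:")
    return hashlib.sha256(reproduced_request).hexdigest() == expected
```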

Neither vendor has stored the bank's customer identity data. Both have produced evidence. The auditor closes the investigation with a verdict the bank can defend.

What changes for Lucairn

Nothing structural. Two notes for the team and the sales motion.

When a buyer says "we already have ACT," that's an opening, not an objection. They have the control plane; they need the data plane. The right reply: confirm with the buyer that ACT neither redacts PII before the LLM sees the request nor produces per-response cryptographic receipts (per the public docs, it does neither), then map the gap to Lucairn, with the layer-mapping page as the procurement-grade answer.

Stop leading EU AI Act conversations with "compliance" alone. ServiceNow can outspend any startup on that exact phrase, and they should — they have a credible governance product for the buyer's CISO. Lead instead with the runtime guarantee. On the Developer and Pro hosted tiers, on-gateway pseudonymisation strips PII before your LLM sees the request, and every response carries a signed receipt that an auditor can verify independently against a public witness key. Those are claims a control plane structurally cannot make.

Enterprise self-hosted is a different shape. There, the entire architecture runs inside the customer perimeter, and no raw identity data leaves your environment at all — Sandbox A and the ID Bridge run on customer infrastructure; only pseudonymised payloads cross the boundary to the inference vendor. The Enterprise tier is also where the optional custom-trained PII shield lives, retrained on the customer's own domain corpus and priced per scope.

Closing — a healthy stack

A healthy enterprise AI stack will look like ACT (or a credible peer) at the governance layer, plus best-of-breed data-plane gateways like Lucairn in the request path, plus the customer's chosen LLM vendor. We've seen this pattern converge in every adjacent category — SIEM and EDR, IAM and secrets manager, CIEM and runtime protection. We expect it to converge here. Lucairn is built for the data plane.

If you're at an organization that runs ACT and is scoping the EU AI Act build-out, the layer-mapping page walks through the procurement-grade questions row by row. If you want to see the request-path artefacts in detail, audit-trail-for-ai and evidence-layer are the deepest reads. If you want to scope a pilot, the 15-minute compliance assessment is the fastest way in.

References
