Your AI audit trail,
and what it actually proves.
Four common approaches to logging AI inference: roll-your-own application logs, vendor dashboards, SIEM-only ingestion, or cryptographically signed receipts. Each has a moment when an auditor or regulator pushes on the integrity of the chain. Three of the four don't survive that moment.
An AI audit trail is only useful if it survives review. Roll-your-own logs are mutable. Vendor dashboards can disappear with the vendor. SIEM-only ingestion preserves only what was sent. Cryptographically signed receipts — what Lucairn produces — are tamper-evident, vendor-independent, and verifiable months later. This page compares the four on eight criteria; the receipt approach wins on six, ties on one, loses on one (operational simplicity).
How teams currently log
AI inference today.
Most regulated teams running AI in production have some form of inference logging. Whether it survives a regulator's review depends on which of the four approaches below it is.
Roll-your-own application logs
Log lines written from your application code, sent to stdout, S3, or a managed log service. Mutable by anyone with write access. No cryptographic anchor.
Fine for engineering observability. Fails an audit that pushes on integrity.
Vendor dashboards
The LLM provider's own logging UI. Vendor-controlled retention, vendor-controlled access. Subject to the vendor's terms changes. Disappears if the vendor disappears.
Fine while the vendor is your friend. Fails any audit that requires deployer-controlled evidence.
SIEM-only ingestion
Application sends inference events to your SIEM (Splunk, Datadog, Sentinel). Same SIEM you use for everything else. Integrity is bounded by the SIEM's append-only guarantees — operational controls, not cryptographic ones.
Better than the previous two. Fails when an auditor asks about per-decision tamper evidence.
Cryptographically signed receipts
Each inference produces an Ed25519-signed, RFC 3161-timestamped receipt anchored in a public Sigstore Rekor log. Tamper-evident, vendor-independent, verifiable without our cooperation.
Right for regulated work where audit integrity is load-bearing.
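What makes a receipt verifiable is that every verifier hashes exactly the same bytes. A minimal sketch of that canonicalisation step — the field names below are illustrative assumptions, not Lucairn's actual receipt schema, and the Ed25519 signature and RFC 3161 timestamp are only indicated in comments:

```python
import hashlib
import json

# Illustrative receipt shape; field names are assumptions, not
# Lucairn's actual schema.
receipt = {
    "request_digest": hashlib.sha256(b"<canonical request bytes>").hexdigest(),
    "response_digest": hashlib.sha256(b"<canonical response bytes>").hexdigest(),
    "model": "example-model",
    "timestamp": "2025-01-01T00:00:00Z",
}

# Canonical JSON (sorted keys, no whitespace) so every verifier
# hashes identical bytes regardless of who serialises the receipt.
canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
payload_digest = hashlib.sha256(canonical).hexdigest()

# In the real flow, this digest is what the witness key would
# Ed25519-sign, the RFC 3161 authority would timestamp, and the
# Rekor transparency log would anchor.
print(payload_digest)
```

The point of the canonical form is that an auditor re-deriving the digest from the raw receipt months later gets a byte-identical input to the signature check.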
Eight criteria,
four approaches.
The criteria below are the ones an EU AI Act / DORA / NIS 2 / GDPR auditor will actually ask about. The Lucairn column is honest about where the approach is heavier than alternatives.
Three of the four
are the right answer somewhere.
If we said "always Lucairn," we'd be selling. The honest framing: most teams should pick the lightest option that survives their audit. Here's where each approach is right.
Roll-your-own logs or vendor dashboards are right when:
- Your AI use is internal tooling, not customer-facing decisioning
- You have no regulator with audit authority over the logs
- You can accept vendor lock-in for the audit chain
- Logging integrity is engineering, not compliance
SIEM-only ingestion is right when:
- You already have mature SIEM operations and policies
- Your auditor accepts SIEM appendability as sufficient integrity
- Your AI use is internal and the audit window is short
- Cryptographic integrity is not a procurement requirement
Cryptographically signed receipts are right when:
- AI decisions affect customers under DORA, NIS 2, or the EU AI Act
- An external auditor will verify the audit chain
- Vendor disappearance is a real risk you must underwrite
- Cryptographic claims are part of your procurement security pack
- Public-sector or financial-services procurement is in your future
Audit trail for AI — questions,
answered.
Don't most cloud providers offer audit logging?
Yes — CloudTrail, Cloud Audit Logs, Azure Monitor. They cover infrastructure events: who provisioned what, when. They do not capture per-AI-decision content, sanitiser manifests, or cryptographic integrity over the inference path. Provider audit logs are necessary but not sufficient for AI audit trails.
Can I just append to a hash chain in my own application?
Yes, technically. The challenge isn't the hash chain — it's the witness key. Without a third-party-anchored signature, your hash chain is self-signed: an auditor has to trust your operations team didn't roll the chain. Lucairn's value is that the witness signature is anchored to a public log Lucairn cannot rewrite.
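A self-built chain really is a few lines of code — which is exactly why it proves so little on its own. A sketch, assuming a simple SHA-256 link structure, showing both the tamper evidence a chain gives you and the re-roll attack it cannot prevent:

```python
import hashlib
import json

def append_entry(chain, record):
    """Append a record to a hash chain: each entry commits to the
    previous entry's hash, so editing any record in place breaks
    every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(record, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "record": record, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every link; True only if no entry was altered."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["record"], sort_keys=True, separators=(",", ":"))
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"decision": "approve", "model": "m-1"})
append_entry(chain, {"decision": "deny", "model": "m-1"})
assert verify_chain(chain)

# Editing a record in place breaks verification...
chain[0]["record"]["decision"] = "deny"
assert not verify_chain(chain)

# ...but anyone with write access can simply rebuild the chain from
# the edited records. It verifies perfectly: nothing anchors history.
rerolled = []
for entry in chain:
    append_entry(rerolled, entry["record"])
assert verify_chain(rerolled)
```

The re-roll at the end is the auditor's objection in code form: a chain signed only by its operator proves internal consistency, not that the history is the original one.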
Why does Sigstore Rekor matter specifically?
Rekor is an append-only Merkle log operated by an independent open-source project (the Sigstore Foundation). Once an entry is published, it cannot be retroactively altered without breaking the Merkle proof. Verification doesn't require contacting Sigstore at all — you can pin tree heads and verify offline. That's the property regulators care about.
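The "verify offline against a pinned tree head" property comes from Merkle inclusion proofs. A minimal sketch using the RFC 6962 hashing conventions that transparency logs like Rekor follow (the four-leaf tree and `receipt-N` entries are made up for illustration):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 domain separation: 0x00 prefix for leaves
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix for interior nodes
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(entry: bytes, proof, root: bytes) -> bool:
    """proof is a list of (sibling_hash, side) pairs from leaf to
    root, side 'L' meaning the sibling sits on the left. Verification
    needs only the pinned tree head and the audit path — no call to
    the log operator."""
    h = leaf_hash(entry)
    for sibling, side in proof:
        h = node_hash(sibling, h) if side == "L" else node_hash(h, sibling)
    return h == root

# Build a four-leaf tree by hand to demonstrate offline verification.
leaves = [leaf_hash(f"receipt-{i}".encode()) for i in range(4)]
n01 = node_hash(leaves[0], leaves[1])
n23 = node_hash(leaves[2], leaves[3])
root = node_hash(n01, n23)

# Audit path for leaf 2: sibling leaf 3 (on the right), then n01 (left).
proof = [(leaves[3], "R"), (n01, "L")]
assert verify_inclusion(b"receipt-2", proof, root)
assert not verify_inclusion(b"receipt-tampered", proof, root)
```

Altering any published entry changes its leaf hash, which changes the recomputed root — so it can't match a tree head the verifier pinned earlier. That is the "cannot be retroactively altered" claim made concrete.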
What's the operational overhead of Lucairn vs SIEM-only?
Lucairn is moderately heavier: you operate (or BYO) a witness key, route inference through the bridge, and accept a few hundred milliseconds of additional latency on the signing path. The payoff is per-decision integrity that survives vendor disappearance and 6-year audit horizons. For non-regulated workloads, this overhead isn't worth it; for regulated work, it's the lowest-overhead option that actually meets the audit bar.
Can I combine SIEM and Lucairn?
Yes — this is the most common production deployment. Lucairn produces the receipts; the receipt chain exports to your SIEM via JSONL feed. Your SIEM gets a tamper-evident input; your auditors verify against the public log when they need to. Lucairn does not replace the SIEM — it upgrades the trustworthiness of one specific input feed.
Does the audit trail still work when the response is streamed?
Yes. Streaming responses (stream:true / SSE) are supported on the OpenAI- and Anthropic-shape endpoints. Pseudonyms are reassembled across SSE chunk boundaries via a bounded-buffer streamer, and the same per-decision receipt is signed once the stream completes — the audit trail is identical to a non-streaming call.
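The bounded-buffer idea can be sketched in a few lines. Everything here is an illustrative assumption, not Lucairn's implementation: the `«PII_n»` token format, the buffer bound, and the rule that the token delimiters never appear in ordinary text. The sketch holds back only a small tail of each chunk in case a pseudonym token straddles a chunk boundary:

```python
import re

TOKEN = re.compile(r"«PII_\d+»")
MAX_TOKEN = 16  # assumed longest pseudonym token; bounds the held-back tail

def stream_rewrite(chunks, mapping):
    """Replace pseudonym tokens with originals as SSE chunks arrive,
    holding back a bounded tail in case a token is split across a
    chunk boundary. Illustrative sketch only."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        out = TOKEN.sub(lambda m: mapping[m.group()], buf)
        # A trailing unmatched "«" within MAX_TOKEN chars of the end
        # may be a token that completes in the next chunk: hold it back.
        cut = out.rfind("«", max(0, len(out) - MAX_TOKEN))
        if cut == -1:
            cut = len(out)
        yield out[:cut]
        buf = out[cut:]
    # Flush whatever remains once the stream ends.
    yield TOKEN.sub(lambda m: mapping[m.group()], buf)

mapping = {"«PII_1»": "alice@example.com"}
chunks = ["Contact «PI", "I_1» for det", "ails."]
assert "".join(stream_rewrite(chunks, mapping)) == \
    "Contact alice@example.com for details."
```

The bound matters: the consumer's latency cost is at most `MAX_TOKEN` held-back characters per chunk, regardless of stream length, while the reassembled output is identical to the non-streaming rewrite.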
From assessment
to production.
Run the self-serve assessment against your AI workflow and see if signed-receipt audit trails are the right call. 15 minutes. Output goes to your DPO.