
Private AI inference
for regulated work.

Run AI inference where your data stays in your zone, the inference vendor sees only de-identified payloads, and every response is cryptographically signed. The alternative to sending raw prompts to vendor dashboards.

TL;DR

For regulated work (finance, healthcare, government), the choice is rarely "use AI or don't." It's where the audit trail lives, who controls the keys, and what the vendor sees. Lucairn shifts all three to your side — same models, same SDKs, but PII never leaves Sandbox A and every response carries a verifiable signature.

01 Direct vendor vs Lucairn

Eight criteria,
side by side.

Where regulated-industry buyers actually compare the options. Not feature parity — control, retention, and verifiability.

PII visibility to the vendor
Does the inference vendor see customer identifiers in the prompt?
  • Direct vendor API: Yes. Identifiers in prompts reach the vendor in the clear; the vendor's data-handling policy is the only constraint.
  • Lucairn private inference: No. Identifiers stay in Sandbox A and the vendor sees only the de-identified payload. Architectural enforcement, not policy.

Audit log control
Who owns the per-decision logs your auditor will request?
  • Direct vendor API: Vendor. Vendor-side dashboards and exports; retention follows vendor policy, and access can be revoked unilaterally.
  • Lucairn private inference: You. Logs are generated in your zone, signed with your witness key, and retained in your storage. If the vendor disappears, the logs survive.

Cryptographic signature
Is there a per-response signature you can verify without the vendor?
  • Direct vendor API: No. Tampering with logs (or vendor-side replay) is undetectable; you have to trust the vendor's word.
  • Lucairn private inference: Ed25519 + Sigstore. Per-request signature with a public Rekor anchor; replay or modification is cryptographically detectable.

Data residency
Where do prompts, responses, and logs physically live?
  • Direct vendor API: Vendor regions. Bound to the vendor's available regions; EU regions are improving, but transfer mechanisms (SCCs, etc.) are still required.
  • Lucairn private inference: Your choice. On-prem, sovereign cloud, or EU-region object stores. The bridge is the only data path, and you configure it.

BYOK (bring your own key)
Can you swap inference vendors without re-instrumenting?
  • Direct vendor API: No. Vendor lock-in: swapping SDKs requires application changes per provider.
  • Lucairn private inference: Yes. BYOK to Anthropic, OpenAI, Mistral, Cohere, or your self-hosted open-weight models, with a customer-supplied API key and the same receipt format across providers.

Compliance mapping
Does the architecture map cleanly to GDPR / EU AI Act / DORA controls?
  • Direct vendor API: Customer responsibility. Mapping is on the controller; the vendor provides a processor agreement, and the rest is your DPO's problem.
  • Lucairn private inference: Built-in. Each receipt tags the framework controls it satisfies, producing auditor-ready evidence as a side effect.

Vendor-disappearance risk
If the vendor shuts down, deprecates, or changes terms, what happens to historical evidence?
  • Direct vendor API: High. Logs are gone, historical decisions become unverifiable, and the audit trail is hostage to vendor continuity.
  • Lucairn private inference: Zero. Public transparency log plus customer-side storage: verification doesn't depend on Lucairn or the inference vendor staying alive.

Operational burden
What does it cost to run, in engineering and ops time?
  • Direct vendor API: Low. Vendor SDK, no infrastructure. Closest to turnkey, if you ignore the audit and compliance gaps.
  • Lucairn private inference: Moderate. Self-hosted Platform requires k8s plus an object store; the Agent variant runs in-process with no infra. Choose by need.
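The "Cryptographic signature" row above rests on one property: a keyed signature over each response makes log tampering detectable offline, without asking the vendor. A minimal Python sketch of that property, using stdlib HMAC-SHA256 as a stand-in for the Ed25519 witness signature (the receipt fields are illustrative, not Lucairn's actual schema):

```python
import hashlib
import hmac
import json

# Hypothetical receipt -- illustrative fields, not the real schema
receipt = {"request_id": "r-001", "model": "example-model", "output": "approved"}
payload = json.dumps(receipt, sort_keys=True).encode()

# Stand-in for the witness key (production would use an Ed25519 private key)
witness_key = b"demo-witness-key"
tag = hmac.new(witness_key, payload, hashlib.sha256).hexdigest()

# Verification succeeds for the untouched payload...
untouched = hmac.new(witness_key, payload, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, untouched)

# ...and any later edit to the logged response is detectable
tampered = json.dumps({**receipt, "output": "denied"}, sort_keys=True).encode()
tampered_tag = hmac.new(witness_key, tampered, hashlib.sha256).hexdigest()
tamper_detected = not hmac.compare_digest(tag, tampered_tag)
print("tamper detected:", tamper_detected)
```

With a real signature scheme (Ed25519), verification needs only the public key, so auditors can check receipts without holding any secret; the HMAC version here is just the shortest runnable demonstration of tamper evidence.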
02 When to choose what

Honest framing,
no false binaries.

Direct vendor APIs are the right call for plenty of workflows. Private inference is the right call for the ones below. If your workflow doesn't appear in either column, you probably don't need Lucairn.

Stay with the direct vendor when

Your workflow has
no regulated identifiers.

  • Coding assistants where the codebase isn't itself sensitive
  • Internal documentation generation with no customer or employee data
  • Prototypes / R&D not yet hitting production
  • Marketing copy, public content, generic chatbots
  • No regulators waiting at the end of the year
Move to Lucairn when

Your workflow touches
PII in production.

  • Customer-facing AI handling KYC / AML / underwriting / claims
  • Clinical documentation, pre-auth, medical-device AI
  • Employee data: HR case routing, complaint triage, performance review
  • Government / public-sector decisions affecting citizens
  • Anything where an EU AI Act, GDPR, DORA, or NIS 2 auditor will eventually ask "show me the logs"
03 Frequently asked

Private inference questions,
answered.

What's the latency cost of routing through Lucairn vs going direct to the vendor?

Sanitizer ensemble: 10–30 ms per request depending on payload size. Bridge signing: ~1 ms p99. Witness anchoring: async (doesn't block the response). Total Lucairn-side overhead is typically under 5% of the inference roundtrip for cloud LLMs, and lower for self-hosted models on the same network.
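The figures above translate into a quick back-of-envelope check; the 800 ms roundtrip below is an assumed illustrative value, not a measurement:

```python
# Worst-case synchronous overhead from the figures above
sanitizer_ms = 30       # upper end of the 10-30 ms ensemble range
bridge_signing_ms = 1   # ~1 ms p99; witness anchoring is async and excluded
inference_ms = 800      # assumed cloud-LLM roundtrip, for illustration only

total_ms = inference_ms + sanitizer_ms + bridge_signing_ms
overhead = (sanitizer_ms + bridge_signing_ms) / total_ms
print(f"overhead: {overhead:.1%}")  # → overhead: 3.7%
```

Even at the top of the sanitizer range, the synchronous share stays under the 5% figure; faster self-hosted inference shrinks the denominator, which is why the relative overhead matters more there.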

Does private inference mean I can't use the latest Anthropic / OpenAI / Mistral models?

No. Lucairn is BYOK — you bring your own provider key. Anthropic Claude, OpenAI GPT, Mistral, Cohere, plus self-hosted open-weight models all work with the same protocol. You stay on the latest model release; Lucairn handles the audit receipt around it.

Can I use this without committing to self-hosting infrastructure?

Yes. Lucairn Agent is a library variant (npm / pip / go-mod) that runs in-process inside your application. No services to operate. Same protocol as the full Platform deployment. Most teams start with Agent and graduate to Platform when they need multi-tenancy or HSM-backed keys.
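The in-process pattern Agent uses can be illustrated in plain Python. Every name below (`deidentify`, `wrapped_complete`, `fake_vendor`) is a hypothetical stand-in for the sketch, not Lucairn Agent's published API:

```python
# Illustrative in-process wrapper -- NOT the real Lucairn Agent API.
def deidentify(prompt: str) -> str:
    # Stand-in sanitizer; a real deployment runs the sanitizer ensemble here
    return prompt.replace("alice@example.com", "[EMAIL]")

def wrapped_complete(vendor_complete, prompt: str) -> str:
    clean = deidentify(prompt)          # identifiers removed before the vendor call
    response = vendor_complete(clean)   # unchanged vendor SDK call underneath
    # a real Agent would also emit the signed receipt at this point
    return response

# Fake vendor client so the sketch runs standalone
fake_vendor = lambda p: f"vendor saw: {p}"
result = wrapped_complete(fake_vendor, "email alice@example.com about the claim")
print(result)  # → vendor saw: email [EMAIL] about the claim
```

The point of the pattern: the application keeps calling its existing vendor SDK, and the library interposes sanitization and receipt generation in the same process, with no services to deploy.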

How does this compare to managed AI gateways (Kong AI Gateway, Cloudflare AI Gateway, etc.)?

Managed AI gateways focus on routing, caching, and observability. Lucairn focuses on compliance evidence and PII isolation. They're complementary, not competitive — you can run both. Lucairn's value is the per-decision signed receipt that satisfies regulator requirements; the gateway's value is rate limiting and routing.

Is private AI inference always more secure than direct vendor APIs?

Not automatically. Private inference moves the security boundary under your control — that's a benefit if your team operates secure infrastructure, and a risk if you don't. Lucairn ships with sensible defaults, but a misconfigured deployment is still misconfigured. Direct vendor APIs benefit from the vendor's security team. Choose by who you trust to run the substrate.

04 Get started

From assessment
to production.

Run the self-serve assessment against your AI workflow and see if private inference is the right call. 15 minutes. Output goes to your DPO.