See your data flow through Lucairn.
Live, with your own LLM key.
Upload a file. Pick a provider. Bring your own key. Watch the pipeline pseudonymize, route, witness, and certify your request — step by step. No screenshots. No screen-share. Your data stays in your browser.
Six steps.
Real pipeline. Real certificate.
The same six steps run inside the gated sandbox app. Each step shows status, latency, and the raw response — with a JSON drawer if you want the wire-level detail.
Upload & parse
Browser-side parsing of CSV, JSON, Excel, PDF, plain-text, or Markdown files up to 4 MB. Only the extracted text reaches our servers.
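In sketch form, the browser-side gate looks roughly like this (a TypeScript illustration, not the demo's real code; the Excel and PDF branches are elided):

```ts
// Illustrative sketch only. The File object stays in the tab; only the
// string this returns is ever sent onward to the gateway.
const MAX_BYTES = 4 * 1024 * 1024; // the demo's 4 MB cap

async function extractText(file: File): Promise<string> {
  if (file.size > MAX_BYTES) throw new Error("File exceeds the 4 MB limit");
  if (/\.(csv|json|txt|md)$/i.test(file.name)) {
    return file.text(); // text-like formats: read in place, never uploaded raw
  }
  // Excel and PDF would route through an in-browser parser here (elided).
  throw new Error(`No parser wired up for ${file.name} in this sketch`);
}
```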
Sanitize (L1+L2)
Presidio NER, heuristic rules, and known-entity matching. Identifiers are replaced with stable placeholders before anything else runs.
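Stable substitution is the key property: the same value always gets the same placeholder. A minimal sketch, assuming detection (Presidio, rules, known entities) has already produced non-overlapping spans:

```ts
// Sketch only: real span detection is done by the sanitizer layers.
type Span = { start: number; end: number; label: string };

function pseudonymize(text: string, spans: Span[]) {
  const map = new Map<string, string>();    // original value -> placeholder
  const counters: Record<string, number> = {};
  let out = "";
  let cursor = 0;
  for (const s of [...spans].sort((a, b) => a.start - b.start)) {
    const value = text.slice(s.start, s.end);
    if (!map.has(value)) {
      counters[s.label] = (counters[s.label] ?? 0) + 1;
      map.set(value, `<${s.label}_${counters[s.label]}>`); // e.g. <PERSON_1>
    }
    out += text.slice(cursor, s.start) + map.get(value)!; // stable re-use
    cursor = s.end;
  }
  return { out: out + text.slice(cursor), map };
}
```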
ID Bridge
Placeholders are mapped to opaque tokens by a separate service. The bridge holds the mapping; downstream services never see the original values.
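Conceptually the bridge is two lookup tables that only it holds; everything downstream sees opaque tokens. A sketch with illustrative names, not the bridge's real API:

```ts
// Sketch of the bridge idea. Tokens are random, so nothing about the
// original value is recoverable from a token alone.
import { randomBytes } from "node:crypto";

const forward = new Map<string, string>(); // placeholder -> token
const reverse = new Map<string, string>(); // token -> placeholder

function bridgeTokenize(placeholder: string): string {
  let token = forward.get(placeholder);
  if (!token) {
    token = "tok_" + randomBytes(10).toString("hex"); // opaque, unguessable
    forward.set(placeholder, token);
    reverse.set(token, placeholder);
  }
  return token;
}

function bridgeRelink(token: string): string {
  return reverse.get(token) ?? token; // unknown text passes through untouched
}
```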
Sandbox A → AI → Sandbox B
The tokenized prompt enters Sandbox A and reaches the LLM via your BYOK key; the response is relinked back through Sandbox B. In streaming mode, the ≤24-byte cross-chunk relink buffer is visualized live.
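That buffer exists because a token can be split across two stream chunks. A sketch of the idea, assuming the 24-byte tok_<hex> format from the bridge sketch above:

```ts
// Sketch: hold back any chunk tail that could still grow into a token,
// so a token split across chunks is never emitted raw. The held-back
// tail is the ≤24-byte buffer the demo visualizes.
const TOKEN = /tok_[0-9a-f]{20}/g;         // full token: exactly 24 bytes
const PARTIAL = /t(o(k(_[0-9a-f]*)?)?)?$/; // a suffix that might become one

function makeRelinker(relink: (token: string) => string) {
  let carry = ""; // cross-chunk buffer, always shorter than a full token
  const step = (chunk: string): string => {
    const text = carry + chunk;
    const m = text.match(PARTIAL);
    carry = m && m[0].length < 24 ? m[0] : ""; // full tokens are never held
    const ready = text.slice(0, text.length - carry.length);
    return ready.replace(TOKEN, relink); // swap tokens back via the bridge
  };
  const flush = () => { const tail = carry; carry = ""; return tail; };
  return { step, flush }; // call flush() once the stream ends
}
```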
Witness assemble
Claim chain, sanitizer manifest, TSA timestamp, and Rekor inclusion proof, assembled and signed by the Witness service.
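Roughly, the assembled bundle has this shape (field names here are illustrative, not the signed schema):

```ts
// Sketch of the witness bundle's shape; the real schema may differ.
interface WitnessBundle {
  claimChain: { step: string; digest: string; prev: string | null }[]; // hash-linked step claims
  sanitizerManifest: { ruleSetVersion: string; entityCounts: Record<string, number> };
  tsaTimestamp: string;   // RFC 3161 timestamp token, base64-encoded
  rekorInclusionProof: {  // transparency-log inclusion proof
    logIndex: number;
    rootHash: string;
    hashes: string[];     // Merkle audit path
  };
  signature: string;      // Witness signature over everything above
}
```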
Certificate
A signed, timestamped, anchored certificate that anyone can verify on /verify. The proof your AI request was actually pseudonymized end-to-end.
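The signature itself can be re-checked offline with WebCrypto; a sketch assuming an ECDSA P-256 witness key and a canonical payload (the /verify page additionally checks the TSA token and the Rekor proof):

```ts
// Sketch only: key algorithm and payload canonicalization are assumptions.
async function verifyCertificate(
  payload: Uint8Array,   // the canonical bytes the Witness signed
  signature: Uint8Array,
  publicKey: CryptoKey,  // the published Witness verification key
): Promise<boolean> {
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    publicKey,
    signature,
    payload,
  );
}
```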
What this isn't.
We say so up front.
Not a screenshot tour.
Your file, your key, real pipeline. The same code paths that serve paying customers serve this demo.
Not training data.
Nothing you upload trains anything. Your file never leaves your browser; only the extracted text is sent to the gateway, where it is processed in memory, not persisted as training material.
Not free-tier.
Gated by invitation while we prepare for general availability.
Your LLM key stays in your browser.
Your LLM API key is held in your browser tab only — sent directly to the gateway with each inference request, never logged or persisted on our servers.
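In sketch form (the header and function names are made up for illustration):

```ts
// Sketch: the key lives in a module variable, so it dies with the tab.
// It is attached per request and never written to storage or logs here.
let apiKey: string | null = null; // this tab only

export function setKey(k: string) { apiKey = k; }

export async function runInference(gatewayUrl: string, body: unknown) {
  if (!apiKey) throw new Error("No key set in this tab");
  return fetch(gatewayUrl, {
    method: "POST",
    headers: { "content-type": "application/json", "x-byok-key": apiKey }, // forwarded, not stored
    body: JSON.stringify(body),
  });
}
```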
Rate-limited demo.
Capped at 30 inference runs per session per hour and 200 per hour globally, which keeps the demo healthy for everyone.
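Those limits amount to two fixed-window counters; a sketch under that reading (the production limiter may work differently):

```ts
// Sketch: one hourly counter per session plus one shared hourly counter.
const WINDOW_MS = 60 * 60 * 1000; // one hour
const PER_SESSION = 30;
const GLOBAL = 200;

type Window = { count: number; start: number };
const sessions = new Map<string, Window>();
let globalWin: Window = { count: 0, start: Date.now() };

function allowRun(sessionId: string, now = Date.now()): boolean {
  if (now - globalWin.start >= WINDOW_MS) globalWin = { count: 0, start: now };
  let s = sessions.get(sessionId);
  if (!s || now - s.start >= WINDOW_MS) {
    s = { count: 0, start: now };
    sessions.set(sessionId, s);
  }
  if (s.count >= PER_SESSION || globalWin.count >= GLOBAL) return false;
  s.count++;
  globalWin.count++;
  return true;
}
```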
By invitation.
Marc reviews every request.
We're rolling out the sandbox to a small set of evaluation prospects. If your team has an LLM project under regulatory pressure — clinical, financial, legal, agentic — fill out the request form. Marc Schülke (founder) reviews each request personally and emails you within 1–3 business days.