OpenAI SDK Setup Guide

Route the OpenAI SDK through Lucairn in 2 minutes

1. Get your API keys

You need two keys: your DSA API key (from Lucairn) and your OpenAI API key (from platform.openai.com). If you don't have a DSA key yet, sign up at /account/signup.

2. Set base_url in your OpenAI client

Point the OpenAI SDK (or any OpenAI-compatible tool — Cursor, Continue, custom scripts) at the Lucairn gateway. Pass your DSA key as the api_key, and your OpenAI key in the X-Upstream-Key header. Snippets below for Python, TypeScript, and cURL.

3. Send a request

Use any model the OpenAI Chat Completions API supports — gpt-4o, gpt-4o-mini, o1-mini. Lucairn intercepts, sanitises, isolates, and signs every request before forwarding to OpenAI.

4. Verify with the cURL snippet

Run the cURL command below to confirm the connection. A successful response includes a "metadata.dsa_compliance" block with a Lucairn certificate URL — your cryptographic proof of what was sanitised.

Python
from openai import OpenAI

client = OpenAI(
    api_key="lcr_live_...",                         # Lucairn API key
    base_url="https://gateway.lucairn.eu/v1",       # Lucairn Gateway
    default_headers={"X-Upstream-Key": "sk-..."},   # OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Classify this ticket: ..."}],
)
TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "lcr_live_...",                         // Lucairn API key
  baseURL: "https://gateway.lucairn.eu/v1",       // Lucairn Gateway
  defaultHeaders: { "X-Upstream-Key": "sk-..." }, // OpenAI key
});

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Classify this ticket: ..." }],
});
cURL (verify)
curl https://gateway.lucairn.eu/v1/chat/completions \
  -H "Authorization: Bearer lcr_live_..." \
  -H "X-Upstream-Key: sk-..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
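The verification step can also be scripted. A minimal sketch for pulling the certificate URL out of a response body; the guide documents the "metadata.dsa_compliance" block, but the inner "certificate_url" key name is an assumption here, so check it against your actual payload:

```python
import json

def extract_certificate_url(response_json: str):
    """Return the Lucairn certificate URL from a chat-completion response.

    "metadata.dsa_compliance" is documented above; "certificate_url" is an
    assumed inner key name -- verify against a real response.
    """
    body = json.loads(response_json)
    compliance = body.get("metadata", {}).get("dsa_compliance")
    if compliance is None:
        raise ValueError("no dsa_compliance block: request may not have gone through Lucairn")
    return compliance.get("certificate_url")
```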

How it works

  • Every message is scanned for PII (names, emails, addresses, medical terms) before it reaches OpenAI.

  • Detected PII is replaced with safe placeholders. OpenAI never sees real personal data.

  • Developer-tier responses contain placeholders so your code never receives raw PII. Pro and Enterprise tiers can enable automatic re-linking back to the original values.

  • A Lucairn Certificate is generated for each request: cryptographic proof of what was sanitised.

  • Streaming (stream:true) is not yet supported on /v1/chat/completions; set stream=false for now. Without per-chunk DLP scanning, unsanitised tokens could reach the client before post-hoc redaction runs.

Capability matrix

Before you paste the snippet above into a production app, check what the OpenAI-compatible proxy actually covers today. Under-promise, over-deliver — we list the real gaps.

  • Non-streaming chat completions (stream:false)

    Full PII sanitisation + signed Lucairn Certificate per request.

  • System prompts

    System message is sanitised end-to-end alongside the user turns.

  • Multi-turn conversations

    Each turn is sanitised; one certificate per request.

  • Streaming responses (stream:true / SSE)

    ✕ Roadmap

    Rejected at the gateway with HTTP 400 streaming_not_supported. Per-chunk DLP scanning is on the roadmap; until then, the gateway rejects streaming outright rather than let unsanitised tokens reach the client.

  • Tool-calls / function calling (tools, tool_choice)

    ✕ Roadmap

    Tool definitions and tool-call arguments are not sanitised today. Sending tool inputs through this endpoint is unsafe — use the DSA Proxy API for explicit field routing or wait for the roadmap update.

  • Prompt caching

    ✕ Roadmap

    Each request is processed independently so the per-call evidence stays valid. No cache reuse across requests.

  • Embeddings / image / audio endpoints

    ✕ Roadmap

    Only /v1/chat/completions is proxied today. Other OpenAI endpoints have no Lucairn pipeline coverage — do not send PII through them.
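The matrix above can be enforced client-side so unsupported requests fail fast, before any data leaves your process. A minimal sketch (the gateway's own 400 streaming_not_supported rejection still applies regardless):

```python
# Parameters the matrix marks as unsafe through this endpoint today.
UNSUPPORTED_PARAMS = {"tools", "tool_choice"}

def preflight(params: dict) -> dict:
    """Reject request kwargs the Lucairn proxy does not yet cover."""
    if params.get("stream"):
        raise ValueError(
            "stream=True is rejected by the gateway (streaming_not_supported)"
        )
    blocked = UNSUPPORTED_PARAMS & params.keys()
    if blocked:
        raise ValueError(f"not sanitised through this endpoint: {sorted(blocked)}")
    return params
```

Call preflight(kwargs) on the dict you pass to client.chat.completions.create(...); it returns the dict unchanged when the request is within today's coverage.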

Streaming and tool-call DLP are tracked on the roadmap; subscribe to the changelog for ship dates.

Want to see this in action?

Book a working session — we'll walk through your use case together.