Security is built into every layer of the Better Data LLM Gateway. We take a “Zero Trust” approach to AI interactions.

Data Privacy

  • GDPR by Design: We do not store PII (Personally Identifiable Information) in our conversation logs by default.
  • Data Residency: Self-hosted gateways keep all data within your own cloud environment (Vercel, AWS, GCP).
  • Session Isolation: Every user’s conversation and cart are cryptographically isolated and cannot be accessed by other sessions.
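One way to picture the isolation guarantee is key derivation: a minimal sketch, assuming per-session storage keys are derived as an HMAC of the session ID under a master key (the function and parameter names here are illustrative, not the gateway's actual API).

```typescript
import { createHmac } from "node:crypto";

// Derive a storage key as HMAC(masterKey, sessionId). A key derived for
// one session can never address another session's records, because the
// derivation binds every key to exactly one session ID.
function sessionStorageKey(masterKey: string, sessionId: string): string {
  return createHmac("sha256", masterKey).update(sessionId).digest("hex");
}
```

With this shape, there is no "list all sessions" key a compromised session could use: each session can only compute its own key.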

Infrastructure Security

🔐 API Key Management

The gateway requires a valid API key for every non-discovery request. Manage keys and other secrets through the auth config.
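The check itself is simple; here is a minimal sketch, assuming keys arrive in an `x-gateway-api-key` header (the header name and the discovery path below are illustrative, not the gateway's actual names).

```typescript
// Requests to public discovery endpoints pass through; everything else
// must present a key from the configured set.
type GatewayRequest = { path: string; headers: Record<string, string | undefined> };

const DISCOVERY_PATHS = new Set(["/capabilities"]); // hypothetical public discovery route

function isAuthorized(req: GatewayRequest, validKeys: Set<string>): boolean {
  if (DISCOVERY_PATHS.has(req.path)) return true;
  const key = req.headers["x-gateway-api-key"];
  return key !== undefined && validKeys.has(key);
}
```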

🛡️ Backend Shielding

The gateway acts as a protective shield for your Shopify or Square store. It prevents the LLM from making unfiltered or malicious queries directly to your sensitive APIs.
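The shielding model can be sketched as a capability allowlist: the LLM never calls your store API directly, only named capabilities the gateway chooses to expose. The capability name and stub handler below are illustrative.

```typescript
// The LLM's requests are dispatched through this table; anything not on
// the allowlist is rejected before it ever touches the store.
type Capability = (args: Record<string, unknown>) => unknown;

const capabilities: Record<string, Capability> = {
  // Stand-in for a filtered, read-only Shopify/Square product lookup.
  "product.lookup": (args) => ({ id: args.id, title: "Demo product" }),
};

function invoke(name: string, args: Record<string, unknown>): unknown {
  const cap = capabilities[name];
  if (!cap) throw new Error(`Unknown capability: ${name}`);
  return cap(args);
}
```

The store's sensitive admin surface simply has no name in this table, so no prompt can reach it.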

🚦 Rate Limiting

Throttle requests per session so that bot scrapers and repeated prompt-injection attempts cannot drain your commerce resources.
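The idea can be sketched as a fixed-window limiter keyed by session; the window size and limit below are example values, not gateway defaults.

```typescript
// Allow up to `limit` requests per session per window; further requests
// in the same window are refused until the window rolls over.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();
  constructor(private limit: number, private windowMs: number) {}

  allow(sessionId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(sessionId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(sessionId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count += 1;
    return true;
  }
}
```

Because the limiter is keyed per session, one abusive client is throttled without affecting everyone else.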

AI Safety

Hallucination Prevention

By using structured Zod schemas and capability contracts, we define exactly what the AI can and cannot ask for. The AI cannot “hallucinate” a 100% discount if your backend doesn’t support it.

Sandbox Execution

All backend handlers run in a constrained environment. They do not have access to your server’s filesystem or environment variables unless you explicitly provide them.
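"Unless you explicitly provide them" can be sketched as dependency injection: the handler factory receives the only secrets the handler may use, and nothing inside reads `process.env` or the filesystem. The context shape and names are illustrative.

```typescript
// The handler closes over a context you constructed; it has no ambient
// access to server environment variables or the filesystem.
type HandlerContext = { storefrontToken: string };

function makeProductHandler(ctx: HandlerContext) {
  return (productId: string) => {
    // ctx.storefrontToken was passed in explicitly by you.
    return { productId, tokenPrefix: ctx.storefrontToken.slice(0, 5) };
  };
}
```

The design choice here is that forgetting to provide a secret fails loudly at wiring time, rather than a handler silently reading more of the environment than you intended.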

Prompt Security

Avoid “System Prompt Leaking” by hosting your base prompts inside the gateway rather than sending them with every client request. The gateway injects the “Ground Truth” product data securely before sending it to the LLM.
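Server-side prompt assembly can be sketched as follows; the prompt text, message shape, and function name are illustrative, not the gateway's actual internals.

```typescript
// The system prompt lives on the gateway. The client sends only the user
// message; the gateway assembles the full context server-side.
const SYSTEM_PROMPT =
  "You are a shopping assistant. Only cite the product data you are given."; // never sent to the client

type ChatMessage = { role: "system" | "user"; content: string };

function buildMessages(userMessage: string, groundTruth: object): ChatMessage[] {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "system", content: `Product data: ${JSON.stringify(groundTruth)}` },
    { role: "user", content: userMessage }, // the only client-supplied part
  ];
}
```

Because the system prompt and ground-truth data never appear in a client payload, there is nothing for a "show me your instructions" request to leak from the client side.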

Best Practices

  1. Rotate Secrets: Regularly update your WEBHOOK_SECRET and GATEWAY_API_KEY.
  2. Minimal Scopes: Give your Storefront Access Tokens the minimum permissions required.
  3. Use Signal Tags: For the highest level of authenticity, require Signal Tag verification for high-value product updates.
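Rotating secrets without dropping live traffic usually means a grace period in which both keys work. A minimal sketch of that pattern for GATEWAY_API_KEY (the key-ring structure is illustrative; a production check would also use a constant-time comparison):

```typescript
// During rotation, both the new key and the outgoing one are accepted;
// the next rotation retires the old key.
type KeyRing = { current: string; previous?: string };

function keyIsValid(presented: string, ring: KeyRing): boolean {
  return presented === ring.current || presented === ring.previous;
}

function rotate(ring: KeyRing, newKey: string): KeyRing {
  return { current: newKey, previous: ring.current };
}
```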