Before you launch your LLM Gateway to real users, follow this checklist to ensure stability, security, and performance.

1. Monitoring & Logging

Never run in the dark. The gateway supports structured logging that integrates with your existing tools.
  • Set the Right Log Level: In production, set LOG_LEVEL=info; reserve verbose/debug logging for troubleshooting sessions, since it adds cost and noise at scale.
  • Trace IDs: Use the X-Request-Id header to trace a single user question through the gateway and into your backends.
  • Analytics: Connect the Built-in Analytics to PostHog or Google Analytics to track tool usage patterns.
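The trace-ID guidance above can be sketched as a small pair of helpers. This is a minimal illustration, assuming a Node/TypeScript gateway; the function names (`ensureRequestId`, `logEvent`) are hypothetical, not part of the gateway's API.

```typescript
import { randomUUID } from "crypto";

// Reuse the caller's X-Request-Id if present, otherwise mint one,
// so every log line for a request shares a single trace ID.
export function ensureRequestId(header: string | undefined): string {
  return header && header.trim() !== "" ? header.trim() : randomUUID();
}

// Emit one structured (JSON) log line per event; most log pipelines
// can index these fields directly.
export function logEvent(
  level: "info" | "warn" | "error",
  requestId: string,
  message: string,
  extra: Record<string, unknown> = {}
): string {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    requestId,
    message,
    ...extra,
  });
  console.log(line);
  return line;
}
```

Attaching the same `requestId` to every log line lets you filter a single conversation's journey across the gateway and its backends.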

2. Infrastructure

  • Redis for Sessions: Do not use the default in-memory storage for production. Use a managed Redis like Upstash or AWS ElastiCache.
  • Rate Limiting: Protect your backends from “aggressive” LLMs by enabling the per-client rate limiter.
  • Concurrency: Set MAX_CONCURRENT_CALLS to prevent a single user from overwhelming your system.
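To make the rate-limiting bullet concrete, here is a minimal per-client token-bucket sketch. It is an illustration only, not the gateway's built-in limiter; the class name and parameters are assumptions.

```typescript
// Minimal per-client token bucket: each client may burst up to
// `capacity` requests, refilled at `refillPerSec` tokens per second.
export class ClientRateLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(private capacity: number, private refillPerSec: number) {}

  // Returns true if the client's request is allowed right now.
  allow(clientId: string, now: number = Date.now()): boolean {
    const b = this.buckets.get(clientId) ?? { tokens: this.capacity, last: now };
    // Refill based on elapsed time, capped at capacity.
    b.tokens = Math.min(
      this.capacity,
      b.tokens + ((now - b.last) / 1000) * this.refillPerSec
    );
    b.last = now;
    const ok = b.tokens >= 1;
    if (ok) b.tokens -= 1;
    this.buckets.set(clientId, b);
    return ok;
  }
}
```

In production you would back the bucket state with Redis (as recommended above) rather than an in-memory `Map`, so limits survive restarts and apply across replicas.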

3. Security

  • API Key Rotation: Document a process for rotating your gateway secrets.
  • Backend Secrets: Ensure your Shopify/Square secrets are never logged or exposed in the frontend.
  • Audit Logs: The gateway can be configured to log every tool call and its arguments for compliance and debugging.
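The "never log backend secrets" and audit-log points combine naturally: redact sensitive fields before an audit record is persisted. The sketch below is an assumption-laden illustration (the key pattern and helper names are hypothetical), not the gateway's built-in redaction.

```typescript
// Keys whose values should never reach the audit log.
const SENSITIVE_KEYS = /(secret|token|password|api[_-]?key|authorization)/i;

// Recursively replace sensitive values before persisting a record.
export function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.test(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

// One audit record per tool call: what was called, with which
// (redacted) arguments, and when.
export function auditEntry(tool: string, args: Record<string, unknown>): string {
  return JSON.stringify({ ts: new Date().toISOString(), tool, args: redact(args) });
}
```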

4. LLM Hallucination Prevention

A common risk in conversational commerce is the AI making up features or prices.
  • Strict Schemas: Use the most restrictive Zod schemas possible.
  • Tool Descriptions: Provide very clear descriptions to the model. e.g., instead of “Search”, use “Search our live product catalog. Only return items that are currently in stock.”
  • Confirmation Steps: For destructive or high-value actions (like place_order), always require a tool-based confirmation or a link to a verified web page.
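One way to implement the confirmation step for a high-value action like place_order is a two-phase tool flow: the first call only prepares the order and returns a token, and nothing is purchased until a second call presents that token. This is a sketch under assumed names (`prepareOrder`, `confirmOrder`), not the gateway's actual API.

```typescript
import { randomUUID } from "crypto";

// Pending orders awaiting explicit confirmation, keyed by token.
const pending = new Map<string, { sku: string; quantity: number }>();

// Phase 1: the model calls this tool; nothing is purchased yet.
export function prepareOrder(
  sku: string,
  quantity: number
): { confirmationToken: string; summary: string } {
  const confirmationToken = randomUUID();
  pending.set(confirmationToken, { sku, quantity });
  return { confirmationToken, summary: `Confirm purchase of ${quantity} x ${sku}?` };
}

// Phase 2: only a valid, unused token actually places the order.
export function confirmOrder(
  token: string
): { ok: boolean; order?: { sku: string; quantity: number } } {
  const order = pending.get(token);
  if (!order) return { ok: false };
  pending.delete(token); // single use
  return { ok: true, order };
}
```

Because the token is single-use and opaque, a hallucinated or replayed confirmation cannot trigger a purchase: the model must echo back a token that the gateway itself issued for that exact pending order.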

5. Scaling

For high-traffic environments, consider the Better Data Hosted Cloud, which handles these production concerns automatically and provides 99.9% uptime with global edge deployment.