The LLM Gateway is designed to be resilient, ensuring that even if a backend or LLM provider fails, the conversation can continue gracefully.

Standard Error Codes

The gateway standardizes errors into a set of machine-readable codes:
| Code                  | Status | Description                                              |
|-----------------------|--------|----------------------------------------------------------|
| `auth_required`       | 401    | API key or session is missing or invalid.                |
| `rate_limit_exceeded` | 429    | Too many requests for this account or session.           |
| `backend_timeout`     | 504    | Your commerce platform didn't respond in time.           |
| `product_not_found`   | 404    | The requested ID does not exist in the catalog.          |
| `invalid_arguments`   | 400    | The LLM passed arguments that don't match the schema.    |
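Because the codes are machine-readable, a client can branch on them to pick a recovery strategy. The helper below is an illustrative sketch, not part of the gateway API; the action names are hypothetical.

```javascript
// Sketch: map the gateway's standard error codes to a recovery action.
// recoveryActionFor and the action strings are illustrative only.
function recoveryActionFor(code) {
  switch (code) {
    case "auth_required":       return "refresh-credentials"; // re-authenticate, then retry
    case "rate_limit_exceeded": return "retry-later";         // back off before retrying
    case "backend_timeout":     return "retry-later";         // transient; retry with backoff
    case "product_not_found":   return "ask-user";            // let the LLM suggest alternatives
    case "invalid_arguments":   return "fix-arguments";       // re-prompt the LLM with the schema
    default:                    return "fail";                // unknown code: surface the error
  }
}
```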

Informing the LLM

When an error occurs during a tool call, the gateway doesn't simply fail the request. It returns an error object to the LLM that explains what went wrong, often with a "hint" on how the AI can recover.

Example error response to the LLM:
{
  "error": "product_not_found",
  "message": "Product 'sku-123' is no longer in the catalog.",
  "hint": "Try searching again with keywords like 'running shoes' to find a similar item."
}
This allows the AI to say: “I’m sorry, I couldn’t find those specific shoes anymore. Should I look for a similar pair of running shoes for you?”
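One way to produce objects in this shape is to wrap each tool call so that failures are converted into hint-bearing error objects instead of thrown exceptions. This is a sketch under assumed conventions; `safeToolCall` and the `code`/`hint` properties on the thrown error are illustrative, not a documented gateway interface.

```javascript
// Sketch: convert tool-call failures into the error object shown above.
// safeToolCall is a hypothetical wrapper, not part of the gateway API.
async function safeToolCall(fn, args) {
  try {
    return await fn(args);
  } catch (e) {
    return {
      error: e.code ?? "internal_error", // machine-readable code for the LLM
      message: e.message,                // human-readable explanation
      hint: e.hint ?? "Apologize to the user and suggest an alternative.",
    };
  }
}
```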

Circuit Breakers

In the Marketplace tier, the gateway includes automatic circuit breakers. If a specific merchant’s API is consistently failing, the gateway will temporarily disable their tools to prevent slowing down the entire network.
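The gateway's actual thresholds and implementation are not documented here, but the pattern can be sketched minimally: count consecutive failures, and once a threshold is crossed, "open" the circuit and reject calls until a cooldown elapses. All names and numbers below are illustrative.

```javascript
// Minimal circuit-breaker sketch (illustrative thresholds and names).
class CircuitBreaker {
  constructor(maxFailures = 5, cooldownMs = 30000) {
    this.maxFailures = maxFailures; // consecutive failures before opening
    this.cooldownMs = cooldownMs;   // how long to keep the merchant's tools disabled
    this.failures = 0;
    this.openedAt = null;
  }

  // True while the merchant's tools should be skipped.
  get isOpen() {
    if (this.openedAt === null) return false;
    if (Date.now() - this.openedAt >= this.cooldownMs) {
      // Cooldown elapsed: reset and allow a trial request ("half-open").
      this.openedAt = null;
      this.failures = 0;
      return false;
    }
    return true;
  }

  recordSuccess() { this.failures = 0; }

  recordFailure() {
    this.failures += 1;
    if (this.failures >= this.maxFailures) this.openedAt = Date.now();
  }
}
```

A gateway-like caller would check `isOpen` before routing a tool call to that merchant, and call `recordSuccess`/`recordFailure` after each attempt.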

Client-Side Handling

When using the HTTP Adapter, you should check for a non-2xx status code (`response.ok` is false) and handle the standard error JSON:
// requestOptions holds your request config (method, headers, body).
const response = await fetch(gatewayUrl, requestOptions);
if (!response.ok) {
  const err = await response.json();
  console.error(`Gateway Error [${err.code}]: ${err.message}`);
}
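For retryable codes such as `rate_limit_exceeded`, the same check can be extended with exponential backoff. This is a sketch, not a documented client; `fetchWithRetry`, the attempt count, and the backoff delays are assumptions.

```javascript
// Sketch: retry the gateway call on rate_limit_exceeded with exponential
// backoff. Names and backoff values are illustrative.
async function fetchWithRetry(url, options = {}, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, options);
    if (response.ok) return response.json();

    const err = await response.json();
    // Only rate limits are retried here; other codes fail immediately.
    if (err.code !== "rate_limit_exceeded" || attempt === maxAttempts) {
      throw new Error(`Gateway Error [${err.code}]: ${err.message}`);
    }
    // Exponential backoff: 500ms, 1s, 2s, ...
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
  }
}
```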