
Export Inventory Snapshot

This guide shows you how to export inventory snapshot data for reporting and analysis.

Prerequisites

  • API Key: Valid API key with inventory.read permission
  • Organization Access: Access to your organization’s inventory data
  • Location ID: The location to export (optional; omit to export all locations)

Step-by-Step Guide

Step 1: Get Inventory Items

Retrieve inventory items with pagination to handle large datasets. Request:
curl -X GET "https://app.betterdata.co/api/inventory?locationId=loc_123&page=1&limit=100" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
Response:
{
  "items": [
    {
      "id": "item_123",
      "productMasterId": "prod_123",
      "productName": "Product Name",
      "globalSku": "SKU-123",
      "locationId": "loc_123",
      "locationName": "Main Warehouse",
      "quantityOnHand": 200,
      "quantityReserved": 50,
      "quantityAvailable": 150,
      "lotId": "lot_123",
      "lotNumber": "LOT-2024-001",
      "expiryDate": "2024-12-31",
      "binId": "bin_123",
      "binName": "A-1-2",
      "unitCost": 10.50,
      "totalValue": 2100.00,
      "lastCountedAt": "2024-03-01T00:00:00Z",
      "updatedAt": "2024-03-15T00:00:00Z"
    }
  ],
  "pagination": {
    "page": 1,
    "pageSize": 100,
    "total": 500,
    "totalPages": 5
  }
}
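In the response, quantityAvailable equals quantityOnHand minus quantityReserved. A quick sanity check over each fetched page can catch inconsistent rows before export. A minimal sketch, using the field names from the response above (the helper name is illustrative):

```python
def find_quantity_mismatches(items):
    """Return items where quantityAvailable != quantityOnHand - quantityReserved."""
    return [
        item for item in items
        if item["quantityOnHand"] - item["quantityReserved"] != item["quantityAvailable"]
    ]

# Example using the shape of the response above
sample = [{"id": "item_123", "quantityOnHand": 200,
           "quantityReserved": 50, "quantityAvailable": 150}]
print(find_quantity_mismatches(sample))  # → []
```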

Step 2: Fetch All Pages

Iterate through all pages to build a complete inventory snapshot. Example Script:
#!/bin/bash

API_KEY="YOUR_API_KEY"
BASE_URL="https://app.betterdata.co/api"
LOCATION_ID="loc_123"
OUTPUT_FILE="inventory_snapshot.json"

# Initialize output array
echo "[]" > "$OUTPUT_FILE"

# Fetch all pages
PAGE=1
while true; do
  echo "Fetching page $PAGE..."

  RESPONSE=$(curl -s -X GET "${BASE_URL}/inventory?locationId=${LOCATION_ID}&page=${PAGE}&limit=100" \
    -H "Authorization: Bearer ${API_KEY}" \
    -H "Content-Type: application/json")

  # Stop if this page contains no items
  ITEM_COUNT=$(echo "$RESPONSE" | jq '.items | length')
  if [ "$ITEM_COUNT" -eq 0 ]; then
    break
  fi

  # Merge this page's items into the output file
  jq --argjson items "$(echo "$RESPONSE" | jq '.items')" '. += $items' "$OUTPUT_FILE" > "${OUTPUT_FILE}.tmp"
  mv "${OUTPUT_FILE}.tmp" "$OUTPUT_FILE"

  # Stop after the last page
  TOTAL_PAGES=$(echo "$RESPONSE" | jq -r '.pagination.totalPages')
  if [ "$PAGE" -ge "$TOTAL_PAGES" ]; then
    break
  fi

  PAGE=$((PAGE + 1))
done

echo "Export complete: $(jq 'length' "$OUTPUT_FILE") items exported to $OUTPUT_FILE"

Step 3: Get Channel-Location ATP Matrix

Export available-to-promise (ATP) data across channels and locations. Request:
curl -X GET "https://app.betterdata.co/api/inventory/channel-location?locationId=loc_123&page=1&limit=100" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
Response:
{
  "products": [
    {
      "productMasterId": "prod_123",
      "productName": "Product Name",
      "globalSku": "SKU-123",
      "channels": [
        {
          "channelId": "DTC",
          "channelName": "Direct to Consumer",
          "atp": 150,
          "onHand": 200,
          "reserved": 50,
          "available": 150
        }
      ],
      "totalOnHand": 200,
      "totalReserved": 50,
      "minAtp": 150,
      "maxAtp": 150
    }
  ],
  "pagination": {
    "page": 1,
    "pageSize": 100,
    "total": 500,
    "totalPages": 5
  }
}
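The ATP response nests one channel list per product, which doesn't map directly to a flat report. A small sketch that flattens it into one row per product/channel pair (the helper name is illustrative; field names come from the response above):

```python
def flatten_atp(products):
    """Flatten the channel ATP matrix into one row per product/channel pair."""
    rows = []
    for product in products:
        for channel in product.get("channels", []):
            rows.append({
                "globalSku": product["globalSku"],
                "productName": product["productName"],
                "channelId": channel["channelId"],
                "atp": channel["atp"],
                "onHand": channel["onHand"],
                "reserved": channel["reserved"],
            })
    return rows

# Example using the shape of the response above
products = [{"globalSku": "SKU-123", "productName": "Product Name",
             "channels": [{"channelId": "DTC", "atp": 150,
                           "onHand": 200, "reserved": 50}]}]
rows = flatten_atp(products)
print(rows[0]["channelId"])  # → DTC
```

The flattened rows can be passed straight to csv.DictWriter, as in Step 4.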

Step 4: Export to CSV

Convert JSON export to CSV format for reporting. Python Example:
import requests
import csv
import json
from datetime import datetime

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://app.betterdata.co/api"
LOCATION_ID = "loc_123"
OUTPUT_FILE = f"inventory_snapshot_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"

def fetch_all_inventory(location_id):
    """Fetch all inventory items across all pages."""
    all_items = []
    page = 1
    
    while True:
        url = f"{BASE_URL}/inventory"
        headers = {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        }
        params = {
            "locationId": location_id,
            "page": page,
            "limit": 100
        }
        
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        data = response.json()
        
        all_items.extend(data["items"])
        
        total_pages = data["pagination"]["totalPages"]
        if page >= total_pages:
            break
        page += 1
    
    return all_items

def export_to_csv(items, filename):
    """Export inventory items to CSV."""
    if not items:
        print("No items to export")
        return
    
    # Define CSV columns
    fieldnames = [
        "productMasterId",
        "productName",
        "globalSku",
        "locationId",
        "locationName",
        "quantityOnHand",
        "quantityReserved",
        "quantityAvailable",
        "lotId",
        "lotNumber",
        "expiryDate",
        "binId",
        "binName",
        "unitCost",
        "totalValue",
        "lastCountedAt",
        "updatedAt"
    ]
    
    with open(filename, 'w', newline='') as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        
        for item in items:
            # Flatten nested structures if needed
            row = {
                "productMasterId": item.get("productMasterId", ""),
                "productName": item.get("productName", ""),
                "globalSku": item.get("globalSku", ""),
                "locationId": item.get("locationId", ""),
                "locationName": item.get("locationName", ""),
                "quantityOnHand": item.get("quantityOnHand", 0),
                "quantityReserved": item.get("quantityReserved", 0),
                "quantityAvailable": item.get("quantityAvailable", 0),
                "lotId": item.get("lotId", ""),
                "lotNumber": item.get("lotNumber", ""),
                "expiryDate": item.get("expiryDate", ""),
                "binId": item.get("binId", ""),
                "binName": item.get("binName", ""),
                "unitCost": item.get("unitCost", 0),
                "totalValue": item.get("totalValue", 0),
                "lastCountedAt": item.get("lastCountedAt", ""),
                "updatedAt": item.get("updatedAt", "")
            }
            writer.writerow(row)
    
    print(f"Exported {len(items)} items to {filename}")

if __name__ == "__main__":
    print("Fetching inventory data...")
    items = fetch_all_inventory(LOCATION_ID)
    print(f"Fetched {len(items)} items")
    
    print("Exporting to CSV...")
    export_to_csv(items, OUTPUT_FILE)
    print("Export complete")

Step 5: Get Inventory Analytics Summary

Export aggregated inventory analytics. Request:
curl -X GET "https://app.betterdata.co/api/inventory/analytics?locationId=loc_123&fromDate=2024-01-01&toDate=2024-12-31" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json"
Response:
{
  "analytics": {
    "totalValue": 50000.00,
    "totalUnits": 1000,
    "turnoverRate": 12.5,
    "averageDaysOnHand": 30,
    "slowMovingItems": 25,
    "expiringItems": 10,
    "byCategory": [
      {
        "categoryId": "cat_123",
        "categoryName": "Category",
        "value": 25000.00,
        "units": 500
      }
    ]
  },
  "period": {
    "from": "2024-01-01T00:00:00Z",
    "to": "2024-12-31T23:59:59Z"
  }
}
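The analytics payload is aggregate data rather than row-level items, so it suits a short summary rather than a CSV. A minimal sketch that renders the response above as a plain-text report (the helper name is illustrative):

```python
def summarize_analytics(payload):
    """Render the analytics response as a short plain-text report."""
    a = payload["analytics"]
    lines = [f"Total value: {a['totalValue']:.2f} across {a['totalUnits']} units"]
    for cat in a.get("byCategory", []):
        share = 100 * cat["value"] / a["totalValue"] if a["totalValue"] else 0
        lines.append(f"  {cat['categoryName']}: {cat['value']:.2f} ({share:.0f}% of value)")
    return "\n".join(lines)

# Example using the shape of the response above
payload = {"analytics": {"totalValue": 50000.00, "totalUnits": 1000,
                         "byCategory": [{"categoryName": "Category",
                                         "value": 25000.00, "units": 500}]}}
print(summarize_analytics(payload))
```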

Complete Example

Here’s a complete Python script for inventory export:
import requests
import csv
import json
from datetime import datetime
from typing import List, Dict

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://app.betterdata.co/api"

class InventoryExporter:
    def __init__(self, api_key: str):
        self.api_key = api_key
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }
    
    def fetch_all_pages(self, endpoint: str, params: Dict) -> List[Dict]:
        """Fetch all pages from a paginated endpoint."""
        all_items = []
        page = 1
        
        while True:
            params["page"] = page
            response = requests.get(f"{BASE_URL}/{endpoint}", headers=self.headers, params=params)
            response.raise_for_status()
            data = response.json()
            
            items_key = "items" if "items" in data else "products" if "products" in data else "data"
            items = data.get(items_key, [])
            all_items.extend(items)
            
            pagination = data.get("pagination", {})
            total_pages = pagination.get("totalPages", 1)
            
            if page >= total_pages:
                break
            page += 1
        
        return all_items
    
    def export_inventory(self, location_id: str = None, output_format: str = "csv") -> str:
        """Export inventory snapshot."""
        params = {"limit": 100}
        if location_id:
            params["locationId"] = location_id
        
        print("Fetching inventory data...")
        items = self.fetch_all_pages("inventory", params)
        print(f"Fetched {len(items)} items")
        
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        
        if output_format == "csv":
            filename = f"inventory_snapshot_{timestamp}.csv"
            self._export_to_csv(items, filename)
        else:
            filename = f"inventory_snapshot_{timestamp}.json"
            self._export_to_json(items, filename)
        
        return filename
    
    def _export_to_csv(self, items: List[Dict], filename: str):
        """Export items to CSV."""
        if not items:
            print("No items to export")
            return
        
        fieldnames = [
            "productMasterId", "productName", "globalSku",
            "locationId", "locationName",
            "quantityOnHand", "quantityReserved", "quantityAvailable",
            "lotId", "lotNumber", "expiryDate",
            "binId", "binName",
            "unitCost", "totalValue",
            "lastCountedAt", "updatedAt"
        ]
        
        with open(filename, 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames, extrasaction='ignore')
            writer.writeheader()
            writer.writerows(items)
        
        print(f"Exported {len(items)} items to {filename}")
    
    def _export_to_json(self, items: List[Dict], filename: str):
        """Export items to JSON."""
        with open(filename, 'w') as f:
            json.dump({
                "exportedAt": datetime.now().isoformat(),
                "totalItems": len(items),
                "items": items
            }, f, indent=2)
        
        print(f"Exported {len(items)} items to {filename}")

if __name__ == "__main__":
    exporter = InventoryExporter(API_KEY)
    filename = exporter.export_inventory(location_id="loc_123", output_format="csv")
    print(f"Export complete: {filename}")

Idempotency Notes

  • Inventory Queries: Inventory data queries are read-only and idempotent. Multiple exports return the same data for the same point in time.
  • Snapshot Timing: Inventory data changes over time. For consistent snapshots, export all data in a single session.
  • Pagination: Each page fetch is independent. Fetching pages multiple times returns the same data.

Pagination Notes

  • Inventory Items: Supports pagination with page and limit (default: 20, max: 100).
  • Channel-Location ATP: Supports pagination with page and limit (default: 50, max: 200).
  • Best Practice: Always fetch all pages to get a complete inventory snapshot.
  • Rate Limits: Be mindful of rate limits when fetching large datasets. Consider adding delays between requests.
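The delay suggestion above can be sketched as a small retry wrapper. This is an assumption-laden sketch: it assumes the API signals rate limiting with HTTP 429, and the helper names and backoff schedule are illustrative. It takes a callable (e.g. a requests.get wrapper) so the policy stays separate from the HTTP client:

```python
import time

def backoff_delays(base=0.5, retries=5):
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * 2 ** attempt for attempt in range(retries)]

def get_with_backoff(get_page, base=0.5, retries=5):
    """Call get_page() and retry with backoff while it returns HTTP 429.

    get_page() should return a response object with status_code,
    raise_for_status(), and json() (e.g. the result of requests.get).
    """
    for wait in backoff_delays(base, retries):
        response = get_page()
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        time.sleep(wait)  # rate limited: wait, then retry
    raise RuntimeError(f"Still rate limited after {retries} retries")
```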

Common Issues

Export Incomplete

Symptom: The export is missing some inventory items. Solutions:
  1. Verify all pages are fetched (check totalPages in pagination)
  2. Ensure pagination loop continues until all pages are retrieved
  3. Check for API errors during pagination
  4. Verify location filters aren’t excluding items
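Check 1 above can be automated by comparing the exported item count against the server-reported total from the pagination block (a minimal sketch; the helper name is illustrative):

```python
def verify_export(items, pagination):
    """Confirm the exported item count matches the pagination total."""
    expected = pagination.get("total", 0)
    if len(items) != expected:
        raise ValueError(
            f"Export incomplete: got {len(items)} items, expected {expected}")
    return len(items)

# Example: a complete export passes, a short one raises
print(verify_export([{}] * 500, {"total": 500}))  # → 500
```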

Export Too Slow

Symptom: Export takes too long to complete. Solutions:
  1. Increase limit parameter to fetch more items per page
  2. Use parallel requests for multiple locations (if applicable)
  3. Export during off-peak hours
  4. Consider exporting by location or category separately
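Solution 2 above can be sketched with a thread pool that fetches each location concurrently. The helper name is illustrative, and you should keep the worker count modest so parallelism does not trip rate limits:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_locations_parallel(fetch_location, location_ids, max_workers=4):
    """Fetch several locations' inventory concurrently.

    fetch_location(location_id) should return that location's list of
    items (e.g. the fetch_all_inventory function from Step 4).
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        batches = pool.map(fetch_location, location_ids)
    all_items = []
    for batch in batches:
        all_items.extend(batch)
    return all_items
```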

Memory Issues

Symptom: Script runs out of memory during export. Solutions:
  1. Process items in batches instead of loading all into memory
  2. Write to file incrementally instead of building full dataset
  3. Use streaming JSON/CSV writers
  4. Export by location or date range to reduce dataset size
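Solutions 1 and 2 above amount to writing each page to disk as it arrives instead of accumulating the whole dataset in memory. A minimal sketch (the helper names are illustrative; fetch_page would wrap the paginated GET from Step 1):

```python
import csv

def export_csv_streaming(fetch_page, fieldnames, filename):
    """Write each page to CSV as it arrives, keeping one page in memory.

    fetch_page(page) should return (items, total_pages).
    """
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        page = 1
        while True:
            items, total_pages = fetch_page(page)
            writer.writerows(items)  # flushed page by page, never all at once
            if page >= total_pages:
                break
            page += 1
```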


Permissions & Roles

Inventory export requires inventory.read permission. No special roles required beyond basic authentication.