◆ Rate limits

Built for real-time dealer apps.

Per-dealership bucket with burst headroom. Limits by plan. Headers returned on every response so you never have to guess. Bulk endpoints for when you need to move 10,000 hulls in one go.

  • Per-dealership limits · shared across all keys
  • Leaky-bucket algorithm · burst 3× for 30 seconds
  • Rate headers on every response
  • Bulk endpoints count as one request
Limits by plan

Three tiers. Burst on top.

Steady-state is what you get forever. Burst is a 30-second window of 3× capacity — useful for backfills, imports, and pager-alert-driven sync. Limit is per dealership, shared across every key you've issued.

Plan     Steady           Burst                  Notes
Dock     1,000 req/min    3,000 req/min · 30s    For single-rooftop dealerships and most third-party apps.
Marina   5,000 req/min    15,000 req/min · 30s   For multi-rooftop groups and high-volume syndication.
Fleet    10,000+ custom   Custom                 Enterprise dealer groups. Dedicated capacity on request.
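The burst math above can be sketched in code. A minimal estimate, assuming the full 30-second 3× burst window is available once at the start of a backfill (function and parameter names here are illustrative, not part of the API):

```typescript
// Estimate minutes to issue n requests on a plan: spend the 30s burst
// window (3x steady capacity) first, then drain the rest at steady rate.
function backfillMinutes(n: number, steadyPerMin: number): number {
  const burstPerMin = steadyPerMin * 3;
  const burstBudget = burstPerMin * 0.5; // 30 seconds at 3x capacity
  if (n <= burstBudget) return n / burstPerMin;
  return 0.5 + (n - burstBudget) / steadyPerMin;
}

// On Dock (1,000 req/min steady), a 10,000-request backfill:
// 1,500 requests in the burst window, then 8,500 at steady rate.
backfillMinutes(10_000, 1_000); // → 9 minutes
```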
Response headers

Every response, every time.

Four headers. Three are always present; Retry-After appears only on 429s. Watch Remaining — when it gets low, slow down.

  • X-RateLimit-Limit · Steady-state requests per minute for the dealership.
  • X-RateLimit-Remaining · Requests left in the current window. Refills continuously.
  • X-RateLimit-Reset · Unix timestamp at which the bucket fully refills.
  • Retry-After · 429 only · Seconds to wait before retrying. Respect it. Then add jitter.
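A minimal sketch of reading the two always-present headers and deciding when to throttle — the 10% threshold mirrors the guidance below, and the helper names are illustrative:

```typescript
// Pull the steady-state limit and remaining budget off a response.
// Headers.get() is case-insensitive, so the lowercased wire form works.
function headroom(res: Response): { limit: number; remaining: number } {
  const limit = Number(res.headers.get('x-ratelimit-limit') ?? 0);
  const remaining = Number(res.headers.get('x-ratelimit-remaining') ?? 0);
  return { limit, remaining };
}

// True when under 10% of the budget is left — time to add a small sleep.
function shouldSlowDown(res: Response): boolean {
  const { limit, remaining } = headroom(res);
  return limit > 0 && remaining < limit * 0.1;
}
```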
429 handling

What a limit looks like, and what to do.

You get back a 429, a Retry-After, and a small explanatory body. Don't just sleep and retry once — back off exponentially with jitter, cap at 32 seconds.

429 Too Many Requests
# Response headers
HTTP/2 429
retry-after: 12
x-ratelimit-limit: 1000
x-ratelimit-remaining: 0
x-ratelimit-reset: 1713800412

{
  "error": {
    "code": "rate_limited",
    "message": "Slow down."
  }
}
backoff.ts · exponential + jitter
// Respect Retry-After, then back off exponentially with jitter
const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function call(req: Request, attempt = 0): Promise<Response> {
  const r = await fetch(req);
  if (r.status !== 429) return r;
  // headers.get() returns null when absent; coerce, then fall back to 1s.
  // (Unary + before ?? never falls back: +null is 0, which isn't nullish.)
  const after = Number(r.headers.get('retry-after')) || 1;
  const jitter = 1 + Math.random() * 0.3;
  const wait = Math.min(32, after * jitter * 2 ** attempt); // cap at 32s
  await sleep(wait * 1000);
  return call(req, attempt + 1);
}
Specialized buckets

Webhooks, bulk, and lists.

Webhook delivery

Separate bucket. Up to 200 deliveries/sec per dealership. Fan-out to multiple endpoints does not multiply cost.

Bulk endpoints

/v1/hulls/batch accepts up to 500 items and counts as one request. Use bulk for migrations and nightly syncs.

Pagination calibration

Fetch in chunks of 250. Even at a sustained 4 requests/sec that's 240 req/min — well under Dock's 1,000/min, with room to spare.
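A paging loop at that chunk size might look like the sketch below. The `cursor`/`next_cursor` parameter names and the response shape are assumptions for illustration — check the list endpoints' reference for the actual fields:

```typescript
// Build a list URL with a 250-item page size and an optional cursor.
function pageUrl(base: string, cursor?: string): string {
  const url = new URL(base);
  url.searchParams.set('limit', '250');
  if (cursor) url.searchParams.set('cursor', cursor);
  return url.toString();
}

// Drain a list endpoint page by page. Response shape
// ({ data, next_cursor }) is an assumption, not the documented schema.
async function listAll(base: string, token: string): Promise<unknown[]> {
  const items: unknown[] = [];
  let cursor: string | undefined;
  do {
    const res = await fetch(pageUrl(base, cursor), {
      headers: { authorization: `Bearer ${token}` },
    });
    const page = await res.json();
    items.push(...page.data);
    cursor = page.next_cursor;
  } while (cursor);
  return items;
}
```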

Practice

What the SDK does for you.

The official TypeScript and Python SDKs handle all of this automatically: they inspect headers, track remaining budget, back off on 429s, and expose an optional concurrency limiter.

  • Inspect X-RateLimit-Remaining on every response. If it drops below 10% of limit, add a small sleep.
  • Stagger cron jobs across dealerships — don't fire 50 syncs on the same minute boundary.
  • Prefer bulk endpoints for migrations. One /batch call is cheaper than 500 singles.
  • Use webhooks to react to changes, not polling. A lead.qualified event beats polling /v1/leads every minute.
  • Need more capacity? Email api@boater.os with your traffic profile — we'll size an uplift.
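If you're not using the SDKs, a concurrency limiter is small enough to hand-roll. A minimal sketch, similar in spirit to the SDK's optional limiter — the names here are illustrative, not the SDK API:

```typescript
// Cap the number of in-flight tasks; extras queue until a slot frees.
function limiter(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= max) {
      await new Promise<void>((resolve) => queue.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      queue.shift()?.(); // wake the next queued task, if any
    }
  };
}

// Usage: at most 4 concurrent calls, regardless of how many you enqueue.
const run = limiter(4);
// hulls.map((h) => run(() => fetch(...)));
```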