Rate Limiting

Some webhook receivers have their own rate limits or cannot handle unlimited traffic. Per-endpoint rate limiting lets you cap how fast HookBridge delivers to a specific destination, preventing your receiver from being overwhelmed.

Rate limits are optional. If you do not set them, HookBridge delivers as fast as it can process the queue.

Each outbound endpoint can be configured with:

  • rate_limit_rps — maximum requests per second to this endpoint.
  • burst — how many requests can be sent in a short spike before rate limiting kicks in. Defaults to the rate_limit_rps value if not specified.

Both values must be at least 1 when set. Both are optional; omit them to disable per-endpoint rate limiting entirely.

When a delivery is rate-limited, it is not counted as a failed attempt. The message stays in the queue and is retried shortly without consuming one of the automatic retry attempts.
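The distinction matters for retry accounting. The hypothetical worker loop below (not HookBridge code; the names `Delivery`, `attempts_used`, and the status strings are illustrative) shows the rule described above: a rate-limited delivery is requeued without touching the attempt counter, while a genuine failed HTTP attempt consumes one.

```python
# Illustrative retry accounting: rate-limited deliveries are requeued
# without consuming an automatic retry attempt; failed HTTP attempts
# do consume one. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Delivery:
    message_id: str
    attempts_used: int = 0


def process(delivery: Delivery, limiter_allows: bool, http_ok: bool, queue: list) -> str:
    if not limiter_allows:
        # Rate-limited: back into the queue shortly, attempts_used untouched.
        queue.append(delivery)
        return "pending_retry"
    if not http_ok:
        # A real failed attempt: this one counts against automatic retries.
        delivery.attempts_used += 1
        queue.append(delivery)
        return "pending_retry"
    return "delivered"


queue: list = []
d = Delivery("msg_1")
status_limited = process(d, limiter_allows=False, http_ok=True, queue=queue)
status_failed = process(d, limiter_allows=True, http_ok=False, queue=queue)
```

After both calls the message has been requeued twice, but only the failed HTTP attempt incremented `attempts_used`.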

  1. Open Endpoints and select an outbound endpoint.
  2. Rate limits are displayed in the endpoint detail view when configured.
  3. To set or update rate limits, use the API workflow below.

API reference:

1) Create an endpoint with rate limits
curl -X POST https://api.hookbridge.io/v1/endpoints \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-app.example/webhooks",
    "description": "Rate-limited endpoint",
    "rate_limit_rps": 50,
    "burst": 100
  }'

2) Update rate limits on an existing endpoint

curl -X PATCH https://api.hookbridge.io/v1/endpoints/YOUR_ENDPOINT_ID \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"rate_limit_rps": 100, "burst": 200}'

In addition to per-endpoint rate limits, HookBridge applies a project-level burst ceiling to API requests made with an API key. If your application sends requests faster than this ceiling, you will receive a 429 response with the error code BURST_LIMIT_EXCEEDED and a retry_after_ms value in the error details.

This is a safety mechanism to protect the platform from runaway traffic. Under normal usage you are unlikely to hit it.
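If you do hit the ceiling, the documented response gives you everything needed to back off. The sketch below shows one way a client might honor `retry_after_ms`; the exact response shape (`error.code`, `error.details.retry_after_ms`) is an assumption based on the description above, and the simulated responses are stand-ins for real API calls.

```python
# Client-side backoff sketch for the project-level burst ceiling.
# The 429 body shape used here is an assumption inferred from the
# description above, not a confirmed schema.
import time


def send_with_backoff(do_request, max_tries: int = 3):
    """do_request() returns (status_code, body_dict)."""
    status, body = do_request()
    for _ in range(max_tries - 1):
        if status != 429:
            break
        details = body.get("error", {}).get("details", {})
        wait_ms = details.get("retry_after_ms", 1000)  # conservative default
        time.sleep(wait_ms / 1000)
        status, body = do_request()
    return status, body


# Simulated API: the first call is burst-limited, the retry succeeds.
responses = iter([
    (429, {"error": {"code": "BURST_LIMIT_EXCEEDED",
                     "details": {"retry_after_ms": 1}}}),
    (200, {"id": "ep_123"}),
])
status, body = send_with_backoff(lambda: next(responses))
```

Waiting for the server-provided `retry_after_ms` (rather than retrying immediately) is what keeps a burst-limited client from staying burst-limited.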

  • Start with a rate limit that matches your receiver’s documented capacity, then adjust based on observed behavior.
  • If you see messages spending longer in pending_retry than expected, check whether your rate limit is too low for your traffic volume.
  • Rate limits apply to delivery attempts only, not to the send API.