Build a Reusable, Rate‑Limit‑Aware Connector for REST & GraphQL in n8n

Learn how to create a reusable, rate-limit-aware connector for REST and GraphQL APIs in n8n — with a low-code sub-workflow approach and a full custom node blueprint.

Overview

APIs often enforce rate limits and have different patterns for REST and GraphQL. In n8n, you can build a reusable connector that centralizes authentication, error handling, retries, and rate-limit awareness so your workflows stay reliable and maintainable.

This tutorial shows two approaches:

  • A reusable, no/low-code sub-workflow that behaves like a connector, built with the HTTP Request and Function nodes.
  • A blueprint for a full custom node (TypeScript) that you can add to n8n for repeated use.

Along the way, you’ll learn how to handle 429 responses, respect Retry-After headers, implement exponential backoff, add concurrency controls, and adapt for REST pagination and GraphQL batching.

Why centralize rate-limit logic?

  • Avoid duplicating retry and backoff logic across workflows.
  • Ensure consistent error handling and logging.
  • Make it simple to upgrade rate-limit strategy (e.g., global throttling) in one place.

Approach A — Reusable Sub-workflow (Recommended for most users)

This approach builds a sub-workflow that performs an HTTP call and handles rate limiting and retries. Other workflows call it using the Execute Workflow node.

Architecture

1. The parent workflow calls the sub-workflow with parameters: method, url, headers, body, type (rest/graphql), retries.
2. The sub-workflow uses a Function node to implement the retry/backoff logic, then an HTTP Request node to perform the call.
3. When a 429 or transient error occurs, the Function node computes the wait time and the workflow uses the Wait node to pause before retrying.
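
For example, a parent workflow might pass a payload like this through `Execute Workflow` (the endpoint and token are illustrative):

    // Parameters the parent sends to the connector sub-workflow
    const params = {
      method: 'GET',
      url: 'https://api.example.com/v1/items', // hypothetical endpoint
      headers: { Authorization: 'Bearer <token>' },
      body: '',
      type: 'rest',       // or 'graphql'
      maxRetries: 5,
      backoffBase: 500,   // initial backoff in ms
    };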

Step-by-step

1. Create a new workflow named `Connector: API Request`.

2. Start it with an `Execute Workflow Trigger` node so other workflows can call it locally (recommended), or with a Webhook node if it must be reachable over HTTP.

3. Add a `Set` node (or use the trigger input) to define the parameters your connector will accept:

  • method (GET, POST, etc.)
  • url
  • headers (JSON)
  • body (string/object)
  • type: rest|graphql
  • maxRetries (default 5)
  • backoffBase (ms, default 500)

4. Add a `Function` node called `ComputeAttempt` with code to initialize the retry state. Example:


    // items[0].json contains the parameters from the Set node
    const params = items[0].json;
    return [{ json: { ...params, attempt: 0, nextDelay: params.backoffBase || 500 } }];

5. Add an `HTTP Request` node configured to take its URL, method, headers, and body from the incoming JSON via expressions, and connect `ComputeAttempt` to it. Enable the option to return the full response (status code and headers) and set the node to continue on error, so rate-limited responses reach the next node instead of failing the workflow.
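
A sketch of the expression wiring (exact field names vary slightly across n8n versions; `{{ }}` marks an expression):

    URL:     {{ $json.url }}
    Method:  {{ $json.method }}
    Headers: {{ JSON.stringify($json.headers) }}
    Body:    {{ typeof $json.body === 'string' ? $json.body : JSON.stringify($json.body) }}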

6. Add a `Function` node named `RateLimitCheck` after the HTTP Request node that examines the response status and headers. If the status is 429 or another transient code, it flags the item for retry.

Example of `RateLimitCheck` code (used in a Function node):


    const res = items[0].json;
    const status = res.statusCode || res.status || 200;
    const headers = res.headers || {};

    if (status === 429) {
      // Parse Retry-After when present (this handles the delta-seconds form;
      // the header may also be an HTTP date)
      let wait = headers['retry-after']
        ? parseInt(headers['retry-after'], 10) * 1000
        : (items[0].json.nextDelay || 500);
      if (Number.isNaN(wait)) wait = items[0].json.nextDelay || 500;
      items[0].json.retry = true;
      items[0].json.waitMs = wait;
      items[0].json.nextDelay = Math.min((items[0].json.nextDelay || 500) * 2, 60000);
    } else if (status >= 500 && status < 600) {
      // Transient server error: retry with the current backoff delay
      items[0].json.retry = true;
      items[0].json.waitMs = items[0].json.nextDelay || 500;
      items[0].json.nextDelay = Math.min((items[0].json.nextDelay || 500) * 2, 60000);
    } else {
      items[0].json.retry = false;
    }

    return items;

7. Add an `IF` node to check `retry`. If retry is true and `attempt < maxRetries`, route to a `Wait` node configured with the expression `{{$json.waitMs}}` milliseconds, increment `attempt` in a small `Function` node (see the sketch after these steps), and loop back to the HTTP Request node. If retries are exhausted, route to error handling.

8. On success, transform the response for consumers and return it.
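
A minimal sketch of the attempt-increment step, assuming the retry state initialized in `ComputeAttempt`:

    // Function node: bump the attempt counter before looping back
    items[0].json.attempt = (items[0].json.attempt || 0) + 1;
    return items;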

GraphQL specifics

  • Send queries as POST with a JSON body: { "query": "...", "variables": {...} }.
  • For large operations, consider persisting queries on the server and sending only identifiers to reduce payload size.
  • Handle partial errors: GraphQL may return 200 with errors in the response body. Treat those as potential retries or logical failures, as in the sketch below.
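
A sketch of the partial-error check in a Function node after a GraphQL call (the `RATE_LIMITED` code is a hypothetical server convention; adjust to your API):

    // GraphQL can return HTTP 200 with an errors array in the body
    const res = items[0].json;
    const errors = (res.body && res.body.errors) || res.errors || [];
    if (errors.length) {
      // Retry only when the server signals rate limiting; otherwise treat it as a logical failure
      items[0].json.retry = errors.some(e => e.extensions && e.extensions.code === 'RATE_LIMITED');
      items[0].json.graphqlErrors = errors;
    }
    return items;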

Making it reusable

  • Standardize inputs and outputs: always return { success: boolean, statusCode, body, headers } (see the example below).
  • Store the workflow as a template and use `Execute Workflow` with expressions to call it from other workflows.
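
A final `Function` node in the connector might normalize the output to that shape (a sketch, assuming the full-response fields exposed by the HTTP Request node):

    // Normalize the connector's output to a stable contract
    const res = items[0].json;
    const statusCode = res.statusCode || res.status || 200;
    return [{
      json: {
        success: statusCode < 400,
        statusCode,
        body: res.body !== undefined ? res.body : res,
        headers: res.headers || {},
      },
    }];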

Approach B — Full Custom Node (TypeScript) Blueprint

If you plan to ship a connector to multiple users or want tighter integration, build a custom n8n node.

High-level steps:

1. Clone the n8n-node-dev starter (see the official docs) and add a new node folder.
2. Implement the node description (displayName, properties, credentials).
3. In execute(), implement a request helper that:
   – uses the node’s credentials to attach auth tokens,
   – performs the fetch/axios request,
   – detects 429 / Retry-After and performs exponential backoff,
   – exposes configurable concurrency controls (global or credential-scoped).

Simplified execute skeleton:


    async execute() {
      const items = this.getInputData();
      const results = [];
      for (let i = 0; i < items.length; i++) {
        const params = this.getNodeParameter('params', i);
        // Call the helper, which performs retry/backoff
        const result = await this.requestWithRetry(params);
        results.push(result);
      }
      // Return once, after all items have been processed
      return [this.helpers.returnJsonArray(results)];
    }

Implement requestWithRetry to check the response headers for `retry-after`, apply exponential backoff, and respect a maximum-attempts setting. For distributed deployments, coordinate throttling across instances using Redis locks or an external queue, as in the sketch below.
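
One simple cross-instance pattern is a fixed-window counter in Redis (a sketch, assuming the `ioredis` client and a reachable Redis instance; the key name and limits are illustrative):

    // Fixed-window rate limiter shared by all n8n instances
    const Redis = require('ioredis');
    const redis = new Redis(process.env.REDIS_URL);

    async function acquireSlot(key, limit, windowMs) {
      const count = await redis.incr(key);                  // count calls in the current window
      if (count === 1) await redis.pexpire(key, windowMs);  // first call starts the window
      return count <= limit;                                // false => caller should wait
    }

    // Usage inside the request path: wait until a slot frees up
    async function waitForSlot() {
      while (!(await acquireSlot('api:example:minute', 60, 60000))) {
        await new Promise(r => setTimeout(r, 1000));
      }
    }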

Best practices

  • Use the `Retry-After` header when available.
  • Prefer server-supplied limits over client guesses.
  • Cache tokens and refresh proactively to avoid auth failures.
  • Log attempts and expose metrics for monitoring (count 429s, avg retry time).
  • For GraphQL, use persisted queries and limit depth when possible.
  • On self-hosted n8n, use Redis or database locks to implement cross-instance throttling when needed.

Security & Error Handling

  • Never log full credentials. Mask sensitive headers (see the masking sketch below).
  • Return structured errors to callers so parent workflows can decide to escalate or skip.
  • Expose maxRetries and backoffBase as configurable per-credential values.
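
A minimal header-masking helper for logs (a sketch; the list of sensitive header names is an assumption to extend for your APIs):

    // Mask sensitive headers before logging
    const SENSITIVE = ['authorization', 'x-api-key', 'cookie']; // assumed list; extend as needed
    function maskHeaders(headers = {}) {
      const out = {};
      for (const [name, value] of Object.entries(headers)) {
        out[name] = SENSITIVE.includes(name.toLowerCase()) ? '***' : value;
      }
      return out;
    }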

Example: Simple JS retry helper (usable in a Function node)


    // Generic retry wrapper: retries on 429 and 5xx with exponential backoff
    async function callWithRetry(fetchFn, maxRetries = 5, base = 500) {
      let attempt = 0;
      let delay = base;
      while (true) {
        const res = await fetchFn();
        // Return anything that is neither 429 nor a 5xx error
        if (res.status !== 429 && (res.status < 500 || res.status >= 600)) return res;
        attempt++;
        if (attempt >= maxRetries) throw new Error('Max retries exceeded');
        // Prefer the server-supplied Retry-After (delta-seconds) over our own delay
        const ra = res.headers && res.headers['retry-after']
          ? parseInt(res.headers['retry-after'], 10) * 1000
          : delay;
        await new Promise(r => setTimeout(r, ra));
        delay = Math.min(delay * 2, 60000);
      }
    }
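
For example, with a fetch-style client that resolves (rather than throws) on HTTP error statuses:

    // fetch resolves on 429/5xx, so callWithRetry can inspect res.status directly
    const res = await callWithRetry(() =>
      fetch('https://api.example.com/v1/items', { // illustrative endpoint
        method: 'GET',
        headers: { Authorization: 'Bearer <token>' },
      })
    );
    // Note: WHATWG fetch exposes headers via res.headers.get('retry-after'),
    // so adapt the helper if you need the server-supplied delay with fetch.
    const data = await res.json();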

Conclusion and Next Steps

You now have two practical paths to create a reusable, rate-limit-aware connector in n8n:

  • The sub-workflow approach is quick to implement, easy to maintain, and great for most teams.
  • The full custom node is ideal for distribution, tighter integration, and advanced concurrency controls.

Next steps:

1. Build the sub-workflow and replace a few HTTP calls across your workflows to validate behavior.
2. Add monitoring for 429s and average retry time.
3. If you need multi-instance throttling, plan a Redis-based lock or queue.

From here, you can export the sub-workflow as shareable Workflow JSON for your team, or scaffold a starter TypeScript file and build out the custom-node approach.
