Introduction
In this tutorial I’ll show how to build a distributed idempotent event orchestrator in n8n with Redis Streams and durable checkpoints. In my projects I’ve used consumer groups and Redis’ Pending Entries List (PEL) to coordinate multiple n8n workers, avoid duplicate processing, and survive restarts. This post walks through a production-ready pattern using the n8n HTTP Request node (or a small Redis gateway) plus n8n Function nodes and workflow static data for durable checkpoints.
I’ll explain the architecture, show node settings you can paste into n8n, and include code snippets (Upstash/Redis HTTP examples and n8n Function code). If you self-host n8n and need a reliable, scalable pipeline, this pattern will save you debugging time and spare you the edge-case bugs I’ve repeatedly seen in the wild.
Why this matters
A distributed idempotent event orchestrator in n8n with Redis ensures multiple workers can read a single stream of events, receive each event at least once, and apply its side effects effectively exactly once via idempotency checks, while persisting checkpoints so crashes or restarts don’t cause message loss or duplicate side effects. This is essential for scaling integrations, microservices orchestration, and reliable automation.
Prerequisites
- A self-hosted n8n instance or n8n cloud with access to environment variables (I use environment variables for secrets).
- Redis with Streams and consumer groups enabled. I recommend Upstash (HTTP API) for simplicity, or any Redis cluster accessible from n8n.
- Familiarity with n8n nodes: HTTP Request node, Function node, SplitInBatches, Set, and If.
- An idempotency key store — we’ll reuse Redis itself for this (a Redis Set or SETNX keys), so no additional service is needed.
Environment variables the workflows below expect:
- N8N_CONSUMER_NAME — unique per n8n instance (e.g., n8n-1, n8n-2)
- UPSTASH_REST_TOKEN — Upstash REST token (or other Redis HTTP auth)
- REDIS_STREAM_NAME — the stream name (e.g., events:main)
- REDIS_GROUP — consumer group name (e.g., events-group)
Every Redis call below uses the same HTTP Request node settings:
- Method: POST
- URL: https://us1-upstash.io/v1/commands (or your Redis HTTP endpoint)
- Authentication: Header Authorization: Bearer {{$env.UPSTASH_REST_TOKEN}}
- Body (JSON): a {"command": [...]} array, varying per node as shown below
I’ve used Upstash in examples because it exposes Redis commands via HTTPS which works nicely with the n8n HTTP Request node.
Architecture overview
1. n8n worker(s) act as consumers in a Redis Streams consumer group.
2. Each n8n instance runs the same workflow and identifies itself with a unique consumer name (e.g., n8n-worker-1). This is an environment variable like N8N_CONSUMER_NAME.
3. Worker polls Redis with XREADGROUP to receive messages. Messages are split into items inside n8n.
4. For each event, n8n computes an idempotency key and attempts SADD to a Redis Set (or SETNX) to ensure only one worker processes it.
5. If SADD returns 1 (new), process the event. On success: XACK the stream message. On failure: leave unacked so other workers can claim later.
6. A separate reclaim workflow periodically CLAIMs pending messages older than threshold and retries them.
This uses durable checkpoints (consumer groups + PEL + reclaim logic) and idempotency to reach effective exactly-once processing for side effects.
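The guarantee in step 5 — at-least-once delivery plus an idempotency set yields effectively exactly-once side effects — can be sketched with in-memory stand-ins for Redis (the names here are illustrative, not part of the workflow):

```javascript
// In-memory sketch: at-least-once delivery + an idempotency set
// = effectively exactly-once side effects.
const idempotencySet = new Set(); // stands in for SADD on a Redis Set
const sideEffects = [];           // records which events actually ran

function handleEvent(event) {
  const key = `idem:events:main:${event.id}`; // deterministic business key
  if (!idempotencySet.has(key)) { // SADD would return 1 on first delivery
    idempotencySet.add(key);
    sideEffects.push(event.id);   // the "main processing chain"
  }
  // duplicate deliveries fall through and would simply be XACKed
}

// Simulate a redelivery: a crashed worker never acked event "42",
// so another consumer receives it a second time.
for (const id of ['41', '42', '42', '43']) handleEvent({ id });
console.log(sideEffects); // → [ '41', '42', '43' ]
```

In production the set lives in Redis so all workers share it, and the same check also protects messages that are reclaimed later.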
Step-by-step guide
1. Create environment variables
2. Build the poller workflow
Numbered nodes in n8n:
1) Trigger: Cron/Interval on a short schedule (e.g., every 2 s), or a Webhook to start the poller
2) HTTP Request: “Read from stream (XREADGROUP)”
```json
{
  "command": ["XREADGROUP","GROUP","{{$env.REDIS_GROUP}}","{{$env.N8N_CONSUMER_NAME}}","COUNT","50","BLOCK","2000","STREAMS","{{$env.REDIS_STREAM_NAME}}",">"]
}
```
3) Set: “Normalize entries”
4) SplitInBatches: split into single-item batches (batch size = 1) so each message is processed independently.
5) Function: “Compute idempotency key”
```javascript
// Use a deterministic key from the payload's business id,
// falling back to the stream message id
const item = items[0].json;
const payload = item.payload || {};
const eventId = payload.id || item.messageId;
const idempotencyKey = `idem:${$env.REDIS_STREAM_NAME}:${eventId}`;
return [{ json: { ...item, idempotencyKey } }];
```
6) HTTP Request: “Try SADD idempotency”
```json
{
  "command": ["SADD","idempotency_set","{{$json.idempotencyKey}}"]
}
```
7) If node: “Is new?” — check that the SADD reply equals 1
8a) On true -> main processing chain (API calls, DB writes)
8b) On false (duplicate) -> skip processing and go straight to XACK so the entry leaves the PEL
9) On successful processing -> HTTP Request: XACK
```json
{
  "command": ["XACK","{{$env.REDIS_STREAM_NAME}}","{{$env.REDIS_GROUP}}","{{$json.messageId}}"]
}
```
10) On failure during processing -> leave message unacked. A reclaim workflow claims PEL entries older than N ms and reassigns.
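Node 3 (“Normalize entries”) has to unpack the nested XREADGROUP reply into one item per message. A standalone sketch of that flattening (inside the Function node you would wrap each object as { json: ... } and return the array; the sample data is made up):

```javascript
// Flatten a raw XREADGROUP reply into flat message objects.
// A reply looks like: [[stream, [[id, [field, value, ...]], ...]]]
function normalizeXreadgroupReply(reply) {
  const out = [];
  for (const [stream, entries] of reply || []) {
    for (const [messageId, fields] of entries) {
      // fields is a flat [key, value, key, value, ...] array
      const payload = {};
      for (let i = 0; i < fields.length; i += 2) {
        payload[fields[i]] = fields[i + 1];
      }
      out.push({ stream, messageId, payload });
    }
  }
  return out;
}

const sample = [['events:main', [
  ['1718000000000-0', ['id', 'order-41', 'type', 'created']],
  ['1718000000001-0', ['id', 'order-42', 'type', 'paid']],
]]];
console.log(normalizeXreadgroupReply(sample));
// first item: { stream: 'events:main', messageId: '1718000000000-0',
//               payload: { id: 'order-41', type: 'created' } }
```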
3. Reclaim workflow (separate scheduled n8n workflow)
List pending entries with XPENDING (the extended form returns [id, consumer, idle-ms, delivery-count] per entry):

```json
{ "command": ["XPENDING","{{$env.REDIS_STREAM_NAME}}","{{$env.REDIS_GROUP}}","-","+","100"] }
```
Example XCLAIM body (60000 is the min-idle-time in ms; the trailing message IDs come from the XPENDING result):

```json
{ "command": ["XCLAIM","{{$env.REDIS_STREAM_NAME}}","{{$env.REDIS_GROUP}}","{{$env.N8N_CONSUMER_NAME}}","60000","{{$json.messageId}}"] }
```
When claiming, re-run the same idempotency check to avoid duplicates.
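The claim decision itself is simple to express. Here is a sketch of the filter a Function node could apply to the extended XPENDING reply before issuing XCLAIM (the thresholds and the dead-letter routing are assumptions, not fixed parts of the pattern):

```javascript
// Decide which PEL entries to XCLAIM. The extended XPENDING reply is
// an array of [messageId, consumer, idleMs, deliveryCount] entries.
function selectClaimable(pending, minIdleMs, maxDeliveries) {
  const claim = [];
  const dead = [];
  for (const [messageId, consumer, idleMs, deliveryCount] of pending) {
    if (idleMs < minIdleMs) continue;  // owner may still be working
    if (deliveryCount >= maxDeliveries) {
      dead.push(messageId);            // route to a dead-letter stream
    } else {
      claim.push(messageId);           // pass these ids to XCLAIM
    }
  }
  return { claim, dead };
}

const pending = [
  ['1718-0', 'n8n-1', 120000, 2],  // idle long enough -> retry
  ['1718-1', 'n8n-2', 5000, 1],    // owner likely still processing
  ['1718-2', 'n8n-1', 300000, 7],  // retried too often -> dead-letter
];
console.log(selectClaimable(pending, 60000, 5));
// → { claim: [ '1718-0' ], dead: [ '1718-2' ] }
```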
Example: n8n Function node durable checkpoint (optional)
I often store a high-water mark in workflow static data for visibility and fallback: in a Function node after processing a message:
```javascript
const wfStatic = this.getWorkflowStaticData('global');
// previous high-water mark, kept for logging/inspection
const last = wfStatic.lastProcessed || null;
wfStatic.lastProcessed = $json.messageId;
return items;
```
This is not the canonical checkpoint for Redis (consumer groups are), but it’s a lightweight durable checkpoint surfaced in the workflow UI and survives restarts.
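One caveat: reclaimed messages can finish out of order, so the high-water mark should only move forward. A sketch of a monotonic update — Redis stream IDs are “ms-seq” pairs and must be compared numerically, not as strings:

```javascript
// Returns true when stream id `a` is strictly newer than `b`.
function isNewerStreamId(a, b) {
  if (!b) return true;
  const [aMs, aSeq] = a.split('-').map(Number);
  const [bMs, bSeq] = b.split('-').map(Number);
  return aMs !== bMs ? aMs > bMs : aSeq > bSeq;
}

// Out-of-order completions (e.g. a reclaimed old message finishing
// late) must not move the checkpoint backwards.
let lastProcessed = null;
for (const id of ['1718000000000-1', '1718000000000-0', '1718000000002-0']) {
  if (isNewerStreamId(id, lastProcessed)) lastProcessed = id;
}
console.log(lastProcessed); // → 1718000000002-0
```

The guard would wrap the `wfStatic.lastProcessed` assignment in the Function node.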
Common pitfalls & fixes
– Pitfall: two n8n instances sharing a consumer name, so deliveries interleave unpredictably. Fix: use unique names via environment variables and include the pod/container ID in the name.
– Pitfall: long-running processing keeps messages in the PEL and stalls the poller. Fix: offload heavy processing to a background worker or use an async workflow: mark idempotency, enqueue a job in a secondary queue, XACK early if safe.
– Pitfall: keying idempotency on the stream message ID. Fix: prefer a business ID (order_id, event_id) for idempotency; the message ID changes if the event is re-inserted.
– Pitfall: messages from crashed consumers sit in the PEL forever. Fix: implement the reclaim workflow using XPENDING -> XCLAIM with a visibility timeout and maximum retry counter stored in message metadata.
FAQ
What happens if my n8n worker crashes while processing a message?
If a worker crashes before XACK, the message remains in the PEL. A reclaim workflow (XPENDING + XCLAIM) or another consumer can claim and retry the message. Idempotency keys prevent duplicate side effects.
Can I use the Function node to connect to Redis directly?
Function nodes run inside n8n’s Node.js environment but cannot install arbitrary NPM packages at runtime (on self-hosted instances you can allowlist preinstalled modules via NODE_FUNCTION_ALLOW_EXTERNAL). For direct Redis access you would otherwise need a custom n8n node or a client preloaded into the environment. Using Redis over HTTP (Upstash) or a small Redis gateway is the simplest approach with the n8n HTTP Request node.
How many consumers should I run?
Start with 2–4 and measure throughput and pending entries. The optimal number depends on your processing latency and Redis capacity. Scaling horizontally with stateless n8n workers plus unique consumer names is effective.
Is this pattern safe for exactly-once processing?
Redis Streams + idempotency achieves effectively exactly-once side effects if the idempotency checks and external systems are designed to be idempotent. Without idempotency, Streams provide at-least-once semantics.
Conclusion
Building a distributed idempotent event orchestrator in n8n with Redis Streams and durable checkpoints gives you a reliable, scalable automation backbone. In my projects this approach reduced duplicate side effects and made recovery from crashes predictable. Implement the polling pattern with XREADGROUP, idempotency checks via SADD/SETNX, and a reclaim workflow (XPENDING/XCLAIM), and you’ll have a production-grade orchestrator.
Next steps: Try this in your n8n instance today. See our guide on [n8n Fundamentals](/fundamentals) for core node details and combine that knowledge with this pattern. If you use Upstash, test the HTTP command bodies above in an n8n HTTP Request node and iterate on your retry/claim thresholds.