Scale Your n8n Workflows: The Beginner-Friendly Guide to Workers and Queue Mode
Your automations are humming along. Then a busy day hits: tests lag, webhook calls stack up, one CPU pegs at 100%, and everything feels slow. If that’s you, you’re ready for n8n’s Queue Mode with workers. This guide explains what it is, why it matters for n8n performance, and how to set it up without getting lost in jargon or DevOps complexity.
The Short Version
By default, n8n runs everything in one process: the editor UI, triggers, and workflow executions. That’s simple, but it creates a traffic jam under load.
Queue Mode splits the work: one “main” process handles the UI and incoming triggers, while one or more “workers” run workflows in parallel.
Redis is the fast “to‑do list” that workers pull from; PostgreSQL stores your workflows, credentials, and execution history.
You can scale horizontally by adding workers. No rewrites, no vendor lock‑in, just more capacity.
What n8n Does Today (And Why It Slows Down)
If you’re new to n8n, think of a workflow as a small assembly line: a trigger starts the line (like a webhook or a schedule), and each node performs a step. An “execution” is one run through that line.
In the default setup, one process serves the editor UI and API, receives triggers (webhooks, schedules, pollers), and runs the workflows. It’s simple and great to start with. But as usage grows, long or heavy workflows block others. Tests and manual runs compete with production jobs. Webhook bursts overwhelm the single process. One process means one ceiling.
Queue Mode in Plain English
Queue Mode separates “handling requests” from “doing work.” Picture a small bakery: the front desk (main process) takes orders fast and puts them on a board; the board (Redis) holds the queue of orders; bakers (workers) grab orders from the board and bake in parallel; the recipe book and records (PostgreSQL) keep everything consistent across bakers. You don’t need to change your workflows. You just add bakers (workers) when orders increase.
```
        [ Users & Webhooks ]
                 |
        Main (UI/API & Triggers)
          |                |
    enqueues jobs     writes/reads
          |                |
        Redis         PostgreSQL
          |                |
          +--> Workers <---+
     (execute workflows in parallel)
```
The Pieces and How They Talk
Main process
Serves the editor UI, handles webhooks and schedules, and creates execution records. It enqueues job IDs into Redis. In proper queue mode it doesn’t run the jobs itself (unless you configure it to).
Workers
Dequeue jobs from Redis, fetch workflows and credentials from PostgreSQL, execute, then write results and logs back to the database.
Redis (the message broker)
Fast, in‑memory queue for job IDs. Think “traffic cop” for distributing work, not a general‑purpose cache inside n8n. This separation is what enables distributed workflows and horizontal scaling.
PostgreSQL (the system of record)
Stores workflows, encrypted credentials, execution logs, and binary data pointers. Built for concurrency; perfect when multiple workers need the same truth.
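If you are curious what the queue actually looks like, you can peek inside Redis. n8n's queue is built on Bull, but the exact key names vary by n8n version, so list them first rather than assuming. A sketch against a live stack (substitute your container name; the `bull:jobs:wait` list name is an assumption — check your `KEYS` output):

```
# List the queue keys n8n has created (Bull prefixes them with "bull:")
docker exec -it <redis_container> redis-cli KEYS 'bull:*'

# Count jobs waiting to be picked up; use whatever list name your
# KEYS output actually shows for the "wait" queue
docker exec -it <redis_container> redis-cli LLEN 'bull:jobs:wait'
```

A growing wait list while workers sit idle is a quick hint that workers can't reach Redis or the database.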
Why PostgreSQL Beats Local Storage
On a single n8n box, you can keep data locally. The moment you add workers, local files become a liability: they aren’t shared, they drift, and they break under parallel access. PostgreSQL keeps everything consistent across processes and machines. That means reliable credentials, accurate logs, and safe restarts — key to stable, distributed workflows.
When Queue Mode Makes Sense
Use Queue Mode when you handle lots of webhooks or spike‑y traffic, run heavy API calls or file processing, want a snappier editing and testing experience while production jobs run, or need a clear, incremental path to n8n scalability.
A Minimal Setup You Can Copy
Below is a small Docker Compose stack with a main process, Redis, PostgreSQL, and one worker. You can run it locally or on a small VPS and scale up later.
docker-compose.yml

```yaml
services:
  redis:
    image: redis:7
    command: ["redis-server", "--appendonly", "yes"]
    volumes:
      - ./redis-data:/data
    networks: [n8n-net]
    restart: always

  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: your_pg_password
      POSTGRES_DB: n8ndb
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    networks: [n8n-net]
    restart: always

  n8n:
    image: n8nio/n8n:latest
    ports:
      - "5678:5678"
    env_file:
      - ./.env.main
    depends_on:
      - redis
      - postgres
    volumes:
      - ./n8n-data:/home/node/.n8n
    networks: [n8n-net]
    restart: always

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    env_file:
      - ./.env.worker
    depends_on:
      - redis
      - postgres
    networks: [n8n-net]
    restart: always

networks:
  n8n-net:
    driver: bridge
```
.env.main

```bash
# Core
N8N_HOST=localhost
N8N_PORT=5678
N8N_PROTOCOL=http
WEBHOOK_URL=http://localhost:5678
N8N_ENCRYPTION_KEY=put_a_long_random_secret_here

# Queue mode
EXECUTIONS_MODE=queue
OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true

# Database (PostgreSQL)
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_DATABASE=n8ndb
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your_pg_password

# Redis (Queue)
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_DB=0
# Set QUEUE_BULL_REDIS_PASSWORD if you secured Redis

# Logging
N8N_LOG_LEVEL=info
```
.env.worker

```bash
# Must match main
EXECUTIONS_MODE=queue
N8N_ENCRYPTION_KEY=put_a_long_random_secret_here

# Database
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_DATABASE=n8ndb
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your_pg_password

# Redis (Queue)
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_DB=0
# Set QUEUE_BULL_REDIS_PASSWORD if you secured Redis

# Optional: tune concurrency (default is 10)
# You can also pass it via command: ["n8n","worker","--concurrency","5"]
N8N_LOG_LEVEL=info
```
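If you prefer to set concurrency explicitly per service rather than in the env file, you can pass the flag on the worker's command, mirroring the comment above. A sketch against the compose file in this guide (the service name and value are illustrative):

```yaml
  n8n-worker-heavy:
    image: n8nio/n8n:latest
    # Fewer parallel jobs for a worker dedicated to heavy workflows
    command: ["n8n", "worker", "--concurrency", "2"]
    env_file:
      - ./.env.worker
    depends_on: [redis, postgres]
    networks: [n8n-net]
    restart: always
```

Because workers all pull from the same Redis queue, you can mix services with different concurrency settings freely.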
Start the stack with `docker compose up -d`. Add more workers any time with `docker compose up -d --scale n8n-worker=3`.

Tip: Pin a specific image tag (for example, `n8nio/n8n:1.64.0`) on all services to keep versions consistent.
Essential Environment Variables
Required: set `EXECUTIONS_MODE=queue` on main and all workers; keep `N8N_ENCRYPTION_KEY` identical everywhere; use `DB_TYPE=postgresdb` and the same DB connection details for all processes; configure `QUEUE_BULL_REDIS_HOST`, `QUEUE_BULL_REDIS_PORT`, and (if set) `QUEUE_BULL_REDIS_PASSWORD` for Redis connectivity; and set `WEBHOOK_URL` to your public URL so incoming webhooks work correctly.
Optional but handy: `OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true` to keep the editor responsive, and `N8N_LOG_LEVEL=debug` during setup (switch to `info` or `warn` afterward).
Optional: Webhook Processors for Spiky Traffic
If you receive lots of concurrent webhooks, add lightweight “webhook” processes that only accept webhook requests. Route `/webhook/*` to them behind your load balancer and keep the editor UI on main for a snappier experience.
```yaml
  n8n-webhook:
    image: n8nio/n8n:latest
    command: webhook
    env_file:
      - ./.env.worker
    depends_on:
      - redis
      - postgres
    networks: [n8n-net]
    restart: always
```
For stricter separation, set `N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true` on the main instance so it never executes production jobs.
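One way to sketch that `/webhook/*` routing with Nginx, assuming the service names from the compose file above (upstream names, ports, and the single-server setup are illustrative, not a production config):

```nginx
upstream n8n_main    { server n8n:5678; }
upstream n8n_webhook { server n8n-webhook:5678; }

server {
    listen 80;

    # Production webhook traffic goes to the dedicated webhook processors
    location /webhook/ {
        proxy_pass http://n8n_webhook;
        proxy_set_header Host $host;
    }

    # Everything else (editor UI, API, test webhooks) stays on main
    location / {
        proxy_pass http://n8n_main;
        proxy_set_header Host $host;
    }
}
```

In production you would terminate HTTPS here as well, as covered in the security section below.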
Security Basics You Should Not Skip
Use HTTPS via a reverse proxy (Traefik, Caddy, or Nginx). Never expose Redis publicly; use a password and firewall rules. Keep the shared `N8N_ENCRYPTION_KEY` secret and safe. Prefer PostgreSQL over SQLite for reliability in distributed setups. Back up your PostgreSQL volume; your workflows and credentials live there.
If you enable Redis persistence, append‑only files (AOF) or snapshots (RDB) help with durability. See the Redis docs for both approaches.
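For reference, the compose file above already turns on AOF via the container command; as a `redis.conf` fragment, the persistence options look roughly like this (the values shown are common Redis defaults, not tuning advice):

```conf
# AOF: log every write; fsync once per second balances speed and durability
appendonly yes
appendfsync everysec

# RDB snapshots: dump the dataset if at least N keys changed within M seconds
save 900 1
save 300 10
save 60 10000
```

For a job queue, losing at most one second of acknowledgements on a crash is usually an acceptable trade-off for write speed.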
Test That Everything Works
Create a simple schedule‑based workflow that runs every minute. In the executions list, you should see “Queued” then “Success.”
Check worker logs with `docker logs n8n-worker-1 -f` and look for messages like “started execution &lt;id&gt;” and “finished execution &lt;id&gt;.”
Quick Redis check: `docker exec -it <redis_container> redis-cli ping` should return `PONG`.
If a worker can’t read credentials, your encryption keys likely don’t match.
Monitor and Know When to Scale
Watch CPU and memory with `docker stats` or `htop` on the host. Track Redis health with `redis-cli info memory` and `redis-cli info stats` to keep latency low. Observe PostgreSQL under load: connection spikes and slow queries suggest it needs more resources or tuning. Scale workers when wait times grow or executions sit “Queued” too long. Keep main relatively small; it’s rarely CPU‑bound. Consider multiple mains only for high availability or heavy UI/API traffic (use sticky sessions if you go multi‑main).
Designing Workflows for Performance
Reserve a “heavy” worker group (lower concurrency) for big file handling, large API fan‑outs, or AI steps. Use Split In Batches to chunk large lists safely and improve n8n performance. Stream or offload big binaries to S3 or similar; avoid local filesystem storage in distributed workflows. Add retries with backoff for flaky APIs and make steps idempotent when possible so safe retries don’t duplicate work.
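n8n nodes have built-in retry settings, but to make the backoff idea concrete, here is a generic exponential-backoff helper as a shell sketch (the function names and delays are illustrative and not part of n8n):

```shell
#!/usr/bin/env bash
# retry_backoff: run a command, retrying with exponentially growing delays.
# Usage: retry_backoff <max_attempts> <base_delay_seconds> <command...>
retry_backoff() {
  local max_attempts=$1 delay=$2
  shift 2
  local attempt=1
  until "$@"; do
    if (( attempt >= max_attempts )); then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$(( delay * 2 ))     # double the wait each round
    attempt=$(( attempt + 1 ))
  done
}

# Demo: a "flaky" command that fails twice, then succeeds on the third call.
tries_file=$(mktemp)
flaky() {
  local n
  n=$(wc -l < "$tries_file")
  echo >> "$tries_file"        # record this attempt
  (( n >= 2 ))                 # succeed once two attempts have been recorded
}

retry_backoff 5 1 flaky && echo "succeeded"
```

The same principle applies inside n8n: pair retries with idempotent steps so a re-run of a half-finished job does not duplicate work.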
Common Gotchas and Quick Fixes
Mismatched versions across main and workers can break migrations or job formats; pin the same tag everywhere. `WEBHOOK_URL` must be set to a reachable public URL; otherwise, external services can’t hit your endpoints. If the database gets hammered, lower worker concurrency or scale PostgreSQL. If Redis is down or unreachable, jobs won’t move; confirm host, password, and network connectivity.
Helpful References
n8n docs: Queue mode and scaling — https://docs.n8n.io/hosting/scaling/queue-mode/
Redis (message broker and persistence options) — https://redis.io/docs/
PostgreSQL (reliable, concurrent database) — https://www.postgresql.org/docs/
Ready To Scale?
Spin up the compose stack, add one worker, and run a scheduled test. Watch the logs for a clean “Queued → Success,” then scale workers to match your demand. You’ll feel the difference immediately: faster runs, fewer bottlenecks, and a simple, beginner‑friendly path to horizontal scaling with n8n.
Want a one click deployment?
Go to this link, click Deploy, sign up via GitHub, and you get one month free!
FAQs
What is n8n Queue Mode and why would I use it?
It splits work across a main process and worker processes, using Redis as a queue and PostgreSQL for storage. This lets you scale horizontally by adding more workers without rewriting workflows, improving throughput and reliability.
What are the main components in n8n Queue Mode and what does each do?
The main process handles the UI, triggers, and enqueues jobs; Redis acts as the queue; workers dequeue and execute jobs; PostgreSQL stores workflows, credentials, executions, and logs.
When should I consider using Queue Mode?
When you need higher throughput, smoother editing/testing during production, fewer stuck jobs during bursts, and an easy path to scale by adding more workers.
How do the main process and workers interact in Queue Mode?
The main process receives triggers and creates execution records, enqueuing jobs to Redis. Workers pull jobs from Redis, load workflows and credentials from the database, execute steps, write results back, and report completion.
What are the essential environment variables for Queue Mode?
`EXECUTIONS_MODE=queue`; `N8N_ENCRYPTION_KEY`, which must be the same across all processes; database and Redis connection variables; and `WEBHOOK_URL` for external webhooks. Optional: `OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS` and `N8N_LOG_LEVEL` for debugging.
How should Redis be configured for Queue Mode?
Keep Redis close to n8n, set a password, and secure it behind a firewall. Enable persistence (AOF or RDB) with a reasonable save policy, and avoid exposing Redis publicly. You can tune for speed vs durability as needed.
How do I deploy and scale n8n with Docker Compose?
Use a compose file with services for Redis, PostgreSQL, and the main n8n process. Add separate worker services (and optionally webhook processors). Scale workers with `docker compose up -d --scale n8n-worker=3`, and configure per-service environment variables.
How can I test that my Queue Mode setup is working?
Create a simple scheduled workflow that runs regularly, then watch for Executions moving from Queued to Success. Check worker logs for messages like “Worker started execution” and “finished execution.” You can also ping Redis to verify connectivity.
What are important security and data practices in Queue Mode?
Use the same `N8N_ENCRYPTION_KEY` across all processes; don’t expose Redis publicly; enable HTTPS via a reverse proxy; secure .env files; prefer PostgreSQL over SQLite; and consider S3 for binary data when needed.
What should I monitor and how do I troubleshoot issues in Queue Mode?
Monitor CPU/memory with `docker stats` or `htop`, and Redis with `redis-cli info`. Check logs from each service for errors, ensure encryption keys match, and verify all components use the same image tag. For webhooks or stuck jobs, check routing, worker logs, and Redis connectivity, and adjust concurrency if the database becomes a bottleneck.