n8n Queue Mode: Scaling from 23 to 162 Requests/Second with Workers
A complete guide to setting up n8n queue mode with Redis and worker processes, including a Docker Compose configuration and scaling commands.
When Regular Mode Isn't Enough
n8n's default "regular" mode runs everything on a single Node.js process. This works fine for 10-20 workflows with moderate traffic. But when you hit 50+ workflows, concurrent webhook triggers, or long-running AI agent workflows, you'll see:
- Webhook responses delayed by 5-30 seconds because the process is busy with another execution
- Workflow executions queuing up and timing out
- The n8n editor becoming sluggish or unresponsive during peak load
- OOM crashes because all executions share one memory pool
Queue mode solves this by separating the n8n main process (which handles the UI, API, and webhook reception) from worker processes (which execute workflows). The main process puts jobs into a Redis queue. Workers pull jobs and execute them independently, each in their own process with their own memory.
The result: n8n community benchmarks show queue mode scaling from ~23 requests/second (single process) to 162 requests/second with 5 workers -- a 7x throughput increase.
Architecture Overview
In queue mode, n8n runs as three components:
- Main process -- handles the editor UI, REST API, and webhook endpoints. Receives triggers and puts them into the Redis queue. Does NOT execute workflows.
- Worker processes -- pull jobs from Redis and execute workflows. You can run as many workers as your VPS can handle.
- Redis -- the message broker that connects main and workers. Stores the job queue, active executions, and completion status.
All components share the same PostgreSQL database (SQLite is not supported in queue mode).
Prerequisites
Queue mode requires:
- PostgreSQL -- SQLite doesn't support concurrent connections from multiple processes
- Redis -- the job queue broker
- n8n v1.0+ -- queue mode has been available since early versions but stabilized in v1.0
If you're still on SQLite, migrate to PostgreSQL first (we have a complete migration guide for that).
Complete Docker Compose Configuration
Here's a production-ready Docker Compose that sets up all four services:
```yaml
version: "3.8"

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: always
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  n8n-main:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_HOST=${N8N_DOMAIN}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_DOMAIN}/
      - N8N_PROXY_HOPS=1
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - N8N_AVAILABLE_BINARY_DATA_MODES=filesystem
      - GENERIC_TIMEZONE=UTC
      - NODE_OPTIONS=--max-old-space-size=2048
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  n8n-worker:
    image: n8nio/n8n:latest
    restart: always
    command: worker
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - N8N_AVAILABLE_BINARY_DATA_MODES=filesystem
      - GENERIC_TIMEZONE=UTC
      - NODE_OPTIONS=--max-old-space-size=4096
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 6g
          cpus: '2.0'
        reservations:
          memory: 2g
          cpus: '1.0'

volumes:
  postgres_data:
  redis_data:
  n8n_data:
```
And the corresponding .env file:
```
DB_PASSWORD=your-secure-database-password
ENCRYPTION_KEY=your-n8n-encryption-key
N8N_DOMAIN=n8n.yourdomain.com
```
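Both secrets should be strong random values, not hand-typed strings. One way to generate them (assuming openssl is available, as it is on most Linux distributions):

```shell
# Generate a 32-byte encryption key (64 hex characters) for N8N_ENCRYPTION_KEY
openssl rand -hex 32

# Generate a random database password for DB_PASSWORD
openssl rand -base64 24
```

Paste the output into the .env file; once n8n has encrypted credentials with a key, that key must never change.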
Critical Configuration Details
The command: worker Line
Workers are started with the worker command. This tells n8n to run in worker mode -- it connects to Redis, pulls jobs, and executes them. It does NOT start the web UI or listen for webhooks.
Shared N8N_ENCRYPTION_KEY
Every component (main + all workers) must use the exact same N8N_ENCRYPTION_KEY. If a worker has a different key, it can't decrypt stored credentials. Workflows will fail with credential errors that don't appear in the main process logs.
The encryption key mismatch is the #1 cause of queue mode failures. If workers can pull and start workflows but every execution fails with credential errors, the keys don't match. Check with docker compose exec n8n-main env | grep N8N_ENCRYPTION_KEY and compare the output with the same command run against a worker.
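The comparison can be scripted. This hypothetical helper (the function name and structure are illustrative, not part of n8n) extracts N8N_ENCRYPTION_KEY from two env-style dumps, as produced by docker compose exec <service> env, and reports whether they match:

```shell
# compare_keys: given two env dumps, compare their N8N_ENCRYPTION_KEY values
compare_keys() {
  key1=$(printf '%s\n' "$1" | grep '^N8N_ENCRYPTION_KEY=' | cut -d= -f2-)
  key2=$(printf '%s\n' "$2" | grep '^N8N_ENCRYPTION_KEY=' | cut -d= -f2-)
  if [ -n "$key1" ] && [ "$key1" = "$key2" ]; then
    echo "MATCH"
  else
    echo "MISMATCH"
  fi
}

# In practice, capture the dumps first:
#   main_env=$(docker compose exec n8n-main env)
#   worker_env=$(docker compose exec n8n-worker env)
#   compare_keys "$main_env" "$worker_env"
compare_keys "N8N_ENCRYPTION_KEY=abc123" "N8N_ENCRYPTION_KEY=abc123"   # MATCH
compare_keys "N8N_ENCRYPTION_KEY=abc123" "N8N_ENCRYPTION_KEY=def456"   # MISMATCH
```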
Binary Data Mode: Filesystem
```
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```
In queue mode, binary data (files, images) must be stored on the filesystem -- not in memory. The main process receives the file via webhook, stores it on disk, and the worker picks up the job and reads the file from the same disk location.
This means the n8n_data volume must be shared between the main process and all workers. In the Docker Compose above, all services mount the same n8n_data volume.
Redis Configuration
```
redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
```
This caps Redis at 256 MB and evicts the least recently used keys when full. For n8n's job queue, 256 MB is more than enough for thousands of concurrent jobs. The allkeys-lru policy ensures old completed job data gets cleaned up automatically.
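A quick sanity check on "thousands of jobs": a queued job entry holds the workflow payload, not binary data (which lives on the filesystem). Assuming a generous ~16 KB per queued entry (an illustrative figure, not a measured one), the 256 MB cap is comfortable:

```shell
# Rough capacity estimate: queued jobs that fit under the Redis maxmemory cap
MAXMEMORY_KB=$((256 * 1024))   # 256 MB expressed in KB
JOB_SIZE_KB=16                 # assumed average job entry size (illustrative)
echo "$((MAXMEMORY_KB / JOB_SIZE_KB)) jobs"   # 16384 jobs
```

Even at several times that payload size, the queue itself is unlikely to be your memory bottleneck.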
Scaling Workers
Scale with Docker Compose
The simplest way to add more workers:
```shell
# Scale to 5 workers
docker compose up -d --scale n8n-worker=5
```
Each new worker connects to Redis and starts pulling jobs immediately. No restart of the main process needed.
How Many Workers Do You Need?
| Workload | Workers | Total vCPU | Total RAM |
|---|---|---|---|
| Light (10-20 workflows, low concurrency) | 1-2 | 4 | 8 GB |
| Medium (20-50 workflows, moderate webhooks) | 2-3 | 6-8 | 16 GB |
| Heavy (50+ workflows, high concurrency) | 3-5 | 8-12 | 16-32 GB |
| AI-heavy (agent workflows with LLM calls) | 5-10 | 12-16 | 32-64 GB |
Each worker can handle multiple concurrent executions. The number of concurrent executions per worker is controlled by the concurrency setting.
Concurrency Tuning
N8N_CONCURRENCY_PRODUCTION_LIMIT
This controls how many workflows each worker can execute simultaneously:
```
# Default: 5 concurrent executions per worker
N8N_CONCURRENCY_PRODUCTION_LIMIT=5
```
Setting this too high starves individual workflows of CPU and memory. Setting it too low wastes worker capacity on idle time.
Recommended settings by workload type:
| Workflow Type | Concurrency | Why |
|---|---|---|
| API-heavy (HTTP requests, webhooks) | 10-20 | Mostly waiting on network I/O |
| Data processing (transforms, merges) | 3-5 | CPU-bound, needs headroom |
| AI agents (LLM calls, tool chains) | 2-3 | High memory, long execution times |
| Mixed workloads | 5 | Safe default |
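The two knobs multiply: total capacity = workers × per-worker concurrency. A small sizing sketch (the variable names are just for illustration; the formula is ordinary ceiling division):

```shell
# How many workers do you need for a target level of concurrent executions?
TARGET_CONCURRENT=40    # peak simultaneous executions you expect
CONCURRENCY=10          # N8N_CONCURRENCY_PRODUCTION_LIMIT per worker

# Ceiling division so capacity is at least the target
WORKERS=$(( (TARGET_CONCURRENT + CONCURRENCY - 1) / CONCURRENCY ))
echo "Workers needed: $WORKERS"   # Workers needed: 4
```

With API-heavy workflows at concurrency 10, four workers cover 40 simultaneous executions; at concurrency 3 (AI agents), the same target would need 14 workers.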
Monitor to Find Your Sweet Spot
Enable Prometheus metrics to track worker utilization:
```
N8N_METRICS=true
N8N_METRICS_INCLUDE_QUEUE_METRICS=true
```
Key metrics to watch:
- n8n_queue_active -- number of jobs currently executing across all workers
- n8n_queue_waiting -- jobs in the queue waiting for a worker
- n8n_queue_completed -- total completed jobs
If n8n_queue_waiting is consistently above 0, you need more workers or higher concurrency. If workers are idle most of the time, you can scale down to save resources.
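That rule of thumb is easy to script. This hypothetical helper (not part of n8n; the thresholds mirror the guidance above) turns a sampled waiting/active pair into a scaling hint:

```shell
# scaling_hint: given jobs waiting and jobs active, print a recommendation
scaling_hint() {
  waiting=$1
  active=$2
  if [ "$waiting" -gt 0 ]; then
    echo "scale up: $waiting jobs waiting, $active active"
  elif [ "$active" -eq 0 ]; then
    echo "scale down candidate: workers idle"
  else
    echo "ok: $active active, queue empty"
  fi
}

scaling_hint 12 10   # scale up: 12 jobs waiting, 10 active
scaling_hint 0 0     # scale down candidate: workers idle
scaling_hint 0 7     # ok: 7 active, queue empty
```

In practice you would feed it values sampled over time from Prometheus, not a single snapshot, since a momentary spike in the queue is normal.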
Database Connection Pool Sizing
Each worker opens its own connections to PostgreSQL. With 5 workers at default settings, you can have 50+ database connections. PostgreSQL's default max_connections is 100.
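A back-of-envelope check before you hit the limit. The ~10 connections per process figure follows from the "5 workers → 50+ connections" estimate above; treat it as an assumption, not a measured constant:

```shell
# Estimate total PostgreSQL connections for a queue mode deployment
WORKERS=5
MAIN=1
CONNS_PER_PROCESS=10   # assumed connection pool size per n8n process

TOTAL=$(( (WORKERS + MAIN) * CONNS_PER_PROCESS ))
echo "Estimated connections: $TOTAL of default max_connections 100"
# With 5 workers this lands around 60 -- close enough to 100 to justify raising it
```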
Fix: Increase PostgreSQL Connections
```yaml
services:
  postgres:
    image: postgres:16-alpine
    command: postgres -c max_connections=200
```
Or for more fine-grained control:
```
# In postgresql.conf
max_connections = 200
shared_buffers = 256MB
work_mem = 4MB
```
Fix: Use Connection Pooling
For larger deployments (10+ workers), add PgBouncer as a connection pooler:
```yaml
services:
  pgbouncer:
    image: edoburu/pgbouncer:latest
    environment:
      DATABASE_URL: postgres://n8n:${DB_PASSWORD}@postgres:5432/n8n
      POOL_MODE: transaction
      MAX_CLIENT_CONN: 500
      DEFAULT_POOL_SIZE: 25
    depends_on:
      - postgres
```
Then point n8n at PgBouncer instead of PostgreSQL directly.
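Repointing n8n is a host swap in the environment of n8n-main and every worker (assuming the pooler service is named pgbouncer as above and listens on its default port 5432):

```yaml
# In the environment section of n8n-main and n8n-worker,
# replace the direct PostgreSQL host with the pooler:
- DB_POSTGRESDB_HOST=pgbouncer
- DB_POSTGRESDB_PORT=5432
```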
Contabo
Contabo VPS 3: 12 vCPU, 24 GB RAM for $13.49/mo. Run n8n main + 5 workers + PostgreSQL + Redis on a single server.
* Affiliate link — we may earn a commission at no extra cost to you.
Monitoring Queue Mode
Quick Health Check
```shell
# Check all services are running
docker compose ps

# Check Redis queue status
docker compose exec redis redis-cli LLEN bull:n8n:jobs:wait
docker compose exec redis redis-cli LLEN bull:n8n:jobs:active

# Check worker logs for errors
docker compose logs n8n-worker --tail 50

# Check PostgreSQL connection count
docker compose exec postgres psql -U n8n -c "SELECT count(*) FROM pg_stat_activity;"
```

Note the use of docker compose exec with the service name: plain docker exec requires the generated container name (e.g. project-redis-1) unless you set container_name explicitly.
Set Up Grafana Dashboard
n8n provides a pre-built Grafana dashboard (ID: 24474) that shows execution rates, queue depths, and worker performance. Connect it to your Prometheus instance for real-time monitoring.
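For the dashboard to have data, Prometheus needs to scrape n8n. A minimal scrape job, assuming Prometheus runs on the same Docker network and n8n serves metrics from the main process on port 5678 (the endpoint exposed when N8N_METRICS=true):

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: n8n
    metrics_path: /metrics
    scrape_interval: 15s
    static_configs:
      - targets: ["n8n-main:5678"]
```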
Common Pitfalls and Fixes
| Problem | Cause | Fix |
|---|---|---|
| Workers start but never execute | Wrong encryption key | Use identical N8N_ENCRYPTION_KEY on all services |
| "Database is locked" errors | Using SQLite | Migrate to PostgreSQL (required for queue mode) |
| Binary data not found | Separate volumes | Share the same n8n_data volume across main and workers |
| Credential decryption fails | Key mismatch | Copy the exact key from main to all workers |
| "Too many connections" | DB connection exhaustion | Increase max_connections or add PgBouncer |
| Workers idle, jobs waiting | Redis connectivity issue | Check QUEUE_BULL_REDIS_HOST matches service name |
| Workflows fail after scaling | Memory pressure | Increase --max-old-space-size or add more RAM |
| Duplicate executions | Multiple main processes | Only ONE main process should run; everything else must be workers |
A common mistake: running multiple instances of the main process instead of workers. Only one main process should exist. Additional capacity comes from workers (command: worker), not from duplicating the main process.
Migrating from Regular to Queue Mode
If you have an existing n8n instance, follow this migration path:
- Back up everything -- database, .n8n directory, encryption key
- Set up PostgreSQL if not already (queue mode requires it)
- Set up Redis -- add it to your Docker Compose
- Add queue environment variables to your existing n8n service: EXECUTIONS_MODE=queue, QUEUE_BULL_REDIS_HOST=redis, QUEUE_BULL_REDIS_PORT=6379
- Add a worker service to Docker Compose with command: worker
- Set binary data mode to filesystem
- Restart -- docker compose down && docker compose up -d
- Verify -- check logs, test a webhook trigger, confirm the worker picks up the execution
Your existing workflows, credentials, and execution history are preserved. Queue mode is a deployment change, not a data migration.
VPS Sizing for Queue Mode
| Setup | RAM | vCPU | Cost | What It Handles |
|---|---|---|---|---|
| Main + 1 worker + PG + Redis | 8 GB | 4 | $4.50-8/mo | Entry-level queue mode |
| Main + 2 workers + PG + Redis | 16 GB | 6-8 | $8-15/mo | Moderate production load |
| Main + 5 workers + PG + Redis | 32 GB | 8-12 | $15-30/mo | High-throughput webhooks |
| Main + 10 workers + PG + Redis + PgBouncer | 64 GB | 16 | $30-60/mo | Enterprise-grade workloads |
Provider picks for queue mode:
- Contabo VPS 1 ($4.50/mo): 8 GB RAM, 4 vCPU -- enough for main + 1 worker
- Contabo VPS 2 ($8.49/mo): 16 GB RAM, 6 vCPU -- main + 3 workers comfortably
- Hostinger KVM 4 ($15.99/mo): 16 GB RAM -- great management panel for monitoring
- DigitalOcean ($24-48/mo): 8-16 GB RAM with managed PostgreSQL option
- Vultr ($12-48/mo): Flexible plans with hourly billing for scaling up during peaks
Hostinger
Hostinger KVM 4: 16 GB RAM for $15.99/mo — run n8n queue mode with multiple workers and PostgreSQL on a single server.
* Affiliate link — we may earn a commission at no extra cost to you.
Conclusion
Queue mode is the single biggest architectural upgrade you can make to a self-hosted n8n instance. It transforms n8n from a single-threaded process that chokes under concurrent load into a distributed system that scales horizontally.
Start with the Docker Compose config in this guide, set your encryption key consistently across all services, use filesystem binary data mode, and scale workers based on your workload. Monitor with Prometheus metrics and scale up when the queue starts backing up.
The jump from 23 to 162 requests per second doesn't require expensive hardware -- it requires running n8n the way it was designed to run at scale.
Ready to automate? Get a VPS today.
Start today with Hostinger VPS hosting. Special pricing available.
* Affiliate link — we may earn a commission at no extra cost to you.