n8n Out of Memory: Fix JavaScript Heap Errors on Your VPS
Getting 'n8n may have run out of memory' errors? Here's how to fix heap allocation, optimize workflows, and choose the right VPS for your workload.
The Error Everyone Hits
If you've self-hosted n8n for more than a week, you've probably seen one of these:
```
Execution stopped at this node – n8n may have run out of memory while running this execution

FATAL ERROR: Reached heap limit Allocation failed – JavaScript heap out of memory
```
These aren't bugs in your workflows. They're resource limits. n8n runs on Node.js, which by default caps the JavaScript heap at around 1.5-2 GB. When a workflow processes large datasets, chains AI agent calls, or runs complex Code nodes, it hits that ceiling and crashes.
Here's how to fix it at every level: Node.js configuration, workflow design, Docker limits, and VPS sizing.
Quick Fix: Increase the Heap Size
The fastest way to stop OOM crashes is to tell Node.js to use more memory:
```
NODE_OPTIONS="--max-old-space-size=4096"
```
This allocates 4 GB to the V8 heap. Add it to your Docker Compose environment:
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096
    deploy:
      resources:
        limits:
          memory: 6g
        reservations:
          memory: 3g
```
How Much Heap Should You Allocate?
The rule: set max-old-space-size to 50-70% of your available RAM, leaving room for the OS, PostgreSQL, and other processes.
| VPS RAM | Recommended Heap Size | NODE_OPTIONS Value |
|---|---|---|
| 2 GB | 1024 MB | --max-old-space-size=1024 |
| 4 GB | 2048-2560 MB | --max-old-space-size=2048 |
| 8 GB | 4096-5120 MB | --max-old-space-size=4096 |
| 16 GB | 8192-10240 MB | --max-old-space-size=8192 |
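The 50-70% rule behind the table can be sketched as a quick calculation. This is a hypothetical helper for back-of-envelope sizing, not part of n8n:

```javascript
// Sketch: recommended --max-old-space-size as a fraction (50-70%)
// of total VPS RAM, rounded down to a tidy 256 MB step.
// recommendedHeapMB is a hypothetical helper, not an n8n API.
function recommendedHeapMB(vpsRamGB, fraction = 0.5) {
  const totalMB = vpsRamGB * 1024;
  return Math.floor((totalMB * fraction) / 256) * 256;
}

console.log(recommendedHeapMB(2)); // 1024 — matches the 2 GB row
console.log(recommendedHeapMB(8)); // 4096 — matches the 8 GB row
```

Raising `fraction` toward 0.7 gives the upper end of each table row, e.g. `recommendedHeapMB(16, 0.625)` lands at 10240.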
Always set the heap size LOWER than your Docker memory limit. If they're equal, the Node.js process has no room for buffers, native code, and stack space — and the Docker OOM killer will terminate the container instead of letting Node.js garbage collect gracefully.
Understand What's Eating Memory
n8n's baseline is around 180 MB at idle. But memory spikes come from specific patterns:
Code Nodes
Code nodes run JavaScript/Python inside n8n's process. A Code node that loads a 50 MB JSON file into memory, transforms it, and outputs it consumes at least 150 MB (input + working copy + output). If that node is in a loop, multiply by the number of iterations.
Fix: Keep Code nodes minimal. Use built-in nodes (HTTP Request, Set, IF) when possible -- they're optimized for n8n's data pipeline.
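When a Code node is unavoidable, mutating items in place instead of building copies keeps only one version of the dataset on the heap. A minimal sketch (the item shape follows n8n's `{ json: ... }` convention; the transform itself is illustrative):

```javascript
// Leaner Code-node pattern: mutate each item's json in place and
// return the same array, so the heap never holds input + working
// copy + output simultaneously.
function transformItems(items) {
  for (const item of items) {
    // Hypothetical transform: keep only the fields downstream nodes need.
    item.json = { id: item.json.id, total: item.json.price * item.json.qty };
  }
  return items; // same array, no second copy
}

const items = [{ json: { id: 1, price: 2, qty: 3, blob: "large payload..." } }];
console.log(transformItems(items)[0].json.total); // 6
```

Dropping unused fields (like `blob` above) early also shrinks what every downstream node has to carry.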
AI Agent Nodes
AI workflows accumulate context. Each message in a conversation adds tokens that stay in memory. After 10+ rounds with tool calls, context can exceed 150,000 tokens -- consuming significant heap space.
Fix: Use the Chat Memory Manager node to limit conversation length. Set a maximum number of messages or tokens to keep in context.
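The effect of a max-messages cap can be sketched outside n8n as a plain function. The message shape and the limit of 10 are illustrative assumptions, not the node's actual internals:

```javascript
// Keep the system prompt plus only the most recent N messages,
// mirroring what a max-messages setting does to conversation context.
function trimHistory(messages, maxMessages = 10) {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxMessages)];
}

const history = [{ role: "system", content: "You are helpful." }].concat(
  Array.from({ length: 14 }, (_, i) => ({ role: "user", content: `msg ${i}` }))
);
console.log(trimHistory(history, 10).length); // system prompt + last 10
```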
Large Dataset Processing
Fetching 10,000+ items from an API, merging datasets with the Merge node, or converting large files to Base64 in memory are all common OOM triggers.
Fix: Process in batches using the SplitInBatches node.
Wait Nodes
Wait nodes in Regular Mode keep the entire execution context in RAM during the wait period. A workflow waiting 24 hours with a large dataset loaded holds that memory the entire time.
Fix: Use Wait nodes in Webhook mode when possible, or split the workflow so the waiting step holds minimal data.
Optimize Workflows: SplitInBatches
The SplitInBatches node is your primary tool for preventing OOM errors. Instead of processing 10,000 items at once, process them in groups of 50:
HTTP Request (10,000 items) → SplitInBatches (50 per batch) → Process → Loop back
Recommended batch size: 50 items. This balances memory usage against execution time.
Critical warning: Do NOT connect the SplitInBatches "Done" output back to itself. This creates an infinite loop that consumes memory until n8n crashes. Connect "Done" to a downstream node (like a summary or notification).
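What SplitInBatches does can be sketched as a plain loop. `processBatch` is a hypothetical stand-in for whatever your per-batch nodes do:

```javascript
// Process items in fixed-size batches so peak memory is bounded by
// batchSize, not by the total item count — the core idea behind
// SplitInBatches.
function processInBatches(items, batchSize, processBatch) {
  const summaries = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize); // only this slice is "live"
    summaries.push(processBatch(batch)); // keep a small result, drop the batch
  }
  return summaries;
}

const sizes = processInBatches(
  Array.from({ length: 200 }, (_, i) => i),
  50,
  (batch) => batch.length
);
console.log(sizes); // [ 50, 50, 50, 50 ]
```

The loop's "Done" equivalent is simply falling out of the `for` loop with the small `summaries` array, which is exactly what should flow to your downstream summary node.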
Optimize Workflows: Sub-Workflows
Sub-workflows provide memory isolation. When a sub-workflow completes, n8n frees its entire memory allocation. The parent workflow only receives the small result set.
Pattern:
Parent: Fetch Data → SplitInBatches → Execute Sub-Workflow (batch) → Collect Results
Sub: Receive Batch → Process → Return Summary
Each sub-workflow execution gets fresh memory. A 10,000-item dataset processed in 200-item sub-workflow batches never holds more than 200 items in memory at once.
v2.0 Task Runners: Built-In Memory Isolation
n8n v2.0 introduced task runners, which execute Code nodes in isolated V8 sandboxes. This is a major improvement -- a runaway Code node can no longer crash the entire n8n process.
Task runners are enabled by default in v2.0. They run as a separate Docker image:
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    # ... main n8n config
  n8n-runner:
    image: n8nio/runners:latest
    environment:
      - N8N_RUNNERS_MODE=internal
```
Resource recommendations for task runners:
| Workload | Runner RAM | Runner CPU |
|---|---|---|
| Lightweight | 4 GB | 2 vCores |
| Moderate | 8 GB | 4 vCores |
| Heavy concurrent | 16 GB | 8 vCores |
Binary Data Mode Change
v2.0 also changes how binary data (files, images) is handled. Set this to prevent in-memory binary bloat:
```
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```
This stores file data on disk instead of in the JavaScript heap, significantly reducing memory pressure for workflows that process files.
Contabo
Contabo Cloud VPS 1: 4 vCPU and 8 GB RAM for $4.50/mo. Enough headroom for n8n + task runners without hitting memory limits.
* Affiliate link — we may earn a commission at no extra cost to you.
Set Up Memory Monitoring
Don't wait for crashes to tell you about memory problems. Enable Prometheus metrics:
```
N8N_METRICS=true
N8N_METRICS_INCLUDE_QUEUE_METRICS=true
```
This exposes metrics at the /metrics endpoint, including heap size, garbage collection duration, and event loop lag. Connect to Grafana for dashboards (pre-built dashboard ID: 24474).
Quick monitoring without Prometheus:
```bash
# Check n8n container memory usage
docker stats n8n --no-stream

# Check for OOM kills in system logs
dmesg | grep -i oom

# Check Node.js heap from inside the container
docker exec n8n node -e "console.log(process.memoryUsage())"
```
Alert threshold: If resident memory stays above 850 MB for 10+ minutes continuously, your instance is at risk of OOM.
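The `process.memoryUsage()` one-liner above can be expanded into a simple threshold check. This is a sketch; the 850 MB figure mirrors the alert threshold stated above:

```javascript
// Flag the process when resident memory (RSS) crosses a threshold.
// Could be run inside the container, e.g.: docker exec n8n node check-memory.js
const THRESHOLD_MB = 850; // alert threshold from the monitoring rule above

function checkMemory(usage = process.memoryUsage()) {
  const rssMB = Math.round(usage.rss / 1024 / 1024);
  const heapMB = Math.round(usage.heapUsed / 1024 / 1024);
  return { rssMB, heapMB, atRisk: rssMB > THRESHOLD_MB };
}

console.log(checkMemory());
```

Polling this every minute (cron plus a webhook notification, for example) catches the "above 850 MB for 10+ minutes" pattern before the OOM killer does.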
Docker Memory Configuration
Set explicit memory limits in Docker Compose to prevent n8n from consuming all VPS RAM (which would crash other services like PostgreSQL):
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    deploy:
      resources:
        limits:
          cpus: '2.5'
          memory: 4g
        reservations:
          cpus: '1.0'
          memory: 2g
    environment:
      - NODE_OPTIONS=--max-old-space-size=2560
```
The relationship: Docker limit (4g) > heap size (2560m) > reservation (2g). This gives Node.js room for non-heap memory while preventing it from starving other processes.
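The ordering can be sanity-checked in a few lines. A hypothetical helper, with all values in MB:

```javascript
// Verify: Docker limit > Node heap > Docker reservation (all in MB).
// A heap at or above the limit invites the Docker OOM killer instead
// of a graceful garbage-collection cycle.
function validMemoryConfig(limitMB, heapMB, reservationMB) {
  return limitMB > heapMB && heapMB > reservationMB;
}

console.log(validMemoryConfig(4096, 2560, 2048)); // true — the config above
console.log(validMemoryConfig(4096, 4096, 2048)); // false — heap equals limit
```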
VPS Sizing Guide for n8n
| Workload | RAM | vCPU | Cost | What It Handles |
|---|---|---|---|---|
| Dev/testing | 2 GB | 1 | ~$5/mo | Simple workflows, expect occasional OOM |
| Light production | 4 GB | 2 | ~$6-12/mo | 10-20 workflows, no AI, small datasets |
| Standard production | 8 GB | 4 | ~$8-24/mo | 20-50 workflows, AI agents, moderate data |
| Heavy production | 16 GB | 4-8 | ~$15-48/mo | 50+ workflows, heavy AI, large datasets |
| Queue mode | 16-32 GB | 8-16 | ~$29-100/mo | Main + workers, high concurrency |
Our recommendations by provider:
- Under $5/mo: Contabo VPS 1 — 8 GB RAM, 4 vCPU at $4.50/mo. Best resources per dollar.
- $5-10/mo: Hostinger KVM 1 — 4 GB RAM at $6.49/mo. Great control panel and support.
- $10-25/mo: DigitalOcean Pro — 4 GB RAM, 2 vCPU at $24/mo. Best docs, monitoring built in.
- $5-10/mo: Vultr Compute Plus — 2 GB RAM at $10/mo. 32 global locations.
Hostinger
Hostinger KVM 2 at $8.49/mo gives you 8 GB RAM — plenty of headroom for n8n production workloads with AI agents.
* Affiliate link — we may earn a commission at no extra cost to you.
Troubleshooting Checklist
When n8n runs out of memory:
- Check which workflow caused it — look at the execution log for the failed execution
- Check what the workflow processes — large datasets, file conversions, or AI calls are the usual suspects
- Increase heap size — set `NODE_OPTIONS=--max-old-space-size=4096`
- Add SplitInBatches — for any workflow processing more than 100 items
- Use sub-workflows — for memory isolation on heavy processing
- Set binary data mode — `N8N_DEFAULT_BINARY_DATA_MODE=filesystem`
- Upgrade to v2.0 task runners — for Code node isolation
- Monitor memory — enable Prometheus metrics or use `docker stats`
- Upgrade your VPS — if none of the above helps, you need more RAM
Conclusion
n8n memory errors are frustrating but entirely solvable. Start with the quick fix (increase max-old-space-size), then work through workflow optimizations (batching, sub-workflows, binary data mode). For sustained workloads, upgrade to v2.0 task runners and set up monitoring so you catch problems before they crash your instance.
And if you're still hitting limits after all optimizations, the simplest fix is more RAM. A VPS upgrade from 4 GB to 8 GB typically costs an extra $2-4/month and eliminates most memory issues outright.
Ready to start automating? Get a VPS today.
Get started with Hostinger VPS hosting today. Special pricing available.
* Affiliate link — we may earn a commission at no extra cost to you