Tutorial · 10 min read

n8n Out of Memory: Fixing JavaScript Heap Errors on a VPS

Seeing the 'n8n may have run out of memory' error? Here's how to fix heap allocation, optimize your workflows, and choose the right VPS.

By AutomationVPS

The Error Everyone Hits

If you've self-hosted n8n for more than a week, you've probably seen one of these:

Execution stopped at this node – n8n may have run out of memory while running this execution

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory

These aren't bugs in your workflows. They're resource limits. n8n runs on Node.js, which by default caps the JavaScript heap at around 1.5-2 GB. When a workflow processes large datasets, chains AI agent calls, or runs complex Code nodes, it hits that ceiling and crashes.

Here's how to fix it at every level: Node.js configuration, workflow design, Docker limits, and VPS sizing.

Quick Fix: Increase the Heap Size

The fastest way to stop OOM crashes is to tell Node.js to use more memory:

NODE_OPTIONS="--max-old-space-size=4096"

This allocates 4 GB to the V8 heap. Add it to your Docker Compose environment:

services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096
    deploy:
      resources:
        limits:
          memory: 6g
        reservations:
          memory: 3g

How Much Heap Should You Allocate?

The rule: set max-old-space-size to 50-70% of your available RAM, leaving room for the OS, PostgreSQL, and other processes.

VPS RAM   Recommended Heap Size   NODE_OPTIONS Value
2 GB      1024 MB                 --max-old-space-size=1024
4 GB      2048-2560 MB            --max-old-space-size=2048
8 GB      4096-5120 MB            --max-old-space-size=4096
16 GB     8192-10240 MB           --max-old-space-size=8192
⚠️ Always set the heap size LOWER than your Docker memory limit. If they're equal, the Node.js process has no room for buffers, native code, and stack space — and the Docker OOM killer will terminate the container instead of letting Node.js garbage collect gracefully.
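The 50-70% rule above is easy to apply by hand, but as an illustration it can be sketched as a small calculation (a hypothetical helper, not part of n8n or Node.js):

```javascript
// Sketch: derive a heap size from total VPS RAM using the 50-70% rule.
// Hypothetical helper -- not part of n8n or Node.js.
function recommendedHeapMb(totalRamMb, fraction = 0.6) {
  // Leave the remainder for the OS, PostgreSQL, and non-heap Node.js memory.
  const heap = Math.floor(totalRamMb * fraction);
  // Round down to a multiple of 256 MB for a tidy NODE_OPTIONS value.
  return heap - (heap % 256);
}

console.log(`NODE_OPTIONS=--max-old-space-size=${recommendedHeapMb(8192)}`);
```

For an 8 GB VPS this lands in the 4096-5120 MB band from the table above.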

Understand What's Eating Memory

n8n's baseline is around 180 MB at idle. But memory spikes come from specific patterns:

Code Nodes

Code nodes run JavaScript/Python inside n8n's process. A Code node that loads a 50 MB JSON file into memory, transforms it, and outputs it consumes at least 150 MB (input + working copy + output). If that node is in a loop, multiply by the number of iterations.

Fix: Keep Code nodes minimal. Use built-in nodes (HTTP Request, Set, IF) when possible -- they're optimized for n8n's data pipeline.
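When a Code node is unavoidable, mutating items in place avoids holding the input plus a full transformed copy at once. A sketch of what such a node body might look like (the field names "email", "name", and "rawPayload" are hypothetical):

```javascript
// Sketch of a memory-lean transform, as you might write inside an n8n Code node.
// Mutating each item's json in place avoids materializing a second full dataset.
function transform(items) {
  for (const item of items) {
    item.json.email = String(item.json.email || "").toLowerCase();
    item.json.name = String(item.json.name || "").trim();
    delete item.json.rawPayload; // drop large fields you no longer need
  }
  return items; // same array -- no duplicate copy of the data in memory
}
```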

AI Agent Nodes

AI workflows accumulate context. Each message in a conversation adds tokens that stay in memory. After 10+ rounds with tool calls, context can exceed 150,000 tokens -- consuming significant heap space.

Fix: Use the Chat Memory Manager node to limit conversation length. Set a maximum number of messages or tokens to keep in context.

Large Dataset Processing

Fetching 10,000+ items from an API, merging datasets with the Merge node, or converting large files to Base64 in memory are all common OOM triggers.

Fix: Process in batches using the SplitInBatches node.

Wait Nodes

Wait nodes in Regular Mode keep the entire execution context in RAM during the wait period. A workflow waiting 24 hours with a large dataset loaded holds that memory the entire time.

Fix: Use Wait nodes in Webhook mode when possible, or split the workflow so the waiting step holds minimal data.

Optimize Workflows: SplitInBatches

The SplitInBatches node is your primary tool for preventing OOM errors. Instead of processing 10,000 items at once, process them in groups of 50:

HTTP Request (10,000 items) → SplitInBatches (50 per batch) → Process → Loop back

Recommended batch size: 50 items. This balances memory usage against execution time.

Critical warning: Do NOT connect the SplitInBatches "Done" output back to itself. This creates an infinite loop that consumes memory until n8n crashes. Connect "Done" to a downstream node (like a summary or notification).
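The pattern SplitInBatches implements is conceptually simple; in plain JavaScript the batching loop looks like this (a sketch, with hypothetical function names):

```javascript
// Sketch of the batching pattern SplitInBatches implements:
// process items in fixed-size chunks so only one chunk is live at a time.
function processInBatches(items, batchSize, handler) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Only `batch` (e.g. 50 items) needs derived copies in memory here,
    // never all 10,000 items' worth of transformed data at once.
    results.push(handler(batch));
  }
  return results; // one small summary entry per batch
}
```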

Optimize Workflows: Sub-Workflows

Sub-workflows provide memory isolation. When a sub-workflow completes, n8n frees its entire memory allocation. The parent workflow only receives the small result set.

Pattern:

Parent: Fetch Data → SplitInBatches → Execute Sub-Workflow (batch) → Collect Results
Sub:    Receive Batch → Process → Return Summary

Each sub-workflow execution gets fresh memory. A 10,000-item dataset processed in 200-item sub-workflow batches never holds more than 200 items in memory at once.
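The key to this pattern is that the sub-workflow returns only a summary, never the full processed data. Sketched in plain JavaScript (names and the "processing" step are hypothetical):

```javascript
// Sketch: a sub-workflow processes a batch and returns a tiny summary,
// so the parent never accumulates the full processed dataset.
function subWorkflow(batch) {
  let ok = 0;
  let failed = 0;
  for (const item of batch) {
    // Stand-in for the real per-item processing.
    item.value > 0 ? ok++ : failed++;
  }
  // Return counts, not the processed items themselves.
  return { processed: batch.length, ok, failed };
}
```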

v2.0 Task Runners: Built-In Memory Isolation

n8n v2.0 introduced task runners, which execute Code nodes in isolated V8 sandboxes. This is a major improvement -- a runaway Code node can no longer crash the entire n8n process.

Task runners are enabled by default in v2.0. They run as a separate Docker image:

services:
  n8n:
    image: n8nio/n8n:latest
    # ... main n8n config

  n8n-runner:
    image: n8nio/runners:latest
    environment:
      - N8N_RUNNERS_MODE=internal

Resource recommendations for task runners:

Workload           Runner RAM   Runner CPU
Lightweight        4 GB         2 vCores
Moderate           8 GB         4 vCores
Heavy concurrent   16 GB        8 vCores

Binary Data Mode Change

v2.0 also changes how binary data (files, images) is handled. Set this to prevent in-memory binary bloat:

N8N_DEFAULT_BINARY_DATA_MODE=filesystem

This stores file data on disk instead of in the JavaScript heap, significantly reducing memory pressure for workflows that process files.

Contabo

Contabo Cloud VPS 1: 4 vCPU and 8 GB RAM for $4.50/mo. Enough headroom for n8n + task runners without hitting memory limits.

Visit Contabo

* Affiliate link — we may earn a commission at no extra cost to you.

Set Up Memory Monitoring

Don't wait for crashes to tell you about memory problems. Enable Prometheus metrics:

N8N_METRICS=true
N8N_METRICS_INCLUDE_QUEUE_METRICS=true

This exposes metrics at the /metrics endpoint, including heap size, garbage collection duration, and event loop lag. Connect to Grafana for dashboards (pre-built dashboard ID: 24474).
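If you want to read a value out of the endpoint without a full Prometheus stack, the text format is simple to parse. A sketch (the metric name below is an assumption; inspect your own /metrics output for the exact names your instance exposes):

```javascript
// Sketch: extract a gauge value from Prometheus text-format output.
// The metric name used in the test is an assumption -- check /metrics.
function readGauge(metricsText, name) {
  for (const line of metricsText.split("\n")) {
    // Match "name 123" or "name{label=\"x\"} 123"; skip # HELP / # TYPE lines.
    if (line.startsWith(name + " ") || line.startsWith(name + "{")) {
      return Number(line.trim().split(/\s+/).pop());
    }
  }
  return null; // metric not present
}
```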

Quick monitoring without Prometheus:

# Check n8n container memory usage
docker stats n8n --no-stream

# Check for OOM kills in system logs
dmesg | grep -i oom

# Check Node.js heap from inside the container
docker exec n8n node -e "console.log(process.memoryUsage())"

Alert threshold: If resident memory stays above 850 MB for 10+ minutes continuously, your instance is at risk of OOM.
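A minimal in-process version of the same check might look like this (the threshold is the 850 MB figure above; the polling interval is an arbitrary assumption):

```javascript
// Sketch: warn when the Node.js resident set exceeds a threshold.
// 850 MB matches the alert threshold above; the 60 s interval is arbitrary.
const THRESHOLD_BYTES = 850 * 1024 * 1024;

function checkMemory(usage = process.memoryUsage()) {
  const overLimit = usage.rss > THRESHOLD_BYTES;
  if (overLimit) {
    console.warn(`RSS ${Math.round(usage.rss / 1048576)} MB exceeds threshold`);
  }
  return overLimit;
}

// Poll once a minute; wire this into real alerting instead of console.warn.
// setInterval(() => checkMemory(), 60_000);
```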

Docker Memory Configuration

Set explicit memory limits in Docker Compose to prevent n8n from consuming all VPS RAM (which would crash other services like PostgreSQL):

services:
  n8n:
    image: n8nio/n8n:latest
    deploy:
      resources:
        limits:
          cpus: '2.5'
          memory: 4g
        reservations:
          cpus: '1.0'
          memory: 2g
    environment:
      - NODE_OPTIONS=--max-old-space-size=2560

The relationship: Docker limit (4g) > heap size (2560m) > reservation (2g). This gives Node.js room for non-heap memory while preventing it from starving other processes.

VPS Sizing Guide for n8n

Workload             RAM        vCPU   Cost          What It Handles
Dev/testing          2 GB       1      ~$5/mo        Simple workflows, expect occasional OOM
Light production     4 GB       2      ~$6-12/mo     10-20 workflows, no AI, small datasets
Standard production  8 GB       4      ~$8-24/mo     20-50 workflows, AI agents, moderate data
Heavy production     16 GB      4-8    ~$15-48/mo    50+ workflows, heavy AI, large datasets
Queue mode           16-32 GB   8-16   ~$29-100/mo   Main + workers, high concurrency

Our recommendations by provider:

  • Under $5/mo: Contabo VPS 1 — 8 GB RAM, 4 vCPU at $4.50/mo. Best resources per dollar.
  • $5-10/mo: Hostinger KVM 1 — 4 GB RAM at $6.49/mo. Great control panel and support.
  • $10-25/mo: DigitalOcean Pro — 4 GB RAM, 2 vCPU at $24/mo. Best docs, monitoring built in.
  • ~$10/mo: Vultr Compute Plus — 2 GB RAM at $10/mo. 32 global locations.

Hostinger

Hostinger KVM 2 at $8.49/mo gives you 8 GB RAM — plenty of headroom for n8n production workloads with AI agents.

Visit Hostinger

* Affiliate link — we may earn a commission at no extra cost to you.

Troubleshooting Checklist

When n8n runs out of memory:

  1. Check which workflow caused it — look at the execution log for the failed execution
  2. Check what the workflow processes — large datasets, file conversions, or AI calls are the usual suspects
  3. Increase heap size — set NODE_OPTIONS=--max-old-space-size=4096
  4. Add SplitInBatches — for any workflow processing more than 100 items
  5. Use sub-workflows — for memory isolation on heavy processing
  6. Set binary data mode — N8N_DEFAULT_BINARY_DATA_MODE=filesystem
  7. Upgrade to v2.0 task runners — for Code node isolation
  8. Monitor memory — enable Prometheus metrics or use docker stats
  9. Upgrade your VPS — if none of the above helps, you need more RAM

Conclusion

n8n memory errors are frustrating but entirely solvable. Start with the quick fix (increase max-old-space-size), then work through workflow optimizations (batching, sub-workflows, binary data mode). For sustained workloads, upgrade to v2.0 task runners and set up monitoring so you catch problems before they crash your instance.

And if you're still hitting limits after all optimizations, the simplest fix is more RAM. A VPS upgrade from 4 GB to 8 GB typically costs an extra $2-4/month and eliminates most memory issues outright.

Ready to start automating? Get your VPS now.

Get started today with Hostinger VPS hosting. Special pricing available.

Get Hostinger VPS

* Affiliate link — we may earn a commission at no extra cost to you.

#n8n #memory #performance #troubleshooting #vps