Migrating n8n from SQLite to PostgreSQL (Without Losing Data)
A step-by-step guide to migrating self-hosted n8n from SQLite to PostgreSQL. Covers the DB_TYPE gotcha and safe export/import.
Why You Need to Migrate
n8n ships with SQLite as its default database. It works fine for getting started, but it'll hold you back in production:
- File locking -- SQLite locks the entire database on writes. With concurrent workflows, write operations start timing out.
- No safe live backups -- you can't safely back up SQLite while n8n is running without risking corruption.
- Queue mode requires PostgreSQL -- if you want to scale with workers, SQLite isn't an option.
- Data corruption risk -- high concurrency can corrupt the SQLite file entirely.
PostgreSQL handles all of these. It's what n8n recommends for production, and as of v2.0, MySQL and MariaDB support has been deprecated -- PostgreSQL is the only supported production database.
The #1 Mistake: DB_TYPE Is "postgresdb", Not "postgres"
This trips up more people than any other migration issue. The correct environment variable is:
```bash
DB_TYPE=postgresdb
```
Not `DB_TYPE=postgres`. With the wrong value, n8n starts normally and appears to work -- but it silently falls back to SQLite without any error message, and your data keeps landing in the SQLite file while you think you're running on PostgreSQL. Check your logs for "DB type set to postgresdb" to confirm you're actually connected to PostgreSQL.
Before You Start: Back Up Everything
Save Your Encryption Key
n8n encrypts all stored credentials (API keys, OAuth tokens, passwords) using N8N_ENCRYPTION_KEY. If you lose this key, every credential becomes permanently unrecoverable.
Find your current key:
```bash
# If set as environment variable
docker exec n8n env | grep N8N_ENCRYPTION_KEY

# If stored in config file
docker exec n8n cat /home/node/.n8n/config
```
Write it down and store it somewhere safe. You'll need the exact same key on your new PostgreSQL-backed instance.
If you've never set an encryption key explicitly, n8n auto-generated one on first startup. Find it in the config file above and set it as an environment variable going forward.
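If you're scripting your backup, you can pull the key out of that JSON config file directly. A minimal sketch, assuming the default config location and key name (`extract_key` is our own helper, not an n8n command):

```shell
# Hypothetical helper: extract encryptionKey from n8n's JSON config file.
# Assumes the config is small JSON like {"encryptionKey":"..."}.
extract_key() {
  sed -n 's/.*"encryptionKey"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$1"
}

# Usage (run against a copy pulled out of the container):
# docker cp n8n:/home/node/.n8n/config ./n8n-config
# extract_key ./n8n-config
```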
Export Your Data
Use n8n's built-in export commands to extract everything:
```bash
# Export all entities (workflows, credentials, tags) — recommended for v1.67+
docker exec -u node n8n n8n export:entities --outputDir=/tmp/n8n-backup

# Copy the backup out of the container
docker cp n8n:/tmp/n8n-backup ./n8n-backup

# Also backup the .n8n directory (contains encryption key and settings)
docker cp n8n:/home/node/.n8n ./n8n-dot-backup
```
If you're on an older version without export:entities, export separately:
```bash
docker exec -u node n8n n8n export:workflow --all --output=/tmp/workflows.json
docker exec -u node n8n n8n export:credentials --all --decrypted --output=/tmp/credentials.json
docker cp n8n:/tmp/workflows.json ./
docker cp n8n:/tmp/credentials.json ./
```
Use --decrypted when exporting credentials. This ensures you can import them with a new encryption key if needed. Store the decrypted file securely and delete it after migration.
Step 1: Set Up PostgreSQL
Add PostgreSQL to your Docker Compose. Use PostgreSQL 16 (recommended):
```yaml
services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: your-secure-database-password
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
```
Start PostgreSQL and wait for it to be healthy:
```bash
docker compose up -d postgres
docker compose ps   # Wait until postgres shows "healthy"
```
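If you're automating the rollout, you can poll the container's health status instead of watching `docker compose ps` by hand. A minimal sketch (`wait_for_healthy` is our own helper, not a Docker or n8n command):

```shell
# Hypothetical helper: poll a container's healthcheck until it reports healthy.
wait_for_healthy() {
  name="$1"; timeout="${2:-60}"; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    status="$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)"
    [ "$status" = "healthy" ] && return 0
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timed out waiting for $name" >&2
  return 1
}

# Usage:
# wait_for_healthy postgres 120 && docker compose up -d n8n
```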
Step 2: Update n8n Configuration
Update your n8n service in Docker Compose with the PostgreSQL connection:
```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=your-secure-database-password
      - N8N_ENCRYPTION_KEY=your-saved-encryption-key
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_PROXY_HOPS=1
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
```
Key points:
- `DB_POSTGRESDB_HOST` must match the Docker service name (`postgres`), not `localhost`
- `N8N_ENCRYPTION_KEY` must be identical to what you backed up earlier
- The `.n8n` volume must still be mounted -- even with PostgreSQL, n8n stores config and encryption metadata there
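Hard-coding the database password in two services invites drift. Docker Compose can substitute values from a `.env` file sitting next to `docker-compose.yml`; a sketch (the `DB_PASSWORD` variable name is our own choice):

```yaml
# In .env (keep it out of version control): DB_PASSWORD=your-secure-database-password
services:
  postgres:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  n8n:
    environment:
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
```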
Step 3: Start n8n with PostgreSQL
```bash
docker compose up -d n8n
```
n8n will detect the empty PostgreSQL database and automatically run its migration scripts to create all required tables. Watch the logs:
```bash
docker compose logs -f n8n
```
Look for: "DB type set to postgresdb" and "Migrations complete". If you see references to SQLite, your DB_TYPE is wrong.
Step 4: Import Your Data
Copy your backup into the container and import:
```bash
# Copy backup into the running container
docker cp ./n8n-backup n8n:/tmp/n8n-backup

# Import everything
docker exec -u node n8n n8n import:entities --inputDir=/tmp/n8n-backup
```
If you exported separately:
```bash
# Import credentials FIRST (workflows reference them)
docker cp ./credentials.json n8n:/tmp/credentials.json
docker exec -u node n8n n8n import:credentials --input=/tmp/credentials.json

# Then import workflows
docker cp ./workflows.json n8n:/tmp/workflows.json
docker exec -u node n8n n8n import:workflow --input=/tmp/workflows.json
```
Step 5: Verify the Migration
Check the UI
- Open n8n in your browser
- Go to Workflows -- all your workflows should be listed
- Go to Credentials -- all credentials should be visible and not empty
- Open a workflow and run a test execution
Check the Database
```bash
docker exec -it postgres psql -U n8n -d n8n -c "\dt"
```
You should see tables like `workflow_entity`, `credentials_entity`, `execution_entity`, etc.
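To go beyond eyeballing the table list, you can compare row counts against what you exported. A sketch (`table_count` is our own helper; the table names are n8n's defaults):

```shell
# Hypothetical helper: print the row count of one table as a bare number
# (-t strips headers, -A disables alignment padding).
table_count() {
  docker exec postgres psql -U n8n -d n8n -t -A -c "SELECT count(*) FROM $1;"
}

# Usage: these should match the number of workflows/credentials you exported
# table_count workflow_entity
# table_count credentials_entity
```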
Check Credentials Decrypt Properly
If credentials appear empty or show errors, the N8N_ENCRYPTION_KEY doesn't match. You'll need to:
- Stop n8n
- Re-import with `--decrypted` credentials
- Make sure the environment variable is set correctly
Step 6: Clean Up
Once everything is verified:
```bash
# Remove the old SQLite file (from inside the .n8n volume)
docker exec n8n rm -f /home/node/.n8n/database.sqlite

# Delete temporary backup files
docker exec n8n rm -rf /tmp/n8n-backup

# Securely delete local decrypted credentials
rm -f ./credentials.json
```
Setting Up PostgreSQL Backups
Now that you're on PostgreSQL, set up automated backups:
```bash
# Manual backup
docker exec postgres pg_dump -U n8n n8n > n8n_backup_$(date +%Y%m%d).sql

# Restore from backup
docker exec -i postgres psql -U n8n n8n < n8n_backup_20260411.sql
```
Automate with a cron job:
```bash
# Add to crontab: backup daily at 3 AM
0 3 * * * docker exec postgres pg_dump -U n8n n8n | gzip > /backups/n8n_$(date +\%Y\%m\%d).sql.gz
```
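The one-liner above grows without bound. A slightly fuller sketch that also prunes old archives (the `backup_n8n` helper and the 14-day retention window are our own choices, not an n8n convention):

```shell
# Hypothetical backup helper: gzip-compressed pg_dump plus 14-day retention.
backup_n8n() {
  backup_dir="${1:-/backups}"
  mkdir -p "$backup_dir"
  docker exec postgres pg_dump -U n8n n8n \
    | gzip > "$backup_dir/n8n_$(date +%Y%m%d).sql.gz"
  # Prune archives older than 14 days
  find "$backup_dir" -name 'n8n_*.sql.gz' -mtime +14 -delete
}
```

Put the function in a script and call it from cron in place of the raw pipeline.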
Or use a managed PostgreSQL service from DigitalOcean ($15/mo) which includes automated daily backups and point-in-time recovery.
Common Errors and Fixes
| Error | Cause | Fix |
|---|---|---|
| n8n starts but uses SQLite | DB_TYPE=postgres (wrong) | Change to DB_TYPE=postgresdb |
| "connection refused" on startup | PostgreSQL not ready or wrong host | Use Docker service name, add healthcheck |
| Credentials appear empty | Encryption key mismatch | Use exact same N8N_ENCRYPTION_KEY |
| "column already exists" on import | Database not empty | Drop and recreate the database, or use --truncateTables |
| "Migration ID mismatch" | Version mismatch | Use same n8n version for export and import |
Recommended VPS Specs for n8n + PostgreSQL
Running both n8n and PostgreSQL on the same VPS requires slightly more resources than n8n alone:
| Setup | Minimum RAM | Recommended RAM | Cost |
|---|---|---|---|
| n8n + PostgreSQL (light) | 2 GB | 4 GB | $5-10/mo |
| n8n + PostgreSQL (production) | 4 GB | 8 GB | $8-24/mo |
| n8n + PostgreSQL + Redis (queue mode) | 8 GB | 16 GB | $15-48/mo |
Contabo's Cloud VPS 1 at $4.50/mo gives you 8 GB RAM and 4 vCPU -- more than enough to run n8n, PostgreSQL, and Redis comfortably on a single server.
Conclusion
Migrating from SQLite to PostgreSQL is a one-time investment that pays off immediately in reliability, performance, and the ability to scale with queue mode. The process takes about 30 minutes if you follow the steps above and avoid the common pitfalls (especially the postgresdb vs postgres gotcha).
Back up your encryption key before anything else, export with --decrypted for safety, and verify your credentials work after import. Once you're on PostgreSQL, you'll never go back.