What n8n Is and Why It Matters
n8n (pronounced "n-eight-n") is a fair-code workflow automation tool — think Zapier or Make, but self-hostable, source-available, and built for engineers who want to own their automation layer. You connect nodes (HTTP, databases, queues, AI providers, SaaS APIs) into a directed graph that fires on schedules, webhooks, or events.
The operational story is what makes it interesting: it runs as a regular Node.js service, persists workflows in a database, and scales horizontally via a queue mode that splits the executor from the main process. That gives you a Zapier-class UX without the per-execution pricing or the lock-in.
Local Setup — Three Ways
1. npx (fastest, throwaway)
The quickest sniff test: install Node 18+ and run npx n8n. It boots SQLite by default, exposes the editor at http://localhost:5678, and you can build your first workflow in under a minute. State lives in ~/.n8n — delete it and the install vanishes.
Good for: trying nodes, demos, throwaway experiments.
Bad for: anything you care about losing. SQLite plus a single Node process is not a production posture.
2. Docker (recommended for local dev)
The official n8nio/n8n image is the canonical way to run locally. Map a volume to /home/node/.n8n so workflows survive container restarts, and set a few env vars: N8N_HOST, N8N_PORT, WEBHOOK_URL (critical for webhook nodes), N8N_ENCRYPTION_KEY (encrypts credentials at rest — generate once, never lose it), and GENERIC_TIMEZONE so cron triggers fire in the right timezone (TZ only sets the container's system clock; the Schedule trigger reads GENERIC_TIMEZONE).
The single biggest mistake people make on day one is skipping N8N_ENCRYPTION_KEY. n8n auto-generates one on first boot, but if you ever wipe the volume or migrate the database without that exact key, every stored credential becomes unreadable. Treat it like a JWT secret — generate it explicitly, store it in your password manager or secrets store, and pin it as an env var from the start.
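Any 32-byte random hex string works as the key. A low-ceremony way to generate one (the openssl invocation is just one option; any CSPRNG-backed generator is fine):

```shell
# Generate a 256-bit hex key and print it. Store this value in your
# password manager or secrets store, then pass it to the container
# as the N8N_ENCRYPTION_KEY environment variable.
openssl rand -hex 32
```

Generate it once, before first boot, so the auto-generated key never enters the picture.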
3. Docker Compose with PostgreSQL
The minute you care about workflows surviving, ditch SQLite and stand up a Compose file with two services: n8n and Postgres 15+. Wire DB_TYPE=postgresdb, DB_POSTGRESDB_HOST / PORT / DATABASE / USER / PASSWORD, and mount a named volume for the Postgres data directory. Add a healthcheck so n8n waits for the DB to be ready before booting.
This setup is also what you'll mirror in production — keeping local and prod topologies identical eliminates a whole class of "works on my machine" bugs around timezone, database driver quirks, and migration timing.
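A minimal Compose sketch of that two-service topology — service names, credentials, and volume names are placeholders; substitute real secrets and your own generated N8N_ENCRYPTION_KEY:

```yaml
# Sketch only: replace every change-me before use.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n -d n8n"]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: n8nio/n8n
    depends_on:
      postgres:
        condition: service_healthy   # wait for the DB before booting
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
      N8N_ENCRYPTION_KEY: change-me   # pin explicitly, never auto-generate
      GENERIC_TIMEZONE: Europe/Berlin
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  pg_data:
  n8n_data:
```

The healthcheck plus the service_healthy condition is what stops n8n from racing the database on cold start.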
Persistence: SQLite vs PostgreSQL
n8n ships with SQLite for zero-config first runs. It's fine for solo local dev, but it has real limits:
- No concurrent writers — a single workflow burst can lock the file
- No queue mode — required for horizontal scaling
- Migrations get painful at scale
For anything beyond a hobby instance, use PostgreSQL. MySQL is technically supported but n8n upstream prefers Postgres, and the operational tooling (PITR, logical replication, managed offerings) is better. If you're on AWS, RDS PostgreSQL with automated backups is the boring correct answer. On smaller VPS deploys, a co-located Postgres container with daily pg_dump to S3 is enough.
Webhooks Behind a Reverse Proxy
Webhook URLs are the most common production footgun. n8n needs to know its public URL so it can hand out webhook endpoints that clients can actually reach. Set WEBHOOK_URL to the HTTPS public address, never the internal one (and N8N_EDITOR_BASE_URL as well if the editor is served from a different URL).
Put nginx, Caddy, or Traefik in front. Caddy is the lowest-friction option for self-hosters — point a domain at your server, set up a one-line reverse proxy block, and Caddy provisions Let's Encrypt certificates automatically with auto-renewal. Make sure to forward the Host, X-Forwarded-Proto, and X-Forwarded-For headers, and bump the proxy body size if you process large payloads.
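For the Caddy route, the entire reverse proxy can be a few lines — the domain here is a placeholder, and reverse_proxy passes Host and sets the X-Forwarded-* headers by default:

```caddyfile
# Caddyfile sketch — swap in your own domain. Caddy obtains and renews
# the Let's Encrypt certificate for it automatically.
n8n.example.com {
    reverse_proxy localhost:5678
    request_body {
        max_size 16MB   # raise if your webhooks carry large payloads
    }
}
```

With this in place, WEBHOOK_URL should be https://n8n.example.com/ (your real domain), matching what clients see.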
Queue Mode — The Production Switch
For any non-trivial deployment, flip n8n into queue mode. This splits the single process into three roles:
- Main — serves the editor UI, receives webhooks, and owns scheduling
- Workers — execute workflows pulled from the queue
- Redis — message broker between main and workers
Set EXECUTIONS_MODE=queue and QUEUE_BULL_REDIS_HOST / PORT / PASSWORD, then run one or more n8n worker processes alongside the main process. Workers scale independently — you can run two on a small VPS or twenty across a Kubernetes deployment.
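As Compose additions, queue mode might look like this — a sketch layered on the earlier Postgres setup; the Redis service is unauthenticated here for brevity, and the worker needs the same database credentials and encryption key as the main process:

```yaml
services:
  redis:
    image: redis:7

  n8n:
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis

  n8n-worker:
    image: n8nio/n8n
    command: worker          # same image, worker role
    environment:
      EXECUTIONS_MODE: queue
      QUEUE_BULL_REDIS_HOST: redis
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      # ...plus the same DB credentials and N8N_ENCRYPTION_KEY as main,
      # otherwise workers can't decrypt stored credentials.
```

Scale out with docker compose up --scale n8n-worker=4 (or the replicas field, if your Compose version honors it) and drain back down the same way.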
The operational win: a misbehaving workflow can saturate a worker without killing the UI, and you can horizontally add workers during traffic spikes (think marketing campaign blasts) and drain them back down afterward.
Deploying to Production — Pick Your Lane
Self-Hosted Docker on a VPS (Hetzner, DigitalOcean)
The cheapest serious deployment. A small VPS plus Docker Compose plus Caddy in front is enough for solo and small-team use. Daily pg_dump piped to S3 or Backblaze gives you a recovery story. Use systemd or Compose's restart: unless-stopped to survive reboots. Add fail2ban and disable password SSH.
Good for: hobbyists, indie hackers, internal team automation.
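The daily pg_dump-to-S3 backup could be a cron-driven script along these lines — container name, database name, and bucket are all assumptions, and it presumes the aws CLI is installed with credentials configured:

```shell
#!/bin/sh
# Nightly n8n database backup sketch; run from cron.
STAMP=$(date +%F)
docker exec postgres pg_dump -U n8n n8n | gzip > "/tmp/n8n-$STAMP.sql.gz"
aws s3 cp "/tmp/n8n-$STAMP.sql.gz" "s3://my-backups/n8n/"
rm "/tmp/n8n-$STAMP.sql.gz"
```

A backup you have never restored is a hope, not a backup — test the restore path at least once.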
AWS — ECS Fargate + RDS + ElastiCache
The production-grade path on AWS:
- ECS Fargate runs the main and worker tasks (separate task definitions, scaled independently)
- RDS PostgreSQL with Multi-AZ for the workflow database
- ElastiCache Redis for the queue
- ALB in front for TLS termination and routing
- Secrets Manager for the encryption key, DB password, and any third-party credentials
- CloudWatch Logs for execution traces, with a metric filter on failed runs
The encryption key belongs in Secrets Manager, not in the task definition env directly. Reference it via secrets on the task so it's pulled at runtime and never sits in plain CloudFormation state.
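In the task definition, that distinction is the difference between the environment and secrets arrays — a fragment sketch, with placeholder ARNs and secret names:

```json
{
  "containerDefinitions": [
    {
      "name": "n8n",
      "secrets": [
        {
          "name": "N8N_ENCRYPTION_KEY",
          "valueFrom": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:n8n/encryption-key"
        },
        {
          "name": "DB_POSTGRESDB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:n8n/db-password"
        }
      ]
    }
  ]
}
```

Values referenced via secrets are injected at container start by ECS and never appear in the template or the console's environment listing.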
Railway / Render / Fly.io (managed-ish)
If you want "deploy from a Dockerfile and forget," these PaaS options are the path of least resistance. Railway in particular has first-class Postgres and Redis add-ons — wire them in with their built-in env-var references, set the encryption key as a Railway secret, and you're live in under ten minutes. Costs scale with usage but stay reasonable for small teams.
Kubernetes (Helm)
The community-maintained 8gears/n8n Helm chart gives you a parameterised deployment with main and worker replicas, an HPA on the workers, a Postgres dependency chart, and Ingress. It's the right answer if you already operate Kubernetes — overkill if you don't.
n8n Cloud (zero ops)
If you don't want to operate the infrastructure at all, n8n Cloud is the official managed offering. Pay per workflow execution; trade money for time. Reasonable starting point if you're validating whether n8n fits your problem before committing to self-hosted.
Hardening Checklist
Before exposing n8n to the public internet, walk through this list:
- TLS everywhere — Caddy / Let's Encrypt or ACM via ALB. No plaintext.
- Lock down the editor — n8n's built-in user management replaced the old N8N_BASIC_AUTH_* variables in v1.0; create an owner account with a strong password, or front the editor with an SSO proxy (oauth2-proxy, Cloudflare Access, Pomerium)
- Encryption key pinned and backed up — losing it bricks all stored credentials
- Database backups — automated, off-site, tested by actually restoring at least once
- Resource limits — cap CPU and memory on workers so one runaway workflow can't take down the host
- Disable telemetry if your compliance requires it: N8N_DIAGNOSTICS_ENABLED=false
- Prune execution data — set EXECUTIONS_DATA_PRUNE=true with a retention window (EXECUTIONS_DATA_MAX_AGE, in hours); execution data otherwise grows unbounded and will eat your database
- Pin n8n versions — don't use :latest in production. Pick a tag, test upgrades in staging.
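The telemetry and pruning settings above live in the instance's environment; the retention value here is an example, not a recommendation:

```shell
# Env fragment — tune retention to your own compliance and storage budget.
N8N_DIAGNOSTICS_ENABLED=false
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168   # hours: prune execution data older than 7 days
```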
Workflow Versioning and CI
Workflows are JSON. Treat them like code:
- Use n8n export:workflow --all --separate --output=./workflows to dump each workflow to its own JSON file
- Commit them to a Git repo alongside your infrastructure code
- On deploy or PR merge, run n8n import:workflow --separate --input=./workflows to apply changes
- For split environments (dev / staging / prod), keep the same workflow IDs and use credentials that vary by environment
This turns workflows from "clicked together in the UI and prayed" into reproducible artifacts that survive a database wipe and can be code-reviewed before they hit production.
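Wired into CI, the import step can be as small as one job — a GitHub Actions sketch where the host name, SSH user, and container name are all assumptions; adapt the transport (SSH, kubectl exec, etc.) to however your instance is reachable:

```yaml
# Deploy-on-merge sketch for workflow JSON kept in ./workflows.
name: deploy-workflows
on:
  push:
    branches: [main]
jobs:
  import:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Import workflows into the running n8n container
        run: |
          scp -r ./workflows deploy@n8n-host:/tmp/workflows
          ssh deploy@n8n-host \
            "docker cp /tmp/workflows n8n:/tmp/workflows && \
             docker exec n8n n8n import:workflow --separate --input=/tmp/workflows"
```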
When to Reach for n8n vs Writing Code
n8n shines for integration glue — pulling data between SaaS APIs, kicking off automations on events, scheduled enrichment jobs, AI orchestration with branching logic. It's the right tool when the workflow is mostly "call API A, transform, call API B," and the operations team wants visibility without reading a codebase.
It's the wrong tool for tight loops, latency-critical paths, or anything where you'd rather express logic in a typed language with proper tests. Don't replace your backend with n8n — augment it.
Closing
Get local Docker plus Postgres running today. Set the encryption key explicitly. Put Caddy in front with HTTPS. Flip on queue mode the day you onboard the second user. Back up the database. Pin versions. Export workflows to Git.
Do those seven things and you have a production-grade n8n deployment that survives the next outage, the next team change, and the next region failure — all for the price of a VPS that costs less than your coffee budget.