Zapier's Professional plan costs $100/month for 2,000 tasks. Run 50,000 tasks and you're looking at $500+. Self-hosting n8n, Windmill, or Activepieces on a $10/month Hetzner VPS gives you unlimited executions, full data control, and no per-task billing. The question isn't whether self-hosting saves money; it's which tool fits your stack.

  1. n8n is the most popular option with 400+ integrations, but its Fair Code license has restrictions
  2. Windmill is the fastest engine (10x faster than Airflow), built for developers who prefer code over drag-and-drop
  3. Activepieces is fully MIT-licensed with native MCP support for AI agents

The three tools overlap in concept but diverge sharply in philosophy. Here's what actually matters when you're choosing between them.

Why Self-Host Your Workflow Automation?

Here's what the monthly cost looks like at different scales:

Monthly tasks   Zapier (Professional)   Make (Pro)      Self-hosted (Hetzner CX22)
2,000           $100                    $18             ~$10
10,000          $250                    $55             ~$10
50,000          $500+                   $165+           ~$10
Unlimited       Not available           Not available   ~$10

The VPS cost stays flat regardless of execution volume. But cost isn't the only reason to self-host:

  • Data privacy: Sensitive data never leaves your network. No third-party processing, no compliance headaches
  • No vendor lock-in: Your workflows live on your infrastructure. If the company pivots or raises prices, you keep running
  • Customization: Fork it, patch it, extend it. Open source means you own the stack
  • Latency: Local execution eliminates round-trips to external APIs for internal automations

The tradeoff is maintenance. You handle updates, backups, and scaling. For teams already running Docker or Kubernetes, this overhead is minimal. For teams without DevOps experience, it's a real consideration.

n8n: The Visual Automation Workhorse

n8n is the most widely adopted open-source workflow automation tool, with over 68,000 GitHub stars. Its node-based visual editor lets you drag, drop, and connect services without writing code, though you can inject JavaScript or Python when needed.

Where n8n Excels

The integration ecosystem is n8n's strongest asset. Over 400 built-in nodes cover everything from Slack and Gmail to Postgres and HTTP requests. The community has contributed thousands of workflow templates, so you rarely start from scratch.

n8n's AI capabilities have matured significantly. The platform supports LangChain-based AI agents, tool calling, and vector store integrations. You can build a RAG pipeline or a multi-step AI agent directly in the visual editor.

Where n8n Falls Short

Memory consumption is the most common complaint in self-hosted deployments. n8n idles at 300-500 MB RAM and spikes to 1-2 GB during complex workflow executions. Add PostgreSQL (the recommended production database), and you're looking at 1.75-2.75 GB steady state on a modest VPS.

The licensing model is another sticking point. n8n uses a "Fair Code" license (Sustainable Use License), not a traditional open-source license. You can self-host for free, but there are restrictions on offering n8n as a service to third parties. For internal use, this rarely matters. For agencies or platform builders, it's a deal-breaker.

Self-Hosting n8n

Getting n8n running takes about 5 minutes with Docker:

docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

For production, use Docker Compose with PostgreSQL instead of the default SQLite:

services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=changeme
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=changeme
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  n8n_data:
  postgres_data:

Set the memory limit to at least 4 GB if you're running data-heavy workflows. SQLite works for testing, but it's prone to corruption under concurrent writes in production.

Common n8n Self-Hosting Issues

Webhook URLs break behind reverse proxies. Set WEBHOOK_URL to your public domain, not localhost. Without this, n8n generates internal URLs that external services can't reach.
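In the Docker Compose setup above, this is a one-line environment fix; the domain below is a placeholder for your own:

```yaml
services:
  n8n:
    environment:
      # Public base URL that external services use to call your webhooks.
      # Replace n8n.example.com with the domain your reverse proxy serves.
      - WEBHOOK_URL=https://n8n.example.com/
```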

Executions disappear after restart. If you're using SQLite and didn't mount a volume, your data lives inside the container. Always mount /home/node/.n8n to a persistent volume.

Memory spikes on large datasets. n8n loads entire datasets into memory between nodes. If you're processing 100,000 rows from a database query, split the work with pagination or use the "Execute Command" node to offload to a script.
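The fix is to page through the data instead of materializing it all at once. A minimal shell sketch of the pattern, with `seq` standing in for a real database query (the `psql` query in the comment is hypothetical):

```shell
# Page through a large result set instead of loading it in one shot.
# 'seq' stands in for a real paginated query such as:
#   psql -Atc "SELECT id FROM big_table LIMIT $page_size OFFSET $offset"
page_size=10000
total_rows=100000
offset=0
pages=0
while [ "$offset" -lt "$total_rows" ]; do
  # Fetch one page; each page fits comfortably in memory.
  rows=$(seq $((offset + 1)) $((offset + page_size)))
  count=$(printf '%s\n' "$rows" | wc -l | tr -d ' ')
  echo "processed page at offset $offset ($count rows)"
  offset=$((offset + page_size))
  pages=$((pages + 1))
done
```

Inside n8n, the same idea maps to paginated queries in a loop, or to offloading the whole loop into an "Execute Command" script like this one.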

One pattern that catches many teams off guard: running n8n alongside other services on a 2 GB VPS. A Slack notification workflow triggers a database query, which triggers a webhook, and suddenly three workflows execute simultaneously. The container hits its memory limit and restarts, losing in-progress executions. Budget at least a dedicated 4 GB instance for production n8n, or use Kubernetes resource limits to prevent one workflow from starving others.

Windmill: The Developer-First Speed Machine

Windmill targets developers who find visual node editors limiting. Instead of drag-and-drop, you write scripts in Python, TypeScript, Go, Bash, or SQL, and Windmill handles execution, scheduling, retries, and UI generation.

Where Windmill Excels

Performance is Windmill's headline feature. The engine executes workflows without spinning up containers for each step, which eliminates the startup overhead that plagues tools like Airflow.

The numbers back this up:

  • Python cold start: ~60ms
  • Deno/Bun cold start: ~30ms
  • JavaScript expressions: ~8ms per expression
  • Dedicated worker throughput: up to 1,000 steps per second
  • 10x faster than Apache Airflow on comparable workflow benchmarks

Windmill also generates UIs automatically from your scripts. Define input parameters with types, and Windmill creates a form-based interface that non-technical users can interact with. This bridges the gap between "developer tool" and "team tool."
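As a sketch of how that works for a Bash script: Windmill passes inputs as positional arguments, and the assignments at the top of the script become fields in the generated form. The field names and defaults below are illustrative assumptions, not from the Windmill docs:

```shell
# Windmill-style Bash script: each positional-argument assignment
# at the top becomes an input field in the auto-generated UI.
name="${1:-world}"
greeting="${2:-hello}"

echo "$greeting, $name"
```

A non-technical user then fills in `name` and `greeting` in a web form and clicks Run, never touching the code.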

Where Windmill Falls Short

The learning curve is steeper than n8n or Activepieces. There's no drag-and-drop for simple automations; everything starts with a script. If your team includes non-technical users who need to build their own workflows, Windmill will frustrate them.

The integration ecosystem is smaller. Windmill relies on direct API calls and community scripts rather than pre-built connectors. You'll write more boilerplate to connect services that n8n handles with a single node.

AI features exist but are less polished than competitors. Windmill supports LLM integrations through scripts, but there's no visual AI agent builder or native MCP support.

Self-Hosting Windmill

Windmill's Docker Compose setup is straightforward:

curl https://raw.githubusercontent.com/windmill-labs/windmill/main/docker-compose.yml -o docker-compose.yml
docker compose up -d

The default stack includes PostgreSQL and runs on port 8000. Windmill is lighter than n8n at idle, consuming roughly 150-300 MB RAM for the core service.

For production, Windmill scales horizontally by adding worker containers. Each worker handles script execution independently, so you can dedicate workers to specific workflow types or priority levels.
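In Compose terms, that's a replica count on the worker service. The service name `windmill_worker` is assumed from the stock docker-compose.yml; verify it against the file you downloaded:

```yaml
services:
  windmill_worker:
    # Run four worker replicas; each pulls jobs independently from the
    # shared PostgreSQL queue, so throughput scales roughly linearly.
    deploy:
      replicas: 4
```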

Common Windmill Self-Hosting Issues

Workers don't pick up jobs. Check that workers are connected to the same PostgreSQL instance as the server. Misconfigured DATABASE_URL in worker containers is the most common cause.

Python dependencies fail to install. Windmill caches pip packages per (package, version) pair. If a package requires system-level dependencies (like libpq-dev for psycopg2), you need a custom worker image with those packages pre-installed.

TypeScript scripts timeout. The default timeout is 900 seconds. For long-running scripts, set a custom timeout in the script metadata or adjust the TIMEOUT_WAIT_RESULT environment variable.

Activepieces: AI-Native and Fully Open Source

Activepieces is the newest of the three, and it's positioned itself as the AI-first automation platform. Unlike n8n's Fair Code license, Activepieces uses the MIT license, making it the most permissive option for commercial use.

Where Activepieces Excels

The MCP (Model Context Protocol) integration is Activepieces' killer feature. The platform automatically exposes its 400+ integration pieces as MCP tools. This means AI agents running in Claude Desktop, Cursor, or any MCP-compatible client can directly call Gmail, Slack, Stripe, HubSpot, and hundreds of other services through Activepieces.

You can also package an entire multi-step workflow into a single MCP tool. Build a workflow that checks inventory, creates an invoice, and sends a notification, then expose it as one callable action for your AI agent.
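Wiring that into an MCP client is a small JSON config. A hedged sketch of a Claude-Desktop-style `mcpServers` entry using `mcp-remote` as the bridge; the server URL and token path are placeholders for whatever your Activepieces instance actually shows you:

```json
{
  "mcpServers": {
    "activepieces": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://your-instance.example.com/api/v1/mcp/your-token/sse"
      ]
    }
  }
}
```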

The user interface scores a 9.1 for ease of setup on G2, compared to n8n's 7.7. The step-based flow builder feels more like a modern web app than a traditional automation tool, making it accessible to non-technical team members.

Where Activepieces Falls Short

Activepieces is younger than n8n and Windmill. The community is smaller, which means fewer templates, fewer forum answers, and fewer battle-tested production deployments. Performance under heavy load has been a reported concern; the Activepieces community forum has threads about slower processing speeds compared to n8n for high-volume workflows.

The integration count (280+ pieces) is lower than n8n's 400+ nodes. Critical integrations exist, but niche services may require custom piece development.

Self-Hosting Activepieces

Activepieces runs on Docker with minimal configuration:

git clone https://github.com/activepieces/activepieces.git
cd activepieces
cp .env.example .env
docker compose up -d

The default setup includes PostgreSQL and Redis. Memory footprint is comparable to n8n at 300-500 MB RAM for typical workloads.

Common Activepieces Self-Hosting Issues

Pieces fail to load after update. Clear the pieces cache and restart. Version mismatches between the core platform and pieces packages cause silent failures.

OAuth connections break after domain change. Re-register OAuth apps with the new callback URL. Activepieces stores the redirect URI at connection time, and it doesn't auto-update.

High memory on AI workflows. AI pieces that process large documents or images spike RAM significantly. Set container memory limits and monitor usage with docker stats.
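A hedged compose fragment for capping the container (the service name `activepieces` is assumed from the project's compose file; tune the limit to your instance size):

```yaml
services:
  activepieces:
    # Hard cap so a runaway AI step gets OOM-killed and restarted
    # instead of taking the whole VPS down with it.
    mem_limit: 2g
```

Then watch live usage with `docker stats --no-stream` to see whether the cap is realistic for your workloads.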

Head-to-Head Comparison

Feature                 n8n                            Windmill                   Activepieces
License                 Fair Code (Sustainable Use)    AGPLv3 / Commercial        MIT
Primary interface       Visual node editor             Code editor + auto-UI      Step-based flow builder
Built-in integrations   400+                           ~100 (+ API calls)         280+
AI agent support        LangChain nodes, tool calling  Script-based LLM calls     Native MCP, AI SDK
MCP support             No                             No                         Yes (400+ MCP tools)
Cold start (per step)   ~50-100ms                      ~30-60ms                   ~50-100ms
Idle RAM                300-500 MB                     150-300 MB                 300-500 MB
Best for                Mixed technical teams          Code-first developers      AI-first, MCP-heavy setups
Self-host difficulty    Easy (Docker)                  Easy (Docker)              Easy (Docker)
GitHub stars            68k+                           15k+                       12k+

Which One Should You Pick?

Choose n8n if you need the largest integration ecosystem and your team includes both developers and non-technical users. The visual editor is mature, the community is massive, and most common workflows have existing templates. Just account for the higher memory footprint and Fair Code licensing restrictions.

Choose Windmill if your team is developer-heavy and performance matters. Processing thousands of workflow steps per second with sub-100ms cold starts makes Windmill the right choice for data pipelines, ETL jobs, and high-throughput automation. The tradeoff is a steeper learning curve and a smaller integration library.

Choose Activepieces if AI agent orchestration is your primary use case. The native MCP support, MIT license, and beginner-friendly interface make it the best fit for teams building AI-powered automation. Watch the performance benchmarks as your volume grows, and be prepared to contribute custom pieces for niche integrations.

All three deploy in under 10 minutes with Docker, and all three work with OCI-compatible runtimes like Podman or nerdctl if you prefer to avoid Docker Desktop licensing.

If you're already running n8n for tasks like AI image generation, you don't need to rip it out. A common pattern is keeping n8n for integration-heavy workflows while adding Windmill for performance-critical data pipelines, or Activepieces for AI agent orchestration via MCP. Pick the tool that matches the workload, not the one that matches a blog ranking.