Running Coolify on a Raspberry Pi 5: what I deploy, and what I don't
A Raspberry Pi 5 sitting on a shelf, running Coolify behind Cloudflare Tunnels, has quietly become one of the most useful pieces of infrastructure I own. It hosts a dozen small applications, costs about R30 a month in electricity, and has uptime numbers that would embarrass some cloud providers.
But it's not a silver bullet. There are workloads I absolutely do not trust it with, and there are operational habits I had to learn the hard way. This is the one-year review.
What I actually run on it
The workloads that live happily on the Pi:
- Internal dashboards and admin tools — Laravel apps behind login, single-digit users, low traffic
- Staging environments for client work — separate per-project, ephemeral
- Side-project MVPs where I'm still validating whether anyone cares
- Static documentation sites (Nuxt SSG, Docusaurus, MkDocs)
- Gotenberg for PDF rendering in Laravel apps
- An Ollama instance for local LLM experimentation
- Internal utilities — cron-driven scrapers, webhook receivers, small APIs
The workloads that I deliberately keep off the Pi:
- Anything client-facing in production with real users
- Anything touching money
- Anything with a strict SLA
- Databases holding data I can't rebuild from source
- Long-running queue workers for production systems
- Anything that needs to stay up during a load-shedding stage 4 slot
This split is the whole point. The Pi is for things where "it went down for two hours, I rebuilt it, here it is again" is an acceptable outcome. Anything beyond that goes to AWS.
The stack
For anyone considering a similar setup:
- Hardware: Raspberry Pi 5 (8GB model), active cooling case, 1TB NVMe SSD via the official PCIe HAT, 27W USB-C power supply
- OS: Raspberry Pi OS 64-bit (Bookworm), running headless
- Orchestration: Coolify v4 (self-hosted PaaS, Docker under the hood)
- Networking: Cloudflare Tunnel (`cloudflared`) with wildcard DNS under a domain I control
- Storage: NVMe for everything — the SD card is for boot only; Docker volumes, databases, and Coolify's own state all live on NVMe
The NVMe upgrade was the single biggest reliability win. SD cards wear out, and the failure mode is atrocious — filesystem corruption at 3am on a Sunday, not a clean "disk full" error you'd actually notice.
Why Cloudflare Tunnels instead of port-forwarding
I'll never go back to opening ports on a home router. Cloudflare Tunnel gives you:
- No inbound ports open on your network at all
- DDoS protection, WAF, and rate limiting for free on Cloudflare's side
- A wildcard subdomain like `*.your-domain.com` that I can point at any container on the Pi via a simple `cloudflared` config
- Easy origin certificate management — TLS terminates at Cloudflare, traffic to the Pi is authenticated
The config for cloudflared is about ten lines of YAML. The ingress rules map subdomains to local services:
```yaml
tunnel: my-tunnel-id
credentials-file: /etc/cloudflared/my-tunnel-id.json
ingress:
  - hostname: coolify.your-domain.com
    service: http://localhost:8000
  - hostname: staging.your-domain.com
    service: http://localhost:8080
  - hostname: "*.your-domain.com"
    service: http://localhost:80
  - service: http_status:404
```
Coolify handles the rest — when I deploy a new app, it spins up a container on port 80 behind a reverse proxy (Traefik) and the wildcard rule routes it automatically based on the subdomain Coolify assigns.
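To keep the tunnel alive across reboots, cloudflared runs as a systemd service; `sudo cloudflared service install` sets this up for you. The unit it generates looks roughly like this (the binary and config paths depend on how you installed it; check yours with `systemctl cat cloudflared`):

```ini
[Unit]
Description=cloudflared tunnel
After=network-online.target
Wants=network-online.target

[Service]
# Typical paths, not guaranteed; adjust to your install
ExecStart=/usr/local/bin/cloudflared --no-autoupdate --config /etc/cloudflared/config.yml tunnel run
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```

The `Restart=on-failure` line is what gives you the "tunnel reconnects on its own" behaviour after power cuts.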
What Coolify gets right
After trying Dokku, Caprover, and a homegrown Docker Compose setup, Coolify won on three dimensions:
One-click deploys from Git. Push to main, Coolify pulls, builds, swaps traffic. For a side project this is the killer feature.
Sane defaults for Laravel. Coolify knows about Laravel — it'll auto-detect the framework, run migrations on deploy, set up PHP-FPM, and wire up environment variables without me having to write a Dockerfile unless I want to.
A UI I can actually use at 11pm. When something's wrong, I'm tired, and I just want to restart a container, I don't want to SSH in and remember docker-compose commands. Coolify's UI is genuinely good.
What Coolify gets wrong (or at least, is rough)
It's still v4 software and it shows:
- Upgrades occasionally break things. I've had two upgrades in the last year that required shelling in to fix containers that wouldn't restart. Always snapshot before upgrading.
- The UI lies sometimes. A container shown as "running" in the UI can be stuck in a restart loop in reality. `docker ps` is still the source of truth.
- Resource limits aren't enforced by default. Spin up enough apps and a memory leak in one will take the whole Pi down. Set CPU and memory limits explicitly for every service.
- Backup story is DIY. Coolify can back up databases to S3, but it's per-app and fiddly. I ended up writing my own cron job that `pg_dump`s every Postgres database on the Pi to a separate volume, then syncs that volume to Backblaze B2.
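On the resource-limit point: under the hood these are ordinary Docker limits, so you can set them in a compose file even when Coolify doesn't. A sketch with placeholder service name and numbers (not from the post):

```yaml
services:
  app:
    mem_limit: 512m   # hard memory cap; the container is OOM-killed past this
    cpus: "1.0"       # at most one of the Pi's four cores
```

For a container that's already running, `docker update --memory 512m --memory-swap 512m --cpus 1 <name>` applies the same caps without a redeploy.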
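The backup cron job is nothing clever. A minimal sketch of it, where the container names, the `postgres` superuser, and the rclone remote `b2` are all assumptions you'd replace with your own (DRY_RUN defaults to 1, so the sketch prints commands instead of running them):

```shell
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/mnt/nvme/backups/postgres}"
CONTAINERS="${CONTAINERS:-app1-postgres app2-postgres}"  # hypothetical names
STAMP="$(date +%F)"

# In dry-run mode, print the command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p "$BACKUP_DIR"

for c in $CONTAINERS; do
  # pg_dumpall inside each container captures every database in that instance.
  run sh -c "docker exec $c pg_dumpall -U postgres | gzip > '$BACKUP_DIR/$c-$STAMP.sql.gz'"
done

# Sync the whole backup directory to a Backblaze B2 bucket via rclone.
run rclone sync "$BACKUP_DIR" "b2:pi-backups/postgres"
```

Scheduled from cron with `DRY_RUN=0`, e.g. `0 2 * * * DRY_RUN=0 /usr/local/bin/pg-backup.sh` (path hypothetical).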
The load-shedding problem
I live in South Africa. If you don't, feel free to skim this section — for anyone local, it's probably the most important one.
A Raspberry Pi draws about 5W. With a decent UPS (I use an Eaton 3S, about R2,500), you get roughly 4-6 hours of runtime depending on what's connected. That covers every stage 4 slot with comfort.
But the UPS is only half the story. Your internet needs to be on it too: my fibre ONT, router, and switch all are; yours might not be. And even then, the Pi staying powered is worthless if the fibre backhaul lost power at the street cabinet and you have no upstream.
Cloudflare Tunnel's behaviour during an outage is actually great — Cloudflare serves a generic "origin is unreachable" page, your DNS doesn't flap, and when the Pi comes back, the tunnel reconnects within about 30 seconds. But this still isn't production-grade. If you have clients depending on the service being up, don't run it on a Pi.
Monitoring
Uptime Kuma, running on a different Pi (I have two — one for Coolify, one for monitoring), checks every service on the main Pi every 60 seconds. If the main Pi falls over, the monitoring Pi pings my phone.
This "second Pi" thing sounds excessive until you remember that the whole point is to have an external observer. Monitoring hosted on the same box that you're monitoring is a well-known anti-pattern.
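Uptime Kuma itself is a one-container deploy on the monitoring Pi. This is essentially the project's documented compose file (the named volume is arbitrary):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: always
    ports:
      - "3001:3001"   # web UI on the monitoring Pi
    volumes:
      - uptime-kuma:/app/data
volumes:
  uptime-kuma:
```

From there it's one HTTP monitor per service, plus a notification channel for your phone.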
What I'd do differently
If I were starting fresh today:
- Start with the NVMe HAT. Don't put yourself through the SD-card-corruption phase. It's R600 in hardware, against the roughly three hours of recovery each corrupted SD card costs you.
- Put Coolify's database on external storage from day one. Coolify stores its own state in a Postgres instance. When that state corrupts, you lose the UI, and recovery is painful. Mount that directory on NVMe, back it up weekly.
- Two Pis, not one. Monitoring on a separate Pi is not overkill. Neither is having a second one for "the apps I actually care about" vs "experimental junk". The blast radius when one Pi dies should be smaller than "everything I host".
- Don't skip the UPS. R2,500 to avoid filesystem corruption from unclean shutdowns during load-shedding pays for itself the first time it saves you a restore.
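For the Coolify-state point above, the weekly backup can be a single cron entry. A sketch, assuming Coolify v4's own Postgres runs in a container named `coolify-db` with a `coolify` database and user (verify yours with `docker ps`), writing to a destination path of my choosing:

```
# Every Sunday at 03:00; note that % must be escaped in crontab
0 3 * * 0 docker exec coolify-db pg_dump -U coolify coolify | gzip > /mnt/nvme/backups/coolify-$(date +\%F).sql.gz
```

A logical dump like this restores cleanly even across Coolify upgrades, which a raw copy of a running data directory may not.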
The bottom line
The Pi is not a replacement for proper cloud infrastructure — it's a complement. It's where I put things I want to exist without giving them a monthly DigitalOcean bill. It's where I experiment. It's where I host tools that help me do my actual work.
For the cost of a decent dinner out, it runs all year. When a client pays me for a production system, that system goes on AWS. But everything else? The Pi is quietly exceptional.