Every new project has the same early decision: how do I want to deploy this? For a while I was picking per-project — Ploi for most Laravel work, Vercel for Next.js, a custom rsync script for the odd static site. The result was that every project had a slightly different deploy story: I had to remember which was which, and onboarding a new client meant explaining three different conventions.

Eventually I got tired of that and standardised on a Docker + GitHub Actions + GitHub Container Registry setup that works for Laravel, Next.js, Nuxt, and basically anything else I build. One pattern, one mental model, one place to look when something breaks. This post walks through the whole thing.

The design goals

Before showing the pipeline, the constraints I was optimising for:

  1. Stack-agnostic. The workflow file should not know or care whether the project is Laravel, Next.js, or a Go service. All stack-specific logic lives in the project's Dockerfile.
  2. Environment-differentiated. Same pipeline, different behaviour for staging vs production. Staging deploys on every push to develop; production deploys on tag push.
  3. No vendor lock-in. I don't want Heroku-style buildpacks that only work on one platform. The output is a standard OCI image that can run anywhere Docker runs.
  4. Cheap. Free tier for small projects. GHCR is free for public images and has generous limits for private.
  5. Debuggable. When it breaks at midnight, I want to be able to understand why without reading through thousands of lines of YAML.

The architecture

GitHub Repo
    ↓ (push to develop or tag)
GitHub Actions
    ↓ (build Docker image from Dockerfile)
GitHub Container Registry (ghcr.io)
    ↓ (deploy trigger)
Target environment (Coolify on Pi / AWS ECS / wherever)
    ↓ (pulls new image, rotates containers)
Live

The key insight: the artifact produced by CI is a tagged Docker image. That image is the unit of deployment. Everything downstream — staging, prod, local reproduction of a bug — pulls the same image.
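
That makes local bug reproduction straightforward: pull the exact tag CI built and run it. A sketch — the image name and tag here are hypothetical placeholders; substitute the real tag from your Actions build log:

```shell
# Pull the exact image CI built for the commit that exhibits the bug,
# then run it locally with local-only configuration.
docker pull ghcr.io/hazelbag/my-laravel-app:develop-abc1234
docker run --rm -p 8080:80 --env-file .env.local \
  ghcr.io/hazelbag/my-laravel-app:develop-abc1234
```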

The reusable workflow

I keep this in a dedicated repo (hazelbag/gha-workflows) so I can version it and update every project at once when something changes upstream.

.github/workflows/build-and-push.yml:

name: Build and push Docker image

on:
  workflow_call:
    inputs:
      image-name:
        required: true
        type: string
        description: "Name of the image (e.g. 'my-laravel-app')"
      dockerfile:
        required: false
        type: string
        default: "./Dockerfile"
      build-args:
        required: false
        type: string
        default: ""
      platforms:
        required: false
        type: string
        default: "linux/amd64"

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository_owner }}/${{ inputs.image-name }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,prefix={{branch}}-,format=short
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ${{ inputs.dockerfile }}
          platforms: ${{ inputs.platforms }}
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: ${{ inputs.build-args }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

Two things doing real work here:

docker/metadata-action generates image tags from git context automatically. A push to develop produces develop and develop-abc123 tags. A push to main produces main, latest, and main-abc123. A tag push for v1.2.3 produces 1.2.3, 1.2, and latest. You never manually compute tags.
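
Concretely, the mapping looks like this (short SHA hypothetical, image prefix `ghcr.io/hazelbag/my-laravel-app` elided):

```
push to develop   →  :develop, :develop-abc1234
push to main      →  :main, :latest, :main-abc1234
tag push v1.2.3   →  :1.2.3, :1.2, :latest
```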

GitHub Actions cache (cache-from: type=gha) makes subsequent builds drastically faster. A clean Laravel Docker build is maybe 3 minutes on GHA; with warm cache it's closer to 40 seconds.

The per-project workflow

Each project has its own tiny wrapper that calls the reusable workflow:

.github/workflows/ci.yml:

name: CI

on:
  push:
    branches: [main, develop]
    tags: ['v*']
  pull_request:
    branches: [main, develop]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.3'
          coverage: none

      - name: Install dependencies
        run: composer install --prefer-dist --no-progress

      - name: Run tests
        run: php artisan test

  build:
    needs: test
    if: github.event_name == 'push'
    uses: hazelbag/gha-workflows/.github/workflows/build-and-push.yml@main  # consider pinning to a release tag instead of @main
    with:
      image-name: my-laravel-app
      dockerfile: ./docker/Dockerfile
    secrets: inherit

  deploy-staging:
    needs: build
    if: github.ref == 'refs/heads/develop'
    runs-on: ubuntu-latest
    steps:
      - name: Trigger Coolify deploy
        run: |
          # -fsS: fail the step on a non-2xx response instead of passing silently
          curl -fsS -X POST "${{ secrets.COOLIFY_STAGING_WEBHOOK }}" \
            -H "Authorization: Bearer ${{ secrets.COOLIFY_TOKEN }}"

  deploy-production:
    needs: build
    if: startsWith(github.ref, 'refs/tags/v')
    runs-on: ubuntu-latest
    environment: production  # gated by GitHub environment protection rules (e.g. required reviewers)
    steps:
      - name: Trigger production deploy
        run: |
          # -fsS: fail the step on a non-2xx response instead of passing silently
          curl -fsS -X POST "${{ secrets.DEPLOY_PROD_WEBHOOK }}" \
            -H "Authorization: Bearer ${{ secrets.DEPLOY_PROD_TOKEN }}"

Boiling it down:

  • Tests run on every push and PR
  • Builds happen only on push (not PRs, to save time and registry space)
  • Staging deploys on every push to develop
  • Production deploys only on tag push, and GitHub's environment protection rules add a manual approval step before the job runs
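
So cutting a production release is just pushing a version tag. The sketch below runs in a throwaway repo to show the shape of it; in a real checkout you would only need the `git tag` and `git push` lines:

```shell
# Demo in a scratch repo (version number hypothetical).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name demo && git config user.email demo@example.com
git commit -q --allow-empty -m "release candidate"

git tag -a v1.4.0 -m "Release v1.4.0"   # matches the 'v*' trigger in ci.yml
git tag --list 'v*'                     # prints: v1.4.0
# Real repo only: git push origin v1.4.0  (kicks off test → build → deploy-production)
```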

The Laravel Dockerfile

The stack-specific piece. This lives at docker/Dockerfile in each Laravel project:

# syntax=docker/dockerfile:1.6

# ---- Stage 1: Composer dependencies ----
FROM composer:2 AS composer-deps

WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader --prefer-dist

# ---- Stage 2: Node build ----
FROM node:20-alpine AS node-build

WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

COPY resources/ ./resources/
COPY vite.config.js ./
COPY tailwind.config.js ./
COPY postcss.config.js ./
RUN npm run build

# ---- Stage 3: Production runtime ----
FROM php:8.3-fpm-alpine AS runtime

RUN apk add --no-cache \
    nginx \
    supervisor \
    postgresql-dev \
    libzip-dev \
    && docker-php-ext-install pdo pdo_pgsql zip opcache

# PHP config
COPY docker/php.ini /usr/local/etc/php/conf.d/app.ini
COPY docker/php-fpm.conf /usr/local/etc/php-fpm.d/www.conf

# Nginx config
COPY docker/nginx.conf /etc/nginx/nginx.conf

# Supervisor config (runs nginx + php-fpm + horizon)
COPY docker/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

WORKDIR /var/www/html

# Copy application code. Assumes .dockerignore excludes vendor/ and
# node_modules/ so local copies can't clobber the staged artifacts above.
COPY --from=composer-deps /app/vendor/ ./vendor/
COPY --from=node-build /app/public/build/ ./public/build/
COPY . .

# Finish composer autoload (now that app code is present)
RUN composer dump-autoload --optimize --no-dev

# Laravel optimisations
# Caveat: config:cache snapshots env() values at build time, when no runtime
# env vars are set. If your config depends on them (it usually does), move
# these cache commands into the container entrypoint instead.
RUN php artisan config:cache \
    && php artisan route:cache \
    && php artisan view:cache

# Permissions
RUN chown -R www-data:www-data /var/www/html \
    && chmod -R 775 storage bootstrap/cache

EXPOSE 80

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

Three stages. The first two produce artifacts (composer vendor dir, compiled frontend assets). The third is the runtime image — minimal, production-focused, with nginx, PHP-FPM, and supervisor managing the processes (including Horizon, per the supervisord config).

The multi-stage build matters because the final image size comes in around 180MB instead of the 600MB+ you'd get from a single-stage build that drags Composer, Node, and build tooling along for the ride.
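
One assumption worth making explicit: the `COPY . .` step relies on a .dockerignore that keeps heavy and env-specific paths out of the build context. Something like this (a sketch; adjust to your project):

```
# .dockerignore (sketch)
.git
.env
.env.*
node_modules
vendor
storage/logs
docker-compose*.yml
```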

What to do about migrations

The tricky Laravel-in-Docker question: when do migrations run? Not at image build time (the DB isn't available). Not on every container startup (race conditions if you're scaling horizontally).

My current answer: a dedicated init container that runs once per deploy.

# docker-compose.yml (or equivalent ECS task definition)
services:
  migrate:
    image: ghcr.io/hazelbag/my-laravel-app:latest
    command: php artisan migrate --force
    env_file: .env
    restart: "no"
    depends_on:
      - db

  app:
    image: ghcr.io/hazelbag/my-laravel-app:latest
    depends_on:
      migrate:
        # long-form depends_on: start only after migrate exits with code 0
        condition: service_completed_successfully
    restart: unless-stopped
    # ... rest of app config

The migrate service runs the migration command once, exits, and the app service starts only after it succeeds. If the migration fails, the deploy fails loudly before any user traffic hits the new version.

The secrets question

One thing I deliberately don't do: bake secrets into the image. Every image I push to GHCR is configuration-free. All env-specific values (DB credentials, API keys, app URL) come from the runtime environment.

This means:

  • The same image runs in staging and production
  • Rolling back is just pointing the environment at an older image tag
  • GHCR never holds anything sensitive

Secrets live in:

  • GitHub Actions secrets (for the deploy webhook trigger)
  • The deploy target's own secret store (Coolify's env panel, AWS Secrets Manager, Doppler, etc.)

Never in the image. Never in the repo.
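
At the deploy target, that translates to injecting configuration at runtime. In a Compose-style definition it might look like this (service and file names are hypothetical):

```yaml
services:
  app:
    # Same image in every environment; only the injected env differs.
    image: ghcr.io/hazelbag/my-laravel-app:1.2.3   # rollback = point this at an older tag
    env_file: .env.production   # lives on the host / in the secret store, never in git
```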

What I'd tell past-me

A few lessons from getting this wrong a few times:

  1. Pin your base images to specific tags, not latest. php:8.3-fpm-alpine is fine. php:latest is a reproducibility bug waiting to happen.
  2. Run one process per container. Supervisor is tempting, but ECS, Kubernetes, and most orchestrators prefer one-process containers. My Laravel image uses supervisor for local dev convenience; in prod I sometimes split nginx into its own container.
  3. Cache, cache, cache. The GHA cache config above probably saves me an hour of build time per week across all projects.
  4. Keep the workflow file dumb. Any time the workflow needs to know about the stack (PHP version, Node version), move that knowledge into the Dockerfile or a project-level config, not into the CI YAML.

The takeaway

A stack-agnostic pipeline is not harder than a stack-specific one. It's arguably simpler, because the mental model is the same for every project: build image, push image, deploy image. What changes between a Laravel app and a Next.js app is the Dockerfile — and that's where it belongs.

If you're currently juggling three different deploy pipelines across your projects, consolidating to one Docker-based pattern is one of those investments that pays off every time you start a new project or come back to an old one.