Post-mortem: crypto mining malware across an end-of-life Laravel install
A compromised Laravel 9 installation on a shared server let an attacker install a cryptocurrency miner, drop webshells across several unrelated applications sharing the same host, and maintain persistence for roughly two weeks before anyone noticed. This post walks through how the intrusion happened, what the attacker did once inside, and the practical hardening I now apply to every production box as a result.
Names and exact identifiers are redacted. Everything else is from the actual incident.
Timeline
Week 0 — The Laravel 9 app hadn't been updated since roughly its original deploy. PHP 8.1, Laravel 9.x, no Dependabot, no automated composer audits. The application was low-traffic internal tooling that "just worked" and nobody touched.
Week 2 — Attacker exploits a known RCE in an outdated dependency (not Laravel core itself, but a package in the tree). Shell access as the web user.
Week 2 + few hours — Attacker drops a webshell disguised as a Laravel Blade view file in storage/framework/views/. The filename follows Laravel's compiled-view convention ({hash}.php), making it nearly invisible to anyone browsing the directory.
Week 2 + 1 day — Attacker enumerates the server. Finds other Laravel apps in neighbouring directories, shared user accounts, writable tmp directories. Drops similar webshells in three more applications' storage directories.
Week 2 + 2 days — Cryptocurrency miner installed as a systemd user service, running under the web user's account. Named systemd-update-manager to blend in with system processes. CPU usage cranks to 70% sustained.
Week 4 — Client notices the server feels sluggish and the hosting bill has spiked due to CPU burst credits being consumed. Calls me.
Discovery to clean — About 14 hours of work: triage, isolation, forensic snapshot, full rebuild, DNS cutover, post-clean validation.
How the attack worked
The initial foothold was the interesting part. The Laravel app itself wasn't directly vulnerable — the codebase was fine, auth was correct, no obvious SQLi or mass assignment issues. The vulnerability was in a third-party package deep in the dependency tree, one of those transitive dependencies that you don't realise you're pulling in.
Specifically: an ImageMagick-related PHP library with a known RCE when processing certain malformed image inputs. The Laravel app accepted user-uploaded avatar images and passed them to this library for thumbnail generation. Feed it a crafted file, get code execution as the web user.
This is not an exotic attack. It's the oldest web-app vulnerability class there is. What made it work in 2025 was the combination of:
- A known-vulnerable version of a package
- No `composer audit` in the pipeline
- No automated security updates
- A Laravel version that had just dropped out of security support, so even Laravel itself wasn't getting patches anymore
Each of those alone would probably have been fine. All four together was an open door.
What the attacker actually did
Once in, three phases:
Phase 1: Persistence
Webshell files dropped in:
storage/framework/views/{hash}.php ← main webshell
public/.well-known/status.php ← fallback, in a rarely-audited dir
bootstrap/cache/config.php ← only invoked if the legit cache is cleared
Each shell was different code, not a copy-paste, so signature-based detection on one wouldn't find the others. One used base64-encoded payloads via eval(), one used system() through a crafted $_REQUEST field, one phoned home to a command-and-control URL for instructions.
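Shells like these can often be flushed out with a crude pattern sweep. Here is a minimal sketch, assuming a standard Laravel directory layout; the pattern list is a starting point, not a complete signature set:

```shell
# scan_webshells APP_DIR -- list PHP files in Laravel's writable
# directories that contain constructs common to webshells.
scan_webshells() {
    grep -rlE 'eval\(|base64_decode\(|system\(|shell_exec\(|\$_REQUEST' \
        "$1/storage/framework/views" \
        "$1/bootstrap/cache" \
        "$1/public" 2>/dev/null
}
```

Run it as `scan_webshells /var/www/app`. Legitimate compiled Blade views live in `storage/framework/views/` too, so treat hits as leads to inspect by hand, not verdicts.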
Phase 2: Lateral movement
The shared hosting setup meant the web user had read access to other applications on the box. The attacker enumerated /var/www/, found three other Laravel sites, and dropped webshells in each. None of those sites were directly exploitable — the attacker didn't need to re-exploit, they just needed write access, which the web user already had.
This is the bit that turned a single-site compromise into a multi-site compromise. The initial vulnerability wasn't in the other apps, but the shared filesystem meant they shared the blast radius.
Phase 3: Monetisation
The miner itself was a standard XMRig build, configured to pool-mine Monero. It was installed in /home/{web-user}/.local/bin/, launched via a user-level systemd unit at /home/{web-user}/.config/systemd/user/update-manager.service. The unit file contained a reasonable-looking description and pointed at a binary with a plausible system-sounding name.
The giveaway was CPU usage — sustained 70% on a server that should idle at 5%. Hosting cost alerts eventually fired, but two weeks after the fact.
What I learned and changed
Everything below is what I now do by default on every Laravel production server I touch, as a direct result of this incident.
1. composer audit in CI on every build
# In the CI workflow
- name: Audit dependencies
run: composer audit --abandoned=report
This catches known-vulnerable packages at PR time, before they ship. If a transitive dependency picks up a CVE, you find out within hours, not weeks. It's free, it takes five seconds per build, and the one time it catches something it's worth every CI minute spent on it.
2. Weekly automated dependency updates
Dependabot or Renovate, configured to:
- Group minor and patch updates into one weekly PR
- Flag major version changes separately for manual review
- Auto-merge patch updates after tests pass (only for trusted packages — Laravel, Symfony, popular frameworks)
The version drift that enabled this whole incident came down to a single package being three minor versions behind. An automated weekly update would have closed the window.
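As a sketch, the policy above maps onto a `.github/dependabot.yml` roughly like this (repository layout assumed; note that auto-merge is configured in repository settings or a separate workflow, not in this file):

```yaml
# Illustrative dependabot.yml; grouping requires a recent Dependabot.
version: 2
updates:
  - package-ecosystem: "composer"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      minor-and-patch:
        update-types:
          - "minor"
          - "patch"
    # Major versions fall outside the group, so they arrive as
    # separate PRs for manual review.
```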
3. Don't run Laravel on EOL versions
Laravel's LTS policy is clear — security fixes end on a known schedule. Running EOL Laravel in production is running a server that explicitly has no safety net for new CVEs. Budget an upgrade before a version goes EOL, not after.
For clients running EOL versions, the conversation is now: "I can maintain this at my current rate, but for an additional fee, because manual backporting of security fixes costs time. Or we budget a week to upgrade and you stop paying the EOL tax." Most clients, faced with explicit pricing, pick the upgrade.
4. File integrity monitoring
aide or tripwire running nightly, checksumming application files. Alerts on any change outside a deploy window.
A webshell in storage/framework/views/ is hard to spot manually. A monitoring system that reports "file X changed at 3:47am, no deploy scheduled" catches it in hours, not weeks.
Cheap modern alternative: a git-based equivalent. On deploy, git ls-files | xargs sha256sum > /var/log/file-manifest.txt. Nightly cron compares current filesystem against the manifest. Any unexpected additions get emailed.
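A minimal sketch of that manifest approach, assuming the app root is a git checkout and filenames contain no spaces (paths here are illustrative):

```shell
#!/usr/bin/env bash
# Deploy-time manifest plus nightly drift checks, as described above.
MANIFEST="${MANIFEST:-/var/log/file-manifest.txt}"

make_manifest() {   # run from the app root at deploy time
    git ls-files -z | xargs -0 sha256sum > "$MANIFEST"
}

check_manifest() {  # nightly: non-zero exit + report if tracked files changed
    sha256sum --check --quiet "$MANIFEST"
}

check_additions() { # nightly: files on disk the manifest doesn't know about
    find . -type f -not -path './.git/*' | sed 's|^\./||' | sort |
        comm -13 <(cut -d' ' -f3- "$MANIFEST" | sort) -
}
```

Wire `check_manifest` and `check_additions` into a cron job that emails any non-empty output.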
5. Separate user per application
This was the change that would've limited the damage most. If each Laravel app ran as its own Linux user with filesystem permissions scoped to that app's directory, the initial compromise would have stayed contained to one site.
Modern Docker-based setups get this for free — each container runs isolated. Traditional LAMP setups with shared users are where this bites.
If you must share a box, each app gets its own system user, its own PHP-FPM pool running as that user, and its own chmod 750 directory tree. Nginx reverse-proxies to the appropriate pool per vhost.
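For the PHP-FPM side, a per-app pool looks roughly like this; the pool name, user, and socket path are placeholders:

```ini
; /etc/php/8.1/fpm/pool.d/app1.conf -- illustrative per-app pool
[app1]
user = app1
group = app1
listen = /run/php/app1.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```

The matching nginx vhost then points `fastcgi_pass` at `unix:/run/php/app1.sock`, and the app's directory tree is owned by `app1:app1` with mode 750, so the web shells from Phase 2 would have hit a permission wall.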
6. Resource monitoring with alerts
CPU usage alerts that fire on sustained-high, not just peaks. A server at 70% CPU for four hours straight is almost always either a runaway process or a miner. Either way, you want to know about it the same day.
Set the alert threshold below what your actual workload would ever hit. For most Laravel apps, sustained > 40% CPU is abnormal and worth a look.
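One low-tech way to implement this, sketched here using the 15-minute load average as a proxy for sustained CPU; the threshold and the alert action are placeholders for whatever notification channel you already use:

```shell
#!/usr/bin/env bash
# Cron-friendly sustained-CPU check: 15-minute load average expressed
# as a percentage of core count.
THRESHOLD="${THRESHOLD:-40}"

cpu_sustained_pct() {
    read -r _ _ load15 _ < /proc/loadavg
    awk -v l="$load15" -v c="$(nproc)" 'BEGIN { printf "%d", l / c * 100 }'
}

check_cpu() {
    pct=$(cpu_sustained_pct)
    if [ "$pct" -gt "$THRESHOLD" ]; then
        echo "ALERT: sustained CPU at ${pct}% (threshold ${THRESHOLD}%)"
    fi
}
```

Run `check_cpu` from cron every 15 minutes and pipe non-empty output into mail or your chat webhook. Because it uses the 15-minute average, brief deploy or queue spikes won't trip it, but a miner will.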
7. Egress filtering
The crypto miner phoned home to a stratum pool on a well-known port. If the server had had egress filtering that only allowed outbound traffic to specific destinations (package repositories, external APIs it actually used, logging endpoints), the miner could never have reached its pool, and the attacker would have gotten zero value out of the compromise.
This is easy on cloud providers — AWS security groups, DigitalOcean firewalls, etc. It's a one-time setup. Block all outbound by default, allowlist what's actually needed. This alone would have made the compromise unprofitable.
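On a bare Linux box, ufw can do the same job. The sketch below prints the ruleset rather than applying it, so it can be reviewed first; the allowlisted ports are examples of "what's actually needed", so audit your app's real outbound destinations before piping this to sh:

```shell
# Prints an illustrative default-deny egress ruleset for ufw.
print_egress_rules() {
    cat <<'EOF'
ufw default deny outgoing
ufw allow out 53
ufw allow out 80,443/tcp
ufw allow out 25/tcp
ufw enable
EOF
}
```

With default-deny outgoing in place, the miner's stratum connection is simply dropped. Verify the result with `ufw status verbose` after applying.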
8. Read-only filesystem where possible
Application code directories mounted read-only at runtime. storage/, bootstrap/cache/, and public/uploads/ need to be writable; everything else doesn't. A webshell dropped into app/ fails to save.
This is easy in Docker (read-only root filesystem, named volumes for writable paths) and possible but fiddlier on bare-metal Linux.
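In Docker Compose terms, the read-only setup is a few lines; the image name and container paths here are placeholders:

```yaml
# Illustrative service: read-only root filesystem, writable volumes
# only where Laravel needs them.
services:
  app:
    image: my-laravel-app:latest
    read_only: true
    tmpfs:
      - /tmp
    volumes:
      - storage-data:/var/www/html/storage
      - cache-data:/var/www/html/bootstrap/cache
      - uploads-data:/var/www/html/public/uploads

volumes:
  storage-data:
  cache-data:
  uploads-data:
```

With this in place, all three webshell drop locations from Phase 1 except `storage/framework/views/` become unwritable, and the miner binary has nowhere persistent to live.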
The cost
For the client, direct costs of the incident:
- Roughly R35,000 in my time to investigate, clean, and harden
- Roughly R8,000 in extra hosting bills from the miner's CPU usage over two weeks
- Unknown indirect cost — internal staff time dealing with slow systems, trust impact of "our servers were compromised"
None of the compromised sites handled sensitive data beyond internal tooling, so there was no data-breach notification obligation. If it had been a customer-facing app with personal data, the POPIA reporting requirements would have added another dimension of cost entirely.
The takeaway
Two weeks of undetected access on a low-traffic internal tool, across four applications, because nobody was watching a server that seemed fine.
The lesson isn't "Laravel is insecure" — it isn't. The lesson is that any application you stop touching eventually becomes a liability. Security isn't a one-time deploy setting; it's an ongoing maintenance commitment, and if you don't budget for it, the attackers will charge you for it instead.
If you're running any Laravel app today that hasn't had its dependencies updated in the last six months: stop reading this post, run composer audit, and go fix whatever it tells you about.