Apr 29, 2026
When I take over a project, as well as checking performance and security, one of the first things I look at is how deployments are handled.
Recently, I've seen sites with a fully manual deploy process. Someone would SSH into the server, pull the latest code, run a few commands, hope nothing broke, and then check the site. No documented steps, no consistent process, no rollback plan.
This is more common than people admit. And it carries more risk than most teams realise.
What's actually at risk with manual deployments
The problem with a manual deployment process isn't that it's slow. It's that it's inconsistent. Every deployment is slightly different depending on who's doing it, how much time they have, whether they remembered the step they skipped last time. Under pressure — when you're pushing a hotfix at 6pm because something broke — the chance of missing something goes up.
Common failure modes I've seen with manual processes:
Forgetting to run database migrations after a code change
Deploying to the wrong environment
Missing a cache clear, leaving stale code serving users
Overwriting a config file with a local version
No record of what was deployed or when
Any one of these can cause a production incident. All of them are preventable.
What we replaced it with
For my platforms, I introduced an automated deployment workflow using GitHub Actions. The process now works like this: code is pushed to the main branch, the pipeline kicks off automatically, runs the required steps in the correct order, and the deployment is complete in around 30 seconds. No manual steps, no variation, no missed commands.
The pipeline handles:
Pulling the latest code to the server
Installing/updating dependencies
Running any required database migrations
Clearing caches
Restarting services if needed
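As a rough sketch, a workflow covering those steps might look like the following. Everything here is illustrative rather than taken from the actual pipeline: the secret names, the app path, and the Laravel-style commands are assumptions, and it assumes an SSH-based deploy to a single server.

```yaml
# .github/workflows/deploy.yml — illustrative sketch, not the actual pipeline
name: Deploy
on:
  push:
    branches: [main]   # every push to main triggers a deploy

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DEPLOY_HOST }}      # hypothetical secret names
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_KEY }}
          script: |
            cd /var/www/app                                    # hypothetical app path
            git pull --ff-only                                 # pull the latest code
            composer install --no-dev --optimize-autoloader    # update dependencies
            php artisan migrate --force                        # run migrations (Laravel-style)
            php artisan cache:clear                            # clear caches
            sudo systemctl reload php8.3-fpm                   # restart services if needed
```

Because the script runs the same commands in the same order on every push, the ordering mistakes a human can make under pressure simply can't happen.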
Every deployment follows the same steps, in the same order, every time. The pipeline run is logged, so there's always a record of what deployed and when. If something goes wrong, the failure shows up in the pipeline — not discovered by a user in production.
The difference it made
Deployments go from a nerve-wracking manual process to a non-event. That matters more than it sounds. When deploying is easy and reliable, teams do it more often. Smaller, more frequent releases mean less risk per deployment and faster delivery of fixes and improvements. Holding changes back because deployment feels risky is a common pattern that automation breaks.
The time saving is secondary. The real benefit is confidence: knowing that what works in development will be deployed in exactly the same way, every single time.
Is this worth doing for smaller platforms?
Yes. The complexity of the setup scales with the platform — a simple PHP application doesn't need a sophisticated pipeline. But even a basic automated deployment that pulls code and clears caches is significantly better than doing it by hand. The investment is typically a few hours, and the benefit is ongoing.
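As a minimal sketch of what that basic version could look like, here is a deploy script a simple pipeline might run. The path and commands are illustrative assumptions, not from a real project, and it defaults to a dry run that prints each step instead of executing it.

```shell
#!/usr/bin/env bash
# Minimal deploy sketch for a simple PHP site. Paths and commands are
# illustrative. DRY_RUN=1 (the default here) prints each step instead of
# executing it; set DRY_RUN=0 on the server to run for real.
set -euo pipefail

APP_DIR="${APP_DIR:-/var/www/app}"   # hypothetical install path
DRY_RUN="${DRY_RUN:-1}"
RAN=()                               # record of steps, same order every time

run() {
  RAN+=("$*")
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run git -C "$APP_DIR" pull --ff-only                      # pull the latest code
run composer --working-dir="$APP_DIR" install --no-dev    # update dependencies
run rm -rf "$APP_DIR/var/cache"                           # clear the app cache
```

Even something this small gives you a repeatable, logged sequence of steps, which is most of the benefit.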
If you're running a production PHP platform with a manual deployment process, it's worth reviewing before the next time something goes wrong during a release.
Want to talk through your deployment setup?
If you're not sure whether your current process is as reliable as it should be, I'm happy to take a look. Get in touch — a short conversation usually makes it clear whether there's anything worth improving.