Why I Switched From Building on Coolify to GHCR for Faster, Safer Deployments
Recently, I changed the way I deploy one of my projects. Until then, I was building the Docker image directly inside Coolify on my VPS. It worked, but it was not an efficient use of my server's resources: the VPS has 4GB RAM, a 50GB SSD, and 2GB of swap, so every deployment had to be planned carefully.
After a few deployments, one thing became very clear: building on the server itself was expensive. RAM usage would spike during the build process, deployments would take a long time, and the overall experience felt heavier than it needed to be. Because of that, I switched to a workflow where I build the image elsewhere and push it to GitHub Container Registry (GHCR), then let Coolify simply pull and run the ready-made image.
My Previous Setup: Building Directly in Coolify
At first, my deployment flow was simple. I pushed my code, Coolify pulled the repository, built the Docker image on the VPS, and then deployed it. This is convenient when getting started because everything happens in one place. You do not need to think much about registries, image tags, or external build pipelines.
But convenience has a cost, especially on a smaller VPS.
During builds, I noticed:
- RAM usage would spike heavily while dependencies were installed and the project was built.
- The server would feel slower during deployment windows.
- Deployments could take 5 to 10 minutes, and sometimes even longer.
- The build process competed with the already running services on the same machine.
That is the main drawback of building directly on a low-resource server: the server is trying to act as both your runtime environment and your build machine at the same time.
Why This Became a Problem
On a large server, this might be acceptable. But on a VPS with only 4GB RAM, every extra build step matters. Package installation, Docker layer creation, framework compilation, and asset generation all consume memory. Even with 2GB swap, relying on swap too much can slow everything down.
The issue was not only deployment time. The bigger concern was predictability. A deployment should feel boring and stable. Instead, server-side builds were introducing pressure exactly when I wanted things to be calm.
That is what pushed me to rethink the setup.
The New Setup: Build Once, Push to GHCR, Deploy Anywhere
Now I build the Docker image outside the VPS and push it to GitHub Container Registry (GHCR). Coolify no longer needs to build the image from source during deployment. It only needs to pull the already-built image and run it.
This changes the deployment model completely:
- Build work happens outside the VPS.
- The VPS only downloads the final image and starts the container.
- Coolify becomes a runtime orchestrator instead of a heavy build worker.
In practice, that made deployment much more comfortable.
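The build-and-push step can live in any CI system. As a rough sketch, a minimal GitHub Actions workflow for this (the branch name and tags are placeholders; it assumes the workflow's `GITHUB_TOKEN` is granted package write permission, as shown) might look like:

```yaml
name: build-and-push

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write   # lets GITHUB_TOKEN push images to GHCR

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}
```

Tagging each image with the commit SHA as well as `latest` means the exact artifact that passed CI is the one the VPS pulls, and any release can be re-deployed later by tag.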
What Improved After Switching to GHCR
After moving to GHCR-based deployments, the improvements were easy to notice.
- Deployment time dropped to under 5 minutes.
- RAM spikes during deployment were greatly reduced.
- The server stayed more responsive while new versions were being deployed.
- The process became more repeatable and easier to trust.
Instead of asking the VPS to install dependencies and compile the app every time, I now ship a ready-to-run artifact. That is a much better fit for a small production server.
Why GHCR Makes Sense Here
There are several reasons GHCR worked well for this setup.
1. It removes build pressure from the VPS
This is the biggest win. My server should focus on serving traffic, not on doing heavy Docker builds.
2. It makes deployments faster
Pulling a built image is usually much faster than rebuilding everything from scratch. That alone made deployments feel much smoother.
3. It is cleaner operationally
With GHCR, the deployable unit is the Docker image itself. That means I can version it, reuse it, and deploy the exact same artifact anywhere I want.
4. It is better for smaller servers
If your server resources are limited, offloading build work is one of the simplest optimisations you can make.
Why I Still Use Coolify
This change does not mean Coolify was the problem. I still use Coolify because it makes self-hosting and deployment management much easier. The improvement came from changing how I use Coolify.
Instead of using it to both build and run the app, I now use it mainly to:
- pull the prebuilt image,
- manage the container,
- handle environment variables,
- manage networking and domains,
- and keep deployments simple.
That split of responsibilities feels much better.
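On the Coolify side, this amounts to pointing the resource at a registry image instead of a Git repository. A Compose-style sketch of what the VPS ends up running (the image name, port, and env file are placeholders for my setup) looks roughly like:

```yaml
services:
  app:
    # Prebuilt in CI and pushed to GHCR; the VPS only pulls and runs it
    image: ghcr.io/your-user/your-app:latest
    restart: unless-stopped
    env_file: .env        # environment variables managed through Coolify
    ports:
      - "3000:3000"       # adjust to whatever port your app listens on
```

One note: if the GHCR image is private, the VPS needs registry credentials, either via `docker login ghcr.io` on the server or by adding the registry credentials in Coolify.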
When Building Directly in Coolify Still Makes Sense
To be fair, building directly in Coolify is not always a bad idea. It is often completely fine if:
- your server has plenty of RAM and CPU headroom,
- your application is small and builds very quickly,
- you want the simplest possible setup in the early stage of a project,
- or you do not yet want to manage image registries and build pipelines.
But once deployments start feeling slow, memory-heavy, or operationally uncomfortable, moving to a prebuilt image flow becomes a very practical upgrade.
My Takeaway
For my current infrastructure, GHCR-based deployments are the better fit. The VPS is relatively small, and I would rather keep its resources focused on running the application than rebuilding it every time I ship a change.
The result is simple:
- less RAM pressure,
- faster deployments,
- better reliability,
- and a cleaner deployment workflow overall.
If you are using Coolify on a smaller VPS and seeing slow builds or RAM spikes, moving your Docker build pipeline to GHCR is absolutely worth considering.
Final Thoughts
Sometimes the best deployment improvement is not changing your app at all. It is changing where the heavy work happens.
That is exactly what this switch did for me. Coolify still handles deployment nicely, but GHCR now handles the image delivery part in a much more efficient way for my server size. For a 4GB RAM VPS with 50GB SSD and 2GB swap, this setup feels more stable, faster, and more production-friendly.