Self-Hosting Simplification

=> /posts Return to Posts

Published: 2025-01-12T14:27:23+01:00

I have undertaken a journey to radically change, and in my opinion simplify, my self-hosting setup. For the past 5+ years, I have been manually creating and maintaining nginx configs, one for each service.

But now I plan to gradually move everything to Coolify.

The Matrix server, which runs on the same VPS as most services, is deployed through its own Ansible playbook. This makes the current configuration very complicated: the playbook expects to own the webserver. Getting it to play nicely with other services is slightly challenging.

The VPS environment is quite old, and the Matrix Ansible playbook used to use nginx as a proxy, but it has since switched to Traefik. That is good, but all my configurations are still stuck on nginx! So right now, I have nginx fronting the Matrix server's Traefik instance, along with all the other services.

This means that nginx is the source of truth for everything: SSL certs, routing, service configuration, and so on. It also means that I lose Coolify's more interesting features, like automatic SSL certificate acquisition and automatic binding of services to domains.
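
To make that concrete, here is a rough sketch of the kind of server block this means maintaining by hand for each service. The domain, the certificate paths, and the local port that the playbook's Traefik instance listens on are placeholders, but the shape is the same everywhere: nginx terminates TLS and proxies to whatever is running behind it.

server {
    listen 443 ssl;
    server_name matrix.example.org;

    # nginx owns the certificates and the routing for this service.
    ssl_certificate     /etc/letsencrypt/live/matrix.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/matrix.example.org/privkey.pem;

    location / {
        # Hand everything to the Matrix playbook's Traefik instance,
        # which is assumed here to listen on a local port.
        proxy_pass http://127.0.0.1:8449;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}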

Benefits of Coolify

Even though I have to solve the proxy problem, Coolify is still giving me benefits:

Centralization

My self-hosted services are strewn across several servers, both on VPSes and at home. I proxy the services running at my house through frp to the VPS and expose them via separate nginx server blocks, sometimes with special configs for WebSockets and the like. This is very annoying to manage by hand (though after 5 years, I've gotten it down to a science).
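
As a rough example of one of those "special configs": a home service tunnelled through frp ends up behind a server block like the one below. The hostname and the local port where frp exposes the tunnel are placeholders; the WebSocket upgrade headers are the part that has to be remembered for every service that needs them.

server {
    listen 443 ssl;
    server_name service.example.org;

    ssl_certificate     /etc/letsencrypt/live/service.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.example.org/privkey.pem;

    location / {
        # frp exposes the tunnel from the home network on a local port.
        proxy_pass http://127.0.0.1:7080;
        proxy_http_version 1.1;
        # WebSocket upgrade handling, the "special config" in question.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}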

Services on the VPS are usually bash scripts. They either start a container, or run a program directly. Services at home are usually Docker Compose deployments that I have to manage directly, with a configuration separate from the docker-compose.yaml distributed from upstream. I maintain private forks of several services for both config and functionality changes.

In order to update a service, I have to do it completely manually. AND I have to remember where it's actually running. AND I have to make sure I don't break any configuration. This does not scale. I'm one person, and pretty much all of these are hobby projects. But my time is valuable, because I have a family to take care of, a job, and other hobbies. The less time I spend messing around with underlying stuff, the more time I have for the interesting parts: using my self-hosted services, building something new, etc.

One-Click Deployments

This is one of the biggest selling points of Coolify. There's a bunch of applications that you can install with one click. Notably, though, it doesn't allow you to override or map ports with these one-click installations. Given my proxy problem, I can't make much use of this feature yet. But even without it, I can still make use of the built-in docker compose deployments. Coolify allows a docker-compose.yaml file to be directly uploaded, and it then handles deployment of this to one or more servers.

Automation

Coolify aims to be an open source Vercel/Netlify replacement, and its Git integration is excellent. It has direct integration with GitHub, but can pull from anything it can access over SSH. It also supports webhook deployments from the common git forges, including self-hosted ones like Gitea/Forgejo. This allows me to replace the entire Drone pipeline, and I no longer need to rely on privileged directory mounts into the Drone runner.

New Deployment Process

The original deployment process, before Coolify, ran through a Drone pipeline on the VPS, with directories from the host mounted into the runner and the Gemini server serving the built content straight from the host filesystem. This has some obvious disadvantages: it's stuck on one server, it cannot scale, and it's also less secure. While a Gemini server will likely never need to scale, having to mount directories from the host VPS is not so great.

The new deployment process, after Coolify, is driven from the git repository: a push triggers Coolify, which pulls the source and builds the Docker Compose project. The Docker Compose build serves all content directly as a container, which allows the static site to be hosted anywhere (not just the VPS), AND it does not require the VPS directory to be mounted anywhere. This increases isolation and security of the server.

New docker-compose.yaml

The new Docker Compose file is radically small. The Gemini host is defined below, and the only volume mount is an existing mount that holds the TLS certificate for gemini://agnos.is.

services:
  capsule:
    # The Gemini server itself, built from the repository's Dockerfile.
    container_name: agnosis-capsule
    hostname: capsule
    build:
      context: .
      dockerfile: Dockerfile
    init: true
    volumes:
      # Existing mount holding the TLS certificate for gemini://agnos.is.
      - /path/to/.certificates:/app/.certificates
    environment:
      - GEMINI_HOST=${GEMINI_HOST}
    ports:
      # 1965 is the standard Gemini port.
      - 1965:1965
  http-proxy:
    # HTTP front end for the capsule, built from Dockerfile.kineto.
    container_name: agnosis-http-proxy
    build:
      context: .
      dockerfile: Dockerfile.kineto
      args:
        CSS_FILE: /files/agnosis.css
    environment:
      - GEMINI_HOST=${GEMINI_HOST}
    ports:
      - 9080:8080
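
With this file, the capsule container speaks Gemini directly on the standard port (1965) using the mounted certificate, while the http-proxy container, built from Dockerfile.kineto, exposes an HTTP rendering of the same content on host port 9080. Because the host name comes in through GEMINI_HOST, the same file can be deployed anywhere Coolify can reach.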

License: CC-BY-SA-4.0.
