
From Code to Cloud: Deploying to a VPS

I recently built this very portfolio blog to document my learning. I could have simply pushed to Vercel or Netlify and called it a day—platforms that let you deploy a site with a single click or git push. But where’s the fun (and learning) in that? Instead, I chose to spice things up by standing up my own infrastructure on a DigitalOcean VPS and handling every layer of the stack myself.

The DIY Decision

When you use a managed host, much of the heavy lifting—server provisioning, OS updates, TLS certificates, and scaling—is abstracted away. That’s ideal for quick prototypes, but it hides the operational details. By doing this manually:

  • I gained a deeper understanding of Linux administration.
  • I experienced firsthand the challenges of securing a server.
  • I mastered containerization, orchestration, and reverse proxy configuration.

Follow along as I walk through each step, with real code snippets and configurations.

1. Provisioning on DigitalOcean

I kicked off with an 8GB droplet in the London region. After SSH’ing in as root, I ran:

apt update && apt upgrade -y
adduser deployer
usermod -aG sudo deployer

Switching to a non-root user is a small step, but it’s fundamental for secure operations.
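
Before locking down SSH in the next step, I made sure I could actually log in as deployer with a key. A minimal sketch, run from my local machine (the IP here is just a placeholder for the droplet’s address):

ssh-copy-id deployer@203.0.113.10   # install my public key for the deployer account
ssh deployer@203.0.113.10           # confirm key-based login works before hardening anything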

2. Hardening the Server

Out of the box, root SSH login is enabled and no firewall is active, which leaves obvious doors open. To close them (verification sketch after this list), I:

  • Disabled root SSH in /etc/ssh/sshd_config:

    PermitRootLogin no
    
  • Set up UFW rules:

    ufw allow OpenSSH
    ufw allow http
    ufw allow https
    ufw enable
    
  • Automated OS patching:

    apt install unattended-upgrades -y
    dpkg-reconfigure unattended-upgrades
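
The sshd change only takes effect once the daemon is reloaded, so I finished with a quick verification pass. Roughly (service name as on stock Ubuntu):

sshd -t                  # validate sshd_config syntax before reloading
systemctl restart ssh    # apply the PermitRootLogin change
ufw status verbose       # confirm only OpenSSH, 80, and 443 are allowed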
    

3. Building a Minimal Docker Image

Instead of trusting a build on a hosted platform, I crafted a multi-stage Dockerfile:

FROM golang:1.24.3-alpine3.20 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" -o server ./cmd/sulaymanhasan

FROM scratch
COPY --from=builder /app/server /server
COPY --from=builder /app/migrations /migrations
EXPOSE 8000
CMD ["./server"]

Compiling a static binary (CGO_ENABLED=0) in the golang:alpine stage and copying only that binary into scratch yields an ultra-small final image with zero extra layers.
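
A quick way to see the payoff is to build the image and check its size (the tag is just a placeholder I use locally):

docker build -t blog-app:local .
docker image ls blog-app:local    # the scratch-based image is just the binary plus migrations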

4. Running Postgres in a Container

Rather than a managed DB, I spun up Postgres locally with volume persistence:

blogdb:
  image: postgres:13-alpine
  restart: always
  env_file: [.env]
  environment:
    POSTGRES_USER: postgres
    POSTGRES_DB: blog
    POSTGRES_PASSWORD: ${DB_PASSWORD}
  volumes:
    - db-data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 10s
    timeout: 5s
    retries: 5
volumes:
  db-data: {}

Data lives in a named Docker volume, giving me both simplicity and control.
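
Because everything lives in that volume, I also keep a simple logical backup habit. A sketch (service name taken from the snippet above, output filename is arbitrary):

docker-compose exec blogdb pg_isready -U postgres                          # same check the healthcheck runs
docker-compose exec -T blogdb pg_dump -U postgres blog > blog_backup.sql   # dump the blog database to the host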

5. Orchestration with Docker Compose

Bringing up the entire stack with one command:

version: "3.8"
services:
  blog-app: <...>
  blogdb: <...>
  caddy:
    image: caddy:latest
    container_name: caddy-service
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - $PWD/site:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  db-data: {}
  caddy_data: {}
  caddy_config: {}

One docker-compose up -d and my app, database, and reverse proxy were all live.
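
After bringing the stack up, I usually do a quick sanity check before walking away; something along these lines:

docker-compose ps             # all three services should be Up, with blogdb reporting healthy
docker-compose logs -f caddy  # watch the proxy start and request its certificates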

6. Zero-Touch HTTPS with Caddy

For HTTPS, I leaned on Caddy’s automatic certificate management:

sulaymanhasan.dev {
	tls {
		protocols tls1.2 tls1.3
	}
	reverse_proxy blog-app:8000 {
		header_down Strict-Transport-Security max-age=31536000;
	}
}

Caddy fetched and renewed certificates automatically—no manual steps.
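
To double-check from outside the droplet that the certificate and the HSTS header are actually being served, something like this does the job:

curl -sI https://sulaymanhasan.dev | grep -i strict-transport-security
openssl s_client -connect sulaymanhasan.dev:443 -servername sulaymanhasan.dev </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates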

7. Deploying & Future CI/CD

Each deploy today is manual:

git pull && docker-compose pull && docker-compose up -d --build --remove-orphans

Next, I plan to build a GitHub Actions pipeline to automate building, testing, and deploying via SSH or webhooks.
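
The eventual pipeline will most likely just run a small script on the droplet over SSH. A rough sketch of what that script could look like (paths and the filename are placeholders, not the final setup):

#!/usr/bin/env bash
# deploy.sh - run on the droplet by the future CI job
set -euo pipefail

cd /home/deployer/blog                          # placeholder repo path
git pull --ff-only                              # refuse surprise merge commits
docker-compose pull                             # refresh base images (postgres, caddy)
docker-compose up -d --build --remove-orphans   # rebuild the app image and restart what changed
docker image prune -f                           # drop dangling layers left by the rebuild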

Wrapping Up

By choosing this manual route, I traded convenience for mastery. I now understand every layer of my deployment pipeline—from kernel patches to TLS certificates.
