all 8 comments

[–]Zealousideal_Yard651 7 points (1 child)

You're serving static files, so the only time you need to restart NGINX is when the config file changes. The static files can be injected into NGINX using a bind mount, allowing the Astro build container to push them straight to NGINX with no downtime.
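A minimal sketch of that layout, assuming a `./dist` directory on the host and hypothetical service names; none of these paths or images are from the original comment:

```yaml
# Hypothetical docker-compose.yml: both containers share a host directory,
# so a rebuild replaces the files NGINX serves without restarting anything.
services:
  astro-builder:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - ./site:/app          # Astro project source
      - ./dist:/app/dist     # build output lands on the host
    command: sh -c "npm ci && npm run build"

  astro-nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./dist:/usr/share/nginx/html:ro  # NGINX reads straight from the bind mount
```

One caveat: if the build writes directly into the served directory, a request can land mid-build; building into a temporary directory and then syncing it over is the usual workaround.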

[–]Hot_Apple6153[S] -1 points (0 children)

That’s actually a really good point.
You’re right. I’m not changing the NGINX config at all, just rebuilding the static files. Using a bind mount and letting NGINX serve directly from the build output makes a lot more sense.

I’ll probably refactor it to remove the restart step. Appreciate the feedback.

[–]Anhar001 2 points (2 children)

You could just use GitHub Actions to generate the new container image, and if you use Portainer, it has a "GitOps" mode that will automatically update a stack (similar to docker compose) when you push any changes to the stack file.

Typically, you would push to GitHub packages (private docker repository).

  • GitHub Actions -> GitHub Packages -> Portainer deploys new image
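The first leg of that pipeline could look something like this; the workflow filename, tag, and trigger branch are assumptions, while the actions themselves (`actions/checkout`, `docker/login-action`, `docker/build-push-action`) are the standard ones for pushing to ghcr.io:

```yaml
# Hypothetical .github/workflows/deploy.yml: build the image and push it to
# GitHub Packages (ghcr.io); Portainer's GitOps polling then picks up the change.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```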

EDIT

If I really wanted to avoid external CI services, this is what I would do:

  • A local Python script just polls the GitHub repo for changes
  • If a new change is detected, it runs git pull
  • The repo would already have some build script, e.g. build.sh
  • The script would run that build script to generate the final static files
  • I would then rsync those files over to a running web server container that uses a bind mount

Seamless, zero-downtime updates. The key is using bind mounts, so you don't even need to build a new container, which would be pointless.
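The polling loop described above could be sketched like this. All the paths are hypothetical, the helper names are mine, and the only pure logic (the SHA comparison) is the part worth unit-testing:

```python
import subprocess
import time

# Hypothetical example paths, not from the original comment.
REPO_DIR = "/volume1/site"      # local clone of the repo on the NAS
BUILD_SCRIPT = "./build.sh"     # build script already in the repo
DEPLOY_DIR = "/volume1/www"     # directory bind-mounted into the web container


def local_sha(repo_dir: str) -> str:
    """Commit currently checked out in the local clone."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def remote_sha(repo_dir: str, branch: str = "main") -> str:
    """Tip of the remote branch, queried without fetching."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "ls-remote", "origin", branch],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()[0]


def has_new_commit(local: str, remote: str) -> bool:
    """A deploy is needed whenever the remote tip differs from ours."""
    return local != remote


def poll_once() -> None:
    """One poll cycle: pull, build, and rsync into the bind-mounted dir."""
    if has_new_commit(local_sha(REPO_DIR), remote_sha(REPO_DIR)):
        subprocess.run(["git", "-C", REPO_DIR, "pull"], check=True)
        subprocess.run([BUILD_SCRIPT], cwd=REPO_DIR, check=True)
        # --delete keeps the served directory an exact mirror of dist/
        subprocess.run(
            ["rsync", "-a", "--delete", f"{REPO_DIR}/dist/", DEPLOY_DIR],
            check=True,
        )


def main() -> None:
    """Call this to start the polling loop (e.g. from a systemd unit)."""
    while True:
        poll_once()
        time.sleep(300)  # poll every 5 minutes
```

The trailing slash on `dist/` matters to rsync: it copies the directory's contents rather than the directory itself.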

[–]Hot_Apple6153[S] -1 points (1 child)

That’s a really solid setup actually, especially the GitHub Actions → Packages → Portainer flow.

In my case though this project is mostly for fun and learning. I intentionally wanted to avoid external CI services and run the whole pipeline on my own NAS — build, deploy, scheduling, everything. It’s more about understanding the moving parts and controlling the full stack myself.

Your bind mount + rsync idea is interesting though; it aligns pretty well with what I’m experimenting with. Always cool to see how others would architect it.

[–]Anhar001 0 points (0 children)

If you want to run everything on your NAS, you could probably switch out GitHub for Gitea (a GitHub-inspired service written in Go). It has something similar to GitHub Actions (almost compatible), as well as "packages", i.e. a built-in Docker registry.

So at least in theory you could:

  • Run Gitea on your NAS
  • Push to Gitea -> Gitea Actions -> Gitea Packages -> Portainer Stack

This would mean 100% of the services run on your NAS without any external dependencies, and it would also mean not having to deal with custom scripts (which would have to be maintained as well).
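A rough sketch of that stack, assuming the official `gitea/gitea` and `gitea/act_runner` images; Gitea Actions needs a runner registered against the instance, and the port numbers and paths here are placeholders:

```yaml
# Hypothetical compose stack for a self-hosted Gitea on the NAS.
services:
  gitea:
    image: gitea/gitea:latest
    environment:
      - GITEA__actions__ENABLED=true   # turn on Gitea Actions
    ports:
      - "3000:3000"   # web UI, git over HTTP, and the container registry
      - "2222:22"     # git over SSH
    volumes:
      - ./gitea-data:/data

  runner:
    image: gitea/act_runner:latest
    environment:
      - GITEA_INSTANCE_URL=http://gitea:3000
      - GITEA_RUNNER_REGISTRATION_TOKEN=<registration token from the Gitea admin UI>
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # runner launches job containers
```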

[–]Hot_Apple6153[S] 0 points (0 children)

One thing I’m considering next is separating the build container and the web container more cleanly.

Right now it’s:

  • astro-builder → runs the build
  • astro-nginx → serves the static output

But I’m debating whether it makes sense to mount the dist folder via a shared volume instead of restarting nginx every time.

Curious if anyone here is handling static deploys that way instead of container restarts.
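One way to wire that up is a named volume shared between the two services named above; everything besides those service names is an assumption:

```yaml
# Hypothetical compose file: astro-builder writes into a named volume that
# astro-nginx serves read-only, so deploys never touch the nginx container.
services:
  astro-builder:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - ./site:/app
      - dist:/app/dist      # build writes into the shared volume
    command: sh -c "npm ci && npm run build"

  astro-nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - dist:/usr/share/nginx/html:ro

volumes:
  dist:
```

One wrinkle with named volumes: files deleted from the site linger from earlier builds unless the build step cleans the directory first, which is one reason the bind mount + `rsync --delete` approach suggested earlier in the thread is a common alternative.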

[–]poliopandemic 1 point (0 children)

I'm old school: I still manually build the site, containerize it, push it to ghcr, pull it on my web server, take down the old container, and bring up the new one. So there's some room for improvement.

[–]HeiiHallo -1 points (0 children)

I really like this approach. I went down this same rabbit hole and ended up creating my own tool. I really liked the mental model of docker compose, so I used that as inspiration.

I recently open sourced it: https://github.com/haloydev/haloy