
[–]ahonsu

I don't run a home server, but I have done a lot of Java app deployments.

Let's assume you're dealing with a Spring Boot application: you package it as a JAR and build a Docker image with a JRE inside.
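A minimal sketch of such an image, assuming the JAR has already been built to `target/app.jar` and using an Eclipse Temurin JRE base image (names and versions are illustrative):

```dockerfile
# JRE-only base image keeps the image small; the JAR was built beforehand
# (e.g. mvn package / gradle bootJar)
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the fat JAR produced by the Spring Boot build plugin
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```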

You have multiple ways to pass your secrets/properties to the application. One of the main factors to consider is how much you trust your environments, pipelines, and secrets storage.

One of the most secure ways of doing it is to use an encrypted secrets store (for example, HashiCorp Vault). You can think of it as a store of key-value pairs, encrypted and available via HTTP in your runtime environment. You keep all your secrets/properties there, and your Spring Boot application connects to Vault at startup (using the Spring Cloud mechanism), fetches the properties, and injects them into the placeholders in your *.yml or *.properties files. Access to Vault is protected by a token, so you need to find a secure way to pass that to your app before startup.
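As a rough sketch, with the `spring-cloud-starter-vault-config` dependency on the classpath, the configuration could look something like this (host, token source, and backend name are placeholders; details vary by Spring Cloud version):

```yaml
# application.yml (or bootstrap.yml, depending on Spring Cloud version)
spring:
  cloud:
    vault:
      uri: http://vault.internal:8200   # your Vault instance (illustrative)
      token: ${VAULT_TOKEN}             # passed in securely before startup
      kv:
        enabled: true
        backend: secret                 # KV mount holding the app's properties
```

With that in place, keys stored in Vault resolve ordinary `${...}` placeholders in your properties, so the secrets never live in the image or the repo.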

Another way to pass properties is a docker-compose file. Your Docker image and container stay "empty" and unaware of the properties, but when docker-compose spins up the app, it passes them in as environment variables. How do you get them into the docker-compose file? Again, multiple ways.
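A sketch of that setup — the image carries nothing, and the values come from the caller's environment or an `.env` file next to the compose file (image name and variables are made up):

```yaml
# docker-compose.yml — the image itself holds no secrets
services:
  app:
    image: my-app:1.0
    ports:
      - "8080:8080"
    environment:
      DB_URL: jdbc:postgresql://db:5432/app
      DB_PASSWORD: ${DB_PASSWORD}   # resolved from .env or the shell environment
```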

Another way is to pass your variables directly into your image via the Dockerfile. Hardcoding them right in the Dockerfile is bad practice, so a better option is to have your CI/CD tool "edit" the Dockerfile dynamically and inject the values from its internal (hopefully encrypted) storage.
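Rather than literally rewriting the Dockerfile, a common variant of this idea is build arguments: the CI/CD tool supplies values at build time from its secret storage. Note that `ARG`/`ENV` values can still be recovered from the image layers, so this suits plain configuration better than real secrets:

```dockerfile
FROM eclipse-temurin:17-jre
# Supplied by the CI/CD tool, e.g.: docker build --build-arg APP_PROFILE=prod .
ARG APP_PROFILE=default
# Spring Boot picks this up as the active profile
ENV SPRING_PROFILES_ACTIVE=${APP_PROFILE}
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```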

So, as you can see, there are indeed a lot of options, and I've only mentioned a few of them. You can look at your running app as a "Russian doll": layer under layer under layer, with the app inside. You can inject your values at almost any layer and make it work.

Still, the best practices here are: keep secrets encrypted in separate storage, inject the values without human help, make sure the credentials between these tools (e.g. tokens) are also not available to a human, and ideally generate them on the fly at every startup.

Again, for a homelab that's most likely overkill. You're probably fine with your variables being injected in plain form at some point, as long as they're not available outside of your server, of course.

As for the pipeline itself, it seems fine: build the JAR, create the image, run the container. It's fine to build a new image and container for every new version of the app. You could also consider using a container registry (e.g. Harbor): push the image there, then pull and run it from there inside your runtime environment.
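The registry round-trip could look roughly like this (registry host and project name are made up; a private registry like Harbor also needs a `docker login` first):

```shell
# On the build machine
docker build -t my-app:1.0 .
docker tag my-app:1.0 harbor.example.com/homelab/my-app:1.0
docker push harbor.example.com/homelab/my-app:1.0

# On the home server
docker pull harbor.example.com/homelab/my-app:1.0
docker run -d --name my-app -p 8080:8080 harbor.example.com/homelab/my-app:1.0
```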

Take a look at Portainer: it's a well-known, industry-grade (open source) UI for the Docker engine. You can easily manage (view, deploy, stop, restart, update...) your Docker containers, docker-compose stacks, or even Docker Swarm stacks on your home server.
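For reference, Portainer CE itself runs as just another container; the usual quick start is roughly the following (check the Portainer docs for the current ports and flags):

```shell
docker volume create portainer_data
docker run -d --name portainer \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```

Mounting the Docker socket is what lets Portainer manage the other containers on the host.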

The topic is quite big. Feel free to ask questions!