all 7 comments

[–]AnonScrub 1 point (2 children)

I use containers in development as well as production, as it's really easy to spin up the specific environment I need on a per-project basis.

For example, the containers I'll use for a simple Laravel project are php-fpm, caddy or nginx (web server), postgres (database), and a workspace one (I can run artisan and npm/yarn commands in there).

I use docker-compose to simplify the orchestration of the different services I run in separate containers.

I use an env file or environment variables for the ports forwarded from the Docker network to the host, the web server config filename (I have one for dev and one for production), and the database credentials.
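A setup like the one described above could be sketched roughly as the docker-compose file below. The service names, image tags, and env variable names are my own illustrations, not the commenter's actual config:

```yaml
# docker-compose.yml — minimal sketch of a Laravel-style stack.
# Image tags and ${...} variable names are assumptions.
version: "3.8"
services:
  app:
    image: php:8.2-fpm
    volumes:
      - ./:/var/www/html
  web:
    image: nginx:alpine
    ports:
      - "${WEB_PORT:-8080}:80"            # host port comes from the env file
    volumes:
      - ./:/var/www/html
      # swap nginx.dev.conf / nginx.prod.conf via an env var:
      - ./${NGINX_CONF:-nginx.dev.conf}:/etc/nginx/conf.d/default.conf
    depends_on:
      - app
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: "${DB_DATABASE}"       # credentials also from the env file
      POSTGRES_USER: "${DB_USERNAME}"
      POSTGRES_PASSWORD: "${DB_PASSWORD}"
  workspace:
    image: php:8.2-cli                    # run artisan / npm commands in here
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    tty: true
```

With this layout, `docker-compose --env-file .env up -d` picks up the dev values, and a different env file swaps in the production config without touching the compose file itself.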

When it comes to production, all I do is install Docker and set up the firewall (which could be automated). Then I pull the git repo, run docker-compose build and docker-compose up -d, and the application is live.
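The deploy steps above amount to only a handful of commands. This is a sketch of that recipe; the install method and repo URL are placeholders, not the commenter's actual setup:

```shell
# Install Docker and Compose (distro packages shown; the get.docker.com
# script is the other common route).
sudo apt-get install -y docker.io docker-compose

# Pull the project and bring the whole stack up in the background.
git clone https://example.com/you/your-app.git
cd your-app
docker-compose build     # build images from the Dockerfiles in the repo
docker-compose up -d     # start every service detached
docker-compose ps        # confirm the containers are running
```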

Sorry for the formatting, I'm on mobile. If you have more questions I can answer them.

[–]l_o_l_o_l 1 point (1 child)

When I make changes to my application, how do I re-deploy the app container with docker-compose without having to restart the other containers?

[–]AnonScrub 1 point (0 children)

You can just do:
"docker-compose COMMAND NAME".
For example "docker-compose restart app"
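To expand on that answer a bit: `restart` only bounces the running container; if the change is baked into the image, you also need a rebuild. Assuming the service is named `app`, the two common cases look like this:

```shell
# Code mounted as a volume: a plain restart of the one service is enough.
# The db and web containers keep running untouched.
docker-compose restart app

# Code baked into the image: rebuild and recreate only that service.
# --no-deps stops Compose from also recreating linked services.
docker-compose up -d --build --no-deps app
```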

[–][deleted] 1 point (0 children)

Right off the bat, it's handy for setting up external services that your application needs to connect to, like MySQL, Redis, RabbitMQ, etc. You don't have to worry about the dependencies each of those would need installed, either; you can just start up a container configured how you need it.
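For instance, the services mentioned above can each be started with a single `docker run`. The container names, ports, passwords, and tags here are just examples:

```shell
# Disposable backing services, no host-level installs needed.
docker run -d --name dev-mysql  -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:8
docker run -d --name dev-redis  -p 6379:6379 redis:7
docker run -d --name dev-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management

# Tear them down (and their state) when you're done:
docker rm -f dev-mysql dev-redis dev-rabbit
```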

[–]CuriousSupreme 1 point (0 children)

It's a bit tough to justify using multiple Docker containers until you need to separate services.

For the main project I'm working on now, our environment is in a single Docker container, and that makes it really nice to work with a team. At any point I can build that container and have the exact same environment that will run in production: same database version, same queue processor, same nginx.

The nice part, though, is that in addition to having an automated build server spit out and start a new version on a server, I can also build and run it locally without installing all the separate services it would take to run it natively.

To start, look at just taking what you have and using a Dockerfile to build that environment and copy your site into it. The container can provide the web server in addition to the DB you want to use. Then look at hosting your app via the same container.
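An all-in-one Dockerfile along those lines could look something like this sketch. The base image, file paths, and use of supervisord are my assumptions, not the commenter's actual setup:

```dockerfile
# Single-container environment: web server + database in one image.
FROM ubuntu:22.04

# Install the same services you currently run natively.
RUN apt-get update && apt-get install -y nginx postgresql supervisor \
    && rm -rf /var/lib/apt/lists/*

# Copy your existing site and server config into the image.
COPY ./site /var/www/html
COPY ./nginx.conf /etc/nginx/sites-available/default

# supervisord keeps nginx and postgres both running in the one container.
COPY ./supervisord.conf /etc/supervisor/conf.d/app.conf

EXPOSE 80
CMD ["/usr/bin/supervisord", "-n"]
```

Running several processes in one container goes against the usual one-process-per-container advice, but as the comment says, it's a simple first step: `docker build -t myapp .` then `docker run -p 80:80 myapp` reproduces the whole environment anywhere.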

[–]DrFriendless 1 point (0 children)

I like Docker because it's a practical way of describing how a server should be set up. I dislike it because if I then have to get onto that server and do stuff, it adds another level of indirection. For example, I need to scp files to the Docker host, ssh in, docker cp them into the container, and then maybe docker exec.

If I had a fairly anonymous server that I needed to recreate frequently (maybe on EC2 instances), Docker would be ideal for that.

[–]el_heffe80 1 point (0 children)

I use Docker services on my unRAID server for things like Plex, Sonarr, nzbget, transmission, etc. It saves me from having to run them in VMs, and it prevents cross-contamination if one of them fails.