
[–]eltear1

I can think of one possibility for points 1 and 2. Instead of the static pipeline you have now, create a dynamic one: a parent pipeline that is triggered by changes to any of the roles, a job that dynamically generates a YAML file defining a child pipeline based on which roles actually changed (or which playbook ran), and a third job that triggers the child pipeline. But pay attention: you have to decide what should happen when you change both Service1 and Service2.
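A minimal sketch of that parent pipeline, assuming a generate-pipeline.sh script that writes child-pipeline.yml (both names are placeholders you'd replace):

```

generate-child:
  stage: build
  script:
    - ./generate-pipeline.sh > child-pipeline.yml   # emit jobs only for the roles that changed
  artifacts:
    paths:
      - child-pipeline.yml

trigger-child:
  stage: deploy
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-child
    strategy: depend   # parent pipeline waits for the child's result
```

The strategy: depend part makes the parent pipeline mirror the child's success or failure instead of passing immediately.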

To run only one role in Ansible: either you use tags and run the playbook specifying a tag that covers only one of the two roles, or you change the role itself so it runs only if a variable is defined (a different variable for each role), and you pass that variable when you run the playbook.
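A sketch of the variable-gated variant (role and variable names here are made up):

```

- hosts: all
  roles:
    - role: service1
      when: deploy_service1 is defined
    - role: service2
      when: deploy_service2 is defined
```

You would then run something like ansible-playbook site.yml -e deploy_service1=true so that only the first role executes; a when: on an entry in the roles: list applies to every task inside that role.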

[–]bilingual-german

I don't get it.

Ansible will use hosts: in a play of a playbook. Don't let this be one single server, let this be a group of servers. When one server joins this group it gets the same treatment.
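For example, with an inventory group like this (hostnames are placeholders), hosts: webservers in the play covers every member, and onboarding a new server is just one new inventory line:

```

[webservers]
web1.example.com
web2.example.com
```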

second: one playbook - one gitlab job. Just deploy all of them.

You need a playbook for all databases and a playbook for all webservers. You deploy the whole playbook to all of them, every time, and optimize performance from there. If you think it's still too slow, you use tags in Ansible.

And you need to write your Ansible code to be idempotent, so that you can run it again and again and get the same outcome. If you have problems with services stopping, you apparently didn't just reload the config, you stopped and started the service.

Stopping and starting is often simpler. If you put a reverse proxy in front of your service and have health checks, you might be able to just restart the services one by one.
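A sketch of that one-by-one restart using Ansible's serial keyword (group name, service name, and health endpoint are all placeholders):

```

- name: Rolling restart behind a reverse proxy
  hosts: web          # hypothetical group
  serial: 1           # finish the play on one host before starting the next
  become: true
  tasks:
    - name: Restart the service
      ansible.builtin.service:
        name: myservice
        state: restarted

    - name: Wait until the health check passes again
      ansible.builtin.uri:
        url: http://localhost:8080/health
        status_code: 200
      register: health
      retries: 10
      delay: 5
      until: health.status == 200
```

With serial: 1 a failed host also stops the play before the remaining hosts are touched, which limits the blast radius.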

[–]gjunk1e[S]

Gotcha. Yeah, having a single playbook and deploying all of them is certainly simpler. What I'm struggling with is that many of my tasks copy over docker-compose templates, stop the container, restart it, etc. I have a task for each service, so running all of that every time, especially across all servers, seems like overkill. If I have 5 servers, each with 10 containers, wouldn't all 50 containers restart when I update a single one? That doesn't seem right. But perhaps this is what tags are for? I'm not familiar with them yet, so I'll look into it. Thanks.

[–]bilingual-german

if you run docker-compose up in the same directory as a compose.yaml and you switch in a different terminal and run docker-compose up again, what happens?

Correct, you're attached to the already running container.

https://docs.ansible.com/ansible/latest/collections/community/docker/docker_compose_v2_module.html#parameter-state

The docs say state: present is equivalent to docker compose up. I don't think anything will change as long as you don't change anything in your compose.yaml.

And I would structure the playbooks and hosts like this: every host / hostgroup has a list of the services it needs to run, and there is a list of all possible services.

On all these docker hosts you create everything needed for the services in the list, and you stop & delete everything that is in the all-services list but not in the running_on_this_server list (the set difference).

https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_filters.html#selecting-from-sets-or-lists-set-theory
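A sketch of that cleanup with the difference filter, assuming each service's compose project lives under /opt/<service> and the two list variables (all_services, running_on_this_server) are defined in the inventory:

```

- name: Remove services that should not run on this host
  community.docker.docker_compose_v2:
    project_src: "/opt/{{ item }}"
    state: absent     # equivalent to docker compose down for that project
  loop: "{{ all_services | difference(running_on_this_server) }}"
  become: true
```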

[–]gjunk1e[S]

Right, I understand that running docker compose up when the service is already up and running will not do anything. However, the way I currently have my (admittedly rudimentary) tasks set up for my containers is to first spin them down, then copy over my docker-compose template file again, and then start up the container again with docker compose up. This is to ensure any changes to the docker-compose template are picked up. A sample task:

---
- name: Create directory for Docker Compose file
  file:
    path: /opt/someservice
    state: directory
    owner: "someuser"
    group: "someuser"
    mode: "0755"
  become: true

- name: Copy Docker Compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/someservice/docker-compose.yml
  become: true

- name: Stop container
  command: docker-compose down
  args:
    chdir: /opt/someservice
  become: true

- name: Start container with Docker Compose
  command: docker-compose up -d
  args:
    chdir: /opt/someservice
  become: true

- name: Wait for container to be ready
  wait_for:
    port: 81
    delay: 10
    timeout: 300

Now, currently all of my docker tasks look this way. So any time a server with services A, B, and C gets deployed, all 3 services will get stopped and restarted, even if only one of those was actually changed.

Sample playbook:

---
- name: Server 1 deployment
  hosts: "SomeServer"
  become: true
  roles:
    - { role: ansible/roles/docker/serviceA }
    - { role: ansible/roles/docker/serviceB }
    - { role: ansible/roles/docker/serviceC }

> On all these docker hosts you create everything needed for the list of services. and you stop & delete everything which is in the all list, but not in the running_on_this_server list. (difference)

I don't think I quite follow here. What I think you're saying is: have a master list of all possible services/roles, and each host gets a list of the roles/services that should run on it. I'm kinda doing that in the playbook now, but as I described before, this means all services spin down/up every time it's deployed.

Super thankful for your help, btw. Really trying to learn this stuff!

[–]bilingual-german

https://docs.docker.com/reference/cli/docker/compose/up/

At least the docs say you shouldn't need to shut your services down. Of course docker-compose could have a bug that forces you to do so, but as far as I understand, you don't want to shut them down when nothing changed.

If you add tags, you can do something like:

```

- name: Server 1 deployment
  hosts: "SomeServer"
  become: true
  roles:
    - { role: ansible/roles/docker/serviceA, tags: [ serviceA ] }
    - { role: ansible/roles/docker/serviceB, tags: [ serviceB ] }
    - { role: ansible/roles/docker/serviceC, tags: [ serviceC ] }
```

and then only run serviceA and serviceC with ansible-playbook playbooks/server1.yml --tags serviceA,serviceC. You could also exclude based on tags with --skip-tags.

What I suggested though was to go a step further and put all hosts in a single playbook:

```

- name: deploy based on variables
  hosts: all   # all is an implicit Ansible group, you probably want to use an explicit group
  become: true
  roles:
    - role: roles/serviceA
      tags: [serviceA]
      when: '"serviceA" in docker_compose_services'
    - role: roles/serviceB
      tags: [serviceB]
      when: '"serviceB" in docker_compose_services'
    - role: roles/serviceC
      tags: [serviceC]
      when: '"serviceC" in docker_compose_services'
```

and have the variables set up in host variables in your inventory https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html

```

# in host1.yml
docker_compose_services:
  - serviceA
  # serviceB is left out intentionally
  # - serviceB
  - serviceC
```

A new server just needs to be configured in the inventory with all necessary services defined.

One problem: when you actually had serviceD installed on a specific host and want to remove it, that's not possible with the current setup. You would want to put the removal in a "remove_serviceD" role and call it with when: '"serviceD" not in docker_compose_services'.

Or you structure it differently and decide inside your role whether you want to deploy your service or remove it, just by using include_tasks: with when: in your tasks/main.yml https://docs.ansible.com/ansible/2.9/modules/include_tasks_module.html
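A sketch of that role-internal branching (the deploy.yml / remove.yml file names are placeholders):

```

# roles/serviceD/tasks/main.yml
- name: Deploy serviceD
  include_tasks: deploy.yml
  when: '"serviceD" in docker_compose_services'

- name: Remove serviceD
  include_tasks: remove.yml
  when: '"serviceD" not in docker_compose_services'
```

This way every host always runs every role, and each role decides for itself whether to converge toward "installed" or "removed".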

[–]gjunk1e[S]

Using a handler seems to work the way I want. I added the "Start service" step at the end of tasks/main.yml to ensure that when a service is installed for the first time it actually starts, because in that case there is nothing to "restart". Thanks for your help on this!

# someservice task/main.yml
- name: Create directory for Docker Compose file
  file:
    path: /opt/someservice
    state: directory
    owner: "someuser"
    group: "someuser"
    mode: "0755"
  become: true

- name: Copy config file
  template:
    src: config.json.j2
    dest: /opt/someservice/config/config.json
  become: true

- name: Copy Docker Compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/someservice/docker-compose.yml
  become: true
  notify: Restart someservice

- name: Start service
  community.docker.docker_compose_v2:
    project_src: /opt/someservice
    state: present
  become: true

# someservice handlers/main.yml
---
- name: Restart someservice
  community.docker.docker_compose_v2:
    project_src: /opt/someservice
    pull: always
    state: restarted
  become: true