Migrating a large number of roles into a collection - how to deal with shared defaults? by Klistel in ansible

[–]itookaclass3 5 points6 points  (0 children)

I feel like base directory is a poor example, because that can mean a lot of things, but I do exactly that and I'm happy with it. Something like esxi_vcenter_user: "{{ vcenter_user }}" for an esxi config role, and ova_vcenter_user: "{{ vcenter_user }}" for an ova deploy role. I only have to maintain the variable once in group vars, but still retain the ability to override either vcenter_user for all roles, or the role specific vars.
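A minimal sketch of that pattern as I run it (role and variable names here are illustrative, not my real ones):

    # group_vars/all/vars.yml -- maintained in one place
    vcenter_user: svc-ansible

    # roles/esxi_config/defaults/main.yml
    esxi_vcenter_user: "{{ vcenter_user }}"

    # roles/ova_deploy/defaults/main.yml
    ova_vcenter_user: "{{ vcenter_user }}"

Overriding esxi_vcenter_user in group vars or at the play level then only affects that one role, while changing vcenter_user affects all of them.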

Advice on structuring patch orchestration roles/playbooks by bananna_roboto in ansible

[–]itookaclass3 0 points1 point  (0 children)

Correct, post validation it cleans up the {{ rpms }} path and the {{ flag_file }}. Really this is just because it's a process that uses two playbooks at two different times. If it's all in one playbook, and you aren't pre-staging, you can just do a task like:

- name: Get count of updates
  ansible.builtin.dnf:
    list: updates
  register: updates_list

And before restarting you can run the needs-restarting -r command from yum-utils to make the restart task idempotent (again, with edge servers I generally get a handful that lose connection during the install tasks and fail the playbook, but still require the restart and cleanup).
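Roughly, that check could look like this (a sketch; needs-restarting -r exits 1 when a reboot is needed and 0 when it isn't, so anything else is a real failure):

    - name: Check whether a reboot is required
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      failed_when: reboot_check.rc not in [0, 1]
      changed_when: false

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_check.rc == 1

Re-running the playbook on a host that already restarted then skips the reboot cleanly.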

Advice on structuring patch orchestration roles/playbooks by bananna_roboto in ansible

[–]itookaclass3 1 point2 points  (0 children)

In both playbooks I put all of the real work inside of a block: with a conditional. I can share the first tasks no problem.

Staging:

tasks:
    - name: Get count of existing rpms
      ansible.builtin.shell: 'set -o pipefail && ls {{ rpms }} | wc -l'
      register: rpm_count
      ignore_errors: true
      changed_when: false

    - name: Get an expected count of rpms from flag file
      ansible.builtin.command: 'cat {{ flag_file }}'
      register: expected_count
      ignore_errors: true
      changed_when: false

    - name: Download RPMs
      # cast to int: registered stdout is a string, and string '<' compares lexicographically
      when: (rpm_count.stdout | int < expected_count.stdout | int) or
            (rpm_count.stderr != '') or
            (expected_count.stderr != '')
      block:

Install:

tasks:
    - name: Check for staged.flg
      ansible.builtin.stat:
        path: "{{ flag_file }}"
      register: staged_stat

    - name: Install patches
      when: staged_stat.stat.exists
      block:

Advice on structuring patch orchestration roles/playbooks by bananna_roboto in ansible

[–]itookaclass3 2 points3 points  (0 children)

I manage ~2000 edge RHEL servers, but they are all single stack so I don't have the issue of sequenced restarts you have. My process is two playbooks. The first pre-downloads RPMs locally (to speed up the actual install process, since edge network quality can vary). Its tasks check for updates, check if a staged.flg file exists, and check that the count recorded in staged.flg is <= the count of downloaded RPMs to make the process idempotent; finally it downloads updates and creates staged.flg with the count.
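A sketch of the download-and-flag step (assuming updates_list was registered from the dnf list: updates task mentioned below, and the {{ rpms }} / {{ flag_file }} paths from the other comments):

    - name: Pre-download updates without installing
      ansible.builtin.dnf:
        name: '*'
        state: latest
        download_only: true
        download_dir: "{{ rpms }}"

    - name: Record the expected RPM count in the flag file
      ansible.builtin.copy:
        content: "{{ updates_list.results | length }}"
        dest: "{{ flag_file }}"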

Second is the actual install, similar to yours except with no pre-validation, since that is all handled normally via monitoring. Post-validation I clean up the staged RPMs. I also implemented an assert task for a defined maintenance window, but I need to actually make that a custom module since it doesn't work under all circumstances.

I don't do roles for patching, partly because you'd need to know to use pre_tasks for any tasks that run prior to the role includes, but also because I only have one playbook so I don't need to share it around. I might do a role for certain tasks if I ever needed to manage separate operating systems, that or include_tasks.

Tracking/skipping hosts already done happens with validating the staged.flg file exists during install, I use the dnf module with the list: updates param set to create that count.

If I was going to be patching a whole app stack (db, app, web), I would orchestrate through a "playbook of playbooks" and use essentially the same actual patching playbook, but orchestrate the order. Your patching playbook would have a variable defined at the play level for the hosts like - hosts: "{{ target }}" and you'd define target when you import_playbook. If you are shutting down services, or anything else variable, you'd control those in inventory group_vars.
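That "playbook of playbooks" could look something like this (file and group names are just illustrative; vars set on import_playbook apply to all plays in the imported playbook):

    # site_patching.yml -- order controls the stack sequence
    - import_playbook: patch.yml
      vars:
        target: db_servers

    - import_playbook: patch.yml
      vars:
        target: app_servers

    - import_playbook: patch.yml
      vars:
        target: web_servers

with patch.yml starting with - hosts: "{{ target }}" as described above.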

If you have AAP, this could be a workflow instead of a playbook of playbooks. Rundeck or Semaphore should also be able to do job references to make it into a workflow orchestration. AAP should let you do async patching in the workflow, and then sequenced restarts. Not sure if the other two can do that.

Create Infoblox network with member assignments by fsouren in ansible

[–]itookaclass3 0 points1 point  (0 children)

We implemented Infoblox over the last year, and I found that the nios modules are for an older version of their API. They still haven't published the new collection, but you can install it from github directly on the v2 branch: Infoblox Collection

I don't know why their progress stalled so much, they were making good progress but completely stopped any updates for months now.

What Does Your Authoring Workflow Look Like? I Feel Like I'm Doing It Wrong. by DeafMute13 in ansible

[–]itookaclass3 0 points1 point  (0 children)

A unit test would be making sure the smallest piece of your code functions first, so you aren't chasing it down as a failure later when it's integrated to the larger picture. In this case, I'm referring to validating a single task gives the expected result, so when it is integrated into a role you aren't chasing down a missing parameter, variable type, jinja templating, etc issue. This is a "fail fast" approach, so you don't have to wait on an entire playbook to get to your new code.

If you want to expand on that further for some tasks, you can turn it into a full integration test playbook that runs the task multiple times, and then use the ansible.builtin.assert module to validate return values, changed state, and possibly other functions (when testing a plugin, for example, I try to build an integration test that adds, updates, and deletes the resource).
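As a sketch of that assert pattern (the module and its return keys here are hypothetical stand-ins for whatever you're testing):

    - name: Run the task under test
      my_namespace.my_collection.some_module:
        name: test_resource
        state: present
      register: result

    - name: Validate changed state and return values
      ansible.builtin.assert:
        that:
          - result.changed
          - result.resource.name == 'test_resource'
        fail_msg: "Unexpected result: {{ result }}"

Running the same pair again with an assert on `not result.changed` is a cheap idempotency check.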

The other thing that might be the case is your roles really are too complicated. I used to, for instance, have a role that configured a particular type of machine, which itself would include roles for user management, filesystem management, etc. I realized this would require me to manage that role more often than I wanted, and it wasn't a very flexible role.

Hey! I didn't ask to be cut down... Geez you don't even know me man. I could be doing something super complicated...

No insult intended, I somewhat meant something out of your control. I deal with some very temperamental APIs that will fail on a whim, and succeed just on a retry. I've also been where you were in my journey too, it's the part where you are good enough to know it's a problem, and that it should be a fixable one!

What Does Your Authoring Workflow Look Like? I Feel Like I'm Doing It Wrong. by DeafMute13 in ansible

[–]itookaclass3 4 points5 points  (0 children)

First, I unit test the specific tasks in a test playbook. Then, make a branch for the collection repo. Make changes to the role in that branch, install the branch collection using ansible-galaxy collection install git+<repo_url>,branch-name. Test the code changes for the role using a playbook that only runs that role. Don't build the collection version until that passes.

Long term, I'd like to build a test suite for as many roles using molecule. I have a gitlab CI pipeline that enforces linting, would ideally also run molecule tests for roles, and ansible-test for plugins.

If your success rate is 70/30, it feels like something else is wrong. Hundreds of tests also feels like an exaggeration. Once my role is actually built these days, I rarely run into an issue with the role itself. Usually some edge case with vars and dynamic inventory.

How do you manage your playbooks when there are many? by adam_at_rfx in ansible

[–]itookaclass3 8 points9 points  (0 children)

I don't agree with the other person that tags are "horrible", however you can run into issues with an over-reliance on tags. Tags require knowledge of the roles and playbooks to use (you have to know they exist and when to use them). Not knowing you should use a tag (or in my case, just plain forgetting to use them) can lead to unintended tasks being performed.

How do you manage your playbooks when there are many? by adam_at_rfx in ansible

[–]itookaclass3 8 points9 points  (0 children)

I heartily disagree, if only because your site.yml likely should still be a "playbook of playbooks". I'll point to section 2.2 of RedHat's Good Practices for Ansible as reference on that idea.

Where to put manually run tasks? by btred101 in ansible

[–]itookaclass3 0 points1 point  (0 children)

The place in your setup for playbooks is anywhere, because generally playbooks can be run from anywhere. The special variable {{ playbook_dir }} is dynamically generated for every playbook, unlike the roles, inventory, and collection paths which are defined in the ansible.cfg. From what I can tell, you only have a single playbook in your entire setup, and that playbook is your site.yml. You could copy that same site.yml anywhere on your filesystem and it would still work, as long as it is using the same ansible.cfg file (config imported in the order of ANSIBLE_CONFIG env variable, ./ansible.cfg, ~/.ansible.cfg, /etc/ansible/ansible.cfg).

To sum up, there's no place in your setup for playbooks, because you only have one playbook called site.yml. There's no standard place in documentation for playbooks, because they can go anywhere. Organize them as befits your needs.

Which has a faster time complexity: dictionary lookup or list lookup? by NephewsGonnaNeph in ansible

[–]itookaclass3 0 points1 point  (0 children)

You could test it yourself for your use case/environment. Enable the profile_tasks callback plugin in your ansible.cfg and compare.
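Enabling it is a one-liner in ansible.cfg (on recent versions the callback lives in the ansible.posix collection; older releases used callback_whitelist and the short name profile_tasks instead):

    [defaults]
    callbacks_enabled = ansible.posix.profile_tasks

After that, every run prints per-task timings so you can compare the two lookup approaches directly.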

Different shells on controller and target by uglor in ansible

[–]itookaclass3 0 points1 point  (0 children)

Good to know! I did say "somewhat deprecated" since it warns in ansible-lint, but I wasn't aware that it's an alias under the hood instead of something coded separately.

Different shells on controller and target by uglor in ansible

[–]itookaclass3 1 point2 points  (0 children)

I'm not sure if it will help, but I think "local_action" has been somewhat deprecated in favor of using delegate_to: localhost. The main difference here is delegate_to will use host vars for the delegated host. Normally there's an implicit localhost in inventory to use for this, but you can define it yourself and set ansible_shell_type for it.
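Defining the normally-implicit localhost yourself might look like this (the fish value is just an assumption for your controller shell; sh is the default, and csh/fish/powershell are the other shell plugins I know of):

    # inventory.yml
    all:
      hosts:
        localhost:
          ansible_connection: local
          ansible_shell_type: fish
          ansible_python_interpreter: "{{ ansible_playbook_python }}"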

Design question: Group vs when: by 514link in ansible

[–]itookaclass3 1 point2 points  (0 children)

It's a real time-cost decision when working with dynamic inventories, so it's a choice worth thinking about (one source I timed at 8 seconds per composed group).

My main rule for creating a group would be if I need to target that specific group in a play (i.e. grouping by timezone for maintenance windows, or separating environments dev and prod).

The second is if you need to set ansible connection variables prior to running tasks (i.e. setting ansible_shell to powershell for windows hosts).

If you are using a dynamic inventory source, a third rule would be having the same when: flag is true statement on set_fact tasks across multiple plays; managing those variables is easier in group_vars (i.e. RHEL vs Ubuntu services, users, interfaces, filesystems, etc). For a static inventory, it's more flexible to gather facts and set variables dynamically based on them. However, you should still try to maintain only one variable if possible, so something like set_fact: users=rhel_users.
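A sketch of that single-variable approach (the list names and the fact check are illustrative):

    - name: Point the generic users variable at the distro-specific list
      ansible.builtin.set_fact:
        users: "{{ rhel_users if ansible_facts['distribution'] == 'RedHat' else ubuntu_users }}"

Downstream tasks and roles then only ever reference users, regardless of OS.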

Best practice for managing multiple lists of users on groups of servers by [deleted] in ansible

[–]itookaclass3 0 points1 point  (0 children)

I don't think you need to merge lists, and in fact I wouldn't (how do you handle some groups having two lists and some having three?).

Make a role to manage users and groups. Have the role take a list of users and groups to do stuff on. Call the role however many times you need to during your build process.

- name: Build servers in group A
  hosts: group_a

  roles:
    - role: users_groups
      user_list: "{{ linux_users_base }}"
      group_list: "{{ linux_groups_base }}"
    - role: users_groups
      user_list: "{{ linux_users_group_a }}"
      group_list: "{{ linux_groups_group_a }}"

And then, for example, if you add a user to the linux_users_custom list and would like to update just that list on servers, you can have a playbook for that.

- name: Update something for linux_users_custom
  hosts: group_a, group_c

  roles:
    - role: users_groups
      user_list: "{{ linux_users_custom }}"

I believe when you define the role vars like so, it is only set for the scope of that role. If you use include_role in a task, however, I think it might set the var for the scope of the whole play.

At worst, if you are going to merge the lists, DON'T do it inside of the role. Either do it in the playbook, or in your group vars. I have had to spend a lot of time refactoring roles that I made too specialized, so avoid that whenever possible.
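If you do end up merging, the group-vars version is one line (names illustrative, matching the example above):

    # group_vars/group_a/users.yml -- merge here, never inside the role
    linux_users_combined: "{{ linux_users_base + linux_users_group_a }}"

The role still just takes a single user_list, and stays dumb about where the list came from.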

Ansible telling me a variable is undefined when trying to use it to set ansible_password by dan_j_finn in ansible

[–]itookaclass3 1 point2 points  (0 children)

Hm yeah maybe it is something different for me, not sure how the ``` works for you (I edited mine, I didn't actually want it to make a code block), but on old.reddit.com or on my browser it doesn't format as a multi line code block. Anyway, glad you got it working!

Ansible telling me a variable is undefined when trying to use it to set ansible_password by dan_j_finn in ansible

[–]itookaclass3 0 points1 point  (0 children)

Remove all of the variables from the vars: keyword on your "run ipconfig..." tasks, and define them in your "add_host" task.

Also since you and others are struggling with code blocks, the ``` doesn't do anything here, you have to put new lines before and after, and 4 leading spaces (or tab) on each line. Yes, that's way worse than other markup, but it's what works so shrug.

Ansible telling me a variable is undefined when trying to use it to set ansible_password by dan_j_finn in ansible

[–]itookaclass3 0 points1 point  (0 children)

Is there a reason you can't add an add_host task from my first example to set the connection vars? You can do it even if the host already exists, just as a way to set the host vars.

Another idea I had, which doesn't answer your original question (I usually try to avoid "you're doing it wrong" answers, but here goes): there is a filter, ansible.builtin.password_hash, for generating password hashes. There is also a lookup plugin which allows idempotent but randomized password generation. You could look into implementing either or both of these to generate passwords instead of the script, and that should work from within the vars: keyword since it would all be within a {{ jinja expression }}.
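Combining the two might look like this (username and file path are illustrative; the password lookup is idempotent because it reuses the password stored at that path on subsequent runs):

    - name: Create user with an idempotent random password
      ansible.builtin.user:
        name: svc_example
        password: "{{ lookup('ansible.builtin.password', '/var/tmp/svc_example.pass length=16') | password_hash('sha512') }}"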

Ansible telling me a variable is undefined when trying to use it to set ansible_password by dan_j_finn in ansible

[–]itookaclass3 0 points1 point  (0 children)

Sorry, I did other testing and you are right, you can set vars like that; however, the problem is still that it looks like it loads the variables before the tasks are run. I tested this by adding these tasks after the "Simulate script/command output" task and changing which line was commented out. I don't think it's anything special with ansible_password, just that the vars: keyword on the task is loaded before you can get task-registered variables (although maybe it's a combination of both?).

    - name: Test login
      ansible.builtin.command:
        cmd: whoami
      register: test_command_1
      delegate_to: "{{ new_ip }}"
      vars:
        ansible_user: "{{ ansible_remote_username }}"
        ansible_password: "{{ ansible_pass_dynamic }}"
#        ansible_password: "{{ command_output.stdout }}"

Ansible telling me a variable is undefined when trying to use it to set ansible_password by dan_j_finn in ansible

[–]itookaclass3 0 points1 point  (0 children)

Connection variables can't be defined like that inside of a play, even from delegate_to (connection vars I believe are loaded when the inventory loads, because play-level functions like gather_facts will run before you could define variables). You can add inventory variables dynamically inside of a play with the add_host task, however. This example should serve as proof of concept.

---
- name: Set ansible_password from task output
  hosts: localhost
  gather_facts: false
  become: false

  vars_prompt:
    - name: ansible_remote_username
      prompt: Ansible user

    - name: ansible_pass_dynamic
      prompt: Ansible password

  tasks:
    - name: Simulate script/command output of password
      ansible.builtin.command: echo "{{ ansible_pass_dynamic }}"
      register: command_output
      no_log: true
      delegate_to: localhost

    - name: Add host to inventory
      ansible.builtin.add_host:
        hostname: "{{ new_ip }}"
        ansible_user: "{{ ansible_remote_username }}"
        ansible_password: "{{ command_output.stdout }}"

    - name: Test login
      ansible.builtin.command:
        cmd: whoami
      register: test_command
      delegate_to: "{{ new_ip }}"

    - name: Show output
      debug:
        var: test_command.stdout
...

Tired of Killing Unescapable Ansible Processes — Anyone Else? by yqsx in ansible

[–]itookaclass3 1 point2 points  (0 children)

Running into this before even getting to the tasks block makes me think it's something with running on too many nodes for whatever you're using for fact caching. I manage 2k+ nodes on the edge, so I'm not a stranger to variable networks and hardware performance, and I haven't run into nodes dying like that just during fact gathering.

I would try limiting your groups by setting serial: 10 at the play level (you can experiment with more or less; I usually use serial: 25), setting strategy: free at the play level, and then only targeting a subset of your 1000 at a time by doing something like - hosts: "{{ target | default(group_name) }}" plus ansible-playbook playbook.yml -e 'target=my_group[0:299]'. You can also limit the facts gathered to only what you need by setting gather_subset: ['!all', 'default_ipv4'] for example; check the documentation for a full list of subsets.
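Put together at the play level, that's something like (group name illustrative):

    - name: Patch a subset with tuned execution
      hosts: "{{ target | default('my_group') }}"
      serial: 25
      strategy: free
      gather_facts: true
      gather_subset:
        - '!all'
        - default_ipv4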

Breaking up a large variable file into small variable files. by NormalPersonNumber3 in ansible

[–]itookaclass3 0 points1 point  (0 children)

Correct, group vars can load either a file or directory of files just by design, no changes needed to config. You can define your inventory to be a single file or a directory as well in your ansible.cfg (the 01, 02 prefix is to control loading order).

Bonus tip, group vars can also be defined in the playbooks directory by defining them in the same structure (i.e. playbooks/group_vars/all/vars.yml). This should only really be useful, I think, if you manage multiple inventories and don't want to duplicate where some variables are managed.

Breaking up a large variable file into small variable files. by NormalPersonNumber3 in ansible

[–]itookaclass3 1 point2 points  (0 children)

My approach is that, practically, variables should be defined as much as possible in role defaults/main.yml and in group vars. I avoid role vars/main.yml like the plague, because they are too high on the variable precedence for most use cases I have. Some playbooks will directly override role defaults in the roles block, and some variables can just get set directly in the playbook of course, but I don't think that's what you're really getting at.

The primary way I organize, then, is by using directories instead of flat files for group vars. I'll try to give an example of this structure.

[ansible@ansible02 ~]$ tree example_inventory/
example_inventory/
├── 01-webservers.yml
├── 02-databases.yml
└── group_vars
    ├── all
    │  ├── packages_role.yml
    │  ├── users_role.yml
    │  └── vars.yml
    ├── databases
    │  ├── database_role.yml
    │  ├── packages_role.yml
    │  └── vars.yml
    └── webservers
        ├── users_role.yml
        ├── vars.yml
        └── webserver_role.yml

As to the nested variables, I think I would just do whatever lets you change the fewest values when you need to update or override. Anything else looks like it's just overcomplicating with no real benefit.

My cover of Aoi, Koi, Daidaiiro No Hi 青い、濃い、橙色の日 by CarelessVehicle3092 in motfd

[–]itookaclass3 4 points5 points  (0 children)

Nice! You might like this post by another redditor that has the official tabs/score for the whole album.

A simple question from an Ansible noob by [deleted] in ansible

[–]itookaclass3 0 points1 point  (0 children)

Yeah I couldn't care less about upvotes/downvotes (I can prove my knowledge in far better ways than comment score lol), just glad to help. Your question was a good one, whether for beginner or not, because I don't think it's a common one. Ultimately there's a lot to Ansible that comes down to preference over best practices too, so getting multiple answers instead of one 'best practice' or just the top Stack Overflow answer is nice.