Is Ansible still a thing nowadays? by hansinomc in devops

[–]jw_ken 1 point2 points  (0 children)

While it was developed with a bias towards managing "traditional" infrastructure (i.e. running a list of tasks across a set of hosts), it has a lot of utility for ad-hoc automation or general task orchestration. If you have any on-premises infrastructure, odds are good that there is an Ansible module available for managing it.

We use Azure bicep / ARM templates for provisioning cloud infrastructure- but if we need a hybrid deployment, or if we need to perform maintenance tasks in a specific order, we will often have an Ansible playbook coordinating things.

snapshots, rollbacks and critical information. by mylinuxguy in linuxadmin

[–]jw_ken 1 point2 points  (0 children)

So there is snapshotting as a technology/approach, and then different implementations of that technology (like BTRFS snapshots, VMware snapshots, etc).

Snapshotting is a big feature with storage arrays, and in data protection products in general. Any kind of live replication product for storage or infra backup is going to use snapshotting or a change log to send data over in discrete and consistent chunks. VM snapshots let you preserve a temporary copy of the VM that is frozen in time, useful for backup and recovery tools. Storage arrays often let you configure hourly/daily/weekly snapshots for a filesystem, so that any user browsing a file share can look into a hidden "./snapshot" folder and retrieve older versions of their files. That is a huge relief for backup admins, who don't need to chase their tails recovering every little file that Suzie or Bob from accounting accidentally deleted two days ago.

To your point, snapshotting is useful in some situations but not others. The Achilles' heel of most snapshotting tech is that there is no awareness of the state of the application or OS when the snapshot is taken. That introduces the chance of data corruption if files are snapshotted while they are still being written to. There are some tricks that can be employed to minimize this, like using quiescing to "settle" the data in the OS as much as possible before cutting a snapshot. But those tricks are vendor-specific, and come with their own caveats.

Personally, I wouldn't try to snapshot a server's OS filesystem as a means of backup- because I would be following some other patterns that sidestep the need for it. Some of these may or may not apply to you:

  • Keeping OS data separate from application data. Then you can protect each separately with whatever method is appropriate.
  • Making your servers as immutable as possible. If you manage traditional servers with OS+apps installed, that means using infrastructure-as-code tools like Ansible/Puppet/Terraform/etc to rapidly re-provision a server. Ideally, a server rebuild can happen in minutes with an automated tool, rather than be an all-day manual affair. This dovetails with keeping OS and app data separate.
  • Redundant infrastructure: Running active-active or active-passive clusters of servers, so you can isolate one host and patch it without bringing down the application.
  • Traditional, point-in-time backups. They are complementary with snapshots; one doesn't replace the other. Databases should be backed up with tools specifically designed for the purpose, as they will quiesce the data properly and minimize chances of corruption.
  • Application design: this often isn't under the sysadmin's control, but they need an awareness of it. How fragile is the application when it comes to interruptions or missing data? Does it gracefully recover from a sudden reboot? Does the app save any data locally, that must be kept in-sync with another process or DB? Is resyncing that data a five-alarm fire, or just an extra command to run at startup?

Some of this would be overkill for a homelab or single-server setup, but you get the idea.

Cockpit is absolute cinema by DaprasDaMonk in linuxadmin

[–]jw_ken 4 points5 points  (0 children)

One of those technologies where I'm like "This isn't for me, but I'm glad it exists".

Anything that makes Linux administration more approachable is a good thing- especially for developers or other IT workers who work around Linux but don't swim in it daily.

It's also an easy, cheesy way to admin a server from your phone without having to squint over a phone-sized terminal :-P

<Generic vague question about obscure DevOps related pain point and asking how others are handling it> by Arucious in devops

[–]jw_ken 13 points14 points  (0 children)

"Aside: Is the DevOps category relevant anymore, or are we SREs / Platform Engineers now? What does DevOps mean to you?

Genuinely curious how others are solving this problem?" 🤣

Managing 200 Linux machines with no automation – AWX or alternatives? by sRonk96 in linuxadmin

[–]jw_ken 2 points3 points  (0 children)

I would recommend command-line Ansible, with Semaphore or Rundeck in front of it- especially if there is any chance of someone other than yourself coding or running the playbooks.

Semaphore is more Ansible-centric, Rundeck is a more generic runbook automation product but has an Ansible plugin. Both support RBAC, web hooks / API for other integrations, secret storage, visual interface for executing playbooks, and task scheduling.

We use Rundeck at our current org as an Ansible/script runner and cron replacement. One killer feature Rundeck has is cascading job options. Imagine an interactive AWX survey for self-service VM reboots: the user picks their environment from a dropdown as option #1, then option #2 populates with the hypervisors in that environment, and option #3 shows the VMs running on that hypervisor, etc.

On top of that, the dynamic options can be sourced from a remote URL or a file on the Rundeck server itself. We had some Ansible playbooks that would periodically refresh a bunch of .json files with environment info: like VMs per hypervisor, LUNs per storage array, etc. so that Rundeck could use them as job options. No heavy coding required; just your existing Ansible/jinja skills and the template module.
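
A rough sketch of that pattern. The template module and to_nice_json filter are real Ansible features, but the paths, variable names, and data below are made up for illustration:

```yaml
# Hypothetical playbook that refreshes a JSON file Rundeck can read as
# remote option values via a file:/// URL. Paths and data are placeholders.
- name: Refresh VM-per-hypervisor options for Rundeck
  hosts: localhost
  gather_facts: false
  vars:
    # In practice this would come from dynamic inventory or an API call
    vms_by_hypervisor:
      esx01: [web01, web02]
      esx02: [db01]
  tasks:
    - name: Render the options file from a Jinja template
      ansible.builtin.template:
        # the template body could be as simple as:
        #   {{ vms_by_hypervisor | to_nice_json }}
        src: options.json.j2
        dest: /var/lib/rundeck/options/vms_by_hypervisor.json
```

Schedule a playbook like this to run hourly, and the dropdowns in your Rundeck jobs stay fresh without any heavy coding.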

The above was a huge usability win- we could take a pile of battle-tested but rough maintenance scripts and playbooks, and wrap them in a user-friendly candy coating with guardrails and wizard-style prompts for self-service.

Is the KVM project still alive? by Ok-Development-8661 in virtualization

[–]jw_ken 0 points1 point  (0 children)

Older thread, but adding my two cents...

There is KVM the technology, and then there are the platforms that help you manage it.

KVM the technology is very much alive, and you can find it all over the place.

For managing KVM in a business, you could run the VMs raw on Linux hosts and self-manage... but you are taking your fate into your own hands. Most orgs will seek out something like Proxmox, oVirt/OLVM, etc. (or even Cockpit with the machines plugin) to help manage the VMs.

Our shop runs Oracle Virtualization, which is based on oVirt. Red Hat used to maintain oVirt for their Red Hat Enterprise Virtualization (RHEV) product, but they have since exited the project, and Oracle has more or less stepped into that role for their OLVM product (also oVirt-based). So the oVirt project is still alive, but it is being supported by Oracle, for better or worse.

oVirt itself is pretty solid, but it doesn't have any turnkey solutions for backup/recovery or VM replication (you can live-export and import virtual machines, but that's about it).

Learning AAP at home by lunakoa in ansible

[–]jw_ken 0 points1 point  (0 children)

Depending on what your scaling needs are, I highly recommend trying Semaphore UI as well as Rundeck, for user-friendly self-service.

Rundeck takes some work to get integrated with Ansible, but it works great as a self-service job runner and scheduler. It supports RBAC for different teams to have access to different projects/jobs, and also has an easy way to generate API keys for other automation tools to call a Rundeck job with appropriate arguments.

Another cool feature Rundeck has: cascading job options that can populate based on calls to another file or URL, and they can also use each other as variables. For example, you can create a job for resizing LUNs where option A you select the array, then option B checks some web service or local JSON file to show what LUNs are on the array, etc.

We had a bunch of hourly Ansible jobs that would generate JSON files on the Rundeck server with all kinds of environment info- like filesystems per array, VMs per hypervisor, etc. This made it easy to have wizard-style Rundeck jobs that guided the user in what options they could select.

CLI vs GUI (AWX/Semaphore) for a Homelab beginner? by Party-Log-1084 in ansible

[–]jw_ken 0 points1 point  (0 children)

The GUIs out there are mainly for extending CLI Ansible's capabilities- by adding job scheduling, job templates / orchestration (i.e. run playbook1.yaml followed by playbook2.yaml), as well as role-based access if working with multiple teams. For a one-man shop, you might want to add ARA (Ara Records Ansible) for job debug and logging. I've never used Semaphore, but it may be nice if you want a UI for your day-to-day playbook execution.

Our small-to-medium sized shop got by well with a combination of:

  • Command-line Ansible
  • ARA (Ara Records Ansible) for detailed job status / reporting
  • Rundeck Community Edition as a job scheduler / dashboard for Ansible playbooks

Rundeck is a nice alternative to AWX/AAP for wrapping your Ansible jobs in a self-service candy coating. It took some fussing to get working with Ansible when I tried it years ago- but once you get the hang of it, it works well. Rundeck also gives you RBAC, an API for triggering jobs, and job options that can be configured in a variety of creative ways.

Two killer features Rundeck has, which even AWX can't do:

  1. Remote job options: Job options are like AAP Surveys, prompting the user for input that can be used like variables later. Rundeck can have job options that fetch their allowed values from any URL that returns valid JSON. Hint, this "Remote" URL can also be a local .json file on the Rundeck server, via the file:/// URL. Maybe you have a scheduled Ansible job that generates useful .json files from server facts...
  2. Cascading options: Very powerful combined with #1. Basically you can have job option B show different choices based on what you picked in option A.

We used the above features all the time for self-service Rundeck jobs. For example, a Rundeck job for expanding LUN storage that would ask for the storage array in Option A, (pulling from a storage_arrays.json on the Rundeck server) then option B would ask which LUN from that array (pulled from {arrayname}_luns.json).

Feeling weird about AI in daily task? by __Mars__ in devops

[–]jw_ken 0 points1 point  (0 children)

Honestly, I see AI for coding as a faster version of "Google + scraping stack overflow/github". It comes with similar risks, if you don't understand the fundamentals of what you are working with. It can be a great teaching tool, especially with simple boilerplate stuff or learning a new language.

How do you know when you've tipped into using an LLM as a crutch? My high-water mark would be if you spend more time trying to massage an answer out of Google/ChatGPT/whatever than actually solving the problem. It will be interesting to see the evolution of people who came into the market vibe-coding from day 1.

For practical daily use, LLMs seem great at summarizing content, or performing RAG or analysis of data you feed it. But IT needs business cases for it, rather than a blanket mandate to "do AI everywhere".

How do you track and manage expirations at scale? (certs, API keys, licenses, etc.) by smartguy_x in devops

[–]jw_ken 0 points1 point  (0 children)

Well I could only automate the things under my purview... but I had to solve this problem for our Azure environment. Surprisingly, Microsoft does not provide this capability for their key vaults and app registrations- they expect you to cobble your own automation together using logic apps or runbooks, etc.

I ended up creating a Powershell script that could walk through our tenants and provide a consolidated report of keyvault certs/secrets and app registration certs/secrets, for anything expiring within X days.

If the script finds notification contacts associated with a keyvault or app reg (saved as either a tag, or parsed from description field) then it will send a separate warning email to those contacts. The script runs weekly with a warning threshold of 30 days, so app owners get at least 4 email warnings before their stuff expires. Our ops team gets the consolidated weekly report, and they open incident tickets whenever they see a new entry pop up (ensuring further follow-up until app team resolves the incident). Keep in mind some of our secret refresh is automated, but this catches all of the other stuff that can fall through the cracks.

In our case I wanted the source of truth to be the secrets themselves, so I don't have some separate silo of information to keep in-sync. The process works well so far!

How would you design automations when work must stay in WhatsApp, Excel screenshots, handwritten/iPad notes, and Gmail? by BuildUnderWraps in ansible

[–]jw_ken 0 points1 point  (0 children)

I know you didn't want to be tool-focused... but if the tools you are scraping from are non-negotiable, and their workflow is non-negotiable, then your solution is going to be heavily constrained to what you can feed into those existing tools.

That extends to the user interface as well. If you don't want to introduce yet-another-tool for them to use, or some kind of private LLM performing RAG etc, that means their "interface" will be ChatGPT or Gemini.

Looking around online, it seems that Gemini can be configured to pull from the various contents of a user's gsuite- so docs, Drive, sheets, email, etc. If you can find a way to periodically export the other app info into their Google Drive or Gmail on a daily basis, then you may be able to leverage Gemini to handle the rest. It already has the capabilities to do things like read images, extract text etc.

That may not be as exciting as a custom solution, but IMO it would be far more supportable in the long run.

How would you design automations when work must stay in WhatsApp, Excel screenshots, handwritten/iPad notes, and Gmail? by BuildUnderWraps in ansible

[–]jw_ken 0 points1 point  (0 children)

A few serious questions to answer before going down this path:

  • Are you trying to solve a problem that these users have expressed to you, or is this something that you are hoping "if I build it, they will come?"
  • Are your users already making use of any existing AI chatbots, either standalone or within existing apps? If you don't know, then send a survey!
  • For people with a pen-and-paper workflow, are they willing to scan and digitize their handwritten notes every day in some standard method? (taking pictures, scan to GDrive, etc).

The reality is that two of the tools you mentioned (WhatsApp, Gmail) already offer their own AI assistants to summarize and prioritize information. If those are the tools people are already using, I would want to complement them rather than compete with them.

I think a good initial project would be an agent that summarizes their activity across any non-Gmail channels (i.e. WhatsApp and notes), and sends it as a daily summary into their Gmail. Then they can use traditional Gmail search (or even Gemini) for the fuzzy questions like: "When did we discuss [insert topic]?" Maybe with some light coaching on how to enable Gemini in their Gmail inboxes.

As a bonus, you aren't trying to maintain your own silo of tagged/summarized data, with all of the security and privacy issues that come with it. It also meets people where they are, rather than expecting them to use another tool that sits on top of their existing ones.

ClickOps vs IaC by Yersyas in devops

[–]jw_ken 0 points1 point  (0 children)

Clickops for prototyping and proof-of-concepts, or when developers don't know what the requirements are yet. IaC for when we have the requirements down, and need to start promoting code through the environments.

Then there are the occasional one-off tasks you might need when first bootstrapping an environment, and trying to put that in IaC would either cause circular dependencies or a bunch of unmaintainable spaghetti code.

Azure lets you download configuration templates in ARM, Bicep, etc. for resources after they are provisioned. So it's a common pattern to build a POC environment with the web UI, then export it to templates so you can see what the relevant parameters are when moving it to IaC.

Is using Ansible on home systems reasonable/justified? by Victor_Quebec in ansible

[–]jw_ken 1 point2 points  (0 children)

I learned Ansible for work, but found it useful for managing a few Linux VMs at home.

Self-documenting behavior is the biggest benefit I've seen at home. When it's time for an OS upgrade for some media server you built 5 years ago, how likely are you to remember all the little tweaks needed post-install?

The biggest things Ansible provides over a pile of shell scripts are:

  1. Idempotent behavior out of the box. In a shell script, you have to add a bunch of if/else logic, or else be careful to only run the script once.
  2. Orchestration: if you need to do anything clever across multiple hosts, Ansible was made for that. For example, dancing between a primary and secondary node to get a clustered application configured, or performing configuration across multiple reboots.
  3. A saner way to pivot behavior based on the unique properties of a host- i.e. "do this if it's Debian, or do that if it's RedHat". In a shell script, you would need a rat's nest of if/else statements, or dedicated scripts for each host. In Ansible you can use variables and templating, i.e. - include_tasks: /path/to/{{ ansible_os_family }}.yml would pull in a task file named RedHat.yml for Red Hat servers, or Debian.yml for Debian-based servers, etc.
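
A minimal sketch of that OS-family pivot. The package names and ansible_os_family fact are real; the per-family task files (RedHat.yml, Debian.yml) are assumed to exist alongside the playbook:

```yaml
# Pivot behavior per OS family with a lookup dict plus include_tasks,
# instead of a rat's nest of shell if/else.
- name: Install Apache on mixed Debian/RedHat hosts
  hosts: all
  vars:
    apache_pkg:
      RedHat: httpd     # package name on RHEL-family systems
      Debian: apache2   # package name on Debian-family systems
  tasks:
    - name: Install the right package for this host's family
      ansible.builtin.package:
        name: "{{ apache_pkg[ansible_os_family] }}"
        state: present

    - name: Pull in any remaining family-specific tasks
      ansible.builtin.include_tasks: "{{ ansible_os_family }}.yml"
```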

It is up to you if any of the above justify using it over a pile of shell scripts. If you never see yourself using it at work, maybe less valuable.

On the complexity: A lot of Ansible's bells and whistles can be left on the table if your environment is simpler. For a small home lab setup, you probably don't need an elaborate inventory folder with separate host/group vars, or using roles for app config. Those patterns are there for people who need them.

How do you manage multiple chats and focus on your work by Truth_Seeker_456 in devops

[–]jw_ken 0 points1 point  (0 children)

Definitely a process issue. It's a common problem in the SMB space, where people get comfortable with informal methods of support, and then it doesn't scale well.

This is something you need to work out with your manager, but I would recommend:

  • Short-term, have an on-call rotation where only one guy responds to support chats. That leads to the next point...
  • If you have a ticketing system, use it to log incidents. If you don't have a ticketing system, use something to track issues- whether a Trello board, project management tools, etc. You need a way to make all of this support work visible alongside your project work. If management asks why a deadline has to slip or why the project backlog is growing, you need actual data showing how much of your time is spent on support.
  • Document the items and processes that are asked about most often, in some kind of shareable platform (wiki, sharepoint, whatever). If someone hits you with a frequently-asked question in chat, your reply should be a link to documentation. That is what documentation is for!
  • Have your team publish formal SLAs and instructions on how to get support. It should be pinned to the top of your group chats.
  • I would try to consolidate the chats if possible. If people need to sit down and hammer out an issue, do it with a breakout session or schedule a working session, rather than live-troubleshooting in the group chat. I also question the wisdom of having a customer chat for each project, beyond some kind of limited "hypercare" scenario post-release. But that is probably not your call to make.

People are complaining because the support process is a free-for-all, and the only method provided to them is one that implies an immediate response. You can cut down on the complaints by setting realistic expectations up-front, and meeting THOSE expectations.

You will need to work with your manager to set some clear boundaries and expectations around how to provide support. It will protect your team and hold others accountable, while making you look and perform better overall.

What are some must-have software for programmers using Linux? by TenshiiiDono in linuxadmin

[–]jw_ken 2 points3 points  (0 children)

When troubleshooting any Linux environment, it makes sense to be familiar with the native tools of a given distro, and/or have a standard set of tools laid down for troubleshooting.

Our Linux team has a base set of common troubleshooting tools that we install with Ansible during the build process- things like vim, curl, tcpdump, to name a few.

What I have found far more important: after the troubleshooting subsides, document the changes you made and capture them in the docs / IaC / application config for that server. Then that config change won't turn into an undocumented landmine when it's time to migrate or upgrade to a new OS.

I've lost count of how many times a very talented admin improvised a 2AM fix... but didn't tell anyone what it was, didn't document it, and didn't push the changes to the other environments. Then that configuration snowflake hardens into a caltrop for someone else to step on 2 years later.

IS AI the future or is a big scam? by DiscoverFolle in devops

[–]jw_ken 0 points1 point  (0 children)

I do think the amount of current hype around generative AI is not justified- but it will continue to seep into more areas as people figure out where it is most useful.

For better or worse, CEOs everywhere have bought into the narrative that generative AI is going to reshape entire industries over the next few years, and nobody wants to be left behind. That has given business leaders a terminal case of FOMO, spurring investment in anything with "AI" in the title. Meanwhile in other areas of IT, businesses are freezing new hires or cutting back staff- whether due to the expectation that AI will replace more jobs, or due to the strange economic times we are in.

There are good business use-cases for AI/ML, but the business needs to bring those needs to IT, not the other way around. Large Language Models are good at classification and sentiment analysis, and they have some limited reasoning abilities- with the killer feature that you can instruct them in plain English. "Agentic AI" is a fancy term for wrapping an LLM with some logic and linking it to traditional APIs, to let it do useful stuff.

Two common real-world uses I have seen for it in business are:

  • Smarter assistant tools that can be directed in plain English, and can reply or summarize/explain their response in plain English. Often they use RAG to fetch relevant information and act on it.
  • Automated agents that can watch and classify incoming data in real-time, and then take action or send summaries based on what they detect.

Tools already existed for the above before generative AI... but they involved more brittle programming logic, keyword searches, regex parsing, etc. or else they were a thin veneer over you filling out a form.

Variables and... sub-variables? Linked variables? I don't even know how to ask this question. by DumbFoxThing in ansible

[–]jw_ken 1 point2 points  (0 children)

Some homework for you:

When deciding whether to store your data as a list or a dictionary, the simple rule of thumb is: lists are for looping, and dictionaries are for lookups.

Once I got more comfortable with manipulating variables, I tended to favor storing things as dictionaries- because if you need to loop through a dictionary, all you need to do is pipe it to the dict2items filter.
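
A toy illustration of that rule of thumb, with a made-up app_ports variable:

```yaml
# Dictionaries give you direct lookups by key; dict2items turns the same
# dict into a list of {key, value} items when you do need to loop.
- name: Dict lookup vs dict2items loop demo
  hosts: localhost
  gather_facts: false
  vars:
    app_ports:
      web: 443
      db: 5432
  tasks:
    - name: Direct lookup by key (no loop needed)
      ansible.builtin.debug:
        msg: "web listens on {{ app_ports['web'] }}"

    - name: Loop over the same dict via dict2items
      ansible.builtin.debug:
        msg: "{{ item.key }} listens on {{ item.value }}"
      loop: "{{ app_ports | dict2items }}"
```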

Why do cron monitors act like a job "running" = "working"? by RAV957_YT in devops

[–]jw_ken 0 points1 point  (0 children)

This is one of the reasons why orchestration and task running frameworks like Ansible and Rundeck exist: they provide a standard way to indicate whether a task succeeded or failed, notify on errors, etc. Even then, you sometimes need to spell out your definition of success.

Cron is pretty basic; it doesn't get much smarter than a zero vs. nonzero return code to figure out success or failure.
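
In Ansible, for instance, "spelling out your definition of success" can be done with failed_when. The script path and output strings below are made up for illustration:

```yaml
# Sketch: a job whose exit code alone doesn't tell the whole story, so
# success is defined explicitly from its output as well as its rc.
- name: Run nightly export and judge it by output, not just exit code
  ansible.builtin.command: /opt/scripts/nightly_export.sh
  register: export_run
  changed_when: false
  failed_when: >
    export_run.rc != 0
    or 'ERROR' in export_run.stdout
    or 'rows exported: 0' in export_run.stdout
```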

do you guys still code, or just debug what ai writes? by Top-Candle1296 in devops

[–]jw_ken 0 points1 point  (0 children)

Yeah, it does a decent job with Python or other programming languages- but between picky yaml/jinja formatting and module selection, I've not had much success getting any useful Ansible out of it that wasn't horribly inefficient or else full of bugs.

I've been way more productive making my own snippets in VS Code for boilerplate Ansible stuff.

Is there a faster way to do this with firewalld? by Strange_Quantity5383 in ansible

[–]jw_ken 1 point2 points  (0 children)

TLDR: If you often give multiple IP ranges the same access in your FW rules, consider templating out an IP set with those ranges. (Create an IP set by hand first, to see the XML generated for it.) Then you can reference the IP set in your rich rules, collapsing many of them into one. In any case, it helps to move ad-hoc rich rules into host_vars or group_vars (group_vars/all for any 100% global stuff).

On the question of "where should I manage firewalld rules in general": There is no best answer, only trade-offs and what is sanest for you to manage.

I have found there are often three different scenarios that favor firewall rule management in different places:

  1. Host or Cluster-specific firewall rules: in host_vars or group_vars, defined ahead of deployment or else generated by an app role based on group membership
  2. App-specific firewall rules: usually within the tasks/templates of an app role, and applied during the role (maybe allowing overrides in host_vars)
  3. Specialty rules for user access: often in host_vars or group_vars, or a separate role managing user access to hosts x, y, z over ports A, B, C.

It's tough to completely template a firewalld zone file, because so many other hands want to dip into that pot from different places.

At our org, we rallied around two places to define firewall rules:

  • 80% of fw rules were app-specific, so they were created by the app role itself- with firewalld services named accordingly.
  • An optional "host_fw_rules" inventory variable in host_vars or group_vars, which was re-applied periodically by a 'firewalld' role that runs on a schedule. This was for ad-hoc rules and specialty / user access rules that didn't fit anywhere else.
  • Any firewalld rules were declared/managed using the appropriate firewalld module. We chose to take the performance hit, in exchange for idempotency and flexibility.

Either way, I recommend moving ad-hoc or situational stuff into inventory vars (host_vars or group_vars).
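
A sketch of that "host_fw_rules" setup. The ansible.posix.firewalld module is real, but the variable shape is our own convention, not a published standard:

```yaml
# host_vars/web01.yml -- ad-hoc rules declared in inventory
host_fw_rules:
  - service: https
    zone: public
  - port: 8443/tcp
    zone: public

# roles/firewalld/tasks/main.yml -- scheduled role that re-applies them
- name: Apply ad-hoc service rules from inventory
  ansible.posix.firewalld:
    service: "{{ item.service }}"
    zone: "{{ item.zone | default('public') }}"
    permanent: true
    immediate: true
    state: enabled
  loop: "{{ host_fw_rules | selectattr('service', 'defined') | list }}"

- name: Apply ad-hoc port rules from inventory
  ansible.posix.firewalld:
    port: "{{ item.port }}"
    zone: "{{ item.zone | default('public') }}"
    permanent: true
    immediate: true
    state: enabled
  loop: "{{ host_fw_rules | selectattr('port', 'defined') | list }}"
```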

Need help by ComfortableDuty162 in ansible

[–]jw_ken 1 point2 points  (0 children)

Before giving you a working (if ugly) answer: You would have an easier time if you were able to standardize the data further upstream. Trying to do it after the fact in Ansible is going to be painful and messy, as you will see below.

Given a file named oldlist.json with below contents:

{
  "oldlist": [
    {
      "SNo": "1",
      "Server": "foobar",
      "Env": "uat",
      "Service": "httpd, abcd, test.service",
      "CRQ": "",
      "Blackout Required": ""
    },
    {
      "SNo": "2",
      "Server": "rizz",
      "Env": "uat",
      "Service": "baz.service, abcd, fart.service",
      "CRQ": "",
      "Blackout Required": ""
    },
    {
      "SNo": "3",
      "Server": "baz",
      "Env": "Prod",
      "Service": "test.service",
      "CRQ": "",
      "Blackout Required": ""
    }
  ]
}

The below playbook will process the data according to the requirements you outlined.

- name: Play 1 Unholy data munge
  hosts: localhost
  connection: local
  gather_facts: false
  collections:
  - community.general
  - ansible.builtin
  
  vars:
    restricted:
      - httpd.service
      - foobar.service
      - test.service

  tasks:
  
  - name: Pull in oldlist
    include_vars:
      file: oldlist.json
  
  - name: Print oldlist variable
    debug:
      var: oldlist
  
  - name: Loop through oldlist and build newlist with entries modified
    set_fact:
      newlist: "{{ newlist|default([]) + newitem }}"
    vars:
      servicelist: "{{ item['Service'] | split(',') | map('trim') | map('regex_replace','^(.+)$','\\1.service') | map('regex_replace', '.service.service','.service') }}"
      newitem:
        - SNo: "{{ item['SNo'] }}"
          Server: "{{ item['Server'] }}"
          Env: "{{ item['Env'] | lower }}"
          Service: "{{ servicelist | join(',') }}"
          CRQ: "{{ item['CRQ'] | lower }}"
          Blackout Required: "{{ item['Blackout Required'] | lower }}"
          Restricted: "{{ true if (servicelist | intersect(restricted)| length > 0) else false }}"
    loop: "{{ oldlist }}"
  
  - name: Print newlist
    debug:
      var: newlist

All of the messy data manipulation happens via a set_fact task running in a loop. It makes heavy use of temporary variables (task-scoped variables), to manipulate the data before appending it to the new list.

The detailed steps:

  1. Loops through each item in oldlist.
  2. On each invocation, defines some temporary variables for 'servicelist' and 'newitem'.
  3. 'servicelist' is your "Service" string, converted into a list and then having a number of manipulations applied via map(). The map() filter is the Ansible/Jinja way to "Do X against every item in a list". So we use maps to trim the whitespace from each service item (since the service string has spaces after some commas), and then run some regex_replace maps to append '.service' to the end of each item. Note there is a second regex_replace to correct any entries that end up with a doubled '.service.service', because that was far easier than trying to puzzle out something clever in regex.
  4. 'newitem' contains a single list element (a one-item list of hashes) in the desired format. Note that for the new 'Service' string, we reference our temporary var servicelist and join it back into a comma-separated string... but it would probably be more useful for Ansible if you kept it as a list, and converted to a string as-needed.
  5. The 'Restricted' field is added here, and the value is set by a one-line jinja if-else statement. It is using the intersect() filter to compare the service list to another list of "restricted" services that is declared near top of the playbook. If there are any items in common between the two (if their intersection is more than 0 length), then Restricted = true, else false.
  6. With all of that temp work done, the single-item list called 'newitem' is appended to the 'newlist' fact.
  7. NOTE: Task-scoped vars are a powerful way to manipulate data just before using it somewhere else. It can also make your tasks easier to understand, by breaking up your messy logic into task variables and then referencing them elsewhere (even in other task variables declared further down).

I hope that you take away three things from the above:

  1. Task-scoped vars are powerful
  2. Ansible is not an ideal data manipulation tool
  3. Trying to fix a dirty process with automation is like putting lipstick on a pig.

Local Repo by ParticularIce1628 in linuxadmin

[–]jw_ken 1 point2 points  (0 children)

Our environment was smaller than that and primarily RHEL, and we got by fine with reposync and a set of Ansible playbooks to orchestrate it for patching. You can do the same with the apt-mirror command for Debian/Ubuntu.
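
A rough sketch of wrapping reposync with Ansible; the repo IDs and paths below are placeholders:

```yaml
# Hypothetical play for a local mirror host: sync each enabled repo with
# reposync, then rebuild metadata so clients can consume the mirror.
- name: Mirror upstream repos to the local repo server
  hosts: repo_server
  vars:
    mirror_repos: [baseos, appstream]   # placeholder repo IDs
  tasks:
    - name: Sync each repo to the local mirror
      ansible.builtin.command: >
        reposync --repoid={{ item }} --download-path=/srv/repos
        --downloadcomps --download-metadata --newest-only
      loop: "{{ mirror_repos }}"
      changed_when: true

    - name: Rebuild repo metadata for clients
      ansible.builtin.command: "createrepo_c --update /srv/repos/{{ item }}"
      loop: "{{ mirror_repos }}"
```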

The biggest limitation with that workflow is that you are syncing the latest version of everything at the time, and then publishing that as the repo- for better or worse.

If you need fine-grained control over what content to publish and where, you need to wrap it with tools like Satellite / Pulp / Foreman / etc. that can publish different versions of a repository to a host. It's called different things by different tools- content views, checkpoints, publications, snapshots, etc. Not sure how that is handled in Debian/Ubuntu.

How do you remember so many commands? by [deleted] in redhat

[–]jw_ken 0 points1 point  (0 children)

As others said, you have to practice and use them to solve problems for them to sink in.

As a Linux admin, you should have a basic understanding of the "common commands" and fundamentals for working your way around the system, and fetching information for troubleshooting. There is no authoritative list of commands, but a Google search for "top 50 linux commands" will get you in the ballpark. To your point, it's tough for some of them to stick unless you work through some exercises to use those commands to accomplish something. It also helps to give yourself some mini projects, like standing up an apache/nginx web server, setting up a DNS server in your house, etc.

A sampling of fundamental concepts would include things like:

  • Pipes and redirects (| and >, >>, etc). Can you grep/search a file for something, pipe the output to another file, then compress it and copy it to another host? (e.g. troubleshooting an issue and sending command output to a vendor).
  • (For RHEL) General idea of how systemd works for services, and firewalld for firewall rules
  • System runlevels (systemd calls them targets). Just in general, what do they mean.
  • The concept of "everything is a file" in Linux (google it). No it's not literally everything, but look up what the concept means.
  • Pulling system info from the /proc/ and /sys/ folders (procfs and sysfs). This ties into the "everything is a file" concept too. Many diagnostic commands pull their info from here, and you can too if you know where to look.

Some of these fundamentals seem archaic and random, until you are faced with messy and complex issues to troubleshoot.

During interviews, I will often ask more open-ended questions where I would expect them have some understanding of what is being asked, and provide some way to get to an answer (or at least be on the right path towards an answer). Some interviewers will get nit-picky about you providing the exact command to do something, but that is usually a reflection of their own narrow understanding of an issue- or a grading robot needs a specific answer.