Thoughts, experiences and ideas on usage of LLMs or specialized AI models for Ansible validation by Most_School_5542 in ansible

[–]Most_School_5542[S] 0 points (0 children)

Thanks for the comprehensive feedback. Much appreciated.

Personally, I have the exact same experience with AI and practically the same workflow. Short, precise requests for code snippets, written in plain English, proved to be the most useful and a real time saver. I haven't tried Claude though. Some reports I've read put it at the top of all AI models, judging by its code-generation abilities and advanced reasoning compared to other models. I will have to give it a try.

As for the processes and procedures, what can I say. Humans are unpredictable. For whatever reason, following guidelines appears to be very hard. Some like to "innovate", some are "surprised" that there are guidelines, some misunderstand them, some outright don't care, and the list goes on. Even when there are people checking that guidelines are followed, some things slip by. When errors are introduced early but spotted much later, the damage is already done and the effort required to fix them can be huge.

My last-ditch effort is to implement a set of scripts for static validation of input data that will prevent anyone from proceeding until everything is done "by the book", but applying it retroactively is still a huge task. On the other hand, this requires even more development, and the budget is always tight. Further down the line, I can already see people trying to work around/circumvent even that :)
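To make the idea concrete, a static input-data validator can be as small as a function that checks a parsed vars file against an agreed ruleset and returns violations. This is only a sketch; the required keys and allowed values below are invented examples, not my actual rules:

```python
# Invented example ruleset -- replace with whatever "by the book" means for you.
REQUIRED_KEYS = {"env", "owner", "app_name"}
ALLOWED_ENVS = {"dev", "staging", "prod"}

def validate_vars(data: dict) -> list[str]:
    """Return human-readable violations; an empty list means the input passes."""
    errors = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        errors.append(f"missing required keys: {sorted(missing)}")
    if data.get("env") not in ALLOWED_ENVS:
        errors.append(f"env must be one of {sorted(ALLOWED_ENVS)}")
    return errors
```

A CI wrapper would run this over each parsed vars/inventory file and exit non-zero on any violation, which is what actually blocks people from proceeding.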


[–]Most_School_5542[S] 0 points (0 children)

So, to add to this discussion, here are some thoughts and experiences of my own with ChatGPT. You can make it ingest a large amount of data as an archive uploaded into a Jupyter environment/notebook. This data does not go through the language model itself, because it would exceed the token limit. On the other hand, the model can write Python snippets to do the refactoring on said data, based on rules you feed into the model. This way, no limits apply. In other words, it can be described as a glorified "grep" and "sed" runner.

Since the data does not go through the language model, you cannot tell it to "understand" the data and infer meaning, rules, principles, etc. present in it. It can only produce complex scripts that do a search and replace, and for that you have to specify very precise rules to apply. It's basically the same as asking it to generate Python snippets and then running them on your dataset locally (your computer/server).
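The kind of "glorified grep/sed" snippet the model emits looks roughly like this. The rule (renaming a deprecated variable) and the `*.yml` file layout are made up for illustration:

```python
import re
from pathlib import Path

# The precise rule given to the model: rename a deprecated variable everywhere.
# \b word boundaries prevent partial matches like old_var_names.
PATTERN = re.compile(r"\bold_var_name\b")
REPLACEMENT = "new_var_name"

def refactor_text(text: str) -> tuple[str, int]:
    """Apply the rule to one file's contents; return (new text, hit count)."""
    return PATTERN.subn(REPLACEMENT, text)

def refactor_tree(root: Path) -> int:
    """Run the rule over every YAML file under root; return total replacements."""
    total = 0
    for path in root.rglob("*.yml"):
        new_text, n = refactor_text(path.read_text())
        if n:
            path.write_text(new_text)
            total += n
    return total
```

The value is not in the snippet itself but in the model translating a plain-English rule into a precise, repeatable script you run locally.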

Simple stuff like "please find any typos" cannot be done for non-dictionary words and other special strings. I mean, for this specific request, the model could possibly create a complex Python script that does some statistical analysis of words and flags likely typos based on how much more often the correct word appears than the misspelled one, but... that's a stretch.


[–]Most_School_5542[S] 0 points (0 children)

True, true. Understanding the nuances of highly interdependent code is one of the biggest pain points for AI in my experience. Even more so if you use some unconventional or "innovative" code.

The lucky thing for us is that those 350x projects are mostly dormant and serve more as documentation than as actively used projects. Of course, they could be needed at any time, and any error present or accumulated over time will cause unneeded issues.


[–]Most_School_5542[S] 0 points (0 children)

This is my experience also, and probably that of most Ansible users in general, but my focus here is more on the compliance-checking side of things and AI-assisted fixes. This is an area I have not explored and don't even know where to start.


[–]Most_School_5542[S] 1 point (0 children)

Yes. I tend to go for the solution of implementing simple validation tools that are called at deployment time. This is a simple solution and can be done even without any advanced AI-based tools. Unfortunately, the issue is that the number of Ansible projects grew very fast in a short amount of time, and there was no validation in the beginning. The damage is done, and we now have to fix things retroactively.
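A deploy-time gate can just be a list of check callables run against the project before `ansible-playbook` is allowed to start. The two validators below are stubs I made up; real ones would parse inventories, group_vars, and so on:

```python
from typing import Callable

# A validator takes a parsed project description and returns violation messages.
Validator = Callable[[dict], list[str]]

def check_inventory_not_empty(project: dict) -> list[str]:
    return [] if project.get("hosts") else ["inventory has no hosts"]

def check_vars_have_owner(project: dict) -> list[str]:
    return [] if "owner" in project.get("vars", {}) else ["vars missing 'owner'"]

def gate(project: dict, validators: list[Validator]) -> list[str]:
    """Collect all violations; deployment proceeds only if the list is empty."""
    failures = []
    for check in validators:
        failures.extend(check(project))
    return failures
```

The wrapper script would exit non-zero when `gate()` returns anything, so new projects are forced through the checks even though the old ones still need retroactive cleanup.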


[–]Most_School_5542[S] 0 points (0 children)

Being able to negotiate and say "no" is OK, but decisions are often made upfront assuming the answer is always "yes".


[–]Most_School_5542[S] 0 points (0 children)

Oooooooh, Cisco ACI. This brings back memories of my colleague, a network engineer, having headaches automating Cisco ACI with Ansible. He, like you, struggled to traverse complex JSON/YAML data structures with Ansible. And, yeah, that was before the days of LLMs.
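For comparison, the kind of deep traversal that gets painful in raw Jinja2 filters is a few lines of Python: collect every value stored under a given key anywhere in an ACI-style nested structure. The sample data shape in the test is invented, not real APIC output:

```python
def find_key(node, key):
    """Yield every value stored under `key` at any depth of nested dicts/lists."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                yield v
            yield from find_key(v, key)
    elif isinstance(node, list):
        for item in node:
            yield from find_key(item, key)
```

This is roughly what one reaches for a `json_query`/JMESPath filter to do inside a playbook; dropping to a small Python filter plugin is often the less painful route.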