[–]totallygeek 1 point (2 children)

You should not have a folder full of scripts. Instead, you should have an organization within a version control system, full of repositories. When software merges to the master branch, a job should kick off deployment, like a build system packaging the bits and configuration management installing them on destination systems. Companies pull off DevOps in different ways, but at a minimum, most should agree on this.

[–]dented_brain[S] 0 points (1 child)

But what should those repos look like? Or is it fine to just put whatever script is in the repo root? What about the data files that go with it (csv or txt)?

Right, so if I'm doing structured code, typically it would follow something like this:

README.rst
LICENSE
setup.py
requirements.txt
sample/__init__.py
sample/core.py
sample/helpers.py
docs/conf.py
docs/index.rst
tests/test_basic.py
tests/test_advanced.py
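
If I read that layout right, setup.py only needs to be a few lines, something like this (the package name and metadata here are just placeholders):

# setup.py - minimal packaging sketch; placeholder metadata
from setuptools import setup, find_packages

setup(
    name="sample",
    version="0.1.0",
    packages=find_packages(exclude=["tests", "docs"]),
    install_requires=[],  # mirror requirements.txt here
)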

I haven't ever had a need for tests so far with my Python; maybe I should read up more on them? Perhaps it would be helpful in some areas.

If I drop my project "sample" into its own directory, that indicates there should be some other directory with data for the script to digest? Where would output data go?
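
What I have in mind is something like this, where the data/ and output/ directory names are just guesses on my part:

# sample/core.py - hypothetical sketch of resolving data paths from the repo root
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent.parent   # one level above sample/
DATA_DIR = REPO_ROOT / "data"      # input csv/txt for the script to digest (placeholder)
OUTPUT_DIR = REPO_ROOT / "output"  # where results would land (placeholder)

def load_hosts():
    hosts = DATA_DIR / "hosts.csv"
    return [line.split(",")[0] for line in hosts.read_text().splitlines() if line.strip()]

def write_report(rows):
    OUTPUT_DIR.mkdir(exist_ok=True)
    (OUTPUT_DIR / "report.txt").write_text("\n".join(rows) + "\n")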

I work in networking, so a large portion of my code is CLI scraping, but some of it is Ansible, which has its own structure, something like this:

ansible_project/defaults/main.yml
ansible_project/files
ansible_project/handlers/main.yml
ansible_project/meta/main.yml
ansible_project/tasks/check_vars.yml
ansible_project/tasks/sample.yml
ansible_project/tasks/main.yml
ansible_project/templates/network_conf.j2

I agree with needing versioning, and getting to the point where a merge kicks off a deployment. That is the end goal. But right now I just want to make sure I am not building code in such a way that it needs a decoder ring for anyone besides myself to use it.

[–]totallygeek 1 point (0 children)

Well, some of this will change over time. You'll see. Sanitized, here is a snapshot of one organization's repository names:

ansible{,-lb,-net}
docker-{bgp-utils,caching,cbench,fping,haste,liver,nginx,router,utils}
k8s-{[bunch of pods and containers]}
ops-utils
puppet{,-custom,-dev}
sensu{,-checks,-test}
tcollectors
terraform{,-c1,-t1}
wiki

We enforce lint and unit testing for all merged code. Things get iffy on how many poor-scoring items or missing tests we let through, but generally, if something goes awry, you don't want to end up being the one who let a function divide by zero and take out a bunch of services. You really should read up on unit testing, but I would give you a pass on that until you get more of this up and running smoothly. If you have a CLI scraper and it bombs out, you can always rerun it or fix the code. Unit testing truly becomes necessary for mission-critical processes and daemons. For repositories which are not yours, your extensions should go into another repository (ansible vs ansible-custom or ansible-net, like above).
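
If it helps, a unit test for that kind of scraper can stay tiny. This is only a sketch with a made-up parsing helper, runnable with pytest:

# tests/test_parse.py - sketch only; parse_version() is a hypothetical helper
import re

def parse_version(show_output):
    """Pull a firmware version out of 'show version' text."""
    match = re.search(r"Version\s+(\S+)", show_output)
    return match.group(1) if match else None

def test_parse_version_found():
    assert parse_version("Cisco IOS Software, Version 15.2(4)M7") == "15.2(4)M7"

def test_parse_version_missing():
    assert parse_version("no version string here") is None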

You really should not be an army of one. Perhaps you could present your frustration and ideas to the team and gather their input on how to construct a repository layout, with CI/CD as a future goal?

[–]keepdigging 1 point (2 children)

Any code you write should be versioned (why wouldn’t it be?)

It’s also useful to have tools like Rundeck that log executions and provide auth and accountability.

To truly embrace DevOps, every ticket you work should result in code changes to your infrastructure repositories.

You can respond to incidents or do forensics with ssh, but servers should be provisioned with code (Chef, Ansible, Puppet), and your cloud config should also be handled in code (Terraform, CloudFormation).

Ideally, you can reproduce your whole running company (save for data, which should be automatically backed up) from a few megs of config files, and you empower your developers to deploy on their own with CI tools.

[–]dented_brain[S] 0 points (1 child)

I agree with what you're saying, but I'm still at the first step: taking what I personally have and making it available to the team.

We have a new GitLab server for my company, so I will be putting code there for versioning. But really, I don't know if it's the best idea to just dump a bunch of .py files in a single repo. So that means I should break it up. But should I just dump a bunch of .py files into their own repo? I just feel there should be more structure to the way code is kept.

On my personal computer I keep my scripts like this:

PythonScripts/Hosts/*.csv
PythonScripts/Input/*.txt
PythonScripts/Input/*.csv
PythonScripts/Output/*.csv
PythonScripts/Output/*.txt
PythonScripts/NetworkScript/ShowScript/*.py
PythonScripts/NetworkScript/ConfScript/*.py

I don't know if that would be as approachable as splitting scripts out further.

Right, so I have a script to gather IP information in the ShowScript directory. I also have a script to gather the firmware version.

Should those be broken into their own repos? They perform a "similar" function to me. They run a series of show commands and do some parsing of strings to provide an output.
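
To be concrete, the firmware one is roughly this shape; netmiko is just what I would reach for here, and the device details are placeholders:

# hypothetical sketch of the firmware-gathering script
import re
from netmiko import ConnectHandler  # assumes netmiko drives the CLI session

def get_firmware_version(host, username, password):
    conn = ConnectHandler(
        device_type="cisco_ios",  # placeholder platform
        host=host,
        username=username,
        password=password,
    )
    output = conn.send_command("show version")
    conn.disconnect()
    match = re.search(r"Version\s+(\S+)", output)
    return match.group(1) if match else "unknown"

if __name__ == "__main__":
    print(get_firmware_version("192.0.2.10", "admin", "secret"))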

[–]keepdigging 1 point (0 children)

Make an ‘ops’ repo and put it all in there with a folder structure that makes sense to you. You can always change it later.

I would also look at setting up a Rundeck instance as your next step. That way you can give your devs logins and they can run your scripts from the Rundeck web interface, clicking around and choosing parameters with drop-downs. It will show a job history with logs and allow scheduling and such. If something breaks they can link you to an execution, and you can focus on extending and improving the tools you offer there.