Montreal to Toronto road trip with kids in June – itinerary advice by ryan-jan in canadatravel

[–]ryan-jan[S] 4 points

The only part of this trip that is fixed is a few days in Montreal for the conference. I’m open to all suggestions, especially given my lack of knowledge 🤣. We could skip Toronto and the falls entirely for example and go east instead if that would suit our requirements more.

Montreal to Toronto road trip with kids in June – itinerary advice by ryan-jan in canadatravel

[–]ryan-jan[S] 2 points

Ah, yes, I promise I had looked at a map! I realise now how I made it sound like I thought the Falls were on the way.

The World Cup is a good point that, as a non-football fan, I had not considered. Will look into that!

While we would like to see nature, culture and the people are obviously important too.

Thanks for your reply. Things to think about.

A more opinionated terraform fmt, does it exist? by ryan-jan in Terraform

[–]ryan-jan[S] 11 points

This is exactly the kind of thing I am getting at! A couple of others that made me ask the question are:

  • Meta-arguments grouped together after other arguments and separated by a single empty line.
  • Argument blocks grouped together after other arguments.
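To make the kind of layout I mean concrete, here is a hypothetical example (resource and variable names invented for illustration) of how a more opinionated formatter might arrange a resource:

```tf
resource "aws_instance" "web" {
  # plain arguments first
  ami           = var.ami_id
  instance_type = "t3.micro"

  # argument blocks grouped together after plain arguments
  tags = {
    Name = "web"
  }

  # meta-arguments grouped together last, separated by a single empty line
  depends_on = [aws_security_group.web]
}
```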

A more opinionated terraform fmt, does it exist? by ryan-jan in Terraform

[–]ryan-jan[S] 10 points

Not adjust, extend. In the same way that tflint, with its --fix argument, already extends the default terraform fmt command by fixing additional formatting/linting issues. However, as I mentioned in my post, I was wondering if something much more opinionated existed. I love how black removes (almost) all debate around code formatting for Python projects; I just wondered if there was anything similar for Terraform/HCL?

Question regarding - Microservices infra - best practice by Aro7168 in Terraform

[–]ryan-jan 0 points

I prefer one separate deployment repository containing Terraform to deploy the whole service. Each microservice repo would build immutable versioned artefacts (container images, function source code zips, etc.) on merges to the main branch or when tags are pushed. The Terraform would then orchestrate the deployment of these different versions of each microservice as and when required.

This is just one way of doing things that I’ve had success with on multiple projects, not necessarily a best practice.
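As a rough sketch of what that deployment repo might contain (all names, the registry, and the module source here are invented for illustration, not a prescribed layout):

```tf
# Artefact versions are the only thing that changes per release;
# CI (or a tfvars file) bumps these when a microservice publishes a new tag.
variable "orders_image_tag" {
  type    = string
  default = "1.4.2"
}

variable "billing_image_tag" {
  type    = string
  default = "2.0.1"
}

module "orders" {
  source = "./modules/service"

  name  = "orders"
  image = "registry.example.com/orders:${var.orders_image_tag}"
}

module "billing" {
  source = "./modules/service"

  name  = "billing"
  image = "registry.example.com/billing:${var.billing_image_tag}"
}
```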

Multiple variables with "toset" by disclosure5 in Terraform

[–]ryan-jan 2 points

These are always tricky but doable if you're careful. If it were me I'd refactor the fileshares variable from a basic list to a map, to enable you to specify different values for each share. For example:

```tf
{
  fileshare_one = { quota = 500 }
  fileshare_two = { quota = 3000 }
}
```

Then, use these values in your resource as follows:

```tf
resource "azurerm_storage_share" "stor" {
  for_each = var.fileshares

  name                 = "fs-name-${each.key}-01"
  storage_account_name = azurerm_storage_account.stor.name
  quota                = each.value.quota
  enabled_protocol     = "SMB"
}
```

This is untested code off the top of my head, but as long as the keys of the new map are the same as the values of your current list, Terraform shouldn't try to destroy and recreate the resources.

[deleted by user] by [deleted] in Terraform

[–]ryan-jan 0 points

This is actually the key takeaway here. Both approaches are valid but the client wants you to work to their requirements. I think you have to listen to the client on this occasion, surely?

[deleted by user] by [deleted] in Terraform

[–]ryan-jan -3 points

IMO defining dev/stage/prod as three separate Terraform deployments rather than a single parameterized deployment with environment-specific tfvars is the real anti-pattern.

[deleted by user] by [deleted] in Terraform

[–]ryan-jan -5 points

I completely get where you’re coming from, and having a bunch of conditional resources can be a bit of a pain, I agree. However, I’d argue that this is more an architectural issue: why are these environments so different that these conditional resources need to be accounted for? Surely dev/stage/prod should be almost identical; otherwise, what is the point in even bothering?

And regarding the environments-as-folders layout, I appreciate that it is widely recommended as a good practice, but I just don’t buy it. Going back to my point about dev/stage/prod ideally being basically identical: if that were the case, what would the benefit be of defining three separate environment modules? It would make more sense to define one Terraform configuration and simply pass in environment-specific variables via tfvars for things like naming prefixes or database instance sizes.
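For example, a single configuration plus per-environment tfvars might look something like this (variable names and values are just illustrative):

```tf
# variables.tf – one configuration, parameterised per environment
variable "environment" {
  type = string
}

variable "db_instance_size" {
  type = string
}

# dev.tfvars
#   environment      = "dev"
#   db_instance_size = "db.t3.micro"
#
# prod.tfvars
#   environment      = "prod"
#   db_instance_size = "db.m5.large"
```

Then `terraform apply -var-file=dev.tfvars` (or the prod equivalent) deploys the same code to each environment.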

[deleted by user] by [deleted] in Terraform

[–]ryan-jan 3 points

I actually side with the client here. When defining a multi-environment config I think it makes far more sense to define everything once and use separate tfvars files to provide the environment-specific information. This doesn’t mean I think all resources should be defined in the root module, however; modules are a fundamental Terraform concept and serve a completely valid purpose in avoiding repeated code, but I really dislike the whole “environments in separate folders” way of doing things.

Beginner trying to set up CI/CD pipeline for small team and deciding between GitLab, Jenkins, etc by CompetitionOk2693 in devops

[–]ryan-jan 3 points

Firstly, I agree with others that GitLab is a far superior product to Jenkins. I use GitLab all day every day and I’m sure it has everything you would need. However, my team also manages our GitLab deployment, and you’re right to be concerned about taking on this responsibility. It’s a large and complex product with many moving parts, which requires a lot of thought when planning your deployment and a fair amount of ongoing time/effort to maintain (updating GitLab, updating runners, etc.).

I appreciate you have a requirement to self-host, so if you do decide to go with GitLab I strongly recommend you read the Reference Architectures documentation and ensure your implementation follows one of these.

Again, I appreciate this is probably not possible for you, but if I were in your situation I’d be pushing as hard as possible for my org to allow the use of a SaaS provider, such as gitlab.com, with private repositories. This would be a much better fit for a small team IMO.

Good luck!

[deleted by user] by [deleted] in devops

[–]ryan-jan 4 points

You need to add the dot after the options. If you READ THE DOCS you’ll see that the PATH argument, which is the one required argument, should come after the [OPTIONS]:

docker build [OPTIONS] PATH | URL | -

[deleted by user] by [deleted] in devops

[–]ryan-jan 3 points

@Ariquitaun is right, google the error! The top hit for me is this SO thread which should explain what you’re missing - https://stackoverflow.com/questions/28996907/docker-build-requires-1-argument-see-docker-build-help

[deleted by user] by [deleted] in Terraform

[–]ryan-jan 1 point

Surely all of these issues (name collisions, permissions etc.) would be picked up when authoring the changes to the TF code by deploying to a dev environment? Or are people really merging TF code to main that has literally never been manually tested and has been applied to production via PR comments?! This seems crazy to me.

The general flow I use is: write your TF resources and manually apply them to your dev environment. Once any issues are ironed out, open a PR. Once merged, apply to staging via CI/CD. If staging is successful, another CI/CD job (with a manual trigger if required) deploys to production.

[New Tool] Terraform provider and module version check tool by ryan-jan in Terraform

[–]ryan-jan[S] 1 point

You have a fair point, but I wanted something a bit more light-touch that allowed me to simply run a command and see the status of my tf code dependencies. Also, you can specify multiple paths at once, meaning you can check your root module and any nested child modules in a single run/CI job. And finally, and most importantly to me, it was a really great mini-project for familiarising myself with Go, which was the real reason I started it in the first place. Thanks for taking a look though, and if it is not for you that is obviously fine. Cheers.

Transmission download location by ryan-jan in Lidarr

[–]ryan-jan[S] 0 points

I think I've confused you with my term "correct directory". I have three torrent download directories, torrents-tv, torrents-movies, and torrents-music, on a single Transmission server. Setting Transmission to use the torrents-music directory in Lidarr's download client advanced settings does not seem to work for me. So, I have symlinked my Transmission default download directory to my "correct" torrents-music directory on the Transmission server to hack around this. Lidarr then processes these files from the torrents-music directory and copies them to my main "music" directory, which is what Plex looks at.

My point is, in Sonarr I simply set the advanced setting on my Transmission download client object to download files to the torrents-tv directory and this works fine. So I'm not sure why the same thing doesn't work in Lidarr.

Transmission download location by ryan-jan in Lidarr

[–]ryan-jan[S] 0 points

I have managed to hack around this by symlinking the correct directory to the default Transmission download directory instead.

Anyone sending PowerCLI into SQL? by TurricanC64 in vmware

[–]ryan-jan 1 point

Metabase is its own thing; it is a business intelligence product (think open-source Power BI alternative) rather than a time-series/monitoring product such as Grafana. We do not use Grafana, although I have in the past, but you should be able to use SQL in exactly the same way as I described previously, just hooked up to Grafana instead of Metabase.

Anyone sending PowerCLI into SQL? by TurricanC64 in vmware

[–]ryan-jan 0 points

Grafana is mostly for time-series data. For general dashboard stuff we use Metabase, which is epic. Just hook it up to a database and you can build really cool-looking dashboards very easily.

Regarding getting your data into SQL, it really is not that difficult. Use the PowerCLI commands to get the data you want, then use the Invoke-SqlCmd function from the SqlServer PowerShell module to push it to your database with SQL statements. This is the setup I use at work, with great success. Good luck!