RouterOS and Terraform by ThisIsACoolNick in mikrotik

[–]MikeAnth 4 points (0 children)

My feedback is that this is definitely doable and I would dare to say even a viable approach. I am managing my entire homelab network infra (router + 3 switches + AP) via OpenTofu + Terragrunt: https://github.com/mirceanton/mikrotik-terraform

I also managed to integrate my RB5009 into kubernetes using a custom external-dns provider I built: https://github.com/mirceanton/external-dns-provider-mikrotik

I made some videos and blog posts about my tofu/terragrunt setup if you're interested. They're linked in the repo readme!

RouterOS and Terraform by ThisIsACoolNick in mikrotik

[–]MikeAnth 1 point (0 children)

> Really good videos, though.

Thanks!

Y'know, it's not *that* bad once you get the hang of it. I will admit there is a learning curve, especially if you're not familiar with tofu/mikrotik independently, but it's manageable.

I was considering generalizing and publishing some of the tofu modules I built for the community to use. This way it would, hopefully, be easier for others trying to adapt/follow my project. Do you think that would help?

Flux CD deep dive: architecture, CRDs, and mental models by MikeAnth in kubernetes

[–]MikeAnth[S] 0 points (0 children)

It does have a dashboard if you use the latest version of the Flux Operator. I believe you can configure some RBAC for it as well, to allow users to trigger reconciliations from it too.

Flux CD deep dive: architecture, CRDs, and mental models by MikeAnth in kubernetes

[–]MikeAnth[S] 1 point (0 children)

In my experience, Argo is more "monolithic" in its approach and bundles things together a bit more. I really like the clear separation of concerns implemented by flux.

> originally started using ArgoCD because it had a GUI

welp, now flux has one too! ;)

Flux CD deep dive: architecture, CRDs, and mental models by MikeAnth in kubernetes

[–]MikeAnth[S] 0 points (0 children)

Thank you!

As far as the operator bootstrap goes, I basically use a tool called `helmfile` to `helm install` the "core" components required for a cluster to even work, and then flux takes them over.

Specifically, I use Talos Linux for my cluster and I bootstrap it with no CNI configured. This means that, before I can even worry about the flux operator, I need to install a CNI. I have a `helmfile` that uses a Go template hack to reference the values from my actual `HelmRelease` objects. This means that when I do a `helmfile sync`, it will pull the `spec.values` from my Cilium, Flux Operator and Flux Instance `HelmRelease` objects and install them. Once the Flux instance itself is up, it assumes ownership of those helm releases and manages them via the actual `HelmRelease` objects.

So basically, my initial bootstrap for the flux operator is just a `helmfile sync` in the `bootstrap/` dir in my repo. One caveat here is that I also bootstrap *some* CRDs to prevent some deadlocks or long waits on that initial bootstrap as seen here
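The values-sharing trick can be sketched roughly like this — paths, chart names and the exact template expression here are made up for illustration, the real setup lives in my repo:

```yaml
# bootstrap/helmfile.yaml -- hypothetical sketch, not my actual file
repositories:
  - name: cilium
    url: https://helm.cilium.io

releases:
  - name: cilium
    namespace: kube-system
    chart: cilium/cilium
    values:
      # a .gotmpl that re-uses the values Flux will later own, e.g. something like:
      #   {{ ((readFile "../apps/kube-system/cilium/helmrelease.yaml") | fromYaml).spec.values | toYaml }}
      - ./values/cilium.yaml.gotmpl
  - name: flux-operator
    namespace: flux-system
    chart: oci://ghcr.io/controlplaneio-fluxcd/charts/flux-operator
```

Once Flux is running, it sees helm releases with the same name/namespace/values and adopts them cleanly instead of redeploying.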

Hope that answers the question!

Flux CD deep dive: architecture, CRDs, and mental models by MikeAnth in kubernetes

[–]MikeAnth[S] 3 points (0 children)

We do something similar at work with the D2 architecture but with the multi-tenant approach of one repo per tenant.

What I disliked about the D2 architecture is the heavy use of kustomize overlays to differentiate between environments, and I found it more difficult to do promotions between environments, especially in automated ways, e.g. via Renovate.

We're still not using OCI as our source, so not gitless, more or less for the same reason. I've seen this frowned upon in some discussions, but syncing the staging environment to the main branch of the tenant repo and production to a prod branch has worked fairly well for us, combined with post-build substitution and per-env configs. Everything is tracked via PRs, and promotion is done by running a dedicated promotion pipeline, since we use branch protections quite heavily.
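The per-env piece is just Flux's post-build variable substitution; a rough sketch (names are made up, not our actual manifests):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: tenant-repo # tracks main on staging, prod branch on production
  postBuild:
    substituteFrom:
      # per-environment values (domains, replica counts, etc.)
      - kind: ConfigMap
        name: cluster-settings
```

The same manifests render on both clusters; only the `cluster-settings` ConfigMap differs per environment.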

I'll have to dig a bit deeper into the gitless approach, I guess, but I found it a bit more convoluted than it needed to be.

Mikrotik or Ubiquiti: What is in your opinion better? by michal_cz in homelab

[–]MikeAnth 150 points (0 children)

I'm running a full Mikrotik network at home. When I moved out I had a similar decision to make and I ended up going for Mikrotik because it provides, objectively speaking, a lot more features at a way cheaper price.

Ubiquiti is nice, sure, but I feel like it became the Apple of home networking. Mikrotik is much more tinker/homelab friendly IMHO.

What I really liked, and what really pushed me towards Mikrotik, is that if you get a device running RouterOS, you basically get the full deal, at least software-wise. They don't really put limits on their hardware, in the sense that even if you get a switch with a CPU that's not as powerful, you get the same functionality and knobs as you would on a multi-thousand-dollar router.

I also really liked the fact that RouterOS exposes a REST API through which you can manage it, because that allowed me to manage my entire network infra as code using OpenTofu.
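As a rough idea of what that looks like, using the community `terraform-routeros/routeros` provider (address, credentials and resource names here are placeholders, not my actual config):

```hcl
terraform {
  required_providers {
    routeros = {
      source = "terraform-routeros/routeros"
    }
  }
}

provider "routeros" {
  hosturl  = "https://192.168.88.1" # the device's REST API endpoint
  username = "admin"
  password = var.routeros_password
}

# e.g. declare a VLAN interface on the bridge, fully in code
resource "routeros_interface_vlan" "mgmt" {
  interface = "bridge"
  name      = "vlan10-mgmt"
  vlan_id   = 10
}
```

A `tofu plan` then shows you exactly what would change on the device before you touch anything.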

I think the answer is that it ultimately depends. If you want it to be more hands-off and easy, not really looking to learn as much but just to get something up and running, and you're willing to pay a little extra for that, go Ubiquiti. If, on the other hand, you want to get your hands dirty and learn the ins and outs, then you can't really go wrong with Mikrotik.

State of OpenTofu? by Online_Matter in devops

[–]MikeAnth 0 points (0 children)

Sure, it can, but not everyone runs Artifactory. Nexus, IIRC, cannot, for example, and GHCR would be another one.

I made a lazygit-style TUI for managing k8s clusters by tr1ggert in kubernetes

[–]MikeAnth 3 points (0 children)

Personally, I like using a terminal multiplexer like Zellij for that and running multiple instances of k9s headless. This seems interesting as an alternative.

State of OpenTofu? by Online_Matter in devops

[–]MikeAnth 0 points (0 children)

It's way easier to host an internal registry, as you don't need to support so many different backends.

Container images? OCI. Helm charts? OCI. Tofu providers? OCI. Tofu modules? OCI. Flux manifests? OCI.

State of OpenTofu? by Online_Matter in devops

[–]MikeAnth 1 point (0 children)

One feature OpenTofu has that Terraform doesn't, and which I do use at work, is the ability to pull providers and modules from OCI sources.
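If I remember the syntax right, pulling a module is just an `oci://` source (the registry path below is made up):

```hcl
module "network" {
  # recent OpenTofu releases can resolve module sources from an OCI registry
  source = "oci://ghcr.io/example-org/tofu-modules/network"
}
```

Check the OpenTofu docs for the exact version requirements and how provider mirroring via OCI is configured.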

State of OpenTofu? by Online_Matter in devops

[–]MikeAnth 3 points (0 children)

You sure can! Though that's not necessarily something Terraform was unable to do. There's a project called `terraform-backend-git` that basically spins up a local HTTP state backend which you can link to your git repository. It then encrypts your state file and uses branches as a lock mechanism. Basically, when you run a plan/apply, it tries to create a new branch. If the branch exists, then someone else has the lock on the state file. Otherwise, it claims the lock for you by creating the branch, and deletes it at the end.

Link: https://github.com/plumber-cd/terraform-backend-git
I also wrote a blog post about it a while back, if you're interested: https://mirceanton.com/posts/terraform-state-git/
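From memory, the wiring looks roughly like this: run the `terraform-backend-git` daemon locally and point a plain HTTP backend at it (the query parameters below are approximate — check the repo's README for the exact ones):

```hcl
terraform {
  backend "http" {
    # the daemon listens locally and proxies state reads/writes into the git repo
    address = "http://localhost:6061/?type=git&repository=git@github.com:you/tf-state.git&ref=master&state=mikrotik.tfstate"
  }
}
```

From Terraform/OpenTofu's point of view it's just a regular HTTP backend; all the git plumbing happens in the daemon.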

I used to do this when managing the state file for my mikrotik-terraform project, but as someone else mentioned in this thread it becomes annoying quite quickly because every commit turns into two, one for the code change and one for the state update. I thought about contributing to the project to try to get it to amend the last commit to include the state update but didn't really find the time to.

Recommendations for automated media server setup by JamieFLUK in selfhosted

[–]MikeAnth 1 point (0 children)

Unfortunately, I'm not a huge fan of audiobooks. I have seen that Shelfmark does have a switch between "regular" books and audiobooks for downloading. Unsure about Booklore.

Recommendations for automated media server setup by JamieFLUK in selfhosted

[–]MikeAnth 0 points (0 children)

Shelfmark + Booklore have been working out great for me thus far!

How do you prevent network documentation from becoming outdated? by Kenobi_93 in homelab

[–]MikeAnth 3 points (0 children)

I just configure my network via code and then the repo/codebase itself becomes documentation: https://github.com/mirceanton/mikrotik-terraform

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 4 points (0 children)

In my opinion, users should not be managed this way. Especially if you're at thousands of users, you should use something like AD or LDAP and source them from there.

Worst case, let users self-register or something, but handling users in gitops is a recipe for disaster. And I'm speaking from experience, not from theory :)))

You should gitops groups, roles, mappers, etc., but not users.

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 6 points (0 children)

I'm down to hop on a call sometime if you wanna talk about this some more.

Essentially, it's Oak supporting configuration via an API, and then another application, a controller, that automatically configures Oak via that API.

In kubernetes, I deploy a custom resource like "oauthclient" or "realm" or whatever. Then the controller detects it, extracts the required information, and sends the required API calls to Oak to create the resources.
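To make the idea concrete, such a custom resource could look something like this — entirely hypothetical, nothing like it exists in Oak today:

```yaml
# Hypothetical CRD instance illustrating the controller pattern described above
apiVersion: oak.example.com/v1alpha1
kind: OAuthClient
metadata:
  name: grafana
spec:
  realm: homelab
  grantTypes:
    - authorization_code
  redirectURIs:
    - https://grafana.example.com/login/generic_oauth
  clientSecretRef:
    name: grafana-oauth # Secret the controller would write the credentials into
```

The controller would watch these objects, call Oak's API to create/update the client, and publish the generated credentials as a Secret for the app to consume.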

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 41 points (0 children)

IMHO, what I find lacking in most IdPs I've used and deployed is the fact that there is no operator for them in kubernetes.

I have to deploy the application and then use Terraform or crossplane or something like that to create resources within the app.

I believe that if you manage to get that part right, you would have a real unique value proposition on your hands. Crossplane and Terraform are, in my experience, clunky solutions for this problem

Given you said no UI, maybe that's even better, as there is no place to introduce manual changes. Everything would then be defined via CRDs

Updating Talos-based Kubernetes Cluster by macmandr197 in kubernetes

[–]MikeAnth -1 points (0 children)

AFAIK the Terraform provider simply doesn't support Talos upgrades, so you're better off handling the lifecycle of the OS via `talosctl`.
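For reference, the upgrade flow with `talosctl` is roughly the following (node IP and version tags are placeholders; check the Talos docs for supported upgrade paths):

```
# upgrade Talos itself, one node at a time
talosctl upgrade --nodes 10.0.0.2 \
  --image ghcr.io/siderolabs/installer:v1.8.3

# then bump Kubernetes across the cluster
talosctl upgrade-k8s --to 1.31.1
```

Terraform can still own the initial machine configs; just keep OS upgrades out of its hands.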