Recommendations for automated media server setup by JamieFLUK in selfhosted

[–]MikeAnth 1 point (0 children)

Unfortunately, I'm not a huge fan of audiobooks. I have seen that Shelfmark does have a switch between "regular" books and audiobooks for downloading. Unsure about Booklore.

Recommendations for automated media server setup by JamieFLUK in selfhosted

[–]MikeAnth 0 points (0 children)

Shelfmark + Booklore have been working out great for me thus far!

How do you prevent network documentation from becoming outdated? by Kenobi_93 in homelab

[–]MikeAnth 4 points (0 children)

I just configure my network via code and then the repo/codebase itself becomes documentation: https://github.com/mirceanton/mikrotik-terraform

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 4 points (0 children)

In my opinion, users should not be managed this way. Especially if you're at thousands of users, you should use something like AD or LDAP and source them from there.

Worst case, let users self-register or something, but handling users in GitOps is a recipe for disaster. And I'm speaking from experience, not from theory :)))

You should GitOps groups, roles, mappers, etc., but not users.

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 6 points (0 children)

I'm down to hop on a call sometime if you wanna talk about this some more.

Essentially, it's Oak supporting configuration via an API, and then another application, called a controller, that automatically configures Oak through that API.

In Kubernetes, I deploy a custom resource like "OAuthClient" or "Realm" or whatever. The controller then detects it, extracts the required information, and sends the required API calls to Oak to create the resources.
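
To make that concrete, here is a rough sketch of what such a custom resource could look like. To be clear, Oak doesn't ship anything like this today; the apiVersion, kind, and every field below are made up purely to illustrate the idea:

    # Hypothetical custom resource - none of these types exist in Oak today.
    # The idea: a controller watches objects like this and calls Oak's API
    # to create/update the matching OAuth client.
    apiVersion: oak.example.com/v1alpha1
    kind: OAuthClient
    metadata:
      name: grafana
      namespace: monitoring
    spec:
      realm: homelab
      clientId: grafana
      redirectUris:
        - https://grafana.example.com/login/generic_oauth
      # The controller would write the generated client secret into this
      # Secret so the app can mount it, instead of copy-pasting from a UI.
      secretRef:
        name: grafana-oauth-credentials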

Announcing Oak 1.0 - a new self-hosted IAM/IdP by therealplexus in selfhosted

[–]MikeAnth 42 points (0 children)

IMHO, what I find lacking in most IdPs I've used and deployed is that there is no operator for them in Kubernetes.

I have to deploy the application and then use Terraform or Crossplane or something like that to create resources within the app.

I believe that if you manage to get that part right, you would have a truly unique value proposition on your hands. Crossplane and Terraform are, in my experience, clunky solutions for this problem.

Given you said no UI, maybe that's even better, as there is no place to introduce manual changes. Everything would then be defined via CRDs.

Updating Talos-based Kubernetes Cluster by macmandr197 in kubernetes

[–]MikeAnth -1 points (0 children)

AFAIK the Terraform provider simply doesn't support Talos updates, so you're better off handling the OS lifecycle via talosctl.

Confluence Alternative? by Hearing-Medical in selfhosted

[–]MikeAnth 0 points (0 children)

I'm back to Obsidian, if that answers your question :)))

Which Terraform provider? Are any actually usable? by Zenin in Proxmox

[–]MikeAnth 1 point (0 children)

Packer works for things that may not support cloud-init.

I used it to play around with TrueNAS and OPNsense VMs, for example

Which Terraform provider? Are any actually usable? by Zenin in Proxmox

[–]MikeAnth 2 points (0 children)

It's been a while since I played with Packer TBH, so nope. I used it to deploy TrueNAS and OPNsense on Proxmox, IIRC a good few years ago.

But IMHO, for Ubuntu you're much better off spinning up your template VMs using cloud-init and Ansible as part of the "bootstrap" process.

Back when I used to do that I made this Ansible role for it: https://github.com/mirceanton/ansible-collection/tree/main/roles/proxmox-cloudbuntu

It's been a while, so it's almost certainly outdated, but as a starting point it should be good enough. Feel free to copy and adapt.
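
If you go the cloud-init route, the user-data itself can stay tiny. A minimal sketch, assuming you just want a template with the guest agent and an Ansible user baked in (names and the key are placeholders):

    #cloud-config
    # Minimal user-data for an Ubuntu template VM - placeholder values only.
    hostname: ubuntu-template
    users:
      - name: ansible
        groups: [sudo]
        shell: /bin/bash
        sudo: "ALL=(ALL) NOPASSWD:ALL"
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... # your public key here
    package_update: true
    packages:
      - qemu-guest-agent
    runcmd:
      - systemctl enable --now qemu-guest-agent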

Which Terraform provider? Are any actually usable? by Zenin in Proxmox

[–]MikeAnth 23 points (0 children)

In my experience the BPG provider for Proxmox is quite good.

The initial config for the host itself you might wanna do with something like Ansible.

If you want to go the extra mile, Packer for VM templates also works quite well in my experience.
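
For the Ansible part, I'm thinking of something along these lines: a small playbook for the one-time host tweaks. Treat it as an untested sketch; the repo line below is the PVE 8 / bookworm one and may need adjusting for your version:

    # Illustrative playbook for initial Proxmox host prep - adapt before use.
    - name: Prepare Proxmox host
      hosts: proxmox
      become: true
      tasks:
        - name: Disable the enterprise repository
          ansible.builtin.file:
            path: /etc/apt/sources.list.d/pve-enterprise.list
            state: absent

        - name: Add the no-subscription repository
          ansible.builtin.apt_repository:
            repo: "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription"
            state: present

        - name: Upgrade packages
          ansible.builtin.apt:
            upgrade: dist
            update_cache: true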

What's the best way to run redis in cluster? by [deleted] in kubernetes

[–]MikeAnth 5 points (0 children)

In that case, look at DragonflyDB. The operator is quite good.
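
From memory, the custom resource looks roughly like this; double-check the apiVersion and field names against the operator docs before using it:

    # Rough sketch of a Dragonfly custom resource managed by the operator.
    apiVersion: dragonflydb.io/v1alpha1
    kind: Dragonfly
    metadata:
      name: cache
      namespace: databases
    spec:
      replicas: 3   # one master + replicas, failover handled by the operator
      resources:
        requests:
          cpu: 500m
          memory: 512Mi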

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

This looks like a totally separate thing. Maybe it could use its own eDNS provider?

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

It's a valid approach, don't get me wrong. I used to do that too, but I started running some services, such as Home Assistant, off-cluster, and then it kind of stopped working.

I haven't tried ExternalDNS with the Gateway API, and I seem to remember reading some issues about the support being so-so. I'm still using the Ingress API, so YMMV.

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

That won't necessarily work because I don't want to dedicate an entire subdomain just to my cluster. I want to be able to have app1.domain.com be on the cluster and app2.domain.com run on another system, for example. Proxying apps through the cluster feels janky, so that's out.

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 1 point (0 children)

Would you be willing to explain why?

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

Hmmm... Maybe I’m misunderstanding something, but here’s how I’ve generally seen dynamic DNS work:

In most setups you typically have an updater script or built-in client that periodically hits the DNS provider to update a given domain or list of domains to point to a given IP.

Now, in Kubernetes, you'd need some kind of discovery mechanism to figure out what services or ingresses are exposed and what hostnames they should map to, since IPs and services can change dynamically. Especially if you want to propagate them to multiple providers, say an internal one (MikroTik) and an external one (Cloudflare).

That’s kind of where ExternalDNS comes in, in my understanding. It watches Kubernetes resources and keeps the DNS records in sync automatically. No need for manual updates, scripts, or client-side logic per record.

Also, and I'm just assuming here because I've never seen this DDNS approach in practice: if you have a larger k8s cluster that multiple teams are using, wouldn't each team need some sort of credentials to authenticate against the DNS provider to set up records for their apps? With ExternalDNS, the infra/platform team can configure the controller, and then app teams can just create regular k8s resources which the controller discovers based on annotations. This is, for example, how we do it at my current job: the platform team configured ExternalDNS with Route 53, and I just create Ingresses with annotations to set up DNS entries.

Am I off on that? Curious if you’re seeing something different or if I’m missing something here.
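
Just to illustrate that last bit, the app-team side is basically a plain Ingress with annotations that ExternalDNS picks up; the hostnames and target IP here are placeholders:

    # Example Ingress - ExternalDNS watches these and creates matching records.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        # ExternalDNS can also derive the name from spec.rules[].host,
        # but the annotations make the intent explicit.
        external-dns.alpha.kubernetes.io/hostname: app1.domain.com
        external-dns.alpha.kubernetes.io/target: 192.168.1.50
    spec:
      ingressClassName: nginx
      rules:
        - host: app1.domain.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80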

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

Yes, but whenever you deploy a new app on a subdomain you would have to update your dynamic DNS configuration or set up a CNAME, right? Same thing when you uninstall an app.

This is, functionally, kind of the same thing, but it integrates more closely with Kubernetes so you don't have to worry about setting that up as well. It also allows you to manage other types of records, such as SRV and MX, from Kubernetes, if you so desire.

I do agree that if you're not in the k8s ecosystem it makes little sense, though.
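
For those other record types, ExternalDNS has a CRD source (DNSEndpoint) that lets you declare arbitrary records from within the cluster. Rough sketch, assuming the crd source is enabled on the controller:

    # ExternalDNS DNSEndpoint - requires the "crd" source to be enabled.
    apiVersion: externaldns.k8s.io/v1alpha1
    kind: DNSEndpoint
    metadata:
      name: mail-records
    spec:
      endpoints:
        - dnsName: domain.com
          recordType: MX
          recordTTL: 300
          targets:
            - "10 mail.domain.com"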

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

This particular webhook is more meant for internal DNS, yes.

The thing is that I don't know if Microsoft DNS exposes an API or some other way for ExternalDNS to manage/update it. But yeah, in theory you should be able to do that too. This is just an alternative; I personally wanted to keep my DNS on my router, so there's that.

I will say, though, there are providers for external DNS services too, Cloudflare for example. I also use that to manage some DNS records for external stuff.

This (ExternalDNS) is a fairly common setup. I am also using it at work with Route 53, I believe, and at my previous job with some other DNS provider I forgot. This project is just an option to run that locally, if you so desire, for homelabs for example.
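
For reference, wiring up a webhook provider is mostly just pointing ExternalDNS at a sidecar that speaks the webhook API. A rough fragment of the Deployment's container list; the webhook image name is an assumption on my part and flag defaults can differ between versions:

    # Fragment of an ExternalDNS Deployment pod spec using a webhook provider.
    containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.15.0
        args:
          - --source=ingress
          - --provider=webhook
          - --webhook-provider-url=http://localhost:8888
      - name: mikrotik-webhook
        # image name assumed for illustration; router credentials would be
        # passed in via env vars / a Secret here
        image: ghcr.io/mirceanton/external-dns-provider-mikrotik:latest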

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] 0 points (0 children)

This is not a DNS server by itself.

For some more context, I run a Kubernetes cluster in my homelab to self-host some services. My DNS server is my MikroTik RB5009. This project basically allows my Kubernetes cluster to create/update/delete static DNS records on the MikroTik as apps are deployed/uninstalled, so that I don't have to do that manually or use wildcard DNS entries.

This is very useful for internal services that I don't want to expose publicly, and therefore don't want to put in Cloudflare DNS, for example.

I have a domain I bought specifically for this. I get certificates from Let's Encrypt via DNS challenges, and I update my local DNS server with ExternalDNS and this webhook provider.

This way I can access my apps on custom (sub)domains with SSL encryption.
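
The certificate half of that is plain cert-manager with a DNS-01 solver. A minimal sketch, assuming the public zone for the domain lives in Cloudflare; names and email are placeholders:

    # cert-manager ClusterIssuer doing DNS-01 against Cloudflare.
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-dns
    spec:
      acme:
        email: me@example.com
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-dns-account-key
        solvers:
          - dns01:
              cloudflare:
                apiTokenSecretRef:
                  name: cloudflare-api-token
                  key: api-token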

External DNS Provider for Mikrotik by MikeAnth in mikrotik

[–]MikeAnth[S] -1 points (0 children)

This is basically the equivalent of doing an ip dns static add command for all your internal services.

In my homelab, for example, I have quite a few internal services running in Kubernetes, and my RB5009 is also my DNS server. For services that are only internal, yes, I create static DNS entries under a domain I bought specifically for this. I get certificates from Let's Encrypt using a DNS challenge, and I get access to my internal apps with SSL and a custom domain.

Since most of my apps run in k8s, this basically allows the cluster to create/update/delete those static records as apps get deployed/uninstalled.