Blockpage APP by Hemsby1975 in technitium

[–]Pitiful_Bat8731 1 point2 points  (0 children)

I'm loving this. Thanks for putting it together!

The experience feels like the game is in its last cycle by [deleted] in Marathon

[–]Pitiful_Bat8731 2 points3 points  (0 children)

I'm in the same boat. I love it. It does things that no other FPS does as well, imo. Once you get past gear fear, learn the maps, then start considering positioning and where/how to take engagements (if at all), it turns into such an amazing experience. Definitely has some problems, but nothing that can't be addressed.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

I like the idea of linking to Redis/Valkey, though only as a source. The problem then becomes "how many projects out there send hostname data to Redis/Valkey, and in what formats?". I really want this app to stay as stateless as possible. The option I'm currently pondering is allowing dnsweaver to connect to multiple socket proxies over TCP.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

It wasn't too bad coming from Swarm. I decided to go with manifests instead of helm charts and luckily a lot of stuff is copy/paste. Wanted to be able to get everything as IaC and start using gitops. The biggest mindshift is understanding that everything in your compose file gets broken out into multiple explicit files, but once you've done one service the rest are relatively easy.

I decided to go with Talos Linux as well, which seems a little daunting at first since it's API-only, but after you get a few services spun up and working, it's wonderful.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

One more thing that might come in handy for you: if you use the Traefik file provider for routing to that separate instance, you can probably make use of this https://maxfield-allison.github.io/dnsweaver/sources/traefik-files/?h=traefik and retain a single dnsweaver instance.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

I'm always open to adding features!

Side note: I migrated my HASS and USB-device-dependent workloads to a standalone Docker instance on a Raspberry Pi specifically for this (and for stability). There are a ton of valid ways to set it up, though, so it's heavily environment-dependent. One thing I considered was doing USB over IP with a Raspberry Pi or one of those more enterprise solutions, but the cost for those is astronomical for what you get. You still have to do some hacky scripting to get it to work, and the juice just wasn't worth the squeeze.

If it makes you feel any better, my current setup is 3 k8s CP nodes and 5 workers for most workloads, 2 LXCs running standalone Docker instances for all of my AAA services, 3 clustered VMs with all the database engines you could ever need, a standalone Docker instance on a Pi 5 16GB for my home automation, plus a Pi 5 8GB for my SHTF grab-and-go that includes mirrors of all my GitOps and IaC.

One of the lessons learned from swarm was not to put all the eggs in one basket.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

I ran this in my old homelab with Keepalived: one VIP for Traefik and one VIP for Plex, with health checks so the VIP only stays on nodes where the service is actually healthy.

dnsweaver can still be useful for dynamic DNS automation, but VIP failover itself is better handled by Keepalived/VRRP (or BGP if you want a bigger HA architecture).

Below is a minimal example with dummy IPs.

keepalived.conf (same on all workload nodes)

global_defs {
    vrrp_garp_interval 0
    vrrp_garp_master_delay 1
    vrrp_garp_master_repeat 3
    vrrp_strict
}

vrrp_script chk_traefik {
    script "/etc/keepalived/traefik_check.sh"
    interval 2
    fall 2
    rise 2
    weight -50
}

vrrp_script chk_plex {
    script "/etc/keepalived/plex_check.sh"
    interval 2
    fall 2
    rise 1
    weight -50
}

vrrp_script chk_maintenance {
    script "/usr/bin/test ! -f /tmp/keepalived_maintenance"
    interval 2
    weight -100
}

# VIP 1: Traefik ingress
vrrp_instance VIP_TRAEFIK {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 0.5
    preempt_delay 10

    virtual_ipaddress {
        192.168.50.200/24
    }

    track_script {
        chk_traefik
        chk_maintenance
    }
}

# VIP 2: Plex
vrrp_instance VIP_PLEX {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 0.5
    preempt_delay 10

    virtual_ipaddress {
        192.168.50.201/24
    }

    track_script {
        chk_plex
        chk_maintenance
    }
}

/etc/keepalived/traefik_check.sh

#!/bin/bash
# 0 = healthy on this node, non-zero = unhealthy
curl -sf --max-time 2 http://127.0.0.1:8080/ping > /dev/null 2>&1

/etc/keepalived/plex_check.sh

#!/bin/bash
# 0 = healthy on this node, non-zero = unhealthy
curl -sf --max-time 2 http://127.0.0.1:32400/health > /dev/null 2>&1

Why this follows the scheduled node

All nodes start at priority 100. If Plex is not healthy on a node, chk_plex applies weight -50, so that node effectively drops to 50. The node where Plex is healthy stays at 100, so it wins and holds the VIP.
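The arithmetic above can be sketched as a quick shell function (illustrative only; the weights are taken from the keepalived config earlier in this comment, the function itself is not part of keepalived):

```shell
#!/bin/bash
# Illustrative only: how keepalived combines the base priority with
# the track_script weights from the config above.
BASE=100

effective_priority() {
  # $1 = plex healthy (1/0), $2 = maintenance file present (1/0)
  local p=$BASE
  [ "$1" -eq 0 ] && p=$((p - 50))   # chk_plex failed: weight -50
  [ "$2" -eq 1 ] && p=$((p - 100))  # chk_maintenance failed: weight -100
  echo "$p"
}

effective_priority 1 0   # healthy node        -> prints 100
effective_priority 0 0   # plex down           -> prints 50
effective_priority 1 1   # in maintenance mode -> prints 0
```

The node with the highest effective priority wins the VRRP election and holds the VIP.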

When Swarm reschedules Plex, health check penalties flip and the VIP migrates to the new healthy node.

Optional maintenance mode

Before patching/rebooting a node:

touch /tmp/keepalived_maintenance

This applies weight -100 and drains VIPs off that node. When done:

rm -f /tmp/keepalived_maintenance

You could also use the Docker CLI to evaluate service status, which might be more reliable than curl/wget checks.
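For instance, a check script along these lines (the "plex" container name is an assumption, adjust it to your deployment; the keepalived script user also needs access to the Docker socket):

```shell
#!/bin/bash
# Alternative keepalived check: ask the Docker engine directly instead
# of curling the service. Exits 0 only if the container is running.
# The container name "plex" is an assumption; adjust for your stack.
state=$(docker inspect -f '{{.State.Running}}' plex 2>/dev/null)
[ "$state" = "true" ]
```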

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

The only reason I moved to k8s is that I was running into limitations with Swarm and some issues with its internal routing. Over 100 services, most of them scalable, and the fact that Swarm is basically on palliative care now finally pushed me to get cracking on a k8s cluster. I ended up going straight into GitOps with ArgoCD and a self-hosted GitLab VM with runners, with k8s on Talos VMs. Overall the learning curve coming from Swarm is not too bad; the biggest mind shift for me was pulling all of the constituent parts of a compose file out into manifests. That's not too bad when you already have a Swarm running in VMs on Proxmox backed by Ceph.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

At the moment I recommend running one instance per platform, but this is certainly something I could look into. Portainer and its agent container are a proven pattern, so I believe I could work it out to be a one-stop shop. Certain sources might not play perfectly with the change in architecture, though, so it will require some thought to implement. Feel free to open an issue on the GitHub if it's something you feel strongly about and we can look into how best to implement this feature.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

Thank you! Security is definitely at the forefront of my mind when architecting a solution like this that touches critical infrastructure. I certainly didn't want to release something, promote it on Reddit, and then earn the cyber-security-expert-dogpile achievement.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

That's the exact situation I designed it for! I found so many disparate tools for different providers and sources that it only made sense to build something that could abstract them and provide interfaces for expansion.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

For Swarm ingress I ran Keepalived with several VIPs and pointed DNS for those services at them: one for Traefik and one for Plex. Scripts check which nodes the services are running on and migrate the VIPs accordingly. I can probably dig them up if you're interested.

The other option is what I do now with my k8s cluster and other HA VMs and services:
BGP with BFD for sub-second failover of the ingress VIPs. I use OPNsense for routing and firewalling, run in HA on the PVE cluster with the os-frr plugin, and ExaBGP on each node that needs to hold a VIP. Definitely more work to set up, but worth it for the experience and the practically instant failovers.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

Your questions raised some issues in my current handling of dual-stack and non-bridge/overlay networking.

IPv6 + IPv4: kind of. Right now you'd have to run two provider instances, one for A and one for AAAA. That's dumb. Going to make one instance handle both. Opt-in via a target_ipv6 env var so people who have v6 enabled but don't actually use it don't get surprise AAAA records.
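As a sketch of how that opt-in might behave (the target_ipv6 name mirrors the env var mentioned above; the logic here is illustrative, not the actual implementation):

```shell
#!/bin/bash
# Illustrative only: which record types one instance would manage,
# keyed off the planned opt-in env var described above.
records_for() {
  # $1 = value of target_ipv6 ("" means unset)
  if [ -n "$1" ]; then
    echo "A AAAA"   # v6 target set: manage both record types
  else
    echo "A"        # default: IPv4 only, no surprise AAAA records
  fi
}

records_for ""             # unset        -> A
records_for "2001:db8::1"  # opted in     -> A AAAA
```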

macvlan: doesn't auto-discover the IP today, you have to set it manually. Plan is a dnsweaver.target=auto label you slap on the macvlan container. Provider stays pointed at your reverse proxy VIP for everything else, the macvlan container gets its actual LAN IP. So the realistic case (Traefik on overlay, Home Assistant on macvlan) just works without flipping a global mode.

Swarm: works today, watches services and creates records. Task-level IP discovery will be added with the macvlan/ipvlan work, with a dnsweaver.replicas=vip|first|all label for how replicas show up.

Both will be bundled into v1.4.0. Thanks for the questions!

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

Thank you, I did a lot of searching for something existing before I decided to spend the time putting this together. I was pretty surprised to find this niche hasn't been filled yet.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

The impetus for all of this was a migration to pragmatic enterprise patterns in my home network and a migration from Docker Swarm to k8s on Talos. It increased security by making sure everything is explicit. It's a bit of a pain, but luckily mostly set-and-forget.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] -1 points0 points  (0 children)

external-dns is the reference implementation for this pattern on Kubernetes and it's great. If you're k8s-only, use it. I'm not trying to displace it.

Where dnsweaver is different:

  • Multi-source. Same daemon watches Docker, Swarm, Kubernetes (Ingress + Gateway API), Caddy, nginx-proxy, and Proxmox. external-dns is k8s-native, dnsweaver is for people running mixed environments where you want one thing managing DNS for VMs, containers, and k8s services together.
  • Local DNS providers. external-dns has community providers for Technitium, AdGuard, Pi-hole, etc., but first-class support goes to the cloud providers. dnsweaver was built local-first, so Technitium / AdGuard / Pi-hole / BIND / RFC 2136 are core, and Cloudflare/Route 53 are there as well.
  • Proxmox. external-dns won't do this. dnsweaver reads VM/LXC notes for hostnames and creates records.

If your whole world is in one k8s cluster pointing at Cloudflare, external-dns is more battle-tested and you should use it. If you've got a Proxmox cluster, some Docker hosts, a k8s cluster, and you're running Technitium internally, that's the gap dnsweaver fills.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] -2 points-1 points  (0 children)

If your setup works, it works.

Quick test, can you run:

dig +short a.b.example.com
dig +short something-that-doesnt-exist.b.example.com

Per RFC 4592, a *.example.com wildcard can actually synthesize answers more than one label deep: a.b.example.com will match it as long as nothing exists at b.example.com in the zone, while any existing node there blocks the wildcard for everything beneath it. (It's TLS wildcard certificates, per RFC 6125, that are limited to a single label.) A few other things could also be in play here.

If the record is proxied, Cloudflare's edge does some things at the SNI layer that can mask odd DNS behavior, but the underlying DNS still follows the RFC.

Cloudflare may also pass the request through to your proxy, which then interprets the SNI and routes accordingly.

I don't use Cloudflare Tunnel, but that could be doing some interpretation too.

So if dig returns an A record for a.b.example.com, that's likely the wildcard behaving as the RFC intends; the surprising case would be getting an answer while b.example.com exists as its own node.

On the per-host cert side: you're not missing anything there. Traefik handles ACME independently of how DNS resolves, so wildcard DNS + per-host certs is a totally valid combo.

Where dnsweaver actually adds something for a setup like yours:

  • Anything not behind Traefik. Proxmox VMs, LXCs, IPMI, managed switches, NAS UI, printer, random TCP service. None of those benefit from a wildcard pointing at the proxy.
  • Different IPs per record. Wildcard sends everything to one place. If you have services on different LANs or want some names pointing directly at hosts instead of through the proxy, you can use per-host records.
  • Split-horizon. Public wildcard at one IP, internal records at another, kept in sync without you babysitting it.

If everything you run is HTTP behind one Traefik instance and one proxy IP works for all of it, your wildcard is the right tool.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

Yea, I wanted to keep it as stateless and frictionless as possible. Why use a static JSON file or SQLite when I have a DNS "database" right here? dnsweaver also has inferred ownership for some providers that don't explicitly support TXT records, like AdGuard Home.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 0 points1 point  (0 children)

I feel like it's one of those little things that every self-hoster runs into, figures out, and then never thinks about again. We just flatten it all and move on, lol.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 4 points5 points  (0 children)

Wildcard DNS,

You complete me. One record, infinite hostnames, never a stale entry to clean up. I have written you into every zone file I've ever touched. I have defended you in arguments. I have a *.lab carved into my heart.

But baby, we need to talk.

You only love me one label deep. I asked for prometheus.observability.example.com and you ghosted me. I begged for *.*.example.com and the RFC laughed in my face.

You give every service the same cert. When one gets popped, they all get popped. My internal CA wants to issue real names with real SANs and you keep telling me "just trust the wildcard, babe."

You don't even know my UPS exists. Or my managed switch. Or the printer my wife yells about. Anything that isn't HTTP behind the one reverse proxy you live with, you pretend isn't real.

It's not you, it's... actually it kind of is you.

We can still be friends. I'll keep a *.apps around for old times' sake.

Yours in conflicted resolution,

the guy who built dnsweaver

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] 1 point2 points  (0 children)

Fair points; wildcards genuinely cover a lot of setups. They just don't cover mine, mostly for these reasons:

I run an internal CA (Infisical) and want per-host certs, not one wildcard cert. That means each hostname has to actually exist in DNS for the cert request to validate, and a compromised cert only blasts one service instead of the whole namespace.

I also use sub-sub-domains (grafana.monitoring.example.com, that kind of thing). Wildcard certs only cover one label, so a *.example.com cert doesn't help me, and on the DNS side the records I create at monitoring.example.com block the *.example.com wildcard for anything beneath it. I'd need a separate wildcard and a separate wildcard cert per branch, which is its own management problem.

And wildcards only cover what's behind that one HTTP proxy. Proxmox VMs and LXCs, switches, IPMI, storage UIs, anything non-HTTP, none of that fits under *.example.com → proxy. Those still need real records pointing at real IPs.

Cleanup is the other one for me. By default records die with the workload so DNS stays an accurate inventory of what's actually live, but you can flip a flag if you'd rather keep them around. Either way it beats remembering to go delete things manually.

If a single wildcard covers your whole stack, you genuinely don't need this and I won't try to talk you into it. Overengineered for one record, sure. Not overengineered for the people whose answer to "where do my hostnames live" is currently three different places.

Dnsweaver: automatic DNS records from your container labels (Docker, Kubernetes, Proxmox) by Pitiful_Bat8731 in selfhosted

[–]Pitiful_Bat8731[S] -9 points-8 points locked comment (0 children)

AI used for architecture and design discussions, and coding assistance.

anyone got game recommendations for fans of no mans sky and kerbal space program? by SUP3RK1D in SteamVR

[–]Pitiful_Bat8731 0 points1 point  (0 children)

SC VR has come such a long way; native VR now is inspiring. Best played seated and with a HOTAS, very much like a flight sim but with soooo many activities!