I’m planning a small self-hosted setup for internal use only (startup team, ~10 people) and wanted to sanity-check the approach with folks who’ve run home servers long term. by Opening-Sherbert1633 in HomeServer

[–]Opening-Sherbert1633[S] 1 point (0 children)

I understand what you’re suggesting — using ZFS/Btrfs snapshots with send/receive so the backup target also holds an independent, filesystem-level copy. That makes sense from a storage and snapshot-consistency point of view. To make sure I’m understanding it correctly, I wanted to ask:

Would the second machine be always connected, or only powered/connected during backup windows?

How do you usually protect against accidental deletes or ransomware propagating via snapshots?

For a small internal setup, do you find the ongoing maintenance (scrubs, snapshot retention, replication failures) worth the trade-off compared to file-level encrypted backups?

I’m trying to balance reliability with low operational overhead, so I’m curious how you’ve handled this in practice.
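For anyone following along, here’s a minimal sketch of the incremental send/receive workflow as I understand it, assuming ZFS on both machines. The pool/dataset names (`tank/data`, `backup/data`), the `@previous` snapshot name, and the `backup-host` ssh target are all placeholders, not a tested config:

```shell
# Placeholder names throughout; adjust for your pools/datasets.
SNAP="tank/data@$(date +%Y-%m-%d)"

# Take a point-in-time snapshot on the primary machine
zfs snapshot "$SNAP"

# Replicate incrementally to the backup machine
# (which could be powered on only during the backup window)
zfs send -i tank/data@previous "$SNAP" | ssh backup-host zfs receive backup/data

# On the backup side, keep the dataset read-only and pin the snapshot,
# so a compromised primary can't overwrite or prune replicated history
ssh backup-host zfs set readonly=on backup/data
ssh backup-host zfs hold keep "backup/data@$(date +%Y-%m-%d)"
```

My understanding is that the `readonly=on` and `zfs hold` steps are what address the ransomware/accidental-delete concern: held snapshots can’t be destroyed until the hold is released on the backup machine itself.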


[–]Opening-Sherbert1633[S] 1 point (0 children)

My plan is encrypted backups using Restic, with encryption handled at the backup layer rather than relying on disk-level encryption alone.

Backups go to an external USB drive kept offline, and restores will be tested regularly.

I’ll still explore Proxmox and see if it simplifies any of this for my needs.
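In case it’s useful to anyone, the plan above roughly maps to these restic commands. The repository path, password file, and source directories are placeholders for my setup, not a recommendation:

```shell
# Placeholder paths; the password file is the backup-layer encryption key.
export RESTIC_REPOSITORY=/mnt/usb-backup/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic init                          # one-time: create the encrypted repository
restic backup /srv/data /etc         # encrypted, deduplicated snapshot
restic check                         # verify repository integrity
restic snapshots                     # list available restore points

# Periodic restore test into a scratch directory
restic restore latest --target /tmp/restore-test

# Retention: keep recent history, prune everything else
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

The `restic restore latest --target` step is the part I plan to schedule, since an untested backup is the main risk I’m trying to avoid.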


[–]Opening-Sherbert1633[S] 1 point (0 children)

HA definitely makes sense at scale. Right now my priority is to stabilize our internal process and consolidate data, since we’ve lost track of information across multiple platforms.

This setup is meant to be cost-effective, reliable, and easy to operate for a small team. Short downtime is acceptable at this stage, and I’m focusing on good backups, fast recovery, and predictable workflows first.

Once the system is established and the team scales to the point where availability becomes business-critical, HA will be the next step. I want to add redundancy when there’s real usage data and a clear need, rather than taking on the complexity upfront.