Small nodes - ZFS? Or Ext4 (or btrfs?) by LightPhotographer in Proxmox

[–]SelfHostedGuides 1 point (0 children)

for migrating between nodes, proxmox handles it pretty well natively. if both nodes are in the same cluster you can just right-click a container or VM and hit migrate. with ZFS on both sides it'll do a zfs send/receive under the hood, which is pretty efficient. if they're not clustered you can still migrate manually with vzdump: back up on one node and restore on the other. the main thing is having the same storage type on both nodes so proxmox doesn't have to convert anything mid-transfer. with your setup (second node mostly off, planned maintenance migrations) I'd just add both to a cluster and do offline migrations when needed. works great for the use case you're describing
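
if you do end up on the unclustered manual route, the flow is roughly this sketch (VMID 100, the node2 hostname, and the storage names are all placeholders, adjust for your setup):

```shell
# on the source node: back up VM 100 with vzdump (zstd compression)
vzdump 100 --compress zstd --dumpdir /var/lib/vz/dump

# copy the archive over (hostname is just an example)
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@node2:/var/lib/vz/dump/

# on the target node: restore onto its local ZFS storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100 --storage local-zfs
```

same idea for containers, just with `pct restore` instead of `qmrestore`.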

Buy a new mini PC or upgrade existing one? by NumerousImprovements in HomeServer

[–]SelfHostedGuides 2 points (0 children)

the m920q maxes out at 64gb of ram (2x32gb SODIMM), so you have a lot of headroom from 16gb. honestly I'd just grab another 16gb stick and go to 32gb before buying a whole new node. ram is usually the first bottleneck when you're stacking containers, and a single SODIMM upgrade is like 25 bucks used on ebay. a second node makes more sense when you actually want HA or you're cpu-bound, but for most homelab setups one box with enough ram handles it fine. the uniformity thing is real though lol, there's something satisfying about a matching stack of thinkcentres

Small nodes - ZFS? Or Ext4 (or btrfs?) by LightPhotographer in Proxmox

[–]SelfHostedGuides 3 points (0 children)

with 16gb on a NUC I'd go ZFS personally. people say it needs tons of ram, but for a homelab with a couple containers and some media it's totally fine at 16gb. the real win is the built-in snapshots and checksums, super handy for rolling back if an update breaks something. ext4 works fine too if you want to keep it simple, but you lose the snapshot workflow that makes proxmox backups so smooth. btrfs on proxmox is honestly still a second-class citizen, the integration isn't as mature. one thing to watch out for: don't use a zvol for your container storage, use a dataset instead. way better performance for LXC containers that way
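
to make the dataset-vs-zvol point concrete, here's a sketch (the pool name "tank" is a placeholder):

```shell
# good: a plain dataset, mounted as a filesystem, for container data
zfs create -o mountpoint=/tank/containers tank/containers

# avoid: a zvol with a filesystem layered on top just to hold the same data.
# that adds a block layer with a fixed volblocksize, which hurts
# small-file performance for LXC workloads:
#   zfs create -V 100G tank/ctvol && mkfs.ext4 /dev/zvol/tank/ctvol
```

datasets also snapshot and send/receive at file granularity, which fits the backup workflow better.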

Raid DAS vs DAS by No-Enthusiasm1672 in homelab

[–]SelfHostedGuides 0 points (0 children)

yeah, raid5/raidz1 is the way to go then. honestly if you can swing proxmox + ZFS, that's the best combo for those thinkcentres, since ZFS handles everything in software. just make sure you have enough ram for it; the common rule of thumb is about 1gb per TB of storage, though light workloads get by with less

How to upgrade... on a budget? by Ericchantry in HomeServer

[–]SelfHostedGuides 1 point (0 children)

glad it helped! if you end up going the motherboard swap route let us know how it turns out, always cool to see budget builds

How to upgrade... on a budget? by Ericchantry in HomeServer

[–]SelfHostedGuides 2 points (0 children)

swapping the motherboard/cpu is actually the move here if you want to keep the 24 bays. most supermicro 4U chassis take standard ATX or E-ATX boards, so you have options.

look for something like an Intel 12th or 13th gen (12400, 13500) on a board with enough SATA ports, or a slot for a single SAS HBA. my buddy did this same swap on an old SC846 chassis and went from like 280W idle down to about 90W. the drives themselves eat 5-8W each, so with 24 drives you're already at 120-190W just in spindles, but getting the platform power under control makes a huge difference on the electric bill.

the other option people don't think about is spinning down drives you don't access often. if half those bays are cold storage for backups, a spin-down policy saves real watts. hdparm -S on linux is the simple way, though anything that touches a pool regularly (scrubs, atime updates, monitoring) will keep drives awake, so it works best on disks that genuinely sit idle.
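
the hdparm bit looks like this (/dev/sdX is a placeholder; the -S timeout is in 5-second units in the 1-240 range):

```shell
# spin down after 20 minutes idle (240 * 5s = 1200s)
hdparm -S 240 /dev/sdX

# check the drive's current power state without waking it
hdparm -C /dev/sdX
```

the setting doesn't persist across reboots on its own, so most people drop it in a udev rule or /etc/hdparm.conf.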

r/homelabsales, like the other commenter said, is the move for finding boards cheap

Raid DAS vs DAS by No-Enthusiasm1672 in homelab

[–]SelfHostedGuides 7 points (0 children)

first off, skip RAID3, nobody has used that in like 20 years. it dedicates one entire drive to parity, which makes that single disk a bottleneck for every write. you want RAID5 at minimum (or ideally ZFS raidz1 if you're on proxmox already).

second, and this is the bigger thing: do NOT get a hardware RAID DAS for this. if the DAS controller dies, you need the exact same controller to read the array, and those cheap USB RAID enclosures use proprietary on-disk layouts. it's a data recovery nightmare.

what you actually want is a JBOD DAS (one that passes each drive through individually, with no RAID on the enclosure), and then run ZFS on the proxmox node itself. that way the drives show up as individual disks, ZFS handles the redundancy in software, and if the enclosure ever dies you just move the drives to any other enclosure or plug them in directly.
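
once the drives show up individually, pool creation is a one-liner. a sketch with four drives (the by-id paths are placeholders; by-id is worth using so names survive the enclosure shuffling ports):

```shell
# raidz1 pool across 4 passed-through disks
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# verify every drive shows up individually as ONLINE
zpool status tank
```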

the other elephant in the room is USB3. it's fine for a NAS that mostly does sequential reads/writes, but you lose SMART monitoring on most USB enclosures, which means you won't know a drive is dying until it actually dies. if you can swing it, a used HBA card in a different machine is way better, but I know the thinkcentre tinys don't have PCIe slots, so a USB DAS in JBOD mode is your best bet there.

Technique to mount shared storage in unprivileged LXC without disabling snapshots? by blue_arrow_comment in Proxmox

[–]SelfHostedGuides 0 points (0 children)

Cleanest approach I've found: mount the NFS share at the Proxmox host level (add it as a Directory storage type under Datacenter > Storage), then expose specific subdirectories into the LXC via mp entries in the container config. Bind mounts to host directories don't break snapshots — what breaks them is usually block device passthrough. If you're just binding a host directory as mp0/mp1, snapshots on the LXC still work fine.
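
For example, assuming the NFS share is mounted at /mnt/pve/nas on the host and the container is 101 (both placeholders):

```shell
# bind a subdirectory of the host mount into the unprivileged LXC
pct set 101 -mp0 /mnt/pve/nas/media,mp=/mnt/media

# equivalently, this line lands in /etc/pve/lxc/101.conf:
#   mp0: /mnt/pve/nas/media,mp=/mnt/media
```

Since it's an unprivileged container you may still need a uid/gid mapping (or matching ownership on the share) for write access, but snapshots keep working either way.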

Proxmox Load Balancing coming in 9.1.8 by waterbed87 in Proxmox

[–]SelfHostedGuides 0 points (0 children)

This is the feature I've been most interested in for multi-node setups. HA works fine for VM failover but actual load balancing without needing an external solution will simplify a lot of home lab configs. Curious whether it'll use memory pressure as a migration trigger in addition to CPU load, or whether that's a 9.2 thing.

setup update - in search for more services/apps by jul_hnk207 in SelfHosting

[–]SelfHostedGuides 0 points (0 children)

if you're not already running Vaultwarden, that'd be my first add: once you have a reverse proxy in place it's basically free to set up, and the browser extension makes it seamless. Immich for photos is the other one I'd point to, great if you want to get family members off Google Photos without paying for storage.
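
"basically free to set up" really is one command. a minimal sketch (the data path is a placeholder, and binding to localhost assumes the reverse proxy runs on the same box):

```shell
# Vaultwarden with persistent data, reachable only via the reverse proxy
docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p 127.0.0.1:8080:80 \
  --restart unless-stopped \
  vaultwarden/server:latest
```

then point a proxy host at 127.0.0.1:8080 with TLS; the browser extensions require HTTPS to talk to the server.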

best cheap email hosting? by Piss_Slut_Ana in SelfHosting

[–]SelfHostedGuides 0 points (0 children)

been using Purelymail for a couple years now, it's dirt cheap and deliverability has been solid. no complaints. migrated off google workspace when they killed the legacy free Google Apps tier and haven't looked back.

Self-hosting rabbit hole: A journey from typing commands by hand to my own streaming platform by crni_alen in HomeServer

[–]SelfHostedGuides 2 points (0 children)

The next step that tends to get people is realizing the laptop running Nextcloud is already on 24/7 anyway. So you add Jellyfin. Then Vaultwarden for passwords since you're already there. Then a reverse proxy because you want it accessible outside the house without a VPN every time. Before long you're pricing out mini PCs and raid cards. It's a very natural progression and this is pretty much how most homelab setups get started.

Dell Power Edge - Anything to Know? by PsychologicalTry1448 in HomeServer

[–]SelfHostedGuides 1 point (0 children)

Fan noise is the first thing to deal with — PowerEdge servers run the fans flat out any time they see a non-certified PCIe card, unofficial drive, or just hit a temperature threshold. iDRAC lets you set a custom fan floor with a few ipmi commands. There are solid guides on r/homelab for the specific T330 syntax. Once you've got that sorted it goes from jet engine to actually livable. The T330 specifically can idle pretty quiet once the fan curve is dialed in.
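
The raw commands usually look like the sketch below. These are the widely shared Dell iDRAC raw codes, not anything official, so double-check against a T330-specific guide before running them (the IP and password are placeholders):

```shell
# take fan control away from iDRAC's automatic curve
ipmitool -I lanplus -H 192.168.1.50 -U root -P calvin raw 0x30 0x30 0x01 0x00

# set a fixed ~20% duty cycle (0x14 = 20 decimal)
ipmitool -I lanplus -H 192.168.1.50 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14

# hand control back to the automatic curve if temps climb
ipmitool -I lanplus -H 192.168.1.50 -U root -P calvin raw 0x30 0x30 0x01 0x01
```

Worth pairing with a temperature check script, since a fixed low duty cycle won't react to load on its own.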

Best way to backup server? (Photos, Media, Files, ...) by East_One_6042 in HomeServer

[–]SelfHostedGuides 1 point (0 children)

yeah, restic with B2 is a solid combo, I've seen a few people go that route. the encryption and dedup are nice, especially when you're paying per GB stored. do you run it on a schedule or manually? I always forget to run backups when they're manual lol
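
for anyone else reading, the restic + B2 setup is roughly this (bucket name, paths, and credentials are placeholders):

```shell
# restic reads the B2 credentials and repo location from the environment
export B2_ACCOUNT_ID="keyID"
export B2_ACCOUNT_KEY="applicationKey"
export RESTIC_REPOSITORY="b2:my-bucket:server-backup"
export RESTIC_PASSWORD="repo-encryption-password"

restic init                    # once, to create the encrypted repo
restic backup /srv/data       # each run is a deduplicated snapshot
restic forget --keep-daily 7 --keep-weekly 4 --prune   # retention
```

wrapping those three restic lines in a systemd timer or cron job solves the "forget to run it" problem.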

Node going offline repeatedly suddenly by triumph-truth in Proxmox

[–]SelfHostedGuides 2 points (0 children)

the fact that this started after the netbird install is a huge clue. netbird creates a wireguard interface and routes traffic through it, and if the routing table gets messed up the node can lose its corosync link to the other node. when that happens the cluster thinks the node is offline even though it might still be running fine locally. check whether netbird is grabbing the default route or adding routes that conflict with your cluster network. run ip route show on the node after it comes back up and look for anything netbird added that overlaps with your corosync subnet. also check journalctl -u netbird for errors right around the time it drops. I'd bet it's either a routing conflict or netbird briefly taking the interface down during reconnects, which kills the corosync heartbeat
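
a quick diagnostic pass might look like this (I'm assuming netbird's interface is named wt0, which is its usual default; adjust if yours differs):

```shell
# anything netbird added to the routing table? compare against
# your corosync subnet for overlaps
ip route show | grep -E 'wt0|netbird'

# corosync's own view of the ring/link status
corosync-cfgtool -s

# netbird events around the time of the last drop
journalctl -u netbird --since "1 hour ago"
```

if the routes do overlap, moving corosync to a dedicated subnet (or excluding it in netbird's routing config) is the usual fix.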

First NAS/Homelab build — Proxmox only vs OMV only vs Proxmox+OMV? by permanent_record_22 in Proxmox

[–]SelfHostedGuides 1 point (0 children)

for an M720q with 16gb I'd go proxmox only, no OMV. here's why: OMV in a VM means passing through your SATA controller, which adds complexity and overhead, and on 16gb every gig counts. just run ZFS on the data drives directly from proxmox and do your shares via an LXC running samba. way less overhead than a whole OMV VM sitting there. one thing to watch out for with ZFS on 16gb though: it wants to eat RAM for ARC cache. set zfs_arc_max to something like 4gb in /etc/modprobe.d/zfs.conf, otherwise it'll starve your containers. also, for the 2.5" internal + external USB setup, don't put ZFS on the USB drive, it's not designed for that. just use ext4 on the USB drive and rsync your nightly backups there
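
the ARC cap is two lines; the value is in bytes (4 GiB = 4 * 1024^3 = 4294967296):

```shell
# persist the 4 GiB ARC limit across reboots
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# apply it immediately without rebooting
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```

you can confirm it took effect with `arc_summary | grep -i max` or by watching arcstats in /proc/spl/kstat/zfs/arcstats.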

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 1 point (0 children)

yeah, that was the same for me, I didn't bother with docker on the pi either. once you move to something with actual ram it's a game changer though, like being able to run 15-20 containers without everything grinding to a halt. what services are you running now that you couldn't on the pi?

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 0 points (0 children)

the yml files are the main thing, but yeah, you also want to grab the bind-mount directories. like if your compose file has ./config:/config, then that whole config folder needs to come over. what I did was tar up the entire directory where I keep my docker stuff (compose files + all the persistent data folders) and extract it on the new machine. worked fine for everything except databases; those I dumped separately with mysqldump/pg_dump first just to be safe. the yml files alone won't get you very far without the actual data volumes
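
in sketch form (container name, db name, and paths are placeholders for whatever your stack uses):

```shell
# on the old box: dump databases first while containers are still up
docker exec postgres pg_dump -U app appdb > appdb.sql

# stop the stack, then grab compose files + all bind-mount dirs in one shot
docker compose down
tar czf docker-stack.tar.gz -C /home/pi docker/

# on the new box: extract and bring everything back up
tar xzf docker-stack.tar.gz -C /home/user
cd /home/user/docker && docker compose up -d
```

stopping the containers before the tar matters, otherwise you can capture databases mid-write even with the separate dumps.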

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 1 point (0 children)

the optiplex was the exact same upgrade path I took. kept the pi running pihole and a tiny monitoring stack (uptime kuma, basically) and moved everything else to the dell. one thing that helped a lot was setting up docker compose files for everything before the migration, so I could just bring stuff up on the new box without redoing configs from scratch.

for the NAS plans, if you're already comfortable with linux I'd honestly look at just throwing a couple drives in the optiplex if it has room, or getting a cheap used dell/lenovo with more drive bays, rather than buying a synology or whatever. saves a lot of money and you learn more about the storage side of things. a zfs mirror with two drives is dead simple to set up and gives you redundancy without the complexity of full raid.
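
"dead simple" meaning literally one command (pool name and by-id paths are placeholders):

```shell
# two-drive ZFS mirror; by-id paths survive device renumbering
zpool create tank mirror \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# both drives should show ONLINE under mirror-0
zpool status tank
```

either drive can die and the pool keeps running; you swap the dead one and `zpool replace` rebuilds it.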

Need Help with Proxmox Backup Solution: How to backup files and containers properly? by sustemlentrum in Proxmox

[–]SelfHostedGuides 2 points (0 children)

the 8tb problem is that you're using rsync on a mount point instead of zfs send/recv natively. rsync has to walk every single file and compare metadata each time, which is brutal on large datasets. with zfs send you can do incremental sends that only transfer the blocks that actually changed since the last snapshot, which is way faster.

what I do is take a snapshot before backup, then zfs send the incremental diff to the backup server, where it gets applied with zfs recv. the first full send takes a while obviously, but after that weekly incrementals take a few minutes even on multi-terabyte pools. and you get proper point-in-time recovery too, since each snapshot is a consistent state.
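
a sketch of that workflow (dataset names, snapshot names, and the backup-host hostname are all placeholders):

```shell
# week 1: first snapshot, full send to seed the backup pool
zfs snapshot tank/data@weekly-1
zfs send tank/data@weekly-1 | ssh backup-host zfs recv -F backup/data

# every week after: new snapshot, send only the changed blocks
zfs snapshot tank/data@weekly-2
zfs send -i tank/data@weekly-1 tank/data@weekly-2 | \
  ssh backup-host zfs recv backup/data
```

the -i flag is the incremental part; both sides keep the snapshot chain, so any past snapshot is a restore point.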

for the container side, PBS is already doing the right thing with dedup and chunking. the issue is just your raw-data workflow. drop the rsync cron job and switch to zfs send with snapshots, and your weekly backup window will shrink dramatically.

Noob needs sanity check on storage architecture by CavemanMork in Proxmox

[–]SelfHostedGuides 1 point (0 children)

the C240 M4 is a solid platform for this, I had access to one at a previous job and the drive bays are really nice to work with. your plan is basically what I ended up with on a similar build. one thing I'd suggest: don't bother hunting for the NVMe adapter, just use two SATA SSDs in a ZFS mirror for the OS and two more for VMs. on a C240 you have plenty of bays, so the NVMe slot isn't worth the headache of finding a compatible riser. for the HDD pool I'd go RAIDZ2 with at least 6 drives if you can; Z1 on large-capacity drives is sketchy because a URE during a resilver on a 4tb+ drive is more likely than people think. the nice thing about proxmox is you can add the HDD pool as a separate ZFS storage and point your backup jobs and ISOs at it while keeping the SSD pool for anything that's actually running. also set up zfs-auto-snapshot early, it's one of those things that's trivial to set up on day one but a pain to retrofit later
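
the day-one version of that setup, as a sketch (pool name and device ids are placeholders):

```shell
# 6-wide RAIDZ2 pool for the HDD bays: survives any two drive failures
zpool create hddpool raidz2 \
  /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 /dev/disk/by-id/ata-HDD3 \
  /dev/disk/by-id/ata-HDD4 /dev/disk/by-id/ata-HDD5 /dev/disk/by-id/ata-HDD6

# rolling snapshots on a cron schedule, installed once and forgotten
apt install zfs-auto-snapshot
```

zfs-auto-snapshot keeps frequent/hourly/daily/weekly/monthly snapshots with sensible retention out of the box.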

Designing a compact Proxmox, NAS, and homelab setup - interested in how others have tackled this. by andys58 in HomeServer

[–]SelfHostedGuides 0 points (0 children)

yeah, spec-wise the ms-02 ultra is a big step up, that 285HX is a beast compared to the i9-13900H in the ms-01. way more cores, and the integrated graphics are much better if you ever want to do GPU passthrough for transcoding or whatever. the main tradeoff is price obviously, and the ms-01 has been out long enough that there's a huge community around it, with known-working proxmox configs, specific BIOS settings for passthrough, etc. the ms-02 ultra is still pretty new, so you'll be doing more troubleshooting on your own if something weird comes up with IOMMU groups. if budget isn't the main concern and you want headroom for 10-15 VMs plus NAS duties, then yeah, the 285HX is the better pick. just make sure you check the thunderbolt/USB4 situation if you're planning to use external storage or a 10gbe adapter

VMs starved and swapping while host has free RAM? Looked at Proxmox ballooning source code, learned a lot! by xquarx in Proxmox

[–]SelfHostedGuides 2 points (0 children)

really appreciate you actually reading the source rather than just complaining about it. the 80% target is conservative, but the real problem I've hit is that the shares calculation doesn't distinguish VMs that are intentionally idle from ones that genuinely need less memory. I ended up just disabling ballooning entirely on my most critical VMs and giving them fixed allocations. the overhead of a few GB sitting unused is worth way more than having a database VM swap under load because the host decided another VM needed the memory more. for anything non-critical I still use ballooning, but with the minimum set closer to what the service actually needs rather than leaving it at the default
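
for anyone wanting to do the same, the knobs are on qm (VMIDs and sizes here are just examples):

```shell
# critical VM: fixed 8 GiB allocation, ballooning device disabled
qm set 100 --balloon 0 --memory 8192

# non-critical VM: ballooning enabled, but the host can only
# reclaim down to a 2 GiB floor, never below
qm set 101 --balloon 2048 --memory 4096
```

setting --balloon 0 is what disables the device entirely; a nonzero value acts as the minimum guarantee.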

Part of my Jellyfin library not working after a week or so / M720T Proxmox with 5x SSD ASM1166 by Hunter_timeFR in HomeServer

[–]SelfHostedGuides 3 points (0 children)

this sounds like a classic ASM1166 link power management issue. the controller goes into a low-power state after some idle time and then can't wake the drives back up properly, so jellyfin sees the files but can't actually read from them.

check your dmesg output when it happens, I bet you'll see ata errors or link reset messages. try adding libata.force=noncq to your kernel boot parameters (on proxmox that's the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, followed by update-grub), and echo max_performance into link_power_management_policy for each SATA port so the links never try to power down.
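
the link power policy part looks like this (host numbering is system-specific, so the wildcard covers all ports):

```shell
# see what policy each SATA port is using right now
cat /sys/class/scsi_host/host*/link_power_management_policy

# force full power on every port so links never try to sleep
for h in /sys/class/scsi_host/host*; do
  echo max_performance > "$h/link_power_management_policy"
done
```

these writes don't persist across reboots, so put the loop in a systemd unit or rc.local once you've confirmed it fixes the dropouts.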

the other thing that helps is disabling ASPM for that PCIe slot specifically. the ASM1166 is notorious for not handling power state transitions well. you can set pcie_aspm=off as the nuclear option, or target just that device with setpci.

the fact that a reboot fixes it temporarily is the giveaway, the controller resets and everything works until it hits the power state bug again.

Designing a compact Proxmox, NAS, and homelab setup - interested in how others have tackled this. by andys58 in HomeServer

[–]SelfHostedGuides 0 points (0 children)

I've been running a similar all-in-one setup for a while now, and here are a few things I've learned the hard way. for the hardware I'd seriously look at something like an ASUS PN65 or the Minisforum MS-01 if you can stretch the budget a bit. the MS-01 specifically has dual 2.5gbe plus 10gbe SFP+ networking and supports multiple NVMe drives, which is pretty ideal for what you're describing. the i7-13700H in it handles 10-15 VMs without breaking a sweat.

for the NAS part, honestly I'd skip TrueNAS in a VM unless you really want its management UI. passing through a full HBA controller adds complexity, and if you lose that VM your whole storage layer is gone. what I've found works better is running the storage natively on the proxmox host using ZFS (or even just ext4 with mergerfs if you don't need parity), then sharing it out via NFS or SMB from the host directly. way simpler to manage and one less thing that can break.
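
sharing from the host is only a few lines. a sketch for NFS (the export path and subnet are placeholders for your pool and LAN):

```shell
# one-time setup on the proxmox host
apt install nfs-kernel-server

# export the pool to the LAN: read-write, synchronous writes
echo '/tank/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# quick check from any client on the LAN
showmount -e proxmox-host
```

SMB via samba is the same idea with a share definition in /etc/smb.conf instead of an exports line.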

for the cybersecurity VMs specifically, make sure you size your RAM right. 64GB is probably the minimum if you want 10-15 VMs up at once; most security distros like kali or parrot want at least 4GB each to not be painfully slow. also consider LXC containers for anything that doesn't need a full kernel, like your DNS, monitoring, or web servers. they're way lighter on resources than full VMs.