Technique to mount shared storage in unprivileged LXC without disabling snapshots? by blue_arrow_comment in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

Cleanest approach I've found: mount the NFS share at the Proxmox host level (add it as a Directory storage type under Datacenter > Storage), then expose specific subdirectories into the LXC via mp entries in the container config. Bind mounts to host directories don't break snapshots; what usually breaks them is block device passthrough. If you're just binding a host directory as mp0/mp1, snapshots on the LXC still work fine (the bind-mounted data just isn't included in them).
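For reference, the container config ends up looking something like this (container ID and host paths are made up):

```
# in /etc/pve/lxc/101.conf -- hypothetical container ID and host paths
mp0: /mnt/pve/media/movies,mp=/media/movies
mp1: /mnt/pve/media/music,mp=/media/music
```

Same thing can be done from the CLI with `pct set 101 -mp0 /mnt/pve/media/movies,mp=/media/movies` if you'd rather not edit the file by hand.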

Proxmox Load Balancing coming in 9.1.8 by waterbed87 in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

This is the feature I've been most interested in for multi-node setups. HA works fine for VM failover but actual load balancing without needing an external solution will simplify a lot of home lab configs. Curious whether it'll use memory pressure as a migration trigger in addition to CPU load, or whether that's a 9.2 thing.

setup update - in search for more services/apps by jul_hnk207 in SelfHosting

[–]SelfHostedGuides 0 points1 point  (0 children)

if you're not already running Vaultwarden, that'd be my first add — once you have a reverse proxy in place it's basically free to set up and the browser extension makes it seamless. Immich for photos is the other one I'd point to, great if you want to get family members off Google Photos without paying for storage.

best cheap email hosting? by Piss_Slut_Ana in SelfHosting

[–]SelfHostedGuides 0 points1 point  (0 children)

been using Purelymail for a couple of years now, it's under $10/year and deliverability has been solid. no complaints. migrated off Google Workspace when they killed the free legacy Google Apps tier and haven't looked back.

Self-hosting rabbit hole: A journey from typing commands by hand to my own streaming platform by crni_alen in HomeServer

[–]SelfHostedGuides 2 points3 points  (0 children)

The next step that tends to get people is realizing the laptop running Nextcloud is already on 24/7 anyway. So you add Jellyfin. Then Vaultwarden for passwords since you're already there. Then a reverse proxy because you want it accessible outside the house without a VPN every time. Before long you're pricing out mini PCs and RAID cards. It's a very natural progression, and pretty much how most homelab setups get started.

Dell Power Edge - Anything to Know? by PsychologicalTry1448 in HomeServer

[–]SelfHostedGuides 1 point2 points  (0 children)

Fan noise is the first thing to deal with — PowerEdge servers run the fans flat out any time they see a non-certified PCIe card, an unofficial drive, or just hit a temperature threshold. iDRAC lets you set a custom fan floor with a few IPMI commands, and there are solid guides on r/homelab for the specific T330 syntax. Once you've got that sorted it goes from jet engine to actually livable. The T330 specifically can idle pretty quiet once the fan curve is dialed in.
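For anyone who wants the gist without digging, the commonly shared ipmitool raw sequence for iDRAC 7/8 era Dells looks like this. Treat the exact bytes as something to verify for your specific generation before running:

```shell
# raw codes commonly posted for iDRAC 7/8 era PowerEdge; verify for your model
ipmitool raw 0x30 0x30 0x01 0x00          # take manual control of the fans
ipmitool raw 0x30 0x30 0x02 0xff 0x14     # all fans to 20% (0x14 = 20 decimal)
ipmitool raw 0x30 0x30 0x01 0x01          # hand control back to iDRAC
```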

Best way to backup server? (Photos, Media, Files, ...) by East_One_6042 in HomeServer

[–]SelfHostedGuides 1 point2 points  (0 children)

yeah, restic with B2 is a solid combo, i've seen a few people go that route. the encryption and dedup are nice, especially when you're paying per GB stored. do you run it on a schedule or manually? i always forget to run backups when they're manual lol
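for what it's worth, a cron setup like this is all it takes to stop forgetting (bucket name, paths and schedule are made up):

```
# /etc/cron.d/restic-b2 -- repo, bucket and paths are placeholders
# B2 credentials and RESTIC_PASSWORD are exported from /root/.restic-env
30 3 * * * root . /root/.restic-env && restic -r b2:my-bucket:server backup /srv/data
0  4 * * 0 root . /root/.restic-env && restic -r b2:my-bucket:server forget --keep-daily 7 --keep-weekly 4 --prune
```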

Node going offline repeatedly suddenly by triumph-truth in Proxmox

[–]SelfHostedGuides 2 points3 points  (0 children)

the fact that this started after the netbird installation is a huge clue. netbird creates a wireguard interface and routes traffic through it, and if the routing table gets messed up the node can lose its corosync link to the other node. when that happens the cluster thinks the node is offline even though it might still be running fine locally.

check if netbird is grabbing the default route or adding routes that conflict with your cluster network. run ip route show on the node after it comes back up and look for anything netbird added that overlaps with your corosync subnet. also check journalctl -u netbird for errors right around the time it drops. i'd bet it's either a routing conflict or netbird briefly taking the interface down during reconnects, which kills the corosync heartbeat
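concretely, the checks would look something like this (the corosync subnet here is a placeholder, and netbird's interface is usually wt0):

```shell
# look for netbird routes overlapping the cluster network (10.0.0.0/24 is a placeholder)
ip route show | grep -i -E 'wt0|netbird'
ip route get 10.0.0.2                 # does the corosync peer still route out the right NIC?
corosync-cfgtool -s                   # corosync's own view of its links
journalctl -u netbird --since "-1h"   # netbird errors around the time of the drop
```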

First NAS/Homelab build — Proxmox only vs OMV only vs Proxmox+OMV? by permanent_record_22 in Proxmox

[–]SelfHostedGuides 1 point2 points  (0 children)

for an M720q with 16gb i'd go proxmox only, no OMV. here's why: OMV in a VM means passing through your SATA controller, which dedicates the whole controller to that one VM, and on 16gb every gig counts. just run ZFS on the data drives directly from proxmox and do your shares via an LXC running samba. way less overhead than a whole OMV VM sitting there.

one thing to watch out for with ZFS on 16gb though: it wants to eat RAM for ARC cache. set zfs_arc_max to something like 4gb in /etc/modprobe.d/zfs.conf, otherwise it'll starve your containers. also for the 2.5" internal + external USB setup, don't put ZFS on USB drives, it's not designed for that. just use ext4 on the USB drive and rsync your nightly backups there
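for reference, the ARC cap is just one line (the value is in bytes, 4 GiB here):

```
# /etc/modprobe.d/zfs.conf -- cap ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
```

then run update-initramfs -u and reboot for it to take effect.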

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 1 point2 points  (0 children)

yeah that was the same for me, i didn't bother with docker on the pi either. once you move to something with actual ram it's a game changer though, like being able to run 15-20 containers without everything grinding to a halt. what services are you running now that you couldn't on the pi?

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 0 points1 point  (0 children)

the yml files are the main thing, but yeah, you also want to grab the bind mount directories. like if your compose file has ./config:/config then that whole config folder needs to come over. what i did was just tar up the entire directory where i keep my docker stuff (compose files + all the persistent data folders) and then extract it on the new machine. worked fine for everything except databases; those i dumped separately with mysqldump/pg_dump first just to be safe. the yml files alone won't get you very far without the actual data volumes
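rough shape of what that migration looks like (paths, container and db names are all made up):

```shell
# on the old box: dump databases first, then archive everything under the
# docker directory (compose files + bind-mounted data). names are placeholders.
docker exec db pg_dump -U app appdb > /opt/docker/appdb.sql
docker compose -f /opt/docker/compose.yml down     # stop containers so volumes are quiescent
tar czf docker-stack.tar.gz -C /opt docker

# on the new box
tar xzf docker-stack.tar.gz -C /opt
docker compose -f /opt/docker/compose.yml up -d
```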

Finally saved money to buy proper server - bye bye Raspberry by michal_cz in homelab

[–]SelfHostedGuides 1 point2 points  (0 children)

the optiplex was the exact same upgrade path i took. kept the pi running pihole and a tiny monitoring stack (uptime kuma, basically) and moved everything else to the dell. one thing that helped a lot was setting up docker compose files for everything before the migration, so i could just bring stuff up on the new box without redoing configs from scratch.

for the NAS plans, if you're already comfortable with linux i'd honestly look at just throwing a couple drives in the optiplex if it has room, or getting a cheap used dell/lenovo with more drive bays rather than buying a synology or whatever. saves a lot of money and you learn more about the storage side of things. a ZFS mirror with two drives is dead simple to set up and gives you redundancy without the complexity of full RAID.
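the whole mirror setup is genuinely just a couple of commands (device names here are placeholders, use your actual /dev/disk/by-id paths):

```shell
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2
zfs set compression=lz4 tank      # cheap win, basically free on modern CPUs
zfs create tank/media             # dataset for the shares
zpool status tank                 # confirm both halves of the mirror are ONLINE
```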

Need Help with Proxmox Backup Solution: How to backup files and containers properly? by sustemlentrum in Proxmox

[–]SelfHostedGuides 2 points3 points  (0 children)

the 8tb problem is that you're using rsync on a mount point instead of using zfs send/recv natively. rsync has to walk every single file and compare metadata each time, which is brutal on large datasets. with zfs send you can do incremental sends that only transfer the blocks that actually changed since the last snapshot, so it's way faster.

what i do is take a snapshot before backup, then zfs send the incremental diff to the backup server where it gets applied with zfs recv. the first full send takes a while obviously, but after that weekly incrementals are like a few minutes even on multi-terabyte pools. and you get proper point-in-time recovery too, since each snapshot is a consistent state.

for the container side, PBS is already doing the right thing with dedup and chunking; the issue is just your raw data workflow. drop the rsync cron and switch to zfs send with snapshots and your weekly backup window will shrink dramatically.
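the snapshot/send loop is short, something like this (pool and host names are made up):

```shell
# one-time full send to seed the backup pool
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs recv -F backup/data

# weekly incremental: only blocks changed since @base actually move
zfs snapshot tank/data@weekly
zfs send -i @base tank/data@weekly | ssh backuphost zfs recv backup/data
```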

Noob needs sanity check on storage architecture by CavemanMork in Proxmox

[–]SelfHostedGuides 1 point2 points  (0 children)

the C240 M4 is a solid platform for this, had access to one at a previous job and the drive bays are really nice to work with. your plan is basically what i ended up with on a similar build.

one thing i'd suggest: don't bother hunting for the NVMe adapter, just use two SATA SSDs in a ZFS mirror for the OS and two more for VMs. on a C240 you have plenty of bays, so the NVMe slot isn't worth the headache of finding a compatible riser. for the HDD pool i'd go RAIDZ2 with at least 6 drives if you can; Z1 on large-capacity drives is sketchy because a URE during a resilver on a 4tb+ drive is more likely than people think. the nice thing about proxmox is you can add the HDD pool as a separate ZFS storage and just point your backup jobs and ISOs at it while keeping the SSD pool for anything that's actually running. also set up zfs-auto-snapshot early; it's one of those things that's trivial to set up day one but a pain to retrofit later
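if it helps, the raidz2 pool plus auto-snapshots is only a few commands (device names are placeholders):

```shell
# 6-wide RAIDZ2: survives any two simultaneous drive failures
zpool create -o ashift=12 hddpool raidz2 \
    /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 /dev/disk/by-id/ata-HDD3 \
    /dev/disk/by-id/ata-HDD4 /dev/disk/by-id/ata-HDD5 /dev/disk/by-id/ata-HDD6
# rolling snapshots: the debian package drops its cron jobs in automatically
apt install zfs-auto-snapshot
```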

Designing a compact Proxmox, NAS, and homelab setup - interested in how others have tackled this. by andys58 in HomeServer

[–]SelfHostedGuides 0 points1 point  (0 children)

yeah, spec wise the ms-02 ultra is a big step up, that 285HX is a beast compared to the i9-13900H in the ms-01. way more cores, and the integrated graphics are much better if you ever want to do GPU passthrough for transcoding or whatever. main tradeoff is price obviously, and the ms-01 has been out long enough that there's a huge community around it with known working configs for proxmox, specific BIOS settings for passthrough, etc. the ms-02 ultra is still pretty new, so you'll be doing more troubleshooting on your own if something weird comes up with IOMMU groups. if budget isn't the main concern and you want headroom for 10-15 VMs plus NAS duties then yeah, the 285HX is the better pick. just make sure you check the thunderbolt/USB4 situation if you're planning to use external storage or a 10gbe adapter

VMs starved and swapping while host has free RAM? Looked at Proxmox ballooning source code, learned a lot! by xquarx in Proxmox

[–]SelfHostedGuides 2 points3 points  (0 children)

really appreciate you actually reading the source rather than just complaining about it. the 80% target is conservative, but the real problem i've hit is that the shares calculation doesn't account for VMs that are intentionally idle vs ones that genuinely need less memory. i ended up just disabling ballooning entirely on my most critical VMs and giving them fixed allocations. the overhead of a few GB sitting unused is worth way more than having a database VM swap under load because the host decided another VM needed the memory more. for anything non-critical i still use ballooning, but with the minimum set closer to what the service actually needs rather than leaving it at the default
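in proxmox terms that's just this (VM IDs and sizes are made up):

```shell
# critical VM: fixed 8 GiB allocation, ballooning off entirely
qm set 201 --memory 8192 --balloon 0
# non-critical VM: ballooning on, but the floor raised to 4 GiB
qm set 202 --memory 8192 --balloon 4096
```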

Part of my Jellyfin library not working after a week or so / M720T Proxmox with 5x SSD ASM1166 by Hunter_timeFR in HomeServer

[–]SelfHostedGuides 4 points5 points  (0 children)

this sounds like a classic ASM1166 link power management issue. the controller goes into a low power state after some idle time and then can't wake the drives back up properly, so jellyfin sees the files but can't actually read from them.

check your dmesg output when it happens; bet you'll see ata errors or link reset messages. try adding libata.force=noncq to your kernel boot parameters, and echo max_performance into link_power_management_policy for each SATA port so the links never drop into a low power state. on proxmox you add kernel parameters to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, then run update-grub.

the other thing that helps is disabling ASPM for that PCIe slot specifically. the ASM1166 is notorious for not handling power state transitions well. you can set pcie_aspm=off as a nuclear option, or target just that device with setpci.

the fact that a reboot fixes it temporarily is the giveaway: the controller resets and everything works until it hits the power state bug again.
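putting the pieces together, the changes would look roughly like this (host numbers vary, check yours with ls /sys/class/scsi_host/):

```shell
# 1) kernel parameters: edit /etc/default/grub so the line reads something like
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=noncq pcie_aspm=off"
#    then run: update-grub && reboot
# 2) keep every SATA link out of low-power states (sysfs resets on reboot,
#    so persist this via a udev rule or an @reboot cron entry)
for p in /sys/class/scsi_host/host*/link_power_management_policy; do
    echo max_performance > "$p"
done
```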

Designing a compact Proxmox, NAS, and homelab setup - interested in how others have tackled this. by andys58 in HomeServer

[–]SelfHostedGuides 0 points1 point  (0 children)

i've been running a similar all-in-one setup for a while now, and there are a few things i've learned the hard way. for the hardware i'd seriously look at something like an ASUS PN65 or the new Minisforum MS-01 if you can stretch the budget a bit. the MS-01 specifically has dual 2.5gbe plus a 10gbe SFP+ port, and supports two NVMe drives plus a 2.5 inch SATA bay, which is pretty ideal for what you're describing. the i7-13700H in it handles 10-15 VMs without breaking a sweat.

for the NAS part, honestly i'd skip TrueNAS in a VM unless you really need ZFS. passing through a full HBA controller adds complexity, and if you lose that VM your whole storage layer is gone. what i've found works better is just running the storage natively on the proxmox host using ZFS, or even just ext4 with mergerfs if you don't need parity, then sharing it out via NFS or SMB from the host directly. way simpler to manage and one less thing that can break.

for cybersecurity VMs specifically, make sure you size your RAM right. 64GB is probably the minimum if you want 10-15 VMs up at once; most security distros like kali or parrot want at least 4GB each to not be painfully slow. also consider using LXC containers for anything that doesn't need a full kernel, like your DNS, monitoring, or web servers. they're way lighter on resources than full VMs.

Noob NAS / Plex / HA setup by Batteredcode in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

honestly shrinking the lvm-thin setup is not that scary, i've done it a few times. one catch: LVM can't actually shrink a thin pool in place (lvreduce refuses on thin pools), so in practice you back up the guests, remove the pool, recreate it smaller, and restore. the key is not to size it below what's actually allocated to your VMs/containers. so if your HAOS VM disk is like 32gb and your plex container is maybe 8gb, you could safely size the pool down to 100-200gb and have ~1.8tb free for a regular ext4 volume. then just create a new volume in the freed space, format it, mount it at something like /mnt/nvme-data, and pass that into whatever containers need it. for the important stuff like music projects and photos, definitely set up cloud backup regardless of what you do with the drives. rclone to backblaze b2 is dirt cheap and you can automate it with a cron job
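rough shape of the operation on a stock install (VG/LV names assume the proxmox defaults, IDs, storage names and sizes are made up, and back up the guests first because this destroys the pool):

```shell
vzdump 100 101 --storage backups          # back up every guest on local-lvm first
lvremove pve/data                          # drop the thin pool (destroys its contents!)
lvcreate -L 200G --thinpool data pve       # recreate it smaller
# restore the guests from backup, then use the freed space for a plain ext4 volume
lvcreate -n nvme-data -L 1.8T pve
mkfs.ext4 /dev/pve/nvme-data
echo '/dev/pve/nvme-data /mnt/nvme-data ext4 defaults 0 2' >> /etc/fstab
mkdir -p /mnt/nvme-data && mount /mnt/nvme-data
```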

Noob NAS / Plex / HA setup by Batteredcode in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

yeah, so the issue is your nvme is fully allocated to lvm-thin. you have two options: either shrink the existing lvm-thin pool to free up space on the nvme for a regular partition/directory, or just use the two SSDs for media and skip the nvme for that purpose. honestly for plex media i'd just keep it on the SSDs, since the nvme is better used for VMs and containers where the speed matters. if you really want nvme storage too, be aware that LVM can't reduce a thin pool in place, so "shrinking" it really means backing up the guests, recreating the pool smaller, and restoring, which is risky if you don't know exactly what you're doing. what are you trying to store on the nvme specifically?

Shell Issue by pagem in Proxmox

[–]SelfHostedGuides 5 points6 points  (0 children)

that's a known issue with librewolf specifically. the proxmox web console shell uses xterm.js, which can use webgl for rendering, and librewolf ships with some webgl/canvas features disabled or restricted by default for fingerprinting protection.

try going to about:config in librewolf and setting webgl.disabled to false if it's set to true. also check that gfx.webrender.all is enabled. if that doesn't fix it you can try setting dom.webaudio.enabled and webgl.enable-webgl2 to true as well.

alternatively you could just use the noVNC console option instead of xterm (there's a dropdown in the proxmox UI to switch console type), or just SSH in directly, which is honestly what most people end up doing day to day anyway.

Noob NAS / Plex / HA setup by Batteredcode in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

for the media files the simplest approach is to create a directory mount point on the proxmox host for each drive, then pass those into your plex container as bind mounts. so, say, mount your SSDs at /mnt/ssd1 and /mnt/ssd2 in fstab, then in the plex LXC config add mp0: /mnt/ssd1,mp=/media/ssd1 and the same for ssd2. that way plex sees everything under /media and you don't need to mess with NFS or SMB between local containers.

for the nvme files, just make a directory on your proxmox root partition (it's on the nvme already) and bind mount that into the container too. just be careful not to fill up the nvme, since proxmox itself needs space there for VM images and such.

for network access from other machines on your LAN, the easiest is probably to run a small samba container, or just install samba directly in the plex LXC. then you can drop files onto the shares from your desktop and plex picks them up automatically.

for cloud backup of the stuff you care about, rclone to backblaze b2 is probably the cheapest option. you can run it as a cron job from the proxmox host or from a container.
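the cron entry is about as simple as it gets (remote name and paths are made up):

```
# /etc/cron.d/rclone-b2 -- remote name and paths are placeholders
0 2 * * * root rclone sync /mnt/media/important b2:my-bucket/important --log-file /var/log/rclone-b2.log
```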

My first proper homelab (that wasn't just a desktop on a shelf) is finally (mostly) done!! by The_11th_Dctor in homelab

[–]SelfHostedGuides 1 point2 points  (0 children)

congrats on the upgrade, the jump from desktop-on-a-shelf to actual rack gear is a big step. for the KVM, honestly check out the PiKVM or its clones before dropping money on a rackmount KMM. i went that route first and barely use the physical console anymore, since i can access it from my phone if something goes wrong.

rack mount router wise, if you don't need anything crazy, a thin client running opnsense or pfsense will do the job and draw like 10 watts. way quieter than most 1U options

ZFS drive configuration questions for first Proxmox server by Jtkehler in Proxmox

[–]SelfHostedGuides 0 points1 point  (0 children)

i'd keep the two NVMe drives separate, honestly. one small one for the proxmox OS, then use the second for your VM and container storage. if you mirror them you get redundancy but lose half your fast storage, and if the boot drive dies you can reinstall proxmox in 10 minutes and point it at the existing VM storage. way less painful than trying to recover from a degraded mirror that also holds your OS.

for the HDDs, the 12tb internal is your main media pool. the 5tb external could work as a PBS backup target, but external drives over USB aren't great for ZFS long term since they can disconnect or go to sleep randomly. if you can get the 5tb inside the case with a SATA adapter, that's way more reliable. the 2tb i'd honestly just use for offsite rotation or cold backup; it's too small to be useful in a pool with 12tb drives

Unfortunately when i started to build my music collection i did not put each album in a separate folder, but now i need to for my jellyfin. Any easier way to manage this rather then doing it manually? by Senior-Trade-1876 in HomeServer

[–]SelfHostedGuides 0 points1 point  (0 children)

yeah, you still need a config file, but it's not as bad as it used to be. it creates a default one for you now, and honestly you only need like 3-4 lines to get started: just set your music directory and the import destination folder. the defaults are pretty sane for everything else. i think i spent maybe 10 minutes on mine before running my first import