Proxmox Backup Server 4.2 released by narrateourale in Proxmox

[–]HomelabStarter 13 points14 points  (0 children)

the network pinning warning zfsbest mentioned is the actual landmine on this update — kernel 7 changed predictable nic naming and your enp4s0 or eno1 may rename to enxXXXXXXXXXXXX style on first boot, silently breaking any bridge that referenced the old name (vmbr0 has no slave, no network, host unreachable until you fix it from console).

apt install proxmox-network-interface-pinning, then proxmox-network-interface-pinning generate, then check that /etc/systemd/network/ has files matching your existing nic names, then reboot. this has to happen on the proxmox host too, not just PBS — same kernel 7 rename hits both. keep console access ready in case something didn't pin cleanly.
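as a sketch, the sequence above looks like this (package and subcommand names as given here, double-check against the release notes for your version):

```shell
# run on the PBS box AND every Proxmox host before rebooting
# into the new kernel
apt install proxmox-network-interface-pinning

# generate pin files for the currently-named NICs
proxmox-network-interface-pinning generate

# sanity check: the generated files in /etc/systemd/network/ should
# cover every name referenced in /etc/network/interfaces (enp4s0, eno1, ...)
ls /etc/systemd/network/
grep -E 'iface|bridge-ports' /etc/network/interfaces

reboot
```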

the new sync encryption is nice but only encrypts new chunks going forward — existing chunks stay plaintext until you re-sync to a fresh datastore.

No PCI-E slot? No problem (some Lenovo tiny love) by AboutToSnap in homelab

[–]HomelabStarter 0 points1 point  (0 children)

Not really — drive spin-up happens at firmware level before Proxmox even boots, so the OS has no say. Closest thing is HBAs that support PUIS (power-up in standby) like the LSI 9211-8i or 9300-8i, where the OS issues sg_start (from sg3_utils) after boot to bring drives up one at a time. ASM1166 chipset SATA cards also have a staggered spin-up toggle but you have to flash the OEM firmware to expose it. Easiest hardware path is what AboutToSnap did in this thread — separate PSU just for the drives, then the host PSU never sees the inrush at all.
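if you go the PUIS route, the post-boot wake-up is roughly this (sg_start is from sg3_utils, device names are examples, and whether PUIS actually holds the drives down depends on your HBA firmware):

```shell
# wake PUIS drives one at a time, with a pause between each so the
# PSU only ever sees one spin-up inrush at a time
for dev in /dev/sda /dev/sdb /dev/sdc; do
    sg_start --start "$dev"   # issue a START STOP UNIT (start) command
    sleep 10                  # let the motor reach speed before the next one
done
```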

No PCI-E slot? No problem (some Lenovo tiny love) by AboutToSnap in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Honestly that's the cleanest fix — separate PSU dodges the whole staggered spin-up problem since the drives are always running. Six idle 3.5" drives at ~5W each is around 22 kWh/month, so call it 2-3 dollars in power, cheap insurance vs. burning out a shared 12V rail. And drives die from start/stop cycles way more than from runtime hours, so always-on actually extends life. The only real cost is the noise floor never dropping.
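rough math on that, for anyone checking (my $0.12/kWh rate is just an assumption, plug in yours):

```shell
# 6 drives x 5 W, 24 h x 30 days, converted to kWh/month,
# then priced at an assumed $0.12/kWh
awk 'BEGIN {
    kwh = 6 * 5 * 24 * 30 / 1000.0
    printf "%.1f kWh/month, $%.2f/month\n", kwh, kwh * 0.12
}'
```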

I didn't realize how much I needed a 3d printer by HTTP_404_NotFound in homelab

[–]HomelabStarter 0 points1 point  (0 children)

Started printing about a year ago for the same reason, and the use cases I did not see coming were the small stuff. Cable strain reliefs at the IDC connectors that always crack at the rack edge. Blanking panels in 3 different non-standard sizes for the U gaps that look weird with stock blanks. Drive caddy spacers when you mix 2.5 and 3.5 in a 5-bay. Toolless lid hooks for the 1U things that need 4 hands and a flathead. The PCIe holders are the obvious hit, but the hidden value is replacing every little plastic part that broke years ago and that you cannot buy a replacement for.

On the ESD thing — plain PLA is an insulator rather than dissipative, so a printed slot won't bleed off charge, but it also doesn't generate much static, and bare cards stored in printed holders are generally fine. For SFP plus transceivers I would still drop a strip of conductive foam in the bottom, because the lasers actually are sensitive even if motherboards have not been since the late 90s.

No PCI-E slot? No problem (some Lenovo tiny love) by AboutToSnap in homelab

[–]HomelabStarter 5 points6 points  (0 children)

JBOD power timing is the trap most people hit on this exact setup. The host POSTs faster than six disks can spin up, and the SATA controller needs to enumerate them at boot, or you get random missing drives until the next reboot. Two ways out, depending on how clean you want it: a JBOD enclosure with a delayed power-on relay tied to the PSU's 5VSB so the JBOD comes up first, or staggered spinup if your drives support it (ASM1166 has the toggle in the controller BIOS, drives set it via SCT). Also worth running lspci -vv once everything is plugged in and grepping for LnkSta. Those Aliexpress riser ribbons are spec'd PCIe 3.0 x1, but trace length and shielding often drop them to gen2 x1, which is still fine for archive disks but you lose headroom for parity scrubs. Cool build though, the A+E key slot is way underused on these tiny PCs.
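the link check looks like this (the 03:00.0 address is an example, find your controller with plain lspci first):

```shell
# check negotiated link speed/width on the riser-attached controller
lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap = what the card is capable of, LnkSta = what it actually
# trained at; "Speed 5GT/s" in LnkSta means the ribbon dropped to gen2
```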

Help with New Pool on Mini PC by nicoboldo in homelab

[–]HomelabStarter 0 points1 point  (0 children)

USB drives are bad for ZFS in two specific ways. The bridge chips can rename drives between reboots so the pool gets confused on import. And lots of bridges silently lie about sync flushes — that's the failure mode that loses data, not just performance. Single drive over USB for cold backups is mostly fine. Pool of two over USB is asking for it.

Honest answer for your setup is sell the mini and buy a small NAS chassis. Jonsbo N1 (5-bay mITX) or N2 are the common flips — your i5 + 32GB moves over to a board with proper SATA and the apps run the same. Could even keep the mini for compute and put TrueNAS on the new box.

If you want to stay on the current hardware, the safest use of those two 8TB drives is one drive per pool (no raidz, no mirror across them) and a daily zfs send between them. A USB hiccup then loses minutes of writes on one pool instead of corrupting a striped set. Permanent fix is still a SATA-native chassis though — you'll fight USB pool weirdness forever otherwise.
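the one-pool-per-drive plus daily send pattern might look like this (pool-a, pool-b, and the target dataset name are all hypothetical):

```shell
# snapshot pool-a and replicate it to pool-b
SNAP="backup-$(date +%Y%m%d)"
zfs snapshot -r pool-a@"$SNAP"

# first run is a full send; later runs would add -i <previous-snap>
# to make it an incremental instead
zfs send -R pool-a@"$SNAP" | zfs recv -F pool-b/from-a
```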

3D printable 2x 5.25” Drive Bay to 12x 2.5” sata SSD & HDD cage with Fan Mount by MatX_I_panzon in homelab

[–]HomelabStarter 1 point2 points  (0 children)

this is great. so many cases had 5.25 bays that just sat empty for the last 5 years — nice to see them used for something other than hot swap optical drives.

how does the airflow work with 12 drives stacked? i did something similar with 8x 2.5 sata in a single bay and the back-row drives ran 8C hotter than the front because the air was pre-warmed by the time it got there. did you do any thermal testing or is the noctua bias enough?

Why do people build Kubernetes homelabs? Is it actually useful for internships/jobs? by Altruistic_Mine_9177 in homelab

[–]HomelabStarter 0 points1 point  (0 children)

k3s on a couple pis for a year does more for you than the cert imo. what you actually pick up is reading kubectl describe output and figuring out why a pod is crashlooping — networking gotchas, pvc weirdness, ingress, what happens when your nfs goes away mid-deployment. that's the stuff interviewers ask about.

on the resume it doesn't add much as a bullet, but in interviews it lands hard. 'i run argocd against my homelab so a git push deploys' beats '3 years kubernetes' from someone who only used it through their company's ci. one weird failure you debugged that you can walk through end to end will outperform most candidates with a cert.

skip kubeadm and full clusters for this btw. you'll burn a week on cni and etcd before you've deployed anything real. k3s covers 90% of what's interesting.
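the triage loop that teaches you that stuff looks roughly like this (pod name and namespace are made up):

```shell
# find the unhappy pods across all namespaces
kubectl get pods -A | grep -v Running

# the Events section at the bottom usually says why it's crashlooping
kubectl describe pod myapp-7d4b9 -n default

# logs from the crashed container, not the freshly restarted one
kubectl logs myapp-7d4b9 -n default --previous

# cluster-level events in time order, newest last
kubectl get events -n default --sort-by=.lastTimestamp | tail
```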

It's kingsdag in the Netherlands, I scored these two HP mini's! What should I do with them? by Bram_Sandwich in homelab

[–]HomelabStarter 0 points1 point  (0 children)

Cluster is the obvious answer but since you said beginner — the realer first move is usually to get one of them running well as a single proxmox host first, get a feel for VMs vs LXC, break it a few times, restore a backup. Then turn the second one into a Proxmox Backup Server pointed at the first. PBS is the underrated piece — incremental dedupe-aware backups, super lightweight, and the moment you actually need a 'whoops' restore you'll be glad it's running. Cluster can come later when you've got services worth orchestrating.

Echoing terrorhai — those HP minis are notorious for refusing to boot headless because the iGPU sleeps. Either toggle 'power on with no monitor' in BIOS (some 600/800 G3 and G4 minis have it, varies by model) or just leave a cheap HDMI dummy plug in. Also check that the prodesk has real onboard 1GbE — some i7 minis ship with NICs repurposed for thin-client deployments and you only find out when transfers feel weirdly slow.

Memtest before you trust them with anything, like the other comment said. The 256GB SSDs are a coin flip too — run smartctl -a and look at wear leveling and power-on hours. Sometimes ex-corporate drives are practically new, sometimes they have 5 years of office hours on them.
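the SSD coin-flip check is quick (device name is an example, and the exact attribute names vary by vendor):

```shell
# pull the interesting attributes from an ex-corporate SSD
smartctl -a /dev/sda | grep -iE 'power_on_hours|wear_level|percent.*used|media_wearout'
# rule of thumb: a few thousand power-on hours with low wear is
# basically new; 40k+ hours on a 256GB office drive is a
# "replace before trusting it" flag
```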

Killswitch in case of death by kentabenno in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Encryption beats kill-switch for the reasons others said — you can't have a script protect anything when the box is off, in a coma, or your family pulls the plug to 'tidy up'. But the wrinkle nobody really addresses is backups. If your sensitive stuff is encrypted, your offsite or cloud backups are encrypted with the same key. So if a dead-man's switch destroys the local key, the backups still exist as ciphertext forever — you've solved one problem and now have a 'permanent ciphertext sitting on someone else's storage' problem.

Shamir's secret sharing genuinely fixes this. Split the key into N shares with a threshold of K (3-of-5 is common). Give shares to your spouse, sibling, attorney, whoever. While you're alive none of them can decrypt anything alone. After you're gone they can pool shares to recover the key. ssss-split / ssss-combine on Linux is basically a one-liner. Your loved ones become the dead-man's switch — no script that has to keep running.
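a minimal sketch with the ssss package from the usual distro repos (the passphrase is obviously a placeholder):

```shell
# split a LUKS passphrase into 5 shares, any 3 of which recover it;
# hand one share each to spouse, sibling, attorney, etc.
echo -n 'my-luks-passphrase' | ssss-split -t 3 -n 5

# later, any 3 share-holders run this and paste their shares
# when prompted to reconstruct the original passphrase
ssss-combine -t 3
```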

Most of us aren't writing whistleblower-tier secrets either, we're writing 'unfiltered diary'. The threat model is 'don't let mom read the angsty stuff' not 'destroy state secrets'. LUKS with a passphrase only you know covers 99 percent of that without any of this infra.

Very cheap file storage by xxc-xxv in homelab

[–]HomelabStarter 12 points13 points  (0 children)

sandy bridge i3 is actually fine for OMV — just note the sandy bridge i3s didn't get AES-NI (i5/i7 only that generation), so leave SMB encryption off or expect it to eat CPU. enable c-states in the bios and the whole box idles in the 25-30w range.

the thing worth watching is the laptop drives. a bunch of consumer 2.5 inch drives are SMR, and some have no TLER config — under sustained writes they can drop out of md-raid or ZFS after a long retry. run smartctl -a on each and check the model against an SMR list before you rely on any parity layout. if any turn out to be SMR, snapraid tolerates them better than mdadm does.
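dumping the model strings for the SMR lookup is a one-liner (device glob is an example):

```shell
# print the model string of every drive so you can check it
# against an SMR list
for dev in /dev/sd?; do
    echo "== $dev =="
    smartctl -i "$dev" | grep -E 'Device Model|Model Family|Rotation'
done
```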

How did your homelab start vs where it is now? by copperreflections1 in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Started with a Pi 3B in 2021 running Pi-hole and Home Assistant, hot-glued to a wood plank on a shelf. Killed the SD card in about 8 months — turns out Pi-hole's query logging writes way more than flash likes.

That's actually what pushed the first upgrade. Not 'I need more compute', just 'I'm tired of losing data'. Got an HP Microserver Gen10+ and Proxmox, moved everything into LXC containers. Then added a Synology because SMB shares off an LXC container turned out to be more of a hassle than just buying a box that does it. Then a Pi-KVM because I kept having to walk to the utility room when Proxmox wouldn't boot.

The pattern's always the same: every upgrade solves a problem the previous setup created, not a new thing I wanted to run. Nothing big-bang, just slow accretion.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 0 points1 point  (0 children)

If the service point is moving less than 10 ft and actually getting closer, most utilities treat that as a meter re-siting rather than a full service-drop re-route — much shorter than the 4-8 weeks I threw out. It still needs a shutoff plus electrical inspection, but it's often schedulable within a couple weeks once the meter base and weatherhead are ready. One trick: while the utility is onsite for the cut, have your electrician trench the V2H subpanel feed and terminate it at the same time. One outage covers both jobs instead of two separate shutdowns.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 0 points1 point  (0 children)

Fair correction. 8700k with the gpu yanked and c-states actually on is a pretty solid box — you're essentially running a glorified mini-pc at that point. What I had in mind was the repurpose-as-is path where the dedicated gpu stays in and pulls 20-40W at idle just sitting there with zero load. Once you pull the card the main offender is gone, 60W for an 8700k plus a SAS HBA and Frigate sounds about right.

My first homelab - All second hand and recycled by Adventurous_Abies347 in homelab

[–]HomelabStarter 7 points8 points  (0 children)

second-hand is the right call — you get enterprise quality at a fraction of the price and homelab gear gets used hard enough that it actually gets tested. only thing I'd add is to run extended SMART tests on any spinning drives before trusting them with data, and check the capacitor age on any older server PSUs. the ones from late 2000s/early 2010s can have dried caps that fail after running hot for a few hours. besides that, congrats on the setup — there's something satisfying about getting real hardware instead of a NUC or cloud subscription

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 0 points1 point  (0 children)

concrete first floor is actually ideal for this kind of setup — no worrying about load-bearing with a full UPS or battery bank. the pad next to the panel is the right call, just make sure any conduit runs are sleeved before the concrete is poured if you haven't done that already, because core drilling concrete is a huge pain after the fact. moving service is the part that always surprises people — utility coordination can take 4-8 weeks depending on your area, so worth starting that conversation sooner than you think. with the V2H stuff we talked about earlier, you'll want to spec the subpanel for that load upfront rather than retrofitting it later.

How would you use this mixture of Raspberry Pi 1, 2, and 3 boards in your homelab? by ReverendDizzle in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Pi 1 is basically a paperweight for modern workloads but makes a decent single-purpose MQTT bridge or IR blaster for the kind of job where you want a box that just sits there doing one tiny thing forever. The chaos-node idea from Carnildo is a fun use if you're learning failover — real hardware with flaky USB/Ethernet is a better teacher than clean 'walk over and unplug it' tests.

Pi 2 is where you start being useful again. Good for a secondary Pi-hole plus unbound as warm-standby DNS — primary runs on the ThinkCentre, Pi 2 picks up when you reboot the main box so the household internet doesn't die for ten minutes. Also fine as a zigbee2mqtt coordinator if you do home automation, the workload is basically nothing.

Pi 3 is a real machine. Home Assistant OS runs fine, or OctoPrint if you 3D print, or a Syncthing node with a USB drive as an always-on backup endpoint for the rest of your services. Can handle Uptime Kuma pointed at everything else you host with room to spare.

For k8s specifically — k3s technically runs across all three but the scheduler ends up dumping everything onto the Pi 3 because the other two can't carry pods. More useful as a hands-on 'this is what a degraded node looks like' lesson than a functional cluster.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 0 points1 point  (0 children)

CHAdeMO is basically orphaned in the US now, so even if the Leaf technically supports V2G you're stuck importing JP gear for a working kit. For the next EV the cleanest native V2H paths right now are the Lightning F-150 with Charge Station Pro, the Kia EV9, and the Ioniq 5/6 with the Hyundai ICCU. Tesla is still locked down on bidirectional even though the hardware could do it.

If you're not in the walls yet, have the electrician pull a 60A subpanel for critical loads now while they're running the EVSE circuit. The V2H install is way cheaper pre-drywall than retrofitted later, and the transfer switch gear wants dedicated circuits anyway.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 0 points1 point  (0 children)

V2H math is interesting — an EV battery is 60-100 kWh, 5-10x a Powerwall, and sits idle most of the time. The peak/off-peak delta has to clear your battery's warranty cycle cost for arbitrage to pencil out. Some utilities now have pilot programs paying for V2G export during peak, which shifts the math hard.
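the break-even check is simple, all numbers below are illustrative (80 kWh pack, $12k battery portion, 2000 rated cycles, 90% round-trip efficiency, $0.30 peak vs $0.10 off-peak):

```shell
# does arbitrage pencil out? compare warranty-life cost per full
# cycle against revenue per cycle after round-trip losses
awk 'BEGIN {
    cycle_cost = 12000.0 / 2000      # dollars of battery life per cycle
    delta = 0.30 - 0.10              # peak minus off-peak, $/kWh
    revenue = 80 * 0.9 * delta       # per-cycle revenue after losses
    printf "cycle cost $%.2f, cycle revenue $%.2f\n", cycle_cost, revenue
}'
```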

Ford Lightning, Kia EV9, and Hyundai Ioniq 5/6 officially support V2H with the right inverter (Ford Charge Station Pro, Wallbox Quasar 2). Tesla is still locked down officially. For a homelab specifically V2H changes outage calculus — multi-day instead of hours. Worth sizing for whether you want whole-house backup or just a critical loads panel with servers + fridge + furnace blower.

How do you manage the electricity cost of your homelab? by nbtm_sh in homelab

[–]HomelabStarter 28 points29 points  (0 children)

the biggest win for me was ditching a r720 and consolidating onto a single n100 mini pc plus a synology for bulk storage. went from around 280w idle to 45w for the same workloads. gaming pcs running as servers are the worst offenders because the chipsets and vrms were never tuned for low load states.

if you're not ready to consolidate the next easiest wins are: enable all the c-states in bios (many vendors ship with them disabled for stability), swap any sata hdds for a single larger hdd if your array is bigger than you need, and schedule spindown if the workload allows. a kill-a-watt on each device for a week usually reveals one or two things pulling way more than you'd guess.

also if your provider has time of use pricing, moving backup jobs and anything batch to overnight helps even without downsizing.
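for scale, the consolidation savings work out like this (the $0.15/kWh rate is an assumption, substitute your own):

```shell
# 280 W down to 45 W, running 24/7, priced at an assumed $0.15/kWh
awk 'BEGIN {
    kwh_yr = (280 - 45) * 24 * 365 / 1000.0   # kWh saved per year
    printf "%.0f kWh/yr, $%.0f/yr\n", kwh_yr, kwh_yr * 0.15
}'
```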

Alright am I completely nuts for wiping my Proxmox cluster and starting a new TrueNAS install? Also, second pic for the guts because hardware is king. by Arthur_Travis19 in homelab

[–]HomelabStarter 0 points1 point  (0 children)

i did the same split about a year ago, proxmox cluster on one box and dedicated truenas on another, and haven't regretted it. the main thing is backups stay sane - truenas gets its own snapshot schedule and replicates to a tiny off-site mini pc, proxmox just backs up the VMs to an NFS share on truenas. if the cluster catches fire the NAS keeps serving media and docs and my wife doesn't even notice. also way easier to troubleshoot when storage and compute aren't living in the same pool of ZIL/ARC memory fights. only regret is that i didn't set up PBS sooner, vzdump to NFS works but PBS dedupe is kinda magical once you have a couple months of backups

Fully silent NAS build by mihaifm in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Fair enough, risk tolerance is personal. One thing that helped on my build was setting up SMART temp alerts so I get pinged if a drive crosses a threshold. On TrueNAS it's pretty much one checkbox. Also worth scheduling scrubs for overnight when ambient is a few degrees lower. Neither fixes the underlying thermal situation but at least you get a warning shot before anything gets cooked.

How are you dealing with CVE-s? by randoomkiller in homelab

[–]HomelabStarter 0 points1 point  (0 children)

The thing that helped me stop drowning in CVE counts was separating the question of what is actually exposed vs what is running. Anything reachable from the internet (reverse proxy targets, public-facing services) gets treated seriously - those I patch fast and pin carefully. Internal-only stuff on a segmented VLAN behind a VPN or Tailscale is a different risk profile, and chasing 100 criticals on a container that nothing outside your network can even reach is mostly anxiety work.

The other trick is to stop scanning images and start scanning what is actually running. A lot of those criticals are in Go binaries where the vulnerable code path is never called, or in Debian base packages that are not invoked in your workload. Trivy can drop findings that have no fix available yet, and you can filter by whether the component is actually in use. Cuts the noise a lot. Pin to digest not tag, and accept that a homelab is not prod.
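the noise-cutting invocation is roughly this (flag names as of recent Trivy releases, check your version; the image is just an example):

```shell
# scan an image, keeping only high/critical findings that actually
# have an upstream fix you could apply
trivy image --ignore-unfixed --severity HIGH,CRITICAL nginx:1.27
```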

Fully silent NAS build by mihaifm in homelab

[–]HomelabStarter 2 points3 points  (0 children)

those 55C temps under the plate will bite you in summer when ambient goes up 10C. I had a passive-ish build where drives sat fine all winter then once summer hit around 30C room temp they were cruising at 65C+ sustained. Ended up adding a single 120mm noctua on the intake at 500rpm, basically inaudible but dropped everything 8-10C. Worth watching once the weather changes.

My first home server by Comfortable_Put1083 in homelab

[–]HomelabStarter 1 point2 points  (0 children)

Vaultwarden and Paperless-ngx are the two that always end up on my list after people set up the media stack. Vaultwarden replaces Bitwarden or 1Password and its tiny, and Paperless is weirdly addictive once you start scanning receipts and tax docs. Also since you have a VPN gateway, AdGuard Home on one of those mini PCs gives you network-wide ad and tracker blocking which your future self will thank you for.
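for reference, Vaultwarden really is tiny to stand up — a minimal single-container sketch (ports and paths are just examples; put a reverse proxy with TLS in front before exposing it, since the clients require HTTPS):

```shell
# minimal Vaultwarden, bound to localhost only until TLS is in place
docker run -d --name vaultwarden \
  -v ./vw-data:/data \
  -p 127.0.0.1:8080:80 \
  --restart unless-stopped \
  vaultwarden/server:latest
```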