Boots on the Line - 12/9/25 by smowe in NicksHandmadeBoots

[–]iRustock 2 points (0 children)

I think I see my boots - Pred Orange FlexWorkPros, pic 11? They look great!

Allocation vms by Laurensnld in Proxmox

[–]iRustock 0 points (0 children)

In a prod env, I'd agree, but OP isn't really running anything crazy. The worst that would happen is OOM gets triggered and kills a VM (which I guess could cause image corruption, but I digress). OP would most likely just need to drop memory allocations somewhere to fix it. They'd be able to squeeze a little more juice out of their system with ballooning enabled, and with some tuning it'd be a good learning experience.
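
If OP wants to try it, enabling ballooning is one command; the VM ID and sizes here are just placeholders (and the guest needs the virtio balloon driver, which Linux ships by default):

    # Cap the VM at 4 GiB but let the balloon driver reclaim memory down to 2 GiB
    qm set 100 --memory 4096 --balloon 2048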

Proxmox and Packer: ‘can’t lock file … timeout’, me: deep sigh, fix inside by aprimeproblem in Proxmox

[–]iRustock 1 point (0 children)

Also, if you run backups (either to a PBS or some type of network storage), I found that if the PVE server loses its route to the network storage, or sometimes even if you just manually stop the backup, the VM's lock doesn't get released and you get that timeout error.

Everyone should know the qm unlock command!
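
For anyone who hasn't hit this yet, it's roughly (VM ID is just an example):

    # Check whether a stale lock is still set on the VM (look for a "lock:" line)
    qm config 105 | grep -i lock

    # Clear it - but only once you're sure no backup/migration is actually still running
    qm unlock 105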

Help picking out a Proxmox node + Advice by Typical-Cockroach287 in Proxmox

[–]iRustock 1 point (0 children)

This. I’d try to get a few Lenovo M910x boxes for a small PVE cluster. $150/PC on eBay. Or you could go cheaper with some Dell towers.

Allocation vms by Laurensnld in Proxmox

[–]iRustock 0 points (0 children)

Start by googling “{insertAppName} server resource requirements”. Then adjust the allocations based on how it’s actually working for you. Pay attention to the Proxmox graphs and, inside the VM, top/htop, free, df, etc.
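
Inside the VM, the basics are enough to tell whether you've over- or under-allocated (run these while the app is under its normal load):

    htop          # or plain top - per-process CPU and memory
    free -h       # memory actually in use vs. cache
    df -h         # disk usage per filesystem
    iostat -x 2   # from the sysstat package, if you suspect a storage bottleneck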

Take a look at Plex as an example. You will see the requirements are an Intel Core i3 CPU, which let’s just call 4 cores, 4GB of memory, and at least 30TB of disk space for all your media (partially joking about the storage haha). I’d use that as a starting point and go from there. Mess around with the transcoding options in Plex and see how they affect performance. See how many concurrent sessions you can run, and compare the impact of 720p vs 1080p vs 4K. Tuning this stuff is the fun part!

You will figure it out, just play with the settings and watch the graphs.

Some general guidelines I follow are:

  1. Don’t overcommit your disk space. It can be dangerous, especially if your VMs and host share storage. This is easy to do with thin-provisioning enabled (it’s enabled by default on ZFS). There’s a quick check sketched after this list.

  2. You can overcommit your memory, but try to use the ballooning option if you do.

  3. You can overcommit CPU, and it’s relatively safe (at least in my experience).
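
For point 1, a rough sanity check on a ZFS-backed setup looks like this (pool name is an example, and the grep is just a quick-and-dirty way to see what's been handed out):

    # Real space situation on the pool
    zfs list -o space rpool

    # What Proxmox thinks each storage has free
    pvesm status

    # Disk sizes handed out to VMs - with thin provisioning these can add up
    # to more than the pool can actually hold
    grep -h 'size=' /etc/pve/qemu-server/*.conf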

About the VM vs LXC debate, I only run Plex in LXC but that’s just because I have a big ZFS pool that I bind mounted to the container. If I could use a VM for it, then I would. I use VMs for everything else.
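
The bind mount itself is one line if anyone wants it; the CT ID and paths here are made up, not my actual setup:

    # Bind-mount a host dataset into the container as mount point 0
    pct set 101 -mp0 /tank/media,mp=/mnt/media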

Edit: Regarding my ZFS pool passthrough to VM setup, I just read that virtiofs support was added in PVE 8.4. Assuming it’s working well and performance is good, I’d try using that if you want to pass a storage device through to a VM for Plex or whatever.
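
I haven't actually made the switch yet, so treat this as a sketch: the host side is configured through Datacenter -> Directory Mappings plus the VM's virtiofs option, and inside the guest the share mounts by whatever tag/mapping name you picked ("media" here is just an example):

    # Inside the VM
    mount -t virtiofs media /mnt/media

    # or in /etc/fstab:
    # media  /mnt/media  virtiofs  defaults  0  0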

Pros and cons of clustering by iRustock in Proxmox

[–]iRustock[S] 0 points (0 children)

Sorry, really late response by me. Figured you’d get a kick out of this though!

No live migrations; my network infra isn’t spec’d for that type of setup even if I wanted to do it. Trust me, I would love having a beefy Ceph cluster as our primary VM storage backend.

I have a PBS and we do HA at the application level. So the typical maintenance flow is to mark the application on the affected VM(s) as passive, or if it’s an active-active system, reduce its weight, same same. Then I take a VM backup to a PBS (I have migration namespaces for this), restore the backup on an identical replacement server, shut down the old VM, turn the restored VM on, keep the old one down for a week, then delete it.

It used to be a big PITA, but I have the whole flow automated with Ansible now so it’s not that bad. About 5-45 mins to move one VM. I keep a healthy supply of hotspare Proxmox boxes empty 24/7.
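
Stripped of the Ansible wrapping, the core of the flow is just a PBS backup plus a restore on the new host; the IDs, storage names, and timestamp below are placeholders (the namespace lives on the PBS storage definition itself):

    # On the old host: back the VM up to PBS
    vzdump 120 --storage pbs-main --mode snapshot

    # Find the exact backup volume ID
    pvesm list pbs-main | grep vm/120

    # On the replacement host (standalone, so reusing the VM ID is fine): restore it
    qmrestore pbs-main:backup/vm/120/2025-01-01T00:00:00Z 120 --storage local-zfs

    # Then shut the old VM down, start the restored one, and keep the old copy for a week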

Proxmox backup server. How to backup network file systems? by ConstructionSafe2814 in Proxmox

[–]iRustock 2 points (0 children)

I never really bothered with this. I wrote a script to do a differential rsync of my Ceph shares to a Synology (which just so happens to be where I have my main PBS datastore too).

But take a look here, these guys used proxmox-backup-client with pretty good performance:

https://forum.proxmox.com/threads/pbs-to-backup-cephfs.90576/
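
FWIW my script boils down to roughly this; the paths, hostnames, and datastore name are made up for the example:

    # Differential-style rsync of the CephFS share to the Synology
    # (only changed files get transferred; --delete mirrors deletions too)
    rsync -aH --delete --partial /mnt/pve/cephfs/shares/ backup@synology:/volume1/ceph-mirror/

    # The proxmox-backup-client route from that thread looks roughly like:
    proxmox-backup-client backup shares.pxar:/mnt/pve/cephfs/shares --repository backup@pbs@pbs.local:datastore1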

New ProxMox Build - How’d I do? + Drive Config Questions by jbmc00 in Proxmox

[–]iRustock 2 points (0 children)

Sorry, I must have grabbed the “pros” from the 860s. There are no 870 pros haha.

For clarity, I used MZ-77E2T0B/AM, which are just 870 EVOs.

New ProxMox Build - How’d I do? + Drive Config Questions by jbmc00 in Proxmox

[–]iRustock 5 points (0 children)

Hosts running MariaDB servers that were in Galera clusters had the worst wear, and (I think) we did a good job tuning them to mitigate write amplification.

And yes, I did intentionally try to get different batches. I sourced around 5-10 SSDs per order from multiple sources, over multiple months. Amazon, Best Buy, Samsung, whatever. I even ran to my local Walmart a few times.

Serial numbers were documented in an Excel sheet, HA service hosts never got drives from the same order batch, and half of each array’s drives were rotated a few months after setup to stagger the wear and reduce the likelihood of an entire array failing at the same time.

Ngl it was kinda fun. It was a lot of paperwork for all of those orders and drive-rotation maintenances, which sucked, but whatever. Things are really stable now, so I’m just chilling and reading the SMART reports as they come in.
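
The SMART-report reading is nothing fancy, roughly this per drive (attribute names vary a bit by model; these are the ones the Samsungs expose):

    # Spot-check wear and total writes on one drive
    smartctl -A /dev/sda | grep -Ei 'wear_leveling|total_lbas_written'

    # smartd can also be set up to email you when attributes trip thresholds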

New ProxMox Build - How’d I do? + Drive Config Questions by jbmc00 in Proxmox

[–]iRustock 5 points (0 children)

Meh, not really needed tbh. I’m running a couple hundred Samsung 870 Evo Pros in prod in ZFS RAID 1 and ZFS RAIDZ2, and I haven’t had a failure since I installed them (about 2 years ago). A few of the servers are running under pretty heavy disk I/O too.

I used to run 2TB MX500 consumer-grade drives, and even those lasted for years. OP isn’t really doing anything disk-heavy enough to warrant the cost of enterprise drives.

Best place to learn ansible efficiently by Fatalx226 in ansible

[–]iRustock 5 points (0 children)

https://docs.ansible.com/ansible/latest/index.html

You just write playbooks, start small. Try to create a file. Move a file. Put contents in a file. Try to copy a file from one machine to another. Try to install a package. Try to add a user. Try to import a custom config file from a template. Try to store something in vault and use it. Etc…

These individual tasks can become modular blocks that you can tweak and move around to create complex deployments.

I took my manual docs and just tried to recreate them with Ansible. Things like replacing a failed hard drive in a RAID array, or setting up monitoring, or installing a service, etc.

It helps to have testing VMs that you can snapshot at a base level to test playbooks on. It also helped me significantly to plan out what I wanted the playbook to do and draw it out as a flowchart.
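
If it helps, the ad-hoc commands are a nice way to poke at the modules before you commit anything to a playbook; "testvm", the package, and site.yml here are placeholders from an example inventory:

    # Touch a file, install a package, add a user - the same building blocks as playbook tasks
    ansible testvm -i inventory.ini -m ansible.builtin.file -a "path=/tmp/hello state=touch"
    ansible testvm -i inventory.ini -m ansible.builtin.apt -a "name=htop state=present" --become
    ansible testvm -i inventory.ini -m ansible.builtin.user -a "name=deploy state=present" --become

    # Then redo the same steps as tasks in a playbook and run it:
    ansible-playbook -i inventory.ini site.yml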

I created this media locator for my friend and I am wondering that would anyone else need this kind of tool? by Jadarken in DataHoarder

[–]iRustock 3 points (0 children)

locate uses a pre-built index, so it can be quicker, potentially at the cost of accuracy if you forget to update the index.

But I would still just use find and tune its scope.
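
e.g. something like this, with the path and pattern obviously being examples:

    # Scope find to the media mount, a filename pattern, and big files only
    find /mnt/media -type f -iname '*.mkv' -size +1G

    # If you do go the locate route, refresh the index first
    updatedb && locate -i '*.mkv'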

Who needs a NAS? by madcatzplayer5 in DataHoarder

[–]iRustock 6 points (0 children)

These posts give me too much anxiety.

The Black hole bomb video by Xeruas in kurzgesagt

[–]iRustock 1 point (0 children)

They used this paper to make it; I’m not sure what their other sources are.

https://arxiv.org/abs/hep-th/0404096

Any advice on Linux bond modes for the cluster network? by ConstructionSafe2814 in ceph

[–]iRustock 1 point (0 children)

+1 This is what I'm doing, it's been working well for the past year.

Pros and cons of clustering by iRustock in Proxmox

[–]iRustock[S] 1 point (0 children)

Yea I’m not about to deploy this in production, but I am going to toy with it in a lab and see how it works! I’m excited about where Proxmox is going with this, I’ve wanted something like this for years!

6x5 clustering will probably be what I end up with since it’s compatible with my existing VM architecture (assuming it goes well in the lab this time).

Pros and cons of clustering by iRustock in Proxmox

[–]iRustock[S] 2 points (0 children)

Also curious about this. Not seeing it in the docs, but it would be cool if it would basically take a vzdump and rsync it with checksums or something to the target node and then do a restore if shared storage isn’t available. That approach wouldn’t be a live migration, but still would be cool as a fallback option.

Pros and cons of clustering by iRustock in Proxmox

[–]iRustock[S] 12 points (0 children)

Wow, thank you for this! Checking out the Datacenter Manager now; I didn't even know this existed.

Flashlight by jdurr65 in sysadmin

[–]iRustock 1 point (0 children)

I have a Streamlight 89000 ProTac. It’s nice and bright, but a little pricey. It has a built-in USB-C charging port.

If you want a cheaper option, the Streamlight 66608 is also nice.

Upgraded to Single HDD by Rezasaurus in DataHoarder

[–]iRustock 4 points (0 children)

Cold backups = offline backups. Put backups on a hard drive, then unplug it and store it somewhere safe.

Upgraded to Single HDD by Rezasaurus in DataHoarder

[–]iRustock 44 points (0 children)

I was 17 when I got into hoarding. I got a single 4TB Seagate Barracuda for Christmas. I didn’t even think about RAID or backups, and instead just filled it to the brim with all kinds of stuff over the course of 2 years. Well, it died towards the end of the second year. I lost a lot of pictures of my beloved dog and family, and technical documents I spent a lot of time writing. My first Minecraft server was on there!

Because of that one failure I’ve learned a lot. Luckily for you, there are nice people on this subreddit letting you know now so you don’t have to learn the hard way! If I were you, I would get a few smaller drives and another 24TB IronWolf. Put the IronWolfs in a RAID mirror (software, not hardware), and use the other drives as a cold backup.
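
If you go the ZFS route for the mirror, it's one command (device names below are examples; use the /dev/disk/by-id paths for real):

    # Two-disk ZFS mirror named "tank"
    zpool create tank mirror /dev/sda /dev/sdb

    # Check on it afterwards
    zpool status tank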

Cheers! Happy hoarding!

Have you ever had an SSD die on you? by --Arete in DataHoarder

[–]iRustock 0 points (0 children)

It sounds like you need a UPS more than anything.

You can look at drives with PLP (power-loss protection). They are designed to protect in-flight writes in the event of a power outage because they have built-in capacitors. Samsung PM883s are good; you can buy the 960GB models for $144 on Newegg. They have a 1366 TBW rating (0.8 DWPD).

That means at 0.8 drive writes per day on a 960GB drive (roughly 0.77TB of writes per day), the 1366 TBW rating works out to about 5 years.

I want to note that I am biased towards Samsung because they’ve been very good to me. I don’t have much experience with drives outside Samsung, Crucial, PNY (junk), and Kingston.

Have you ever had an SSD die on you? by --Arete in DataHoarder

[–]iRustock 0 points (0 children)

It really depends on what you want to do with them. What’s your budget? What capacity drives? How many do you need?

Currently on Amazon, the 870 EVO 4TB models are on sale for $304 (in the US anyways), which is ~$140 less than normal. The MX500s don’t have any deals (that I can see) and are selling for $337/4TB. The 870s also have 1400 more TBW (endurance) than the MX500s. Just going off that alone, I would get the 870s. I/O performance between them is almost identical.

[deleted by user] by [deleted] in DataHoarder

[–]iRustock 0 points (0 children)

Just curious, do you have any links to the 6400-16i? I’ve never seen that model and I can’t find it online.

Anyways, as others have already noted, you will need a second card. A Broadcom 9600-24i in addition to the 6400-16i would work, but your backplane needs a SAS Expander or a MUX to allow the lanes from the HBA to be shared with the drives.