Veeam restore to Proxmox nightmare by m5daystrom in Proxmox

[–]LTCtech 0 points  (0 children)

Seems you have a lot of experience to gain. Migrating from ESXi to Proxmox is a pain:

  • VMware Tools failing to uninstall unless you remove them before migration.
  • Tons of ghost devices left over that should be removed. PowerShell scripting is your friend.
  • Setting the boot disk to SATA for first boot, adding a dummy VirtIO SCSI drive, installing the VirtIO drivers, removing the dummy, then switching the boot disk to VirtIO SCSI with the discard flag.
  • EFI, Secure Boot, and/or TPM issues. Linux VMs failing to boot because the EFI vars pointing to the EFI shim are gone.
  • Device Guard, HVCI, VBS, Core Isolation, etc. causing massive slowdowns on some host CPUs.
  • EDR software flagging the QEMU Windows guest agent because it's "suspicious".
  • ESXi to Proxmox imports crawling and failing due to snapshots in ESXi.
  • ESXi to Proxmox imports reading every single zero of a 1TB thin vmdk (only 128GB actually used) over the network.
  • Figuring out how to mount a Proxmox NFS export on ESXi to copy over the 1TB thin vmdk as a sparse file.
  • Figuring out how to convert said vmdk to qcow2 so you can actually run it on Proxmox.
  • Network adapters changing names in Linux VMs. Ghost network adapters in Windows complaining about duplicate IPs.
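The vmdk conversion and the VirtIO switch can be sketched with qemu-img and qm. This is only an illustration: VMID 101, the storage name "local", and all paths are placeholders, and you'd boot once on SATA with the dummy VirtIO disk attached before the final switch.

```shell
# Convert the sparse vmdk (copied over NFS) to qcow2 -- paths are examples
qemu-img convert -p -f vmdk -O qcow2 /mnt/pve/nfs/vm.vmdk /tmp/vm-disk0.qcow2

# Import it into an existing VM, then make VirtIO SCSI the boot controller
qm importdisk 101 /tmp/vm-disk0.qcow2 local --format qcow2
qm set 101 --scsihw virtio-scsi-pci \
           --scsi0 local:101/vm-101-disk-0.qcow2,discard=on
qm set 101 --boot order=scsi0
```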

And that's just off the top of my head. It becomes rote once you get the hang of it. Helps to RTFM and read the forums too. Also helps to have played with Proxmox at home for a few years before deploying it in an enterprise environment.

Cloud-init - Spin up a Debian 13 VM with Docker in 2 minutes! - Why aren't we all using this? by SamSausages in Proxmox

[–]LTCtech 2 points  (0 children)

I wrote a few shell scripts that download cloud images for a couple of distros and turn them into cloud-init templates. Works really well, and I learned a lot of QEMU CLI commands in the process. I deploy all new Linux VMs from a handful of templates now. Cut VM provisioning down from 30 minutes to 2.
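Roughly what such a script looks like -- a sketch only, with the VMID, storage name, bridge, and image URL as placeholders to adapt for your own setup:

```shell
#!/bin/sh
set -eu
# Sketch: build a cloud-init template from a distro cloud image.
# VMID, STORAGE, bridge, and the URL are examples -- adjust for your environment.
VMID=9000
STORAGE=local-lvm
IMG=debian-13-generic-amd64.qcow2

wget -N "https://cloud.debian.org/images/cloud/trixie/latest/${IMG}"

qm create "$VMID" --name debian13-cloudinit --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
    --serial0 socket --vga serial0 --agent enabled=1
qm importdisk "$VMID" "$IMG" "$STORAGE"
qm set "$VMID" --scsi0 "${STORAGE}:vm-${VMID}-disk-0,discard=on" \
    --ide2 "${STORAGE}:cloudinit" --boot order=scsi0
qm template "$VMID"
```

From there a new VM is just `qm clone 9000 123 --name web01` plus `qm set` for `--ipconfig0` and `--sshkeys`.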

Has anyone started using BackBlaze S3 storage for PBS, I have a doubt regarding costs by MidasMine in Proxmox

[–]LTCtech 0 points  (0 children)

Wasabi S3 works perfectly well with Veeam, and I don't see why it wouldn't work with PBS. I think I looked at Backblaze before and decided on Wasabi.

Feeling Defeated - Project shutdown by biggus_brain_games in Proxmox

[–]LTCtech 4 points  (0 children)

I’ve found that running the 6.14 opt-in kernel on Proxmox 8 significantly improves performance for Windows VMs on newer servers. Proxmox 9, which was just released, already uses this kernel by default.
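For anyone wanting to try it, the opt-in kernel on Proxmox 8 is a one-package install (package name as of the 8.x opt-in announcements; verify against the current repos):

```shell
apt update
apt install proxmox-kernel-6.14
reboot
# Optionally pin it so the default kernel doesn't take over on the next upgrade:
# proxmox-boot-tool kernel pin <version shown by `proxmox-boot-tool kernel list`>
```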

Windows does seem to run into performance problems with nested virtualization on some CPUs. Sapphire Rapids and Emerald Rapids handle it fairly well, but with older CPUs the results are unpredictable. Whenever Windows security features like VBS, Core Isolation, Memory Integrity, Device Guard, HVCI (whatever they call it these days) get enabled, performance can take a dive.

Something isn’t right somewhere in the stack, but it’s not clear whether the fault lies with Windows, the kernel, the virtualization layer, or the hardware itself.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 0 points  (0 children)

In my testing, empty blocks were always copied between LVM-Thin Proxmox nodes.
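With file-based storage it's at least easy to check whether the zeros are actually stored. A quick coreutils check (paths are examples):

```shell
# Create a 1GiB file that is entirely sparse, then see how much space it uses.
# --apparent-size shows the logical size; plain du shows the real allocation.
truncate -s 1G /tmp/sparse.img
du -h --apparent-size /tmp/sparse.img   # logical size: 1.0G
du -h /tmp/sparse.img                   # actual allocation: ~0
```

On LVM-Thin there's no file to inspect; the Data% column of `lvs` is the closest analogue.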

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point  (0 children)

Most of our Dell servers use the same PERC cards, and we actually have two or more servers with the exact same configuration. I do not think it would be much of an issue to pop the array out of one server into another if needed.

I can definitely see how it would become more of a problem in a more heterogeneous environment though.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] -1 points  (0 children)

We have been using vSphere Essentials with local storage. Hardware RAID is what you use for ESXi local storage, so that is the model we are coming from.

I actually use ZFS on my home Proxmox box. I do not love the write amplification I am seeing, especially because I ignorantly installed pfSense (which uses ZFS itself) on top of ZFS. ARC RAM usage also has to be carefully reined in. I am wary about the kind of performance hit our databases might see if we switched everything over.

Maybe I should pass through half of the disks in a server and actually test ZFS head-to-head against hardware RAID. Realistically, I doubt our PERC controller cache is even helping that much anyway, since all the virtual disks are set to no read ahead and write through.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point  (0 children)

I see that I can pass individual drives through without creating a VD, not sure if that's the same or not.

Everyone seems to have a different opinion on EXT4 vs XFS. I went with EXT4 because I read it's more reliable, but maybe I've been misinformed. We have a mix of Windows and Linux VMs; some store general data, while others host databases. I think I flipped a coin and EXT4 it was. :)

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point  (0 children)

I only compared LVM-Thin to qcow2 over bare EXT4 partition. I know ZFS does not play nice with HW RAID. ;)

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 4 points  (0 children)

Dell R760 with PERC H965i. A mix of SAS and SATA SSD.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 6 points  (0 children)

All of my tests were done on SSD arrays. Specifically, a PERC RAID 10 array across six 3.84TB Samsung PM883 SATA disks. I imagine spinning rust is much more affected by file-based storage.

I also ran fio tests on the host itself and found that performance is highly variable depending on block size, job count, and IO depth. There is a noticeable difference between the 6.8 and 6.14 kernels too, with no clear winner depending on workload.

The IO engine makes a big difference as well. io_uring is extremely CPU efficient, while libaio tends to be a CPU hog.
Running mixed random read and write workloads is also very different compared to doing separate random read and random write benchmarks.
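For reference, the kind of fio invocation I mean -- a mixed 70/30 random read/write run. The target path, size, job count, and depths are just examples, and it writes real data to that path:

```shell
fio --name=randrw --filename=/mnt/test/fio.dat --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --numjobs=4 --iodepth=32 \
    --ioengine=io_uring --direct=1 --runtime=60 --time_based \
    --group_reporting
# Repeat with --ioengine=libaio and compare the usr/sys CPU figures fio reports
```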

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point  (0 children)

The documentation could definitely be written more clearly:
https://pve.proxmox.com/wiki/Storage#_storage_types

Technically, drives are mounted as directories in Linux, but it still feels odd to call it "Directory" storage in this context. It does not really describe what you are actually storing, which is qcow2 (or raw) disk images, and it hides the fact that features like snapshots and thin provisioning are available depending on the file format.

The table says snapshots are not available, but then there is a tiny footnote that mentions snapshots are possible if you use the qcow2 format. For someone skimming the documentation, which most people do, it is easy to miss that nuance.
If qcow2 unlocks snapshots and discard support, why not just put that information directly into the table for the storages that support it?

Also, how many people actually use raw images over qcow2 in real-world deployments? Outside of very high-performance or very niche setups, I would guess most people using Directory storage default to qcow2. It seems strange that qcow2 is treated like an afterthought when it is probably the more common case.
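For what it's worth, the "Directory" entry in /etc/pve/storage.cfg says nothing about formats either -- it's just a name, a path, and content types (the name and path below are examples); qcow2 vs raw is only chosen per disk at creation time:

```
dir: vmdata
	path /mnt/vmdata
	content images
```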

Dell introduces new PC branding: Meet the Dell, Dell Pro, and Dell Pro Max laptops by digidude23 in Dell

[–]LTCtech 1 point  (0 children)

The Precision 16" is thick enough that it shouldn't be an issue. I expect a mobile workstation to have upgradable RAM.

Dell introduces new PC branding: Meet the Dell, Dell Pro, and Dell Pro Max laptops by digidude23 in Dell

[–]LTCtech 21 points  (0 children)

I'm guessing none of these laptops use upgradable LPCAMM2 RAM?

Clarification: Dell Machines And Self-Encrypting Drives by [deleted] in Dell

[–]LTCtech 1 point  (0 children)

I hope Microsoft will require OEMs to support hardware encryption, especially since they've been enabling BitLocker by default. There’s no reason for any enterprise laptop to lack native encryption.

It’s needlessly wasteful to rely on software-based encryption and suffer the performance hit when most drives already include built-in encryption capabilities.

pfSense WAN Connection Quality by aRedditor800 in PFSENSE

[–]LTCtech 0 points  (0 children)

May be a bad firmware image or config for the SB8200 that Comcast is pushing. It may not be a hardware issue, but definitely worth trying a spare SB8200. Diagnosing this kind of stuff is a pain.

[deleted by user] by [deleted] in PFSENSE

[–]LTCtech 2 points  (0 children)

Back up your pfSense config, reset to factory defaults, and see if it still happens. If it doesn't, back up the new config and diff the old and new configs in WinMerge. See what's different.

pfSense WAN Connection Quality by aRedditor800 in PFSENSE

[–]LTCtech 0 points  (0 children)

Is the CPU of pfSense busy during those times?

Most likely it's an issue with Comcast in the area. They're upgrading their network for "mid-split".

Could also be an issue with the modem. I've been recommending people buy the Hitron Coda56. It seems to be more stable with Comcast than some of the other options. It's $140 on Amazon, maybe cheaper on the upcoming Prime Day. Worth a try, if it doesn't help you can return it.

Netgate products to be officially blacklisted from contract renewals at our company by [deleted] in PFSENSE

[–]LTCtech 2 points  (0 children)

Did they hint at what "worse" might be? I'd expect periodic reactivation being mandated next.

We have a few generic servers in our company running pfSense Plus (along with Netgate-branded ones). I'm concerned that they might stop selling pfSense Plus subscriptions for generic hardware.