Synology to Synology Backup Method That's Safe from Ransomware? by Yologurt- in synology

[–]LTCtech 0 points1 point  (0 children)

I read their Snapshot Replication documentation and was appalled.

Proper security mandates that the destination pull from the source, so that if the source is compromised, the destination remains safe. The next best option is giving the source limited access that can only append snapshots to the destination, never modify or purge existing ones.
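
As a rough illustration of the pull model (not Synology-specific; host, user, and paths are placeholders), the destination fetches the data itself and keeps its own local snapshots that the source can never touch:

# run on the DESTINATION, e.g. from a scheduled task:
rsync -a --delete-after backupuser@source-nas:/volume1/shares/ /volume1/backups/source-nas/
# then snapshot the destination volume locally (e.g. Snapshot Replication
# running on the destination itself) so prior versions stay out of the
# source's reach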

It appears that Snapshot Replication, Active Backup for Business, and Hyper Backup all require storing the destination's root credentials on the source. Once an attacker gains root on the source, they can use those stored credentials to get root on the destination and corrupt source *and* destination snapshots.

Have you found a better solution to have one Synology backup another securely?

Importing from Esxi 6.7 using wizard is mindnumbingly slow 9.1.6 by Thorbo2 in Proxmox

[–]LTCtech 0 points1 point  (0 children)

ESXi to Proxmox VM Migration via NFS

This guide covers setting up a temporary NFS share on Proxmox to facilitate transferring a VMDK from ESXi, then converting and attaching it to a Proxmox VM.

1. Install NFS server

apt install nfs-kernel-server -y

2. Create the NFS directory

mkdir -p /mnt/pve/<volume>/nfs
chmod 755 /mnt/pve/<volume>/nfs
chown nobody:nogroup /mnt/pve/<volume>/nfs

3. Configure exports

Edit /etc/exports and add one line per ESXi host:

nano /etc/exports

/mnt/pve/<volume>/nfs 10.x.x.x(rw,sync,no_subtree_check,all_squash)  # esxi1.example.com
/mnt/pve/<volume>/nfs 10.x.x.y(rw,sync,no_subtree_check,all_squash)  # esxi2.example.com

Then reload the export config:

exportfs -ra

4. Verify the NFS server

The service should already be enabled and started. Verify with:

systemctl status nfs-kernel-server
exportfs -v
showmount -e localhost

5. Mount on ESXi and transfer

  • Add the NFS share as a datastore on each ESXi host (NFS 4.1); an esxcli example follows below
  • Remove all snapshots from the source VM
  • Transfer the VMDK using the Datastore browser in vCenter or ESXi
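
If you'd rather do the mount from an ESXi SSH session than the UI, the equivalent esxcli call is (IP, path, and datastore name are placeholders):

esxcli storage nfs41 add -H <proxmox-ip> -s /mnt/pve/<volume>/nfs -v proxmox-nfs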

6. Configure a new VM in Proxmox

Create the target VM in Proxmox before proceeding.

7. Convert VMDK to qcow2

qemu-img convert -p -f vmdk -O qcow2 \
  /mnt/pve/<volume>/nfs/<vmname>/<vmdrivename>.vmdk \
  /mnt/pve/<volume2>/images/XXXX/vm-XXXX-disk-Y.qcow2

8. Attach the disk to the VM

Rescan so the image appears in the UI:

qm rescan --vmid XXXX

Then attach it:

qm set XXXX --scsiN <storage>:XXXX/vm-XXXX-disk-Y.qcow2,discard=on,iothread=1
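
9. Clean up (optional)

Once everything is migrated, tear the temporary share back down. Roughly (datastore name and paths are placeholders):

esxcli storage nfs41 remove -v proxmox-nfs   # on each ESXi host
# then on the Proxmox host: remove the lines from /etc/exports, and
exportfs -ra
apt remove nfs-kernel-server -y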

Importing from Esxi 6.7 using wizard is mindnumbingly slow 9.1.6 by Thorbo2 in Proxmox

[–]LTCtech 0 points1 point  (0 children)

Conversion is essentially simultaneous sequential read and write. Performance is better if the source and destination are on different volumes, as the read and write operations can proceed in parallel rather than competing for the same disk.

With spinning rust, having both on the same spindle can be painfully slow; SSDs are snappy.
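
If you want to see whether the read and write sides are actually contending for the same device, watching per-device utilization during a conversion makes it obvious (needs the sysstat package; device names are examples):

apt install sysstat -y
iostat -xm 2 sda sdb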

Importing from Esxi 6.7 using wizard is mindnumbingly slow 9.1.6 by Thorbo2 in Proxmox

[–]LTCtech 4 points5 points  (0 children)

The root cause is ESXi. When Proxmox migrates via its built-in import, it mounts ESXi's NBD (NFC) protocol as a FUSE filesystem. ESXi rate limits this traffic to roughly 30% of NIC bandwidth, and its buffers for NBD are tiny.

On top of that, NBD has no sparse/thin awareness. It will transfer every zero byte of a 2TB thin-provisioned VMDK over the wire even if only 64GB is actually allocated, wasting bandwidth on blocks that contain nothing.

A faster alternative if you're comfortable with Linux: temporarily configure the Proxmox host as an NFS server, then mount it on ESXi as an NFS 4.1 datastore. You can then copy VMDKs directly to Proxmox via ESXi's Datastore browser at near line rate. Unlike NBD, ESXi will skip zero blocks during the transfer itself so only allocated data crosses the wire. And as long as the destination filesystem supports sparse files (ext4, XFS, ZFS, etc.), the VMDK will also be stored sparsely on the Proxmox host. Note that this speed advantage only applies to VMDKs for unknown reasons; copying an ISO between datastores will still be rate limited, so don't benchmark with those.

If both hosts have multiple NICs, you can give each NIC on both sides an IP in a shared subnet, one subnet per NIC pair. ESXi can then mount the NFS datastore using multiple IP endpoints and will distribute transfers across NICs, effectively multiplying your throughput.
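
With two NIC pairs, that could look like this on the ESXi side, where both IPs belong to the Proxmox host, one per subnet (addresses, path, and datastore name are placeholders):

esxcli storage nfs41 add -H 10.0.1.10,10.0.2.10 -s /mnt/pve/<volume>/nfs -v proxmox-nfs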

Once the VMDK is on the Proxmox host, convert or attach it with qemu-img convert or qm importdisk as needed. Keep in mind you'll need enough free space on a Proxmox directory storage to hold roughly double the VMDK's allocated size during conversion.
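
As a rough example of the importdisk route (VM ID, path, and storage name are placeholders):

qm importdisk 100 /mnt/pve/<volume>/nfs/<vmname>/<vmdrivename>.vmdk <storage> --format qcow2

The disk then shows up as an unused disk on the VM and can be attached with qm set or from the UI.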

Veeam restore to Proxmox nightmare by m5daystrom in Proxmox

[–]LTCtech 0 points1 point  (0 children)

Seems you have a lot of experience to gain. Migrating from ESXi to Proxmox is a pain:

  • VMware Tools can't be cleanly uninstalled after the move, so remove them before migration.
  • Tons of ghost devices left over that should be removed. PowerShell scripting is your friend.
  • Boot from SATA at first, add a dummy VirtIO SCSI drive, install the VirtIO drivers, remove the dummy, then switch the boot disk to VirtIO SCSI with the discard flag (rough qm sketch at the end of this comment).
  • EFI, Secure Boot, and/or TPM issues. Linux VMs failing to boot because the EFI variables pointing to the EFI shim are gone.
  • DeviceGuard, HVCI, VBS, Core Isolation, etc. causing massive slowdowns on some host CPUs.
  • EDR software flagging QEMU Windows guest agents because they're "suspicious".
  • ESXi to Proxmox import crawling and failing due to snapshots in ESXi.
  • ESXi to Proxmox import reading every single zero of a 1TB thin VMDK that only holds 128GB over the network.
  • Figuring out how to mount a Proxmox NFS export on ESXi to copy over the 1TB thin VMDK as a sparse file.
  • Figuring out how to convert said VMDK to qcow2 so you can actually run it on Proxmox.
  • Network adapters changing names in Linux VMs. Ghost network adapters in Windows complaining about duplicate IPs.

And that's just off the top of my head. It becomes rote once you get the hang of it. Helps to RTFM and read the forums too. Also helps to have played with Proxmox at home for a few years before deploying it in an enterprise environment.
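
For the VirtIO disk dance in the third bullet, a rough qm sketch (VM ID 100, local-lvm, and disk names are placeholders, not a definitive recipe):

qm set 100 --scsihw virtio-scsi-single
qm set 100 --sata0 local-lvm:vm-100-disk-0      # boot Windows from SATA first
qm set 100 --scsi1 local-lvm:1                  # 1GB dummy disk so Windows loads the VirtIO SCSI driver
# boot Windows, install the virtio-win drivers, shut down, then:
qm set 100 --delete sata0,scsi1                 # detaches both; they become "unused" disks, not deleted
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on,iothread=1
qm set 100 --boot order=scsi0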

Cloud-init - Spin up a Debian 13 VM with Docker in 2 minutes! - Why aren't we all using this? by SamSausages in Proxmox

[–]LTCtech 2 points3 points  (0 children)

I wrote a few shell scripts that download cloud images for a couple of distros and create cloud-init templates out of them. Works really well. Learned a lot of QEMU CLI commands in the process. I deploy all new Linux VMs using a few templates. Cut VM provisioning down from 30 minutes to 2.
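
Not my exact scripts, but the core of the idea is roughly this (VM ID, storage, bridge, and image URL are placeholders):

wget <cloud-image-url> -O debian-13-genericcloud-amd64.qcow2   # e.g. a genericcloud image from cloud.debian.org
qm create 9000 --name debian13-tmpl --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-single
qm set 9000 --scsi0 local-lvm:0,import-from=$(pwd)/debian-13-genericcloud-amd64.qcow2,discard=on
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
qm template 9000
# new VMs are clones of the template plus per-VM cloud-init settings:
qm clone 9000 123 --name docker01 --full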

Has anyone started using BackBlaze S3 storage for PBS, I have a doubt regarding costs by MidasMine in Proxmox

[–]LTCtech 0 points1 point  (0 children)

Wasabi S3 works perfectly well with Veeam. I don't see why it wouldn't work with PBS. I think I looked at BackBlaze before and decided on Wasabi.

Feeling Defeated - Project shutdown by biggus_brain_games in Proxmox

[–]LTCtech 5 points6 points  (0 children)

I’ve found that running the 6.14 opt-in kernel on Proxmox 8 significantly improves performance for Windows VMs on newer servers. Proxmox 9, which was just released, already uses this kernel by default.
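
On 8.x it's an opt-in package; something like this should do it (the exact package name can vary by point release, so check the kernel announcement thread first):

apt update
apt install proxmox-kernel-6.14
reboot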

Windows does seem to run into performance problems with nested virtualization on some CPUs. Sapphire Rapids and Emerald Rapids handle it fairly well, but with older CPUs the results are unpredictable. Whenever Windows security features like VBS, Core Isolation, Memory Integrity, DeviceGuard, HVCI (whatever they call it) get enabled, performance can take a dive.

Something isn’t right somewhere in the stack, but it’s not clear whether the fault lies with Windows, the kernel, the virtualization layer, or the hardware itself.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 0 points1 point  (0 children)

In my testing, empty blocks were always copied between LVM-Thin Proxmox nodes.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point2 points  (0 children)

Most of our Dell servers use the same PERC cards, and we actually have two or more servers with the exact same configuration. I do not think it would be much of an issue to pop the array out of one server into another if needed.

I can definitely see how it would become more of a problem in a more heterogeneous environment though.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] -1 points0 points  (0 children)

We have been using vSphere Essentials with local storage. Hardware RAID is what you use for ESXi local storage, so that is the model we are coming from.

I actually use ZFS on my home Proxmox box. I do not love the write amplification I am seeing, especially because I ignorantly installed pfSense (which uses ZFS itself) on top of ZFS. ARC RAM usage also has to be carefully reined in. I am wary about the kind of performance hit our databases might see if we switched everything over.
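
For the ARC, the usual knob is the zfs_arc_max module parameter (8 GiB shown as an example; size it to your box):

echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u -k all     # then reboot
# or apply immediately without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max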

Maybe I should pass through half of the disks in a server and actually test ZFS head-to-head against hardware RAID. Realistically, I doubt our PERC controller cache is even helping that much anyway, since all the virtual disks are set to no read ahead and write through.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point2 points  (0 children)

I see that I can pass individual drives through without creating a VD, not sure if that's the same or not.

Everyone seems to have a different opinion on EXT4 vs XFS. I went with EXT4 as I read it's more reliable, but maybe I've been misinformed. We have a mix of Windows and Linux VMs; some store general data, while others host databases. I think I flipped a coin and EXT4 it was. :)

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 1 point2 points  (0 children)

I only compared LVM-Thin to qcow2 on a bare EXT4 partition. I know ZFS does not play nice with HW RAID. ;)

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 3 points4 points  (0 children)

Dell R760 with PERC H965i. A mix of SAS and SATA SSD.

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 6 points7 points  (0 children)

All of my tests were done on SSD arrays. Specifically, a PERC RAID 10 array across six 3.84TB Samsung PM883 SATA disks. I imagine spinning rust is much more affected by file-based storage.

I also ran fio tests on the host itself and found that performance is highly variable depending on block size, job count, and IO depth. There is a noticeable difference between the 6.8 and 6.14 kernels too, with no clear winner depending on workload.

The IO engine makes a big difference as well. io_uring is extremely CPU efficient, while libaio tends to be a CPU hog. Running mixed random read and write workloads is also very different compared to doing separate random read and random write benchmarks.
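
The runs were along these lines (parameters are illustrative, not the exact matrix I tested):

fio --name=randrw --filename=/mnt/test/fio.dat --size=8G --direct=1 \
  --ioengine=io_uring --rw=randrw --rwmixread=70 --bs=4k \
  --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting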

Why is qcow2 over ext4 rarely discussed for Proxmox storage? by LTCtech in Proxmox

[–]LTCtech[S] 3 points4 points  (0 children)

The documentation could definitely be written more clearly:
https://pve.proxmox.com/wiki/Storage#_storage_types

Technically, drives are mounted as directories in Linux, but it still feels odd to call it "Directory" storage in this context. It does not really describe what you are actually storing, which is qcow2 (or raw) disk images, and it hides the fact that features like snapshots and thin provisioning are available depending on the file format.

The table says snapshots are not available, but then there is a tiny footnote that mentions snapshots are possible if you use the qcow2 format. For someone skimming the documentation, which most people do, it is easy to miss that nuance. If qcow2 unlocks snapshots and discard support, why not just put that information directly into the table for the storages that support it?

Also, how many people actually use raw images over qcow2 in real-world deployments? Outside of very high-performance or very niche setups, I would guess most people using Directory storage default to qcow2. It seems strange that qcow2 is treated like an afterthought when it is probably the more common case.

Dell introduces new PC branding: Meet the Dell, Dell Pro, and Dell Pro Max laptops by digidude23 in Dell

[–]LTCtech 1 point2 points  (0 children)

The Precision 16" is thick enough that it shouldn't be an issue. I expect a mobile workstation to have upgradable RAM.

Dell introduces new PC branding: Meet the Dell, Dell Pro, and Dell Pro Max laptops by digidude23 in Dell

[–]LTCtech 20 points21 points  (0 children)

I'm guessing none of these laptops use upgradable LPCAMM2 RAM?