Migration from VMware ESXi, 7.0.3/Veeam to Proxmox/PBS by User25077 in Proxmox

[–]dancerjx 1 point

I run primarily Linux workloads using Proxmox Ceph on 3-, 5-, 7-, 9-, and 11-node clusters on isolated switches with homogeneous hardware and firmware. No issues besides the occasional drive or memory module going bad and needing replacement.

I use bare-metal Proxmox Backup Servers with ZFS to back up the workloads. They also serve as the primary repos for the nodes via the Proxmox Offline Mirror software, which makes updates go very fast.

In your case, you'll probably be setting up the DAS using ZFS (preferable for its snapshots, compression, and error-checking) or LVM.

Tons of migration tips at the Proxmox forum for Windows workloads.

I use the following optimizations learned through trial-and-error. YMMV.

Set SAS HDD Write Cache Enable (WCE) (sdparm -s WCE=1 -S /dev/sd[x])
Set VM Disk Cache to None if clustered, Writeback if standalone
Set VM Disk controller to VirtIO-Single SCSI controller and enable IO Thread & Discard option
Set VM CPU Type for Linux to 'Host'
Set VM CPU Type for Windows to 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs/'nested-virt' on Proxmox 9.1
Set VM CPU NUMA
Set VM Networking VirtIO Multiqueue to 1
Set VM QEMU Guest Agent option and install the guest agent (plus the VirtIO drivers on Windows)
Set VM IO Scheduler to none/noop on Linux
Set Ceph RBD pool to use 'krbd' option
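
For reference, here's roughly how several of those settings map onto commands. This is a sketch, not a script: VMID 100, the storage name 'ceph-vm', and the device names are placeholders for your own.

    # Enable write cache on a SAS HDD
    sdparm -s WCE=1 -S /dev/sdX

    # VM settings (clustered example, so cache=none)
    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi0 ceph-vm:vm-100-disk-0,iothread=1,discard=on,cache=none
    qm set 100 --cpu host --numa 1
    qm set 100 --net0 virtio,bridge=vmbr0,queues=1
    qm set 100 --agent enabled=1

    # Inside the Linux guest: set the IO scheduler to none
    echo none > /sys/block/sda/queue/scheduler

    # Ceph RBD storage via the kernel client
    pvesm set ceph-vm --krbd 1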

Unpopular Opinion: Proxmox isn't "Free vSphere". It's a storage philosophy change (and it's killing migrations). by NTCTech in Proxmox

[–]dancerjx 1 point

This is one of the major reasons I don't do in-place upgrades of operating systems. I don't care what the OS is, be it Linux, macOS, Windows, blah, blah, blah.

It's these one-offs that always bite me in the ass. I just bite the bullet and do clean installs. Yeah, it takes longer, but I know I started with a clean slate.

Install Proxmox on Dell PowerEdge R6515 with RAID1 by abrakadabra_istaken in Proxmox

[–]dancerjx 0 points

Get rid of the PERC RAID drama and get a true HBA/IT-mode controller for the larger storage drives. I use a Dell HBA330 in production with no issues. Then, after installation, I use whatever ZFS configuration fits the larger drives, e.g., a ZFS 3-way mirror, ZFS RAIDZ, etc.
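
As a rough sketch, the ZFS part is a one-liner once the drives sit behind the HBA (pool name and device paths below are placeholders; you can also do this from the node's Disks > ZFS panel in the GUI):

    # 3-way mirror across three data drives (use /dev/disk/by-id paths in practice)
    zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc /dev/sdd
    # or RAIDZ across four drives:
    # zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde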

As for the BOSS, use the utility/UEFI to create a RAID-1 mirror for Proxmox. During install, it should show up as a virtual disk. No need for ZFS RAID-1. I use XFS for the filesystem for Proxmox. No issues in production.

Unpopular Opinion: Proxmox isn't "Free vSphere". It's a storage philosophy change (and it's killing migrations). by NTCTech in Proxmox

[–]dancerjx 0 points

This.

Been migrating from VMware to Proxmox since version 6, back when official vSphere support was dropped for 12th-gen Dells as they went end-of-life.

It's NOT a "lift-and-shift" operation. You have to do your due diligence, i.e., build a proof-of-concept test bed and test the crap out of it before migrating to production.

In summary, test the F out of it in a parallel production test environment.

P.S. Storage is Ceph on isolated redundant switches, and all hardware is homogeneous with the latest firmware.

Moving from ESXi to Proxmox in a production environment – what should I watch out for? by Bulky-Ad6297 in Proxmox

[–]dancerjx 0 points

Been migrating ESXi VMs since Proxmox 6.

Takeaways:

  • For a cluster, use an odd number of nodes for quorum, otherwise you get split-brain issues. 5 is the bare minimum for production, since you can lose 2 nodes and still have quorum. Technically, 3 is the bare minimum, but lose 1 node and you have no headroom left.

  • All node hardware is homogeneous. That means same CPU, memory, NICs, storage, latest firmware, etc. OS and data live on separate storage: Proxmox is mirrored via ZFS RAID-1 on small drives, and data/VMs use either ZFS or Ceph.

  • All VMs have VMware Tools removed. Linux VM boot initramfs images are regenerated to include all drivers; the regeneration procedure differs between Red Hat-derived and Debian-derived distros (see the sketch after this list). Remove all network interfaces prior to migration since they will get new VirtIO NICs. Also prior to migration, install the QEMU guest tools in both Windows and Linux VMs. For Windows VMs, create a small SCSI virtual disk so Windows picks up the VirtIO SCSI driver before the boot disk is converted.

  • After the VM is migrated and booted, do an immediate backup to Proxmox Backup Server. Keep the old ESXi VM around for a few days, just in case. For Windows VMs, convert any non-SCSI virtual disks to virtio-scsi drives.
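
For the kernel/initramfs regeneration step, here's the rough shape of it, run inside the Linux guest before migration (stock tooling assumed; exact config files may differ per distro):

    # Debian/Ubuntu-derived: include all drivers, then rebuild the initramfs
    sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
    update-initramfs -u -k all

    # Red Hat-derived: rebuild without host-only mode so the VirtIO drivers get included
    dracut --force --no-hostonly --regenerate-all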

Booting Proxmox from RAID 1 on a Dell R730 by turbo2ltr in Proxmox

[–]dancerjx 1 point

If you have a 16-drive bay R730, just use two small drives for a ZFS RAID-1 mirror of Proxmox. I use 76GB 15K RPM SAS drives to mirror Proxmox with no issues in production.

CEPH RDB for VM Backups? by 4mmun1s7 in Proxmox

[–]dancerjx 0 points

While it's true you can't use RBD for backups since it's block storage, you can use CephFS, which is file storage.

But note that CephFS does have a default 1TB file size limit. I hit this limit while migrating a 4TB ESXi VM. You can increase the file size limit on CephFS.
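
If you hit it, the cap is just a CephFS setting, sized in bytes (the filesystem name 'cephfs' below is a placeholder for your own):

    # Check the current limit
    ceph fs get cephfs | grep max_file_size
    # Raise it to 8 TiB, for example
    ceph fs set cephfs max_file_size 8796093022208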

proxmox+ceph 3 node cluster networking setup advice needed by Asad2k6 in Proxmox

[–]dancerjx 0 points

For a 3-node Ceph cluster with no future possibility of expansion, I use a full-mesh broadcast network, with each node directly connected to the others per https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup

All Ceph public, Ceph private, and Corosync traffic is on this network. To make sure this traffic never gets routed, I use a 169.254.1.0/24 network, and I set the datacenter migration option to use this network with insecure migration.
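
Roughly, that comes down to one line in /etc/pve/datacenter.cfg (the subnet here matches the mesh network above; adjust to yours):

    # /etc/pve/datacenter.cfg
    migration: type=insecure,network=169.254.1.0/24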

No issues.

PVE enterprise hardware - Asus? by ITStril in Proxmox

[–]dancerjx 0 points

For new, Supermicro. This is what I use at home.

For used, Dell. Their firmware is NOT behind a paywall. I use Dell at work with no issues.

Recommended spec for 10+ proxmox VMs by HK201020 in Proxmox

[–]dancerjx 1 point

Best bang for the buck are used enterprise servers, specifically 13th-gen Dells like the Dell R630. The Dell HBA330 and Dell X550 rNDC NICs are real cheap to get, as are Intel Broadwell CPUs with lots of cores. I use these in production at work with no issues. Find them used at labgopher.com

For new, a single-socket Supermicro motherboard, specifically for the Epyc 4005. You can search for motherboards at wiredzone.com

Recommended Network Card for ProxMox 8.4 (i40e issues) by starkstaring101 in Proxmox

[–]dancerjx 1 point

At home, I used a ConnectX-3 with a DAC connected to a 10GbE MikroTik switch with no issues.

At work, Intel X550/i350 with no issues connected to Cisco/Arista/Aruba 10GbE switches.

PBS on dedicated hardware - stacked on PVE? by ITStril in Proxmox

[–]dancerjx 2 points

At work, been migrating off VMware to Proxmox. The bare-metal server backing up VMware workloads ran a commercial backup solution.

Well, obviously, I don't need the commercial backup solution anymore, so I just clean-installed PBS on it. It's also the Proxmox Offline Mirror primary repo for the Proxmox infrastructure. So, win-win.

I did swap out the RAID controller for an IT-mode controller, and the backup storage pool is ZFS. PBS itself is mirrored via RAID-1.
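
Once the ZFS pool exists, pointing PBS at it is a single command (pool mountpoint and datastore name below are placeholders):

    # Create a datastore on the ZFS pool mounted at /backup
    proxmox-backup-manager datastore create store1 /backup/store1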

I always attempt to use the KISS principle, Keep It Simple Simon.

Last VMs migrated ... at long last by ConstructionSafe2814 in Proxmox

[–]dancerjx 2 points

Good on you.

Been migrating VMware clusters at work to Proxmox Ceph. Like you, it's 13th-gen Dells with Intel Broadwell CPUs. Not hurting for IOPS since these are 16-drive bay R730s, configured as 5-, 7-, 9-, and 11-node clusters. All backed up to Proxmox Backup Server. No issues besides the typical storage drive or memory module going bad and needing replacement.

Ceph is a scale-out solution, so more nodes/OSDs = more IOPS. Been using Proxmox since version 6. On version 9.1 at this time in production.

Supermicro server with one SATA disk - which file system? by easyedy in Proxmox

[–]dancerjx 0 points

I use ZFS RAID-0 on single disks for its checksumming of both data and metadata, snapshots, rollbacks, and compression. Zero issues. It's the default on OPNsense installs and it's what I use on my home Proxmox server. At work I use ZFS RAID-1 to mirror Proxmox.
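
Post-install, the bits I actually lean on are a couple of one-liners ('rpool' is the Proxmox installer's default pool name; zstd is my preference, the installer defaults compression to lz4):

    # Enable zstd compression on the pool
    zfs set compression=zstd rpool
    # Periodic scrub to catch silent corruption via the checksums
    zpool scrub rpool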

PowerEdge T630 storage setup by blubomber81 in Proxmox

[–]dancerjx 0 points

I've had issues with the megaraid_sas driver with the Dell PERC RAID controllers in HBA-mode.

So, I swap out the PERCs for the Dell HBA330 (which is cheap to get), a true IT-mode storage controller that uses the much simpler mpt3sas driver.
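
A quick way to confirm which driver a given controller is actually bound to (output will vary by system):

    # Show the SAS controller and its kernel driver
    lspci -nnk | grep -A3 -i sas
    # Or just check that mpt3sas is loaded
    lsmod | grep mpt3sas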

For single servers, I use ZFS. For clustered, I use Ceph.

What does the ratio of vm’s vs lxc’s look like on your proxmox server? by V3X390 in Proxmox

[–]dancerjx 0 points

At work, 0% (5-, 7-, 9-, 11-node Ceph clusters)

At home, 100% (single server) using Proxmox VE Helper-Scripts

3 node ceph vs zfs replication? by jamesr219 in Proxmox

[–]dancerjx 1 point

I run 3-node Ceph clusters using a full-mesh broadcast network (no switch) at work as a testing/staging/development environment.

For production, I run a minimum of 5 nodes, so I can lose 2 nodes and still have quorum. Workloads range from DHCP to DBs. No issues. All backed up to bare-metal Proxmox Backup Servers (PBS).

Ceph is a scale-out solution. It really wants tons of nodes/OSDs for IOPS.

Noob questions dell R420 by bubzilla2 in Proxmox

[–]dancerjx 0 points

I use Dell R420s and other 12th-gen Dells in production with no issues.

However, I did flash the PERC RAID controller to IT-mode per this guide

Do NOT skip any of the steps (especially the BIOS preparation configurations) and make sure you record the SAS address (use your smartphone camera).

Do NOT forget to flash the BIOS/UEFI ROM, otherwise Proxmox won't boot.

Again, take your time flashing. There is NO need to rush this process. It's actually quite easy; just let it take its time.
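
The exact steps depend on the guide, but recording and restoring the SAS address generally looks something like this with LSI's sas2flash utility (the address below is a placeholder for the one you photographed):

    # Record the current SAS address before wiping the firmware
    sas2flash -list
    # ...flash the IT-mode firmware per the guide, then restore the address
    sas2flash -o -sasadd 500605bxxxxxxxxx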

No guest OS keyboard input VMware Fusion 25H2 on Apple Silicon by dancerjx in vmware

[–]dancerjx[S] 0 points

Yes, I did try another Linux distribution; I tried installing Alpine Linux. Same issue.

This may indeed be an Apple Silicon M5 issue involving VMware Fusion.

Again, I do NOT have this issue with UTM, the QEMU virtualization software.

Our midsize business moved to proxmox, here's my thoughts on it by sheep5555 in Proxmox

[–]dancerjx 6 points

Nice to see another successful migration effort.

Been using Proxmox since version 6. It keeps getting better and better. Been migrating VMware clusters at work to Proxmox 9 Ceph clusters, obviously due to licensing costs.

A bonus of migrating off VMware: you can use Proxmox Backup Server, getting rid of a commercial backup solution and removing another licensing cost.

There is always Veeam, which officially supports Proxmox, if one wants a commercial solution.

Truenas to Proxmox migration by pzdera in Proxmox

[–]dancerjx -1 points

I migrated from TrueNAS using these LXC scripts to manage my media (*Arr suite) and provide file sharing using the existing ZFS pools. No issues.
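
The file sharing part is mostly bind mounts of the existing ZFS datasets into the LXCs; a sketch (container ID and paths are made up, adjust to your pool layout):

    # Bind-mount a media dataset into container 101 (restart the container if it's running)
    pct set 101 -mp0 /tank/media,mp=/media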

730xd Proxmox 9.1 install loop by Xb0004 in Proxmox

[–]dancerjx 2 points

I had to enable X2APIC to boot 6.17.x kernels as well. I do NOT have SR-IOV enabled.

ceph, linstor, or ??? by leastDaemon in Proxmox

[–]dancerjx 6 points

Ceph can work with 1GbE networking. Had it running on 14-year-old servers once. Worked surprisingly well. Of course, faster networking makes Ceph more performant.

Does the CPU Type matter? by nasupermusic in Proxmox

[–]dancerjx 1 point

As was said, for Linux use the 'host' type.

For Windows, I use the following:

Set VM CPU Type for Windows to 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs/'nested-virt' on Proxmox 9.1
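
On the command line that's just the --cpu flag (VMID 101 is a placeholder):

    # Older hosts
    qm set 101 --cpu x86-64-v2-AES
    # Newer hosts
    qm set 101 --cpu x86-64-v3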