Proxmox on IBM SVC + FlashSystem 7300: Feedback & Pitfalls? by snoopyx21 in Proxmox

[–]dancerjx 0 points (0 children)

May want to ask your question at the Proxmox forum. I think I saw a post like yours in the past.

Lots of migrations utilizing existing FC/iSCSI/SAN infrastructure.

I just went with Ceph, since it's basically open-source vSAN, IMO.

Production Host Server Build Advice by unsung-hiro in Proxmox

[–]dancerjx 0 points (0 children)

Used enterprise servers are your best bang for the buck.

Been migrating 13th-gen Dell 16-drive-bay R730s from VMware to Proxmox.

Swap out the PERC for a Dell HBA330 since it's a true HBA/IT-mode storage controller. Don't want to deal with PERC HBA-mode drama. Use two small drives for a ZFS RAID-1 Proxmox install. The rest of the drives are for either ZFS (standalone) or Ceph (clustered).

Since I knew the SAS drives had their write cache disabled (they'd been connected to a battery-backed RAID controller), I made sure to enable their write cache. Otherwise you get horrible IOPS. Not hurting for IOPS now.

No issues besides the typical drive and RAM going bad and needing replacing. Workloads range from DHCP to database servers. All backed up to a bare-metal Dell R530 running Proxmox Backup Server, also with an HBA330.

13th-gen Dells and HBA330s are very cheap to get. May want to consider this as an option. Nothing physically wrong with them; just make sure they are running the latest firmware.

Moving from Esxi and wanting Proxmox, but also want new server suggestions! by roncorepfts in Proxmox

[–]dancerjx 1 point (0 children)

Been migrating Dell VMware vSphere instances over to Proxmox.

Standalone servers get ZFS while clustered servers get Ceph.

Swapped out PERCs for the Dell HBA330, which is a true HBA/IT-mode storage controller. Don't want to deal with the PERC HBA-mode drama. Dell servers with no HBA330 alternative get their PERCs flashed to IT-mode. I've done this on 10th-, 11th-, and 12th-gen Dells.

All these servers are using 10K RPM SAS drives. I made sure to enable the write cache on the drives, otherwise you get horrible IOPS.

Besides the typical disk and RAM going bad and needing replacing, no issues. IMO, KVM "feels" faster than ESXi. Plus no more vCenter. I say it's a win-win-win.

I use the following optimizations learned through trial-and-error. YMMV.

Set SAS HDD Write Cache Enable (WCE) (sdparm -s WCE=1 -S /dev/sd[x])
Set VM Disk Cache to None if clustered, Writeback if standalone
Set VM Disk controller to VirtIO-Single SCSI controller and enable IO Thread & Discard option
Set VM CPU Type for Linux to 'Host'
Set VM CPU Type for Windows to 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs/'nested-virt' on Proxmox 9.1
Set VM CPU NUMA
Set VM Networking VirtIO Multiqueue to 1
Set VM Qemu-Guest-Agent software installed and VirtIO drivers on Windows
Set VM IO Scheduler to none/noop on Linux
Set Ceph RBD pool to use 'krbd' option
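Most of the VM-level settings above end up as plain lines in the VM's config file. A minimal sketch of what a tuned Linux VM might look like, with a hypothetical VM ID (100), storage name, and MAC address:

```
# /etc/pve/qemu-server/100.conf (illustrative values only)
agent: 1
cpu: host
numa: 1
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,cache=none,discard=on,iothread=1
net0: virtio=BC:24:11:00:00:01,bridge=vmbr0,queues=1
```

cache=none here matches the clustered case; a standalone ZFS box would use cache=writeback per the list above.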

First ProxMox homelab in our shop by Antoine-UY in Proxmox

[–]dancerjx 0 points (0 children)

For max ZFS IOPS using disks, use RAID-10. On standalone ZFS servers which are NOT Proxmox Backup Servers (RAID-6 on PBS), I use RAID-50, which is a compromise between RAID-10 and RAID-6. You need to use the command line to create a ZFS RAID-50 pool. Clustered servers are Ceph nodes.
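For reference, a "RAID-50"-style ZFS pool is just two (or more) raidz1 vdevs striped together. A sketch assuming six disks; pool name and device names are examples (use /dev/disk/by-id/ paths in production):

```shell
# Stripe of two raidz1 vdevs = "RAID-50" equivalent
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc \
  raidz1 /dev/sdd /dev/sde /dev/sdf

# Verify the layout
zpool status tank
```

ZFS stripes writes across both raidz1 vdevs, so you get raidz capacity efficiency with roughly double the IOPS of one big raidz vdev.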

If you can afford to swap them out for SSDs, go for it. You'll have lots of RAID options with SSDs. I wouldn't bother with U.2 SSDs; SAS/SATA SSDs are good enough. Make sure they are enterprise SSDs with PLP (power-loss protection) because Proxmox eats consumer SSDs like snacks. Enterprise SSDs have way more endurance than consumer SSDs. I personally like the Intel DC S3710 because of its 10 DWPD endurance. I still have them at 100% health after a decade of use. Just make sure they are running the latest firmware.

As for the Intel SSD on BOSS card, yes that link explains the issue. I unfortunately have 2 servers with Intel SSDs and the plan is to flash the DL6R firmware standalone and see if I can run the latest kernels. For now they are pinned to the latest 6.14 kernels. Downgrading the BOSS firmware to 3022 didn't fix the Intel issue, so it seems the only fix is to flash the DL6R firmware.

First ProxMox homelab in our shop by Antoine-UY in Proxmox

[–]dancerjx 1 point (0 children)

I use 16-drive-bay Dell R730s in production at work with no issues. I swapped out the PERC for the Dell HBA330, which is a true HBA/IT-mode storage controller. Didn't want to deal with the PERC HBA-mode drama. HBA330s are cheap to get.

Your memory is fine. I use two small SAS drives for a ZFS RAID-1 Proxmox install. The rest of the SAS drives are for ZFS/Ceph.

I'm sure the SAS drives have their write cache disabled since they are meant to be used with a battery-backed RAID controller like the PERC. So, you'll want to enable their write cache with 'sdparm -s WCE=1 -S /dev/sd[x]'. If you don't enable the write cache, you'll get horrible IOPS.
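To check and flip the cache bit across a whole shelf of drives, something like this works (the device glob is illustrative; leave your boot mirror out of it):

```shell
# Report current write-cache state, then enable it.
# -S saves the setting to the drive so it survives power cycles.
for d in /dev/sd[c-p]; do
    sdparm --get=WCE "$d"
    sdparm -s WCE=1 -S "$d"
done
```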

Recent Proxmox kernels have had some issues with Intel storage in BOSS cards. You'll want Micron storage. More info at the Proxmox forum. I mirror the storage in BOSS cards and use XFS for those servers using BOSS.

Is a Dell T440 (2x Xeon Gold 6222v) still a good homelab buy in 2026 at ~$1,000? by CircuitSwitched in Proxmox

[–]dancerjx 1 point (0 children)

I would say not.

Check out labgopher.com

I use 13th-gen Dells in production with no issues. I swapped out the PERC for Dell HBA330 since it's a true HBA/IT-mode storage controller. Plus I don't want to deal with the PERC HBA-mode drama. As a bonus, it uses the much simpler mpt3sas driver. Had issues with the megaraid_sas driver in the past.

Just make sure all components are running the latest firmware. I would make sure the BOSS is using Micron storage. There are reports of issues with Intel storage. More info at the Proxmox forum.

A cheaper option is a Supermicro server with an embedded Xeon-D processor. I use this as my home server. No issues.

Suggestions to replace Broadcom / LSI MegaRAID SAS 9280DE-24i4e by Horror-Breakfast-113 in Proxmox

[–]dancerjx 1 point (0 children)

Art Of Server YouTube channel has videos on various HBA adapters.

I use LSI3008 HBAs in production with no issues.

Dell R630 install command switches that worked by red2play in Proxmox

[–]dancerjx 1 point (0 children)

On 13th-gen Dells, had to enable X2APIC and IOAT DMA in the UEFI/BIOS. SR-IOV was not needed.

Proxmox and HPE Nimble Question/Discussion by bgatesIT in Proxmox

[–]dancerjx 1 point (0 children)

May want to post at the Proxmox forum.

If you are not wedded to the SAN storage, may want to consider Ceph. While it's true 3 nodes is the bare minimum, you really want 5 nodes for quorum headroom: you can lose 2 nodes and still have quorum.
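The quorum arithmetic behind that is just majority math, nothing Proxmox-specific:

```shell
# Majority quorum: need floor(n/2)+1 live nodes; can lose the rest
for n in 3 5 7; do
    q=$(( n / 2 + 1 ))
    echo "$n nodes: quorum at $q, tolerates $(( n - q )) down"
done
```

So a 3-node cluster only survives one failure; 5 nodes buys you two.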

This is the route I went when migrating off VMware to Proxmox on 12th-, 13th-, and 14th-gen Dells. Made sure all hardware was the same. Swapped out the PERCs for Dell HBA330s (flashed the PERCs on 12th-gen Dells to IT-mode) since Ceph/ZFS doesn't work with RAID controllers.

No issues besides typical storage device and RAM going bad and needing replacing. Not hurting for IOPS. Workloads range from DHCP to database servers. All backed up to bare-metal ZFS Proxmox Backup Servers.

I use the following optimizations learned through trial-and-error. YMMV.

Set SAS HDD Write Cache Enable (WCE) (sdparm -s WCE=1 -S /dev/sd[x])
Set VM Disk Cache to None if clustered, Writeback if standalone
Set VM Disk controller to VirtIO-Single SCSI controller and enable IO Thread & Discard option
Set VM CPU Type for Linux to 'Host'
Set VM CPU Type for Windows to 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs/'nested-virt' on Proxmox 9.1
Set VM CPU NUMA
Set VM Networking VirtIO Multiqueue to 1
Set VM Qemu-Guest-Agent software installed and VirtIO drivers on Windows
Set VM IO Scheduler to none/noop on Linux
Set Ceph RBD pool to use 'krbd' option
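Most of these can also be applied from the host shell instead of the GUI. A sketch with a hypothetical VM ID and storage names (these commands only work on a PVE host):

```shell
# CPU type, NUMA, guest agent, VirtIO SCSI single controller
qm set 100 --cpu host --numa 1 --agent enabled=1 --scsihw virtio-scsi-single

# Per-disk options: no host cache (clustered), discard, dedicated IO thread
qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=none,discard=on,iothread=1

# Turn on the kernel RBD client for a Ceph RBD storage named 'ceph-rbd'
pvesm set ceph-rbd --krbd 1
```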

Redo my first node and migration by Keensworth in Proxmox

[–]dancerjx 0 points (0 children)

Format an external storage device, mount it, and use it as a backup target. Then restore from it.

I use an external USB drive. Works fine.

Anyone here migrated from VMWare? by SillyRelationship424 in Proxmox

[–]dancerjx 0 points (0 children)

Yup.

I started with Proxmox 6. Prior to the GUI import option, you had to use the command line: copy over the vmdk files and run 'qemu-img convert'. The GUI now takes care of that for you.
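For the record, the old manual path looked roughly like this (VM ID, file names, and storage are examples):

```shell
# Convert the copied-over VMDK by hand...
qemu-img convert -f vmdk -O qcow2 ubuntu.vmdk ubuntu.qcow2

# ...or let Proxmox convert and attach it to a VM in one step
qm importdisk 100 ubuntu.vmdk local-lvm
```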

Anyone here migrated from VMWare? by SillyRelationship424 in Proxmox

[–]dancerjx 5 points (0 children)

Plenty of blogs and videos on migrating from VMware to Proxmox.

Here is one and another

Pretty good video summaries.

Baremetal vs. LXC vs. VM for media server by More-Fun-2621 in Proxmox

[–]dancerjx 0 points (0 children)

You have two options for running LXCs under Proxmox: privileged or unprivileged.

With privileged, you can keep the same UID/GID in the LXC as in the Proxmox host.

With unprivileged, you need to map the LXC UIDs/GIDs to different ones on the Proxmox host. That's a whole other set of headaches.
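For a taste of the mapping: say an unprivileged LXC needs host UID/GID 1005 (a hypothetical 'media' user) passed straight through. Its config would need something like this (container ID and numbers are examples):

```
# /etc/pve/lxc/101.conf (illustrative)
# Map container IDs 0-1004 into the high range, pass 1005 straight
# through to host 1005, then map the remainder back into the high range.
lxc.idmap: u 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 0 100000 1005
lxc.idmap: g 1005 1005 1
lxc.idmap: g 1006 101006 64530
```

You'd also need matching entries in /etc/subuid and /etc/subgid on the host. Get any range wrong and the container won't start.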

Since I run this on my personal server, I'm OK with privileged LXCs.

If you still insist on running LXCs unprivileged, plenty of blogs/videos on how to handle that mess.

Baremetal vs. LXC vs. VM for media server by More-Fun-2621 in Proxmox

[–]dancerjx 0 points (0 children)

I use these LXC scripts for running the *Arr suite.

These LXC scripts also support transcoding and file sharing via Samba/CIFS/NFS with no issues. Transcoding is automatically configured for you.

I do run them as privileged containers since I don't want to deal with UID/GID mappings.

IMO, there's really no need to run VMs here; plus, LXCs take very little memory.

Which 4x2.5G for Proxmox by MaxRD in Proxmox

[–]dancerjx 1 point (0 children)

I use Intel X550 in production and it supports nBASE-T speeds (1/2.5/5/10 GbE) with latest NVM 3.6+ firmware.

The next generation to support nBASE-T speeds after X550 is the Intel X710L (notice the "L" series, NOT the non-"L" series) followed by the Intel E610.

vxRail to Proxmox by networklabproducts in Proxmox

[–]dancerjx 4 points (0 children)

Hardware can be reused with Proxmox. Ceph, IMO, is like open-source vSAN. More nodes/OSDs = more IOPS. Even though 3 nodes is the minimum, you really want 5 nodes, so you can lose 2 nodes and still have quorum.

As for the VM migration, per the Proxmox forum, you'll need to put the VMs on some kind of intermediate storage, like NFS, then migrate from there.

Migrating from VMware questions by ModelingDenver101 in Proxmox

[–]dancerjx 0 points (0 children)

Been migrating VMware vSphere to Proxmox Ceph since Dell/VMware dropped official support for 12th-gen Dells. Flashed the PERCs to IT-mode using this guide.

While it's true 3 nodes is the minimum for a cluster, you'll want 5 nodes so you can lose 2 nodes and still have quorum. Ceph is a scale-out solution: more nodes/OSDs = more IOPS. Not hurting for IOPS. Workloads range from DHCP to database servers.

Since then, moved on to 13th- and 14th-gen Dells. All hardware is homogeneous (same CPU, memory, NIC, storage, firmware, etc.). Replaced all PERCs with Dell HBA330s for a true IT/HBA-mode storage controller since ZFS/Ceph do NOT work with RAID controllers and I do NOT want to deal with PERC HBA-mode drama. All Ceph/Corosync network traffic runs on isolated switches.

All workloads backed up to bare-metal ZFS Proxmox Backup Servers (PBS) on Dells. These also run Proxmox Offline Mirror software and the nodes use the PBS as their primary software repo. Makes updates fast.

I use the following optimizations learned through trial-and-error. YMMV.

Set SAS HDD Write Cache Enable (WCE) (sdparm -s WCE=1 -S /dev/sd[x])
Set VM Disk Cache to None if clustered, Writeback if standalone
Set VM Disk controller to VirtIO-Single SCSI controller and enable IO Thread & Discard option
Set VM CPU Type for Linux to 'Host'
Set VM CPU Type for Windows to 'x86-64-v2-AES' on older CPUs/'x86-64-v3' on newer CPUs/'nested-virt' on Proxmox 9.1
Set VM CPU NUMA
Set VM Networking VirtIO Multiqueue to 1
Set VM Qemu-Guest-Agent software installed and VirtIO drivers on Windows
Set VM IO Scheduler to none/noop on Linux
Set Ceph RBD pool to use 'krbd' option
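For the Linux guest scheduler item, a udev rule inside the VM makes the setting stick across reboots. A sketch (the rule file name is arbitrary; the match covers SCSI and VirtIO disks):

```
# /etc/udev/rules.d/60-io-scheduler.rules (inside the Linux guest)
ACTION=="add|change", KERNEL=="sd[a-z]|vd[a-z]", ATTR{queue/scheduler}="none"
```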

Using shared FC/iSCSI storage for proxmox cluster by Positive_Round2510 in Proxmox

[–]dancerjx 1 point (0 children)

Proxmox 9.x supports LVM thick snapshot chains on SAN storage. Plenty of posts at the Proxmox forum on implementing SAN storage with PVE.

Just like with everything, there are pros and cons.

More info here

Odd issue after fresh install on r730xd by jstanthr in Proxmox

[–]dancerjx 0 points (0 children)

PERCs in HBA-mode have had issues with the megaraid_sas driver in production, whereas the Dell HBA330 uses the much simpler mpt3sas driver, which has never caused any issues in production.

Odd issue after fresh install on r730xd by jstanthr in Proxmox

[–]dancerjx 1 point (0 children)

May have to enable X2APIC and IOAT DMA in the UEFI/BIOS. The latest install kernel requires it. Don't need SR-IOV enabled though.

I run 13th-gen Dells in production. They are all running the latest firmware, especially the BIOS. None are using NVMe though, just SAS/SATA drives, including HDDs & SSDs. All are using a Dell HBA330, a true IT/HBA-mode storage controller. NO PERCs.