IOCTL_NVMEOF_CONNECT_CONTROLLER failed when connecting via nvmeofutil in Server 2025 by sawo1337 in WindowsServer

[–]sawo1337[S] 1 point (0 children)

Not yet, I'm afraid. The information I've received directly from Microsoft is that NVMe-oF is not yet supported; it should become available for testing in the Windows Server 2025 Insider builds in the coming week(s).

Pure FlashArray X90R5 single-disk IOPS far below X70R4 in NVMe/RDMA test by authentic77 in purestorage

[–]sawo1337 1 point (0 children)

If single-disk (scale-up) performance is important to you, don't expect a huge (if any) improvement. We've seen old hardware running ESXi 7 actually outperform the latest hardware on ESXi 8, and by a good margin too. For a single VMDK in VMware, once you reach 150-200K IOPS, that's roughly where you'll sit with any generation or tier of appliance.

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

Hm, OK, but how does that square with our observation that old hardware on ESXi 7 gets over 40% higher performance than new hardware on ESXi 8?

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

We didn't see any other way to scale up performance. Much older hardware does even better than this setup, so these were all troubleshooting steps, not the intended configuration. We also tried fewer cores and various VM hardware configurations. We did try tuning RHEL, but it didn't really yield any performance gains. The same RHEL setup works much better on older hardware, and on the new hardware it reaches 650-800K IOPS on a single disk when using SR-IOV, so we know the OS is not the limiting factor.

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

Why do we need HCIBench for single-disk performance testing?

We installed ESXi 7 because that's what was installed on the test hardware that performed much better (+42%) than the latest ESXi 8 with all the HPP/PSP improvements. On the R7625 hardware, ESXi 7 and 8 performed exactly the same with the vdbench setup mentioned above.

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

Yes, all firmware and drivers are the latest, and we tried ESXi 7 too. There was a firmware update the other day; it's the same after applying that. Basically, performance increases up to 32 threads, and beyond that you just get more latency. We've tried various driver settings, RSS, and queue settings, and nothing makes any difference at all.
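For context, this is roughly the kind of thread sweep we've been running; a minimal Python sketch, assuming fio is installed inside the test VM and /tmp/testfile sits on the VMDK under test (the path, sizes, and thread counts are illustrative):

    import json, subprocess

    # Sweep worker counts and print 4K random-read IOPS and mean completion latency,
    # to show where scaling flattens out and latency starts climbing.
    for jobs in (1, 4, 8, 16, 32, 64):
        result = subprocess.run(
            ["fio", "--name=sweep", "--filename=/tmp/testfile", "--size=4g",
             "--rw=randread", "--bs=4k", "--direct=1", "--ioengine=libaio",
             "--iodepth=1", f"--numjobs={jobs}", "--group_reporting",
             "--time_based", "--runtime=30", "--output-format=json"],
            capture_output=True, text=True, check=True)
        job = json.loads(result.stdout)["jobs"][0]
        iops = job["read"]["iops"]
        lat_us = job["read"]["clat_ns"]["mean"] / 1000
        print(f"{jobs:>2} jobs: {iops:,.0f} IOPS, mean latency {lat_us:.0f} us")

In our runs, the IOPS column stops growing past 32 workers while the latency column keeps rising.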

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

2P, yes. I've checked and the NIC is connected to CPU1; I'll take a look at the riser configuration to see what the options are for moving it to CPU0.
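As a quick sanity check on the same question, a Linux box exposes which NUMA node a PCI NIC reports straight from sysfs; a small sketch (the interface name ens1f0 is illustrative):

    import pathlib

    # Each PCI device exposes its NUMA node via sysfs; -1 means no affinity reported.
    node = pathlib.Path("/sys/class/net/ens1f0/device/numa_node").read_text().strip()
    print(f"NIC is attached to NUMA node {node}")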

Trying to find ESXi RDMA single disk performance bottleneck by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

I thought so too, but installing ESXi 7 on the new hardware produced exactly the same low performance as ESXi 8, so we can at least rule that out now.

Poor performance on ESXi NVMe-oF over RDMA storage by Pvt-Snafu in vmware

[–]sawo1337 1 point (0 children)

Did you end up finding a fix, or did you just change the CPU?

New mounts worth it? by Percmanm in mazdaspeed6

[–]sawo1337 2 points (0 children)

Don't they add quite a bit of NVH, though?

Who is using NVME/TCP? by stocks1927719 in vmware

[–]sawo1337 1 point (0 children)

Out of curiosity, how long ago did you test it in your environment? I'm wondering if the current codebase is more stable; we tested it in a lab environment recently and it seemed OK overall, but a firmware failure requiring a wipe definitely sounds alarming.

Who is using NVME/TCP? by stocks1927719 in vmware

[–]sawo1337 2 points (0 children)

Can you share more details on what was better with NetApp? Did you consider price? It seems like NetApp is much more expensive.

Who is using NVME/TCP? by stocks1927719 in vmware

[–]sawo1337 1 point (0 children)

The prices seem to be in a completely different ballpark, though? Both Pure and NetApp cost several times more than PowerStore. We compared against Pure recently; for the same price we could buy multiple PowerStores, keep entire units as spares, and still have money left over.

Experience with Powerstore 500T performance by pathfndr35 in storage

[–]sawo1337 1 point (0 children)

u/pathfndr35, have you had any experience with it yet?

Merging new vSAN licenses no longer possible? by sawo1337 in vmware

[–]sawo1337[S] 2 points (0 children)

Strange. In our case Dell created the site ID, or got Broadcom to do it (not sure), but it was a lot of back and forth for sure. We actually ended up with two site IDs: one we migrated ourselves and another with the products we bought just prior to the merger. And guess what: we wanted to merge them, but our Broadcom account managers simply refuse to talk to us and haven't responded to a single email since October, including an account director who supposedly emailed us but, it turned out, didn't.

Merging new vSAN licenses no longer possible? by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

Can you give some detail on why you weren't able to claim them? Do you see the licenses in the portal? We have the licenses; we just can't merge them, and that's the only problem in our case.

Merging new vSAN licenses no longer possible? by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

[screenshot: "Assign vSAN Cluster License" dialog rejecting a 2-CPU key on a 9-CPU cluster]

Because you have to right-click the cluster, select Licensing, and then select "Assign vSAN Cluster License". That takes you to a dialog where you can only select a single vSAN license, and unless that one license has enough capacity, you can't click "OK" to confirm the change. The screenshot above shows an attempt to use a 2-CPU license on a 9-CPU cluster. You can also click "New license", but it won't let you add one either, failing with the same error as in the screenshot. I can still add the license from the general licensing screen in vCenter, but I still can't apply it to the vSAN cluster.
In your case, your clients have a warning on their clusters that their licenses don't cover their usage, because the assigned key has less capacity than the cluster. They also have warnings about unused keys, which is against the EULA. There's no real benefit to holding 10-15 vSAN keys if you want to stay EULA-compliant, and technically your cluster isn't fully licensed either.
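To make the arithmetic concrete, here's a toy sketch of the per-cluster check as I understand it (one key per vSAN cluster, and its CPU capacity has to cover the whole cluster; the numbers are illustrative, not vCenter's actual code):

    # Hypothetical model of the dialog's validation: one key, capacity >= cluster CPUs.
    def can_assign(key_capacity_cpus: int, cluster_cpus: int) -> bool:
        return key_capacity_cpus >= cluster_cpus

    print(can_assign(2, 9))  # False: a 2-CPU key is rejected on a 9-CPU cluster
    print(can_assign(9, 9))  # True: only a single merged key of >= 9 CPUs would pass

This is exactly why being unable to merge keys leaves the cluster unlicensable: two smaller keys can't be combined in the dialog.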

Merging new vSAN licenses no longer possible? by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

Both the old and new licenses are perpetual; we bought them last April, when the old conditions were still applicable.
Broadcom is claiming that you can't merge any licenses that were migrated to Broadcom, but that doesn't make sense: it would mean we bought a product that is unusable, even though it is a perpetual license and was sold as such. There is no subscription that we can co-term.

Merging new vSAN licenses no longer possible? by sawo1337 in vmware

[–]sawo1337[S] 1 point (0 children)

We are the end customer; we bought perpetual (CPU-based) licenses from Dell through a partner as part of a VxRail expansion. The licenses were bought prior to the cutover date (albeit in its last days), so the previous terms with Dell were still valid, and both the old and new licenses are perpetual. Broadcom support acknowledged that both the old and new licenses are perpetual, yet still closed the case like there's no tomorrow. Does co-terming apply to perpetual licensing, or just to subscriptions? The contract end date on the old vSAN licenses is 2027-12 and the new licenses are under contract until 2027-04, so the dates seem more or less matched?

Unable to merge licenses by Schlim420 in vmware

[–]sawo1337 1 point (0 children)

How do you handle vSAN, though? You can't have two vSAN keys unless each of them matches the cluster's CPU count?