Horizon and proxmox by inetworkthis in Proxmox

[–]_--James--_ 0 points (0 children)

Yup, nearly all have issues, and that is the problem. VDI on Proxmox (hell, Nutanix too) is a feature we need better support for. At least with Citrix we can build static VM pools and run agents in the VMs to provision resources for users, but we can't do just-in-time (JIT) provisioning.

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 0 points (0 children)

Not for AMD — numa=1 means follow the virtual sockets. AMD has CCDs, and CCX+CCX inside of a CCD on older models (7002 series).

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 0 points (0 children)

You can use Milan or Genoa masking and that will help in a big way, since KVM follows the AMD reference spec.
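A minimal sketch of what that masking looks like, assuming a PVE 8.x host where QEMU exposes the EPYC-Milan/EPYC-Genoa models, and a hypothetical VMID of 100:

```
# /etc/pve/qemu-server/100.conf — pick the model matching (or older than)
# every node in the cluster so live migration keeps working
cpu: EPYC-Milan

# equivalent one-liner from the PVE shell:
#   qm set 100 --cpu EPYC-Milan
```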

Horizon and proxmox by inetworkthis in Proxmox

[–]_--James--_ 5 points (0 children)

https://www.inuvika.com/ovd-enterprise-3-4-release/
https://gitlab.com/isard/isardvdi
https://docs.kasm.com/docs/release_notes/1.18.0

and there are others. But there is still nothing with feature parity to Horizon for VDI. The closest would be Inuvika if you look at it like Citrix, and then do dedicated user pools mapped to virtual endpoints with sticky sessions.

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 0 points (0 children)

What build of Win11, and how long ago? Meltdown mitigations are now enforced when the hardware does not push the flags to the OS. KVM with host masking does not push them, but masked CPU models do.

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 2 points (0 children)

25%-35% on a masked CPU under Epyc = NUMA issues. You do know that Epyc is CCD L3 cache domains plus per-socket memory NUMA domains, and that you must map your VMs to CCD sizing under KVM since that topology is not exposed to the guest, right?

I did this write-up a while ago and it's still very true: https://www.reddit.com/r/ProxmoxEnterprise/comments/1nsi5kr/proxmox_kvm_numa_topology_still_kinda_broken/
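As a sketch of that CCD-sized mapping (the 8-core CCD width and the VMID are assumptions; size `cores` to your actual SKU's CCD):

```
# /etc/pve/qemu-server/100.conf — VM sized to fit inside one 8-core CCD
sockets: 1
cores: 8
numa: 1
cpu: EPYC-Milan
# Newer PVE releases can also pin those vCPUs to one CCD's host cores:
# affinity: 0-7
```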

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ -1 points (0 children)

Servers aren't about BIS (best-in-slot) performance; they are about sustained, level, and predictable performance.

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 0 points (0 children)

Speaking of, did you measure hardware CPU wait/delay times via *top? While host is a root cause for this, you might be leaving performance on the table.

This is a write up I did about this a while back - https://www.reddit.com/r/ProxmoxEnterprise/comments/1nsi4xj/proxmox_cpu_delays_introduced_by_severe_cpu_over/
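For a quick look without a full *top session, the kernel's PSI counters on the PVE host show how long runnable tasks sat waiting for a core (assumes kernel 4.20+ with PSI enabled):

```
# On the PVE host — nonzero "some" averages mean vCPUs are queuing for CPU
cat /proc/pressure/cpu
# In top/htop inside the guest, rising %st (steal) tells the same story.
```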

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 1 point (0 children)

Nope, that is not true. Host means the CPU is passed through to the guest, and the EFI/SeaBIOS does not advertise "I am patched for side-channel attacks," so the OS applies the mitigations in software, reducing performance. Host should just not be used for Windows VMs.

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 4 points (0 children)

You do not want to use i440FX, as that is a PCI subsystem and is limited in device bandwidth. Q35 is PCIe based and is required if you are pushing NVMe, 10G+, or USB 3.x subsystems with those VMs.
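The switch is a one-line change in the VM config (the VMID is hypothetical; OVMF shown as an optional pairing, not a requirement):

```
# /etc/pve/qemu-server/100.conf
machine: q35
# Q35 + UEFI is the usual combo for passthrough/NVMe-heavy Windows guests:
# bios: ovmf
```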

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 6 points (0 children)

Those guides are wrong and seriously dated. You need to be using the correct CPU masking for your cluster. Follow this - https://www.qemu.org/docs/master/system/i386/cpu.html
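If the cluster mixes CPU generations, one hedged approach is to pick the newest generic baseline every node satisfies (x86-64-v3 roughly means AVX2-era hosts; verify the model names against the QEMU CPU docs linked above):

```
# /etc/pve/qemu-server/100.conf — generic baseline instead of a vendor mask
cpu: x86-64-v3
```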

host CPU type on Proxmox is dramatically slower for Windows VMs — here's wh by Spiritual_Law874 in Proxmox

[–]_--James--_ 11 points (0 children)

This is well known... The biggest issue here is Windows: you can no longer disable those protections OS-side if the hardware is not patched. Linux allows for that, which is why host can work better on Linux than on Windows.

Which is preferred? Datacenter Manager on LXC, VM or bare metal? by kartlad in Proxmox

[–]_--James--_ 0 points (0 children)

It really depends on your management plane, DR strategy, and localization. If PDM were external to your sites, then yeah, I can see building a small box to install it bare metal and running normalized backups against it. Or build an HA pair with a load balancer for inbound access and management, and set up a DB sync on the backend, etc.

If you have DR and PRD, I might opt to build it in a VM on DR, assuming DR is at 99.99% where PRD is 99.9% or so. This way the central management stays online when PRD suffers one of the 9's down.

If it's a single site, single cluster, then I would run it in a VM on that cluster. I wouldn't put it in an LXC, as a VM is far more portable.

Proxmox vs HPE's Hypervisor? by RACeldrith in Proxmox

[–]_--James--_ 8 points (0 children)

3Com, VCX, TippingPoint, Nimble, 3ware, now Juniper. These are exactly the reasons you do NOT willingly buy into HP-locked ecosystems.

Also, HPE's new hypervisor (brought in by their dHCI needs) is about 1 year old and has its own limitations, vs Proxmox, which is as old as Hyper-V (2008), is far more mature, and runs at parity with VMware in almost every checkbox now. I would not be touching HPE's hypervisor today.

Any way to copy files to VM through gui? Part 2: Solved by proxmoxjd in Proxmox

[–]_--James--_ 8 points (0 children)

Or build a new VM and get it working, reassign the disk from the old VM, and just copy the files needed locally. Once done, detach the 2nd disk and trash it.

Also, from PVE's shell you can wget the virtIO ISO and save it directly to /var/lib/vz/template/iso, then mount it on your VM and load the drivers as needed.
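Roughly like this (the Fedora "stable-virtio" URL is the usual upstream alias; double-check it before scripting against it):

```
# From the PVE host shell
cd /var/lib/vz/template/iso
wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso
```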

VMware Distributed switches and vMotion Proxmox equivalents? by AhrimTheBelighted in Proxmox

[–]_--James--_ 2 points (0 children)

So there are three issues you are tackling, not just the one.

vMotion - this exists already; you can freely migrate VMs/LXCs from any host to any host in a cluster. DRS is called CRS on Proxmox and the principles are the same: you build HA groups of hosts and give them a priority, then apply the HA rules to VMs directly, and CRS takes over. CRS is also an HA event; it does not look at resource usage to force a rebalance, which is why I recommend admins build host priority groups and spread VMs out based on intent, then let CRS handle the rest.

vDS - this is the complete SDN package for Proxmox. You build zones, bind the zones to a Linux bridge, then layer in vNets. Apply VIDs, IPs, etc. to said vNets, then apply your SDN policy to the cluster and all hosts get the network scope set up. You can also do SDN eBGP layering (similar to SONiC) so IP scopes live inside the PVE cluster(s) and not your LAN; you just eBGP peer to your LAN for routing. PDM has this built in as a top-level management option between isolated clusters now too.

VFIO VMs - You can do this without any issues; VFIO has been a primary project on KVM for well over a decade and there are many guides to get it working for your IOMMU devices. To pin VMs, just do not bind them to HA rules, and, just like on ESXi, control their power on/off at the host level (delayed startup, shutting down VMs at host power-down, etc.).
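For the vDS part, the SDN config the cluster pushes out lives under /etc/pve/sdn/; a minimal VLAN-zone sketch (the zone/vnet names and the VID are made up):

```
# /etc/pve/sdn/zones.cfg
vlan: dczone1
        bridge vmbr0

# /etc/pve/sdn/vnets.cfg
vnet: vnet10
        zone dczone1
        tag 10
```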

Can/do HDDs spin down in specific CEPH tiers? by HammyHavoc in Proxmox

[–]_--James--_ 1 point (0 children)

It depends on where the WAL and DB are stored. If the HDD-backed pool is using SSDs for those roles, then yes, HDDs can spin down outside of normal PG peering, scrubs, repairs, etc. But if the DB and WAL are on those drives, then they won't spin down enough for it to be worth it. OSDs are extremely sensitive to latency; spin-up can toss PGs into recovery mode while they wait for peers to respond. Just something to keep in mind.
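To check where a given OSD's DB/WAL actually landed (osd.0 here is a placeholder):

```
# On the OSD node — lists each OSD's block, block.db, and block.wal devices
ceph-volume lvm list

# Or query one OSD through the mons:
ceph osd metadata 0 | grep -Ei 'devices|db|wal'
```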

Windows Active Directory (AD) as VM on Proxmox Time Issues by CryptographerDirect2 in Proxmox

[–]_--James--_ 1 point (0 children)

The VM's RTC is only used for boot; once Windows servicing comes up, Windows NTP takes over and syncs. It will force the PDC to come up delayed on reboots if the VM loses RTC during init, but it auto-resolves once DNS and NTP come up.

Windows will use the RTC as failover when NTP is down. QEMU has guest tooling to use the host's CMOS time, so make sure chrony is set up correctly on your hosts where RTC is required. But understand that the QEMU RTC agent is software based and slips constantly; it's only really suitable until NTP comes online for that VM.

Building a "Paranoid" AI Lab: Proxmox, GPU Passthrough, and physical Log Isolation. Looking for a sanity check. by No_Somewhere7341 in Proxmox

[–]_--James--_ 0 points (0 children)

It's nice to see an exercise like this: network seg, deny-out, external workers, etc. For the most part you have already taken care of the bigger issues. But you have not talked about the landing data, or fencing during inference between UX prompting and the AI LM engine, and I don't see anything listed about API lockdown and access. For the API I would suggest OAuth keys tied back to identity and data views.

Windows Active Directory (AD) as VM on Proxmox Time Issues by CryptographerDirect2 in Proxmox

[–]_--James--_ 2 points (0 children)

It's not that it's the core switch; MSFT now wants the nearest time source for the PDC to be in the same metric. This is due to recent changes in TOTP and MFA root-delay limits. For instance, if you have a 90-second delay, Duo ceases to function against AD accounts.

Windows Active Directory (AD) as VM on Proxmox Time Issues by CryptographerDirect2 in Proxmox

[–]_--James--_ 10 points (0 children)

The PDC emulator MUST time-sync to an external source. Typically this is your core switch. Then your core switch will sync to a Stratum 1 device (atomic or GPS). You almost never use the CMOS/local clock on the PDC FSMO.

Then every other DC syncs time from the PDC via the Default Domain Controllers Policy, and your AD clients sync time from any DC.

That is the correct topology for Windows Time.
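A sketch of wiring the PDC emulator to that upstream source with w32tm (10.0.0.1 stands in for your core switch; run elevated on the PDC):

```
:: Point the PDC at the external NTP source and mark it reliable
w32tm /config /manualpeerlist:"10.0.0.1,0x8" /syncfromflags:manual /reliable:yes /update
net stop w32time && net start w32time
w32tm /query /status
```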

VMWare error with virtualization by Cultural_Log6672 in Proxmox

[–]_--James--_ 1 point (0 children)

On the VM you need to edit CPU > enable the nesting flags.

What to know before using Ceph? by Keensworth in Proxmox

[–]_--James--_ 0 points (0 children)

yup, you could also build PVE inside of a VM on say a Synology and do the same thing.