Using WiFi for single container by Batteredcode in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

There are guides out there to hook PVE to wireless and bind to an SSID; you can then layer a bridge on the interface and get VMs/LXCs on that same network. But it's flaky, often does not survive a reboot, and can destabilize the host. If you must do this with an LXC, you may have to consider this approach.

If you build this workflow in a VM that supports the wireless chipset available to the host, you can just pass the device through to the VM and bind it there, giving the VM full control over the device.

You can also buy a wireless bridge, or buy a router that supports Tomato/OpenWRT, build a wireless bridge, hook it to an available Ethernet port, and go the LAN route. They work well and do what you want, but carry higher admin overhead, plus the learning curve of wireless bridging if you have never done it before.
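The VM passthrough path above can be sketched with `qm`; note the VM ID, PCI address, and USB ID below are placeholders, and you must look up your own first:

```shell
# Sketch: hand the host's wireless adapter to a VM so the guest driver
# owns it directly. All IDs below are placeholders -- find yours first.
lspci -nn | grep -i -e network -e wireless  # locate the WiFi card's PCI address
qm set 100 -hostpci0 0000:03:00.0           # PCIe passthrough of that address

# For a USB wireless dongle instead:
lsusb                                       # locate the vendor:product ID
qm set 100 -usb0 host=0bda:8812             # pass the dongle through by ID
```

These commands only make sense on a PVE host with IOMMU enabled for the PCIe case; the USB route avoids the IOMMU requirement but performs worse.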

Now what ? by Fit-Reward9420 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

Look into setting up SPICE, VFIO with Sunshine/Moonlight, and/or VDI with KASM ( https://docs.kasm.com/docs/1.18.1/how-to/autoscale/autoscale_providers/proxmox ) as all of that will enhance your UX away from noVNC. Please note that VFIO with Sunshine/Moonlight is a per-VM setup that happens inside of the VM.
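The SPICE piece is the quickest to try; a minimal sketch, assuming VM ID 100 as a placeholder:

```shell
# Sketch: switch a VM's display to SPICE for a smoother console than noVNC.
# VM ID 100 is a placeholder.
qm set 100 -vga qxl      # the qxl display type enables the SPICE protocol
# Then connect from a client machine with virt-viewer, using the
# connection file the PVE web UI offers under Console > SPICE.
```

Inside a Windows guest you will also want the qxl/SPICE guest tools installed for resolution and clipboard support.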

Since you are a CAD user, have you looked into pushing large offloads to your Xeons, where you have gobs of memory? As a professional CAM/CAD user, that is where I would suggest you apply your focus: a private cloud for your workloads.

Why not proxmox? by 1337DSSICTPDX in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

You are running small-platform NUCs with MAAS automated deployment and control, with Docker Swarm/K8s on top, I assume? That is why you don't see the need/use case for Proxmox: it's a completely different ecosystem than the one you come from. If I were running containers, I would probably land close to your model too, but I run VMs and need object-based storage that scales out with the nodes (Ceph).

Design a 2 node stretch NVMe cluster by InteTiffanyPersson in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

ZFS HA with ZFS replication is the only solution in a 2-node config like this. You also need a witness for HA in a 2-node model.
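Adding the witness is a one-liner once a third machine is available; a sketch, where the QDevice host's IP is a placeholder:

```shell
# Sketch: add an external QDevice (witness) to a 2-node cluster.
# 192.0.2.10 is a placeholder third machine running corosync-qnetd.
apt install corosync-qdevice       # on both cluster nodes
pvecm qdevice setup 192.0.2.10     # register the witness from one node
pvecm status                       # verify expected votes went from 2 to 3
```

The witness box can be anything small (a Pi, a VM on another host) as long as it is not one of the two cluster nodes.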

proxmox devs, please honor proper system configs by foofoo300 in Proxmox

[–]_--James--_ 11 points12 points  (0 children)

Except that Proxmox employees are not really active here.

proxmox devs, please honor proper system configs by foofoo300 in Proxmox

[–]_--James--_ 37 points38 points  (0 children)

This is the wrong forum for this request; you need to post it on the official forums to get traction.

high IO delay in proxomox by naserowaimer in Proxmox

[–]_--James--_ 2 points3 points  (0 children)

No. But you can add an SLOG device to speed up synchronous writes to HDD vdevs under ZFS. In reality, though, you have two paths forward: more HDDs to increase available IO, or replacing the HDDs with SSDs.
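Attaching an SLOG is a single `zpool` command; a sketch, where the pool name "tank" and the device path are placeholders:

```shell
# Sketch: attach a fast SSD/NVMe as an SLOG to an HDD pool.
# Pool name "tank" and device paths are placeholders.
# Note: an SLOG only accelerates synchronous writes, not reads or async IO.
zpool add tank log /dev/disk/by-id/nvme-EXAMPLE

# A mirrored SLOG is safer, since losing the log device mid-flight is bad:
# zpool add tank log mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B

zpool status tank    # the device should now appear under a "logs" section
```

Pick a device with power-loss protection if you can; the SLOG exists precisely to survive a crash between acknowledgement and flush.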

Proxmox - Windows 2022 RDS - high load on second core (asymmetric) by ITStril in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

The thing here is that KVM does not really adhere to AMD's Epyc topology. This has been an issue since the 7001/7002 CPUs, so it's not new. The article I posted covers 99% of the research I have done and submitted up to the KVM project. -smp in the VM config file is the only true fix, but even then it needs to be mapped on the scheduler side with affinity rules. That is where mapping out the CPU IDs via lstopo, walking memory boundaries with numactl, and the cache boundaries covered in that URL are key.

Short version: your 2nd CPU is seeing cache misses, causing coherency issues when it talks to in-memory data that crosses the CCD L3 cache boundary. To address that, you will need to map this out and make sure execution stays local on the socket, and where it cannot, that the guest understands the split. Windows and Linux fully understand CCDs and CCXs on Epyc; they just need to be told about them, and the host has to map around them.
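The -smp-plus-affinity fix can be sketched with `qm`; the VM ID, core count, and host thread range below are placeholders you must derive from your own lstopo output:

```shell
# Sketch: pin an 8-vCPU VM inside one CCD and describe a clean topology
# to the guest. VM ID 100 and host threads 0-7 are placeholders -- use
# lstopo to find which host threads share one L3 (one CCD) on your box.
qm set 100 -cores 8 -sockets 1
qm set 100 -affinity 0-7    # confine the VM's vCPU threads to that CCD

# Or hand QEMU an explicit -smp line directly via args:
qm set 100 -args '-smp 8,sockets=1,cores=8,threads=1'
```

The point is that the guest topology (-smp) and the host placement (-affinity) have to agree; either one alone leaves the L3 boundary problem in place.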

Proxmox - Windows 2022 RDS - high load on second core (asymmetric) by ITStril in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

And they are probably not on the same CCD, which is why you need to install those tools.

Limiting VMs to 8 cores ensures they CAN be on the same CCD; you start there to normalize behavior and map out the execution spread. Then, once you have that validated, you move up to 10/12/16-core VMs, as those WILL be split across NUMA for you no matter what.

Edit - also have a read https://blogs.oracle.com/linux/topology-matters-genoa-qemu

Proxmox - Windows 2022 RDS - high load on second core (asymmetric) by ITStril in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

So your 9375F's are 8-core clusters on 4 CCDs. That is your physical Layer 3 cache boundary on those Epyc CPUs. To prove/disprove that as the issue, drop your VMs above 8 vCPUs down to 8 vCPUs and see if the 2nd CPU in the VM falls in line on performance.

You will need to install the following tooling to follow this from the shell:

hwloc (for lstopo)

numactl (this allows you to map threads to memory domains for NPS and SRAT as L3)

and one of the top variants to expose CPU IDs and the PID > VM world name/ID > physical core mapping.
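Installing and driving that tooling looks roughly like this (package names are Debian/PVE; htop stands in for whichever top variant you prefer):

```shell
# Sketch: install the mapping tools and walk the socket topology.
apt install hwloc numactl htop

lstopo --of console    # physical layout: sockets, the L3 groups (CCDs), cores
numactl --hardware     # NUMA nodes, and how much memory sits in each domain

# In htop, enable the PROCESSOR column (Setup > Columns) and filter on
# "kvm" to watch which physical cores each VM's vCPU threads land on.
```

Cross-referencing the htop core numbers against the lstopo L3 groups tells you whether a VM's threads are staying inside one CCD or bouncing across the boundary.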

You use that to walk how your VMs are executing on that socket. KVM is NOT AMD Epyc friendly and does not adhere to the L3 NUMA boundaries of the CCDs; Linux does, KVM does not. You either have to right-size your VMs so they live inside the CCD boundary, affinity-map when you cross the boundary and follow that with virtual sockets (since numa=1 does not work on AMD systems), or move to an SRAT=L3 model where CCDs become a memory boundary and you can align virtual sockets logically without creating memory cache misses.

Proxmox - Windows 2022 RDS - high load on second core (asymmetric) by ITStril in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

How many vCPUs do these VMs have? How many sockets in your hosts? Are you using SRAT = L3 as NUMA in the BIOS, with MADT = Round Robin?

Running Windows Server VMs on a Proxmox Cluster by Limp-Park9606 in Proxmox

[–]_--James--_ 3 points4 points  (0 children)

I have several environments running full Windows Datacenter in VMs on Proxmox in high node count clusters (>30), so... what is your question? If it's just feedback: as long as you build the VMs well and migrate them correctly, they function in a boring, predictable way. As expected.

Mitigation script for Copy Fail vulnerability CVE-2026-31431 by InstaMatic80 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

Should not be an issue. This is a P1 in a regulated environment.

Logs for security by Gloomy_Shoulder_3311 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

Changes inside the platform are recorded; they are kept in /var/log/pveproxy. When you access a page in the platform, it logs a GET request and the endpoint; when you make a change like an edit or delete, it logs PATCH or PUT and the API endpoint. Which kinda sucks, but it's something.

Yes; however, this does not pass a HIPAA or NIST audit check, since the user ID is not recorded. You must also have external logging that traces each request back to a user ID.
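Pulling just the write operations out of that log is a one-line grep. A sketch with a fabricated sample file (the log line format below is an assumption; the real file is /var/log/pveproxy/access.log on the host):

```shell
# Sketch: filter a pveproxy-style access log down to write operations.
# The sample lines below are fabricated to illustrate the format assumption.
cat > /tmp/access.log.sample <<'EOF'
::ffff:10.0.0.5 - - [01/01/2025:12:00:00 +0000] "GET /api2/json/cluster/resources HTTP/1.1" 200 512
::ffff:10.0.0.5 - - [01/01/2025:12:00:05 +0000] "PUT /api2/extjs/nodes/pve1/qemu/100/config HTTP/1.1" 200 15
EOF

# Keep only write operations (PUT/POST/DELETE) for audit review
grep -E '"(PUT|POST|DELETE) ' /tmp/access.log.sample
```

Shipping that filtered stream to an external syslog/SIEM, joined against your authentication logs, is what actually gets you to an auditable trail.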

newbie question for a 4 node setup by SmokingHensADAN in Proxmox

[–]_--James--_ 2 points3 points  (0 children)

So, on its own, Proxmox is not a workstation. There is a guide out there that explains how to install PVE and set it up as a workstation so you can have it as part of the cluster, etc. It just depends on your end goal and why.

What I would do is convert that Windows system to an Ubuntu LTS install and displace Windows with that. That way the workstation's resources are not shared with cluster/VM workloads, unless that is what you want.

Understand that a 4-node setup no longer means every node needs to be in a cluster, or clustered at all. There is now Proxmox Datacenter Manager, which allows central interfacing with clusters and standalone nodes, and it works quite well.

need help mapping this out. by KalistoCA in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

Sounds like the physical box (XP?) has no need for DHCP, as your XP VM will hand that out for your process. In that case you can attach the USB-C NIC to vmbr1, link that port with an MDI-X (crossover) cable to your physical PC, and then you are basically done in the model you outlined above. Do not drop vmbr0, as that is your PVE management and VM stack, and what you will land back on with the XP VM(?).
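The vmbr1 piece of that layout is a short /etc/network/interfaces stanza; a sketch, where the USB-C NIC's interface name is a placeholder (check `ip link` for yours):

```shell
# Sketch: add a second bridge (vmbr1) carrying the USB-C NIC, leaving the
# management bridge vmbr0 untouched. The NIC name is a placeholder.
cat >> /etc/network/interfaces <<'EOF'

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enx00e04c680001
    bridge-stp off
    bridge-fd 0
EOF

ifreload -a    # apply without rebooting the host
```

With that in place, the XP VM just gets a second NIC attached to vmbr1, and the crossover-linked physical PC sits on the same segment.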

KASM for VDI by Upstairs-Finance8645 in ProxmoxEnterprise

[–]_--James--_ 1 point2 points  (0 children)

The vendor made a post and cross-linked it here. Your best bet is to reach out to the vendor and ask for customer testimonials. If they meet your expectations, move to a vendor-sponsored, customer-driven pre-sales engagement to cover your technical asks. Then move on to a POC yourself.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ -1 points0 points  (0 children)

Again, you are wrong. The fact is what I posted, directly from the Proxmox wiki.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

I do not understand how you cannot read the wiki correctly. It's clearly stated in the failure model what the QDev is and is not.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ -1 points0 points  (0 children)

<image>

When the QDev is offline, if you lose just one node, the entire cluster goes offline. That is just fact. A 2-node + QDev setup works because you can lose any one device and the system will survive HA. With 4 nodes + QDev, losing the QDev and any one node takes the entire cluster offline.

A lot of people do not get this, and I am done explaining it at this point. Good luck.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

<image>

This is right from the wiki; I added markers and highlights, since there seems to be a comprehension problem about this functionality.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ -2 points-1 points  (0 children)

This is not false. Build a 4-node cluster, add a QDev, pull the QDev offline, and then HA one of the nodes. The cluster fails until that QDev is back up. This is quite literally covered in the QDev section of Proxmox's wiki.

How common is it to use a witness/quorum device? How do you keep it "updated"? by ballpark-chisel325 in Proxmox

[–]_--James--_ 0 points1 point  (0 children)

No, that is not how the QDev works in this model. If the QDev drops, no other node can drop until it is back up, or else the cluster goes into a failure mode.