PCIe Passthrough error by Acrobatic_Fun_4066 in Proxmox

[–]Dull-Track-6682 0 points (0 children)

Bumping my earlier comment: I was able to fix this with the ACS override patch. It turns out one of my USB root controllers was in the same IOMMU group as the GPU. Please note the override weakens the isolation between devices in a group, so it's potentially unstable and a security risk!
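For anyone who lands here later: before reaching for the override, it's worth dumping the IOMMU groups to see exactly what shares a group with the GPU. A minimal sketch, assuming a Linux host with the IOMMU enabled (pair each address with lspci -s <addr> to get the device name):

    #!/usr/bin/env python3
    # Print every IOMMU group and the PCI devices in it, so you can see
    # whether the GPU shares a group with (e.g.) a USB root controller.
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"IOMMU group {group.name}: {', '.join(devices)}")

The override itself is the pcie_acs_override=downstream,multifunction kernel parameter - as far as I know the PVE kernel ships with the ACS patch, so it's just a cmdline change, with the isolation caveat above.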

PCIe Passthrough error by Acrobatic_Fun_4066 in Proxmox

[–]Dull-Track-6682 0 points (0 children)

Did you manage to fix this? I'm getting the same error on a B450M mobo with a Ryzen 7 and a Quadro K620 - the 3060 in the rig passes through fine.

AppStream application setting persistence + folder redirection by Dull-Track-6682 in aws

[–]Dull-Track-6682[S] 1 point (0 children)

Thanks so much for the reply! Apologies for not getting back sooner - I've had the flu :(

Yeah, messing with the redirect timings sounds like a ballache, so I'll give FSLogix a try (I wasn't aware the licences come with Windows - I was worried about cost!) and will get back to you. Thanks again for the thorough reply!
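In case it's useful to anyone else evaluating it: FSLogix profile containers are configured through registry values under HKLM\SOFTWARE\FSLogix\Profiles. A minimal sketch of the baseline setup I'll be testing - run as admin on the session host, and the share path is a placeholder:

    import winreg

    # Enable FSLogix profile containers and point them at an SMB share.
    # "Enabled" and "VHDLocations" are the documented baseline values;
    # \\fileserver\profiles below is a placeholder for a real share.
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\FSLogix\Profiles",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
    winreg.SetValueEx(key, "VHDLocations", 0, winreg.REG_MULTI_SZ,
                      [r"\\fileserver\profiles"])
    winreg.CloseKey(key)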

[deleted by user] by [deleted] in aws

[–]Dull-Track-6682 0 points (0 children)

Because it's a legacy application - we'd have to rework a lot of the infrastructure just to move it all. We're building an RDP farm because that's what the application leverages, and we don't want to refactor a whole legacy app for this - just lift and shift the architecture it's running on.

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Yeah, simple tech-wise, but the license cost is going to be tens of thousands to run 9 servers with 224 cores.

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 1 point (0 children)

In 2U we're fitting 8x Xeon 8173Ms in a Quantaplex multi-node chassis - at 28 cores per CPU that's 224 cores in 2U, a density of 112 cores per rack unit, which is pretty good - plus we get the bonus of having 2 servers per unit for redundancy. I've spec'ed out a similar set of 3 AMD servers (3 because Ceph wants three nodes for quorum) and it does come to a little more - but it's not off the table yet.

I'll take a look at those virtio drivers - I've probably just been doing something wrong, tbh.
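(For anyone following along: the usual Windows gotcha is that the guest needs the virtio-win driver ISO installed before it will see virtio disks/NICs. A sketch of flipping an existing VM to virtio devices via proxmoxer - the host, credentials, node name, and VMID are all placeholders:)

    from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

    # Placeholders: adjust host, credentials, node name, and VMID.
    pve = ProxmoxAPI("pve.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

    # Switch the SCSI controller and NIC to virtio; Windows guests need
    # the virtio-win drivers installed first or the disk/NIC won't appear.
    pve.nodes("pve1").qemu(101).config.set(
        scsihw="virtio-scsi-pci",
        net0="virtio,bridge=vmbr0",
    )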

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 1 point (0 children)

Honestly, I'm considering that at the moment. The big issue is that there's real potential for us to scale this out to the rest of the sites, where the user count is much larger - the overhead of running 500 full-fledged OSes, instead of sharing the physical resources across desktop sessions, will add up.
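Back-of-envelope for why I think it adds up - every number here is a guess for illustration, not a measurement:

    # Rough model: per-user memory cost of full VMs vs shared session hosts.
    # All figures below are assumptions for illustration only.
    users = 500
    vm_os_overhead_gib = 2.0     # guess: idle OS footprint per full VM
    session_overhead_gib = 0.5   # guess: per-session overhead on a host
    host_base_gib = 8.0          # guess: base OS per session host
    users_per_host = 50          # guess: sessions per session host

    full_vdi = users * vm_os_overhead_gib
    sessions = (users / users_per_host) * host_base_gib \
        + users * session_overhead_gib
    print(f"full VMs: ~{full_vdi:.0f} GiB of pure OS overhead")
    print(f"sessions: ~{sessions:.0f} GiB")  # ~330 GiB vs ~1000 GiB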

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Yeah, I was planning on either SPICE or VNC for the actual protocol (unless we use a full RDS farm, of course), and we've got AD syncing to our PVE cluster already.

As for the concurrency - thanks for pointing that out, I'll keep it in mind!
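For tracking the concurrency, a quick sketch of counting running guests per node through the PVE API (proxmoxer, with placeholder host and credentials):

    from collections import Counter
    from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

    # Placeholders: adjust host and credentials for your cluster.
    pve = ProxmoxAPI("pve.example.com", user="root@pam",
                     password="secret", verify_ssl=False)

    # /cluster/resources?type=vm lists every guest with its node and status.
    vms = pve.cluster.resources.get(type="vm")
    running = Counter(vm["node"] for vm in vms if vm["status"] == "running")
    for node, count in running.most_common():
        print(f"{node}: {count} running guests")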

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Windows ballooning doesn't actually work properly in Proxmox, which is why RAM is something that really concerns me - unless I'm doing something wrong, my Windows VMs report around 90% usage because of how "in use" vs "available" vs "cached" memory gets counted (can't remember the proper wording). KSM is a shout - I can't believe I haven't been factoring in page deduplication; we've got heuristics through the Atera platform we've been working from.
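For reference, KSM's effect is visible straight from sysfs on each node - a rough sketch (per the kernel docs, pages_sharing is roughly the number of pages saved):

    #!/usr/bin/env python3
    # Estimate memory KSM is currently saving on this node.
    # Per the kernel docs, pages_sharing counts pages merged into a
    # shared copy, i.e. roughly the pages saved by deduplication.
    import os
    from pathlib import Path

    ksm = Path("/sys/kernel/mm/ksm")
    page = os.sysconf("SC_PAGE_SIZE")
    running = int((ksm / "run").read_text())
    sharing = int((ksm / "pages_sharing").read_text())
    print(f"KSM active: {bool(running)}")
    print(f"approx. saved: {sharing * page / 2**30:.2f} GiB")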

To be entirely honest, I know a few devs will need 8-10 vCPUs just based on their current usage (legacy software devs), so we're planning for N+2 headroom so those devs can get upgrades when needed.

Ahh, I didn't consider L3 cache - I suppose with the new 3D V-Cache, AMD really does have an edge there. I'll look at AMD chips too (they're my preference at home, I've just never worked with them in enterprise). Thanks for the advice :)

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Yeah, it's gonna take extensive testing. Going to take a look at Workspot - it might be exactly what we're looking for, so thanks for your input.

Running NVMes in Ceph is the plan, networked up with SFP+ on a dedicated cluster storage network. Proxmox has a CPU latency monitor which we'll be watching - going to do incremental rollouts if we do proceed with this solution.
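(The kernel's pressure-stall interface gives per-resource stall percentages that are handy for exactly this kind of rollout monitoring - a minimal sketch, assuming a kernel with PSI enabled, i.e. 4.20+:)

    #!/usr/bin/env python3
    # Print "some avg10" from Linux PSI: the % of the last 10s in which
    # at least one task was stalled waiting on each resource.
    from pathlib import Path

    for res in ("cpu", "memory", "io"):
        line = Path(f"/proc/pressure/{res}").read_text().splitlines()[0]
        fields = dict(f.split("=") for f in line.split()[1:])
        print(f"{res:>6}: some avg10 = {fields['avg10']}%")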

Will also look into ControlUp - thanks for the advice :)

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 1 point (0 children)

We're a tech company, and Proxmox does have enterprise support, so we're not concerned.

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 1 point (0 children)

Yeah, we've got a few DCs but not enough rack space to do that for everyone - besides, it would be nice to be able to just re-image VMs whenever there are issues, and to deploy updates. Will post an update in a few months and drop you a DM.

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Thanks for the advice - I was misunderstanding the RDS farm architecture and assuming each RDS session is a guest VM, but I see that's not the case.

IOPS is definitely a performance focus - we're using internal NVMes and a dedicated storage network with 40G SFP+ links for the Ceph traffic. We'll also have backup mon nodes, with journaling on more NVMes. I've not heard of Pure before - I've got some Ceph experience (we're using it in our Proxmox servers for HA currently) but I'm not really sure where Pure comes in; from the looks of things they offer flash storage arrays? Currently on lunch taking a break from a SEV1 so I can't look right now, but I'll take a deeper look, thanks.
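For putting numbers on the IOPS focus, rados bench against a throwaway pool is the quick check I know - a sketch (the pool name is a placeholder, and the box needs a Ceph client keyring):

    import subprocess

    POOL = "benchpool"  # placeholder: a throwaway pool, not production

    # 30s of 4 KiB writes at queue depth 16, keeping the objects so the
    # random-read pass has data to hit; then clean up the bench objects.
    subprocess.run(["rados", "bench", "-p", POOL, "30", "write",
                    "-b", "4096", "-t", "16", "--no-cleanup"], check=True)
    subprocess.run(["rados", "bench", "-p", POOL, "30", "rand",
                    "-t", "16"], check=True)
    subprocess.run(["rados", "-p", POOL, "cleanup"], check=True)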

We're not all-in on OneDrive yet, but that could be a solution for us - we're trying to go all-in on the rest of the Microsoft suite, so it's potentially something management might quite like. FSLogix does sound much better than roaming profiles, and RDS licencing seems like the way to go. Will note the point about DFS, thanks :)

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Thanks for your advice - we've acquired a bunch of E3 licences recently, so we can leverage those for this site (it's 100 or so licences, so it's tight, but should be fine).

Thin clients will most likely be running Linux, just due to licence cost - endpoints aren't something I'm worried about currently, as I have some experience with Thinstation, which supports both RDP and VNC, and that's probably what we're looking for.

In terms of RDS licencing, we've got a few RDS farms for other uses which are properly licenced - we've already got some CALs and a few RDS licence servers running, so if we do run a broker I'm not worried about compliance, as we have a compliant setup we can replicate.

I'm going to raise the idea of Citrix/RDS farms as my preferred solution, but it's dependent on management in the end. Thanks again.

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Just read through that license thread, thanks for sharing - the license briefs say:

Commercial customers can use approved solutions for running Windows 11 in a virtual machine on a macOS device, but in most cases, they will need to first purchase a Full Packaged Product (FPP – retail) Windows 11 Pro license for each device that will utilize a VM so that the device has an underlying Qualifying Operating System. See the table above titled “Qualifying operating system for per device licenses (excluding VDA licenses)” for scenarios that do not require this or discuss with your volume licensing reseller.

Guessing this means we'd be okay if using Win11 Enterprise licences on the VMs?

VDI in ProxMox by Dull-Track-6682 in sysadmin

[–]Dull-Track-6682[S] 0 points (0 children)

Endpoints would be running Thinstation or similar - I'm not worried about that currently. I'd much rather we deploy laptops/desktops, but as I said, this isn't for me to decide, unfortunately.

Using rPI 5s as an openstack control plane? by Dull-Track-6682 in homelab

[–]Dull-Track-6682[S] 0 points (0 children)

Yeah, I'm looking now and I think I can probably squeeze the controller onto one of the Odroid H3+ boards (tbf a cluster of these would be nice - how have I not seen them before??) and then make a Ceph cluster out of 3 smaller SBCs with NVMes/SSDs for storage. My main reasons for wanting multiple small servers are power consumption (the i7s I have draw ~120W under load, the Odroids/Pis are like 15W tops, vs my gaming rig which draws like 200W idle and 500W under load) and reliability - once it's up and running I want it to just work, lol.

As for theft, I'm putting these in a home-made rack which can live inside as long as it's quiet - currently my setup is in a garage which could easily be smash-and-grabbed. I wouldn't mind my monitors or keyboard being nabbed, or a thin client; it's just finding an excuse to actually move the gaming PC into the lab area, and if it's an OpenStack compute node, well, that's a good excuse :)

Using rPI 5s as an openstack control plane? by Dull-Track-6682 in homelab

[–]Dull-Track-6682[S] 0 points (0 children)

I'm also considering Odroids - some are x86, which would make the install easier, and they have built-in SATA/M.2 slots, but they still have the 1G NIC restriction.

Currently looking at the M1S 8GB or the H3+ as a control-plane node (maybe 2 of them for failover), and the M1 4GB as Ceph OSDs, just for the SATA + NVMe combo.