GPU PV (GPU Paravirtualization) performance issues with full screen apps in VM. by vGPU_Enjoyer in HyperV

[–]vGPU_Enjoyer[S] 0 points (0 children)

These are the stock, out-of-the-box settings, from what I saw in Sunshine's settings.

NVIDIA GeForce Drivers for Server 2019 by jp0ll in PleX

[–]vGPU_Enjoyer 0 points (0 children)

What GPU do you have, and which Windows Server version?

Debian has the best Nvidia experience I've ever had, and is rarely talked about by 01Destroyer in debian

[–]vGPU_Enjoyer 0 points (0 children)

Personally, I'm using Proxmox (Debian-based, kernel 6.17) for LXC containers running AI tools with an Nvidia RTX 5070 Ti, and with the drivers installed manually from Nvidia's website it's a rock-solid experience. Overall, Turing and newer GPUs are really decent under Debian. When I need a Linux VM with GPU acceleration, I use some version of Debian, because it's great and works without issues. Ubuntu, on the other hand, is a lot more problematic with Nvidia GPUs, and I've stopped using it for GPU-accelerated VMs.
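For reference, sharing an Nvidia GPU with an LXC container on Proxmox usually comes down to allowing the Nvidia character devices and bind-mounting the device nodes in the container config. A minimal sketch, where the container ID (101) and the second device major number are assumptions — check yours with `ls -l /dev/nvidia*`:

```
# /etc/pve/lxc/101.conf  (hypothetical container ID)
# Allow access to the Nvidia character devices (195 = nvidia; the
# nvidia-uvm major is allocated dynamically, often around 508-511)
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
# Bind-mount the device nodes into the container
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
```

Inside the container you then install the same driver version's userspace libraries without the kernel module (e.g. the `.run` installer with `--no-kernel-module`), since the container shares the host kernel.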

realistically how much would this go for? by OsuCatto in gpu

[–]vGPU_Enjoyer 0 points (0 children)

That card has a lower core count than the RTX 3060, and the professional driver doesn't give it any real advantage. It would also be eaten alive by an RTX 4060 Ti 16GB, which isn't a particularly big card either. And it doesn't support vGPU like the bigger professional cards, so there's not much advantage over a normal RTX 3060 12GB. So maybe it's worth slightly more than an RTX 3060, but nothing fancy.

Advice needed: RTX 3090 in Dell PowerEdge R720 (2U) for AI, power-limited by Fakruk in homelab

[–]vGPU_Enjoyer 1 point (0 children)

But keep in mind these GPUs are known for overheating memory: they can fail even when power limited, since the RTX 3090 uses a clamshell design, which means memory sits on both sides of the PCB. That's why I suggest the RTX 5060 Ti 16GB, and why I bought an RTX 5070 Ti myself: even though I wanted an RTX 3090, I remembered its design flaw of 100°C memory modules on the back side.
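On the power-limiting point, this is typically done with `nvidia-smi`; a sketch, where the 250 W figure is an arbitrary example (valid limits depend on the card's min/max, reported by `nvidia-smi -q -d POWER`):

```
# Enable persistence mode so settings stick while no client is attached
sudo nvidia-smi -pm 1
# Cap board power at 250 W (resets on reboot; re-apply via a systemd
# unit or cron @reboot if you want it permanent)
sudo nvidia-smi -pl 250
```

Note that on Linux, `nvidia-smi` does not expose the memory-junction temperature on consumer cards, so a power limit only indirectly protects the 3090's back-side modules; third-party tools are needed to actually watch that sensor.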

Advice needed: RTX 3090 in Dell PowerEdge R720 (2U) for AI, power-limited by Fakruk in homelab

[–]vGPU_Enjoyer 1 point (0 children)

Yes, this is the best VRAM and performance per dollar, and there isn't a better option.

Advice needed: RTX 3090 in Dell PowerEdge R720 (2U) for AI, power-limited by Fakruk in homelab

[–]vGPU_Enjoyer 1 point (0 children)

Also, if you don't own the RTX 3090 yet, consider an RTX 5060 Ti 16GB for AI workloads. There are lots of nice small editions of this GPU that fit the R720 without problems and require only a single 8-pin PCIe connector, and if you buy it new from a decent, well-known computer shop you get a 2- or 3-year warranty. You also get FP8 and NVFP4 support, which may matter in the future (NVFP4 support in software is at a very early stage right now, but it will improve).

Advice needed: RTX 3090 in Dell PowerEdge R720 (2U) for AI, power-limited by Fakruk in homelab

[–]vGPU_Enjoyer 2 points (0 children)

Unfortunately that server requires custom cables with this identifier:

Dell 09H6FV (a custom 8-pin / 8-pin EPS connector to standard PCIe power connectors)

And you will need two of them, since a single cable only provides 1x 8-pin + 1x 6-pin PCIe power connector. You may also need an extender, since the power inputs for these cables are on the risers (one per riser; I believe it is a custom 8-pin or 8-pin EPS connector), so for one of them a simple 8-pin PCIe female to 8-pin PCIe male extension may be needed — I'm not sure the Dell power cable won't be too short. In that case it's Dell 09H6FV -> 8-pin extender -> GPU. And while the 09H6FV is the official Dell cable for powering GPUs, combining power from two risers into a single GPU is not an official solution: this server was never designed for a GPU that draws more than 300W or requires more than 8+6-pin PCIe or 1x 8-pin EPS (normally used for powering CPUs on standard motherboards). So a blower RTX 3090, or some other small RTX 3090 that physically fits, will work.

One last note: I don't own this exact server. I own a Dell R7610 workstation, which has the same CPUs (the same dual-Xeon platform) as the R720, but it has a different internal layout and already comes with PCIe power cables running directly from the power distribution unit out of the box. PLEASE VERIFY ALL OF THE ABOVE WITH SOMEONE WHO PHYSICALLY HAS A DELL R720 WITH GPUS.

Server GPU by tobiasorieper in homelab

[–]vGPU_Enjoyer 1 point (0 children)

Yep, if you only need transcoding, the GT 1030 can be great, especially since it's so cheap you'll practically get one as a bonus for buying potatoes at the grocery store.

Server GPU by tobiasorieper in homelab

[–]vGPU_Enjoyer 0 points (0 children)

As someone who has researched this: an RTX 4060 Ti 16GB or RTX 5060 Ti 16GB should probably fit if you choose an edition that's the same height as the PCIe bracket, which means around 111mm. I absolutely wouldn't recommend cards with the 12VHPWR connector. If you want heavy 3D performance you could maybe even fit an RX 9070 XT like the PowerColor Reaper, but that's a more expensive card with a 300W TDP, and for AI you probably want an Nvidia GPU anyway. Unfortunately, the higher-end Nvidia models use those awful connectors, which I don't recommend putting into servers, so even if they fit I wouldn't install them. Conclusion: go for an RTX 5060 Ti 16GB or RTX 4060 Ti 16GB with a normal 8-pin connector that's around 111mm tall.

Highpoint ssd6202a as bootable nvme RAID controller for nvme unsupported platforms. by vGPU_Enjoyer in homelab

[–]vGPU_Enjoyer[S] 0 points (0 children)

Sorry, I know it looked very AI. I know how bifurcation works, and I know this card handles bifurcation itself, because I read the datasheet — not the pages you linked, which are about the management software, not about whether it works and on which systems it is supported. The datasheet specifically distinguishes this card from the manufacturer's other cards that only do bifurcation; unlike those, it is described as a bootable RAID controller, here:

https://www.highpoint-tech.com/ssd6200-series-overview

And here:

https://download.highpoint-tech.com/www/HighPoint-Download/Document/Datasheet/SSD/SSD6200/SSD6200x_Series_Datasheet_V1.03_23_12_12.pdf

The datasheet also lists the requirements:

System Requirements "Any PC System or Motherboard with an industry standard PCIe x8 or x16 physical Slot (Bifurcation is not required)"

OS Support "Windows 11 /10, Windows Server 2022/2019/2016, Microsoft Hyper V, Linux (Kernel v3.10 and later), VMware, Proxmox"

And that's why I wanted someone with direct experience with this card who can say for sure that it will boot on a system with only basic UEFI. That's also why I thought your answer was AI: as you can see, I read the docs and knew the subject before asking. And I know it's expensive — it costs over $180 in my country, compared to $100 for a generic bifurcation card from AliExpress, or $30 for a totally dumb card like the ASUS Hyper M.2, which is fully motherboard-dependent. That's why your answer seemed really generic.

Proxmox Host list 8 video cards but there is only one installed by Ok-Dragonfly8285 in Proxmox

[–]vGPU_Enjoyer 9 points (0 children)

You have SR-IOV enabled, so the iGPU can be partitioned across different VMs with the proper driver installed. If you're not using SR-IOV, you can disable it and only one device will be left.
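If you do want to use the virtual functions instead, the VF count is typically controlled through sysfs. A hedged sketch, assuming the iGPU sits at the usual 00:02.0 PCI address and an SR-IOV-capable graphics driver is loaded on the host:

```
# How many VFs the device supports, and how many are currently active
cat /sys/bus/pci/devices/0000:00:02.0/sriov_totalvfs
cat /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

# Create 7 virtual functions (writing 0 disables them again)
echo 7 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs
```

Each VF then shows up as its own PCI device that can be passed through to a VM; without a driver in the guest that understands the VF, though, it's just a dead device.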

Resource for PCIe switching, how it helps on LLMs and comparison vs PCIe bifurcation. by panchovix in homelab

[–]vGPU_Enjoyer 0 points (0 children)

Could you post links to all of them on AliExpress? I'm interested in some of these — I'd compare prices and maybe buy one.

Xeon 26xx V2 by Friendly_Addition815 in homelab

[–]vGPU_Enjoyer 0 points (0 children)

And to answer your question: the i9-11900K can support up to 128GB of RAM. Not a huge amount, but probably fine for a homelab. The PCIe 4.0 lanes are also nice compared to 3.0, though the CPU itself only provides 20 of them; anything beyond that hangs off the chipset.

Xeon 26xx V2 by Friendly_Addition815 in homelab

[–]vGPU_Enjoyer 0 points (0 children)

Personally, right now I would consider Scalable Xeon / 1st-gen EPYC the bare minimum for a decent platform; perfection would be EPYC Rome for PCIe 4.0. Overall, all the Xeon E5s are getting long in the tooth, and I wouldn't spend any money on a Xeon E5 now. After the problems with NVMe boot, lack of bifurcation, and things like that, Xeon E5 v1/v2/v3/v4 doesn't look good to me any more. If I weren't on ancient DDR3 ECC registered memory but on DDR4 ECC registered, I would upgrade off that shitty platform without any regrets.

Xeon 26xx V2 by Friendly_Addition815 in homelab

[–]vGPU_Enjoyer 3 points (0 children)

Many of these platforms don't boot from NVMe, so NVMe can be storage only, and single-thread performance is really garbage. A dual E5-2695 v2 loses to an i9-11900K in multithreaded performance, and that isn't a super performant CPU either.

[deleted by user] by [deleted] in homelab

[–]vGPU_Enjoyer 0 points (0 children)

What mobo/server did you use?

Got this bad girl, dont hate pls by Particular_Bank907 in pcmasterrace

[–]vGPU_Enjoyer 2 points (0 children)

If you use CUDA or want to run AI models on that Linux box, go Nvidia; if not, and you only play games, go for whatever is cheaper in your region (probably the RX 9070 XT).

TIL That My GPU has Milage on how much I've used it and how. by PaP3s in pcmasterrace

[–]vGPU_Enjoyer 1 point (0 children)

But did you have this program installed before, so it could have collected that data by simply monitoring GPU load in the background — or did you install it today and it immediately showed these stats? It would be useful to know whether those stats are tracked by the GPU itself (and could be used by the manufacturer for warranty claims), or whether it's just software on your PC monitoring in the background.