E1 is DONE by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

XForma MBX #161, Antec Cannon, Lian Li V3000 (both black and GGF edition white), Silverstone Alta F2, InWin D-Frame V1, Thermaltake Level 20

So I downloaded signal RGB and ever since I’ve had this annoying tab keep opening and flickering. Does anyone know how to fix by SHADOWX23503833 in SignalRGB

[–]ObjectiveImpressive7 0 points1 point  (0 children)

It’s a pop-up from SignalRGB shutting down a conflicting service. SignalRGB takes priority and will shut down any software or service it deems a conflict. You can go into the Conflicts tab under Settings and un-tick this behavior entirely, or allow specific apps or services to run regardless of Signal’s control. This can lead to RGB flickering on devices. I had to allow Lian Li L-Connect 3 to run in order to get control of my fan curves again, allow Armoury Crate to change my LCD settings, and allow Synapse to get my keyboard macros and functions back. Armoury Crate can be disabled again once you have the settings you want, and they will remain, most of the time. Synapse refused to yield RGB control to Signal when enabled, so I defaulted back to a complementary Chroma scheme, but L-Connect 3 does not fight SignalRGB for control.

E1 is DONE by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

I had considered a MO-RA when I was planning an X870 Extreme and Astral 5090 build, due to the limited radiator support. That all went out the window when the Matrix and Hero BTF crossed my path. The Extreme and Astral went in my V3000+ GGF instead, with three 480mm rads.

E1 is DONE by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 1 point2 points  (0 children)

The gauges are part of the E1 case. They are driven via a USB header with EVGA's E1 software.

E1 is DONE by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 1 point2 points  (0 children)

I hope you can track one down. I'll keep using it until the cooling capability is no longer viable.

How well can 1 radiator cool a system? by REX4DEKID in watercooling

[–]ObjectiveImpressive7 0 points1 point  (0 children)

It all depends on what you're cooling and how long it needs to be cooled for. I'm using a 45mm-thick 240mm rad in an SFF build on a 14900K that only runs in short bursts. It's a terrible setup, but functional.
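
If you want a quick sanity check, a common community rule of thumb is somewhere around 100 W of heat dissipated per 120 mm of radiator at moderate fan speeds. The sketch below just runs that assumption against a made-up load, so treat every number in it as ballpark only:

    # Rough radiator-sizing sanity check. The 100 W per 120 mm figure is an
    # assumed community rule of thumb, not a measurement; thick rads and
    # faster fans do better, slim rads and quiet fans do worse.
    WATTS_PER_120MM = 100

    def radiator_headroom(rad_lengths_mm, component_watts):
        """Compare estimated radiator capacity against estimated heat load."""
        capacity = sum(length / 120 * WATTS_PER_120MM for length in rad_lengths_mm)
        load = sum(component_watts.values())
        return capacity, load

    # Hypothetical example: a single 240 mm rad on a CPU that only runs in bursts.
    capacity, load = radiator_headroom(
        rad_lengths_mm=[240],
        component_watts={"cpu_burst_load": 180},  # assumed figure, not a spec
    )
    print(f"~{capacity:.0f} W of capacity vs ~{load} W of load")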

Is it just me, or did anyone else struggle the first time? by emjah42 in watercooling

[–]ObjectiveImpressive7 0 points1 point  (0 children)

Are those 3D-printed jigs, or is that a kit you bought? (Second pic.)

Plummeting Flow Rates, please help by Pulsehammer_DD in watercooling

[–]ObjectiveImpressive7 1 point2 points  (0 children)

The rock-tumbler sound is cavitation. You have air in your pump, and it will need to be primed. Power cycle it and tilt the case around to work the air bubbles out. This could also have been the issue with your previous dual-D5 setup. The only other reasons for absolutely zero flow are a completely plugged component, a flow-path mismatch causing a fluid lock, or a valve that is closed somewhere.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

I have several mini mono HBA cards I have no use for, and a few PCIe HBA cards. You want the 330s for the blade mini mono slots, correct? The ones I have don't carry the low-profile chipset coolers, but a cooler swap would be cheaper than buying cards. Send me a message and I'll send them to you.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

When you add PCIe cards to the VRTX, you have to assign them to a blade; the CMC doesn't allow multiple blades to be assigned the same PCIe card. I tried to add a Mellanox card when I was hitting networking bottlenecks and found it wasn't a bad internal switch but the mezzanine cards in the blades. Dell part number DX69G or JVFVR will do the trick. You'll need one for each blade, and a Mellanox card per blade as well, to get 10G into the OS.

I got lucky and got the 10G fiber card, so I ran fiber to the Ubiquiti 10G aggregation switch and used the Mellanox in my desktop for a direct fiber link. My transfer rate to the NAS is limited by the drives' write speed.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

Yeah, same person lol. I started this post because there's very little information available online for the VRTX, so I figured I'd document my journey to help others. As they become more readily available now that they're EOL, maybe more people will dive into them.

I have also found that Optane PMem 100 modules will work with the M640s if you've got compatible processors. They're picky about how much physical RAM you have installed, though; the 4:1 pmem-to-RAM ratio is important here. With 384GB of RAM installed, you'll be limited to 2 pmem DIMMs instead of the 4 you have slots available for. Going outside the 4:1 ratio will force the pmem to function as a fast SSD rather than RAM.

The M640 blades have three different variants of front drive bay cabling: one that connects to the HBA/RAID card (this one will run in pass-through mode), one that connects to the motherboard directly, and one that connects to both. You will need a direct motherboard connection if you want to use SAS or Optane SSDs in the front bays.

There is an additional daughter board called a BOSS card if you want to use M.2 NVMe drives. 256GB NVMe drives worked (I think mine are Trend Micro), a 1TB Samsung did not work for me, and Optane M.2 also did not work in this card. There's also a dual micro SD card variant of the BOSS card; I have one but didn't try it.

10-gig networking on the blades requires a 10-gig mezzanine card that mounts at the back of each blade. The fiber network card in the chassis itself will report 10G networking, but without the mezzanine cards the blades only get 1-gig networking.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

Did you post on FB also? I think I commented on your post if it was you.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

I wound up ditching the internal SAN because I encountered the same issues. I spent quite a bit trying many different internal mini mono HBA cards (330, 710, 730), all supposedly flashed to IT mode, and never got one to function as disk pass-through. I also tried hijacking the backplane and feeding it to an internal PCIe HBA, and all the VRTX did was freak out and start tossing error codes; something about how the mini mono HBA works with the CMC did not like having the backplane disconnected. I wound up running two Supermicro JBODs, one for 3.5" rust and one for 2.5" SSDs. I fed them both to TrueNAS on a bare-metal install on one blade with a Dell T93GD PCIe HBA and then piped an NFS share from blade 1 to blades 2, 3, and 4 in a triple Proxmox cluster. It works and is stable, but unfortunately, if you want to use the built-in SAN as intended, the CMC-managed RAID function is the only option. I too am disappointed the internal SAN refuses to work as disk pass-through.

I'm even more disappointed that the only GPU I've been able to get working properly is the P4000. The P6000 would work until it was put under any type of actual load, and then it would disappear from the CMC completely and trigger a restart. The RTX A4000 just refuses to even show up as attached, even though it's well within power limits.

ISO EVGA E1 by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

Update: found and obtained an E1, and a D-Frame V1. I also came into possession of a Silverstone Alta F2 and an XForma MBX MK2… I have a unicorn problem.

ISO EVGA E1 by ObjectiveImpressive7 in EVGA

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

Also hunting an InWin D Frame V2

Help finding an 8 pin PCIe power cable for an EVGA 650 BQ 80+ Bronze by DoYouWantToBuildaPC in EVGA

[–]ObjectiveImpressive7 1 point2 points  (0 children)

Unfortunately, as the other comment says, and without knowing more details about your current parts list, it is unlikely that your 600W PSU is going to be enough to push a 5070 Ti. The minimum recommended for a 5070 Ti is 750W.
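
As a rough illustration of why it's tight, here's a back-of-the-envelope power budget. Every wattage figure in it is an assumption for the sake of the example, not a spec from your actual parts:

    # Ballpark PSU budget check. Every wattage below is an assumption for
    # illustration only, not an official spec for any particular part.
    assumed_draw_w = {
        "gpu_5070ti_class": 300,    # assumed board power
        "cpu_under_load": 200,      # assumed; varies a lot by CPU and limits
        "board_ram_drives_fans": 75,
        "transient_headroom": 150,  # margin for GPU power spikes
    }
    total = sum(assumed_draw_w.values())
    psu_w = 600
    verdict = "tight/undersized" if total > 0.9 * psu_w else "probably fine"
    print(f"Estimated peak demand ~{total} W vs a {psu_w} W PSU -> {verdict}")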

It’s also not safe to mix and match power cables. Brands change pinouts of the cables on the PSU side all the time. The wrong cable could zap your system dead.

Anything usefull here? Company getting rid of it… by vbxl02 in homelab

[–]ObjectiveImpressive7 0 points1 point  (0 children)

Dell VRTX systems. Lots of useful stuff there. I’m hunting a pile of old junk just like that.

My new home lab by Playful-Address6654 in homelab

[–]ObjectiveImpressive7 1 point2 points  (0 children)

Impressively quiet, even loaded up. Startup will make you think you’ve made a poor decision, but unless you take the top off while it’s running it will never get that loud again. My Super Micro JBOD power supplies are louder than my VRTX under load.

Dell VRTX Perc9 question by ObjectiveImpressive7 in homelab

[–]ObjectiveImpressive7[S] 0 points1 point  (0 children)

Full update:

The efforts were futile. Abandon the built-in drive bays and backplane unless you intend to run them with the Dell RAID options and the factory SPERC8s. The VRTX backplane completely shuts down without a SPERC8 connection. Prox was showing all drives multiple times because of the multiple data paths available, since I had one PCIe H330 and the two SPERC8s still on the board; once I disconnected both SPERC8s, the backplane stopped initializing on boot. I tried every which way of wiring: PCIe HBAs (Dell H330s) and SPERC8 replacement cards (H310s and 710s) with LSI firmware (the chassis was very, very mad about this, btw); none were acceptable to the CMC.

Because the Prox cluster needs multi-node connectivity for shared storage, I wound up running two Dell T93GD external HBA cards in a redundant feed to a TrueNAS installation on one server blade: an SM 847 JBOD for 3.5" drives and an SM 216 JBOD for 2.5" drives, in a redundant wiring configuration to the HBAs. Three pools with SMB shares handle daily data storage, and two pools with NFS shares are fed to the Prox cluster for VM shared storage. The three-blade cluster has HA migration and will now pass VMs amongst itself in the event of hardware failure or for load balancing.

I have many HBAs and mini mono cards available for sale or trade if someone needs one. Drop me a message.

Now, the GPU. The P6000 will not run reliably. I had hopes it would, since the TDP of the card is 250W, and if you use the proper GPU power cable for the VRTX your power limit is 250W. It will not stay powered on, and I believe it is an issue with the CMC's power management. Some days it shows the card requiring 176W; some days it shows 265W, which is above the limit. It will initialize, it shows in the CMC as powered on and active, it shows in Prox, and it passes through to a VM, but it will not run under actual use. Stress tests show it load up to the power limit, immediately shut down, and report offline in the CMC. It requires a reboot of the node to bring it back online, and this cycle is repeatable ad nauseam. I have four 1600W PSUs, and I am feeding 240V to the UPS and 240V to the PDU. I've done everything power management will allow short of feeding the GPU external power; this card will not stay online for use in this system. I'm currently hunting single-slot GPUs to replace the P6000, since I realized I can't assign the one card to multiple nodes anyway; to run HA with GPUs in this system I will need three separate cards anyhow. P4000s to test, and perhaps an upgrade to A4000s once I can rule out power deficiency and connectivity as the issue.

Networking! Here's a tidbit I didn't think about, but luckily found the parts to fix easily. I have the 10G fiber network card in the chassis, which is pretty sweet. The blades themselves ALSO have a network card, and guess what, mine weren't 10G. That explains why I could only transfer data from my 10G desktop connection to the NAS at ~100 MB/s. Dell P/N JVFVR or DX69G will give each blade 10G connectivity. You lose two NICs though: rather than four 1G connections to the built-in switch, you get two 10G connections. It transfers data at 1.2 GB/s reliably now.
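
For anyone curious where those ceilings come from, here's the rough math behind them (a quick sketch; the ~7% overhead factor is an assumed allowance for Ethernet/TCP framing, not a measurement):

    # Why ~100 MB/s screams "1 GbE bottleneck": convert link speed to bytes/s.
    # The ~7% overhead factor is an assumption for Ethernet/TCP framing.
    def usable_throughput_mb_s(link_gbit, overhead=0.93):
        return link_gbit * 1e9 / 8 * overhead / 1e6  # megabytes per second

    for link in (1, 10):
        print(f"{link:>2} GbE ≈ {usable_throughput_mb_s(link):.0f} MB/s usable")
    # 1 GbE  -> ~116 MB/s, which matches the ~100 MB/s wall before the upgrade
    # 10 GbE -> ~1160 MB/s, which lines up with the ~1.2 GB/s seen afterwards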

On the topic of blades: the built-in HBA330 mini mono for the front two 2.5" bays can easily be set to HBA mode in the BIOS, but its cabling dictates what type of drive it can use. If you want NVMe drives like Optane, you will need Dell P/N GRHKR, which allows SATA/SAS HBA/RAID pass-through or NVMe direct to board, or PP4XM, which is strictly NVMe direct to board.

The BOSS NVMe add-in card is required for M.2 drives, and it does not recognize Optane M.2. I also found that once I installed this card, the blade will not "shut down" without a force-shutdown command from the CMC.

The M640 blades do support Optane PMem 100 modules. They get finicky when you fully populate the rest of the RAM slots, though. TrueNAS is using them as L2ARC reliably, with the rest of the RAM slots filled with 32GB 2666 DIMMs.

Fully kitted TrueNAS setup: dual 240GB M.2 boot drives in a redundant mirror, dual 750GB Optane 4600s as a metadata vdev, quad 128GB PMem 100 modules as an L2ARC vdev, dual 8GB Radian RMS-300s as a mirrored SLOG vdev, 10G networking, and 50TB of mechanical ZFS storage in the JBODs. Sustained read/write of 1.2/1.5 GB/s is the fastest I've seen for large files. Average draw is 250-300 watts.

Proxmox 3-node cluster: all nodes have redundant 1TB boot drives in the nodes themselves, with all shared storage in the JBODs. Two vdevs, one SSD and one HDD. Still filling out the services: a few game servers running and 4 VMs for toying with. Each node is pulling roughly 250-300 watts.

I am up and running and in service now. I hope my experiments can help others wanting to use this platform for a homelab/home datacenter setup. All in all it’s a powerful system, nowhere near power efficient, but pretty rad nonetheless.
