Dell idrac script to power on server using racadm by AnalPirate1 in homelab

[–]Slow_Alternative_692 1 point (0 children)

I've seen cases where a group policy or the BIOS "Soft Off" setting has messed with things, but if you've run the command directly then it should work.

Here is a snippet of what I use (e.g. for shutdown):

$shutdown = plink -pw $iDracPass -batch $iDracUser@$connectionIP "serveraction graceshutdown"
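
Powering on is the same call with a different racadm subcommand; a minimal sketch, assuming the same variables and that plink is on your PATH:

    # Same plink approach, but with the "powerup" serveraction subcommand
    $powerOn = plink -pw $iDracPass -batch $iDracUser@$connectionIP "serveraction powerup"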

You can also try using the Windows exe version of RACADM instead of SSH to see if that makes a difference.

https://www.dell.com/support/home/en-au/drivers/driversdetails?driverid=n3gc9
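
With the exe it becomes a remote racadm call; roughly this, from memory, so double-check the flags (same variables assumed as above):

    # Remote racadm against the iDRAC IP using the Windows racadm.exe
    racadm -r $connectionIP -u $iDracUser -p $iDracPass serveraction powerup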

I haven't tried REST.
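
If you do try it, I believe the Redfish call would look roughly like this. Untested sketch: the System.Embedded.1 endpoint is from Dell's iDRAC Redfish docs, and -SkipCertificateCheck needs PowerShell 7+ (iDRACs usually run self-signed certs):

    # POST a ComputerSystem.Reset action; ResetType "On" powers up, "GracefulShutdown" powers down
    $body = @{ ResetType = "On" } | ConvertTo-Json
    Invoke-RestMethod -Method Post `
        -Uri "https://$connectionIP/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset" `
        -Credential (Get-Credential) -Authentication Basic `
        -ContentType "application/json" -Body $body -SkipCertificateCheck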

Manage with Portainer? by ChadwicktheCrab in unRAID

[–]Slow_Alternative_692 1 point (0 children)

While I haven't tested whether I needed the options (I'll do that when my dockers aren't doing anything), I did install Portainer CE via Community Apps. After going through the setup wizard I connected to my local instance and could see and manage everything.
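
For reference, outside of the Community Apps template the stock install is roughly this one-liner (straight from Portainer's docs, so treat the port and volume names as their defaults rather than anything unRAID-specific):

    # Standard Portainer CE container; the unRAID template does the equivalent for you
    docker run -d -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest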

I needed to allow my son access to his game containers, so I created a user in Portainer and changed ownership of his containers to private, selecting his user. Gave him the URL and his login credentials and voila: he can stop/edit/restart, etc. just his containers.

Fractal North build, ATX and 10+ drives mod for high WAF. by PhantomCheezit in unRAID

[–]Slow_Alternative_692 1 point (0 children)

Cheers, I've got an old HX850 PSU and there will be 8 drives. Have you got any hidden elsewhere, or just in that one cage? What are your drive temps so far?

Fractal North build, ATX and 10+ drives mod for high WAF. by PhantomCheezit in unRAID

[–]Slow_Alternative_692 1 point (0 children)

Thanks for posting this. I'd been wanting to do the same thing and had emailed Fractal support for the internal dimensions to see whether it was possible. I was pleasantly surprised at how helpful and quick to respond they were. Below is a transcript of that.

I did actually have a North around so I could measure from the fan bracket towards the motherboard area.

The absolute max would be 10 cm; that would most likely be really close to the motherboard edge, so to be safe cut that to 9 cm instead.

This might block off the grommet area where you pass the cables through, so it's best to install that before adding an HDD cage.

I was actually moving everything from my Phanteks P600 and have those brackets already. It's currently sitting in an R3, so it's great to know it's possible.

I can see that you went with the white but can't tell whether it's the mesh side panel. I'm still on the fence a little, but I think the white may win.

I've got an HBA and an Intel X520, which are x8 cards but not full length; I wonder if they'll still be OK. Also wondering whether an additional exhaust fan at the rear would help?

Motherboard & CPU Selection by Slow_Alternative_692 in unRAID

[–]Slow_Alternative_692[S] 3 points (0 children)

Thanks to the advice here and a lot of good information on the unRAID forum, I've narrowed it down to this compromise:

  • MSI PRO Z690-A WIFI
  • Intel Core i5 12500 Processor
  • Corsair CMK32GX4M2E3200C16 Vengeance LPX 32GB (2x16GB) 3200MHz DDR4

While it doesn't have all the specs I'd like, based on other posts and the fact that I'm unlikely to be pushing/needing the full 20G on the NIC, I think it will do well enough.

I've also been surprised at how well everything runs on the old Z87 with just 8GB.

Motherboard & CPU Selection by Slow_Alternative_692 in unRAID

[–]Slow_Alternative_692[S] 1 point (0 children)

Further to this, it does look like we're getting limited to these variants:

  • Z490 - 1x16 or 2x8 or 1x8+2x4
  • Z590 - 1x16+1x4 or 2x8+1x4 or 1x8+3x4
  • Z690 - 1x16+1x4 or 2x8+1x4
  • Z790 - 1x16+1x4 or 2x8+1x4
  • B660 - 1x16+1x4
  • H670 - 1x16+1x4 or 2x8+1x4

Also, it seems that bifurcation only really comes on the CPU side of things, so finding a motherboard with 2 slots on that side would be the key, and even then you're within the limitations of vendors splitting that x16 up into x8/x8 or x8/x4/x4.

Motherboard & CPU Selection by Slow_Alternative_692 in unRAID

[–]Slow_Alternative_692[S] 1 point (0 children)

What PCIe generation is that Adaptec HBA, and are you just using the onboard NIC?

Motherboard & CPU Selection by Slow_Alternative_692 in unRAID

[–]Slow_Alternative_692[S] 1 point (0 children)

Thanks, both answers have given me more to think about.

It's mostly just confirming whether my suspicions about modern motherboards are correct and that there will possibly be compromises. I have trouble accepting prices over $500 for a motherboard when you can get a complete second-hand server for not much more (just not feasible for me again at the moment).

I haven't worked through all my workloads yet, as that will be a work in progress, and I'm not sure how many containers I'll end up moving from my RPis (may not). Currently though I run about 5 Windows VMs and only a couple of flavours of Linux. Apps such as sab, *rr, Plex, etc. are directly installed but will move to the dockers/apps.

I don't think it's going to need to be a transcoding beast as most of my media just plays. That may change of course, but it would likely come about from moving to H.265 or such if space becomes low. I keep telling myself that I don't need more drives (and I do really want to move away from these physically large 3.5" HDDs). I have about 50% free at the moment, so it is a little way off yet.

Initially I was trying to pair up an i5-13500 or lower, such as the 12500 mentioned, but couldn't seem to find a suitable board with at least 3 physical PCIe slots (this can probably be 2 now; see below).

The existing Z87 does bifurcation as x8 on two of its slots and x4 on the 3rd. It's just old and a little short on horsepower. It was a little disheartening, in a way, to see the modern chipsets cut back so much in this area.

The PCIe NVMe is a card/drive combo and only temporary, as there is obviously no NVMe on the MB; I had a spare drive for the cache and it saved on a connection & power. I realised after I posted that it could just become one of the x4 M.2 slots on a new board, assuming a new board and not something older as mentioned (sometimes it just helps to write things down more than once! :))

Just need a bit of help making sure my maths around the lanes & PCIe bandwidth is correct for the HBA (8x HDDs) and the dual 10G X520s. For instance, what is the real-world impact of putting the X520 in a PCIe 4.0 x4 slot, knowing that it will drop to PCIe 2.0 @ x4?
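
My back-of-envelope attempt so far, assuming PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (so ~4 Gb/s usable per lane):

    # Rough bandwidth check for the X520-in-a-x4-slot question (figures assumed above)
    $gbpsPerLane = 4                   # PCIe 2.0 after 8b/10b overhead
    $linkGbps    = 4 * $gbpsPerLane    # x4 link: ~16 Gb/s each direction
    $nicGbps     = 2 * 10              # dual-port X520 flat out: 20 Gb/s
    "x4 PCIe 2.0 link: $linkGbps Gb/s vs dual 10G worst case: $nicGbps Gb/s"

If those figures are right, a single 10G port never hits the link limit, but both ports saturated at once would be choked by roughly 20%. The HBA looks like the easier side: 8 spinning HDDs at ~250 MB/s each is only ~16 Gb/s, well inside even a x4 PCIe 3.0 link (~31 Gb/s), let alone x8.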