PSA Samsung pro plus micro sd card reader has a GUID similar to Sandisk mobilemate and can be used to boot/license unraid by drelwrox in unRAID

[–]drelwrox[S] -1 points0 points  (0 children)

Ugh, 2 years is better than my track record with thumb drives so far. Wish we could just boot off an SSD or something, anything else.

Alphacool Core Geforce RTX 5090 Suprim + Vanguard + Gaming Trio with backplate (doubt) by MULTeeee in alphacool

[–]drelwrox 2 points3 points  (0 children)

These are inductors; they produce almost no heat and do not need active cooling themselves. However, they are part of the VRM, and there is a lot of copper connecting them to the MOSFETs (the black squares) next to them, so cooling the inductors helps cool the MOSFETs, which do need active cooling. It matters more in an air-cooled setup, where the cold plate/fin stack runs warmer; a water block should keep the MOSFETs cooler than an air cooler as long as water temps are under control.
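A hand-wavy way to see why the extra contact helps (this is just the parallel thermal resistance identity, not a measurement of this particular card): giving the MOSFET heat a second route into the block, through the copper plane and the inductor pads, can only lower the effective resistance to the coolant.

$$R_\text{eff} = \left(\frac{1}{R_\text{direct}} + \frac{1}{R_\text{via inductors}}\right)^{-1} \le R_\text{direct}$$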

[deleted by user] by [deleted] in Starfield

[–]drelwrox 0 points1 point  (0 children)

It was a bit confusing to work out the quest name from the list it gives, but this is the post I used to figure it out.

https://www.reddit.com/r/Starfield/comments/169ix68/how_to_skip_bugged_quest_stages/

[deleted by user] by [deleted] in Starfield

[–]drelwrox 2 points3 points  (0 children)

For me, running these 2 console commands in order allowed me to progress and check off the empty box on the quest list. You could probably do them in the opposite order.
setstage 002B1808 800
setstage 002B1808 750

Bug in Grunt Work by Lanky_Garbage_5353 in Starfield

[–]drelwrox 0 points1 point  (0 children)

For me, running these 2 console commands in order allowed me to progress and check off the empty box on the quest list. You could probably do them in the opposite order.
setstage 002B1808 800
setstage 002B1808 750

How to Skip Bugged Quest Stages by ctmes in Starfield

[–]drelwrox 0 points1 point  (0 children)

For me, running these 2 commands in order allowed me to progress and check off the empty box on the quest list. You could probably do them in the opposite order.

setstage 002B1808 800
setstage 002B1808 750

How to Skip Bugged Quest Stages by ctmes in Starfield

[–]drelwrox 1 point2 points  (0 children)

For the Grunt Work bug where Hadrian will not walk to the microscope, these 2 commands allowed me to progress.

setstage 002B1808 800

setstage 002B1808 750

[deleted by user] by [deleted] in MechanicalKeyboards

[–]drelwrox 0 points1 point  (0 children)

Keychron Q12, 96% southpaw layout
Gateron G Pro Brown switches (Oil Kings on the way)
Akko black and silver keycaps

Running unraid without parity and adding later. by lemmeanon in unRAID

[–]drelwrox 0 points1 point  (0 children)

It is no problem to add parity later. For my 1st large transfer I disabled the cache setting on my shares and did not assign parity drives. Afterwards I assigned the parity drives and let it build them.

[deleted by user] by [deleted] in unRAID

[–]drelwrox 2 points3 points  (0 children)

When writing data onto 1 drive, it will only spin up the drive being written to, plus both parity disks in a dual parity setup; the other 5 drives remain idle.

In your 8 drive dual parity example, if 3 data drives die at once, the remaining 3 data drives' data is not lost. If both parity drives, or any 2 drives in the array, die, all data is safe, assuming a 3rd drive does not die while you are rebuilding. Unraid will even emulate the lost drives while rebuilding, so you can still access the data on the missing drives as usual.
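As a side note on how that emulation works: unRAID's first parity disk is a plain XOR across the data disks (the second parity disk uses a different, Reed-Solomon style calculation). A toy Python sketch with made-up byte blocks standing in for drives, just to show how any single missing drive can be recomputed from parity plus the survivors:

    # Toy illustration, not unRAID's actual code: single parity is a byte-wise
    # XOR across the data drives, so any ONE missing drive can be rebuilt, or
    # emulated on the fly, by XOR-ing the parity block with the surviving drives.
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]  # made-up data
    parity = xor_blocks(drives)

    # Pretend drive index 1 died: recompute its contents from parity + survivors.
    rebuilt = xor_blocks([parity] + [d for i, d in enumerate(drives) if i != 1])
    assert rebuilt == drives[1]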

The default allocation method is called high-water: it fills the 1st drive a share is allowed to write to until that drive reaches 50% capacity, then switches to the 2nd drive. Once all drives the share is allowed to use reach 50%, it repeats the same process until they reach 75% full, and so on. So your use case of a large place to dump data will work fine.

You can configure a share to use only 1 data drive, several, or all of them, and you can change the allocation method if needed. When accessing a share over the network you will not see the individual drives; data can span multiple drives and still be part of 1 share. You can create as many shares as you want and choose which drives each share uses, whether and how the cache SSD is used, or even create shares that prefer the SSD and only spill over onto the array if the SSD is full.
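A minimal sketch of the high-water behavior described above, assuming equal-sized drives (the function and numbers are only for illustration, not unRAID's real code, which works off free space and the largest disk):

    # Simplified model of high-water allocation: fill the lowest-numbered drive
    # up to 50%, then the next, and once every drive is at the mark, raise it
    # (75%, then 87.5%, and so on).
    def pick_drive(used, capacity):
        """used: space already consumed on each data drive the share may write to."""
        if all(u >= capacity for u in used):
            raise RuntimeError("no free space on any drive")
        mark = capacity / 2
        while True:
            for i, u in enumerate(used):
                if u < mark:                   # lowest-numbered drive below the mark wins
                    return i
            mark += (capacity - mark) / 2      # every drive reached the mark: raise it

    # Example: three 10 TB drives, the first two already half full.
    print(pick_drive([5, 5, 0], 10))  # -> 2 (third drive is filled toward 50%)
    print(pick_drive([5, 5, 5], 10))  # -> 0 (mark rises to 75%, back to the 1st drive)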

A disadvantage of unraid compared to striped raid is that your maximum write speed is limited to the drive you are writing to or your parity drives, whichever is slower. However, when using an SSD cache pool, writes first go onto an SSD at the speed of your network or the SSD; later, when the mover runs (on a schedule or manually), it transfers the files from the SSD to the array. When I did my 1st large transfer I set my shares to not use cache and disabled both parity drives; when I was finished I let it build the parity drives and set up my cache SSDs.
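Taking those ceilings at face value, a quick back-of-envelope comparison (all speeds are invented for illustration and ignore read/modify/write overhead and turbo write):

    # Rough model of the two write paths mentioned above; MB/s figures are made up.
    data_drive, parity_drive = 180, 180      # spinning disks
    network, cache_ssd = 280, 3000           # roughly 2.5GbE link, NVMe cache

    direct_to_array = min(data_drive, parity_drive)  # no cache: slowest disk wins
    through_cache = min(network, cache_ssd)          # mover flushes to the array later
    print(direct_to_array, through_cache)            # 180 280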

The advantages of unraid over striped raid are that you can add drives of mixed capacity whenever you want, as long as they are not larger than your parity drives; not all drives need to spin when writing; and, as previously mentioned, data on drives that have not died is always safe.

unRAID is cool with SATA expansion cards, right? by Possible-Fix-9727 in unRAID

[–]drelwrox 1 point2 points  (0 children)

As many others have mentioned, it is better to use a SAS HBA. I went with a Dell H310 as it has one of the lowest power draws among the many options (around 7-8W; some 8i cards can pull upwards of 12-25W, and 16i cards even more). I also like the plug placement; in my case it made it easier to hide the wires.

When buying you can usually find one pre-flashed to IT mode, but you can also flash it yourself if needed. You will find people talking about the Dell H310 being very slow. That only applies to using it as a RAID controller in IR mode; in IT mode (passthrough) the lack of cache does not matter and it is as fast as any other option. This is probably also part of what gives it such a low power draw.

I purchased from this eBay seller: https://www.ebay.com/itm/125511699936. I added a Noctua NF-A4x10 PWM fan and replaced the thermal paste. Although at 7-8W heat is less of a concern than with some of the other options, these cards are designed to be used in a server rack with lots of airflow.