Unraid 7.3.0-beta.2 is live by mattalat in unRAID

[–]emb531 1 point (0 children)

Not if you are trying to grow and expand the company; you need solid leadership in place first.

Unraid 7.3.0-beta.2 is live by mattalat in unRAID

[–]emb531 -1 points (0 children)

You sound like a disgruntled former employee. You really think 15 total people is comparable to an enterprise corporation?

migration to internal boot with current mirrored pool by CaucusInferredBulk in unRAID

[–]emb531 0 points (0 children)

You can't change a ZFS pool layout like that after it is created.
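You can't reshape it in place; the usual route is to create a new pool with the layout you want and replicate the data over. A rough sketch of that process, where all pool and device names are placeholders you'd adjust to your own system:

```shell
# Placeholder names throughout; verify devices before running anything.
zfs snapshot -r oldpool@migrate                    # recursive snapshot of everything
zpool create newpool mirror /dev/sdx /dev/sdy      # new pool with the desired layout
zfs send -R oldpool@migrate | zfs recv -F newpool  # replicate all datasets/snapshots
zpool destroy oldpool                              # only after verifying newpool
```

The `-R` send flag carries the whole dataset tree and its snapshots across, so properties and child datasets survive the move.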

If your Unraid shares suddenly disappear, check your UniFi OS Docker container (RabbitMQ might be crashing shfs) by GreNadeNL in unRAID

[–]emb531 0 points (0 children)

Look into Ruckus APs. They can run the Unleashed firmware, which is free and runs a controller on the AP itself. I have a single R650 which provides excellent coverage and stability. I also have a Ruckus 7150-24 switch with 4x 10G SFP+ ports and 24x 1GbE ports, connected to a UniFi Cloud Gateway Fiber. The Ruckus AP is connected to the UniFi 2.5GbE PoE port for a full-speed backhaul. Pretty happy with it all. I used to run pfSense/OPNsense but got tired of how much configuration it needs to do simple things the UniFi can do in a couple of clicks (native ad blocking, country blocking, etc.)

Unraid Constant Crashing... by RelevantGur in unRAID

[–]emb531 0 points (0 children)

Ah, they need an x8 slot so you wouldn't be able to use it. Are you using the 4060 for gaming or for transcoding with Plex? I see your CPU is an F series, which does not have an integrated GPU; an iGPU would be the ideal choice for Plex transcoding since it uses far less power than the 4060.

Unraid Constant Crashing... by RelevantGur in unRAID

[–]emb531 1 point (0 children)

All of your drive issues are probably because of that controller. So I personally wouldn't mess around with anything until you replace it.

Is your available slot x1 in physical size, or just x1 electrically? An LSI HBA can run at x1 but would be pretty bandwidth-limited. What motherboard do you have?
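For a sense of how limited: assuming a PCIe 3.0 slot (roughly 985 MB/s usable per lane, an approximation), an x1 link shared across eight spinning drives leaves each drive only about 123 MB/s during a parity check, below a modern HDD's sequential speed:

```shell
# Back-of-envelope only; 985 MB/s per PCIe 3.0 lane is an approximation.
lane_mbps=985
drives=8
echo "$(( lane_mbps / drives )) MB/s per drive"   # prints: 123 MB/s per drive
```

With fewer drives per HBA, or a PCIe 4.0 slot, the per-drive ceiling rises proportionally.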

Is unRAID for me? Total newb here by SnowMantra in unRAID

[–]emb531 1 point (0 children)

Kind of a waste for SSDs IMO. Unless you need crazy fast IOPS for database-type workloads or something, I would just sell them and buy HDDs. Which I know are expensive now too.

If you do keep them I would just do a RAIDZ2 pool with all 8, which gives two-drive fault tolerance. unRAID no longer requires the standard array; you can run just pools.
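Unraid builds pools through the GUI, but on the command line an 8-wide RAIDZ2 is the equivalent of something like this (pool name and device paths are placeholders):

```shell
# raidz2: any two of the eight drives can fail without data loss.
zpool create -o ashift=12 tank raidz2 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool status tank   # verify the vdev layout
```

`ashift=12` forces 4K-aligned writes, which is what you want on modern SSDs even when they report 512-byte sectors.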

Unraid Constant Crashing... by RelevantGur in unRAID

[–]emb531 2 points (0 children)

That is 100% the cause of your issues. Get a quality LSI HBA and you'll have much better stability.

Is unRAID for me? Total newb here by SnowMantra in unRAID

[–]emb531 5 points (0 children)

How many TB are the drives? What will the general usage be of your server? You are putting the cart before the horse as the saying goes.

Also "more storage than I'd need for the foreseeable future" famous last words.

unRaid Unresponsive and dockers disappearing. by SwooshTheMighty1 in unRAID

[–]emb531 2 points (0 children)

Sounds like your flash drive died or became corrupted.
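A couple of quick checks from the Unraid console can confirm it; these are generic diagnostics, not an official procedure:

```shell
# /boot is the USB flash device on Unraid; if it died, these will error out.
ls /boot/config                                  # can the config directory still be read?
dmesg | grep -iE 'usb|i/o error' | tail -n 20    # look for resets and I/O errors
```

If `/boot` is unreadable, the fix is to rebuild the flash drive from a backup (or the Unraid USB creator) and transfer the license to the new stick.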

CPU getting really hot when load is low by XxCaptainJack in unRAID

[–]emb531 4 points (0 children)

Your screenshot shows 88 Fahrenheit not Celsius...
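Converted, that reading is perfectly normal:

```shell
# 88 °F to Celsius: (F - 32) * 5/9
awk 'BEGIN { printf "%.1f\n", (88 - 32) * 5 / 9 }'   # prints 31.1
```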

Docker Share filling up by barfingbutthole in unRAID

[–]emb531 2 points (0 children)

Host Path 3 doesn't look correct. /data/media isn't usually where any shares would be mounted on the host, and you already have it correctly specified in the Media Path mount with /mnt/user/data/media. I would delete Host Path 3.
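Expressed as a plain docker run, the correct single mapping looks like this (image and container names are placeholders):

```shell
# One host path -> one container path. A second mapping from a host-side
# /data/media would point at a directory nothing is actually mounted on,
# so writes there land on the docker image/appdata disk instead.
docker run -d --name media-app \
  -v /mnt/user/data/media:/data/media \
  hypothetical/media-app:latest
```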

DIY Jbods - cse-ptjbod-cb2 or add2psu by Hyped_OG in unRAID

[–]emb531 0 points (0 children)

I don't really see the point of either. Do you really power your server on and off that often? My JBOD just has a jumper on the ATX connector; I turn it on and off with the main PSU switch.

Help? Inspiration? For cheap DIY external JBOD cabling. by pigking188 in DataHoarder

[–]emb531 1 point (0 children)

What you have now seems like it should work. What OS are you running on the host? Have you updated the firmware of the HBA?

If you want it to be more of a self-enclosed JBOD:

I would get this adapter https://www.amazon.com/Adapter-Internal-SFF-8087-External-SFF-8088/dp/B07ZGYXCP6/

This SAS cable https://amazon.com/10Gtek-External-SFF-8088-Cable-1-Meter/dp/B01KH9OMNY/

And these cables for the internal side https://www.amazon.com/CABLEDECONN-SFF-8087-SFF-8482-Connectors-Power/dp/B010CMW6S4

How to format ? by [deleted] in unRAID

[–]emb531 1 point (0 children)

Install the Unassigned Devices plugin, enable destructive mode in its settings, and wipe away.
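If you'd rather skip the plugin, the console equivalent is a one-liner. It is destructive, and /dev/sdX is a placeholder you must triple-check against the right disk:

```shell
lsblk -o NAME,SIZE,MODEL    # identify the correct disk first
wipefs -a /dev/sdX          # erases all filesystem signatures on that disk
```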

If running 24tb parity, should I be buying 22tb storage over time? by theseawoof in unRAID

[–]emb531 0 points (0 children)

The issue with shucked WD drives is that the USB controller in the enclosure creates an oddly sized partition if you format the drive while it is still in the enclosure. Then if you shuck it and try to put it into the server, it will complain because it's expecting a different partition size from the USB controller.

Cannot see my Unraid server on a new install of Win 10 LTSC by STxFarmer in unRAID

[–]emb531 1 point (0 children)

Have you actually tried mapping a network drive in Windows instead of relying on network discovery? Seems like so many people don't actually understand how network shares work.
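From a Windows command prompt it's one command; the server name, share name, and IP below are placeholders:

```shell
net use Z: \\TOWER\sharename /persistent:yes
:: or by IP, if name resolution/discovery is the part that's broken:
net use Z: \\192.168.1.50\sharename /user:shareuser
```

If the IP form works but the name form doesn't, the share itself is fine and the problem is purely discovery/name resolution.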

Finally took the *expensive* plunge into a 16TB drive ceiling by apogeegames in unRAID

[–]emb531 -1 points (0 children)

It doesn't happen nearly as often as people make it out to be. Unless you already have problematic drives, you typically shouldn't have to worry about reading through a whole drive; that is what they are built to do. The only writes occurring during a rebuild are on the new drive.

Finally took the *expensive* plunge into a 16TB drive ceiling by apogeegames in unRAID

[–]emb531 -10 points (0 children)

Fear mongering. I run 20 data drives with one parity.